Justin Cooksey

I've been investigating Pelican, a static site generator, as a replacement for my current GatsbyJS-generated site. I'm far more at home and familiar with Python and Jinja2, so that was part of my reasoning for taking a look into it.

Working on the basis of replacing my current GatsbyJS-created site and posts with one created by Pelican, the following covers my initial issues and solutions. I've only added brief notes on what was done, without going into detail, as the plugin sites cover correct usage in detail.

Redirect old site paths

In investigating the move to a new static site generator I also decided to change the structure a little. Since not many articles existed at the time of this posting, only a minimal number of redirects have to be created.

So how does Pelican handle this on a per article basis?

Enter the pelican-redirect plugin. Installing it is as simple as installing from PyPI:

pip install pelican-redirect

Then add an additional line to the metadata of each post:

original_url: blog/hacktoberfest-2019.html

Pelican will create an HTML file at the specified URL location that redirects to the new location of the post.
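
For example, the metadata at the top of a post's Markdown source might look something like this (the title, date and slug here are just placeholders; original_url is the old path the redirect should be created at):

Title: Hacktoberfest 2019
Date: 2019-10-01
Slug: hacktoberfest-2019
original_url: blog/hacktoberfest-2019.html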

Canonical

To add a canonical link element to articles, using the SITEURL variable does not give you what you need if RELATIVE_URLS = True is set, since Pelican then substitutes a relative path for SITEURL in each page's template context. To get around this and always use a full URL, you can copy the SITEURL value into a CANONICALURL variable and then use that variable in the base.html template.

pelicanconf.py

SITEURL = "https://jscooksey.github.io/Pelican"
CANONICALURL = SITEURL

base.html

<head>
  <title>{{ SITENAME }}</title>
  {% if article %}
    <link rel="canonical" href="{{ CANONICALURL }}/{{ article.url }}" />
  {% endif %}
</head>

Sitemap

To produce sitemap files for SEO, add the pelican-sitemap plugin:

pip install pelican-sitemap

and then add a SITEMAP variable to pelicanconf.py as described in the README of the repo.
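
As a rough sketch, the SITEMAP setting is a dictionary along these lines (the values here are only illustrative; check the plugin's README for the keys and defaults it actually supports):

# pelicanconf.py - illustrative sitemap settings
SITEMAP = {
    "format": "xml",
    "priorities": {
        "articles": 0.5,
        "indexes": 0.5,
        "pages": 0.5,
    },
    "changefreqs": {
        "articles": "monthly",
        "indexes": "daily",
        "pages": "monthly",
    },
}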

Social Media Shares

I also had social media sharing links at the bottom of every article on my current site, allowing readers to share the article on their own social media streams.

The share-post plugin does this, and again it is simply installed using pip:

pip install pelican-share-post

Then in the article.html template, add links using the article.share_post attribute:

<a href="{{article.share_post\['twitter'\]}}">...</a>
<a href="{{article.share_post\['facebook'\]}}">...</a>
<a href="{{article.share_post\['linkedin'\]}}">...</a>

RSS/Atom Feed

Adding Atom (or RSS) feeds is as easy as changing a few options in pelicanconf.py, as feed generation is built into Pelican:

FEED_MAX_ITEMS = 20
FEED_ALL_ATOM = "feeds/all.atom.xml"
CATEGORY_FEED_ATOM = "feeds/{slug}.atom.xml"
TRANSLATION_FEED_ATOM = None
AUTHOR_FEED_ATOM = None
AUTHOR_FEED_RSS = None
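
To advertise the feed to browsers and feed readers, the theme's base.html can then include an alternate link. A minimal sketch, assuming the FEED_ALL_ATOM value above and Pelican's FEED_DOMAIN setting (which defaults to SITEURL):

<link rel="alternate" type="application/atom+xml"
      href="{{ FEED_DOMAIN }}/{{ FEED_ALL_ATOM }}"
      title="{{ SITENAME }} Atom Feed" />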

Code Highlighting

Markdown code highlighting is ultimately processed through Pygments, which can be personalised but also comes with some built-in styles. Examples are on the Pygments site here, and CSS files for these styles can be copied from the repo richleland/pygments-css.

You can copy the CSS file of your choice to your theme's static folder (e.g. static/css/pygment.css) and then import it in the base.html:

@import url(pygment.css);

Markdown code blocks can then be used, with the language of the code specified so Pygments knows how to highlight it.
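
For example, with Pelican's default Markdown configuration (which enables fenced code blocks and codehilite), a block like the following should come out highlighted as Python; the snippet itself is just a placeholder:

```python
def greet(name):
    # trivial placeholder function, just to demonstrate highlighting
    return f"Hello, {name}!"
```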

Conclusion

This is as far as I've gotten so far working in Pelican, and this Pelican-created site is initially hosted on GitHub Pages at the URL https://jscooksey.github.io/Pelican/

I'll add more as I figure things out further, with the intention that this will replace the primary site hosted at my domain.

Justin Cooksey

What began as a task just to export all Sites from a DattoRMM instance to a CSV file has started me down the path of building a module to deal with many of the DattoRMM API endpoints.

Mainly working around the REST APIs that I needed to use to perform certain tasks, I've begun refactoring the export code into more of an API interface module, which may grow to be more useful. Other tasks may take my time away from this, but I will see where it goes.

The original code to export sites is here: DattoRMM-Site-Export. Currently this code pulls all Sites from a DattoRMM environment and exports the basic details to a CSV file. It removes the system sites called Managed, OnDemand & Deleted Devices, so that you only get an export of the customer base.

Also in the repo is code to set Site variables on DattoRMM sites, read in from a CSV, as this was part of the next steps I needed to take.

The script gets the API URL, Key and Secret from a .env file or environment variables (example below).
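
The exact variable names are whatever the script in the repo reads; the .env below is just a hypothetical illustration of the three values involved:

# hypothetical .env - substitute the variable names the script actually expects
DATTORMM_API_URL=https://your-platform-api.centrastage.net
DATTORMM_API_KEY=your-api-key
DATTORMM_API_SECRET=your-api-secret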

Functions to interact with the DattoRMM API are in the dattormmapi.py Python file.

The main function that makes the API requests and exports to CSV is in the export_sites.py Python file.
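
As a rough sketch of that flow (not the repo's actual code: get_all_sites is a stand-in for whatever dattormmapi.py provides, and the CSV columns here are only illustrative):

import csv

from dattormmapi import get_all_sites  # hypothetical helper returning a list of site dicts

# System sites excluded so the export only contains the customer base
SYSTEM_SITES = {"Managed", "OnDemand", "Deleted Devices"}


def export_sites(filename="sites.csv"):
    sites = get_all_sites()
    customer_sites = [s for s in sites if s.get("name") not in SYSTEM_SITES]

    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "description"])  # illustrative columns
        writer.writeheader()
        for site in customer_sites:
            writer.writerow({"name": site.get("name"), "description": site.get("description")})


if __name__ == "__main__":
    export_sites()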

The refactoring to make this a more versatile module for handling interactions with the DattoRMM API will go into a new GitHub repo, which I'll make public once it has formed up some more.

Justin Cooksey

As usual I remembered the Advent of Code after it had started, but nevertheless I got stuck into it around day 10 (10th December 2022). However, I never got past the day 7 stage 1 puzzle, not because I couldn't, but rather because I fell into that busy time of year and didn't spend the time trying to keep up.

Well, there is nothing to stop you from continuing on, so I'll see if I can get back into it in (ummm...) February 2023! (I'm sitting here shaking my head.)

If you haven't ever heard about it, you should take a look. It's a series of puzzles, released as an Advent calendar, that you solve by writing code to find the answers. Two puzzles are released every day from December 1st through December 25th, which you solve to help the elves undertake Christmas tasks. You can use any language you like, and it's not the code you write that you progress with, just the correct answers you must get from that code.

So I'll continue on. No doubt it won't all get solved in two months, but it's still a fun task to keep you learning.

Web Site - Advent Of Code
My GitHub repository - AdventOfCode2022

Addendum - 2023-07-17

Well, I didn't get back to it; life took its path and I never jumped back into it. Maybe this year I can make more of an effort.