There are at least a hundred different ways to create your own blog or personal site, whether it’s shared hosting with any of the myriad free content management systems or an existing platform such as Medium or Tumblr (although who in their right mind would do that?). But why not try something more extravagant?
For rebooting my site, I chose a solution based on GitHub Pages. It’s definitely not a solution for everyone as it’s quite geeky at times, but maybe someone will find the description of my journey interesting or even decide to follow in my footsteps.
Installing the necessary software
Obviously, in order to work with a Jekyll project, I had to install the Jekyll binary itself, plus a few more dependencies required by the particular template I used. I like to have my Ruby versions maintained by rbenv so I can, at least in theory, switch to another Ruby version in a different directory if needed.
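The setup steps looked roughly like this – the Ruby version is illustrative, and the template’s Gemfile determines what bundle install actually pulls in:

```shell
# Install a Ruby via rbenv and pin it for this directory only
rbenv install 2.7.8
rbenv local 2.7.8

# Install Bundler and Jekyll, then the template's own dependencies
gem install bundler jekyll
bundle install    # reads the Gemfile shipped with the template
```

With rbenv local, a .ruby-version file in the project directory keeps this Ruby from leaking into other projects.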
What is GitHub Pages?
Let’s quickly go over what GitHub Pages actually is. In layman’s terms, it allows you to transform your Markdown1 documents into full-fledged HTML pages that can be presented to the end user. The process of transformation is performed by a program called Jekyll, and every step of it can be customized. This post, however, does not intend to focus on Jekyll directly, so if you’re interested in that topic, I recommend checking out their documentation; it’s truly comprehensive.
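As a quick illustration, a Jekyll input file is ordinary Markdown with a small block of YAML front matter on top (the values here are made up):

```markdown
---
layout: post
title: "Rebooting my site with GitHub Pages"
---

The *Markdown* body below the front matter is what Jekyll
transforms into the final HTML page.
```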
GitHub Pages automatically converts your Jekyll project into an HTML site, with one caveat: it supports only a limited number of Jekyll plug-ins and templates; unfortunately for me, the template I chose, a beautifully simple template called Chalk, was not one of the supported ones. That means I had to adjust the typical Jekyll workflow a little bit:
- I had to fork the template’s repository and build my work on top of that instead of creating an empty Jekyll project from the command line;
- I had to run npm run setup to configure my environment;
- I had to start using npm run local instead of bundle exec jekyll serve to preview the site locally2;
- and perhaps most importantly, I could not have GitHub Pages compile the site for me – I’ll get back to this point in a bit.
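In practice, the adjusted workflow boiled down to something like this – the repository URL is a placeholder for your own fork:

```shell
# Fork the Chalk template on GitHub first, then:
git clone https://github.com/<your-username>/chalk.git my-site
cd my-site

npm run setup   # one-time environment configuration
npm run local   # local preview, instead of `bundle exec jekyll serve`
```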
Configuring the repository
After forking the template’s repository, I decided how my Git workflow would look: I’d protect the master branch from having any commits pushed into it directly3 and force myself to instead open a pull request for any group of changes I intended to make on the site, including writing a new article. Why? Because each commit that’s part of a pull request would be processed by Travis, which I’d instruct to check the spelling and grammar of any modified Markdown files and to check all hyperlinks in the resulting HTML pages for dead links – and to post the results of these checks as a comment in the pull request. Only after these checks passed would a merge back to master be allowed.
And speaking of merging to master, that action would also trigger Travis one more time. This time, on top of all the checks mentioned before, Travis would compile the site from Markdown to HTML, remove any unnecessary files and force-push the finished site into the gh-pages branch, as GitHub Pages serves the content of the gh-pages branch to visitors4.
What would such a Travis configuration look like? For me, this configuration file proved sufficient:
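My original file isn’t reproduced here, but a minimal sketch of what such a .travis.yml might contain could look like this – the variable names follow the ones discussed below, the secure values are placeholders, and the exact check commands depend on the tools you pick:

```yaml
language: ruby
rvm:
  - 2.7
env:
  global:
    - GITHUB_EMAIL=milan@example.com   # placeholder
    - secure: "..."                    # encrypted GITHUB_TOKEN
    - secure: "..."                    # encrypted CLOUDFLARE_AUTH_KEY
script:
  - bundle exec danger                 # spelling/grammar results back to the PR
  - bundle exec jekyll build           # compile Markdown to HTML
  - bundle exec htmlproofer ./_site    # dead links, alt texts, valid HTML
after_success:
  - bin/automated                      # deploy + cache purge, on master only
```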
I believe most of the file is pretty self-explanatory (poke me in the comments below if you disagree), so I’ll just briefly touch on parts of the environment section. The GITHUB_EMAIL and related variables are used for pushing the compiled website by the automated script mentioned just below them. The Cloudflare variables, including CLOUDFLARE_AUTH_EMAIL, are used to identify my zone (domain) and me as a user when I want to purge Cloudflare’s cache at the end of the build.
The two encrypted lines towards the end of the file contain a GitHub token (GITHUB_TOKEN) for the force-push into the gh-pages branch and a Cloudflare token (CLOUDFLARE_AUTH_KEY). You can obtain the GitHub token here (select the entire repo category as your scope) and you can add it into your configuration file by running the following command:
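With the travis command-line client installed, the encryption is a one-liner; the token value is a placeholder, and --add appends the encrypted line straight into .travis.yml:

```shell
travis encrypt GITHUB_TOKEN=<your-token-here> --add env.global
```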
The automated deployment script that’s run at the end of the build lives in bin/automated and looks something like this:
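Here is a sketch of what such a script might do – the repository URL, zone identifier and cleanup steps are placeholders and assumptions on my part:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Compile the site; unnecessary files would be removed here as well
bundle exec jekyll build

# Turn the compiled output into a fresh commit and force-push it
# into the gh-pages branch that GitHub Pages serves
cd _site
git init
git config user.name "Travis CI"
git config user.email "$GITHUB_EMAIL"
git add -A
git commit -m "Automated deployment"
git push --force "https://${GITHUB_TOKEN}@github.com/<user>/<repo>.git" master:gh-pages

# Purge Cloudflare's cache so visitors see the new content immediately
curl -X POST "https://api.cloudflare.com/client/v4/zones/<zone-id>/purge_cache" \
  -H "X-Auth-Email: ${CLOUDFLARE_AUTH_EMAIL}" \
  -H "X-Auth-Key: ${CLOUDFLARE_AUTH_KEY}" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```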
If you don’t use Cloudflare for your domain, you can obviously skip all the bits related to Cloudflare by deleting the relevant parts of bin/automated; however, in my next article, I’ll describe how to configure Cloudflare for a custom domain to get both a money-free and hassle-free SSL certificate, so maybe hold on until then.
One last thing I needed to set up in the Travis administration panel was a token for Danger to use when it wishes to spit out the spelling and grammar check results into a comment on a pull request – that one is obtained on the same page as the token for force-pushing, except this time the scope should really be only public_repo, and the token should be stored under the name DANGER_GITHUB_API_TOKEN. You can’t actually add it to your .travis.yml file as GitHub would detect it during your next push, send you a scary e-mail and deactivate the token. Sneaky bastard!
Danger itself is configured in Dangerfile like this:
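My exact Dangerfile isn’t reproduced here, but a minimal sketch using the danger-prose plugin (assuming that’s the plugin doing the spelling and grammar heavy lifting) might look like this:

```ruby
# Collect the Markdown files touched by this pull request
markdown_files = (git.modified_files + git.added_files)
                   .select { |file| file.end_with?(".md") }

# danger-prose wraps proselint and a spell checker and
# reports any findings back as a pull-request comment
prose.lint_files markdown_files
prose.check_spelling markdown_files
```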
Configuring the domain
Fortunately, this part was fairly easy. I wanted my site to be available at both milanvit.net and www.milanvit.net, so I created two A records for the apex domain, one of them pointing at 126.96.36.199, the other at a second GitHub Pages address; that’s the first case taken care of. After that, I created a CNAME record for www pointing at my GitHub Pages hostname, which takes care of the www case.
Once I was done with that, I entered www.milanvit.net into the CNAME file at the root of the repository, committed that, and after a push, GitHub was able to tell which site to serve and how.
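The CNAME file itself is nothing more than a single line with the domain name:

```shell
# Tell GitHub Pages which custom domain the site lives on
echo "www.milanvit.net" > CNAME
cat CNAME
```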
Recap: the entire workflow
After a painful wait for the DNS records to propagate all the way down to my computer, I had the site available on my domain. Success!
Whenever I want to make a change (such as writing this exact article), I create a new Git branch, start making changes, committing and pushing them, and at some point open a pull request. That triggers Travis to create a build out of that pull request, checking my spelling and grammar and verifying that all links work, all images have a textual description and all generated HTML is valid. The results of the build get presented via Danger back to the pull request, and from there I can at any point decide that I want to merge the pull request into master, triggering another build that in the end results in the compiled site being pushed into the gh-pages branch, from which GitHub Pages serves it to visitors.
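The day-to-day part of that workflow is plain Git; the branch and file names below are just examples:

```shell
git checkout -b new-article
# ...write the post...
git add _posts/2018-01-01-new-article.md
git commit -m "Add a new article"
git push --set-upstream origin new-article
# ...then open a pull request on GitHub and wait for the checks
```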
You might ask, why would one pick such a complicated route to creating a website?
I think a better question is… why not? Putting all these pieces of the puzzle together was really fun, and isn’t that the most important thing when it comes to anything new in IT?
Should you have any questions regarding this setup, feel free to ask in the comments or get inspired by looking at the source code of this website.
Jekyll is actually not limited to having Markdown as an input for the transformation, but can also process Textile and Liquid documents or even raw HTML and CSS. ↩
Technically, I did not have to do this. It’s just a convenience method that Chalk provides. ↩
I accomplished this by going to the repository’s settings ↝ Branches ↝ Protected branches ↝ master and checking the Protect this branch, Require status checks to pass before merging, Require branches to be up to date before merging, continuous-integration/travis-ci and Include administrators options. The second-to-last option will, of course, only appear after Travis is enabled for your repository. ↩
Theoretically, GitHub Pages can also serve the content of the master branch or of the docs subdirectory of the master branch. For my use-case, keeping the site’s source code in the master branch and the compiled result in the gh-pages branch felt, however, like the cleanest possible solution. All of these options can be configured in the repository’s settings. ↩