Redesigning my Website

Motivation

Maintaining a personal website is tedious. Keeping one up to date and running is not too bad when the content is static and rarely changes, for example, a resume or a simple portfolio. However, a personal website which also acts as a personal blog can be a challenge.

In my opinion, the challenge in keeping a blog site is the act of keeping the content fresh. Writing new articles is hard, but what makes it more difficult is the process of publishing an article to the blog once it is written.

If you’re on WordPress, or any other CMS, then it can be relatively easy to add new content. The disadvantage is that the design of the website can be inflexible, and cannot be tailored to a specific need unless a lot of customization and hacking is involved.

At that point, when you need that much customization, another option is to roll your own. But is it worth rolling your own CMS? Depending on how much time you have, you might end up not wanting to write blog articles at all after developing the entire system!

A popular solution nowadays is creating a static site, using something like Gatsby to generate the static files before deploying them to a server. This is how the articles on this website are written and managed now.

In order to keep the infrastructure for my website fresh, I try to work on it at least once a year. Although the static site generation model has worked really well, there are a lot of manual steps I rely on to fully process a blog article into something readable once deployed.

My main goal the past couple of years while revamping my personal website has been to rely less and less on tooling and server-side processing. The ultimate goal is to make it stupidly convenient to write new content.

This year, I decided to forgo any popular frameworks and build a static site generator tailored to my own needs.

Automating the Post-Processing Step

Although I want to use less tooling, the previous iteration of the site using Gatsby had the one killer feature I needed: something to translate my blog posts written in Markdown into HTML. That content-creation workflow had worked quite well, and was something I wanted to keep going forward.

However, with my previous approach, I had to first write the article in Markdown, and then use Gatsby plugins to convert it to HTML. Even after the conversion, the output was still not tailored to fit the overall scheme of the website. And so, there was a post-processing step to turn it into a blog post suitable for my personal site. That involved:

  1. Manually editing the HTML to include extra markup that would style the header of the post correctly – specifically, adding the title and the date posted.
  2. Editing a JSON file containing the article entries so the article would be displayed on the blog listing page. Each entry contains the hyperlink reference to the HTML file, and the title which should be displayed (an example entry is sketched after this list).
  3. The blog page is then a React page which ingests the JSON file, and creates an array of li elements to be displayed as navigable links.
  4. When I am ready to publish, I just run a single npm command to push it to a private GitHub repository. A continuous deployment service will then drop the website artifacts onto an Azure Web App Service.
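For illustration, an entry in that JSON file looked roughly like this (the field names and paths here are approximations, not the exact schema):

[
  {
    "href": "/blog/redesigning-my-website/index.html",
    "title": "Redesigning my Website"
  },
  {
    "href": "/blog/some-older-article/index.html",
    "title": "Some Older Article"
  }
]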

The previous implementation wasn’t so bad. For what it’s worth, it takes a couple of days to write a blog post, and then about 2 hours to type it up and clean it up. Finally, it takes an additional 10 minutes or so to perform the post-processing steps needed for the article to be displayed on the site.

Of course, post-processing can be automated, since it is mostly the same steps for each new article I write. The goal for the new iteration of the website was to make post-processing better. Part of the motivation was also that I had grown tired of web frameworks, as they provide too many features, and too much complexity, for what I really need.

And so, armed with concrete use-cases in my dream-list of features, I decided to go back to the year 2003, and write my own.

This new system is very simple. It has reduced the time spent on the post-processing workflow to zero. All I am concerned with now is writing the initial blog post in Markdown and saving it to a directory. It is then automatically compiled into HTML, and magically appears on the blog page as a navigable link.

There is a master Node.js script that gathers a set of templates and assembles them into a typical page on the website.

A standard page for this website is simple. It consists of a header, a footer, a Disqus thread, and the main content.

<html>
  <head>
    <style>
      {{styleSheet}}
    </style>
    <title>Roger Ngo's Website - {{pageTitle}}</title>
  </head>
  <body>
    <div class="header">
      {{headerContent}}
    </div>
    <div class="main main-content">
      {{mainContent}}
    </div>
    <div class="footer">
      {{footerContent}}
    </div>
    {{disqus}}
  </body>
</html>
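
The script fills each {{placeholder}} token with the corresponding content before writing out the final page. A minimal sketch of that substitution step, assuming plain string replacement (the template and asset paths below are illustrative):

const fs = require("fs");

// Replace every {{token}} in the template with its value.
// Sketch only - paths and the value map are illustrative.
function renderTemplate(templatePath, values) {
  let html = fs.readFileSync(templatePath, "utf8");

  for (const [token, content] of Object.entries(values)) {
    html = html.split(`{{${token}}}`).join(content);
  }

  return html;
}

const page = renderTemplate("templates/page.html", {
  styleSheet: fs.readFileSync("styles/main.css", "utf8"),
  pageTitle: "Blog",
  headerContent: fs.readFileSync("templates/header.html", "utf8"),
  mainContent: "<p>Hello, world!</p>",
  footerContent: fs.readFileSync("templates/footer.html", "utf8"),
  disqus: ""
});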

The header and footer are self-explanatory. The header, for example, contains the links to the main pages of the site: Home, Blog, FAQ, etc.

<header>
  <h1>Roger Ngo</h1>
  <nav class="navigation">
    <ul>
      <li>
        <a href="/">Home</a>
      </li>
      <li>
        <a href="/blog">Blog</a>
      </li>
      <li>
        <a href="/resume.html">Resume</a>
      </li>
      <li>
        <a target="_blank" href="/nesthing">NesThing</a>
      </li>
      <li>
        <a target="_blank" href="https://github.com/urbanspr1nter">GitHub</a>
      </li>
      <li>
        <a href="/faq.html">FAQ</a>
      </li>
    </ul>
  </nav>
</header>

The main content is then any content that belongs to the specific page itself, for example, a blog article.

Pages are compiled in a two-step process in the script.

First Phase

  1. The script recursively walks down through the filesystem and finds any blog articles named index.md which do not have a corresponding index.html.
  2. When found, index.md is converted to HTML using pandoc.
  3. The HTML file is then modified, using cheerio, to include the article header: title, date authored, and author.
  4. The HTML file is then saved.
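
Condensed, the first phase looks something like the following sketch. It assumes pandoc is available on the PATH, and the article metadata fields are hypothetical:

const fs = require("fs");
const path = require("path");
const { execSync } = require("child_process");
const cheerio = require("cheerio");

// Recursively collect directories that contain an index.md
// without a corresponding index.html.
function findPendingArticles(dir, results = []) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      findPendingArticles(fullPath, results);
    } else if (
      entry.name === "index.md" &&
      !fs.existsSync(path.join(dir, "index.html"))
    ) {
      results.push(dir);
    }
  }
  return results;
}

// Convert index.md to HTML with pandoc, then prepend the article
// header (title, date authored, author) using cheerio.
// The meta fields here are illustrative.
function compileArticle(articleDir, meta) {
  const mdPath = path.join(articleDir, "index.md");
  const htmlPath = path.join(articleDir, "index.html");

  execSync(`pandoc "${mdPath}" -f markdown -t html -o "${htmlPath}"`);

  // pandoc emits a fragment; cheerio wraps it in html/body for us.
  const $ = cheerio.load(fs.readFileSync(htmlPath, "utf8"));
  $("body").prepend(`
    <div class="article-header">
      <h1>${meta.title}</h1>
      <p>${meta.dateAuthored} - ${meta.author}</p>
    </div>
  `);

  fs.writeFileSync(htmlPath, $.html());
}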

Second Phase

The filesystem is walked again, and all pages are compiled – assembling the templates together and gluing the main content into the final page. Finally, a Disqus thread is added to every blog article, and at the same time, a data structure is maintained holding the list of li elements that reference all blog articles compiled thus far.

The final blog page is rendered using the list of li elements, and written to a separate HTML file.
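
A sketch of that last step, with the same illustrative entry fields as the JSON example earlier:

// Render the blog listing content from the entries collected
// during the walk. Sketch only - entry fields are assumed.
function renderBlogListing(entries) {
  const listItems = entries
    .map((entry) => `<li><a href="${entry.href}">${entry.title}</a></li>`)
    .join("\n");

  return `<ul class="blog-listing">\n${listItems}\n</ul>`;
}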

All files are dumped into a /public/ directory, from which they are pushed to a private GitHub repository, and served through a web server.

Deployment

Once compiled, the /public/ directory is copied into a GitHub repository, and pushed automatically by a shell script if a production deploy flag is set.
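
The actual deploy step lives in a shell script, but the idea is roughly the following (sketched in Node.js to match the rest of the tooling; the repository path, flag name, and commit message are made up):

const { execSync } = require("child_process");

// Rough equivalent of the deploy script. The --production flag,
// destination path, and commit message are all illustrative.
if (process.argv.includes("--production")) {
  execSync("cp -r public/* ../website-deploy/", { stdio: "inherit" });
  execSync('git add -A && git commit -m "Deploy" && git push', {
    cwd: "../website-deploy",
    stdio: "inherit"
  });
}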

The final destination is an Azure App Service running on a Basic App Service plan. A goal for 2021 is to cut cloud costs. I have yet to think about what is better for my use case, and I am happy with the Azure App Service right now – so changing this is something I will leave for later.

HTTPS

Believe it or not, it has taken me this long to make my web page support at least TLS 1.1. For the longest time, users could only navigate the page using plain HTTP by default. I kept it this way because I felt HTTPS was not needed for a site serving just static content.

However, I must get with the times. Now that HTTPS is treated as a first-class citizen by most browsers, it must be supported on my own website too.

I have upgraded the site by purchasing an SSL certificate from my domain name provider, and installing it in my Azure App Service. As of 2020, this website can still be browsed over either HTTP or HTTPS, but I will eventually turn off HTTP, and support HTTPS only.