
Symbolic Forest

A homage to loading screens.


Photo post of the week

Neu, hanes Cymreig

Occasionally, when I visit The Mother, I look through old photos. Either family ones, or ones from my own albums. My first camera was a Christmas present I’d asked for when I was aged 7 or 8: a Halina-branded Haking Grip-C compact camera that took 110 cartridge film. With a fixed focus, a fixed shutter-speed and a choice of two apertures, it was an almost-entirely mechanical beast. The shutter was cocked by a lever which engaged with the film’s sprocket holes (a single hole per frame on 110 film) and the only electrical component was a piezoelectric switch attached to the shutter, for firing a Flipflash bulb if you’d inserted any. I might still have an unused Flipflash somewhere.

A photography geek might look at the above spec and be amazed that I managed to photograph anything recognisable on that type of camera. Frankly, even at age 7, so was I: to go with the camera I’d been given a book called something like A Children’s Guide To Photography which made no bones about this type of camera being a very basic one that it was hard to get good results from. It lasted me a few years though, despite at least one drop that popped the back off; I was still using it in my teens, I think.

Sometimes on this blog I’ve mentioned visiting the Ffestiniog Railway; last December for example. The last time I visited The Mother, though, I dug out the photos I’d taken on my very first visit, on which we did a single round trip from Blaenau to Porthmadog and back again behind the Alco. All the photos were taken right at the start of the day, it seems.

The Alco at Blaenau

The Alco at Blaenau

The Alco at Blaenau

The weather in Blaenau is famously murky and damp; I’m not sure quite how much of the murk and grain in those photos is down to the camera and how much is down to the weather. Still, what the photos lack in sharpness, they certainly have in atmosphere.

We can rebuild it! We have the technology! (part four)

Or, finishing off the odds and ends

Settling down to see what else I should write in the series of posts about how I rebuilt this website, I realised that the main issues have already been covered. The previous posts in this series have discussed the following:

And throughout the last two, we touched on some other important software engineering topics, such as refactoring your code so that you Don’t Repeat Yourself, and optimising your code when it’s clear that (and where) it is needed.

There are a few other topics to touch on, but I wasn’t sure they warranted a full post each, so this post is on a couple of those issues, and any other odds and ends that spring to mind whilst I am writing it.

Firstly, the old blog was not at all responsive: in web front-end terms, that means it didn’t mould itself to fit the needs of the users and their devices. Rather, it expected the whole world to use a monitor the same size as mine, and if they didn’t, then tough. When I wrote the previous designs, the majority of the traffic the site was receiving was from people on regular computers; nowadays, that has changed completely.

However, the reason that this isn’t a particularly exciting topic to write about is that I didn’t learn any new skills or dive into interesting new programming techniques. I went the straightforward route, installed Bootstrap 4.5, and went from there. Now, I should say, using Bootstrap doesn’t magically mean your site will become responsive overnight; in fact, it’s very straightforward to accidentally write an entirely non-responsive website. Responsiveness needs careful planning. However, with that careful planning, and some careful use of the Bootstrap CSS layout classes, I achieved the following aims:

  • The source code is laid out so that the main content of the page always comes before the sidebars in the code, wherever the sidebars are actually displayed. This didn’t matter so much on this blog, but on the Garden Blog, which on desktop screens has sidebars on both sides of the page, it does need to be specifically coded. Bootstrap’s layout classes, though, allow you to separate the order in which columns appear on a page from the order in which they appear in the code.
  • More importantly, the sidebars move about depending on page width. If you view this site on a desktop screen it has a menu sidebar over on the right. On a narrow mobile screen, the sidebar content is down below at the bottom of the page.
  • The font resizes based on screen size for easier reading on small screens. You can’t do this with Bootstrap itself; this required @media selectors in the CSS code with breakpoints chosen to match the Bootstrap ones (which, fortunately, are clearly documented).
  • Content images (as opposed to what you could call “structural images”) have max-width: 100%; set. Without this, if the image is bigger than the computed pixel size of the content column, your mobile browser will likely rescale things to get the whole image on screen, so the content column (and the text in it) will become too narrow to read.

On the last point, manual intervention is still required on a couple of types of content. Embedded YouTube videos like in this post need to have the embed code manually edited, and very long lines of text without spaces need to have soft hyphens or zero-width spaces inserted, in order to stop the same thing happening. The latter usually occurs in example code, where zero-width spaces are more appropriate than soft hyphens. All in all though, I’ve managed to produce something that is suitably responsive 95% of the time.
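To make the zero-width-space trick concrete, here’s a small hypothetical helper (not part of the site’s actual code; the function name and the run length are my own) that inserts U+200B characters into any unbroken run of text longer than a given limit, so the browser has somewhere to wrap:

```javascript
// Hypothetical helper: insert zero-width spaces (U+200B) every maxRun
// characters into any unbroken run of non-space text longer than maxRun,
// giving a narrow mobile browser somewhere to break the line.
function breakLongRuns(text, maxRun = 20) {
  return text.replace(/\S+/g, (word) =>
    word.length <= maxRun
      ? word
      : word.replace(new RegExp(`(.{${maxRun}})`, "g"), "$1\u200b")
  );
}
```

Ordinary prose passes through untouched; only over-long tokens, such as long URLs or unbroken example code, gain the invisible break points.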

The other point that is worth writing about is the build process of the site. As Wintersmith is a static site generator, every change to the site needs the static files to be built and deployed. The files from the previous version need to be deleted (in case any stale files that have disappeared completely from the latest iteration of the output are still lying about) and then Wintersmith is run to generate the new version. You could do this very simply with a one-liner: rm -rf ../../build/* && wintersmith build if you’re using Bash, for example. However, this site actually consists of three separate Wintersmith sites in parallel. The delete step might only have to be done once, but doing the build step three separate times is a pain. Moreover, what if you only want to delete and rebuild one of the three?

As Wintersmith is a JavaScript program, and uses npm (the Node Package Manager) for managing its dependencies, it turns out that there’s an easy solution to this. Every npm package or package-consumer uses a package.json file to control how npm works; and each package.json file can include a scripts section containing arbitrary commands to run. So, in the package.json file for this blog, I inserted the following:

"scripts": {
  "clean": "node ../shared/js/unlink.js ../../build/blog",
  "build": "wintersmith build"
}

You might be wondering what unlink.js is. I said before that “if you’re using Bash” you could use rm -rf ../../build. However, I develop on a Windows machine, and for this site I use VS Code to do most of the writing. Sometimes therefore I want to build the site using PowerShell, because that’s the default VS Code terminal. Sometimes, though, I’ll be using GitBash, because that’s convenient for my source control commands. One day I might want to develop on a Linux machine. One of the big changes between these different environments is how you delete things: del or Remove-Item in PowerShell; rm in Bash and friends. unlink.js is a small script that reproduces some of the functionality of rm using the JavaScript del package, so that I have a command that will work in the same way across any environment.

So, this means that in the main blog’s folder I can type npm run clean && npm run build and it will do just the same thing as the one-liner command above (although note that the clean step only deletes the main blog’s files). In the other Wintersmith site folders, we have a very similar thing. Then, in the folder above, we have a package.json file which contains clean and build commands for each subsite, and a top-level command that runs each of the others in succession.

"scripts": {
  "clean:main": "cd main && npm run clean",
  "clean:misc": "cd misc && npm run clean",
  "clean:garden": "cd garden && npm run clean",
  "clean": "npm run clean:misc && npm run clean:main && npm run clean:garden",
  "build:main": "cd main && npm run build",
  "build:misc": "cd misc && npm run build",
  "build:garden": "cd garden && npm run build",
  "build": "npm run build:misc && npm run build:main && npm run build:garden"
}

And there you have it. By typing npm run clean && npm run build at the top level, it will recurse into each subsite and clean and build all of them. By typing the same command one folder down, it will clean and build that site alone, leaving the others untouched.

When that’s done, all I have to do is upload the changed files; and I have a tool to do that efficiently. Maybe I’ll go through that another day. I also haven’t really touched on my source-control and change management process; all I have to say there is that it doesn’t matter what process you use, but you will find things a lot more straightforward if you find one you like and stick to it. Even if you’re just a lone developer, a sensible source-control workflow and release process makes life much easier; you don’t need anything as rigid as a big commercial organisation, but having a set process for storing your changes and releasing them to the public makes you much less likely to slip up and make mistakes. This is probably something else I’ll expand into an essay at some point.

Is the site perfect now? No, of course not. There are always more changes to be made, and more features to add; I haven’t even touched on the things I decided not to do right now but might bring in one day. Those changes are for the future, though. Right now, for a small spare-time project, I’m quite pleased with what we have.

Putting things into practice

Or, time to get the model trains out

A couple of times recently I’ve mentioned my vague model railway plans and projects, including the occasional veiled hint that I’ve already been building stock for the most fully-fleshed out of these ideas. At the weekend I had some time to myself, so I unpacked my “mobile workbench” (an IKEA tray with a cutting mat taped firmly to it) and had a look at which projects I could move on with.

The other week I’d been passing my local model shop and popped in to support them by buying whatever bits and pieces I could remember I needed. I’ve been wondering about the best way to weight some of my stock, so I bought a packet of self-adhesive model aircraft weights. I wasn’t convinced they would be ideal because they’re a bit on the large side for 009 scale, but the 5g size does just fit nicely inside a van.

Wagon and weights

Yes, I know I didn’t clean off the feed mark on the inside of the wagon; nobody’s going to see it, are they? The weights are very keen to tell everyone they are steel, not lead. I wasn’t really sure what amount to go with, especially given that (like most Dundas wagon kits) it has plastic bearings; it now has 10g of steel inside it and feels rather heavy in the hand.

Another project that’s been progressing slowly is a Dundas kit for a Ffestiniog & Blaenau Railway coach, which will be a reasonable representation of the first generation of Porthdwyryd & Dolwreiddiog Railway coaches. The sides were painted early on with this kit so that I could glaze it before it was assembled; it still needs another coat on the panels but the area around the window glazing shouldn’t need to see the paintbrush again I hope. In my last train-building session I fitted its interior seating; in this one, it gained solebars and wheels and can now stand on the rails. Its ride is very low, so low that, given typical 009 flanges, it needs clearance slots in the floor for the wheels.

Coach underside

This made it a little awkward to slot the wheels into place, but when I did it all fitted together rather nicely, with little lateral slop in the wheels and a quick test showing everything was nice and square.

Standing on a perspex block to check for squareness

To show just how low-riding it is—like many early narrow gauge carriages—I used a piece of card and a rule to measure how much clearance there is above rail level.

Height measurement

This shows rather harshly that I’ve let this model get a bit dusty on the workbench.

It needs couplings, of course, so I made a start on folding up a pair of Greenwich couplings for it. I’m still trying to find the perfect pliers for making Greenwich couplings. They don’t need any soldering, at least, but they do need folding up from the fret and then fitting the two parts—buffer and loop—together with a pin. These small flat-nosed pliers are very good for getting a crisp fold.

Greenwich coupling fret

Part-folded Greenwich buffer

I should give those pictures a caption about the importance of white balance in photography, given how differently the green cutting mat has come out between them. By the time I got to this stage it was starting to get a bit too dark to fit two tiny black pieces of brass together with a black pin and get them moving freely, never mind wrapping ferromagnetic wire around the loop tail. Still, all in all, I think everything seemed to be coming along quite nicely.

The only perfect railway is the one you invent yourself

Or, some completely fictional history

The other week, I wrote about how there are just too many interesting railways to pick one to build a model of, which is one reason that none of my modelling projects ever approach completion; indeed, most of them never approach being started. Some, though, have developed further than others. In particular, I mentioned a plan for a fictitious narrow-gauge railway in the Rhinogydd, and said I’ve started slowly acquiring suitable stock for it. What I didn’t mention is that I’ve also put together the start of a history of this entirely invented railway. I first wrote it down a few years ago, and although it is only a very high-level sketch, is fairly implausible in places, and probably needs a lot of tweaks to its details, I think it’s a fair enough basis for a railway that is fictional but interesting.

Narrow-gauge modelling in general does seem to have something of a history of the planning and creation of entire fictional systems; by contrast, it’s something that has largely disappeared from British standard-gauge railway modelling, partly due to the history of the British railway network. This, then, is my attempt at an entry into this genre. If you don’t know the Rhinogydd: they are the mountain range that forms the core of Ardudwy, the mountains behind Harlech that form a compact block between the Afon Mawddach and the Vale of Ffestiniog. The main change I have made to real-world geography is to replace Harlech itself with a similar town more usable as a port; all the other villages, hamlets and wild mountain passes are essentially in the same place as in the real world, and if you sit down with this fictional history and the Outdoor Leisure map that covers the district, you should be able to trace the route of these various railways without too much trouble.

The primary idea behind the railway is that profitable industry was discovered in the heart of the Rhinogydd. Not slate as in Ffestiniog; the geology is all wrong for that. The industry here would be mining for metal ores, and it isn’t really too far from the truth. There genuinely were a whole host of mines, largely digging up manganese ore, in the middle of what was and is a very inhospitable area; all of them were very small and ultimately unsuccessful. The fiction is that an intelligent landowner realised that a railway would enable the mines to develop; so, using part of an earlier horse-drawn tramway, a rather circuitous route was built from the middle of the mountains down to a port at the mouth of the Afon Dwyryd. The earlier tramway, also fictional, would have run in a very different direction, from the Afon Artro up to the small farms in the hills overlooking Maentwrog. Why you would want to build a horse tramway over such a route I’m still not entirely sure, but it means that my Porthdwyryd & Dolwreiddiog Railway can be a network, a busy well-trafficked main line in one direction, and a half-abandoned branch line in the other. This is of course not too dissimilar to the Welsh Highland Railway, with its Croesor and Bryngwyn branches, originally both main lines but both later superseded.

I did, a few years ago, draft a whole outline history for the railway, trying to explain quite why such a thing would and could exist, and how it might have at least partially survived through to the present day. It was an interesting exercise, although I’m not sure it would be a very interesting piece of writing to post here. I do like the thought, though, of writing it up as a full history, complete with some unanswered questions; and then, when I do build models of the line, I can claim that it is at least an approximately accurate model of something that actually did run on the railway. I quite like the idea of steadfastly maintaining that it is actually a real place—what do you mean, you’ve never heard of it before?—and that I am trying to model, however imperfectly, trains that really did exist. I can always be very apologetic when my model “isn’t as accurate as I’d like”, or when I “haven’t been able to find out” exactly what colour a given train was painted in a given year. I wonder how persuasive I will manage to be.

The railway in the woods

Or, some autumnal exploration

Today: we went to wander around Leigh Woods, just outside Bristol on the far bank of the Avon Gorge. It’s not an ancient woodland: it is a mixture of landscapes occupied and used for various purposes for the past few thousand years. A hillfort, quarries, formal parkland, all today merged and swallowed up by woodland of various forms and patterns, although you can see its history if you look closely. I love walking around damp, wet countryside in autumn; although today was dry, everything had a good soaking yesterday and earlier in the week. The dampness brings out such rich colours in photos, even though I didn’t have anything better than the camera on my phone with me.

Twisted roots

Twisted trunks

Part of the woods, “Paradise Bottom”, belonged to the Leigh Court estate and was laid out by Humphry Repton, the garden and landscape designer who should not be confused with the *Boulder Dash*-style videogame that shares his surname. It includes a chain of ponds which are now very much overgrown, their water brown and their bottoms thick with silt; and some of the first giant redwood trees planted in Britain, around 160 years ago now.

Redwood, of not inconsiderable size

The ponds drain into a sluggish, silty stream which trickles through the woods down into the Avon, the final salt-tinged part of the stream running under a handsome three-arched viaduct built by the Bristol & Portishead Railway, back when the redwoods were newly-planted.

Railway viaduct

Railway viaduct

If you’ve heard of the Bristol & Portishead, it may be because of the ongoing saga of when (if ever) it will reopen to passengers again. It closed to passenger traffic back in the 1960s and to freight in the early 1980s, but unusually it was mothballed rather than pulled up and scrapped. At the start of the 21st century it was refurbished and reopened for freight trains, but not to full passenger standards. Although there have been plans on the table for ten or fifteen years now to reopen it to passenger traffic, years have passed, the leaves in the wood have fallen and grown again, and nothing keeps on resolutely happening. The main issues are the signalling along the line (token worked, I understand, with traincrew-operated instruments) and its single track, which limits maximum capacity to one train each way per hour at the very most. Aside from putting in a station or two, these are the main factors which at present prevent it from being reopened to passengers.

When I moved to Bristol, over ten years ago now, the Bristol & Portishead line was busy every day with imported coal traffic. Now that that is fading away, the line itself is much quieter, and indeed can go for days at a time with no trains at all. Its railheads are dull, not shiny, as it curves through the lush green woodland. I walked up to the top of one of its tunnel mouths, and looked down upon it silently.

The railway in the woods

We can rebuild it! We have the technology! (part three)

Introducing Pug

If you want to start reading this series of articles from the start, the first part is here. In the previous part we discussed how I adapted Wintersmith to my purposes, adding extra page generators for different types of archive page, and refactoring them to make sure that I wasn’t repeating the same logic in multiple places, which is always a good process to follow on any sort of coding project. This post is about the templating language that Wintersmith uses, Pug. When I say “that Wintersmith uses”, incidentally, you should always add a “by default” rider, because, as we saw previously, adding support for something else generally wouldn’t be too hard to do.

In this case, though, I decided to stick with Pug because rather than being a general-purpose templating or macro language, it’s specifically tailored towards outputting HTML. If you’ve ever tried playing around with HTML itself, you’re probably aware that the structure of an HTML document (or the Document Object Model, as it’s known) has to form a tree of elements, but also that the developer is responsible for making sure that it actually is a valid tree by ending elements in the right order. In other words, when you write code that looks like this:

<section><article><h2>Post title</h2><p>Some <em>content</em> here.</p></article></section>

it’s the developer who is responsible for making sure that those </p>, </article> and </section> tags come in the right order; that the code ends its elements in reverse order to how they started. Because HTML doesn’t pay any attention to (most) white space, the closing tags have to be supplied explicitly. Pug, on the other hand, enforces correct indentation to represent the tree structure of the document, meaning that you can’t accidentally create a document that doesn’t have a valid structure. It might be the wrong structure, if you mess up your indentation, but that’s a separate issue. In Pug, the above line of HTML would look like this:

section
  article
    h2 Post title
    p Some
      em content
      | here.

You specify the content of an element by putting it on the same line or indenting the following line; elements are automatically closed when you reach a line with the same or less indentation. Note that Pug assumes that the first word on each line will be an opening tag, and that this assumption can be suppressed with the | symbol. You can supply attributes in brackets, so an <a href="target"> ... </a> element becomes a(href="target") ..., and Pug also has CSS-selector-style shortcuts for the class and id attributes, because they’re so commonly used. The HTML code

<section class="mainContent"><article id="post-94">...</article></section>

becomes this in Pug:

section.mainContent
  article#post-94 ...

So far so good; and I immediately cracked on with looking at the pages of the old Wordpress blog and converting the HTML of a typical page into Pug. Moreover, Pug also supports inheritance and mixins (a bit like functions), so I could repeat the exercise of refactoring common code into a single location. The vast majority of the template code for each type of page sits in a single layout.pug file, which is inherited by the templates for specific types of page. It defines a mixin called post() which takes the data structure of a single post as its argument and formats it. The template for single posts is reduced to just this:

extends layout
block append vars
  - subHeader = '';
block append title
  | #{ ' : ' + page.title }
block content
  +post(page)

The block keyword is used to either append to or overwrite specific regions of the primary layout.pug template. The content part of the home page template is just as straightforward:

extends layout
block content
  each article in articles
    +post(article)

I’ve omitted the biggest part of the home page template, which inserts the “Newer posts” and “Older posts” links at the bottom of the page; you can see though that for the content block, the only difference is that we iterate over a range of articles—chosen by the page generator function—and call the mixin for each one.
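For a sense of what the page generator might be doing behind that template, here’s a hedged sketch of the pagination step in plain JavaScript. The function name, the page shape and the page size are illustrative assumptions of mine, not Wintersmith’s actual API:

```javascript
// Illustrative sketch: split the full article list (assumed already
// sorted newest-first) into fixed-size pages for the home page and its
// "Older posts" successors. Not Wintersmith's real generator interface.
function paginate(articles, perPage = 10) {
  const pages = [];
  for (let i = 0; i < articles.length; i += perPage) {
    pages.push({
      number: pages.length + 1,
      articles: articles.slice(i, i + perPage),
    });
  }
  return pages;
}
```

Each page object then carries exactly the slice of articles that the template’s each article in articles loop iterates over.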

The great thing about Pug, though, is that it lets you drop out into JavaScript at any point to run bits of code, and when doing that, you don’t just have access to the data for the page it’s working on, you can see the full data model of the entire site. So this makes it easy to do things such as output the sidebar menus (I say sidebar; they’re at the bottom if you’re on mobile) with content that includes things like the number of posts in each month and each category. In the case of the tag cloud, it effectively has to put together a histogram of all of the tags on every post, which we can only do if we have sight of the entire model. It’s also really useful to be able to do little bits of data manipulation on the content before we output it, even if it’s effectively little cosmetic things. The mixin for each post contains the following JavaScript, to process the post’s categories:

- if (!Array.isArray(thePost.metadata.categories)) thePost.metadata.categories = [ thePost.metadata.categories ]
- thePost.metadata.categories = Array.from(new Set(thePost.metadata.categories))

The - at the start of each line tells Pug that this is JavaScript code to be run, rather than template content; all this code does is tidy up the post’s category data a little, firstly by making sure the categories are an array, and secondly by removing any duplicates.
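Pulled out of the template into a standalone function (the function name here is mine, purely for illustration), that tidy-up amounts to this:

```javascript
// Illustrative standalone version of the template's category tidy-up:
// front matter may supply a single category as a bare string, and may
// accidentally list the same category twice.
function tidyCategories(categories) {
  // Make sure we always have an array...
  if (!Array.isArray(categories)) categories = [categories];
  // ...then remove any duplicates, preserving first-seen order.
  return Array.from(new Set(categories));
}
```
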

You can, however, get a bit carried away with the JavaScript you include in the template. My first complete design for the blog, it turned out, took something like 90 minutes to 2 hours to build the site on my puny laptop; not really helpful if you just want to knock off a quick blog post and upload it. That’s because all of the code I had written to generate the tag cloud, the monthly menus and the category menus was in the template, so it was being recomputed all over again for each page. If you assume that the time taken to generate all those menus is roughly proportional to the number of posts on the blog, O(n) in computer science terms (I haven’t really looked into it—it can’t be any better but it may indeed be worse), then the time taken to generate the whole blog becomes O(n²), which translates as “this doesn’t really scale very well”. The garden blog with its sixtyish posts so far was no problem; for this blog (over 750 posts and counting) it wasn’t really workable.

What’s the solution to this? Back to the Wintersmith code. All those menus are (at least with the present design) always going to contain the same data at any given time, so we only ever need to generate them once. So, I created another Wintersmith plugin, cacher.coffee. The JavaScript code I’d put into my layout templates was converted into CoffeeScript code, called from the plugin registration function. It doesn’t generate HTML itself; instead, it generates data structures containing all of the information in the menus. If you were to write it out as JSON it would look something like this:

"monthData": [
  { "url": "2020/10/", "name": "October 2020", "count": 4 },
  { "url": "2020/09/", "name": "September 2020", "count": 9 },
  ...
],
"categoryData": [
  { "name": "Artistic", "longer": "Posts categorised in Artistic", "count": 105 },
  ...
],
"tagData": [
  { "name": "archaeology", "count": 18, "fontSize": "0.83333" },
  { "name": "art", "count": 23, "fontSize": "0.97222" },
  ...
]

And so on; you get the idea. The template then just contains some very simple code that loops through these data structures and turns them into HTML in the appropriate way for each site. Doing this cut the build time down from up to two hours to around five minutes. It’s still not as quick to write a post here as it is with something like Wordpress, but five minutes is a liveable amount of overhead as far as I am concerned.
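As a sketch of the idea (in plain JavaScript, although the real plugin is CoffeeScript, and the post shape and font-size scaling here are my own assumptions), the tag-cloud part of that cached data might be computed along these lines:

```javascript
// Illustrative sketch of building tagData once, up front, instead of in
// the template for every page. Post shape ({ tags: [...] }) and the
// linear font-size scaling are assumptions, not the plugin's real code.
function buildTagData(posts, minSize = 0.75, maxSize = 1.5) {
  // Histogram of every tag across every post.
  const counts = new Map();
  for (const post of posts) {
    for (const tag of post.tags || []) {
      counts.set(tag, (counts.get(tag) || 0) + 1);
    }
  }
  // Scale each tag's font size linearly between the rarest and the
  // most common tag.
  const values = [...counts.values()];
  const lo = Math.min(...values);
  const hi = Math.max(...values);
  return [...counts.entries()]
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([name, count]) => ({
      name,
      count,
      fontSize: (hi === lo
        ? minSize
        : minSize + ((count - lo) / (hi - lo)) * (maxSize - minSize)
      ).toFixed(5),
    }));
}
```

Computed once per build, this is O(n) in the number of posts; computed inside the template it runs once per page, which is where the O(n²) build time came from.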

The Plain People Of The Internet: So, you’re saying you got it all wrong the first time? Wouldn’t it all have been fine from the start if you’d done it that way to begin with?

Well, yes and no. It would have been cleaner code from the start, that’s for certain; the faster code also has a much better logical structure, as it keeps the code that generates the semantic content at arm’s length from the code that handles the visual appearance, using the data structure above as a contract between the two. Loose coupling between components is, from an architectural point of view, nearly always preferable to tight coupling. On the other hand, one of the basic principles of agile development (in all its many and glorious forms) is: don’t write more code than you need. For a small side project like this blog, the best course of action is nearly always to write the simplest thing that will work, be aware that you’re now taking on some technical debt, and come back to improve it later. The difficult thing in the commercial world is nearly always making sure that that last part actually happens, but for a site like this all you need is self-discipline and time.

That just about covers, I think, how I learned enough Pug to put together the templates for this site. We still haven’t covered, though, the layout itself, and all the important ancillary stuff you should never gloss over such as the build-deploy process and how it’s actually hosted. We’ll make a start on that in the next post in this series.

*The next post in this series, in which we discuss responsive design, and using `npm` to make the build process more straightforward, is here*

Too much to choose from

Or, why are there so many different trains in the world

Yesterday I said that having more blog posts about trains than about politics would be a good target to aim for by this time next year; and regardless of how frequently I post here overall, that’s probably still a good rule of thumb to aim for. So today, I thought I’d talk about model trains, and how I end up never building any.

I’ve always wanted a model train of some kind, ever since I was small and had a Hornby “Super Sound” trainset with an allegedly realistic chuff, generated by a sound machine wired in to the power circuitry. However, there have always been a few problems with this, aside from the perennial problems of having enough time and space for such a space-gobbling hobby. There are two fundamental ones, at root: firstly, I am perennially pedantic, and secondly, I just like such a broad range of different railways and trains that it would be extremely hard to choose just one to stick with as a project. Given the first point, I would always want anything I build to be as accurate as I could make it; given the second, I can never stick with one idea for long enough to build enough stuff to practise the skills sufficiently and be a good enough model-builder to achieve this. Whilst drafting this post in my head, I tried to think just how many railways I’ve been interested in enough to start working out the feasibility of some sort of model railway project. It’s a long list.

  • Some sort of rural German branch line (I did actually start buying stock for this)
  • A fictitious narrow-gauge line in the Rhinogydd, in Ardudwy (again, this has reached the stock-acquiring level)
  • Grimsby East Marsh or somewhere else in Grimsby Docks
  • Something inspired by the Cambrian Railways’ coast section (although the actual stations are mostly fairly unattractive, apart from possibly Penrhyndeudraeth)
  • Woodhall Junction, on the Great Northern
  • Bala Junction (ever since I saw a plan of it in a Railway Modeller years and years ago)
  • Wadebridge (come on, who doesn’t like the North Cornwall Railway)
  • North Leith on the North British Railway (at 1:76 scale, you could do it to exact scale and it would still fit inside a 6 foot square)
  • Something fictitious based on the idea that the Lancashire, Derbyshire and East Coast Railway had actually finished their planned line east of Lincoln, which was always a wildly implausible plan in the real world.
  • The Rosedale Railway (although in practice this would probably be very dull as a model)
  • Moorswater, where the Liskeard and Looe Railway and Liskeard and Caradon Railways met (ideally when it was still in use as a passenger station, although that means before it was connected to the rest of the railway network)

Even for a modelling genius, or the sort of modeller who can produce an amazing, detailed landscape, then immediately packs it away in a box and starts working on the next one, that’s a lot of different ideas to vacillate between. And some of these would require just about everything on the model to be completely hand-made: Moorswater, for example, would have to have fully hand-made track, stock, locomotives and buildings in order to even vaguely resemble the original. With something like Woodhall Junction or Grimsby Docks most of the place-specific atmosphere is in the buildings rather than the trains, but even so, getting a good range of location-specific locos and stock would be difficult.

Just lately, there’s been another one to add to the list: I read a small book I picked up about the Brecon and Merthyr Railway, and was intrigued. I quickly found it had a fascinating range of operations, reached 1923 without ever owning any bogie coaches, and standardised on using somersault signals. The large-scale OS maps that are easily available (i.e. those in the National Library of Scotland collection) show some very unusual track layouts; its main locomotive works at Machen was an attractive and jumbled mix of 1820s stone and 1900s corrugated iron; and it even had some halts on the Machen–Caerffili branch which were only ever used by trains in one direction. On the other hand, the small book I picked up seems to be practically the only book ever written about the line, and very little information about it is easily available elsewhere. I suspect I’d end up writing a book about it myself before I got around to building anything.

I am going to try to build more models, and hopefully the more I build, the better they will get and the happier with my skills I’ll become. I’m going to have to try to stick to one and only one of the above, though, and try not to get distracted. That might be the hardest part.

We can rebuild it! We have the technology! (part two)

In which we delve into Wintersmith and some CoffeeScript

Previously, I discussed various possible ways to structure the coding of a website, and why I decided to rebuild this site around the static site generator Wintersmith. Today, it’s time to dive a little deeper into what that actually entailed. Don’t worry if you’re not a technical reader; I’ll try to keep it all fairly straightforward.

To produce this website using Wintersmith, there were essentially four technologies I knew I’d need to know. Firstly, the basics: HTML and CSS, as if I was writing every single one of the four-thousand-odd HTML files that currently make up this website from scratch. We’ll probably come onto that in a later post. Second-and-thirdly, by default Wintersmith uses Markdown to turn content into HTML, and Pug as the basis for its page templates. Markdown I was fairly familiar with as it’s so widely used; Pug was something new to me. And finally, as I said before, Wintersmith itself is written using CoffeeScript. I was vaguely aware that, out of the box, Wintersmith’s blog template wouldn’t fully replicate all of Wordpress’s features and I’d probably need to extend it. That would involve writing code, and when you’re extending an existing system, it’s always a good idea to try to match that system’s coding style and idioms. Fortunately, I’d come across CoffeeScript briefly a few years ago, and if you’ve used JavaScript, CoffeeScript is fairly straightforward to comprehend.

The Plain People Of The Internet: Hang on a minute there, now! You told us up there at the top, you were going to keep all this nice and straightforward for us non-technical Plain People. This isn’t sounding very non-technical to us now.

Ah, but I promised I would try. And look, so far, all I’ve done is listed stuff and told you why I needed to use it.

The Plain People Of The Internet: You’re not going to be enticing people to this Wintersmith malarkey, though, are you? Us Plain People don’t want something that means we need to learn three different languages! We want something nice and simple with a box on-screen we can write words in!

Now, now. I was like you once. I didn’t spring into life fully-formed with a knowledge of JavaScript and an instinctive awareness of how to exit Vim. I, too, thought that life would be much easier with a box I could just enter text into and that would be that. The problem is, I’m a perfectionist and I like the site to look just right, and for that you need to have some knowledge of HTML, CSS and all that side of things anyway. If you want your site to do anything even slightly out-of-the-ordinary, you end up having to learn JavaScript. And once you know all this, and you’re happy you at least know some of it, then why not go the whole hog and start knocking together something with three different programming languages you only learned last week? You’ll never know unless you try.

The Plain People Of The Internet: Right. You’re not convincing me, though.

Well, just stick with it and we’ll see how it goes.

In any case, I had at least come across CoffeeScript before at work, even if I didn’t use it for very much. It went through a phase a few years ago, I think, of almost being the next big language in the front-end space; but unlike TypeScript, it didn’t quite make it, possibly because (also unlike TypeScript) it is just that bit too different to JavaScript and didn’t have quite so much energy behind it. However, it is essentially just a layer on top of JavaScript, and everything in CoffeeScript has a direct JavaScript equivalent, so even if the syntax seems a bit strange at points it’s never going to be conceptually too far away from the way that JavaScript handles something. The official website goes as far as to say:

Underneath that awkward Java-esque patina, JavaScript has always had a gorgeous heart. CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way.

Now if you ask me, that’s going a little bit far; but then, I don’t mind the “Java-esque patina” because the C-derived languages like C# and Java are the ones I’m happiest using anyway. CoffeeScript brings Python-style whitespace-significance to JavaScript: in other words, whereas in JavaScript the empty space and indentation in your code is just there to make it look pretty, in CoffeeScript it’s a significant part of the syntax. My own feeling on this, which might be controversial, is that the syntax of CoffeeScript is harder to read than the equivalent JavaScript. However, despite what some people will tell you, there’s no such thing as an objective viewpoint when it comes to language syntax; and as I said above, as Wintersmith is written in CoffeeScript, the best language to use to change and extend its behaviour is also CoffeeScript.
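To make the comparison concrete, here’s a small illustrative example (not code from this site): a couple of lines of CoffeeScript, shown in comments, alongside the plain JavaScript the compiler produces from them. You can see both the whitespace-significance and how close the two languages really are underneath.

```javascript
// CoffeeScript relies on indentation and `->` where JavaScript uses
// braces and the `function` keyword. For example, this CoffeeScript:
//
//   square = (x) -> x * x
//   evens = (n for n in [1..10] when n % 2 is 0)
//
// compiles to JavaScript along these lines:
var square = function(x) {
  return x * x;
};

var evens = [];
for (var n = 1; n <= 10; n++) {
  if (n % 2 === 0) {
    evens.push(n);
  }
}

console.log(square(4));
console.log(evens);
```

The list comprehension on the second CoffeeScript line is one of the features with no direct JavaScript syntax: it simply compiles down to an ordinary loop.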

Wintersmith, indeed, is designed for its behaviour to be changeable and extendable. By default it only has a fairly small set of capabilities. It takes a “content tree”, a particular set of files and folders, and a set of templates. Markdown files in the content tree are converted to HTML, merged with a template, and written to an output file. JSON files are treated in much the same way, as content files without any actual content aside from a block of metadata. Other filetypes, such as images, are copied through to the output unchanged. So, to take this article you’re reading as an example: it started out as a file called articles/we-can-rebuild-it-we-have-the-technology-part-two/index.md. That file starts with this metadata block, which, as is normal for Markdown metadata, is in YAML:

---
title: We can rebuild it! We have the technology! (part two)
template: article.pug
date: 2020-09-28 20:09:00
...
---

I’ve configured Wintersmith to use a default output filename based on the date and title in the metadata of each article. This file, therefore, will be merged with the article.pug template and output as 2020/09/28/we-can-rebuild-it-we-have-the-technology-part-two/index.html, so its URI will nicely match the equivalent in Wordpress. So there you go, we have a page for each blog post, almost right out of the box.
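The path-building itself is simple string assembly. As a sketch (this is a hypothetical helper, not Wintersmith’s or this site’s actual code), deriving that Wordpress-style permalink from the article’s metadata looks something like this:

```javascript
// Hypothetical sketch: build a date-plus-slug output path, e.g.
// 2020/09/28/some-post/index.html, from an article's metadata block.
function outputPath(metadata, slug) {
  var d = new Date(metadata.date);
  var pad = function(n) { return String(n).padStart(2, '0'); };
  return [
    d.getFullYear(),
    pad(d.getMonth() + 1),  // months are zero-based in JavaScript
    pad(d.getDate()),
    slug,
    'index.html'
  ].join('/');
}

var meta = { date: '2020-09-28 20:09:00' };
console.log(outputPath(meta, 'we-can-rebuild-it-we-have-the-technology-part-two'));
```

Zero-padding the month and day matters here: without it, posts from, say, May would land under `2008/5/` and the old Wordpress URLs would break.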

That’s fine for individual article pages, but what about the home page of the blog? Well, Wintersmith is designed to use plugins for various things, including page generation; and if you create a new Wintersmith site using its blog template, you will get a file called paginator.coffee added to your site’s plugins folder, plus a reference in the site configuration file config.json to make sure it gets loaded.

"plugins": [
    "./plugins/paginator.coffee"
]

The code in paginator.coffee defines a class called PaginatorPage, which describes a page consisting of a group of articles. It then calls a Wintersmith API function called registerGenerator, to register a generator function. The generator function looks over every article in the content/articles folder, slices them up into blocks of your favoured articles-per-page value, and creates a PaginatorPage object for each block of articles. These are then output as index.html, page/2/index.html, page/3/index.html and so on. There, essentially, is the basis of a blog.
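The heart of that generator is just slicing arithmetic plus a naming rule for each page’s output path. Here’s a standalone JavaScript sketch of that logic (hypothetical names; the real plugin is CoffeeScript and hangs everything off Wintersmith’s API):

```javascript
// Sketch of the paging logic: slice a list of articles into
// perPage-sized chunks, and give each chunk its output path.
// Page 1 becomes the blog's front page, index.html.
function paginate(articles, perPage) {
  var numPages = Math.ceil(articles.length / perPage);
  var pages = [];
  for (var i = 0; i < numPages; i++) {
    pages.push({
      number: i + 1,
      path: i === 0 ? 'index.html' : 'page/' + (i + 1) + '/index.html',
      articles: articles.slice(i * perPage, (i + 1) * perPage)
    });
  }
  return pages;
}

var pages = paginate(['a', 'b', 'c', 'd', 'e'], 2);
console.log(pages.map(function(p) { return p.path; }));
```

The last page simply ends up shorter than the others, because `slice` stops at the end of the array.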

If you’ve used something like Wordpress, or if you’re a regular reader of this site, you’ll know most blogs have a bit more to them than that. They have features to categorise and file articles, such as categories and tags, and they also have date-based archives so it’s easy to, say, go and read everything posted in May 2008 or any other arbitrary month of your choice. Well, I thought, that’s straightforward. All we have to do there is to reuse the paginator.coffee plugin, and go in and fiddle with the code. So, I copied the logic from paginator.coffee and produced categoriser.coffee, archiver.coffee and tagulator.coffee to handle the different types of archive page. Pure copy-and-paste code would result in a lot of duplication, so to prevent that, I also created an additional “plugin” called common.coffee. Any code that is repeated across more than one of the page-generator plugins was pulled out into a function in common.coffee, so that then it can be called from anywhere in the generator code that needs it. Moreover, this blog and the garden blog are structured as separate Wintersmith sites, so I pulled out all of the CoffeeScript code (including the supplied but now much-altered paginator.coffee) into a separate shared directory tree, equally distant from either blog. The plugins section of the configuration file now looked like this:

"plugins": [
    "../shared/wintersmith/plugins/common.coffee",
    "../shared/wintersmith/plugins/paginator.coffee",
    "../shared/wintersmith/plugins/categoriser.coffee",
    "../shared/wintersmith/plugins/tagulator.coffee",
    "../shared/wintersmith/plugins/archiver.coffee"
]

The original paginator page generation function has now turned into the below: note how the only logic here is that which slices up the list of articles into pages, because everything else has been moved out into other functions. The getArticles function weeds out any maybe-articles that don’t meet the criteria for being an article properly, such as not having a template defined.

env.registerGenerator 'paginator', (contents, callback) ->
  articles = env.helpers.getArticles contents
  numPages = Math.ceil articles.length / options.perPage
  pages = []
  for i in [0...numPages]
    pageArticles = articles.slice i * options.perPage, (i + 1) * options.perPage
    pages.push new PaginatorPage i + 1, numPages, pageArticles
  env.helpers.pageLinker pages
  rv = env.helpers.addPagesToOutput pages, 'default'
  callback null, rv

This is the simplest of all the page-generators: the others have slightly more complex requirements, such as creating a fake “Uncategorised posts” category, or labelling the archive page for January 1970 as “Undated posts”.
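The “fake category” trick is just a fallback applied while bucketing the articles. As a hedged sketch (hypothetical names, not the actual categoriser.coffee code), the grouping step looks something like this in JavaScript:

```javascript
// Sketch of the categoriser's grouping logic: bucket articles by the
// category declared in their metadata, with a catch-all bucket for
// articles that don't declare one.
function groupByCategory(articles) {
  var groups = {};
  articles.forEach(function(article) {
    var category = article.metadata.category || 'Uncategorised posts';
    (groups[category] = groups[category] || []).push(article);
  });
  return groups;
}

var grouped = groupByCategory([
  { metadata: { category: 'Geekery' } },
  { metadata: {} },                       // no category declared
  { metadata: { category: 'Geekery' } }
]);
console.log(Object.keys(grouped));
```

Each bucket then gets fed through the same page-slicing machinery as the main article list, so every category archive is paginated for free.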

There we go: my Wintersmith installations are now reproducing pretty much all of the different types of archive that Wordpress was handling dynamically for me before. The next time I come back to this topic, we’ll move onto the template side of things, including some nasty performance issues I found and then sorted out along the way.

*The next part of this post, in which we discuss website templating using Pug, is here*

Trains and levers

Or, a brief pause for relaxation

To the Severn Valley yesterday to play with trains, possibly for the last time in a while. I’m not on the roster for next month, and as the pandemic appears to be getting worse again, who knows what will happen after that point. The pandemic timetable makes it a quiet day, just four trains in each direction, and only one crossing move. Here it is, with one train waiting in the station and all the signals pulled off for the other to have a clear run through.

Signals off

In-between trains I sat and read a book of Victorian history, Mid-Victorian Britain 1851–75 by Geoffrey Best, and almost melted in the heat. It was windy outside, but hardly any of it came through the signalbox door. I watched a buzzard (I think) circling overhead, soaring slowly and sending the crows into a panic; heard pheasants and partridges squawking in the undergrowth, and listened to the frequent sound of semi-distant shotgun fire. It has been much in the news this week that shooting parties are allowed to be larger than other groups of people,* and all of the Very Online naturally have been joking about getting the guns in for their family parties; but yesterday in Shropshire and Worcestershire it felt as if people were genuinely doing just that, so frequent were the hunters’ gun-blasts.

And in small victories, at the end of the day I was proud. For I had filled in the Train Register for the day and not needed to cross any bits out. It may have been a quiet day with few trains and no unusual incidents to record; but, as I said, small victories.

Train register

When I was going through and reviewing all of the previous posts on here as part of the big rewrite, I realised the utter pointlessness of writing about some rubbish that’s on TV purely to say that I’m not going to watch it because it’s probably going to be rubbish. So, I’m not going to do that even though “some rubbish will be on TV in a few months” is all over the internet today. If you like watching rubbish then go and watch it, I’m not going to stop you. Me telling you I’m not is really just exclusionary boasting. So that’s that.

* Obviously, if you’re reading this now, just after I wrote it, you know this already. If it’s now five years in the future, you’ll have completely forgotten.

We can rebuild it! We have the technology! (part one)

Or, how many different ways can you host a website?

I said the other day I’d write something about how I rebuilt the site, what choices I made and what coding was involved. I’ve a feeling this might end up stretched into a couple of posts or so, concentrating on different areas. We’ll start, though, by talking about the tech I used to redevelop the site with, and, indeed, how websites tend to be structured in general.

Back in the early days of the web, 25 or 30 years ago now, to create a website you wrote all your code into files and pushed it up to a web server. When a user went to the site, the server would send them exactly the files you’d written, and their browser would display them. The server didn’t do very much at all, and nor did the browser, but sites like this were a pain to maintain. If you look at this website, aside from the text in the middle you’re actually reading, there’s an awful lot of stuff which is the same on every page. We’ve got the header at the top and the sidebar down below (or over on the right, if you’re reading this on a desktop PC). Moreover, look at how we show the number of posts I’ve written each month, or the number in each category. One new post means every single page has to be updated with the new count. Websites from the early days of the web didn’t have that sort of feature, because they would have been ridiculous to maintain.

The previous version of this site used Wordpress, technology from the next generation onward. With Wordpress, the site’s files contain a whole load of code that’s actually run by the web server itself: most of it written by the Wordpress developers, some of it written by the site developer. The code contains templates that control how each kind of page on the site should look; the content itself sits in a database. Whenever someone loads a page from the website, the web server runs the code for that template; the code finds the right content in the database, merges the content into the template, and sends it back to the user. This is the way that most Content Management Systems (CMSes) work, and is really good if you want your site to include features that are dynamically-generated and potentially different on every request, like a “search this site” function. However, it means your webserver is doing much more work than if it’s just serving up static and unchanging files, and potentially your database is doing a lot of work, too. Databases are seen as a bit of an arcane art by a lot of software developers; they tend to be a specialism in their own right, because how a database goes about finding your data is unintuitive and opaque, and the more sophisticated the database server, the harder it is to tune for the best performance. This is a topic that deserves an essay in its own right; all you really need to know right now is that database code can have very different performance characteristics when run against different sizes of dataset, not just because the data is bigger, but because the database itself will decide to crack the problem in an entirely different way.
Real-world corporate database tuning is a full-time job; at the other end of the scale, you are liable to find that as your Wordpress blog gains more posts, you suddenly pass a point where pages from your website become horribly slow to load, and unless you know how to tune the database manually yourself you’re not going to be able to do much about it.

I said that’s how most CMSes work, but it doesn’t have to be that way. If you’ve tried blogging yourself you might have heard of the Movable Type blogging platform. This can generate each page on request like Wordpress does, but in its original incarnation it didn’t support that. The software ran on the webserver like Wordpress does, but it wasn’t needed when a user viewed the website. Instead, whenever the blogger added a new post to the site, or edited an existing post, the Movable Type software would run and generate all of the possible pages that were available so they could be served as static pages. This takes a few minutes to do each time, but that’s a one-off cost that isn’t particularly important, whereas serving pages to users becomes very fast. Where this architecture falls down is if that costly regeneration process can be triggered by some sort of end-user action. If your site allows comments, and you put something comment-dependent into the on-every-page parts of your template - the number of comments received next to links to recent posts, for example - then even small changes in the behaviour of your end-users can hugely increase the load on your site. I understand Movable Type does now support dynamically-generated pages as well, but I haven’t played with it for many years so can’t tell you how the two different architectures are integrated together.

Nowadays most heavily-used sites, including blogs, have moved towards what I suppose you could call a third generation of architectural style, which offloads the majority of the computing and rendering work onto the user’s browser. The code is largely written using JavaScript frameworks such as Facebook’s React, and on the server side you have a number of simple “microservices” each carefully tuned to do a specific task, often a particular database query. Your web browser will effectively download the template and run the template on your computer (or phone), calling back to the microservices to load each chunk of information. If I wrote this site using that sort of architecture, for example, you’d probably have separate microservice calls to load the list of posts to show, the post content (maybe one call, maybe one per post), the list of category links, the list of month links, the list of popular tags and the list of links to other sites. The template files themselves have gone full-circle: they’re statically-hosted files and the webserver sends them back just as they are. This is a really good system for busy, high-traffic sites. It will be how your bank’s website works, for example, or Facebook, Twitter and so on, because it’s much more straightforward to efficiently scale a site designed this way to process high levels of traffic. Industrial-strength hosting systems, like Amazon Web Services or Microsoft Azure, have evolved to make this architecture very efficient to host, too. On the downside, your device has to download a relatively large framework library, and run its code itself. It also then has to make a number of round-trips to the back-end microservices, which can take some time on a high-latency connection. This is why sometimes a website will start loading, but then you’ll just have the website’s own spinning wait icon in the middle of the screen.

Do I need something quite so heavily-engineered for this site? Probably not. It’s not as if this site is intended to be some kind of engineering portfolio; it’s also unlikely ever to get a huge amount of traffic. With any software project, one of the most important things to do to ensure success is to make sure you don’t get distracted from what your requirements actually are. The requirements for this site are, in no real order, to be cheap to run, easy to update, and fun for me to work on; which also implies I need to be able to just sit back and write, rather than spend long periods of time working on site administration or fighting with the sort of in-browser editor used by most CMS systems. Additionally, because this site does occasionally still get traffic to some of the posts I wrote years ago, if possible I want to make sure posts retain the same URLs as they did with Wordpress.

With all that in mind, I’ve gone for a “static site generator”. This architecture works in pretty much the same way as the older versions of Movable Type I described earlier, except that none of the code runs on the server. Instead, all the code is stored on my computer (well, I store it in source control, which is maybe a topic we’ll come back to at another time) and I run it on my computer, whenever I want to make a change to the site. That generates a folder full of files, and those files then all get uploaded to the server, just as if it was still 1995, except nowadays I can write myself a tool to automate it. This gives me a site that is hopefully blink-and-you’ll-miss-it fast for you to load (partly because I didn’t incorporate much code that runs on your machine), that I have full control over, and that can be hosted very cheaply.

There are a few static site generators you can choose from if you decide to go down this architectural path, assuming you don’t want to completely roll your own. The market leader is probably Gatsby, although it has recently had some well-publicised problems in its attempt to repeat Wordpress’s success in pivoting from being a code firm to a hosting firm. Other popular examples are Jekyll and Eleventy. I decided to go with a slightly less-fashionable but very flexible option, Wintersmith. It’s not as widely-used as the others, but it is very small and slim and easily extensible, which for me means that it’s more fun to play with and adapt, to tweak to get exactly the results I want rather than being forced into a path by what the software can do. As I said above, if you want your project to be successful, don’t be distracted away from what your requirements originally were.

The downside to Wintersmith, for me, is that it’s written in CoffeeScript, a language I don’t know particularly well. However, CoffeeScript code is arguably just a different syntax for writing JavaScript,* which I do know, so I realised at the start that if I did want to write new code, I could just do it in JavaScript anyway. If I familiarised myself with CoffeeScript along the way, so much the better. We’ll get into how I did that; how I built this site and wrote my own plugins for Wintersmith to do it, in the next part of this post.

*The next part of this post, in which we discuss how to get Wintersmith to reproduce some of the features of Wordpress, is here*

* This sort of distinction—is this a different language or is this just a dialect—is the sort of thing which causes controversies in software development almost as much as it does in the natural languages. However, CoffeeScript’s official website tries to avoid controversy by taking a clear line on this: “The golden rule of CoffeeScript is: ‘it’s just JavaScript’”.