
Symbolic Forest

A homage to loading screens.


Alternate reality

When you can't use Google as a verb

Many people are concerned about just how much corporate technological behemoths have embedded themselves into our lives nowadays. A few years ago now I spent a few days in meetings with some Microsoft consultants at their main British headquarters, and I entertained myself by counting the number of times I saw a pained look on the face of a Microsoft staffer having to physically stop themselves using “Google” as a verb. “We’ll just do a…” wince “…internet search for that.”*

The people I feel sorry for now, though, are the producers of TV shows. Yes, a particular website or app might be key to your plot, it might be vital to the everyday life of your characters, but you can’t use it, because no doubt its owners will be greatly upset if you do. So, for TV, thousands of working hours are spent producing mockup apps and mockup websites for the characters to use on-screen.

An award surely has to go to the producers of Australian police drama Deep Water, a rather good drama series about gay hate murders in Sydney. Their murder victim was obviously going to be using apps such as Grindr to meet guys, but they couldn’t show it on-screen: so, they invented—or, I assume they invented—an app called Thrustr for him to use instead. Now there’s a name that’s even better than the real thing.

What really made me want to write about this, though, is the Netflix series The Stranger, released earlier this year. Its not-Google (honest!) website is rather tasteful and well-designed: the Google screen layout, but with a logo of interconnecting blue dots and lines that could, just about, plausibly be a Google Doodle, albeit one not quite legible enough to make out the words. When it comes to apps, though: they have a whole bevy of them, to fulfil whatever magical plot device is needed at the time. A phone-tracking app that uses some sort of dark-mode map layer for Extra Coolness. An app to allow the organisers of illegal raves to, well, organise illegal raves anonymously, but that also tells you where its anonymous users are. Of course, all these tracking apps always track people perfectly. They always have a mobile data signal and a good GPS fix, even in the city centre. The map view always updates exactly in real time: I hate to think how much battery power they must be using up sending out all those continual location updates.

The Stranger is set in a genericised North-West England: Cheshire and Lancashire with the place-names filed off. Because of that, it has the usual issues any sort of attempt at a “generic landscape” always has when it uses very recognisable places. The characters somehow manage to catch a through train, for example, from the very recognisable Stockport station to the equally recognisable Ramsbottom station, despite one being a busy main-line junction and the other being a silent, deserted heritage line. Talking of trains, there was also a rather fun chase sequence around Bury Bolton Street yard, although the joyless side of me has to say that you really shouldn’t crawl under stock the way they were doing. Nor can you in real life lean against the buffers of the average Mark 1 carriage and stay as clean as the characters did. Anyway. I was saying how unrealistic the GPS-tracking apps on the characters’ phones were: the one that really made me laugh out loud was when one character says that, as a given car registration is a hire car, he’ll be able to hack into the hire firm’s vehicle telemetry and get its current location in the time it takes to boil a kettle.

Admittedly, I have specialist knowledge here, because I used to be in charge of the backend tech for one particular vehicle telemetry provider’s systems. But the whole idea: assuming that they can see from the VRN which hire firm owns the car, they then have to know which telemetry firm that particular hire firm uses, and then know how to get in. Unless you do happen to have a notebook of where every car hire firm gets their telemetry services, and then have backdoors or high-level login credentials to every system, which I suppose is just about plausible for a private investigator, you’re stuffed. Getting in without that? Whilst someone makes you a cup of tea? Not feasible, at all.

Yes, maybe I’m applying unreasonable standards here for keeping my disbelief suspended. It seemed to be a particularly bad example, though, of technology either being magically accurate or terribly broken according to the requirements of the plot at any given time. Does the plot need you to know exactly where someone’s phone is? Bang, there you go. Does it need a system to be breakable on demand within seconds? All passwords to be immediately crackable as long as the right character is doing the cracking? No problem. Oh well: at least their not-Google looked relatively sensible.

* They all used Chrome rather than Edge or IE, though. Things might be different now Chromium Edge is out.

Feeling at home

On inclusion and diversity

Serious posts are hard to write, aren’t they. This article has been sitting in my drafting pile for a couple of months, and has been taking up space in my head for most of the past year. It’s about an important topic, though, one that is close to me and one that I think it’s important to discuss. This post is about diversity and inclusion initiatives, in the workplace in general, and specifically in the sort of workplaces I’ve experienced myself, so it will tend to concentrate on offices in general and tech jobs in particular. If you work in a warehouse or factory, your challenges are different, and I suspect in many ways a lot harder to deal with; but they are not something I am in a position to speak on myself.

It’s fair to say, to start off with, firstly that my career has progressed a lot since I first started this website; and also that attitudes to diversity and inclusion have changed a lot over that time too. I’ve gone from working in businesses where you would have been laughed at for suggesting it at all mattered or should even be considered, to businesses that care deeply about diversity and inclusion because they see that it is important to them for a number of reasons. What I still see a lot, though, are businesses that start with the thought “diversity is important, so how do we improve it?”, and I think that, frankly, they have things entirely the wrong way round. If instead they begin from the starting point of “inclusivity is important, so how do we improve it?”, diversity will naturally follow. If you try to make your workplace an inclusive one from top to bottom, in across-the-board ways, then you will create a safe place for your colleagues to work in. If your colleagues feel psychologically safe when they are at work, they’ll be more productive, you’ll have better staff retention rates, and people will actively want to work for you.

The Plain People Of The Internet: But I’ve always felt happy at my desk, chair reclined, just being me, anyway. It’s not something we have trouble with!

But this is where the inclusivity part really comes into it. There are always going to be some people who feel at home wherever they are. They’re usually the people who are happy in their own identity, which is very nice for them. They’re also the people who expect everyone else to go along with what they want, which is less nice. The people who say “well I have to put up with things in my life, so I don’t see why we should make life easier for everyone else,” and “they’re just trying to be different because they want the attention.” These are the people who are going to have to have their views challenged, in order to make the office around them a truly inclusive place for everybody. At the same time, though, you can’t ignore these people, because inclusivity by definition has to include everybody. You have to try to educate them, which is inevitably going to be a harder job.

For that matter, you always have to remember that you don’t truly know your colleagues, however well you think you do—possibly barring a few exceptions such as married couples who work together, but even then, this isn’t necessarily an exception. You don’t know who in your office might have a latent mental health issue. You don’t know who might have a random phobia or random trauma which doesn’t manifest until it is triggered. Whatever people say about gaydar, you don’t know the sexuality of your colleagues for certain—they might have feelings they daren’t even admit to themselves, and the same goes for gender identity and no doubt a whole host of other things. You can never truly know your colleagues and what matters to them, or who they really are inside their heads.

The Plain People Of The Internet: So now you’ve gone and made this whole thing impossible then!

No, not at all; it’s just setting some basic ground rules. In particular, a lot of companies love “initiatives” on this sort of thing, but they tend to be very centralised, top-down affairs: “we’ll put a rainbow on our logo and organise a staff party”. Those aren’t necessarily bad things to do in themselves, but I strongly believe that to be truly successful, inclusivity has to come from the ground upwards. The best thing you can have is staff throughout the organisation who care about this sort of thing, and who are given the opportunity to gather people around them, educate them about the importance of the whole thing, and push for change from the bottom upwards.

The Plain People Of The Internet: Aha, I get you now! Get all the minorities together, shut the boring white guys out of the room, and get the minorities to tell us how to sort it out!

No! Firstly, the people you need to get to seed things off are the people who are passionate about it; moreover, people who are optimistic that their passion is going to have an effect. That applies whoever they are, too. If you want to be inclusive, you must never shut out anyone who is passionate about the topic—with certain exceptions that we’ll come to—because, firstly, inclusivity is for everyone, and everyone has a part to play in it. Secondly, as I said above, you don’t know your colleagues: you don’t know why any particular colleague is passionate about it.

Deliberately making inclusivity and diversity the responsibility of the minorities on your staff is, I’d go so far as to say, nearly always a counter-productive option. For one thing, you want to find passionate people to drive this forward: you shouldn’t automatically assume that everyone who does fall into a particular “minority” bucket in some way will be passionate about diversity and inclusion, or even that such a bucket exists. Equally, you need to be very wary of some people who will ride the concept as their own personal hobby-horse, and insist that they, personally, should be the arbiter of what diversity means. There are people out there who will insist that because they are disadvantaged in one way or another, they have the right to determine the meaning of diversity and inclusion in any organisations they are part of. These are the sort of people who conflate inclusivity across the whole office with advantage for themselves personally; they will insist that inclusivity and diversity efforts be focused solely on aspects that benefit them, and will attempt first to narrow the scope of diversity and then to gatekeep what is allowed inside. If you’ve followed my logic about diversity flowing from inclusivity and not vice-versa, you’ll immediately see that this is a nonsense. The reason the type of person I’m talking about doesn’t see it as such is that they see it, even if they don’t realise it, as being something solely for their own benefit in one way or another.

The Plain People Of The Internet: Now you’re not making sense again! Find people that are passionate but not too passionate? You’re just looking for a team of nice milky liberals who won’t really do anything!

It’s difficult, really, to talk about hypotheticals in this sort of area, partly because every organisation and every situation is genuinely very different from the next. I’m confident, though, that when you do start getting involved in this sort of area it’s straightforward to see the difference between the two kinds of passion I’m talking about: passion to improve everybody’s lives, or passion to get more for themselves. Sadly, the latter are often much louder, but it’s usually very clear: they will be the people saying that they know Diversity and can precisely define it, because they are themselves more Diverse than anybody else so know exactly what needs to be done. The people who say “I’m not really sure what diversity is, but I know we need to get everyone’s input on it” are the people that you want on your team.

The Plain People Of The Internet: So what was the point of all this again? Just what are our team trying to do here?

Make your workplace a more inclusive place, whatever that takes. Make sure that nobody feels excluded from social events. Try to make everyone feel that they are on the same broad top-level team. Make sure that “soft” discriminatory behaviour is discouraged,* and that people are educated away from it: for example, teach people to use non-discriminatory language. Make sure your interview and hiring processes are accessible and non-biased—this is particularly important at the moment when doing remote interviewing, because requiring the candidate to pass a certain technical bar is inevitably going to exclude people. But, most importantly, when your passionate pathfinders of inclusivity come up with ideas and want to get them adopted, make sure they have the support and resources to actually get that done.

The Plain People Of The Internet: And then you’ll magically be Diverse with a capital D?

There’s a lot more to it than that, of course. People have written whole books on this stuff; I can hardly squeeze it all into a single blog post. But if you can find people to transform your office into a more inclusive space—a space where everyone can feel safe and at home—then you are one step along the road. Actually generating that atmosphere: another step. After that, your office will become somewhere that a diverse range of people feel comfortable working in, because it is a fully inclusive space and because everyone across that range can feel at home working there. And then your management can start being proud of being a diverse organisation, rather than deciding that you are going to be Diverse but not knowing how to get there on more than a superficial level.

The Plain People Of The Internet: Feel at home at the office? Pshaw! Terrible idea!

I agree with you completely that the office shouldn’t be your home, “working from home” notwithstanding. It’s still important to separate the two and not hand over your entire soul to the capitalist monster. Nevertheless, much as you might hate working for a living, if you do have to work for a living, it’s important for you to try to be as happy as you can be within that context. Finding a workplace that can be a safe place for you to exist in, whilst not being your home, is one way to go about that. It’s not really what this post is supposed to be about, but it’s a digression it might be a good idea to explore at some point.

This post is getting a bit long now, judging by the way my scrollbar is stretching down the screen. It’s a personal view. I don’t pretend to know all the answers, and it’s not a field I claim to be an expert in, but it’s a field that is important to me personally, and this is a suggestion towards a sensible approach to take. Diversity is important to all of us, because we are all diverse: none of us is any more diverse than the other, and none of us has the right to judge another’s lifestyle as long as it causes others no harm.** The key thing, to my mind, is accepting that genuine diversity requires acceptance and appreciation of this; and that if you want to become diverse, becoming inclusive first is by far the easiest approach.

The dichotomy really, I suppose, is between organic growth and forced construction. Consider, if you’ll forgive me another painful analogy, your workforce as the shifting sands of a beach. If you build a Tower of Diversity and Inclusion on top of those shifting sands, it will fall, or get swallowed up by the dunes. If you let a Forest of Inclusion and Diversity grow up through the sand, it will hold it together and make it more cohesive. I know it’s a bit of a daft analogy really, but hopefully it helps you see what I’m trying to painfully and slowly explain. If you try to be inclusive, and if you turn your workplace into a safe space for everyone to be themselves, the latter is hopefully what you will be able to grow.

* I’m working on the basis here that “hard” discriminatory or offensive language or behaviour is immediately called out and shut down, which I know isn’t always the case in all workplaces.

** I have cut a whole section out of a previous draft of this post, discussing how to spot people who use diversity as a shield to do horrible things. Hopefully, in most situations, it’s not something people have to worry about, but it does happen, and it’s a shame that it does. Going round again, though, if an inclusive workplace is one where people feel safe to be themselves, it’s also one where hopefully people feel safe to report any transgressions and make sure they are dealt with. I have, sadly, heard of people who use diversity-styled language to try to defend themselves against accusations of abuse or of sexually predatory behaviour, and I’m not surprised there are some who think that diversity is some sort of loophole in that regard, because some people will always take whatever advantage they can.

Technical advice post of the week

Or, what to do with a particular compilation problem

This week, Microsoft released .NET 5, and it reminded me I’ve been meaning to post a piece of technical advice about a problem that has bitten me a few times, but which doesn’t seem to be very well-documented or well-described online. It’s advice, though, that will slowly fade in relevance, because it applies to .NET Framework; so I thought I should put it up here whilst it is still helpful to people.

(Note for non-technical readers, who are used to photos of trains and cemeteries and probably won’t find this post very interesting: .NET 5 is the latest version of .NET Core, which is the replacement for .NET Framework, hence Microsoft have dropped “Core” from the name to try to make that clearer. .NET 5 is the successor to .NET Core 3, because there were many very popular versions of .NET Framework 4.x which were, and still are, heavily used; Microsoft thought reusing the number 4 would be just Too Confusing. Are you less confused now?)

This problem, too, is pretty much specific to people working in teams. It only happens (well, I’ve only seen it happen) if all of the following apply:

  • you’re working in parallel in a team, on a complex system, probably that has solutions containing a relatively large number of projects
  • you’re using the MSBuild tool as part of your continuous integration pipeline, deployment process, or similar
  • you’re using Git as your version control system

The symptoms of the problem are:

  • You can open the solution in Visual Studio and build it with no problems
  • When MSBuild tries to build the solution, it immediately errors, claiming that the solution file has a syntax error on line 2.

Spoiler: there is no syntax error on line 2.

Another note for non-technical readers who are still here: what you might think of loosely as a programming project, in any kind of .NET flavour, has a primary file called a “solution” (its name ends in .sln). The solution contains one or more “projects”; each project contains code. Visual Studio can open your solution file and your project files and turn the projects into some sort of output product, such as a program, a website, a code library or whatever. However, you don’t have to use Visual Studio to do this. .NET Framework has a program called MSBuild that does the same thing. If you have automated your build process (which if you’re working in a team you probably should) and you’re using .NET Framework, your build process will probably use MSBuild to do its work. What happens here is one of a range of problems called “well it worked on my machine”. A developer has code that seems to be in a happy, working state, they upload their code to the team’s server, the automated build process runs, and the automated build process falls over and says it doesn’t work.
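For context, the build step in such a pipeline is usually no more complex than calling MSBuild with the solution file as its argument, something like the below (the solution name, naturally, is a made-up example):

msbuild Important.Solution.sln /t:Build /p:Configuration=Release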

The cause of the problem is: two people on the team have added different projects to the solution, in parallel. Now, Git is often quite good, when two people change code at the same time, at either working out how to merge the changes together, or at least, asking you to sort the situation out manually. This, though, is a situation where Git does the wrong thing and breaks your solution file—but it breaks it in a way that only MSBuild notices, and that Visual Studio happily ignores.

The reason this happens is down to the syntax of solution files. The part which lists the projects they contain looks a bit like this:

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Important.Project.Library", "Important.Project.Library\Important.Project.Library.csproj", "{E6FF8E04-A41D-446B-9F8A-CCFAF4B08AD2}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Important.Project", "Important.Project\Important.Project.csproj", "{9A7E2940-50B8-4F3A-A535-AB6220E6CE3A}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Important.Project.Tests", "Important.Project.Tests\Important.Project.Tests.csproj", "{68035DDB-1C24-407C-B6B3-32CEC1D964E5}"
EndProject

Don’t worry too much about what each line says: the important thing to spot is that each project has a pair of lines: a Project(...) = line that contains the important information, and an EndProject line that, er, doesn’t. The projects are in a fairly arbitrary order, too; on your screen in Visual Studio they get sorted alphabetically, but that isn’t reflected in the file, where they are in the order they were added in.

The real cause of the problem is that Git doesn’t know that every Project... has to be followed by an EndProject. So, imagine two people have added new, different projects to the solution file. Git sees this and thinks: Alice has added Project... to line 42, and Bob has added a different Project... to line 42. So I’ll make those into lines 42 and 43. Alice added EndProject to line 43, and so did Bob, so I’ll just pop that in as line 44. So you get this:

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Alices.Library", "Alices.Library\Alices.Library.csproj", "{0902233A-3857-4E5E-99F4-54F3F5E695E5}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Bobs.Library", "Bobs.Library\Bobs.Library.csproj", "{56ABE9BB-1373-43D3-B1C5-1526E443AD73}"
EndProject

Visual Studio is quite unperturbed by this. MSBuild, however, doesn’t like it at all. It reads the file, realises there’s a Project without a matching EndProject, and falls over. For some reason, it always complains that the error is on line 2, even though it isn’t anywhere near line 2.

The fix for this, as you might have guessed, is to open up the solution file in a text editor and manually enter that missing EndProject line after Alice’s project. And that’s it. Or, if you don’t feel comfortable going in and hacking your solution file directly, remember I said that Visual Studio is completely unfazed by this? You can make some sort of small change in Visual Studio that will imply a different change to the solution file: for example, tell it not to build one of the projects in one of the build configurations. Visual Studio doesn’t just change that bit, it will write out the whole file from scratch, so the problem gets silently fixed for you. Which one is less work depends on which one you’re happier doing, to be honest.
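Whichever route you take, the end result is the same: the relevant part of Alice and Bob’s solution file gets its missing EndProject back, with each project paired up properly again:

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Alices.Library", "Alices.Library\Alices.Library.csproj", "{0902233A-3857-4E5E-99F4-54F3F5E695E5}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Bobs.Library", "Bobs.Library\Bobs.Library.csproj", "{56ABE9BB-1373-43D3-B1C5-1526E443AD73}"
EndProject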

That’s the abstruse technical post over for now. Next time I write one, I’ll see if I can find something even more technically obscure.

We can rebuild it! We have the technology! (part four)

Or, finishing off the odds and ends

Settling down to see what else I should write in the series of posts about how I rebuilt this website, I realised that the main issues now have already been covered. The previous posts in this series have discussed the following:

  • the ways websites tend to be architected, and why I chose the static site generator Wintersmith (part one)
  • extending Wintersmith’s page generation with extra CoffeeScript plugins (part two)
  • building the page templates in Pug, and fixing some performance problems along the way (part three)

And throughout the last two, we touched on some other important software engineering topics, such as refactoring your code so that you Don’t Repeat Yourself, and optimising your code when it’s clear that (and where) it is needed.

There are a few other topics to touch on, but I wasn’t sure they warranted a full post each, so this post is on a couple of those issues, and any other odds and ends that spring to mind whilst I am writing it.

Firstly, the old blog was not at all responsive: in web front-end terms, that means it didn’t mould itself to fit the needs of the users and their devices. Rather, it expected the whole world to use a monitor the same size as mine, and if they didn’t, then tough. When I wrote the previous design, the majority of the traffic the site received came from people on regular computers; nowadays, that has changed completely.

However, the reason that this isn’t a particularly exciting topic to write about is that I didn’t learn any new skills or dive into interesting new programming techniques. I went the straightforward route, installed Bootstrap 4.5, and went from there. Now, I should say, using Bootstrap doesn’t magically mean your site will become responsive overnight; in fact, it’s very straightforward to accidentally write an entirely non-responsive website. Responsiveness needs careful planning. However, with that careful planning, and some careful use of the Bootstrap CSS layout classes, I achieved the following aims:

  • The source code is laid out so that the main content of the page always comes before the sidebars in the code, wherever the sidebars are actually displayed. This didn’t matter so much on this blog, but on the Garden Blog, which on desktop screens has sidebars on both sides of the page, it does need to be specifically coded. Bootstrap’s layout classes, though, allow you to separate the order in which columns appear on a page from the order in which they appear in the code.
  • More importantly, the sidebars move about depending on page width. If you view this site on a desktop screen it has a menu sidebar over on the right. On a narrow mobile screen, the sidebar content is down below at the bottom of the page.
  • The font resizes based on screen size for easier reading on small screens. You can’t do this with Bootstrap itself; this required @media selectors in the CSS code with breakpoints chosen to match the Bootstrap ones (which, fortunately, are clearly documented). There’s a short CSS sketch of this after this list.
  • Content images (as opposed to what you could call “structural images”) have max-width: 100%; set. Without this, if the image is bigger than the computed pixel size of the content column, your mobile browser will likely rescale things to get the whole image on screen, so the content column (and the text in it) will become too narrow to read.
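Putting those last two points into code, the extra CSS amounts to only a few lines. What follows is a sketch rather than the site’s actual stylesheet: the breakpoint is one of Bootstrap 4’s documented ones, and the contentImage class name is invented for the example.

/* Slightly smaller base font on narrow screens... */
html {
  font-size: 0.875rem;
}

/* ...and the regular size from Bootstrap's "md" breakpoint upwards */
@media (min-width: 768px) {
  html {
    font-size: 1rem;
  }
}

/* Content images never grow wider than their column */
img.contentImage {
  max-width: 100%;
}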

On the last point, manual intervention is still required on a couple of types of content. Embedded YouTube videos like in this post need to have the embed code manually edited, and very long lines of text without spaces need to have soft hyphens or zero-width spaces inserted, in order to stop the same thing happening. The latter usually occurs in example code, where zero-width spaces are more appropriate than soft hyphens. All in all though, I’ve managed to produce something that is suitably responsive 95% of the time.
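To illustrate, a zero-width space can be dropped into a long unbroken string with the &#8203; HTML entity, giving the browser somewhere it is allowed to wrap; the identifier here is invented:

<code>AVeryLongExampleIdentifier&#8203;WithNoSpacesIn</code>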

The other point that is worth writing about is the build process of the site. As Wintersmith is a static site generator, every change to the site needs the static files to be built and deployed. The files from the previous version need to be deleted (in case any stale ones, that have disappeared completely from the latest iteration of the output, are still lying about) and then Wintersmith is run to generate the new version. You could do this very simply with a one-liner: rm -rf ../../build/* && wintersmith build if you’re using Bash, for example. However, this site actually consists of three separate Wintersmith sites in parallel. The delete step might only have to be done once, but doing the build step three separate times is a pain. Moreover, what if you only want to delete and rebuild one of the three?

As Wintersmith is a JavaScript program, and uses npm (the Node Package Manager) for managing its dependencies, it turns out that there’s an easy solution to this. Every npm package or package-consumer uses a package.json file to control how npm works; and each package.json file can include a scripts section containing arbitrary commands to run. So, in the package.json file for this blog, I inserted the following:

"scripts": {
  "clean": "node ../shared/js/unlink.js ../../build/blog",
  "build": "wintersmith build"
}

You might be wondering what unlink.js is. I said before that “if you’re using Bash” you could use rm -rf ../../build. However, I develop on a Windows machine, and for this site I use VS Code to do most of the writing. Sometimes therefore I want to build the site using PowerShell, because that’s the default VS Code terminal. Sometimes, though, I’ll be using GitBash, because that’s convenient for my source control commands. One day I might want to develop on a Linux machine. One of the big changes between these different environments is how you delete things: del or Remove-Item in PowerShell; rm in Bash and friends. unlink.js is a small script that reproduces some of the functionality of rm using the JavaScript del package, so that I have a command that will work in the same way across any environment.
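I won’t claim this is the exact script, but a minimal sketch of unlink.js might look something like the below, assuming one of the CommonJS releases of the del package:

// unlink.js: a small cross-platform stand-in for "rm -rf",
// using the del package (a CommonJS release, such as del 5.x)
const del = require('del');

// Treat every command-line argument as a path or glob to delete
const targets = process.argv.slice(2);

// force: true lets del remove paths outside the current working
// directory, which we need because the build folder sits above us
del(targets, { force: true })
  .then((deleted) => console.log(`Deleted ${deleted.length} items`))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });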

So, this means that in the main blog’s folder I can type npm run clean && npm run build and it will do just the same thing as the one-liner command above (although note that the clean step only deletes the main blog’s files). In the other Wintersmith site folders, we have a very similar thing. Then, in the folder above, we have a package.json file which contains clean and build commands for each subsite, and a top-level command that runs each of the others in succession.

"scripts": {
  "clean:main": "cd main && npm run clean",
  "clean:misc": "cd misc && npm run clean",
  "clean:garden": "cd garden && npm run clean",
  "clean": "npm run clean:misc && npm run clean:main && npm run clean:garden",
  "build:main": "cd main && npm run build",
  "build:misc": "cd misc && npm run build",
  "build:garden": "cd garden && npm run build",
  "build": "npm run build:misc && npm run build:main && npm run build:garden"
}

And there you have it. By typing npm run clean && npm run build at the top level, it will recurse into each subsite and clean and build all of them. By typing the same command one folder down, it will clean and build that site alone, leaving the others untouched.

When that’s done, all I have to do is upload the changed files; and I have a tool to do that efficiently. Maybe I’ll go through that another day. I also haven’t really touched on my source-control and change management process; all I have to say there is, it doesn’t matter what process you use, but you will find things a lot more straightforward if you find a process you like and stick to it. Even if you’re just a lone developer, you don’t need anything as rigid as a big commercial organisation would use; but having a set workflow for storing your changes and releasing them to the public makes life much easier, and makes you less likely to slip up and make mistakes. This is probably something else I’ll expand into an essay at some point.

Is the site perfect now? No, of course not. There are always more changes to be made, and more features to add; I haven’t even touched on the things I decided not to do right now but might bring in one day. Those changes are for the future, though. Right now, for a small spare-time project, I’m quite pleased with what we have.

We can rebuild it! We have the technology! (part three)

Introducing Pug

If you want to start reading this series of articles from the start, the first part is here. In the previous part we discussed how I adapted Wintersmith to my purposes, adding extra page generators for different types of archive page, and refactoring them to make sure that I wasn’t repeating the same logic in multiple places, which is always a good process to follow on any sort of coding project. This post is about the templating language that Wintersmith uses, Pug. When I say “that Wintersmith uses”, incidentally, you should always add a “by default” rider, because as we saw previously adding support for something else generally wouldn’t be too hard to do.

In this case, though, I decided to stick with Pug because rather than being a general-purpose templating or macro language, it’s specifically tailored towards outputting HTML. If you’ve ever tried playing around with HTML itself, you’re probably aware that the structure of an HTML document (or the Document Object Model, as it’s known) has to form a tree of elements, but also that the developer is responsible for making sure that it actually is a valid tree by ending elements in the right order. In other words, when you write code that looks like this:

<section><article><h2>Post title</h2><p>Some <em>content</em> here.</p></article></section>

it’s the developer who is responsible for making sure that those </p>, </article> and </section> tags come in the right order; that the code ends its elements in reverse order to how they started. Because HTML doesn’t pay any attention to (most) white space, the closing tags all have to be written out explicitly. Pug, on the other hand, enforces correct indentation to represent the tree structure of the document, meaning that you can’t accidentally create a document that doesn’t have a valid structure. It might be the wrong structure, if you mess up your indentation, but that’s a separate issue. In Pug, the above line of HTML would look like this:

section
  article
    h2 Post title
    p Some
      em content
      | here.

You specify the content of an element by putting it on the same line or indenting the following line; elements are automatically closed when you reach a line with the same or less indentation. Note that Pug assumes the first word on each line will be an opening tag, and that this assumption can be suppressed with the | symbol, as in the last line of the example above. You can supply attributes in brackets, so an <a href="target"> ... </a> element becomes a(href="target") ..., and Pug also has CSS-selector-style shortcuts for the class and id attributes, because they’re so commonly used. The HTML code

<section class="mainContent"><article id="post-94">...</article></section>

becomes this in Pug:

section.mainContent
  article#post-94 ...

So far so good; and I immediately cracked on with looking at the pages of the old Wordpress blog and converting the HTML of a typical page into Pug. Moreover, Pug also supports inheritance and mixins (a bit like functions), so I could repeat the exercise of refactoring common code into a single location. The vast majority of the template code for each type of page sits in a single layout.pug file, which is inherited by the templates for specific types of page. It defines a mixin called post() which takes the data structure of a single post as its argument and formats it. The template for single posts is reduced to just this:

extends layout
block append vars
  - subHeader = '';
block append title
  | #{ ' : ' + page.title }
block content
  +post(page)

The block keyword is used to either append to or overwrite specific regions of the primary layout.pug template. The content part of the home page template is just as straightforward:

extends layout
block content
  each article in articles
    +post(article)

I’ve omitted the biggest part of the home page template, which inserts the “Newer posts” and “Older posts” links at the bottom of the page; you can see though that for the content block, the only difference is that we iterate over a range of articles—chosen by the page generator function—and call the mixin for each one.
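Speaking of the post() mixin, here’s roughly the shape it takes. This is a deliberately simplified sketch; the real one in layout.pug also has to format the post’s date, categories and suchlike:

mixin post(thePost)
  article
    h2= thePost.title
    != thePost.html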

The great thing about Pug, though, is that it lets you drop out into JavaScript at any point to run bits of code, and when doing that, you don’t just have access to the data for the page it’s working on, you can see the full data model of the entire site. So this makes it easy to do things such as output the sidebar menus (I say sidebar; they’re at the bottom if you’re on mobile) with content that includes things like the number of posts in each month and each category. In the case of the tag cloud, it effectively has to put together a histogram of all of the tags on every post, which we can only do if we have sight of the entire model. It’s also really useful to be able to do little bits of data manipulation on the content before we output it, even if it’s effectively little cosmetic things. The mixin for each post contains the following JavaScript, to process the post’s categories:

- if (!Array.isArray(thePost.metadata.categories)) thePost.metadata.categories = [ thePost.metadata.categories ]
- thePost.metadata.categories = Array.from(new Set(thePost.metadata.categories))

The - at the start of each line tells Pug that this is JavaScript code to be run, rather than template content; all this code does is tidy up the post’s category data a little, firstly by making sure the categories are an array, and secondly by removing any duplicates.

You can, however, get a bit carried away with the JavaScript you include in the template. My first complete design for the blog, it turned out, took something like 90 minutes to 2 hours to build the site on my puny laptop; not really helpful if you just want to knock off a quick blog post and upload it. That’s because all of the code I had written to generate the tag cloud, the monthly menus and the category menus was in the template, so it was being re-computed over again for each page. If you assume that the time taken to generate all those menus is roughly proportional to the number of posts on the blog, O(n) in computer science terms (I haven’t really looked into it—it can’t be any better but it may indeed be worse), then the time taken to generate the whole blog becomes O(n²), which translates as “this doesn’t really scale very well”. The garden blog with its sixtyish posts so far was no problem; for this blog (over 750 posts and counting) it wasn’t really workable.

What’s the solution to this? Back to the Wintersmith code. All those menus are (at least with the present design) always going to contain the same data at any given time, so we only ever need to generate them once. So, I created another Wintersmith plugin, cacher.coffee. The JavaScript code I’d put into my layout templates was converted into CoffeeScript code, called from the plugin registration function. It doesn’t generate HTML itself; instead, it generates data structures containing all of the information in the menus. If you were to write it out as JSON it would look something like this:

"monthData": [
  { "url": "2020/10/", "name": "October 2020", "count": 4 },
  { "url": "2020/09/", "name": "September 2020", "count": 9 },
  ...
],
"categoryData": [
  { "name": "Artistic", "longer": "Posts categorised in Artistic", "count": 105 },
  ...
],
"tagData": [
  { "name": "archaeology", "count": 18, "fontSize": "0.83333" },
  { "name": "art", "count": 23, "fontSize": "0.97222" },
  ...
]

And so on; you get the idea. The template then just contains some very simple code that loops through these data structures and turns them into HTML in the appropriate way for each site. Doing this cut the build time down from up to two hours to around five minutes. It’s still not as quick to write a post here as it is with something like Wordpress, but five minutes is a liveable amount of overhead as far as I am concerned.
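As an illustration of how simple the template code can now be, the loop for the monthly menu needs to be little more than the sketch below. The exact markup is invented, and I’m assuming the cached monthData structure is exposed to the template as a local:

ul.monthMenu
  each month in monthData
    li
      a(href='/' + month.url)= month.name + ' (' + month.count + ')'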

The Plain People Of The Internet: So, you’re saying you got it all wrong the first time? Wouldn’t it all have been fine from the start if you’d done it that way to begin with?

Well, yes and no. It would have been cleaner code from the start, that’s for certain; the faster code also has a much better logical structure, as it keeps the code that generates the semantic content at arm’s length from the code that handles the visual appearance, using the data structure above as a contract between the two. Loose coupling between components is, from an architectural point of view, nearly always preferable to tight coupling. On the other hand, one of the basic principles of agile development (in all its many and glorious forms) is: don’t write more code than you need. For a small side project like this blog, the best course of action is nearly always to write the simplest thing that will work, be aware that you’re now taking on some technical debt, and come back to improve it later. The difficult thing in the commercial world is nearly always making sure that that last part actually happens, but for a site like this all you need is self-discipline and time.

That just about covers, I think, how I learned enough Pug to put together the templates for this site. We still haven’t covered, though, the layout itself, and all the important ancillary stuff you should never gloss over such as the build-deploy process and how it’s actually hosted. We’ll make a start on that in the next post in this series.

The next post in this series, in which we discuss responsive design, and using npm to make the build process more straightforward, is here

We can rebuild it! We have the technology! (part two)

In which we delve into Wintersmith and some CoffeeScript

Previously, I discussed some various possible ways to structure the coding of a website, and why I decided to rebuild this site around the static site generator Wintersmith. Today, it’s time to dive a little deeper into what that actually entailed. Don’t worry if you’re not a technical reader; I’ll try to keep it all fairly straightforward.

To produce this website using Wintersmith, there were essentially four technologies I knew I’d need to know. Firstly, the basics: HTML and CSS, as if I was writing every single one of the four-thousand-odd HTML files that currently make up this website from scratch. We’ll probably come onto that in a later post. Second-and-thirdly, by default Wintersmith uses Markdown to turn content into HTML, and Pug as the basis for its page templates. Markdown I was fairly familiar with as it’s so widely used; Pug was something new to me. And finally, as I said before, Wintersmith itself is written using CoffeeScript. I was vaguely aware that, out of the box, Wintersmith’s blog template wouldn’t fully replicate all of Wordpress’s features and I’d probably need to extend it. That would involve writing code, and when you’re extending an existing system, it’s always a good idea to try to match that system’s coding style and idioms. However, I’d come across CoffeeScript briefly a few years ago, and if you’ve used JavaScript, CoffeeScript is fairly straightforward to comprehend.

The Plain People Of The Internet: Hang on a minute there, now! You told us up there at the top, you were going to keep all this nice and straightforward for us non-technical Plain People. This isn’t sounding very non-technical to us now.

Ah, but I promised I would try. And look, so far, all I’ve done is listed stuff and told you why I needed to use it.

The Plain People Of The Internet: You’re not going to be enticing people to this Wintersmith malarkey, though, are you? Us Plain People don’t want something that means we need to learn three different languages! We want something nice and simple with a box on-screen we can write words in!

Now, now. I was like you once. I didn’t spring into life fully-formed with a knowledge of JavaScript and an instinctive awareness of how to exit Vim. I, too, thought that life would be much easier with a box I could just enter text into and that would be that. The problem is, I’m a perfectionist and I like the site to look just right, and for that you need to have some knowledge of HTML, CSS and all that side of things anyway. If you want your site to do anything even slightly out-of-the-ordinary, you end up having to learn JavaScript. And once you know all this, and you’re happy you at least know some of it, then why not go the whole hog and start knocking together something with three different programming languages you only learned last week? You’ll never know unless you try.

The Plain People Of The Internet: Right. You’re not convincing me, though.

Well, just stick with it and we’ll see how it goes.

In any case, I had at least come across CoffeeScript before at work, even if I didn’t use it for very much. It went through a phase a few years ago, I think, of almost being the next big language in the front-end space; but unlike TypeScript, it didn’t quite make it, possibly because (also unlike TypeScript) it is just that bit too different to JavaScript and didn’t have quite so much energy behind it. However, it is essentially just a layer on top of JavaScript, and everything in CoffeeScript has a direct JavaScript equivalent, so even if the syntax seems a bit strange at points it’s never going to be conceptually too far away from the way that JavaScript handles something. The official website goes as far as to say:

Underneath that awkward Java-esque patina, JavaScript has always had a gorgeous heart. CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way.

Now if you ask me, that’s going a little bit far; but then, I don’t mind the “Java-esque patina” because the C-derived languages like C# and Java are the ones I’m happiest using anyway. CoffeeScript brings Python-style whitespace-significance to JavaScript: in other words, whereas in JavaScript the empty space and indentation in your code is just there to make it look pretty, in CoffeeScript it’s a significant part of the syntax. My own feeling on this, which might be controversial, is that the syntax of CoffeeScript is harder to read than the equivalent JavaScript. However, despite what some people will tell you, there’s no such thing as an objective viewpoint when it comes to language syntax; and as I said above, as Wintersmith is written in CoffeeScript, the best language to use to change and extend its behaviour is also CoffeeScript.

Wintersmith, indeed, is designed for its behaviour to be changeable and extendable. By default it only has a fairly small set of capabilities. It takes a “content tree”, a particular set of files and folders, and a set of templates. Markdown files in the content tree are converted to HTML, merged with a template, and written to an output file. JSON files are treated in almost the same way, almost as content files without any actual content aside from a block of metadata. Other filetypes, such as images, are copied through to the output unchanged. So, to take this article you’re reading as an example: it started out as a file called articles/we-can-rebuild-it-we-have-the-technology-part-two/index.md. That file starts with this metadata block, which, as is normal for Markdown metadata, is in YAML:

---
title: We can rebuild it! We have the technology! (part two)
template: article.pug
date: 2020-09-28 20:09:00
...
---

I’ve configured Wintersmith to use a default output filename based on the date and title in the metadata of each article. This file, therefore, will be merged with the article.pug template and output as 2020/09/28/we-can-rebuild-it-we-have-the-technology-part-two/index.html, so its URI will nicely match the equivalent in Wordpress. So there you go, we have a page for each blog post, almost right out of the box.

That’s fine for individual article pages, but what about the home page of the blog? Well, Wintersmith is designed to use plugins for various things, including page generation; and if you create a new Wintersmith site using its blog template, you will get a file called paginator.coffee added to your site’s plugins folder, plus a reference in the site configuration file config.json to make sure it gets loaded.

"plugins": [
    "./plugins/paginator.coffee"
]

The code in paginator.coffee defines a class called PaginatorPage, which describes a page consisting of a group of articles. It then calls a Wintersmith API function called registerGenerator, to register a generator function. The generator function looks over every article in the content/articles folder, slices them up into blocks of your favoured articles-per-page value, and creates a PaginatorPage object for each block of articles. These are then output as index.html, page/2/index.html, page/3/index.html and so on. There, essentially, is the basis of a blog.

If you’ve used something like Wordpress, or if you’re a regular reader of this site, you’ll know most blogs have a bit more to them than that. They have features to categorise and file articles, such as categories and tags, and they also have date-based archives so it’s easy to, say, go and read everything posted in May 2008 or any other arbitrary month of your choice. Well, I thought, that’s straightforward. All we have to do there is to reuse the paginator.coffee plugin, and go in and fiddle with the code. So, I copied the logic from paginator.coffee and produced categoriser.coffee, archiver.coffee and tagulator.coffee to handle the different types of archive page. Pure copy-and-paste code would result in a lot of duplication, so to prevent that, I also created an additional “plugin” called common.coffee. Any code that is repeated across more than one of the page-generator plugins was pulled out into a function in common.coffee, so that then it can be called from anywhere in the generator code that needs it. Moreover, this blog and the garden blog are structured as separate Wintersmith sites, so I pulled out all of the CoffeeScript code (including the supplied but now much-altered paginator.coffee) into a separate shared directory tree, equally distant from either blog. The plugins section of the configuration file now looked like this:

"plugins": [
    "../shared/wintersmith/plugins/common.coffee",
    "../shared/wintersmith/plugins/paginator.coffee",
    "../shared/wintersmith/plugins/categoriser.coffee",
    "../shared/wintersmith/plugins/tagulator.coffee",
    "../shared/wintersmith/plugins/archiver.coffee"
]

The original paginator page generation function has now turned into the below: note how the only logic here is that which slices up the list of articles into pages, because everything else has been moved out into other functions. The getArticles function weeds out any maybe-articles that don’t meet the criteria for being an article properly, such as not having a template defined.

env.registerGenerator 'paginator', (contents, callback) ->
  articles = env.helpers.getArticles contents
  numPages = Math.ceil articles.length / options.perPage
  pages = []
  for i in [0...numPages]
    pageArticles = articles.slice i * options.perPage, (i + 1) * options.perPage
    pages.push new PaginatorPage i + 1, numPages, pageArticles
  env.helpers.pageLinker pages
  rv = env.helpers.addPagesToOutput pages, 'default'
  callback null, rv

This is the simplest of all the page-generators: the others have slightly more complex requirements, such as creating a fake “Uncategorised posts” category, or labelling the archive page for January 1970 as “Undated posts”.
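For reference, the sort of helper that ended up in common.coffee looks like this sketch of getArticles, modelled on the equivalent function in Wintersmith’s stock blog template (the exact filtering criteria are an assumption, beyond the missing-template check mentioned above):

# Collect every article from the contents tree, newest first,
# skipping any maybe-articles that have no template defined
env.helpers.getArticles = (contents) ->
  articles = contents['articles']._.directories.map (item) -> item.index
  articles.sort (a, b) -> b.date - a.date
  articles.filter (article) -> article.template isnt 'none'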

There we go: my Wintersmith installations are now reproducing pretty much all of the different types of archive that Wordpress was handling dynamically for me before. The next time I come back to this topic, we’ll move onto the template side of things, including some nasty performance issues I found and then sorted out along the way.

The next part of this post, in which we discuss website templating using Pug, is here

We can rebuild it! We have the technology! (part one)

Or, how many different ways can you host a website?

I said the other day I’d write something about how I rebuilt the site, what choices I made and what coding was involved. I’ve a feeling this might end up stretched into a couple of posts or so, concentrating on different areas. We’ll start, though, by talking about the tech I used to redevelop the site with, and, indeed, how websites tend to be structured in general.

Back in the early days of the web, 25 or 30 years ago now, to create a website you wrote all your code into files and pushed it up to a web server. When a user went to the site, the server would send them exactly the files you’d written, and their browser would display them. The server didn’t do very much at all, and nor did the browser, but sites like this were a pain to maintain. If you look at this website, aside from the text in the middle you’re actually reading, there’s an awful lot of stuff which is the same on every page. We’ve got the header at the top and the sidebar down below (or over on the right, if you’re reading this on a desktop PC). Moreover, look at how we show the number of posts I’ve written each month, or the number in each category. One new post means every single page has to be updated with the new count. Websites from the early days of the web didn’t have that sort of feature, because they would have been ridiculous to maintain.

The previous version of this site used Wordpress, technology from the next generation onward. With Wordpress, the site’s files contain a whole load of code that’s actually run by the web server itself: most of it written by the Wordpress developers, some of it written by the site developer. The code contains templates that control how each kind of page on the site should look; the content itself sits in a database. Whenever someone loads a page from the website, the web server runs the code for that template; the code finds the right content in the database, merges the content into the template, and sends it back to the user. This is the way that most Content Management Systems (CMSes) work, and is really good if you want your site to include features that are dynamically-generated and potentially different on every request, like a “search this site” function. However, it means your webserver is doing much more work than if it’s just serving up static and unchanging files. Your database is potentially doing a lot of work, too.

Databases are seen as a bit of an arcane art by a lot of software developers; they tend to be a specialism in their own right, because they can be quite unintuitive to get the best performance from. The more sophisticated your database server is, the harder that tuning becomes, because how the database is actually searching for your data tends to be opaque. This is a topic that deserves an essay in its own right; all you really need to know right now is that database code can have very different performance characteristics when run against different sizes of dataset, not just because the data is bigger, but because the database itself will decide to crack the problem in an entirely different way. Real-world corporate database tuning is a full-time job; at the other end of the scale, you are liable to find that as your Wordpress blog gets bigger and you add more posts to it, you suddenly pass a point where pages from your website become horribly slow to load, and unless you know how to tune the database manually yourself you’re not going to be able to do much about it.

I said that’s how most CMSes work, but it doesn’t have to be that way. If you’ve tried blogging yourself you might have heard of the Movable Type blogging platform. This can generate each page on request like Wordpress does, but in its original incarnation it didn’t support that. The software ran on the webserver like Wordpress does, but it wasn’t needed when a user viewed the website. Instead, whenever the blogger added a new post to the site, or edited an existing post, the Movable Type software would run and generate all of the possible pages that were available, so they could be served as static pages. This takes a few minutes to do each time, but that’s a one-off cost that isn’t particularly important, whereas serving pages to users becomes very fast. Where this architecture falls down is if that costly regeneration process can be triggered by some sort of end-user action. If your site allows comments, and you put something comment-dependent into the on-every-page parts of your template—the number of comments received next to links to recent posts, for example—then only a small change in the behaviour of your end-users can hugely increase the load on your site. I understand Movable Type does now support dynamically-generated pages as well, but I haven’t played with it for many years so can’t tell you how the two different architectures are integrated together.

Nowadays most heavily-used sites, including blogs, have moved towards what I suppose you could call a third generation of architectural style, which offloads the majority of the computing and rendering work onto the user’s browser. The code is largely written using JavaScript frameworks such as Facebook’s React, and on the server side you have a number of simple “microservices”, each carefully tuned to do a specific task, often a particular database query. Your web browser effectively downloads the template and runs it on your computer (or phone), calling back to the microservices to load each chunk of information. If I wrote this site using that sort of architecture, for example, you’d probably have separate microservice calls to load the list of posts to show, the post content (maybe one call, maybe one per post), the list of category links, the list of month links, the list of popular tags and the list of links to other sites. The template files themselves have gone full circle: they’re statically-hosted files, and the webserver sends them back just as they are. This is a really good system for busy, high-traffic sites. It will be how your bank’s website works, for example, or Facebook, Twitter and so on, because it’s much more straightforward to efficiently scale a site designed this way to handle high levels of traffic. Industrial-strength hosting systems, like Amazon Web Services or Microsoft Azure, have evolved to make this architecture very efficient to host, too. On the downside, your device has to download a relatively large framework library and run its code itself. It also then has to make a number of round-trips to the back-end microservices, which can take some time on a high-latency connection. This is why sometimes a website will start loading, but then you’ll just have the website’s own spinning wait icon in the middle of the screen.
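
Here’s a sketch of what the browser itself ends up doing under that model; the endpoint paths and element IDs are all invented, and a real React application would be considerably more involved than this hand-rolled version:

```javascript
// A minimal sketch of "third generation" page assembly: the browser
// downloads a static template plus this script, then builds the page
// itself by calling back to small server-side endpoints (all of the
// paths below are invented for illustration).
async function renderPage() {
  // Each chunk of the page needs its own round-trip to a microservice.
  const [posts, categories, months] = await Promise.all([
    fetch("/api/posts?page=1").then((r) => r.json()),
    fetch("/api/categories").then((r) => r.json()),
    fetch("/api/months").then((r) => r.json()),
  ]);

  document.querySelector("#posts").innerHTML = posts
    .map((p) => `<article><h2>${p.title}</h2>${p.body}</article>`)
    .join("");
  document.querySelector("#categories").innerHTML = categories
    .map((c) => `<li>${c.name}</li>`)
    .join("");
  document.querySelector("#months").innerHTML = months
    .map((m) => `<li>${m.label}</li>`)
    .join("");
}

renderPage(); // until this finishes, all the user sees is the spinner
```

On a high-latency connection it’s those awaited round-trips, rather than the initial download, that leave you staring at the spinning icon.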

Do I need something quite so heavily-engineered for this site? Probably not. It’s not as if this site is intended to be some kind of engineering portfolio; it’s also unlikely ever to get a huge amount of traffic. With any software project, one of the most important things to do to ensure success is to make sure you don’t get distracted from what your requirements actually are. The requirements for this site are, in no particular order, to be cheap to run, easy to update, and fun for me to work on; which also implies I need to be able to just sit back and write, rather than spend long periods of time working on site administration or fighting with the sort of in-browser editor used by most CMSes. Additionally, because this site does occasionally still get traffic to some of the posts I wrote years ago, if possible I want to make sure posts retain the same URLs as they did with Wordpress.

With all that in mind, I’ve gone for a “static site generator”. This architecture works in pretty much the same way as the older versions of Movable Type I described earlier, except that none of the code runs on the server. Instead, all the code lives on my computer (well, I store it in source control, which is maybe a topic we’ll come back to another time), and I run it whenever I want to make a change to the site. That generates a folder full of files, and those files then all get uploaded to the server, just as if it was still 1995, except that nowadays I can write myself a tool to automate it. This gives me a site that is hopefully blink-and-you’ll-miss-it fast for you to load (partly because I didn’t incorporate much code that runs on your machine), that I have full control over, and that can be hosted very cheaply.
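
That automation tool really can be tiny. Here’s a hedged sketch of its general shape, with the generator command, hostname and paths all invented stand-ins rather than my actual setup:

```javascript
// A sketch of the whole publishing step: regenerate the site locally,
// then copy the output folder up to the server. The generator command,
// host name and paths below are invented examples.
const { execSync } = require("child_process");

// Run the static site generator; assume it writes the site into ./build.
execSync("my-generator build", { stdio: "inherit" });

// Upload the static files, just as if it were still 1995, but automated.
execSync("rsync -av --delete build/ example.com:/var/www/site/", {
  stdio: "inherit",
});
```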

There are a few static site generators you can choose from if you decide to go down this architectural path, assuming you don’t want to completely roll your own. The market leader is probably Gatsby, although it has recently had some well-publicised problems in its attempt to repeat Wordpress’s success in pivoting from being a code firm to a hosting firm. Other popular examples are Jekyll and Eleventy. I decided to go with a slightly less-fashionable but very flexible option, Wintersmith. It’s not as widely-used as the others, but it is small, slim and easily extensible, which for me means it’s more fun to play with and adapt: I can tweak it to get exactly the results I want, rather than being forced down a path by what the software can do. As I said above, if you want your project to be successful, don’t let yourself be distracted away from what your requirements originally were.

The downside to Wintersmith, for me, is that it’s written in CoffeeScript, a language I don’t know particularly well. However, CoffeeScript code is arguably just a different syntax for writing JavaScript,* which I do know, so I realised at the start that if I did want to write new code, I could just do it in JavaScript anyway. If I familiarised myself with CoffeeScript along the way, so much the better. We’ll get into how I did that, and how I built this site and wrote my own plugins for Wintersmith, in the next part of this post.
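
As a tiny illustration of what “just a different syntax” means in practice (my own invented snippet, nothing taken from Wintersmith itself):

```javascript
// The CoffeeScript source:
//
//   square = (x) -> x * x
//   console.log square 5
//
// compiles to more or less the JavaScript below, with nothing added
// that you couldn't have written by hand yourself.
var square = function (x) {
  return x * x;
};
console.log(square(5)); // prints 25
```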

The next part of this post, in which we discuss how to get Wintersmith to reproduce some of the features of Wordpress, is here.

* This sort of distinction (is this a different language, or just a dialect of an existing one?) causes controversy in software development almost as much as it does in natural languages. However, CoffeeScript’s official website tries to avoid the argument by taking a clear line on this: “The golden rule of CoffeeScript is: ‘It’s just JavaScript’”.

Inconsistency

In which different tools behave in different ways

One of those afternoons at work when everything seemed to go wrong: partly because of things I broke, partly because of things that other people had messed up before I got there, and partly because of things that seemed to go wrong entirely by themselves.

For example (warning: dull technical paragraph ahead), I hadn’t realised that Visual Studio copes remarkably well with slightly-corrupt solution files, and will happily skip over and ignore the errors; but other tools, such as MSBuild, will throw the whole file out, curl up and cry into their beer. Visual Studio, whilst ignoring the error, also won’t fix it. Therefore, when git is a git and accidentally corrupts a solution file in a merge, you will have no problems at all with a local build, but mysterious and hard-to-diagnose total failures whenever you try to build on the build server.
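
For the curious, here’s a hypothetical example of the kind of mangling a bad merge can leave behind: one Project entry that has lost its closing EndProject line. The project names and GUIDs are invented; the outer GUID is just the standard C# project-type identifier.

```
Microsoft Visual Studio Solution File, Format Version 12.00
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyApp", "MyApp\MyApp.csproj", "{11111111-2222-3333-4444-555555555555}"
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyApp.Tests", "MyApp.Tests\MyApp.Tests.csproj", "{66666666-7777-8888-9999-000000000000}"
EndProject
```

Visual Studio will open something like that without complaint; MSBuild, as described above, throws the whole file out.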

Update, September 8th 2020: At some point I will write a proper blog post about what happened here, how to spot that it is going to happen, and how to fix it, because although MSBuild is much less front-and-centre now we are in the .NET Core world, there are still plenty of people out there using .NET Framework, and they still occasionally face this problem.

Disassembly, Reassembly

In which we try to use metaphor

The past two days at work have largely been the long slog of writing unit tests for a part of the system which, firstly, was one of the hairiest and oldest parts of the codebase; and which, secondly, I’ve just rewritten from scratch. In its non-rewritten form it was almost entirely impossible to test, due to its reliance on static code without any sort of dependency injection.
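
If you’d like the technical version of what “static code without any sort of dependency injection” means, here’s a hedged sketch in JavaScript (rather than whatever language you imagine the real system is written in), with every name invented:

```javascript
// Before: the code reaches straight out to a static dependency, so a
// unit test has no way to intercept the call. (All names invented.)
const StaticLedger = {
  balanceFor: (accountId) => {
    throw new Error("talks to the real database"); // why testing was impossible
  },
};

class ReportBuilder {
  build(accountId) {
    // Hard-wired to the static ledger: any test of build() hits the database.
    return `Balance: ${StaticLedger.balanceFor(accountId)}`;
  }
}

// After: the dependency is injected through the constructor, so a test
// can hand in a harmless stand-in instead.
class TestableReportBuilder {
  constructor(ledger) {
    this.ledger = ledger;
  }
  build(accountId) {
    return `Balance: ${this.ledger.balanceFor(accountId)}`;
  }
}

// A unit test can now prove build()'s own behaviour in isolation:
const fakeLedger = { balanceFor: () => 42 };
console.log(new TestableReportBuilder(fakeLedger).build("any-id")); // "Balance: 42"
```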

For non-coding people for whom this is all so much “mwah mwah mwah” like the adults in Peanuts: a few weeks ago I was doing some interesting work, to wit, dismantling a creaking horror, putting its parts side by side on the workbench, scraping off the rust and muck and polishing them up, before assembling the important bits back together into a smoother, leaner contraption and throwing away all the spare screws, unidentifiable rusted-up chunks and other bits that didn’t seem to do anything. Now, though, I have the job of going through each of the newly-polished parts of the machine and creating tools to prove that they do what I think they were originally supposed to do. As the old machine was so gummed-up and tangled with spiderwebs and scrags of twine, it was impossible to try to do this before, because trying to poke one part would have, in best Heath Robinson style, accidentally tugged on the next bit and pushed something else that was supposed to be unconnected, setting off a ball rolling down a ramp to trip a lever and drop a counterweight to hit me on the head in the classic slapstick manner. All this testing of each aspect of the behaviour of each part of the device is, clearly, a very important task, but it’s also a very dull one. Which is why an awful lot of coders don’t like to do it properly, or use it as a “hiding away” job to avoid doing harder work.

Nevertheless, today it did lead me to find one of those bugs which is so ridiculous it made me laugh, so ridiculous that you have an “it can’t really be meant to work like that?” moment and have to dance around the room slightly. I confirmed with the team business analyst that no, the system definitely shouldn’t behave the way the code appeared to. I asked the team maths analyst what he thought, and he said, “actually, that might explain a few things.”

Repetition

In which we get annoyed with AWS

The problem with writing a diary entry every day is that most weeks of the year, five days out of seven are work days. It’s hard to write about work and make it interesting and different every day, whilst also not writing about anything too confidential.

Writing about work itself would quickly become pretty dull, I fear, however interesting I tried to make it. Today, I wrestled with the Amazon anaconda. Amazon have a product called Elastic Beanstalk, which is a bit like a mini virtual data centre for a single website. You pick a virtual image for your servers, you upload a zip file with your website in it, and it fires up as many virtual servers as you like, balances the load between them, fires up new servers when load is high and shuts down spare ones when load is low. If you’re not careful it’s a good way to let people DDoS your credit card, because you pay for servers by the hour, but all-in-all it works quite well. The settings are deliberately fairly simple and straightforward: how powerful a server do you want to run on, how many of them do you need at different times of the day, and a few other more esoteric and technical knobs to tweak.

Elastic Beanstalk isn’t so much a product in itself as a wrapper around lots of other Amazon Web Services products that you can use separately: virtual servers, server auto-scaling and inbound load balancing. The whole idea is to make tying those things together in a typical use-case really easy, rather than having to roll your own each time and struggle with the hard bits. The only thing that’s specifically Elastic Beanstalk is the control panel and configuration layer on top, which keeps track of what you want installed on your Elastic Beanstalk application and controls all the settings of the individual components. You can still access the individual components too, if you want to, and you can even change their settings directly, but doing so is usually a Bad Idea, as the Elastic Beanstalk control layer will then potentially get very confused.
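
To show how few knobs there really are, here’s roughly what creating an environment looks like through the AWS SDK for JavaScript; the application name, platform string and all the values below are invented examples, not our actual configuration:

```javascript
// A hedged sketch using the AWS SDK for JavaScript (v2). Elastic
// Beanstalk's handful of knobs are passed as "option settings" that it
// forwards to the underlying services. All names and values invented.
const AWS = require("aws-sdk");
const eb = new AWS.ElasticBeanstalk({ region: "eu-west-1" });

eb.createEnvironment(
  {
    ApplicationName: "my-website",
    EnvironmentName: "my-website-prod",
    SolutionStackName: "64bit Amazon Linux 2 v5.8.0 running Node.js 18", // the virtual image
    OptionSettings: [
      // How powerful a server do you want to run on?
      { Namespace: "aws:autoscaling:launchconfiguration", OptionName: "InstanceType", Value: "t3.small" },
      // How many of them do you need?
      { Namespace: "aws:autoscaling:asg", OptionName: "MinSize", Value: "2" },
      { Namespace: "aws:autoscaling:asg", OptionName: "MaxSize", Value: "6" }, // the DDoS-your-credit-card ceiling
    ],
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log("Environment starting:", data.EnvironmentName);
  }
);
```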

Today, I found I couldn’t update one of our applications: a problem with an invalid configuration. Damn. So I went to fix the configuration, but it was invalid, so I couldn’t open it to fix it. It was broken, but I couldn’t fix it because it was broken. Oh.

That’s how exciting work is. One line of work held up whilst I speak to Ops and get the broken Elastic Beanstalk environment replaced from scratch with a working one. In theory I could have done it myself, but our Ops chap doesn’t really like his part of the world being infringed on unilaterally.

The woman at the desk opposite me is on a January diet. One of those diets that involves special milkshakes and lots of water all day. Personally, I’d rather have real food.