
Symbolic Forest

A homage to loading screens.

Blog : Posts tagged with ‘web design’

We can rebuild it! We have the technology! (part four)

Or, finishing off the odds and ends

Settling down to see what else I should write in the series of posts about how I rebuilt this website, I realised that the main issues have already been covered in the previous posts in this series.

And throughout the last two, we touched on some other important software engineering topics, such as refactoring your code so that you Don’t Repeat Yourself, and optimising your code when it’s clear that (and where) it is needed.

There are a few other topics to touch on, but I wasn’t sure they warranted a full post each, so this post is on a couple of those issues, and any other odds and ends that spring to mind whilst I am writing it.

Firstly, the old blog was not at all responsive: in web front-end terms, that means it didn’t mould itself to fit the needs of the users and their devices. Rather, it expected the world to all use a monitor the same size as mine, and if they didn’t, then tough. When I wrote the last designs, the majority of the traffic the site was receiving was from people on regular computers; nowadays, that has changed completely.

However, the reason that this isn’t a particularly exciting topic to write about is that I didn’t learn any new skills or dive into interesting new programming techniques. I went the straightforward route, installed Bootstrap 4.5, and went from there. Now, I should say, using Bootstrap doesn’t magically mean your site will become responsive overnight; in fact, it’s very straightforward to accidentally write an entirely non-responsive website. Responsiveness needs careful planning. However, with that careful planning, and some careful use of the Bootstrap CSS layout classes, I achieved the following aims:

  • The source code is laid out so that the main content of the page always comes before the sidebars in the code, wherever the sidebars are actually displayed. This didn’t matter so much on this blog, but on the Garden Blog, which on desktop screens has sidebars on both sides of the page, it does need to be specifically coded. Bootstrap’s layout classes, though, allow you to separate the order in which columns appear on a page from the order in which they appear in the code.
  • More importantly, the sidebars move about depending on page width. If you view this site on a desktop screen it has a menu sidebar over on the right. On a narrow mobile screen, the sidebar content is down below at the bottom of the page.
  • The font resizes based on screen size for easier reading on small screens. You can’t do this with Bootstrap itself; this required @media selectors in the CSS code with breakpoints chosen to match the Bootstrap ones (which, fortunately, are clearly documented).
  • Content images (as opposed to what you could call “structural images”) have max-width: 100%; set. Without this, if the image is bigger than the computed pixel size of the content column, your mobile browser will likely rescale things to get the whole image on screen, so the content column (and the text in it) will become too narrow to read. (There’s a short CSS sketch of this and the font-sizing point just after this list.)
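
To make those last two points a little more concrete, here is a minimal sketch of the kind of CSS involved. The selector and the exact sizes are my own illustration rather than this site’s real stylesheet; the breakpoint is the conventional “just below Bootstrap 4’s md breakpoint” value from the Bootstrap documentation.

/* Content images scale down to fit the column instead of forcing it wider */
.article-content img {
  max-width: 100%;
  height: auto;
}

/* Slightly smaller base font on narrow screens, below Bootstrap 4's "md" breakpoint */
@media (max-width: 767.98px) {
  body {
    font-size: 0.9rem;
  }
}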

On the last point, manual intervention is still required on a couple of types of content. Embedded YouTube videos like in this post need to have the embed code manually edited, and very long lines of text without spaces need to have soft hyphens or zero-width spaces inserted, in order to stop the same thing happening. The latter usually occurs in example code, where zero-width spaces are more appropriate than soft hyphens. All in all though, I’ve managed to produce something that is suitably responsive 95% of the time.
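
The post doesn’t show what the edited embed code ends up looking like. As an illustration only – an assumption on my part, not necessarily what this site does – one common approach with Bootstrap 4 is to drop the fixed width and height from YouTube’s iframe and wrap it in the responsive-embed helper classes (with VIDEO_ID standing in for a real video):

<div class="embed-responsive embed-responsive-16by9">
  <iframe class="embed-responsive-item" src="https://www.youtube.com/embed/VIDEO_ID" allowfullscreen></iframe>
</div>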

The other point that is worth writing about is the build process of the site. As Wintersmith is a static site generator, every change to the site needs the static files to be built and deployed. The files from the previous version need to be deleted (in case any stale ones, that have disappeared completely from the latest iteration of the output, are still lying about) and then Wintersmith is run to generate the new version. You could do this very simply with a one-liner: rm -rf ../../build/* && wintersmith build if you’re using Bash, for example. However, this site actually consists of three separate Wintersmith sites in parallel. The delete step might only have to be done once, but doing the build step three separate times is a pain. Moreover, what if you only want to delete and rebuild one of the three?

As Wintersmith is a JavaScript program, and uses npm (the Node Package Manager) for managing its dependencies, it turns out that there’s an easy solution to this. Every npm package or package-consumer uses a package.json file to control how npm works; and each package.json file can include a scripts section containing arbitrary commands to run. So, in the package.json file for this blog, I inserted the following:

"scripts": {
  "clean": "node ../shared/js/unlink.js ../../build/blog",
  "build": "wintersmith build"
}

You might be wondering what unlink.js is. I said before that “if you’re using Bash” you could use rm -rf ../../build. However, I develop on a Windows machine, and for this site I use VS Code to do most of the writing. Sometimes therefore I want to build the site using PowerShell, because that’s the default VS Code terminal. Sometimes, though, I’ll be using GitBash, because that’s convenient for my source control commands. One day I might want to develop on a Linux machine. One of the big changes between these different environments is how you delete things: del or Remove-Item in PowerShell; rm in Bash and friends. unlink.js is a small script that reproduces some of the functionality of rm using the JavaScript del package, so that I have a command that will work in the same way across any environment.
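
The post doesn’t reproduce unlink.js itself, but going by that description, a minimal sketch might look something like this – assuming a CommonJS-era version of the del package, and with the argument handling being my own guess:

// unlink.js - a tiny cross-platform stand-in for "rm -rf", so the same npm
// script works from PowerShell, Git Bash or a Linux shell.
// Paths or globs to delete are passed on the command line,
// e.g. "node unlink.js ../../build/blog".
const del = require('del');

const targets = process.argv.slice(2);

del(targets, { force: true }) // force allows deleting paths outside the current working directory
  .then(deleted => console.log(`Deleted ${deleted.length} path(s)`))
  .catch(err => {
    console.error(err);
    process.exit(1);
  });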

So, this means that in the main blog’s folder I can type npm run clean && npm run build and it will do just the same thing as the one-liner command above (although note that the clean step only deletes the main blog’s files). In the other Wintersmith site folders, we have a very similar thing. Then, in the folder above, we have a package.json file which contains clean and build commands for each subsite, and a top-level command that runs each of the others in succession.

"scripts": {
  "clean:main": "cd main && npm run clean",
  "clean:misc": "cd misc && npm run clean",
  "clean:garden": "cd garden && npm run clean",
  "clean": "npm run clean:misc && npm run clean:main && npm run clean:garden",
  "build:main": "cd main && npm run build",
  "build:misc": "cd misc && npm run build",
  "build:garden": "cd garden && npm run build",
  "build": "npm run build:misc && npm run build:main && npm run build:garden"
}

And there you have it. By typing npm run clean && npm run build at the top level, it will recurse into each subsite and clean and build all of them. By typing the same command one folder down, it will clean and build that site alone, leaving the others untouched.

When that’s done, all I have to do is upload the changed files; and I have a tool to do that efficiently. Maybe I’ll go through that another day. I also haven’t really touched on my source-control and change management process; all I have to say there is that it doesn’t matter what process you use, but you will find things a lot more straightforward if you find a process you like and stick to it. Even if you’re just a lone developer, a sensible source control workflow and release process makes life much easier: you don’t need anything as rigid as a big commercial organisation, but just having a set process for storing your changes and releasing them to the public makes you much less likely to slip up and make mistakes. This is probably something else I’ll expand into an essay at some point.

Is the site perfect now? No, of course not. There are always more changes to be made, and more features to add; I haven’t even touched on the things I decided not to do right now but might bring in one day. Those changes are for the future, though. Right now, for a small spare-time project, I’m quite pleased with what we have.

We can rebuild it! We have the technology! (part one)

Or, how many different ways can you host a website?

I said the other day I’d write something about how I rebuilt the site, what choices I made and what coding was involved. I’ve a feeling this might end up stretched into a couple of posts or so, concentrating on different areas. We’ll start, though, by talking about the tech I used to redevelop the site with, and, indeed, how websites tend to be structured in general.

Back in the early days of the web, 25 or 30 years ago now, to create a website you wrote all your code into files and pushed it up to a web server. When a user went to the site, the server would send them exactly the files you’d written, and their browser would display them. The server didn’t do very much at all, and nor did the browser, but sites like this were a pain to maintain. If you look at this website, aside from the text in the middle you’re actually reading, there’s an awful lot of stuff which is the same on every page. We’ve got the header at the top and the sidebar down below (or over on the right, if you’re reading this on a desktop PC). Moreover, look at how we show the number of posts I’ve written each month, or the number in each category. One new post means every single page has to be updated with the new count. Websites from the early days of the web didn’t have that sort of feature, because they would have been ridiculous to maintain.

The previous version of this site used Wordpress, technology from the next generation onward. With Wordpress, the site’s files contain a whole load of code that’s actually run by the web server itself: most of it written by the Wordpress developers, some of it written by the site developer. The code contains templates that control how each kind of page on the site should look; the content itself sits in a database. Whenever someone loads a page from the website, the web server runs the code for that template; the code finds the right content in the database, merges the content into the template, and sends it back to the user. This is the way that most Content Management Systems (CMSes) work, and is really good if you want your site to include features that are dynamically-generated and potentially different on every request, like a “search this site” function. However, it means your webserver is doing much more work than if it were just serving up static and unchanging files. Your database is potentially doing a lot of work, too. Databases are seen as a bit of an arcane art by a lot of software developers; they tend to be a specialism in their own right, because getting the best performance out of them can be quite unintuitive, and the more sophisticated your database server is, the harder it is to tune, because how it actually searches for your data tends to be opaque. This is a topic that deserves an essay in its own right; all you really need to know right now is that database code can have very different performance characteristics when run against different sizes of dataset, not just because the data is bigger, but because the database itself will decide to crack the problem in an entirely different way. Real-world corporate database tuning is a full-time job; at the other end of the scale, you are liable to find that as your Wordpress blog gets bigger and you add more posts to it, you suddenly pass a point where pages from your website become horribly slow to load, and unless you know how to tune the database manually yourself you’re not going to be able to do much about it.

I said that’s how most CMSes work, but it doesn’t have to be that way. If you’ve tried blogging yourself you might have heard of the Movable Type blogging platform. This can generate each page on request like Wordpress does, but in its original incarnation it didn’t support that. The software ran on the webserver like Wordpress does, but it wasn’t needed when a user viewed the website. Instead, whenever the blogger added a new post to the site, or edited an existing post, the Movable Type software would run and generate all of the possible pages that were available, so they could be served as static pages. This takes a few minutes to do each time, but that’s a one-off cost that isn’t particularly important, whereas serving pages to users becomes very fast. Where this architecture falls down is if that costly regeneration process can be triggered by some sort of end-user action. If your site allows comments, and you put something comment-dependent into the on-every-page parts of your template – the number of comments received next to links to recent posts, for example – then even small changes in the behaviour of your end-users can hugely increase the load on your site. I understand Movable Type does now support dynamically-generated pages as well, but I haven’t played with it for many years so can’t tell you how the two different architectures are integrated together.

Nowadays most heavily-used sites, including blogs, have moved towards what I suppose you could call a third generation of architectural style, which offloads the majority of the computing and rendering work onto the user’s browser. The code is largely written using JavaScript frameworks such as Facebook’s React, and on the server side you have a number of simple “microservices” each carefully tuned to do a specific task, often a particular database query. Your web browser will effectively download the template and run it on your computer (or phone), calling back to the microservices to load each chunk of information. If I wrote this site using that sort of architecture, for example, you’d probably have separate microservice calls to load the list of posts to show, the post content (maybe one call, maybe one per post), the list of category links, the list of month links, the list of popular tags and the list of links to other sites. The template files themselves have gone full-circle: they’re statically-hosted files and the webserver sends them back just as they are. This is a really good system for busy, high-traffic sites. It will be how your bank’s website works, for example, or Facebook, Twitter and so on, because it’s much more straightforward to efficiently scale a site designed this way to process high levels of traffic. Industrial-strength hosting systems, like Amazon Web Services or Microsoft Azure, have moved to make this architecture very efficient to host, too. On the downside, your device has to download a relatively large framework library, and run its code itself. It also then has to make a number of round-trips to the back-end microservices, which can take some time on a high-latency connection. This is why sometimes a website will start loading, but then you’ll just have the website’s own spinning wait icon in the middle of the screen.
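
Purely as an illustration of the shape of that architecture – the endpoints and element IDs below are invented, not anything this site actually has – the client-side code might look roughly like this:

// Hypothetical client-side rendering: the template is a static file, and each
// chunk of content is fetched from its own microservice endpoint.
async function renderPage() {
  const [posts, categories] = await Promise.all([
    fetch('/api/posts?tag=web-design').then(r => r.json()),
    fetch('/api/categories').then(r => r.json()),
  ]);

  document.querySelector('#content').innerHTML = posts
    .map(p => `<article><h2>${p.title}</h2>${p.body}</article>`)
    .join('');

  document.querySelector('#categories').innerHTML = categories
    .map(c => `<li><a href="${c.url}">${c.name}</a></li>`)
    .join('');
}

renderPage();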

Do I need something quite so heavily-engineered for this site? Probably not. It’s not as if this site is intended to be some kind of engineering portfolio; it’s also unlikely ever to get a huge amount of traffic. With any software project, one of the most important things to do to ensure success is to make sure you don’t get distracted from what your requirements actually are. The requirements for this site are, in no real order, to be cheap to run, easy to update, and fun for me to work on; which also implies I need to be able to just sit back and write, rather than spend long periods of time working on site administration or fighting with the sort of in-browser editor used by most CMS systems. Additionally, because this site does occasionally still get traffic to some of the posts I wrote years ago, if possible I want to make sure posts retain the same URLs as they did with Wordpress.

With all that in mind, I’ve gone for a “static site generator”. This architecture works in pretty much the same way as the older versions of Movable Type I described earlier, except that none of the code runs on the server. Instead, all the code is stored on my computer (well, I store it in source control, which is maybe a topic we’ll come back to at another time) and I run it on my computer, whenever I want to make a change to the site. That generates a folder full of files, and those files then all get uploaded to the server, just as if it was still 1995, except nowadays I can write myself a tool to automate it. This gives me a site that is hopefully blink-and-you’ll-miss-it fast for you to load (partly because I didn’t incorporate much code that runs on your machine), that I have full control over, and that can be hosted very cheaply.

There are a few static site generators you can choose from if you decide to go down this architectural path, assuming you don’t want to completely roll your own. The market leader is probably Gatsby, although it has recently had some well-publicised problems in its attempt to repeat Wordpress’s success in pivoting from being a code firm to a hosting firm. Other popular examples are Jekyll and Eleventy. I decided to go with a slightly less-fashionable but very flexible option, Wintersmith. It’s not as widely-used as the others, but it is very small and slim and easily extensible, which for me means that it’s more fun to play with and adapt, to tweak to get exactly the results I want rather than being forced into a path by what the software can do. As I said above, if you want your project to be successful, don’t be distracted away from what your requirements originally were.

The downside to Wintersmith, for me, is that it’s written in CoffeeScript, a language I don’t know particularly well. However, CoffeeScript code is arguably just a different syntax for writing JavaScript,* which I do know, so I realised at the start that if I did want to write new code, I could just do it in JavaScript anyway. If I familiarised myself with CoffeeScript along the way, so much the better. We’ll get into how I did that – how I built this site and wrote my own Wintersmith plugins – in the next part of this post.

The next part of this post, in which we discuss how to get Wintersmith to reproduce some of the features of Wordpress, is here.

* This sort of distinction – is this a different language, or just a dialect? – is the sort of thing which causes controversies in software development almost as much as it does in natural languages. However, CoffeeScript’s official website tries to avoid controversy by taking a clear line on this: “The golden rule of CoffeeScript is: ‘it’s just JavaScript’”.

Curious problem

In which we have an obscure font problem, in annoyingly specific circumstances

Only a day after the new garden blog went live, I found myself with a problem. This morning I noticed that, on K’s PC – and only on K’s PC – in Firefox and in IE, the heading font was hugely oversized compared to the rest of the page. In Chrome, everything was fine.

Now, I’d tested the site in all of my browsers. On my Windows PC, running Windows 7 just like K’s, there were no problems in any of the browsers I’d tried. On my Linux box, all fine; on my FreeBSD box, all fine. But on K’s PC, apart from in Chrome, the heading font was completely out, whether I tried setting an absolute size or a relative one.

All of the fonts on the new site are loaded through the Google Webfonts API, because it’s nice and simple and practically no different to self-hosting your fonts. Fiddling around with it, I noticed something strange: it wasn’t just a problem specific to K’s PC, it was a problem specific to this one particular font. Changing the font to anything else: no problems at all. With the font I originally chose: completely the wrong size on the one PC. Bizarre.

After spending a few hours getting more and more puzzled and frustrated, I decided that, to be frank, I wasn’t that attached to the specific font. So, from day 2, the garden blog is using a different font on its masthead. The old one – for reference, “Love Ya Like A Sister” by Kimberly Geswein – was abandoned, rather than wrestle with getting it to render at the right size on every computer out there. The replacement – “Cabin Sketch” by Pablo Impallari – does that reliably, as far as I’ve noticed;* and although it’s different it fits in just as well.

* this is where someone writes in and says it looks wrong on their Acorn Archimedes, or something along those lines.

The size of things

In which we measure monitors

The redesign is now almost done, which means that soon you’ll be saved from more posts on the minutiae of my redesign. It’s got me thinking, though: to what extent do I need to think about readers’ technology?

When this blog first started, I didn’t really worry about making it accessible to all,* or about making sure that the display was resolution-independent. It worked for me, which was enough. Over time, screens have become bigger; and, more importantly, more configurable, so I’ve worried less and less about it. When it came to do a redesign, though, I started to wonder: what browsers do my readers actually use?

Just after Christmas, for entirely different reasons, I signed up for Google Analytics, rather than do my own statistics-counting as I had been doing. Because Google Analytics relies on JavaScript to do its dirty work, it gives me rather more information about such things than the old log-based system did. So, last week, I spent an hour or so with my Analytics results and a spreadsheet. Here’s the graph I came up with:

Browser horizontal resolutions, cumulative %

The X-axis there is the horizontal width of everyone’s screens, in order but not to scale; the Y-axis is the cumulative percentage of visits.** In other words, the percentage figure for a given width tells you the proportion of visits from people whose screen was that size, or wider.

Straight away, really, I got the answer I wanted. 93% of visits to this site are from people whose screens are 1024 pixels wide, or more. It’s 95% if I take out the phone-based browsers at the very low end, because I suspect most of that is accounted for by K reading it on the bus on her way home from work. At the next step up, though, the graph plunges to only 2/3 of visits. 1024 pixels is the smallest screen width that my visitors use heavily.

Admittedly there’s a bit of self-selection in there, based on the current design; it looks horrible at 800 pixels, and nearly everyone still using an 800×600 screen has only visited once in the two-month sample period. However, that applies to most of the people who visit this site in any case; just more so for the 800-pixel users. Something like 70% of visits are from people who have probably only visited once in the past couple of months; so it’s fair to assume that my results aren’t too heavily skewed by the usability of the current design. It will be interesting to see how much things change.

I’m testing the new design in the still-popular 1024×768 resolution, to make sure everything will still work. I’ll probably test it out a fair bit on K’s phone, too. But, this is a personal site. If you don’t read it, it’s not vital, to you or to me. If I don’t test it on 800×600 browsers, the world won’t end. The statistics, though, have shown me where exactly a cutoff point might be worthwhile.

* For example, in the code of the old design, all that sidebar stuff over on the right comes in the code before this bit with the content, which does (I assume) make it a bit of a bugger for blind readers. That, at least, will be sorted out in the new design.

** “visits” is of course a bit of a nebulous term, but that is a rant for another day.

Classification

In which we discuss tagging and filksonomies

Another design point that’s come up as part of the Grand Redesign I keep promising you: tagging. The little bundle of links at the bottom of each post that I didn’t really think did very much.

I was a latecomer to tagging. When this site first started, it didn’t have any for the first month or so. After a while I started adding them, pointing them to Technorati. Back then, the site was running on WordPress version 1.5.something, and it didn’t have any built-in tagging support. I was trying to avoid using too many WordPress plugins, and I didn’t think that tag management (as distinct from tagging per se) mattered all that much; so I wrote all the tags manually. Like this, at the end of each post, with the <a> element repeated for each tag:

<small>Keyword noise: <a class="tag" rel="tag" href="…">tag1</a>, …</small>

Which worked, quite well; the links had their own “tag” class, because I wanted tag links – which all led to Technorati back then – to be visually distinct from regular links, which would go to whatever they were about.

Things move on, though, and WordPress has since gained built-in tagging functionality. Given that I’m redesigning the whole site, and putting in new built-from-scratch layout templates, I thought I may as well switch to using a more organised tagging system. For one thing, it means less typing each time I write a post. All that code up above is replaced by one little chunk in the template:

<p id="thetags"><small><?php the_tags('Keyword noise: ', ', ' ,");?></small></p>

This one covers all the tags, calling a Wordpress API function to pull them out of the database and convert them into HTML. I know all those commas and quotes look a bit confusing; but really they’re not that bad. And the point is: this is in the template, not in each post. That bit of code there only has to be written once; the previous chunk had to be typed out every time. The most awkward part is that WordPress isn’t flexible enough to let you set the class of each link individually, hence the <p id="thetags"> wrapper at the start, so the links inside can be styled through that instead.

The big change this leads to, though, is that the tag links no longer point to Technorati. Now, they point back to the site itself: tag page requests generate a page containing every post marked with that tag. And, already, that’s shown that people do indeed click on the tags. People, particularly people coming from searches, do seem to use them. Whether they find them useful or not is another matter, of course, now that they point back within the site and not to a broader variety of opinions on the matter; but they do get used.

Doing it this way means that I put more tags on each post, simply because there’s much less typing to do. Conversion, though, is going to be a bit of a job. Right now there are 760-odd posts on this site, all of which I’m having to reread and re-tag. It’s going to take a while, but hopefully the majority of it will be done by the time the new design is finished.* The only problem with this transitional phase is that the current template is, because of its age, completely unaware of tags. It doesn’t really know what a tag-based archive page is, so when you click on a tag, there’s no explanation as to what you’re looking at. I’m not sure if this is going to be a problem for you readers or not; and, hopefully, it’s only going to be a short-lived situation.

The word “folksonomy” has often been used to describe this sort of tagging system. I’m not sure it’s an ideal term for what I’m doing, though. “Filksonomy” might be more relevant: a bit like a folksonomy, but rather more whimsical and silly.

* In any case, there are plenty of other parts of the new design that also need each post checking and potentially editing.

Design points

In which nothing, design-wise, is accomplished

As I mentioned recently, I’m embarking on a Grand Epic Ground-Upwards Redesign of this site, because, well, the design hasn’t been changed since I first set it up. I knocked it together in a few days’ holiday in August ’05; back then my holiday year ended in August, and I often had a few spare days at the end of the month when I had nothing to do and needed to keep myself occupied. In 2005, this blog was the result.

Anyway, my point is: it was put together in a bit of a hurry, with most of the design code ripped out of a standard theme I downloaded, without me really understanding what each bit did. The design’s always had a few rough edges, and there are lots of things that I’ve meant to develop further but never have. Hopefully, some of those points will be addressed, attacked, and taken by storm.

Thinking about the design, though, and what I want it to achieve, has made me think about one of the things I was most unhappy with when I first put this site together. One of the things I liked about this theme when I first saw it* was the little boxes for the date on each post. You know, these ones:

Date with cardinal number

One thing I didn’t like, though, was the cardinal number. Maybe it’s because I’m English, and that’s how I was taught, but when I read a date, I always read it with an ordinal number: “January 11th”, not “January 11”.

I can’t remember, to be honest, if it was possible to fix that easily when I first started using WordPress. Possibly it was; possibly it’s something that’s been added later.** In any case, I didn’t fix it. I know I tried to, at one point, but abandoned the fix and didn’t go back to it. Then I forgot the issue until, coming back to the redesign, I tried the fix again the other day. When I retried it, I remembered why I’d abandoned it the first time, because this is the result:

Date with ordinal number

Those two extra characters mean that on most days, the text is just marginally too long to fit in the box. The box gets pushed down. Which isn’t so bad; but, it doesn’t always happen. You can’t necessarily know what the date box will look like; how it will relate to the elements around it. Moreover, I don’t know how it will look on other computers, where the fonts have slightly differing metrics to mine.

There are ways to fix it, of course. The box could be slightly wider. I could make sure that the horizontal line always comes underneath the date box, although that might leave annoying white space under the post title. The question, though, is whether it’s worth doing. However many times I tweak it, I’m not sure I’d ever get it quite right based on the current design.

And so, this all is partly why I’m going to start pretty much from scratch. The risk is that I’ll reinvent the wheel; the upside is that at least I’ll know how it works from its heart.

* and still is

** To be pedantic: it’s not a feature of WordPress itself, it’s a feature of PHP, the underlying language. I’m too lazy to go back through PHP’s version change logs and find out when the feature in question – the “S” character in date formatting strings – was added.
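
For illustration – my example, not the old theme’s actual template code – this is the difference that “S” character makes in a WordPress template:

<?php the_time('F j'); ?>   <!-- prints e.g. "January 11"   -->
<?php the_time('F jS'); ?>  <!-- prints e.g. "January 11th" -->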

Freedom of speech

In which we ponder competition among blog hosting companies

Back in the mists of time, I wrote about Jakob Nielsen’s top ten blog design mistakes. Including: not having your own domain name. My response: there are several sites I read and respect that do do this, but if you want to be completely sure you control your own reputation, you need to control your domain name too.

One thing I didn’t consider, though, is that the people who host your site can, if they want, control what you put on it. Filter out things they don’t like. You could, for example, do what News Corporation subsidiary Myspace have been caught doing: censor links to video-hosting sites, presumably because these sites will soon become News Corp’s competitors when Myspace introduces its own video-hosting service. You might think you can say anything you like on the internet – but if you’re a Myspace user, that apparently doesn’t apply.

Byline

In which we think about design and credibility

Going back to last week’s post on Jakob Nielsen’s top ten blog design mistakes: his Number Two Mistake is: no author photo on the site. Thinking about it, out of all the mistakes on his list, that’s almost certainly the most commonly-made.

Faces are better-remembered than names, he says. People will come up to you because they recognise you from your photo, he says. How many bloggers actually want that to happen to them, though? I know I don’t. It makes you personable, he says; it improves your credibility if people know who you are.

I said before that I don’t believe it would give me any extra credibility. I don’t think you need to know what I look like in order to believe the stuff I write here; and frankly, I don’t care whether you believe it or not; I know myself how true it all is. Thinking about it again, though, I’m a bit suspicious of his reasoning; and what makes me suspicious is: comparing his theories to the way newspapers work.

If you look at most national newspapers – I mean, British ones – their regular columnists will have byline photos. You know what they look like, so, the theory goes, you should trust them more. Columnists, though, aren’t there to write things you need to trust them about. They’re there to write down their opinions, which may well be – and often are – complete bollocks. The news pages, which are the parts you’re supposed to believe are true, don’t bother with byline photos. They don’t always bother with bylines. These people, though, are the ones you’re supposed to trust, and their words are supposedly more trustworthy because of their relative anonymity.

Of course, this all breaks down when you consider that most bloggers see their role in life as over-opinionated commentators, not the byline-free just-the-facts news types. I wanted to mention it, though, because it’s a different angle on how trust works in the media. Who do you trust more?

Mistakes

In which we consider how well this site scores against Nielsen’s standard

Website design and usability expert Dr Jakob Nielsen has published his list of the top ten blog design mistakes. So, I thought I’d go through the list and see how many of them I’m making.

No author biography, no author photo. Well, there’s a kind of biography, but certainly no photo. I don’t want you to know who I am. It wouldn’t mean anything unless you already know me in real life. Telling you more about myself wouldn’t gain me anything in credibility, which seems to be the most important point here.

Nondescript posting titles – I do this all the time. Mostly because, as he says, writing good headlines is hard work. Partly, though, because this isn’t a news site. If you look at a newspaper, the concise descriptive headlines Nielsen favours are all over the news pages; but the comment sections’ headlines are deliberately vaguer and more enticing.

Links don’t say where they go – I try not to do this, because I know it’s bad for search engines.

Classic hits are buried. If I ever write some, I might think about doing something about this.

The calendar is the only navigation – in other words, you should try to categorise everything properly. I’d say I score half-marks on this one.

Irregular publishing frequency is about the only thing on the list you can’t accuse me of, unless you want to complain about me not always posting at the same time every day.

Mixing topics. Hah. I don’t even have a topic.

Forgetting that you write for your future boss – this is why I don’t tell you much about who I am, in the hope of avoiding this problem. Nielsen thinks that trying to avoid this is hopeless given the march of technology, though.

Finally, Having a domain name owned by a weblog service – lots of well-known, well-respected sites do do this. I see the point, though: you need to control your domain to control your reputation. Not something I need to worry about, though.

Totting up, I seem to have hit six (and a half) of the top mistakes in weblog design. All of them, though, are very good points when made about a different sort of site to mine. I just don’t feel that those six mistakes I’ve made are a problem for me at the moment – and some of them might be mistakes, but they’re decisions that I deliberately took. I’m fairly happy with the nature of this site at the moment, whatever an expert might say.

You’re It

In which we consider the mechanics of tagging

Feeling at a loose end, I’ve been experimenting with possible ways of adding Technorati tags to the posts here, without making them unreadable. There are three ideas I’m trying:

  1. using a different class of link for links that identify tags
  2. additionally, confining tags to a footnote-like section at the bottom of each post
  3. putting tags inside an “invisible” block inside each post (using ‘display: none’ to hide the tag paragraph – see the sketch just after this list)
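
For what it’s worth, Idea 3 would presumably have looked something like the sketch below; this is my own illustration, with the link pointing at the Technorati tag-page URL pattern of the time.

<p class="tags" style="display: none;">
  Keyword noise:
  <a rel="tag" href="http://technorati.com/tag/web+design">web design</a>
</p>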

This post uses Idea 1; and I’ve also edited Friday’s post to use a combination of ideas 1 and 2. Thursday’s post, on the other hand, has been edited with Idea 2 alone. My first reaction is that including the tags in the post body makes the post look a bit too messy, and not many people will realise the difference between the two sorts of link. If you have an opinion, let me know what you think. Would Idea 3 even work, or would it just be ignored?

Update, later that day: I’ve emailed Technorati Support to ask if Idea 3 would actually work. It does feel like a bit of a dirty, underhand trick to pull though.

Update, some years later – Jan 24th ’09 to be precise: I eventually decided on a method combining Ideas 1 and 2. And – pending a redesign – I still use it. However, I’m currently rewriting all the tags so they don’t go to Technorati any more; they go to “tag pages” within this site instead. It’s demonstrated, for one thing, that lots of people do actually click on the tags. Incidentally, as part of this, I’ve regularised the “test posts” described above; the “(b) test” no longer looks as described.