
Symbolic Forest

A homage to loading screens.

Blog : Posts from March 2024

Becoming visible

In which we talk about Transgender Day of Visibility

Today, March 31st, is Transgender Day of Visibility. This year, 2024, it’s fifteen years since the event first started. Event is maybe a big word. It’s a marker, a day in the calendar for trans people to stand up and be loud about who they are.

The calendar sometimes seems full of queer-related events nowadays. Aside from TDoV there’s LGBTQ History Month (February, in Britain at any rate); Pride Month (June); Transgender Day of Remembrance (November); and probably more that haven’t immediately sprung into my head. It sometimes feels like there are so many similar events in the calendar that one comes around every week. Nevertheless, they are still all important. Transgender Day of Visibility was started as a celebration, a reaction to the only trans-specific day in the calendar being one of sadness and hurt, a reaction to the medical establishment’s position that the ultimate goal of all trans people should be to become invisible, and a reaction to those who don’t think trans people should be included under the queer umbrella. A day for us to stand up and be proud of ourselves.

Yes, ourselves.

This blog started in its current form in August 2005, getting on for nineteen years ago. In all that time, I think, I’ve not once referenced the fact that I am trans. There’s a reason for that.

I’m not just trans, I am a detransitioner. In 2005, I had just detransitioned. I went into deep, deep denial about who I was and who I am. So, here, it was never mentioned.

I started to transition again in 2021. One of the first things one of my close friends said was: “Welcome back!” It touched me more than you can imagine. I scanned all of the content on this blog for anything that gendered me, and scrupulously removed it all. I wasn’t ready to talk about it here until now.

Transitioning, like coming out, isn’t a single event. It’s a lifelong process. But an important part of my second transition, coming out to my work colleagues, coincidentally happened two years ago today. Not specifically because it was TDoV, just because we happened to have the quarterly all-staff meeting that day and HR thought it would be a good idea to make it as face-to-face an event as possible. I didn’t mind. I had to do it three times, with separate groups of people. Each time I told them the basic facts, and each time everyone around me was as caring and supportive as possible. In general, that has absolutely been the case. The first time I came out, over twenty years ago, I did lose friends. Not most, but some. This time, everyone in my life who matters to me has been completely and unequivocally supportive of me.

There’s never a right day to come out. Just like being gay, though: if you’re trans, you’re still trans whether or not you come out. Detransitioners are still, ultimately, trans, even though they are used as a political football by the queerphobic—one reason I always kept very quiet about being a detransitioner. I was born trans, I always will be trans, and I always would have been even if I had never transitioned.

As I do transition, too, I’m becoming less visible. I look like any other middle-aged mum now. It’s not immediately obvious that I’m transgender, not at all. That’s one reason, I think, why days like TDoV are still important. Even though I do enjoy looking like any other middle-aged mum, and I enjoy no longer having to fight for my gender to be perceived, I will still always be trans. Like many middle-aged women, I rely on HRT now. Even people who know I am trans forget that I am; a colleague was recently slightly surprised to discover that I have changed my first name. Before too long, people will only know if they go back and read things like this, or if I stand up on days like today and say so. It matters, though. In some ways, I want to be visible.

There’s a museum I’ve taken The Children to a few times, which often has the same person either behind the counter or working as a custodian in one of the rooms. They have long hair, and a beard. They appear to be male. But…every time they see me, even though they are a stranger, their face breaks out into a broad, broad smile as if they are incredibly happy to see me existing in the world as a visibly trans woman. I’ve seen that look a few times, on the faces of strangers in the street, on the faces of teenagers, even on the faces of work colleagues. They’re probably also trans people, trans people who for now are still in the closet, who haven’t been able to transition yet. Maybe they never will. But in moments like that, I know it’s good to be visible, it’s good to be able to show people that this is possible. At least one friend has told me that my transition inspired them to come out too. I hope I can keep doing that—I hope I can keep inspiring people and showing them that it is possible to be out in the world as your true self. I hope all of them, everyone who sees me and feels that urge inside, is able to find themselves eventually.

Beside the sea again

Or, resurgence from the waves

Regular readers might remember that two or three years back, I visited the Buck Beck Beach Bench, a strange and delightful bench built up from driftwood on one of the remoter stretches of Cleethorpes Beach. I haven’t been back very much since that visit, for one reason and another, but I did keep following the Bench and its creators on social media. Because of that, I knew that twice since, it had been completely destroyed by storms; and then, rebuilt. After all, the Bench first started as a ramshackle, makeshift affair for dog-walkers to sit on whilst they waited for the tide to turn, and it was created by slow, organic growth rather than some grand plan. When it is destroyed, it comes back, recreated with the same impulse to make something, build something, and leave a record that people stood in a particular spot and stared out at the ever-changing ocean.

A view of the Buck Beck Beach Bench

The bench is smaller now, much smaller than it was before, small enough that it can almost be captured in a single photo. The bench-builders still aim for everything the bench is made from to be safely degradable, something that will rot away harmlessly when it is washed away, as it inevitably will be.

A view of the Buck Beck Beach Bench

The new bench has moved a little from its previous spot, which can still be identified from fragments of the old bench lying about and projecting from the sand. It is on higher ground now, higher above the waves. This does give it a more commanding view, but I doubt it will last as long as its previous incarnations. This is because it stands on top of a dune, close to its edge, and before very long that edge will have eroded away. It will erode quickly, both from the action of the sea at spring tides and from the footsteps of people climbing up and down between the shore and the bench. I only give it a few months before it is undermined and topples down into the water.

Closeup of the Buck Beck Beach Bench

But when it does, it will be rebuilt. And I’ll go down there again, take photos of what its latest regeneration looks like, at once the same but entirely, completely different. And then I will turn, homeward and landward, picking my path carefully back through the marsh.

View from the Buck Beck Beach Bench

Modern technology

Or, keeping the site up to date

Well, hello there! This site has been on something of a hiatus since last summer, for one reason and another. There’s plenty to write about, there’s plenty going on, but somehow I’ve always been too busy, too distracted, with too many other things going on, to sit down and want to write a blog post. Moreover, there were some technical issues that I felt needed to be resolved too.

This site has never been a “secure site”. By that I mean that the connection between the website’s server and your browser has never been encrypted; anyone with access to the network in-between can see what you’re looking at. Alongside that, there’s no way for you to be certain that you’re looking at my genuine site, that the connection from your browser or device is actually going to me, not just to someone pretending to be me. Frankly, I’d never thought, for the sort of nonsense I post here, that it was very important. You’re not going to be sending me your bank details or your phone number; since the last big technical redesign, all of four years ago now, you haven’t been able to send me anything at all because I took away the ability to leave a comment. After that redesign was finished, “turn the site into a secure site” was certainly on the to-do list, but never very near the top of it. For one thing, I doubt anyone would ever want to impersonate me.

That changed a bit, though, in the last few months. There has been a concerted effort from the big browser companies to push users away from accessing sites that don’t use encryption. This website won’t show up for you in search results any more. Some web browsers will show you an error page if you go to the site, and you have to deliberately click past a warning telling you, in dire terms, that people might interfere with your traffic or capture your credit card number. That’s not really a risk for this site, but the general trend has been to push non-technical users towards thinking that all non-encrypted sites are extremely dangerous to the same degree. It might be a bit debatable, but it’s easy and straightforward for them to do, and it does at least avoid any confusion for users, saving them from having to make any sort of value judgement about a technical issue they don’t properly understand. The side effect: it puts a barrier in front of actually viewing this site. To get over that barrier, I’d have to implement TLS security.

After I did the big rewrite, switching this site over from Wordpress to a static site generator back in 2020, I wrote a series of blog posts about the generation engine I was using and the work pipeline I came up with. What I didn’t talk about very much was how the site was actually hosted. It was hosted using Azure Storage, which lets you expose a set of files to the internet as a static website very cheaply. Azure Storage supports using TLS encryption for your website, and it supports you hosting it under a custom domain like symbolicforest.com. Unfortunately, it doesn’t very easily let you do both at the same time; you have to put a Content Delivery Network in front of your Storage container, and terminate your TLS connection on the CDN. It’s certainly possible to do, and if this was the day job then I’d happily put the parts together. For this site, though, a weird little hobby site that I sometimes don’t update for months or years at a time, it felt like a fiddly and expensive way to go.

During the last four years, though, Microsoft have introduced a new Azure product which falls somewhere in between the Azure Storage web-hosting functionality and the fully-featured hosting of Azure App Service. This is Azure Static Web Apps, which can host static files in a similar way to Azure Storage, but with a control panel interface more like Azure App Service. Moreover, Static Web Apps feature TLS support for custom domains, out of the box, for free. This is a far cry from 20-something years ago, when I remember having to get a solicitor to prove my identity before someone would issue me with a (very expensive) TLS certificate; according to the documentation, it Just Works with no real configuration needed at all. Why not, I thought, give it a bit of a try?

With Azure Storage, you dump the files you want to serve as objects in an Azure Blob Storage container and away you go. With an App Service, you can zip up the files that form your website and upload them. Azure Static Web Apps are a bit more complex than this: they only support deployment via a CI/CD pipeline from a supported source repository hosting service. For, say, Github, Azure tries to automate it as much as possible: you link the Static Web App to your Github account, specify the repository, and Azure will create an Action which is run on updates to the main branch, and which uses Microsoft Oryx to build your site and push the build artefacts into the web app. I’m sure you could manually pull apart what Oryx does, get the web app’s security token from the Azure portal, and replicate this whole process manually, but the goal is clearly that you use a fully automated workflow.
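For the curious, the workflow Azure sets up looks roughly like the sketch below. This is an illustrative outline rather than my exact file: the branch name, the secret name and the path settings are placeholders that Azure fills in for each site when it creates the workflow, and the output location here assumes the built site ends up in a build directory inside the repository.

```yaml
# Rough sketch of an Azure Static Web Apps deployment workflow.
# Branch, secret name and paths are placeholders, not this site's real values.
name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main

jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true

      - name: Build and deploy
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          action: "upload"
          app_location: "/"          # root of the repository
          api_location: ""           # no API functions on this site
          output_location: "build"   # where the built site ends up, relative to app_location
```

The static-web-apps-deploy action is the piece that invokes Oryx: it builds whatever it finds at app_location and then pushes the contents of output_location up into the web app.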

My site had never been set up with an automated workflow: that was another “nice to have” which had never been that high on the priority list. Instead, my deployment technique was all very manual: once I had a version of the site I wanted to deploy in my main branch—whose config was set up for local running—I would merge that branch into a deploy branch which contained the production config, manually run npm run clean && npm run build in that branch, and then use a tool to upload any and all new or changed files to the Azure Storage container. Making sure this all worked inside a Github Action took a little bit of work: changing parts of the site templates, for example, to make sure that all paths within the site were relative so that a single configuration file could handle both local and production builds. I also had to make sure that the top-level npm run build script also called npm install for each subsite, including for the shared Wintersmith plugins, so that the build would run on a freshly-cloned repository without any additional steps. With a few other little tweaks to match what Oryx expected—such as the build output directory being within the source directory instead of alongside it—everything would build cleanly inside a Github action runner.
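The top-level scripts ended up along the lines of the sketch below. The subsite directory names here are hypothetical, just to illustrate the shape of it; the real repository has its own layout, and each subsite is assumed to define its own build script that runs Wintersmith.

```json
{
  "//": "Sketch only: directory names are illustrative, not the site's real layout.",
  "scripts": {
    "clean": "rm -rf build",
    "install:subsites": "npm --prefix plugins install && npm --prefix blog install && npm --prefix misc install",
    "build": "npm run install:subsites && npm --prefix blog run build && npm --prefix misc run build"
  }
}
```

With something like this in place, a freshly-cloned repository needs nothing more than an npm install at the top level followed by npm run build, which is exactly the sort of thing Oryx expects to be able to do.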

It was here I hit the major issue. One of the big attractions of Azure Static Web Apps is that they’re free! Assuming you only want a personal site, with a couple of domain names, they’re free! Being from Northern England, I definitely liked that idea. However, free Static Web Apps also have a size limit of 250Mb. Oryx was hitting it.

This site is an old site, after all. There are just over a thousand posts on here, at the time of writing,* some of them over twenty years old. You can go back through all of them, ten at a time, from the home page; or you can go through them all by category; or month by month; or there are well over 3,000 different tags. Because this site is built from static pages, the text of each post is repeated inside multiple different HTML files, as well as each post having its own individual page. All in all, that adds up to about 350Mb of data to be hosted. I have to admit, that’s quite a lot. An average of 350Kb or so per post—admittedly, there are images in there which bump that total up a bit.

In the short term, this is fixable, in theory. Azure Static Web Apps offer two Hosting Plans at present. The free one, with its 250Mb limit, and a paid one. The paid one has a 500Mb limit, which should be enough for now. In the longer term, I might need to look at solutions to reduce the amount of space per post, but for now it would work. It wasn’t that expensive, either, so I signed up. And found that…Oryx still fell over. Instead of clearly hitting a size limit, I was getting a much vaguer error message: “Failure during content distribution.” That’s not really very helpful, but I could see two things. Firstly, this only occurred when Oryx was deploying to my production environment, not to the staging environment, so the issue wasn’t in my build artefacts. Secondly, it always occurred just as the deployment step passed the five-minute-runtime mark—handily, it printed a log message every 15 seconds which made that nice and easy to spot. The size of the site seemed to be causing a timeout.

The obvious place to try to fix this was with the tag pages, as they were making up over a third of the total file size. For comparison, all of the images included in articles were about half, and the remaining sixth, roughly speaking, covered everything else including the individual article pages. I tried cutting the article text out of the tag pages, assuming readers would think to click through to the individual articles if they wanted to read them, but the upload still failed. However, I did find a hint in a Github issue suggesting that the error could also occur for uploads which changed lots of content. I built the site with no tag pages at all, and the upload worked. I rebuilt it with them added in again, and it still worked.

Cutting the article text out of the tag pages has only really reduced the size to about 305Kb per post, so for the long term, I am definitely going to have to do more to ensure that I can keep blogging for as long as I like without hitting that 500Mb size limit. I have a few ideas for things to do on this, but I haven’t really measured how successful they are likely to be. Also, the current design requires pretty much every single page on the site to change when a new post is added, because of the post counts on the by-month and by-category archive pages. That was definitely a nuisance when I was manually uploading the site after building it locally; if it causes issues with the apparent 5-minute timeout, it may well prove to be a worse problem for a Static Web App. I have an idea on how to work around this, too; hopefully it will work well.

Is this change a success? Well, it’s a relatively simple way to ensure the site is TLS-secured whilst still hosting it relatively cheaply, and it didn’t require too much in the way of changes to fit it into my existing site maintenance processes. The site feels much faster and more responsive, subjectively to me, than the Azure Storage version did. There are still more improvements to do, but they are improvements that would likely have been a good idea in any case; this project is just pushing them closer to the top of the heap. Hopefully it will be a while before I get near the next hosting size limit; and in the meantime, this hosting change has forced me to improve my own build-and-deploy process and make it much smoother than it was before. You never know…maybe I’ll even start writing more blog posts again.

* If I’ve added up correctly, this is post 1,004.