2025 updates
Created: 1761771101 (2025-10-29T20:51:41Z), Updated: 1766786553 (2025-12-26T22:02:33Z), 2743 words, ~12 minutes
Tags: meta, rant, blog update
This post is part of series blog update: 2023, 2024, 2025, 2026
After the last more major-ish update to the blog in 2023, I decided to go down the rabbit hole again. So some rant-filled updates incoming.
Video player#
Probably the most noticeable update is the removal of the Plyr based player. In the previous article I described how I modified Plyr to remove most of the garbage, but of course I never touched it again, so the site was running on an unmaintained fork of Plyr for years. Not that it didn't still do its job, but it had its problems, and honestly it was easier to write a new player than to fix it. I also hated its modern nonsense: being unstylable, the fucking annoying disappearing playback controls; plus I had some future additions in mind which would have required more in-depth modifications to Plyr. And for reasons unrelated to this blog, I had the "pleasure" of working on some web projects, so I had a bit more experience with web abominations than when I did the last update.
So what's changed? Windows 9x style for the player too! No more auto-disappearing playback controls when not in fullscreen! Also, they're at the top of the video, not at the bottom, just because. The scrollbars aren't Windows-like, either.
What changed under the hood? A lot. I've converted the javashit code to TypeScript, which means npm crap to compile the site. This is less than ideal, as nanoc has zero support for any JS bundlers, so it's a disgusting hack now (rollup basically writes the bundle into nanoc's content directory), but it works. More on nanoc later. But this also means I can use JS dependencies to avoid reinventing the wheel, so I decided to use a grand total of 1 (one) JS library: VanJS. Think react, except a few gigabytes smaller. I've mostly ported the gallery to VanJS—I say mostly, because while it's using VanJS now, if I were to rewrite it from scratch, I'd surely architect it differently. The video player is written in a more reactive style.
Maybe one of the most annoying things in Plyr was how to specify thumbnails for a video. See the rant in the previous post: basically I had to make a VTT file with a weird syntax (VTT is normally for subtitles; Plyr (ab)used it for thumbnails), then make a texture atlas however I liked. However, Plyr didn't support changing the resolution of the thumbnails, which was a problem for videos with changing resolutions. I had to work around it by letterboxing all thumbnails to a common resolution, but with the new player I no longer have to! Changing the VTT to a saner (and more compact) format was all I needed to do, but... There's always a but. While trying to figure out the interval used by YouTube to take snapshots (I didn't find the answer), I ran into a random stackoverflow question, quoting the relevant part of the answer:
Optimize order of downloads. For example, if you have video with length 2:55:
- First, download container image with 8 thumbs covering full range of video time: 0:00, 0:25, 0:50, 1:15, 1:40, 2:05, 2:30, 2:55. This makes your service to appear as "working instantly" for slow clients.
- Next, download 4 container images with 32 thumbs total covering full range of video, but more densely: 0:00, 0:06, 0:11, 0:17, ...
- Now, download gradually all other thumbnails without any particular order.
Hmm... Let's make it a bit more regular, and we have a working solution. I called a set of images above a "layer", and for example with 4 layers the solution could look like:
- Layer 0 contains every 8th thumbnail (where `i % 8 == 0`).
- Layer 1 contains every 4th thumbnail not already on layer 0. (Alternatively, in this special case, every 8th image with an offset of 4, but this description won't scale.)
- Layer 2 contains every 2nd thumbnail not already on layer 0 or 1.
- Layer 3 contains every remaining thumbnail (or every 1st not already on layer 0, 1 or 2).
And this can be made to work with any number of layers: with n layers, layer l contains the thumbnails where `i % (1 << (n - l - 1)) == 0` (and which aren't already on an earlier layer).
One downside is that for every l >= 2, a layer has more thumbnails than the previous one, so different layers have different sizes.
But on the other hand, the important layers are smaller and thus are faster to download.
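The rule above can be spelled out in a short sketch (Ruby, since the thumbnail generator side of this blog is Ruby; `layer_of` is an illustrative name, not the actual site code):

```ruby
# For n layers, thumbnail i goes to the lowest-numbered (highest-priority)
# layer l whose stride (1 << (n - l - 1)) divides i. Since layer n-1 has
# stride 1, every thumbnail lands somewhere.
def layer_of(i, n)
  (0...n).find { |l| i % (1 << (n - l - 1)) == 0 }
end

# With 4 layers: thumbnails 0, 8, 16, ... land on layer 0;
# 4, 12, 20, ... on layer 1; 2, 6, 10, ... on layer 2; odd ones on layer 3.
```

Downloading layers 0, 1, 2, ... in order then reproduces exactly the coarse-to-fine schedule from the quoted answer.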
Right now the longest videos on this blog have 4 layers, shorter ones have fewer, but the important thing is that even on a slow internet connection, you should get a rough thumbnail set pretty fast!
Oh, and thumbnails and video posters can be JXL and scaled now too. Previously Plyr only supported specifying a single image, but with the new player that's no longer an issue. Now, currently JXL is practically only supported by Pale Moon, so it's probably not too useful, but better HiDPI support is welcome.
One change which might affect non-javashit users a bit negatively is that you'll no longer have a HTML video embed.
The problem is, while in principle a single <video> element can have multiple <source> elements, I don't know of any browser which allows you to select which one to use.
They're only good for listing the same video in multiple formats, so a browser which supports only one of them can pick it; they're useless when you want the choice to be user selectable.
And having multiple instances of the same video next to each other in different quality would just look stupid, so non-JS users will see links for the different quality video files (similar to what they already have with the gallery).
Also, fun fact: while redoing the thumbnails, I noticed I hadn't included a video about track presentation (or the lack of it) in the NFS5 post. I made the video, I put it into git, and due to how nanoc works it was even deployed to the server, just never referenced anywhere. Best of all, in the NFS6 post I even wrote that presentations are back, despite failing to even mention them in NFS5's article... So you might want to go back and check the new ranting I added there for your enjoyment, and sorry for forgetting about it the first time.
HiDPI support#
Back when I first checked how the blog looks under a HiDPI screen, I quickly added a big warning on top of the page:
Warning! Your devicePixelRatio is not 1! (Are you a phonefag or using a meme 4k monitor?) This site might appear distorted. Reset your browser's zoom level (or use Zoom text only in Pale Moon/Firefox). If you use Pale Moon/Firefox, set layout.css.devPixelsPerPx to 1 in about:config. If you use a chromium based browser, launch it with --force-device-scale-factor=1. Note that this will probably make everything unreadable, but at least it won't fuck up pixel graphics. Don't buy a HiDPI garbage next time.
Then as time went on, I improved HiDPI support.
Part of it was the fact I couldn't buy a new laptop with a non-HiDPI screen with otherwise acceptable specs...
Part of it is the new video player, where I could fix things I couldn't with Plyr previously.
Now pixel thumbnails are scaled with CSS zoom; on Pale Moon you get a filter-based workaround, which is not perfect.
Hopefully Pale Moon will add support for zoom soon, as it's standard now, and I can remove this disgusting workaround, since it complicates the non-Pale Moon code too, but we'll see.
It also lacks support for CSS min()/max() functions...
Getting ready: nanoc?#
This blog is currently built with nanoc. And while in general I like it, it has some problems which make using it increasingly annoying.
First is the text and binary item handling.
Every file is either text or binary, decided solely by its extension (so you can't do pattern matching on the content), and the two are handled completely differently (text files are read into memory by nanoc, while binary files are passed around as filenames).
The thumbnail generator for the videos outputs a YAML file with info about the thumbs, YAML is a textual format, so it should be a text file, right?
No!
In nanoc, you can put metadata at the beginning of a file, for example to give the page a title or something, between a pair of --- headers.
This is fine, except YAML files generated by Ruby also start with ---.
And if you mark something as text file, nanoc will try to parse these blocks no matter what, and bail out saying your YAML file is invalid since it can't find a second --- line.
Nanoc's documentation even has a note about this error (of course the anchor won't jump to the error, because having a free Palestine banner in the fucking software documentation is more important than having documentation for the fucking software that fucking actually fucking works for anything other than fucking virtue signaling), and the suggested solution is to add two more --- to the header.
Yeah, I'm going to fuck up all my YAML files just so this idiot tool won't shit itself.
In the end I went with the alternative of marking these YAML files as binary, because apparently in nanoc, text files can't start with ---.
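For the record, the collision is easy to reproduce: Ruby's YAML emitter (Psych) starts every dumped document with ---, which is exactly what nanoc treats as the start of a metadata block (the hash keys below are made up for illustration):

```ruby
require 'yaml'

# Psych emits a "---" document-start marker at the top of every dump,
# which nanoc would try to parse as the opening of a front-matter block.
doc = YAML.dump({ 'thumb_width' => 160, 'thumb_height' => 90 })
puts doc
# The output begins with:
# ---
# thumb_width: 160
# thumb_height: 90
```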
The other problem with binary items is snapshots.
Nanoc by default creates three snapshots for each item, which, while useful at times, is in my experience completely unnecessary 99% of the time.
And since binary items live in files, this involves making at least 4 copies of each binary file (the 3 snapshots, plus the file in the output dir).
Of course this is not a big deal if you have a few kbyte sized files, but with video files having a size of a couple hundred megabytes, these copies add up quickly.
I've worked around this by monkey patching nanoc so it just symlinks the files instead of copying.
(Newer nanoc versions are supposed to use reflink copies on BTRFS, so the data won't actually occupy disk space more than once, but it still blows up du output and creates a lot of extra work for file backup.)
The above is just annoying, but here comes the deal breaker: nanoc is ridiculously slow.
To have nanoc just print the help message takes about 300 ms.
I guess nanoc should be renamed to bloatc or slowc.
A no-op rebuild (when nothing has to be rebuilt, and everything is cached into memory) takes about 4 seconds.
A change in lib/ (which triggers a full rebuild), but without any change in the output, is about 33 seconds.
Oh, and this is with ruby --jit; without JIT, both the no-op and the lib touch rebuild times get noticeably worse.
Oh and just a quick tip. Don't write @items['foo'].
While it looks nice, if it can't find an item named foo exactly, it will switch into globbing mode, and try to fnmatch EVERY FUCKING ITEM against your string which contains zero wildcard characters!
So it's suddenly O(n) with a bigger constant instead of O(1).
Nanoc gives you free hash flooding in a project where there is no untrusted input!
The correct answer is the mouthful @items[Nanoc::Core::Identifier.new id].
Yeah, nanoc is already thrashing the GC, so this will help it for sure, but at least the algorithm stays O(1).
What would I change from nanoc to? I don't know. It's not like nanoc's documentation has a "ridiculously slow" point under features, and with small toy sites I can't really test the speed of alternative static site generators. Of course, there's always the option of rolling my own tool, designed to be parallel from the get-go, but I don't want to do that yet. But I didn't want to write my own video player either...
Update 2025-12-26: while messing around with my nanoc replacement idea, I ran into a peculiarity about SHA-256 checksums in Ruby (commands output truncated for brevity):
```
$ cat foo.rb
require 'digest/sha2'
puts Digest::SHA256.file(ARGV[0]).hexdigest
$ truncate -s1000 small
$ perf stat -r 10 sha256sum small
  0.0009217 +- 0.0000444 seconds time elapsed ( +- 4.81% )
$ perf stat -r 10 ruby foo.rb small
  0.036914 +- 0.000413 seconds time elapsed ( +- 1.12% )
```
OK, sha256sum is a utility written in C doing only one thing (calculating SHA-256 checksums), while Ruby needs to execute a bunch of code to set up the interpreter and load the modules.
Still, 37 ms vs 0.9 ms.
```
$ truncate -s$((1024*1024*1024)) big
$ perf stat -r 10 sha256sum big
  0.504410 +- 0.000431 seconds time elapsed ( +- 0.09% )
$ perf stat -r 10 ruby foo.rb big
  3.20031 +- 0.00901 seconds time elapsed ( +- 0.28% )
```
Hmm?
Ruby is still more than 6 times slower than sha256sum.
So it's not just the interpreter startup overhead after all.
Spawning sha256sum from ruby and parsing its output is faster than using the built-in Digest::SHA256 class if your input is more than a kibibyte long...
OK, next try. Ruby has an OpenSSL binding, let's try using it:
```
$ cat foz.rb
require 'openssl'
puts OpenSSL::Digest::SHA256.file(ARGV[0]).hexdigest
$ perf stat -r 10 ruby foz.rb small
  0.051716 +- 0.000821 seconds time elapsed ( +- 1.59% )
$ perf stat -r 10 ruby foz.rb big
  0.58543 +- 0.00142 seconds time elapsed ( +- 0.24% )
```
Loading the OpenSSL library has a much higher overhead, but its performance is comparable to the command line sha256sum tool.
Now, let me summarize this in a table.
The added raw rows mean using Ruby's Benchmark module to do the measurements, thus ignoring the interpreter startup and require time.
| Name | Small (ms) | Big (ms) |
|---|---|---|
| sha256sum | 0.9 | 504 |
| Ruby Digest | 36.9 | 3200 |
| Ruby OpenSSL | 51.7 | 585 |
| Ruby Digest raw | 0.031 | 3120 |
| Ruby OpenSSL raw | 0.024 | 530 |
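A raw measurement looks something like this (a sketch, not the exact harness; a tempfile stands in for the small/big test files above):

```ruby
require 'benchmark'
require 'openssl'
require 'tempfile'

# Time only the digest call itself, excluding interpreter startup and
# require -- this is what the "raw" rows in the table isolate.
file = Tempfile.new('digest-bench')
file.write('x' * 1000)
file.close

elapsed = Benchmark.realtime do
  OpenSSL::Digest::SHA256.file(file.path).hexdigest
end
puts format('%.3f ms', elapsed * 1000)
```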
As you can see from the ruby raw numbers, with the small input, the above perf commands practically measured the ruby interpreter startup time.
Why is this important?
If you look at any image or video on this blog (not in this post, sorry), the URL will look something like /c/some random identifier/filename.
The random identifier is the SHA-256 sum of the file, encoded in Base32, truncated to 10 characters.
Before looking at the code, I could have sworn I used xxHash, but for some reason I went with SHA-256 (there's no untrusted input here, so a cryptographically secure hash is not needed), and of course it used Digest::SHA256.
I've replaced it with OpenSSL in the site code; a no-op rebuild with an empty digest cache (but the files cached in memory) went down from 43.
And what about xxHash?
I could decrease the above-mentioned no-op rebuild to 6.