
2025 updates

Created: 1761771101 (2025-10-29T20:51:41Z), Updated: 1766786553 (2025-12-26T22:02:33Z), 2743 words, ~12 minutes


This post is part of the series blog update: 2023, 2025

After the latest, more major-ish update to the blog in 2023, I decided to go down the rabbit hole again. So some rant-filled updates incoming.

Video player#

Probably the most noticeable update is the removal of the Plyr-based player. In the previous article I described how I modified Plyr to remove most of the garbage, but of course I never touched it again, so the site was running on an unmaintained fork of Plyr for years. Not that it stopped doing its job, but it had its problems, and honestly it was easier to write a new player than to fix it. I also hated its modern nonsense: being unstylable, the fucking annoying disappearing playback controls, plus I had some future additions in mind which would require more in-depth modifications to Plyr. And for reasons unrelated to this blog, I had the "pleasure" of working on some web projects, so I had a bit more experience with web abominations than when I did the last update.

So what's changed? Windows 9x style for the player too! No more auto-disappearing playback controls when not in fullscreen! Also, they're at the top of the video, not at the bottom, just because. The scrollbars aren't Windows-like either.

What changed under the hood? A lot. I've converted the javashit code to TypeScript, which means npm crap to compile the site. That's less than ideal, as nanoc has zero support for any JS bundlers, so it's a disgusting hack now (rollup basically writes the bundle into nanoc's content directory), but it works. More on nanoc later. But this also means I can use JS dependencies to avoid reinventing the wheel, so I decided to use a grand total of 1 (one) JS library: VanJS. Think React, except a few gigabytes smaller. I've mostly ported the gallery to VanJS—I say mostly, because while it's using VanJS now, if I were to rewrite it from scratch, I'd surely architect it differently. The video player is written in a more reactive style.

Maybe one of the most annoying things in Plyr was how to specify thumbnails for a video. See the rant in the previous post: basically I had to make a VTT file with a weird syntax (VTT is normally for subtitles; Plyr (ab)used it for thumbnails), then make a texture atlas however I liked. However, Plyr didn't support changing the resolution of the thumbnails, which was a problem for videos with changing resolutions. I had to work around it by letterboxing all thumbnails to a common resolution, but with the new player I no longer have to! Changing VTT to a more sane (and compact) format was all I needed to do, but... There's always a but. While trying to figure out the interval YouTube uses to take snapshots (I didn't find the answer), I ran into a random Stack Overflow question; quoting the relevant part of the answer:

Optimize order of downloads. For example, if you have video with length 2:55:

  1. First, download container image with 8 thumbs covering full range of video time: 0:00, 0:25, 0:50, 1:15, 1:40, 2:05, 2:30, 2:55. This makes your service to appear as "working instantly" for slow clients.
  2. Next, download 4 container images with 32 thumbs total covering full range of video, but more densely: 0:00, 0:06, 0:11, 0:17, ...
  3. Now, download gradually all other thumbnails without any particular order.

Hmm... Let's make it a bit more regular, and we have a working solution. I called a set of images above a "layer", and for example with 4 layers the solution could look like:

And this can be made to work with any number of layers: with n layers, layer l will contain the thumbnails where i % (1 << (n-l-1)) == 0. One downside here is that for every l >= 2, you'll have more thumbnails than on the previous layer, so different layers have different sizes. But on the other hand, the important layers are smaller and thus faster to download. Right now the longest videos on this blog have 4 layers, shorter ones have fewer, but the important thing is that even if you have a slow internet connection, you should get a rough thumbnail set pretty fast!
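The scheme above can be sketched in a few lines. This is my reconstruction of the formula, not the player's actual (TypeScript) code:

```ruby
# With n layers, layer l contains every thumbnail index i where
# i % (1 << (n - l - 1)) == 0. Layer 0 is the coarsest; each later
# layer is denser and includes all indices of the previous one.
def layer_indices(thumb_count, layers, l)
  step = 1 << (layers - l - 1)
  (0...thumb_count).select { |i| (i % step).zero? }
end

layer_indices(16, 4, 0)  # => [0, 8]
layer_indices(16, 4, 1)  # => [0, 4, 8, 12]
layer_indices(16, 4, 3)  # => every index from 0 to 15
```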

Oh, and thumbnails and video posters can be JXL and scaled too now. Previously Plyr only supported specifying a single image, but with the new player this is no longer an issue. Now, currently JXL is practically only supported by Pale Moon, so it's probably not too useful, but better HiDPI support is welcome.

One change which might affect non-javashit users a bit negatively is that you'll no longer have an HTML video embed. The problem is that while in principle a single <video> element can have multiple <source> elements, I don't know of any browser which allows you to select which one to use. They're only good for listing multiple formats where the browser supports only one of them, not when you want it user-selectable. And having multiple instances of the same video next to each other in different qualities would just look stupid, so non-JS users will see links to the different quality video files (similar to what they already have with the gallery).

Also, fun fact: while redoing the thumbnails, I noticed I didn't include a video about track presentation (or the lack of it) in the NFS5 post. I made a video, I put it into git, and due to how nanoc works it was even deployed on the server—just never referenced anywhere. Best of all, in the NFS6 post I even wrote that presentations are back, despite failing to even mention them in NFS5's article... So you might want to go back and check the new ranting I added there for your enjoyment, and sorry for forgetting about it the first time.

HiDPI support#

Back when I first checked how the blog looked on a HiDPI screen, I quickly added a big warning on top of the page:

Warning! Your devicePixelRatio is not 1! (Are you a phonefag or using a meme 4k monitor?) This site might appear distorted. Reset your browser's zoom level (or use Zoom text only in Pale Moon/Firefox). If you use Pale Moon/Firefox, set layout.css.devPixelsPerPx to 1 in about:config. If you use a chromium based browser, launch it with --force-device-scale-factor=1. Note that this will probably make everything unreadable, but at least it won't fuck up pixel graphics. Don't buy a HiDPI garbage next time.

Then as time went on, I improved HiDPI support. Part of it was the fact that I couldn't buy a new laptop with a non-HiDPI screen and otherwise acceptable specs... Part of it is the new video player, where I could fix things I couldn't with Plyr previously. Now pixel thumbnails/videos (and gallery/video UI elements) are supposed to be scaled only by an integer factor; everything else should be scaled more-or-less normally. There's a small problem with the UI, though: recently CSS zoom became standard. When I started working on the HiDPI support, only Chromium and Safari supported it, as a non-standard extension originating from Internet Explorer, but now Firefox supports it too, so I got my hopes up that maybe I could remove all the workarounds... Except Pale Moon still doesn't support it. So now on modern browsers you get CSS zoom, and on Pale Moon you get a filter-based workaround, which is not perfect. Hopefully Pale Moon will add support for zoom soon as it's standard now, and I can remove this disgusting workaround, as it complicates the non-Pale Moon code too, but we'll see. It also lacks support for the CSS min()/max() functions...

Getting ready: nanoc?#

This blog is currently being built with nanoc. And while in general I like it, there are some problems with it which make using it increasingly annoying.

First is the text and binary item handling. Every file is either text or binary, specified only by its extension (so you can't do pattern matching on it), and they work completely differently (text files are read into memory by nanoc, while binary files are passed around as filenames). The thumbnail generator for the videos outputs a YAML file with info about the thumbs. YAML is a textual format, so it should be a text file, right? No! In nanoc, you can put metadata at the beginning of a file, for example to give the page a title or something, between a pair of --- headers. This is fine, except YAML files generated by Ruby also start with ---. And if you mark something as a text file, nanoc will try to parse these blocks no matter what, and bail out saying your YAML file is invalid since it can't find a second --- line. Nanoc's documentation even has a note about this error (of course the anchor won't go to the error, because having a free Palestine banner in a fucking software documentation is more important than having a documentation for the fucking software that fucking actually fucking works for anything other than fucking virtue signaling), and the solution is to add two more --- to the header. Yeah, I'm going to fuck up all the YAML files just so this idiot tool won't shit itself. In the end I went with the alternative of marking these YAML files as binary, because apparently in nanoc, text files can't start with ---.
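The clash is easy to reproduce: Ruby's YAML serializer emits the very same --- document-start marker that nanoc treats as its frontmatter delimiter (the hash contents below are just example data):

```ruby
require 'yaml'

doc = YAML.dump({ 'duration' => 125.4, 'columns' => 8 })
puts doc
# The output begins with the "---" document start marker -- the same
# delimiter nanoc's frontmatter parser looks for, so a text item
# starting like this trips it up.
doc.start_with?("---")  # => true
```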

The other problem with binary items is snapshots. Nanoc by default creates three snapshots for each item, which, while it can be useful at times, is in my experience completely unnecessary 99% of the time. And since binary items live in files, this involves making at least 4 copies of each binary file (the 3 snapshots, plus the file in the output dir). Of course this is not a big deal if you have a few kbyte sized files, but with video files weighing a couple hundred megabytes, these copies add up quickly. I've worked around this by monkey patching nanoc so it just symlinks the files instead of copying. (Newer nanoc versions are supposed to use reflink copies on BTRFS, so the data won't actually occupy disk space more than once, but it still blows up du output and gives a lot of extra work to file backup/syncing tools.)

The above is just annoying, but here comes the deal breaker: nanoc is ridiculously slow. Having nanoc just print the help message takes about 300 ms. I guess nanoc should be renamed to bloatc or slowc. A no-op rebuild (when nothing has to be rebuilt, and everything is cached in memory) takes about 4 seconds. A change in lib/ (which triggers a full rebuild), but without any change in the output, is about 33 seconds. Oh, and this is with ruby --jit; without JIT it's 4.4 s and 36 s (well, JIT doesn't help much). I didn't dare test what happens if I delete the output directory; it would probably take minutes. On a tiny blog with 19 posts. What would happen if I had 1000 posts? Would it take half an hour to compile?! With some hacks I managed to decrease the lib-touch rebuild time to 12.5 s, but I don't know what to do about the no-op rebuild. Nanoc's dependency tracker is just that slow.

Oh, and just a quick tip: don't write @items['foo']. While it looks nice, if nanoc can't find an item named exactly foo, it will switch into globbing mode and try to fnmatch EVERY FUCKING ITEM against your string which contains zero wildcard characters! So it's suddenly O(n) with a bigger constant instead of O(1). Nanoc gives you free hash flooding in a project where there is no untrusted input! The correct answer is the mouthful @items[Nanoc::Core::Identifier.new id]. Yeah, nanoc is already thrashing the GC, so this will help it for sure, but at least the algorithm stays O(1).
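To illustrate the cost, here is a simplified model of that fallback (not nanoc's actual code): an exact hash hit is O(1), but a miss degrades to fnmatch-ing every identifier, even though the "pattern" has no wildcards at all:

```ruby
items = {
  '/posts/foo.md' => :foo,
  '/posts/bar.md' => :bar,
}

def lookup(items, pattern)
  return items[pattern] if items.key?(pattern) # O(1) exact match
  # Fallback: glob against every identifier -- O(n), even when the
  # pattern contains zero wildcard characters.
  items.each { |id, item| return item if File.fnmatch(pattern, id) }
  nil
end

lookup(items, '/posts/foo.md')  # O(1) fast path
lookup(items, '/posts/*.md')    # O(n) glob path, scans everything
```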

What would I change to from nanoc? I don't know. It's not like nanoc's documentation has a "ridiculously slow" point under features, and with small toy sites I can't really test the speed of alternative static site builder tools. Of course, there's always the alternative of rolling my own tool, designed to be parallel from the get-go, but I don't want to do that yet. But then, I didn't want to write my own video player either...

Update 2025-12-26: while messing around with my nanoc replacement idea, I ran into a peculiarity about SHA-256 checksums in Ruby (command outputs truncated for brevity):

$ cat foo.rb
require 'digest/sha2'
puts Digest::SHA256.file(ARGV[0]).hexdigest
$ truncate -s1000 small
$ perf stat -r 10 sha256sum small
0.0009217 +- 0.0000444 seconds time elapsed  ( +-  4.81% )
$ perf stat -r 10 ruby foo.rb small
0.036914 +- 0.000413 seconds time elapsed  ( +-  1.12% )

OK, sha256sum is a utility written in C doing only one thing (calculating SHA-256 checksums), while Ruby needs to execute a bunch of code to set up the interpreter and load the modules. Still, 37 ms vs 0.9 ms is more than an order of magnitude slower. Let's try a bigger file, say 1 GiB:

$ truncate -s$((1024*1024*1024)) big
$ perf stat -r 10 sha256sum big
0.504410 +- 0.000431 seconds time elapsed  ( +-  0.09% )
$ perf stat -r 10 ruby foo.rb big
3.20031 +- 0.00901 seconds time elapsed  ( +-  0.28% )

Hmm? Ruby is still more than 6 times slower than sha256sum. So it's not just the interpreter startup overhead after all. Spawning sha256sum from Ruby and parsing its output is faster than using the built-in Digest::SHA256 class if your input is more than a kibibyte long...
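For the record, such a spawn-and-parse version could look like this (a sketch, not the site's code; it assumes the coreutils sha256sum is on PATH):

```ruby
require 'open3'

def sha256_external(path)
  out, status = Open3.capture2('sha256sum', '--', path)
  raise "sha256sum failed on #{path}" unless status.success?
  out.split.first # output format: "<hex digest>  <filename>"
end
```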

OK, next try. Ruby has an OpenSSL binding, let's try using it:

$ cat foz.rb
require 'openssl'
puts OpenSSL::Digest::SHA256.file(ARGV[0]).hexdigest
$ perf stat -r 10 ruby foz.rb small
0.051716 +- 0.000821 seconds time elapsed  ( +-  1.59% )
$ perf stat -r 10 ruby foz.rb big
0.58543 +- 0.00142 seconds time elapsed  ( +-  0.24% )

Loading the OpenSSL library has a much higher overhead, but its performance is comparable to the command-line sha256sum tool. Now, let me summarize this in a table. The added Ruby raw rows mean using Ruby's Benchmark module to do the measurements, thus ignoring the interpreter startup and require time.

Name              Small (ms)  Big (ms)
sha256sum              0.9        504
Ruby Digest           36.9       3200
Ruby OpenSSL          51.7        585
Ruby Digest raw        0.031     3120
Ruby OpenSSL raw       0.024      530

As you can see from the Ruby raw numbers, with the small input the above perf commands practically measured the Ruby interpreter startup time.
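The raw rows were measured roughly like this (a sketch using the stdlib Benchmark module; the scratch file here is small just to keep the snippet self-contained, while the post's numbers came from the 1000-byte and 1 GiB files above):

```ruby
require 'benchmark'
require 'digest/sha2'
require 'openssl'
require 'tempfile'

t1 = t2 = nil
Tempfile.create('bench') do |f|
  f.write('x' * 1_000_000) # 1 MB scratch file
  f.flush
  # Benchmark.realtime measures only the block, so interpreter
  # startup and require time don't show up in the numbers.
  t1 = Benchmark.realtime { Digest::SHA256.file(f.path).hexdigest }
  t2 = Benchmark.realtime { OpenSSL::Digest::SHA256.file(f.path).hexdigest }
end
puts format('Digest: %.3f ms, OpenSSL: %.3f ms', t1 * 1000, t2 * 1000)
```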

Why is this important? If you look at any image or video on this blog (not in this post, sorry), the URL will look something like /c/some random identifier/filename. The random identifier is the SHA-256 sum of the file, encoded in Base32, truncated to 10 characters. Before looking at the code, I could have sworn I used xxHash, but for some reason I went with SHA-256 (there's no untrusted input here, so a cryptographically secure hash is not needed), and of course it used Digest::SHA256. I've replaced it with OpenSSL in the site code, and a no-op rebuild with an empty digest cache but the files cached in memory went down from 43.2 s to 11.1 s. Seems like an impressive number, but does it matter in practice? I've already noticed previously that re-hashing the files all the time is a performance hog, so my code caches all these hashes. So generally it should only matter for clean rebuilds, but I almost never do those; nanoc is smart enough to not mess up incremental builds.
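For illustration, deriving such an identifier could look like this (a sketch: the exact Base32 alphabet, casing, and padding the site uses are my assumptions):

```ruby
require 'openssl'

# RFC 4648 Base32 alphabet, lowercased -- an assumption, not
# necessarily what the blog actually uses.
ALPHABET = 'abcdefghijklmnopqrstuvwxyz234567'

def content_id(path, len = 10)
  digest = OpenSSL::Digest::SHA256.file(path).digest
  bits = digest.unpack1('B*') # 256-character string of '0'/'1'
  # Base32: one alphabet character per 5 bits, then truncate.
  bits.scan(/.{5}/).map { |c| ALPHABET[c.to_i(2)] }.join[0, len]
end
```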

And what about xxHash? I could decrease the above-mentioned no-op rebuild to 6.2 s, at the expense of changing the location of every file. I'm not sure it's worth it yet, especially now that my rebuild times are getting somewhat bearable again (it's still annoying that if I save a markdown file, it takes about 4 seconds until I can see the result in a browser, but that's not because of this). End update.
