Warning! Your devicePixelRatio is not 1 (or your browser doesn't support CSS media queries, in which case you can probably ignore this warning)!
Since HTML is as braindead as it can get, creating sites outside of the corporate art sphere is damn near impossible, so this site is not really optimized for HiDPI screens.
Some images might appear blurry.
2023 updates
I've made some changes to the blog, and I'll try to summarize them here for the grand total of zero readers this blog has. This is going to be technical, and mostly a rant, so feel free to ignore it.
It all started with me trying to write a new article, when I wanted to put timecodes into the text. For optimal usage, they should be clickable, and the video should just jump there, right? Unfortunately I don't think that can be done without javascript (especially if you have two videos and you want to jump to the one the user viewed last...), so I quickly realized I'd need to bite the bullet and add some javashit to the page. Don't worry, everything should still work with JS disabled (except for one small thing with the videos, more on that later).
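A minimal sketch of what that takes (the data-timecode markup convention here is made up for illustration, not necessarily what this site actually uses):

// Make every <a data-timecode href="#some-video"> seek its target video.
document.querySelectorAll('a[data-timecode]').forEach(link => {
  link.addEventListener('click', event => {
    event.preventDefault();
    const video = document.querySelector(link.getAttribute('href'));
    if (!video) return;
    video.currentTime = Number(link.dataset.timecode); // seconds
    video.play();
  });
});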
So I looked around at various open source JS video players; in the end I chose plyr, and I started to play around with it...
and I realized that it's the usual web-developer quality shit (i.e. utter garbage).
And a bloated mess.
I've looked at a few other alternatives, but they're either proprietary, have horrible licenses, or lack the features I'd need.
So I downloaded the plyr repo, removed most of the bloat and spyware (sorry, I can't categorize YouTube and advert integration as anything but spyware), and fixed some bugs (I needed to add a webcomponents polyfill so it works in Pale Moon; I have no idea how this shit works in IE11 when the recommended polyfill doesn't include a webcomponents polyfill)...
but at this point I was already running down the rabbit hole.
Its skin looks horrible (typical modern garbage that tries to look as much like Android as possible); it can be customized through some CSS variables, but you can barely use them for anything besides recoloring.
Like, I wanted to get rid of that huge and pointless padding around the controls (it's not fucking Chrome). It looks like it can be set by --plyr-control-spacing...
but if I set that, it also sets the padding between the controls, so if I set it to zero, the controls get squished together.
Of course, one small CSS override can fix this, but to fix every problem, I'd likely need to write so many overrides (or edit the source style) that it would amount to a rewrite.
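For the record, the kind of override I mean is tiny. A sketch, assuming plyr's .plyr__controls class and injecting it from JS (untested):

// Drop the outer padding of the control bar, but leave
// --plyr-control-spacing (and thus the gap between controls) alone.
for (const bar of document.querySelectorAll('.plyr__controls')) {
  bar.style.padding = '0';
}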
Sigh.
Gallery#
So anyway, while I was venting about how shit this player is, I remembered that I always wanted a gallery that's a tiny little bit more usable than just opening images in new tabs. Originally I didn't have a gallery, because doing it in pure CSS is next to impossible (especially if you want it to have features), but now that I have JS...
And this is where things went wrong.
Of course, I could have used some premade gallery, injecting 15 gigabytes of leftpad-like dependencies that, of course, fail spectacularly in anything but the latest chrome/firefox/safari...
but instead, I decided to write my own.
I've had to write some basic JS to get that player working anyway.
So down the rabbit hole again.
I don't want to write too much about this; it was probably the most fun part of this whole thing.
Yes, javashit is a braindead language, but unfortunately I've come to the conclusion that the whole ecosystem built around it is even worse.
We've reached the (not exactly illustrious) milestone where javashit is not the worst aspect of web development.
So just click on the images and enjoy the full-page gallery with some unique features, like integer scaling, provided you have JS enabled and your devicePixelRatio is 1.
HiDPI garbage#
It probably doesn't work. See, some time ago, when they started to create displays with higher densities, they realized that websites that used px when they should have used em and friends would be unreadably small on those shiny new screens.
The solution?
Just change the definition of the pixel.
In CSS, px no longer means one pixel. It's an abstract unit that can mean anything from 1e-308 pixels to 1e308 pixels. And usually it's not an integer.
So, what can you use to get something that's exactly 1 pixel?
Nothing.
I mean, look at this stackoverflow question.
Javashit code that fills the screen just to create something that's 100x100 pixels large.
Really?
Are you fucking nuts?
This is the fucking state of the fucking web in 2023: you can't even make something that's 100x100 pixels large.
They just say use SVG.
Yeah, please give me a time machine, so I can tell people in 1989 to use SVG when you have a few MHz CPU with less than 1 MiB of RAM.
And you wonder why every modern crap looks the same; this whole fucking technology is incapable of creating anything more complicated than single-colored rectangles, and you have no control over the size of said rectangles.
I have some hack in the gallery code to compensate for non-1 devicePixelRatio values, but I'm pretty sure it will break in some browsers.
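The gist of the hack is something like this (a simplified sketch, assuming the browser maps CSS px to device px by plain multiplication, which is exactly the assumption that can break):

const dpr = window.devicePixelRatio;

// Snap a CSS length to a whole number of device pixels.
function snapToDevicePx(cssPx) {
  return Math.round(cssPx * dpr) / dpr;
}

// Integer scaling: the largest integer zoom of an imageWidth-pixel-wide
// image that still fits into the viewport, measured in device pixels.
function integerScale(imageWidth, viewportCssWidth) {
  return Math.max(1, Math.floor(viewportCssWidth * dpr / imageWidth));
}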
And doing this in pure CSS is damn near impossible: you have dppx, which is supposed to represent how many (real) pixels you have in one (fucked-up CSS) pixel, but you can only use it in (range) media queries.
I was thinking about generating a file, like
@media (min-resolution: 1.000000dppx) { :root { --dppx: 1.000000 } }
@media (min-resolution: 1.000001dppx) { :root { --dppx: 1.000001 } }
@media (min-resolution: 1.000002dppx) { :root { --dppx: 1.000002 } }
@media (min-resolution: 1.000003dppx) { :root { --dppx: 1.000003 } }
@media (min-resolution: 1.000004dppx) { :root { --dppx: 1.000004 } }
@media (min-resolution: 1.000005dppx) { :root { --dppx: 1.000005 } }
@media (min-resolution: 1.000006dppx) { :root { --dppx: 1.000006 } }
@media (min-resolution: 1.000007dppx) { :root { --dppx: 1.000007 } }
@media (min-resolution: 1.000008dppx) { :root { --dppx: 1.000008 } }
@media (min-resolution: 1.000009dppx) { :root { --dppx: 1.000009 } }
@media (min-resolution: 1.000010dppx) { :root { --dppx: 1.000010 } }
@media (min-resolution: 1.000011dppx) { :root { --dppx: 1.000011 } }
@media (min-resolution: 1.000012dppx) { :root { --dppx: 1.000012 } }
@media (min-resolution: 1.000013dppx) { :root { --dppx: 1.000013 } }
@media (min-resolution: 1.000014dppx) { :root { --dppx: 1.000014 } }
@media (min-resolution: 1.000015dppx) { :root { --dppx: 1.000015 } }
@media (min-resolution: 1.000016dppx) { :root { --dppx: 1.000016 } }
@media (min-resolution: 1.000017dppx) { :root { --dppx: 1.000017 } }
@media (min-resolution: 1.000018dppx) { :root { --dppx: 1.000018 } }
@media (min-resolution: 1.000019dppx) { :root { --dppx: 1.000019 } }
(Note: these have to be min-resolution rules in ascending order, so the last matching one wins.) Do it from 0 to 10, and you have a 640 MiB CSS file (but only 48 MiB after gzip compression!). But that's huge, the precision is limited, and it would probably DoS every browser. Another idea was to create something like this:
@import "x.css?l=0&h=5" max-resolution: 5dppx;
@import "x.css?l=5=h=10" not max-resolution: 5dppx;
and have a server-side component that generates these x.css files, doing a kind of binary search for the value of the resolution.
It should be more accurate than the big pregenerated list above and consume way less bandwidth, but it would increase latency a lot, and the site would no longer be static.
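A sketch of what that server-side component could look like (entirely hypothetical, and note that the ~23 levels of nested @import needed for 1e-6 precision might well hit browser limits):

const http = require('http');

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost');
  const l = parseFloat(url.searchParams.get('l') ?? '0');
  const h = parseFloat(url.searchParams.get('h') ?? '10');
  const m = (l + h) / 2;
  res.setHeader('Content-Type', 'text/css');
  if (h - l < 1e-6) {
    // The range is narrow enough: emit the final value.
    res.end(`:root { --dppx: ${m.toFixed(6)} }`);
  } else {
    // Recurse: each level of @import halves the [l, h) range.
    res.end(`@import "/x.css?l=${l}&h=${m}" (max-resolution: ${m}dppx);\n` +
            `@import "/x.css?l=${m}&h=${h}" not all and (max-resolution: ${m}dppx);\n`);
  }
}).listen(8080);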
So TL;DR if you use a HiDPI screen, you're fucked.
Image formats#
Up until now I generally used WebP, except when PNG or JPG was better.
In the meantime, JXL gained some traction, and I also looked into HTML's <picture> element, so in the end I got rid of Google's shilled image format.
The more I look at it, the more I realize how utterly crap it is.
First, it only supports YUV420 in lossy mode (even though VP9 supports YUV444), which means it's completely unsuitable for anything that's not a high-resolution photograph.
I tried to use it for some thumbnails, but quality-99 WebP produced worse results than quality-70 JPG at twice the file size.
So basically you can only use the lossless compression.
There's a "near lossless" mode, which uses the lossless mode of WebP, but preprocesses the image, so it compresses better.
That's like the only sane way to use WebP if you don't encode a photograph, but since it still uses the lossless mode, the files encoded this way are still pretty big.
And plain pngquant often produces better results anyway.
And the second problem is the lack of any interlacing/progressive image support. Google says it's to reduce CPU usage, but that whole answer sounds like PR bullshit. You don't have to refresh the screen with each byte received, you fucking dimwits. Incremental decoding is not an alternative; it's a standard feature of fucking every image decoder used in fucking every browser, and it just means that if you have a half-downloaded image, you'll see the upper half of it. Not the low-resolution preview you get with interlaced images. You managed to kill a feature that JPG and PNG have supported for more than 30 years. You managed to destroy the user experience on slower internet connections. And this is a format that was supposedly created for the web. Is there anyone left with a positive IQ score at Google!? Sigh.
So, anyway, I exterminated all WebP images from this site; everything should be available in bog-standard PNG or JPG (depending on the image), and JXL (except in the few corner cases where JXL ended up bigger than JPG/PNG). Images that are visible in the HTML pages should be interlaced now, except in a few corner cases with pixelart images, where they only weigh a few kibibytes and the interlaced files ended up being 2x the size of the non-interlaced ones (interlacing shouldn't matter much in these cases, even if you're on a dialup connection). Unfortunately interlacing usually blows up file sizes (except in JPEG/lossy JXL, where progressive coding usually produces smaller files), so as a compromise, linked images are not interlaced.
Video formats#
I also started to play around with AV1 videos. Since I have that plyr thing now, I can have multiple videos, and the user can select which one to play (this is the only thing that currently doesn't work well in the non-JS version: the browser just picks the first source it can play, with no option to select another one).
And now comes the ugly part.
I have to admit, I had some invalid preconceptions.
Like, FFmpeg can be used for encoding videos.
But I'm getting ahead of myself.
There are three AV1 encoders in FFmpeg: libaom, svt-av1 and rav1e.
From a quick look, SVT-AV1 seemed to be the best one, and the one being actively developed, so I checked that first.
Of course, half of the options are not available on the FFmpeg command line, but fortunately you have -svtav1-params to specify arbitrary parameters... if you have FFmpeg 5.
And here comes the problem: FFmpeg developers have never heard of API and ABI stability; they're constantly breaking the API even with minor releases, so you can imagine the fuckup that comes with a new major version.
So even Gentoo unstable is still on FFmpeg 4.3, to avoid breaking the whole world.
In the end I backported the patch that adds -svtav1-params to FFmpeg, so I can actually use this shit.
And man, it's slow.
VP9 is also slow, but 0.3 FPS encoding really kills my mood.
So I started looking around the net and found a random forum message somewhere (sorry, don't remember where) where a guy casually mentioned encoding in chunks, like it was some super common knowledge.
Well, maybe; I searched for it a bit but hardly found anything, so I just sat down and wrote my own script.
FFmpeg has a segment "muxer", which splits the output into multiple chunks at keyframes.
Wonderful, exactly what I need, since svt-av1 doesn't have proper scene change detection for keyframe placement.
So the plan: convert the video to lossless x264 (because x264 has, IMHO, the best compromise between encoding speed and compression), split it at keyframes, encode the chunks in parallel, then concatenate them.
I quickly wrote a bare-bones bash script that I copy-pasted all over the place, but it did the job.
(Later I rewrote it in ruby, because if I run multiple commands in parallel, their outputs get mixed together, and I wanted to do something about it...)
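The skeleton of that script looks something like this (a from-memory sketch in JS rather than the actual bash/ruby; file names and encoder settings are made up, and audio handling is omitted):

const { execFile } = require('child_process');
const { promisify } = require('util');
const fs = require('fs');
const os = require('os');
const run = promisify(execFile);

async function main() {
  // 1. Lossless H.264 intermediate, split into chunks at keyframes.
  await run('ffmpeg', ['-i', 'input.mkv', '-an', '-c:v', 'libx264',
    '-qp', '0', '-preset', 'ultrafast',
    '-f', 'segment', '-segment_time', '5', 'chunk-%04d.mkv']);

  // 2. Encode the chunks to AV1 in parallel, one job per CPU.
  const chunks = fs.readdirSync('.').filter(f => /^chunk-\d{4}\.mkv$/.test(f)).sort();
  const queue = [...chunks];
  const worker = async () => {
    for (let c; (c = queue.shift()); ) {
      await run('ffmpeg', ['-i', c, '-c:v', 'libsvtav1', '-crf', '50', 'av1-' + c]);
    }
  };
  await Promise.all(Array.from({ length: os.cpus().length }, worker));

  // 3. Concatenate the encoded chunks with the concat demuxer.
  fs.writeFileSync('list.txt', chunks.map(c => `file 'av1-${c}'`).join('\n'));
  await run('ffmpeg', ['-f', 'concat', '-safe', '0', '-i', 'list.txt',
    '-c', 'copy', 'output.mkv']);
}

main();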
So, everything looks fine, let's try to use it...
This piece of shit doesn't support YUV444 or RGB.
What the fucking what?
FFmpeg's wiki page mentioned that svt-av1 doesn't support lossless encoding, but they forgot to mention this "little" shortcoming.
Anyway, this means I'll have to use libaom instead.
And figure out the options, performance characteristics, and quality of a different encoder.
I'm back at square one.
And even though FFmpeg has -aom-params, many options that are available in the aomenc command line program are not available through it or other FFmpeg params.
Why?
Because libaom is fucking awful code: there are like 4 different structs to store configuration and I don't know how many ways to set it, each covering a random subset of the possible options.
There's a 4355-line file in the codebase which only deals with converting the options from one struct to another.
It's such a copy-paste mess that it hurts to look at it.
But at least they use clang-format, so it randomly changes the formatting of the nearly identical repeated lines, giving you some variation while reading them.
(No, this is not praise; this is not a fucking poem, this is supposed to be a functional piece of software.)
And I guess they simply forgot to add some options to the code that handles the string-based options (encoder_set_options in av1/av1_cx_iface.c, if you want to see some horrific code)...
Or maybe you're just supposed to use one of the other 4 methods to set those options, but FFmpeg doesn't have that implemented, so I just went with the simple way and extended this disgusting spaghetti code a bit to add the options I needed.
And I don't know how many hours of swearing later, I had a script that created nice AV1 files for all the videos I needed.
Except one thing.
The segment muxer.
See, it has a -segment_time option to set the length of a segment, but since it can only split at keyframes, I thought it worked like a minimum segment length.
WRONG!
DID YOU REALLY EXPECT FFMPEG TO NOT DO SOMETHING UTTERLY BRAINDEAD FOR ONCE?
No, that's impossible.
What it does instead is try to cut a segment every n seconds, so for example with -segment_time 1, it tries to cut the first segment at 1 second, the second at 2 seconds, the third at 3 seconds, even if you don't have keyframes there.
So if your first keyframe is at 5 seconds, followed closely by two more keyframes, you'll end up with a five-second segment, then 1-frame segments.
Basically, if your -segment_time is smaller than your average keyframe distance, you'll get a new chunk at every keyframe.
And x264's scene detection went crazy on some videos with flashes and quick cuts, so I ended up with a bunch of chunks only a few frames long (often that "few" was 1).
-keyint_min is supposed to set the minimum keyframe interval, but it didn't work either.
There's a -min_seg_duration option for the segment muxer, but my FFmpeg version was too old to have it.
Plus I'd rather set the number of frames, not time (I use variable framerate, so time is not a useful unit).
The only thing that works with frames is the -segment_frames option, but it requires me to supply, in advance, all the frame numbers where I want to split the video.
I have no fucking idea where I want to split the video until I've encoded it to H264.
Maybe I could have written yet another workaround: write a single H264 file first, dump the keyframes with ffmpeg, then figure out where I need those keyframes; but at this point I was like, one patch more or less doesn't make a difference.
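For what it's worth, that workaround wouldn't even be much code. A sketch (glossing over the fact that ffprobe lists packets in decode order, which I'm pretending equals frame numbers; the minimum chunk length is made up):

const { execFileSync } = require('child_process');

const MIN_FRAMES = 120; // hypothetical minimum chunk length, in frames

// One CSV line per video packet: "pts_time,flags" (a K flag marks keyframes).
const lines = execFileSync('ffprobe', [
  '-v', 'error', '-select_streams', 'v:0',
  '-show_entries', 'packet=pts_time,flags',
  '-of', 'csv=print_section=0', 'intermediate.mkv',
]).toString().trim().split('\n');

// Greedily keep keyframes that are at least MIN_FRAMES after the last cut.
const splits = [];
let last = 0;
lines.forEach((line, frame) => {
  if (line.includes('K') && frame - last >= MIN_FRAMES) {
    splits.push(frame);
    last = frame;
  }
});
console.log(splits.join(',')); // ready to paste into -segment_frames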
I guess I should start opensourcing this shit, maybe someone will find something useful here.
Are we done yet?
Nah.
I'm breaking the chronology here, but during the process I realized that AV1 is supposed to support RGB video, so maybe I should use that instead of the lossy YUV conversion.
Chromium and Firefox played back RGB videos fine, but Pale Moon (and some other Firefox forks, like Waterfox Classic) didn't.
Because Pale Moon is the browser I care about the most, in the end I fixed it to support RGB videos (and a bit more).
I don't want to write too much about this, it was merged upstream, and should be part of the next release.
Not sure about other browsers out there; Waterfox Classic looks dead, and the others I tested either didn't support AV1 at all or already supported RGB.
(Note: in Pale Moon, you need media.av1.enabled set to true in about:config for the time being.)
Video thumbnails#
One nifty feature I really like about this new JS player crap is that you can have thumbnails as you hover over the seek bar.
The implementation... not so much.
The plyr documentation doesn't really mention any way to generate them; ffmpeg should be able to do it, but it's less than ideal.
Its tile filter requires you to specify the number of rows and columns in advance, which means you need to know the number of screenshots in advance.
So first I created another ruby script that tells FFmpeg to output thumbnails to image2pipe, and then creates the tiled image with the correct size at the end.
But then I also realized that while using scene detection to pick thumbnails sounds good in theory, when I tried to apply it to the videos I've uploaded here, I ended up with either 1 or 386 screenshots per video.
(Actually makes sense, the gameplay videos hardly have any scene change, while euphoria's and tasogare's intro videos generate shitloads of "scene changes".)
So in the end I went with simply creating screenshots every N seconds, but for that I need to know the length of the video (I want screenshots every 5 seconds, but I want to make sure that every video has at least 10 and at most 100 screenshots, so I need to adjust the interval based on the video length).
Ironically, this way I actually don't need this dynamic tile sizing (as I know the number of screenshots I'll have in advance), but I kept it.
Also, I'm not satisfied with the current situation, so it might change in the future, and in that case that code might be useful again.
(For example, the last ~2.5 minutes of the tasogare's intro video is a still image, yet it now has that single image thumbnailed 33 times.)
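For reference, the interval logic described above boils down to a few lines (a sketch; the bounds come from the text, the function name is made up):

// Aim for a screenshot every 5 seconds, but clamp the total count
// to [10, 100] and derive the actual interval from the clamped count.
function thumbInterval(durationSec) {
  const count = Math.min(100, Math.max(10, Math.round(durationSec / 5)));
  return durationSec / count;
}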
What's next?#
Dunno. There are 19327849233292 things I want to improve, but I usually get bored quickly and stop. I really don't like the video player, so I might do something about it. I've written some JS code by hand, but I'm not sure whether I want to continue like this. JS is a fucking abomination; I'd like something saner, maybe with static type checking. I might look into transpilers, but in that case I'll complicate the build process of this site even more, and they usually generate shitloads of bloat. Also, it doesn't help that most JS transpilers expect you to have a full-blown nodejs setup ready, and I really don't want to integrate all that crap into my nanoc site.