An oldie-but-goodie article with charts comparing WebP, JPEG XL, AVIF, JPEG, etc. AVIF is SLOW.
Wow. Nice. Big improvement if JPEG and PNG can be replaced by one codec.
JPEG XL is not that massive.
JPEG XL spec is slightly less than 100 pages, about half the size of the JPEG1 spec.
A simple implementation, j40, was around 7000 lines of code last time I looked; I'm not sure if it is 100% complete, however.
A simple encoder, libjxl-tiny, is of similar size and is very attractive for expressing similar coding decisions in hardware intended for digital cameras.
A complex, speed-optimized C++ decoder implementation is ~35000 lines of code, but much of that is not due to the spec; it comes from getting the most out of SIMD-powered multi-core computers.
The binary size increase in Chromium on arm for adding (in the past) the C++ decoder was around 200 kB in APK size, possibly around 0.1 %.
Works for me with Qubes OS.
This is in jest, but those are my pain points - the AMD ThinkPad I have can't run it, and the Intel one melts YubiKeys when decoding h264 video. The default lock screen can't read capital letters from the YubiKey's static password entry. Qubes caters to a certain kind of user; I really wish they could get enough money to cater to more use cases. It is not difficult to use if it works for you.
> Do you hate using most hardware?
Nobody uses "most hardware". You may be unlucky with your hardware; then it's a problem. Or you can specifically buy hardware that works with the OS you want.
> Do you like using Xorg?
What's wrong with Xorg?
Lock screens that crash. Lock screens that can't handle input from a YubiKey.
It's slow for tasks requiring GPU, but allowing GPU for chosen, trusted VMs is planned: https://github.com/QubesOS/qubes-issues/issues/8552
Escapes are not the only vulnerability. QSB-108 allows for reading the memory of other qubes running on the host[1].
Speculative side-channel attacks have nothing to do with the OS or compartmentalization technology, since they are a CPU problem. Nothing can help here, so this is irrelevant to this discussion. Except that Qubes Air will save you in the future: https://www.qubes-os.org/news/2018/01/22/qubes-air/
So are bubblewrap escapes, which is the sandbox flatpak uses.
> the first vulnerability is not a complete escape.
It could potentially lead to one, and being able to obtain information from other VMs defeats much of the point of isolation - and thus much of the reason people use Qubes.
> For example, any offline vault VM storing secrets stayed secure. This is just not happening with any other security approach.
That's not true. Strong MAC would suffice, no VT-d needed.
> Speculative side-channel attacks have nothing to do with the OS or compartmentalization technology
Of course they do; in fact, they have more to do with it than solutions like Flatpak, which is why Qubes releases security advisories and patches to address those vulnerabilities.
> So are bubblewrap escapes, which is the sandbox flatpak uses.
Not only are they much more frequent - including possible kernel privilege escalations, which do not affect Qubes - but the bubblewrap repository itself says that you have to be really careful to stay secure with it, even in the absence of vulnerabilities. This is not what people should seriously rely on. Again, my secrets in the vault VM have been safe since the introduction of VT-d in Qubes 4.0 in ~2021. There is no comparably secure OS in the world.
I don't understand your unsubstantiated attack on Qubes.
> and being able to obtain information from other VMs defeats much of the point of isolation
It does not. Even if a VM becomes hostile and starts reading RAM, it will not get any privileges in any other VM. Also, it can be easily cleaned. Also, you can just stop all VMs when performing a secure operation. Tell me how you protect yourself in such a case with Flatpak.
No, that's simply not the case.
> not affecting Qubes,
Maybe, but Qubes would still be vulnerable to kernel vulnerabilities even if they didn't allow VM escape - anything in the disposable VM would be at risk.
> the bubblewrap repository itself says that you have to be really careful to stay secure with it, even in the lack of vulnerabilities.
Source? I assume they are referring to misconfigurations.
> There is no comparably secure OS in the world.
You've said before that you don't have a lot of security knowledge, and it continues to show. Qubes is one specific approach to a problem, not suitable for all goals; it's useful for hobbyists who use browsers and such. Anything in the disposable VM is still at risk.
SEL4, ASOS and CuBit are all more secure than Qubes. Qubes doesn't offer any more security than having a bunch of different machines to do different tasks on. Not even airgapped. If the machines have a vulnerability, then whatever is on the machine is fair game.
> I don't understand your unsubstantiated attack on Qubes.
There is no attack, I'm just refuting your preposterous zealotry for it. It's fine for what it is, but you make it much more than what it is. The developers of Qubes would absolutely disagree with your claims.
> Even if a VM becomes hostile and starts reading the RAM, it will not get any privileges in any other VM.
That depends entirely on the vulnerability.
You keep repeating this without providing any actual statistics. I provided statistics about Qubes vulnerabilities, https://www.qubes-os.org/security/xsa/. Show me the numbers please.
> anything in the disposable VM would be at risk.
This just shows that you don't understand the security approach of Qubes. You do not store anything important in a disposable. You run it specifically for one task of opening something untrusted and then it's destroyed. It's in the name: Disposable. Moreover, nothing prevents you from running Bubblewrap inside Qubes. Then one single VM will be as secure as your whole setup, and in addition, you get reliable isolation.
> Source? I assume they are referring to misconfigurations
You never give any actual reference, only I have to. Here you go: https://github.com/containers/bubblewrap.
> bubblewrap is not a complete, ready-made sandbox with a specific security policy.
> As a result, the level of protection between the sandboxed processes and the host system is entirely determined by the arguments passed to bubblewrap.
> Everything mounted into the sandbox can potentially be used to escalate privileges.
This is not a robust system designed for security first. You can use this to be (much) more secure than otherwise, but it's not a security-oriented design, unlike Qubes.
> Anything in the disposable VM is still at risk.
Which means nothing. A disposable can't store anything; it's destroyed every time you stop it.
> You've said before you don't have a lot of security knowledge and it continues to show.
I could say the same about you. You keep repeating myths about Qubes OS based on misunderstandings of its security approach. I don't have to be a security professional to understand simple concepts. Qubes is not an OS made for professionals but for users.
> Qubes doesn't offer any more security than having a bunch of different machines to do different tasks on.
Yes, it does: https://doc.qubes-os.org/en/latest/introduction/faq.html#how...
> SEL4, ASOS and CuBit are all more secure than Qubes.
Do I have to trust you on this, or do you have any reasonable reference to security people? You don't even provide your threat model when saying this, which clearly shows how amateur your approach to security is.
> I'm just refuting your preposterous zealotry for it
Relying on professionals in the field is not zealotry. You, in contrast, show exactly that. I see no references.
> The developers of Qubes would absolutely disagree with your claims.
This is plain false:
https://doc.qubes-os.org/en/latest/introduction/faq.html#wha...
https://doc.qubes-os.org/en/latest/introduction/faq.html#how...
https://doc.qubes-os.org/en/latest/introduction/faq.html#wha...
https://doc.qubes-os.org/en/latest/introduction/faq.html#why...
You can find this yourself. For any software running in the guest OS, you can look up its security history.
> This just shows that you don't understand the security approach of Qubes. You do not store anything important in a disposable. You run it specifically for one task of opening something untrusted and then it's destroyed. It
I understand it perfectly, but you seem to be missing my point. Yes, the qubes are disposable, but you need to have information in them while you are using them, yes? So, you make a new qube to do your taxes; your tax information is in the qube because you need it to do that. While the qube is running, if it is vulnerable, then that information is at risk. I get that it is no longer at risk once the qube is destroyed, but that is irrelevant to my point.
Consider an example: back in 2024, if you were running SSH in a qube for some reason, you would likely have been vulnerable to the regreSSHion vulnerability. Sure, an attacker could only access what was on the disposable VM, but that could still be a lot.
> You never give any actual reference, only I have to. Here you go: https://github.com/containers/bubblewrap.
This source doesn't support your claim.
> This is not a robust system designed for security first. You can use this to be (much) more secure than otherwise, but it's not a security-oriented design, unlike Qubes.
Neither is Qubes. It's designed for specific use cases, and it doesn't do much to protect the information within a qube aside from destroying the qube when it's disposed of.
> Which means nothing. Disposable can't store anything, it's destroyed every time you stop it.
It's at risk while the VM is running, which is the point.
> Yes, it does: https://doc.qubes-os.org/en/latest/introduction/faq.html#how...
No, it doesn't. Those points are rather nonsense. Malware that can bridge airgapped systems? Sure, if you have a compromised USB stick and stupidly run something from it, I guess. The disposable VM would be at risk also.
> Do I have to trust you on this, or do you have any reasonable reference to security people? You don't even provide your threat model when saying this, which clearly shows how amateur your approach to security is.
You have no security knowledge at all, though, you just repeat your chosen solution because it's FLOSS. It makes this discussion very frustrating. Do you understand anything about capabilities, mandatory access controls or formal verification?
> Relying on professionals in the field is not zealotry.
You are exaggerating claims you can't back up, in a field you don't understand, because the software meets your only real criterion: being FLOSS. That is absolutely zealotry.
> This is plain false:
Not only do your links not support your exaggerated claims at all - meaning I am correct that the author would absolutely not agree with you - but the FAQ entry dismissing formal verification and safe languages refers to a paper from 2010, back when Rust didn't even exist. You might not know this, but the tech world moves pretty fast...
Do me a favor, spend some time with your favorite FLOSS AI and ask it why SEL4 would be considered superior to Qubes from a security perspective.
You also reply to my references with shallow dismissals with no substance, presented as fact ("Not only do your links not support your exaggerated claims at all").
You give examples of how Qubes can't save you from absolutely everything. It's true. Yet your original claim is that Flatpak is similarly secure and you failed to explain how it would protect from the same problems.
> spend some time with your favorite FLOSS AI
They do not exist, only open-weight ones do.
Why is there a need for references? Do you not understand how VMs work? Do you dispute that software running in the VM can be vulnerable?
> You also reply to my references with shallow dismissals with no substance, presented as fact ("Not only do your links not support your exaggerated claims at all")
Because your 'references' don't support your claims, it's that simple. You can't just copy and paste links and act like you have provided evidence when the links don't match. Your claim doesn't appear on the Bubblewrap github page at all.
> Yet your original claim is that Flatpak is similarly secure and you failed to explain how it would protect from the same problems.
Vulnerable software running in a Bubblewrap sandbox and in a Qubes VM are both similarly vulnerable to software vulnerabilities, and it is unlikely an attacker would be able to escape the sandbox or the VM. I grant that escaping the sandbox is easier and more common, but not by much.
Your first key point was that Bubblewrap vulnerabilities happen all the time, and you've yet to support that. The only 'reference' you provided was to the Bubblewrap github page.
> They do not exist, only open-weight ones do.
And of course you don't trust anything that isn't FLOSS, right?
This is a weird threat model. You trust some website with your personal information, but you don't trust that the images it embeds will not attack you. Nothing will save you here except switching off image display, which you can also do on Qubes.
I would say, if they really embed malicious images, then they probably have other problems with security, which nothing you run can help with.
Or having a trustable image decoder, which is what web browsers actually do. This is a basic requirement that you are proposing to do away with by instead not showing images at all.
This may never exist, since all software has bugs. Instead, you can isolate the opening of your pictures into a different VM, keeping your other VMs safe.
> what web browsers actually do
Haven't we seen related vulnerabilities?
It's existed for years. https://chromium.googlesource.com/chromium/src/+/HEAD/third_...
Similarly, the JPEG XL decoder Chromium integrated is written in Rust, eliminating large classes of exploitable errors.
> Haven't we seen related vulnerabilities?
Repeatedly. That's why browser vendors are careful about adding new image decoders, and no, Qubes does not solve the problem.
By one? Ten, maybe: WebP, AVIF, ...
Note that in that figure the formats are compared at the same SSIMULACRA2 score, not at the same file size. In the "very low quality" category, JPEG uses ~0.4 bpp (bits per pixel), while JPEG-XL and AVIF use ~0.13 bpp and ~0.1 bpp, respectively, so JPEG is roughly given 4 times as much space to work with. In the "med-low quality" category, JPEG-XL and AVIF use around 0.4 bpp, so perhaps you should compare the "very low quality" JPEG with "med-low quality" JPEG-XL and AVIF.
After reading your comment, I assumed you had missed the bpp difference. Please excuse me if I assumed incorrectly.
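To make the bpp gap concrete, here is a rough sketch of the implied byte budgets (assuming a 1920x1080 image; the bpp values are the approximate chart figures, so the exact numbers are illustrative only):

```rust
fn main() {
    // File size implied by a bits-per-pixel figure:
    // bytes = width * height * bpp / 8
    let bytes = |bpp: f64| (1920.0 * 1080.0 * bpp / 8.0) as u64;

    let jpeg = bytes(0.4);  // "very low quality" JPEG
    let jxl = bytes(0.13);  // "very low quality" JPEG XL
    let avif = bytes(0.10); // "very low quality" AVIF

    println!(
        "JPEG: {} kB, JXL: {} kB, AVIF: {} kB",
        jpeg / 1000,
        jxl / 1000,
        avif / 1000
    );

    // At equal SSIMULACRA2 score, JPEG gets roughly 3-4x the
    // byte budget of the other two in this category.
    assert!(jpeg > 3 * jxl);
    assert!(jpeg >= 4 * avif);
}
```

In other words, "same quality" in that figure does not mean "same cost"; JPEG is simply allowed to spend far more bytes.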
If the encoder has obvious problems, it is not a big deal, but it doesn't bode well for the decoder.
That's not a great bar since both of them showed up around the same time. And importantly JXL hits many use cases that AVIF doesn't.
> while being written in an unsafe language
They put little emphasis on that part when they were rejecting JXL. If they wanted to call for a safer implementation they could have done that.
Concerns about the implementation only came up after years of pushback forced Google to reconsider.
I think for most modern software it's difficult to name the creator, but if you had to for WebP, it would be hard to argue that it's anyone but Jyrki Alakuijala - who is in fact one of the co-creators of JPEG XL, and the person backing the long-term support of the Rust jxl-rs implementation - so I'm not even going to ask for a source here, because it's just not true.
On2 Technologies had designed the lossy format and its initial encoder/decoder. Skal improved on the encoder (rewriting it for better quality, inventing workarounds for the YUV420 sampling quality issues), but did not change the format's image-related aspects that On2 Technologies had come up with for VP8 video use.
In the end stage of lossless productization (around February 2012) Skal had minor impact on the lossless format:
1. He asked for it to have the same size limitations (16383x16383 pixels) as lossy.
2. He wanted to remove some expressivity to give hardware implementations an easier time, at perhaps a 0.5% hit on density.
Skal also took care of integrating the lossless format into the lossy as an alpha layer.
Well, it is up to you to decide. The link was submitted a dozen times on HN and the whole thing was well reported. And Jyrki Alakuijala has already clarified his creator status.
They deliberately made up a flawed test to show AVIF is better than JPEG XL, when most evidence shows the contrary.
https://github.com/search?q=repo%3Alibjxl%2Fjxl-rs%20unsafe&...
And my discovery (which basically anyone could have told me beforehand) was that ... "unsafe" rust is not really that different from regular rust. It lets you dereference pointers (which is not a particularly unusual operation in many other languages) and call some functions that need extra care. Usually the presence of "unsafe" really just means that you needed to interface with foreign functions or hardware or something.
This is all to say: implying that mere presence of an "unsafe" keyword is a sign that code is insecure is very, very silly.
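As a minimal illustration (using nothing beyond core Rust), a raw-pointer dereference requires the `unsafe` keyword yet is perfectly sound here:

```rust
fn main() {
    let x: i32 = 42;
    let p: *const i32 = &x;

    // Dereferencing a raw pointer requires `unsafe`, but this use is
    // sound: `p` points to a live, properly aligned i32. The keyword
    // marks where the compiler's checks stop, not where bugs begin.
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```

The same kind of block appears wherever safe Rust talks to FFI or hardware; its presence says "audit this spot", not "this code is broken".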
No, memory safety is not security. Rust's memory guarantees eliminate some issues, but they also create a dangerous overconfidence: devs treat the compiler as a security audit and skip the hard work of threat modeling.
A vigilant C programmer who manually validates everything and uses the available tools at their disposal is less risky than a complacent Rust programmer who blindly trusts the language.
I agree with this. But for a component whose job is to parse data and produce pixels, the security worries I have are memory ones. It's not implementing a permissions model or anything where design and logic are really important. The security holes an image codec would introduce are the sort where a buffer overrun gives an execution primitive (etc.).
You can get an awful lot done very quickly in C if you aren't bothered about security - and traditionally, most of the profession has done exactly that.
What about against a vigilant Rust programmer who also manually validates everything and uses the available tools at their disposal?
So, a fairy-tale character?
JXL is not yet widely supported, so I cannot really use it (videogame maps), but I hope its performance is similar to WebP with better quality, for the future.
I also have both compiled with -O3 and -march=znver2 in GCC (same for rav1e's RUSTFLAGS) through my Gentoo profile.
Encoding time isn't as important as decoding time, since encoding is generally a one-off operation.
Yeah, we all want faster encodes, but the decodes are the most important part (especially in the web domain).
I wonder if this new implementation could be extended to incorporate support for the older JPEG format and if then total code size could be reduced.
Browser support for WebP is excellent now. The last browser to add it was Safari 14 on September 16, 2020: https://caniuse.com/webp
It got into Windows 10 1809 in October 2018. Into MacOS Big Sur in November 2020.
Wikipedia has a great list of popular software that supports it: https://en.wikipedia.org/wiki/WebP#Graphics_software
Edit: After reading the comments, this doesn't seem to open in Photos App.
One customer of mine (fashion) has over 700k images in their DAM, and about 0.5% cannot be converted to webp at all using libwebp. They can without problem be converted to jpeg, png, and avif.
Certain pixel colour combinations in the source image appear to trip up the algorithm to such a degree that the encoder will only produce a black image.
We know this because we have been able to encode the images by (in pure frustration) manually brute-forcing: moving a black square across the source image to different locations and then trying to encode again. Suddenly it will work.
Images are pretty much always exported from Adobe, often smaller than 3000x3000 pixels. Images from the same camera, same size, same photo session, same export batch will work, and then suddenly one out of a few hundred may become black - and only the WebP one, not the other formats; the rest of the photos will work for all formats.
A more mathematically inclined colleague tried to have a look at the implementation once, but was unable to figure it out because they could apparently not find a good written spec on how the encoder is supposed to work.
https://bulkresizephotos.com/en?preset=true&scale=100&format...
If it doesn't work, any chance I could have a copy of one of the images for testing with? (and trying to file the right bugs to get it fixed)
[0] https://developers.google.com/speed/webp/faq#what_is_the_max...
It is at least a very good transcoding target for the web, but it also genuinely replaces many other formats, in the sense that the original source file can more or less be regenerated.
Let's say you want to store images losslessly. This means you won't tolerate loss of data, which means you don't want to risk using a codec that will compress the image lossily if you forget to enable a setting.
With PNG there is no way to accidentally make it lossy, which feels a lot safer for cases you want lossless compression.
If you want a robust lossless workflow, PNG isn't the answer. Automating the fiddly parts and validating that the automation does what you want is the answer.
16-bit PNG files can easily accidentally be reduced to 8-bit, which is of course a lossy operation. Animated PNG files can easily get converted into a still image (keeping only the first frame). CMYK images will have to be converted to RGB when saving them as PNG, which is also a lossy operation. It can happen that an image gets created as or converted to JPEG and then gets saved as PNG - which of course is a bad and lossy workflow, but it does happen.
So I don't agree that with PNG there is no way to accidentally make it lossy.
In any case: lossless or lossy is not a property of a format, but of a workflow. For keeping track of provenance information and workflow history, I would recommend looking into JPEG Trust / C2PA, which is a way to embed as metadata what happened to an image since it was captured/generated. Relying on the choice of image format for this is fragile and doesn't allow expressing the nuances, since reality is more complicated than just a binary "lossless or lossy".
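The 16-bit-to-8-bit case above is easy to demonstrate. A minimal sketch, assuming the common convention of keeping the high byte on reduction and expanding back by byte replication:

```rust
fn main() {
    let v16: u16 = 0x1234;

    // Reduce to 8 bits by keeping the high byte...
    let v8 = (v16 >> 8) as u8;

    // ...and expand back by byte replication (0x12 -> 0x1212).
    let back = ((v8 as u16) << 8) | v8 as u16;

    // The low byte is gone: the round trip is lossy.
    assert_eq!(v8, 0x12);
    assert_eq!(back, 0x1212);
    assert_ne!(back, v16);
}
```

Any pipeline step that silently performs this conversion has made the "lossless" PNG workflow lossy, regardless of the file format involved.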
> Specifically for JPEG files, the default cjxl behavior is to apply lossless recompression and the default djxl behavior is to reconstruct the original JPEG file (when the extension of the output file is .jpg).
You're right, however, that you do need to be careful and use the reference codec package for this, as tools like ImageMagick create loss during the decoding of the JPEG into pixels (https://github.com/ImageMagick/ImageMagick/discussions/6046) and ImageMagick sets quality to 92 by default. But perhaps that's something we can change.
But I fully realize, there are vanishingly few cases with similar constraints.
Or you could use content-negotiation to only send avif when it's supported, but IMO the HTML way with <picture> is perhaps clearer for the client and end user.
I think the webp problem was due to browsers supporting webp but not supporting animation, transparency or other features, so content negotiation based on mime types (either via <picture> or HTTP content-negotiation) did not work properly. Safari 16.1-16.3 has the same problem with AVIF, but that is a smaller problem than it was with webp.
From a quick look at various "benchmarks", JPEG XL seems to be flat-out better than WebP in both compression speed and size, so why has there been such reluctance from Chromium to adopt it? Are there WebP benefits I'm missing?
My only experience with WebP has been downloading what is nominally a `.png` file but then being told "WebP is not supported" by some software when I try to open it.
Also, from a security perspective, the reference implementation of JPEG XL isn't great. It's over a hundred kLoC of C++, and given the public support for memory safety by both Google and Mozilla, it would be extremely embarrassing if a security vulnerability in libjxl led to a zero-click zero-day in either Chrome or Firefox.
The timing is probably a sign that Chrome considers the Rust implementation of JPEG-XL to be mature enough (or at least heading in that direction) to start kicking the tires.
I agree with the second part (useless hero images at the top of every post demonstrate it), but not necessarily the first. JPEG is supported pretty much everywhere images are, and it’s the de facto default format for pictures. Most people won’t even know what format they’re using, let alone that they could compress it or use another one. In the words of Hank Hill:
> Do I look like I know what a JPEG is? I just want a picture of a god dang hot dog.
* CNN (cnn.com): News-related photos on their front page
* Reddit (www.reddit.com): User-provided images uploaded to their internal image hosting
* Amazon (amazon.com): Product categories on the front page (product images are in WebP)
I wouldn't expect to see a lot of WebP on personal homepages or old-style forums, but if bandwidth costs were a meaningful budget line item then I would expect to see ~100% adoption of WebP or AVIF for any image that gets recompressed by a publishing pipeline.
I can completely see why the default answer to "should we add x" should be no unless there is a really good reason.
- jxl is better at high bpp, best in lossless mode
The issue was the use of C++ instead of Rust or WUFFS (that Chromium uses for a lot of formats).
The decode speed benchmarks are misleading. WebP has been hardware accelerated since 2013 in Android and 2020 in Apple devices. Due to existing hardware capabilities, real users will _always_ experience better performance and battery life with webp.
JXL is more about future-proofing. Bit depth, Wide gamut HDR, Progressive decoding, Animation, Transparency, etc.
JXL flat-out beats AVIF (the image codec, not video) today. AVIF also pretty much doesn't have hardware decoding in modern phones yet. It makes sense to invest NOW in JXL rather than in AVIF.
For what people use today - unfortunately there is no significant case for beating WebP, given its existing momentum. The size vs. perceptive quality tradeoffs are not significantly different. For users, things will get worse (slower decodes and worse battery life due to the lack of hardware decode) before they get better. That can take many years - because, hey, more features in JXL also means translating that to hardware die space will take more time. Just the software side of things is only now picking up.
But for what we all need – it's really necessary to start the JXL journey now.
Extra data transfer costs performance and battery life too.
so webp > jpegxl > png
What you’re referring to is pngquant which uses dithering/reduces colors to allow the PNG to compress to a smaller size.
So the “loss” is happening independent of the format.
https://blog.cloudflare.com/uncovering-the-hidden-webp-vulne...
FWIW webp came from the same "research group in google switzerland" that later developed jpegxl.
The funny thing is all the places where Google's own ecosystem has ignored WebP. E.g., the golang stdlib has a WebP decoder, but all of the encoders you'll find are CGo bindings to libwebp.
Affinity supports it. Photoshop supports it. Microsoft Photos supports it. Gimp supports it. Apple has had systemwide support for it since iOS 17+ / macOS 12+, including in Safari and basically any app that uses the system image functions.
Chromium isn't on the bleeding edge here. They actually were when it first came out, but then retreated and waited, and now they're back again.
There seems to be some support there, though I tested on iOS 26.
WhatsApp doesn't even support WebP though. Hopefully, if they ever get around to adding WebP, they'll throw JXL in, too.
https://apps.microsoft.com/detail/9MZPRTH5C0TB?hl=en-us&gl=U...
More use cases for a single popular format makes this more likely.
> - Progressive decoding for improved perceived loading performance
> - Support for wide color gamut, HDR, and high bit depth
> - Animation support

(I don't know if any of this is true, but it sounds funny...)