Also, because you're using CompressorJS's default settings, you should know that PNG files over 5 MB are converted to JPEG (https://github.com/fengyuanchen/compressorjs?tab=readme-ov-f...). Your site, however, keeps the .png file extension on the converted image, again falsely suggesting lossless compression.
I presume this is an oversight; changing CompressorJS's quality setting from its default of 0.8 to 0.75 results in the exact same output as ImgFiber.
Full settings dump:
{
  "strict": true,
  "checkOrientation": true,
  "retainExif": false,
  "maxWidth": null,
  "maxHeight": null,
  "minWidth": 0,
  "minHeight": 0,
  "resize": "none",
  "quality": 0.75,
  "mimeType": "auto",
  "convertTypes": [
    "image/png"
  ],
  "convertSize": 5000000,
  "beforeDraw": null,
  "drew": null
}
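For anyone who wants to reproduce this, a minimal sketch with those options spelled out (the surrounding file handling is my assumption, not ImgFiber's actual code):

  import Compressor from 'compressorjs';

  // `file` is a File/Blob, e.g. from an <input type="file"> change event.
  new Compressor(file, {
    quality: 0.75,               // matches ImgFiber's output byte-for-byte
    convertTypes: ['image/png'], // default
    convertSize: 5000000,        // default: PNGs above 5 MB become JPEGs
    success(result) {
      // For a large PNG input, result.type is 'image/jpeg', so reusing the
      // original .png file name would mislabel the output.
      console.log(result.type, result.size);
    },
    error(err) {
      console.error(err.message);
    },
  });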
We did not lie about how it actually works when fellow developers demanded answers. I love talking about what I have built. Maybe just nudge us in a better direction instead of hating?
This site is clearly not doing that, and should thus not be called "lossless compression". The industry term for lossy compression whose output looks essentially identical to the source is "visually transparent", or just "transparent". This doesn't seem to qualify as that either, though, especially when you're compressing large PNGs into JPEGs while naming the output files as PNGs. That's just outright deceitful, and not a good look.
I downloaded https://www.imgfiber.com/compare/original.jpg and dragged it into the try-it-now section - it says the result is now 1.3 MB.
That's worse than Squoosh's default of 897 KB (https://squoosh.app/editor), though better than CompressorJS's default of 2.21 MB (https://fengyuanchen.github.io/compressorjs/). (Edit ii: though changing CompressorJS's quality setting to 0.75 results in the exact same output as ImgFiber.)
Edit: this is on latest Chrome + macOS.
"structural compression"
What does this mean?
That's not compression, that's just deleting people's data.
Structural compression means optimizing how the image data is encoded: better Huffman tables for JPEGs, or smarter row filtering for PNGs.
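If that's genuinely what's happening, here's roughly what it looks like in practice. This is a generic sketch using the standard jpegtran and oxipng CLIs from Node, not anything the ImgFiber folks have confirmed they use:

  // Lossless "structural" optimization: same pixels, smaller encoding.
  const { execFileSync } = require('node:child_process');

  // JPEG: rebuild optimal Huffman tables without redoing the lossy DCT step.
  execFileSync('jpegtran', ['-optimize', '-copy', 'none', '-outfile', 'out.jpg', 'in.jpg']);

  // PNG: search over row filters and zlib settings; decoded pixels are identical.
  execFileSync('oxipng', ['-o', '4', '--out', 'out.png', 'in.png']);

The catch is that both of those are table-stakes lossless passes that existing optimizers already do, which is why people here are asking what's different.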
I think that's just called compression.
How are you getting better Huffman tables for JPEGs, and how are you doing 'smarter' filtering for PNGs?
I'm asking for deeper technical explanations because not only would beating current image compression libraries be a technical feat, but a lot of people in this thread think you aren't actually doing anything differently.
Also, I'm not sure you thought the product through: when I compress images, I want to control the output format and the quality setting. This tool doesn't seem to offer either.
We did think it through, based on our use case :) which is bulk image processing, where a slight loss in quality in exchange for a higher reduction in size is appreciated :) mostly for wallpaper apps or other image-heavy sites. But yeah, we will improve it to offer an even cleaner approach.
https://kewltools.com/image-resizer
Just enter the desired file size and it will compress down to it.
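For reference, the usual way a tool hits an exact target size is a binary search over the encoder's quality parameter. A rough browser-side sketch of the idea (my assumption about the approach, not kewltools' actual code):

  // Encode `canvas` as JPEG, binary-searching quality to fit under targetBytes.
  async function compressToSize(canvas, targetBytes, steps = 8) {
    const encode = (q) =>
      new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', q));
    let lo = 0.05, hi = 1.0;
    let best = await encode(lo); // fallback; may still exceed targetBytes
    for (let i = 0; i < steps; i++) {
      const q = (lo + hi) / 2;
      const blob = await encode(q);
      if (blob.size <= targetBytes) {
        best = blob; // fits, so try a higher quality
        lo = q;
      } else {
        hi = q;      // too big, lower the quality
      }
    }
    return best;
  }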
JPEG XL is definitely the way to go. We're exploring ways to integrate support for formats like JXL and AVIF soon.
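One practical note for the browser side: canvas.toBlob silently falls back to PNG when it can't encode the requested type, so you can probe encode support like this (a quick sketch; client-side JXL would need a WASM encoder such as a libjxl build, since no browser ships a JXL encoder today):

  // Resolves true if the browser can actually encode the given MIME type;
  // per spec, toBlob falls back to image/png for unsupported types.
  function canEncode(mimeType) {
    return new Promise((resolve) => {
      const canvas = document.createElement('canvas');
      canvas.width = canvas.height = 1;
      canvas.toBlob((blob) => resolve(!!blob && blob.type === mimeType), mimeType);
    });
  }

  canEncode('image/avif').then((ok) => console.log('AVIF encode support:', ok));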