I built a Trello alternative (frustrated with its limitations: I wanted rows and decent performance). It came in at ~55KB gzipped by following a few patterns, some of which I'm open sourcing as genX (genx.software - releasing this month):
- Server renders complete HTML (not JSON that needs client-side parsing)
- JavaScript progressively enhances (doesn't recreate what's in the DOM)
- Shared data structures (one index for all items, not one per item)
- Use native browser features (the DOM is already a data structure - article coming)
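Not the genX internals (those aren't out yet), but a minimal sketch of what the first two bullets look like in practice, assuming a server-rendered task list; the `data-task-list`, `data-task-id`, and `data-done` attribute names are invented for this example:

```typescript
// Progressive-enhancement sketch: the server has already rendered the full
// task list as HTML; this script only wires up behavior and never rebuilds
// the DOM. The data-* attribute names are made up for this example.
function enhanceTaskLists(root: Document = document): void {
  for (const list of root.querySelectorAll<HTMLUListElement>("ul[data-task-list]")) {
    list.addEventListener("click", (event) => {
      const target = event.target as HTMLElement;
      const item = target.closest<HTMLLIElement>("li[data-task-id]");
      if (!item) return;
      // The DOM is the data structure: toggle state directly on the element
      // instead of mirroring it into a client-side store.
      item.toggleAttribute("data-done");
    });
  }
}

// If this script never loads, the server-rendered HTML is still fully usable.
enhanceTaskLists();
```

The point is that the script attaches behavior to markup that already exists, so the page works (and is fully readable) before any JavaScript arrives.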
Most sites ship megabytes because modern tooling treats size as a rounding error. The 512KB constraint makes you think about what's expensive and get creative. Got rewarded with a perfect Lighthouse score in dev - striving to maintain it through release.
Would love feedback from this community when it's out.
[0] https://catapart.github.io/magnit-ceapp-taskboard-manager/de...
Just don't use external trackers, ads, fonts, videos.
Building a sub-512KB website that satisfies all departments of a company of non-trivial size: that is hard.
Even for larger sites it can be trivial, but I prefer to look at it from a non-SPA/state-management point of view.
Not every site needs to be an SPA, or even a 'React app'. I visit a page; record your metrics on the backend for all I care, you have the request headers and so on. Just send me the data I need, nothing else.
It doesn't have to be ugly or even lack flair; 500KB is a lot of text per page request, and with out-of-the-box browser caching there's no excuse. People have forgotten that's all you need.
> People have forgotten that's all you need.
Edit: No they haven't, they just can't monetize optimizations.
I have a SPA that is just vanilla web components and is clean, small, fast and tidy. No outside frameworks, just my own small amount of transpiled TypeScript.
I prefer to write them that way because it meets my needs for the particular site and is very responsive. But I've also done PHP, ASP.Net, Rails and other server-side work for many years. Best tool for the job, and sometimes they are very different.
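For anyone who hasn't tried it, a vanilla custom element really is that small. A rough sketch in plain TypeScript with no framework (the tag name is made up, not one of my actual components):

```typescript
// Minimal vanilla web component: no framework, just the Custom Elements API.
class GreetingCard extends HTMLElement {
  static observedAttributes = ["name"];

  connectedCallback(): void {
    this.render();
  }

  attributeChangedCallback(): void {
    this.render();
  }

  private render(): void {
    const name = this.getAttribute("name") ?? "world";
    this.textContent = `Hello, ${name}!`;
  }
}

customElements.define("greeting-card", GreetingCard);
// In the server-rendered HTML: <greeting-card name="HN"></greeting-card>
```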
My guess is the photos.
What I really need to do is standardize my format and size, just to make things a bit more visually tidy. I'm not all that interested in staying below a certain size so much as in not doing anything I consider unnecessary for the function and feel of the site. Fairly certain whatever readership I have is either related to me or people I play D&D with, so it's really just a fun thing for me to do once in a while.
https://caniuse.com/?search=avif
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
If only Google didn't oppose JXL - but they'd have to implicitly admit that WebP is garbage, and they don't like doing that.
HEIC, AVIF, JXL, etc. are worth the trouble.
I'm all for minimalism and have a much lower target than 512KB for some of my work. But I think the goal of having the entire site in 512KB is a little strange; someone might have a 511KB SaaS SPA while someone else struggles to fit 51 pages at 10KB each. That's not the same.
And yet tons of personal blogs likely weigh in well over that mark, despite having no requirements beyond personally imposed ideas about how to share information with the world.
> Just don't use external trackers, ads, fonts, videos.
The Internet is likely full of "hero" images that weigh more than 512KB by themselves. For that matter, `bootstrap.min.css` + `bootstrap.min.js` is over half of that budget already.
Not that people need those things, either. But many have forgotten how to do without. (Or maybe bilekas is right; but I like the idea of making things small because of my aesthetic sense. I don't need a financial incentive for that. One of these days I should really figure out what I actually need for my own blog....)
A truly "suckless" website isn't about size. It's one that uses non-intrusive JS, embraces progressive enhancement, prioritizes accessibility, respects visitor's privacy and looks clean and functional on any device or output medium. If it ends up small: great! But that shouldn't be the point.
True. I skimmed the biggest sites in that list, and they're still extremely fast. It's not the size limit itself that makes the difference, but knowing that there is one, which forces you to reason, use the right tools, and not cram in unneeded features.
It would be worth adding some information to the page about the best tools for creating small yet functionally complete and pleasant-looking static sites. A few years ago I'd have said Hugo (https://gohugo.io/), but I haven't checked for a while and there could be better ones by now. Also ultra-cheap hosting options comparable to Neocities (.org) but located in the EU.
The netbook can load Firefox in just a few seconds. And Hacker News loads almost instantly as on a modern machine. (Hit enter and the page is rendered before you can blink.)
The same machine can also play back 720p H.264 video smoothly.
And yet, if I go to Youtube or just about any other modern site, it takes literally a minute to load and render, none of the UI elements are responsive, and the site is unusable for playing videos. Why? I'm not asking for anything the hardware isn't capable of doing.
If my own work isn't snappy on the Atom I consider it a bug. There are a lot of people using smartphones and tablets with processors in the same class.
But the website and web renderer are definitely not optimized for a netbook from 2010 - even modern smartphones are better at rendering pages and video than your Atom (or even 8350U) computers.
That's an understatement if I've ever seen one! For web rendering, single-threaded performance is what mostly matters, and smartphones have crazy good single-core performance these days. The latest iPhone has faster single-core performance than even most laptops.
I use a text-only HTML viewer with no JavaScript interpreter. It is either a 2M or 1.3M static binary.
The size of a web page does not slow it down much, and I have never managed to crash it in over 15 years of use, unlike a popular browser.
I routinely load concatenated HTML files much larger than those found on the web. For example, on a severely underpowered computer, loading a 16M stored HTML file into the text-only client's cache takes about 8 seconds.
I can lazily write custom command-line HTML filters that are much faster than Python or JavaScript to extract and transform any web page into SQL or CSV. These filters are each a ~40K static binary.
As an experiment, I sloppily crammed 63 different web page styles into a single filter. The result was a 1.6M static binary.
I use this filter every day for command-line search.
I'm a hobbyist, an "end user", not a developer.
What are we doing here? And to brag about this while including image media in the size is just onanistic.
I would be interested to know how they define web resources. HN would only fit this description if we don't count every possible REST resource you could request, but instead just the images (3 SVGs), the CSS (news.css), and the JS (hn.js).
The second you count any possible response to `https://news.ycombinator.com/item?...` in the total, we've blown the 512KB cap... and that's where the actual useful content lies.
Feels like regular ol' REST-and-forms webapps aren't quite the target of this list though, so who knows.
They explain things in the FAQ. You're supposed to do a "Cloudflare URL Scan" and read the "Total bytes". For HN this is 47kB [1], which, yes, is just the 6 requests needed for / and nothing more.
[1] https://radar.cloudflare.com/scan/4c2b759c-b690-44f0-b108-b9...
> Cloudflare
Any chance for a non-monopoly version?
See https://github.com/kevquirk/512kb.club/issues/1187 and https://github.com/kevquirk/512kb.club/pull/1830
---
(^1) If the page, as HN does, has some headers or additional content for logged-in visitors, the numbers will generally be a bit different. But the difference will usually be small.
Other than that, I would've understood this notion better in the '90s when we were all on dial-up. Maybe my perception is skewed from growing up watching pictures load on websites in real time?
Now, even with outdated hardware on an OK connection, even larger sites like WaPo (3MB) load what feels like instantly (within 5-10 seconds). If it loaded in 2 seconds or 1 second, I really don't know how that would impact my life in any way.
As long as a site isn't sluggish while you browse around.
- on an underground subway with slow and spotty connection
- on slow airplane WiFi
- on slow hotel WiFi
- on a slow mobile network while traveling internationally
Complexity would be a subjective metric, but without it I'm not sure what you'd take from this other than a fun little experiment, which is maybe all it's meant to be.
Set the limit first, and then request folks to join the contest:
What crazy website can _you_ build in 512KB?
I built it out of frustration with Trello's limitations. The 512KB constraint forced good architecture: server-side rendering, progressive enhancement, and shared indexes instead of per-item duplication. Perfect Lighthouse score so far - the real test is keeping it through release.
Extracting patterns into genX framework (genx.software) for release later this month.
It's advertising and data tracking. Every. Single. Time.
Pi-hole and ad blockers have become essential for traversing the cesspool that is the modern internet.
And one of these days I will write a viewer for GitHub links that clones the repo and lets me quickly browse it. For something aimed at devs, the platform is horrendous.
Use Bootstrap and one image larger than 16x16 and you're near 500KB already.
It's easy to blame the boogeyman but sometimes it's worth looking in the mirror too...
Now, browsers are built mostly by those who run the web (today it's a mess). As for the HTML schema and spec, I've never met anyone who's contributed to them in the last 10 years.
The best developers and engineers I know don't consider it anymore. In our eyes, it's been done. The big companies took over.
Seems like Lichess dropped off.
I only see domains listed. Does this refer to the main page only, or the entire site?
size-related
- <https://14kbclub.com/> - only learned about it today, but I am not sure if my site would qualify (it is only ~12 KiB, but makes multiple requests...)
- <https://512kb.club> - my site got removed as “ultra minimal” :(
not specifically size-related
- <https://no-js.club/members/>
- <https://textonly.website/> - my site got removed (I guess because it has a logo and this makes it not text-only...)
There used to also be a 10KB club, and per its rules my site would have qualified except for the requirement to be featured on HN or otherwise be a “noteworthy site”, if I recall correctly. However, the 10KB club seems to have been offline for some time already...
In general, the issue with these kinds of pages is mostly that they only check _one_ page (typically the homepage, though sometimes I see people submit a special “reduced version” of their homepage, too...). Of course, if _all_ pages were relevant, I think even my (pretty minimalist) site wouldn't qualify, because some pages have high-resolution images, I guess...
These clubs have little effect if there's no incentive on the demand side.
Hard way: a custom Chrome build that blocks websites from allocating heap beyond a limit.
How many users of mainstream online platforms care about a difference of kilobytes in their experience, anyway?
The sites in the list are hobbyist clubs with a technical point of view, which wouldn't make sense for a mass media outlet with millions of daily visitors and real interdepartmental complexity and compliance issues to deal with.
Yeah, that sounds awesome.
Nobody _needs_ to run a 4-minute mile, or win a chess tournament, or climb a mountain, or make the most efficient computer program. They're still worthwhile human endeavors.
That's when you fit the core of your website into 14KB so it can be sent in a single round trip (roughly the TCP initial congestion window: about 10 segments of ~1460 bytes each).
512KB is a lot. You can fit a reasonable "car" into it.
I hope the club does a routine check with headless browsers or something.
JavaScript gets all the hate for size, but images easily surpass even your most bloated frameworks.
Which is why the websites on this list largely don't use media.
---
The problem with JavaScript isn't so much the network size; it's the execution time.
A recent retranslation ROM hack exists [0] and it's pretty good.
For example, if someone uses Google Analytics (which most people do), that alone comes to 430KB.
Perhaps someone might not use Google Analytics. Perhaps someone might apply that 430KB to actual content instead.