Back in my day we called this "progressive enhancement" (or even just "web pages"), and it was basically the only way we built websites with a bit of dynamic behavior. Then SPAs were "invented", and the "progressive enhancement" movement became something fewer and fewer people practiced.
Now it seems that is called JavaScript islands, but it's actually just good ol' web pages :) What is old is new again.
Bit of history for the new webdevs: https://en.wikipedia.org/wiki/Progressive_enhancement
Astro's main value prop is that it integrates with JS frameworks, lets them handle subtrees of the HTML, renders their initial state as a string, and then hydrates them on the client with preloaded data from the server.
TFA is trying to explain that value to someone who wants to use React/Svelte/Solid/Vue but only on a subset of their page while also preloading the data on the server.
It's not necessarily progressive enhancement because the HTML that loads before JS hydration doesn't need to work at all. It just matches the initial state of the JS once it hydrates. e.g. The <form> probably doesn't work at all without the JS that takes it over.
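To make the mechanics concrete, here's a minimal sketch of an Astro page with one island (the component name and data fetch are hypothetical, not from TFA): the page body is rendered to plain HTML on the server, and only the component marked with a `client:` directive ships and runs JS in the browser.

```astro
---
// index.astro: everything in this frontmatter runs on the server only.
// `Counter` and `getInitialCount` are made-up names for illustration.
import Counter from "../components/Counter.jsx";
const initial = await getInitialCount(); // server-side data, baked into the HTML
---
<h1>Plain HTML, rendered once on the server</h1>
<p>No JS is shipped for this part of the page.</p>

<!-- Only this subtree is hydrated; client:load ships its JS immediately. -->
<Counter client:load initialCount={initial} />
```

Until that JS arrives, the `<Counter>` markup is exactly the inert initial-state HTML described above: visible, but not yet interactive.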
These are the kind of details you miss when you're chomping at the bit to be cynical instead of curious.
Sending functional HTML, and then only doing dynamic things dynamically, that's where the value is for web _apps_. So if what you point out is the value proposition for Astro, then I am not getting it, and don't see its value.
Just compare the two cases, assuming 100ms for the initial HTML loading and 200ms for JS loading and processing.
With full JS, you don't see anything for 300ms, and the form does not exist (300ms is a noticeable delay for the human eye).
With frameworks such as Astro, after 100ms you already see the form. By the time you move the mouse and/or start interacting with it, the JS will probably be ready (because 200ms is almost instant in the context of starting an interaction).
This is not new at all, old school server side processing always did this. The advantage is writing the component only once, in one framework (React/vue/whatev). The server-client passage is transparent for the developer, and that wasn't the case at all in old school frameworks.
Note that I'm not saying this is good! But this is the value proposition of Astro and similar frameworks: transparent server-client context switching, with better perceived performance for the user.
What a value!
I guess I may be chomping at the bit to be cynical, but I have quite a bit of experience in these fields, and I don't think Astro sounds especially transformative.
I think your comment gets at a very specific and subtle nuance that is worth mentioning, namely that typically, if you were a progressive-enhancement purist, you'd have a fallback that did work: a form that submitted normally, a raw table that could be turned into an interactive graph, etc.
I don't think these details are mutually exclusive though, and that it was certainly valid in those days to add something that didn't have a non-js default rendering mode, it's just that it was discouraged from being in the critical path. Early fancy "engineered" webapps like Flipboard got roasted for poorly re-implementing their core text content on top of canvas so they could reach 60fps scrolling, but if JS didn't work their content wasn't there, and they threw out a bunch of accessibility stuff that you'd otherwise get for free.
Now that I'm thinking back, it's hard to recall situations *at that time* where there would be both something you couldn't do without JavaScript and that couldn't also have a more raw/primitive version, but one example that comes to mind and would still be current are long-form explanations of concepts that can be visualized, such as those that make HN front page periodically. You would not tightly couple the raw text content to the visualization, you would enhance the content with an interactive visual, but not necessarily have fallback for it, and this would still be progressive enhancement.
Here's another good example from that time, which is actually only somewhat forward compatible (doesn't render on my Android), but the explanation still renders https://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
Edit: according to WP history, around December 2020
https://en.wikipedia.org/w/index.php?title=Hydration_(web_de...
Second edit: "Streaming server rendering", "Progressive rehydration", "Partial rehydration", "Trisomorphic rendering"... Seems I woke up in a different universe today.
Quote:
2013-2015: The concept of SSR in JavaScript frameworks emerged with React (released in 2013). Initially, it was referred to as "reconciliation" or "bootstrapping" — React would match the DOM with the virtual DOM.
2015: Around the release of React 0.14 and React 15 (2016), the term "hydration" began appearing in the React ecosystem to describe the process of attaching event listeners to server-rendered markup.
The first known mention in React documentation and community discussions around 2015-2016 clarified that React would "hydrate" the HTML.
This distinguished it from full re-rendering, which would discard the server-rendered DOM.
then chunking was the next step, and basically the logical endpoint is this mix-and-match strategy that NextJS is "leading" (?), by allowing things to be streamed in while sending and caching as much of the static parts up front as possible.
Nicely sums up a lot of interactions these days
I’m not sure javascript islands is that but I appreciate a new approach to an old pattern.
I think we are overdue for a rediscovery of object oriented programming and OOP design patterns but it will be called something else. We just got through an era of rediscovering the mainframe and calling it “cloud native.”
Every now and then you do get a new thing like LLMs and diffusion models.
Another one: WASM is a good VM spec and the VM implementations are good, but the ecosystem with its DSLs and component model is getting an over engineered complexity smell. Remind anyone of… say… J2EE? Good VM, good foundation, massive excess complexity higher up the stack.
I originally started as a web developer during the time where PHP+jQuery was the state of the art for interactive webpages, shortly before React with SPAs became a thing.
Looking back at it now, architecturally, the original approach was nicer; however, DX used to be horrible at the time. Remember trying to debug with PHP on the frontend? I wouldn't want to go back to that. SPAs have their place, mostly in customer dashboards or heavily interactive applications, but Astro finds a very nice balance: having your server and client code in one codebase, being able to define which is which, and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.
I do remember that, all too well. Countless hours spent with templates in Symfony, or dealing with Zend Framework and all that jazz...
But as far as I remember, the issue was debuggability and testing of the templates themselves, which was easily managed by moving functionality out of the templates (lots of people put lots of logic into templates...) and then putting that behavior under unit tests. Basically being better at enforcing proper MVC split was the way to solve that.
The DX wasn't horrible outside of that, even early 2010s which was when I was dealing with that for the first time.
More and more logic moved to the client (and JS) to handle the additional interactivity, creating new frameworks to solve the increasing problems.
At some point, the bottleneck became the context switching and data passing between the server and the client.
SPAs and tools like Astro propose themselves as a way to improve DX in this context, either by creating complete separation between the two worlds (SPAs) or by making the boundary transparent (Astro).
True, don't remember doing much unit testing of JavaScript at that point. Even with BackboneJS and RequireJS, manual testing was pretty much the approach, trying to make it easy to repeat things with temporary dev UIs and such, that were commented out before deploying (FTPing). Probably not until AngularJS v1 came around, with Miško spreading the gospel of testing for frontend applications, together with Karma and PhantomJS, did it feel like the ecosystem started to pick up on testing.
> Astro find a very nice balance of having your server and client code in one codebase, being able to define which is which and not having to parse your data from whatever PHP is doing into your JavaScript code is a huge DX improvement.
the point is pretty much that you can do more JS for rich client-side interactions in a much more elegant way without throwing away the benefits of "back in the days" where that's not needed.
Modern PHP development with Laravel is wildly effective and efficient.
Facebook brought React forth with influences from PHP's immediate context switching, and Laravel's Blade templates have brought a lot of React and Vue influences back to templating in a very useful way.
Isn't that basically just what Symfony + Twig does? Server-side rendering, and you can put JS in there if you want to. Example template:
<html>
  <head>[...]</head>
  <body>
    {% if user.isLoggedIn %}
      Hello {{ user.name }}!
    {% endif %}
  </body>
</html>
The syntax is horrible, and seeing it today almost gives me the yuckies, but it seems like the same idea to me, just different syntax more or less. I'm not saying it was better back then, just that these seem like very similar ideas (which isn't necessarily bad either).

Best of them all has to be hiccup, I think: the smallest and most elegant way of describing HTML. Same template, but as a Clojure function returning hiccup:
(defn template [user]
  [:html
   [:head [...]]
   [:body
    (when (:logged-in user)
      [:div "Hello " (:name user)])]])
Basically, just lists/vectors with built-in data structures in them, all part of the programming language itself. Have a look at: https://www.gnu.org/software/guile/manual/html_node/SXML.htm... or https://docs.racket-lang.org/sxml/SXML.html
And as you say, part of the language itself, which means no need to learn something different, and no need to learn a pseudo-HTML or lookalike like with JSX, which then needs to be actually parsed by the framework (or its dependencies), unlike SXML, which is already structured data, already understood perfectly in the same language and only needs to be rendered into HTML.
How much of the frontend and how much of the backend are we talking about? Contemporary JavaScript frameworks only cover a narrow band of the problem, and still require you to bootstrap the rest of the infrastructure on either side of the stack to have something substantial (e.g., more than just a personal blog with all of the content living inside of the repo as .md files).
> while avoiding the hydration performance hit
How are we solving that today with Islands or RSCs?
In terms of the front-end, there’s really no limit imposed by Next.js and it’s not limited to a narrow band of the problem (whatever that gibberish means), so I don’t know what you’re even talking about.
> How are we solving that today with Islands or RSCs?
Next.js/RSC solves it by loading JavaScript only for the parts of the page that are dynamic. The static parts of the page are never client-side rendered, whereas before RSC they were.
This is fine generally because you have the choice to pick the right tool for the job, but in the context of "a single, cohesive unit" you can only get that with Next.js if all you care about are those specific abstractions and you want your backend and frontend to be in the same language. Even then you run into this awkwardness where you have to really think about where your JavaScript is running, because it all looks the same. This might be a personal shortcoming, but that definitely broke the illusion of cohesion for me.
> The static parts of the page are never client-side rendered, whereas before RSC they were.
Didn't the hydration performance issues start when we started doing the contemporary SSR method of isomorphic JavaScript? I think islands are great and a huge improvement on how we started doing SSR with things like the Next.js Pages Router. But that's not truly revolutionary industry-wide, because we've been able to do progressive enhancement since long before contemporary frameworks caught up. The thing I'm clarifying here is that "before RSC" only refers to what was possible with frameworks like Next.js, not what was possible in general; you could always template some HTML on the server and progressively enhance it with JavaScript.
You'd render templates in Jade/Handlebars/EJS, break them down into page components, apply progressive enhancement via JS. Eventually we got DOM diffing libraries so you could render templates on the client and move to declarative logic. DX was arguably better than today as you could easily understand and inspect your entire stack, though tools weren't as flashy.
In the 2010-2015 era it was not uncommon to build entire interactive websites from scratch in under a day, as you wasted almost no time fighting your tools.
Dear God. In 20 years people will hire HTML experts as if they are COBOL experts today.
Hah, if only... Time and time again the ecosystem moves not to something that is better, but "same but different" or also commonly "kind of same but worse".
There are so many cases where the "worse" solution "won", and there is a reason "worse is better" is such a popular mantra: https://en.wikipedia.org/wiki/Worse_is_better
I used it for my personal website, and recently used it when reimplementing the Matrix Conference website. It's really a no-fuss framework that is a joy to use.
Among the things I love about Astro:
- It's still HTML and CSS centric
- Once built, it doesn't require JS by default
- You can still opt into adding JS for interactivity here and there
- Content collections are neat and tidy
- Astro massively optimizes for speed, and the maintainers know how to do it
- It has a very helpful dev bar to help you visually figure out what easy fix can make your website snappier (like lazily loading images if it detects them below the fold)
For the "optimize for speed" bit, an example is that the CSS minifier cleverly inlines some CSS to avoid additional requests. The Image component they provide will set the width and height attributes of an image to avoid layout shifts. It will also generate responsive images for you.
I've never used Astro so forgive my ignorance, but isn't that just creating a .html file, a .css file and then optionally provide a .js file? What does Astro give you in this case? You'd get the same experience with a directory of files + Notepad basically. It's also even more optimized for speed, since there is no overhead/bloat at all, including at dev-time, just pure files, sent over HTTP.
> an example is that the css minifier cleverly inlines some CSS to avoid additional queries
Is that a common performance issue in the web pages you've built? I think across hundreds of websites, and for 20 years, not once have "CSS queries" been a bottleneck in even highly interactive webpages with thousands of elements, it's almost always something else (usually network).
For the first one, the main benefits of Astro over static html and css (for my use cases) are the ability to include components and enforce the properties that must be passed. A typical example would be [here][0] where I define a layout for the whole website, and then [on each page that uses it](https://github.com/matrix-org/matrix-conf-website/blob/main/...) I have to pass the right properties. Doable by hand, but it's great to have tooling that can yell at me if I forgot to do it.
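As a sketch of what that enforcement looks like (names here are illustrative, not copied from the linked repo), an Astro layout can declare its required props in the frontmatter, and the compiler/editor will complain if a page omits them:

```astro
---
// Layout.astro: a hypothetical layout with enforced props.
interface Props {
  title: string;       // forgetting this on a page is a build/editor error
  description: string;
}
const { title, description } = Astro.props;
---
<html lang="en">
  <head>
    <title>{title}</title>
    <meta name="description" content={description} />
  </head>
  <body>
    <slot /> <!-- each page's content is injected here -->
  </body>
</html>
```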
Content Collections also let me grab content from e.g. markdown or json and build pages automatically from it. The [Content Collections docs][1] are fairly straightforward.
As for performance issues, I've spent quite a bit of time on the countryside where connectivity was an issue and every extra request was definitely noticeable, hence the value of inlining it (you load one html file that has the css embedded, instead of loading an html file that then tells your browser to load an extra css file). The same can be true in some malls where I live.
[0]: https://github.com/matrix-org/matrix-conf-website/blob/main/... [1]: https://docs.astro.build/en/guides/content-collections/
HTTP/2 does not change this equation much. Server Push is dead, and bypasses caching anyway. Early Hints can help if configured correctly, but still require the client to make the request roundtrip to fetch that asset.
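For reference, an Early Hints exchange looks roughly like this: the server sends an interim 103 response so the browser can start fetching the stylesheet before the final response is ready, but the round trip for the asset itself still happens.

```http
HTTP/1.1 103 Early Hints
Link: </styles/main.css>; rel=preload; as=style

HTTP/1.1 200 OK
Content-Type: text/html

<!doctype html>
...
```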
Astro is super for completely static stuff too. Sometimes static stuff can be complex and there a modern framework like Astro shines.
I will share a couple of files to explain.
The site is almost completely static. It serves minimal JS for:
(1) Prefetching (you can block that and nothing will break)
(2) Mobile menu (you cannot make an accessible mobile menu without JS)
The site is for the docs and demos of a JS library. I want many demos on it, to be able to see what patterns the lib can handle and where things break down. I want to be able to add/remove demos quickly to try ideas. Straight HTML written in index.html files would not allow me to do that (but it is fine for the site where I have my CV, so I just use that there).
This is the Astro component I made that makes it super easy for me to try whatever idea I come up with:
https://github.com/demetris/omni-carousel/blob/main/site/com...
Here is one page with demos that use the component:
https://github.com/demetris/omni-carousel/blob/main/site/pag...
Basically, without a setup like this, I would publish the site with 3 or 4 demos, and I would maybe add 1 or 2 more after a few months.
Cheers!
Again I'm failing to see exactly what Astro is "innovating" (as you and others claim they're doing). There's nothing wrong with taking a workflow and making it really stable/solid, or making it really fast, or similar. But for the claim of being "innovative" to be true, they actually have to do something new, or at least put together stuff in a new way :)
As you said, in the example I shared Astro is an SSG. It happens to use server-side JS but this is irrelevant.
But it is more than that. Astro is an SSG and it is also a *very well made* SSG.
I have used all the usual suspects: Ruby ones, Go ones, Python ones, JS ones. The closest I came to having fun was 11ty, but 11ty is a bit too chaotic for me. Astro is the one that clicked. And the one that was fun to use right from day 1.
I am not a JavaScript person, mind you. JavaScript is not my strongest FE skill. The JS conventions, tricks, and syntaxes of modern FE frameworks, even less so.
So Astro did not click for me because of that. It clicked because of how well it is made and because of how fun it is to use.
Oh! It does this!
Oh! It does that!
Oh! It gives you type safety for your Markdown meta! (What?!)
Oh! It gives you out of the box this optimization I was putting together manually! You just have to say thisOptim: true in the configuration file!
Astro is a very well made tool that improves continually and that aligns with my vision of the platform and of how we should make stuff for the platform.
SSI hasn't changed in 20+ years and it's extremely stable in all webservers. A very tiny attack surface with no maintenance problems. It just does includes of HTML fragments. The perfect amount of templating power to avoid redundancy but also avoid exploitable backends.
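For anyone who hasn't seen it, an SSI page is just HTML with include directives in comments (the fragment paths below are made up); Apache's mod_include and nginx's `ssi on;` both process this syntax:

```html
<!--#include virtual="/fragments/header.html" -->
<main>
  <h1>Page-specific content</h1>
</main>
<!--#include virtual="/fragments/footer.html" -->
```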
It's worked out so wonderfully. By being HTML/CSS centric, it forces a certain predictable organization with your front end code. I handed the frontend to another developer, with a React background, and because it's so consistently and plainly laid out, the transition happened almost overnight.
I don't know that there's a serious solution to it because complexity can't come with zero friction but just my gut feeling was to back out and go with something else for now.
You see the same thing in political conservative/traditional circles, where basically things were good when they were young and things today are bad, but where the cutoff falls depends on when the person was born.
when things decline that's still an accurate representation, not just an artifact of subjectivity
People frequently conflate the two.
Whereas with SvelteKit, it builds happily and does this beautiful catch-all mechanism where a default response page, say 404.html in Cloudflare, fetches the correct page and from user-perspective works flawlessly. Even though behind the scenes the response was 404 (since that dynamic page was never really compiled). Really nice especially when bundling your app as a webview for mobile.
Now, these are just the limitations I can think of, but there are probably more. And to be fair, why "break" the web this way, if you can just use query params: /todo?id=123. This solves all the quirks of the above solution, and is exactly what any server-side app (without JS) would look like, such as PHP etc.
> use query params: /todo?id=123. This solves all the quirks of the above solution, and is exactly what any server-side app (without JS) would look like, such as PHP etc.
We had PATH_INFO in virtually every http server since CGI/1.0 and were using it for embedding parameters in urls since SEO was a thing, if not earlier. Using PATH_INFO in a PHP script to access an ID was pretty common, even if it wasn't the default.
By way of example, here's a sample URL from vBulletin, a classic PHP application <https://forum.vbulletin.com/forum/vbulletin-sales-and-feedback/vbulletin-pre-sales-questions/4387853-vbulletin-system-requirements>[0] where the section, subsection, topic id, and topic are embedded into the URL path, not the query string.
[0] https://forum.vbulletin.com/forum/vbulletin-sales-and-feedba...
But I don't see this as that big of a problem. With this I can mix and choose: SSR dynamic pages, or use the hacky catch-all mechanism. For any reasonably large site you would probably SSR for SEO and other purposes. But for completely offline apps I have to do zero extra work to render them as is.
Personally, I much prefer route paths to query parameters, not just because query parameters look ugly but because they lose hierarchy. Also, you can't then just decide to SSR the pages individually, as they're now permanently fixed to the same path.
Besides, if you catch-all to a 200.html page, how would you serve 404s? Yes, you can integrate a piece of JS in the 200.html file and have it "display" 404, but the original HTTP response would have been 200 (not 404). A lot of bending web standards and technology, and I can see how framework authors probably decide against that. Especially given how much shit JS frameworks get for "reinventing the wheel" :)
Routes and components per page are dynamically created when exporting Next or building Astro static pages. In both frameworks you create the pages/slugs via getStaticPaths. And if ISR is enabled, even new pages (that are not known at build time) are pre-rendered while the server is running.
In Next this is called dynamic routes[1] and in Astro dynamic pages[2]. Catch-all slugs in both Next and Astro are written as [...slug], for example.
[1] https://nextjs.org/docs/pages/building-your-application/rout...
[2] https://docs.astro.build/en/guides/routing/#example-dynamic-...
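A rough sketch of that contract in plain JavaScript (the data and route names are invented for illustration): both frameworks call a function like this at build time and pre-render one page per returned `params` object.

```javascript
// Hypothetical data source; in a real app this might be a CMS or database call.
const todos = [
  { id: "1", title: "Buy milk" },
  { id: "2", title: "Ship release" },
];

// In an actual [id].astro page or Next.js dynamic route this would be
// exported as `getStaticPaths`; here it's a plain function to show the shape.
function getStaticPaths() {
  return todos.map((todo) => ({
    params: { id: todo.id }, // becomes /todo/1, /todo/2 at build time
    props: { todo },         // handed to the page component when rendering
  }));
}

console.log(getStaticPaths().map((p) => p.params.id)); // → [ '1', '2' ]
```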
[0]: https://docs.astro.build/en/guides/routing/#static-ssg-mode
As background, I wanted to make a PoC with NextJS bundled into a static CapacitorJS app somewhat recently and had to give up because of this.
You can try tricking NextJS by transforming the pages into "normal" ones with e.g. query parameters instead of paths, but then you need complicated logic changing the pages as well as rewriting links, as you of course want the normal routes in the web app. Just a huge PITA.
[0]: lockmeout.online
Boiling down the conversation I see in the article, it just seems to be: the browser as an HMI vs the browser as an application runtime. Depending on what you want to do, one might be a better fit than the other. But the points it puts forward are fluff arguments like "it's a breath of fresh air" or "it loads faster".
It's difficult to articulate the source of just how broken the discussion space is; nor have I made a particularly strong argument myself. But I think it's important to keep pushing back on conversations that position frameworks like they are brands winning hearts and minds, à la the fashion industry.
The fashion industry is the best analogy I've seen so far for frontend frameworks. It's obvious that the amount of technical rigor involved with declaring something "content-driven" and "server-first" is approximately zero.
Astro is trying to position itself in opposition to things like Next.js or Nuxt, which are specifically marketed as application frameworks?
And the architecture is more suited to something like a content site, because of the content collections, built-in MDX support, SSR, image handling, and server routing?
What do you mean when you say "a content site"?
To me, "content" == "literally anything that resides in the DOM".
But, clearly we aren't talking about that (I hope).
Fluff arguments do exist, but you can also measure. The site is static with minimal JS on the one page, and a bit more JS on the other page, so nothing surprising in the numbers, and nothing you can say was achieved thanks to the magic of Astro, but I wanted to share them:
HOME PAGE
TTFB: .024s
SR: .200s
FCP: .231s
SI: .200s
LCP: .231s
CLS: 0
TBT: .000s
PW: 108KB
DEMOS PAGE
TTFB: .033s
SR: .300s
FCP: .281s
SI: .200s
LCP: .231s
CLS: 0
TBT: .000s
PW: 174KB
It's really fast, you can edit it with Notepad, and you can probably saturate your bandwidth with a consumer level PC.
It's fluff because, well, our expectations are so unbelievably low. By the time you've bolted on every whizbang dingus leveraging four different languages (two of which are some flavor of Javascript), your twelve page site takes a couple of minutes to compile (what?), and it chokes your three load-balanced AWS compute nodes.
Web applications are hard. I get that. Web sites? They were, by design, incredibly simple. We make them complicated for often unclear reasons.
I appreciate what the Astro folks are trying to do, and it's very clever. But your basic Web site need not require freaking npm in order to "return to the fundamentals of the Web".
You can then use all of those npm packages to do whatever processing on your data that you want to do to generate the content and the pages and then just serve it as HTML.
I'm a backend dev, but Astro is the first time a front end framework has made sense to me for years. It fits my mental model of how the web works - serving up HTML pages with some JS just like we did 20 years ago. Its just that I can have it connect to a DB or an API to pull the data at build time so that I can have it generate all of the pages.
As for build time, I don't have a clue - I haven't used astro (and don't plan to. Datastar + whatever backend framework you want is better). But I'm generally in favour of the direction they're bringing JS frameworks.
I was amazed by how easy it was compared to my experience with Wordpress for this several years ago.
And I can host it for free on something like Netlify and I don’t need to worry about the site being hacked, like with WP.
I even built a very simple git-based CMS so that the client can update the content themselves.
Web dev has really come a long way, despite what a lot of people say.
But at least in Germany there are some agencies doing nothing else.
$550/TB for those who want to save a search.
Another difference and benefit of Astro is the island architecture, compared to other frameworks. This means you can implement micro frontends. Island architecture and micro frontends are features that companies or projects may want if they have multiple teams. For example, one team could be working on the checkout process, another on the shopping basket, and another on product listings.
Now, you can use Astro to combine these components on a single route or page. And you control how these components are rendered. Astro also allows you to share global state between these islands.
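A sketch of what that composition might look like (the team/component names are hypothetical); each island can come from a different framework and get its own hydration strategy:

```astro
---
// storefront.astro: one route composed from several teams' components.
import ProductList from "../teams/catalog/ProductList.vue"; // Vue team
import Basket from "../teams/basket/Basket.jsx";            // React team
---
<ProductList client:load />   <!-- hydrate immediately -->
<Basket client:idle />        <!-- hydrate when the main thread is free -->
```

For the shared global state, the Astro docs point at framework-agnostic stores (nanostores is the commonly suggested one) rather than anything Astro-specific.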
This approach is beneficial because teams can develop and ship a single feature while having full responsibility for it. However, it also has downsides, and similar outcomes can be achieved with non-island architectures.
For instance, if all teams use React, it is common for each team to use a different version of React, forcing the browser to load all these versions. The same issue arises if one team uses Vue, another uses Angular, and another uses React or any other framework.
I'm not fully convinced that it will change the web. It is basically a Next or Nuxt without the library/framework lock-in. And it offers the island architecture, which is usually only beneficial for very large projects.
But you should try it. I have worked with Astro since its first release, now for several years, and I can recommend giving it a try.
It is also a nice tool if you want to get rid of React or Vue and move to web components, or if you want to replace Next or Nuxt. You can do this with Astro, step by step.
I feel a lot of the hype around Astro has more to do with vite than anything else. And there yes, without doubt, vite is amazing.
Like when?
On the positive side their use of web components is a nice bet.
Been on the Next.js journey since v10, lived through the v13 debacle and even now on v15, I've very much cooled on it.
I find both React and Next.js move way too fast and make incredibly radical changes sub-annually. It's impossible to keep up with. Maybe it could be justified if things improved from time to time, but often it just feels like changes for changes' sake.
I did not like how Remix to RR7 transition was made though, my project built using Remix was not an easy upgrade and I am rewriting a lot of it on RR7 now.
Unfortunately in fashion driven industry, it isn't always easy to keep to the basics.
My understanding is that Astro is able to more-or-less take a component from any combo of popular frameworks and render it, whereas Fresh is currently limited to just Preact via Deno. I think the limitation is to optimize for not needing a build step, and not having to tweak the frameworks themselves like Astro does (did?).
I'm not affiliated; I've just looked at both tools before.
Astro brings a friendly UI to maintain and update the sites? Like the WordPress panel and editor.
Many medium businesses don't even need that btw. In many instances marketing people just want to have control over websites, that they should not be given control over, since they usually are incapable of knowing the impact of what they are doing, when they add something like Google tagmanager to their site. They also tend to often break things, when using Wordpress, because they do not understand how it works under the hood, so that side of things is also not without hassle. And then the devs are called to fix what marketing broke. Even with Wordpress. At that point it would often be easier to let the devs build a non-Wordpress site, and any ideas about things that are not just content in markdown files need to be requests for the dev team to evaluate, and possibly work on, when deemed safe and lawful.
Sadly the power dynamics in businesses are often stacked against conscientious developers.
Have you ever worked with any SMBs before? This is at least 5 technical levels above their head. Would make as much sense as telling them, "just use this CLI tool".
We're talking about people who will email you from their phone that the website is down, but it turns out it's just their home internet that is down.
Or think that the website disappeared from the internet, when in reality it's now the #2 result in Google and they never knew you could type a URL directly into the browser.
A WP deployment on a simple shared hosting plan like that could run itself without needing a dev or sysadmin.
Maybe in some cases but that hasn't been my experience at all (or the experience of all the devs I know IRL).
Just a couple of weeks ago one of my clients installed a plugin which didn't allow users to log in.
And then come the legal fees for making the site actually comply with the law, such as the GDPR. Those fees go up because people want to do things that have to be declared to the site's visitors, and they want reassurance that it's all handled properly.
And then come the costs for paying a dev anyway, to fix things that they break or that become broken over time.
So no, $9.99/month is very, very far from the realistic price these businesses pay.
I'm not saying WP is great. Taking over a WP project from someone else can be daunting in tech debt and weird choices. But in terms of having a simple brochure website for businesses that get < 10k weekly visitors, it's pretty quick, cheap, and easy.
No real maintenance? So either you let your PHP version and plugins become outdated, or you sooner or later have to fix things breaking. Maybe you simply did not notice any breakage, because you don't do maintenance for customers?
A brochure website? Does that mean people enter their e-mail to be sent a brochure? (Then paragraph 1 applies again) Or brochure meaning, that you merely display information on pages and that's it?
I think for small info sites what you describe can be true, but for anything slightly larger not, especially not for small businesses.
Eg. https://www.gatsbyjs.com/docs/glossary/headless-wordpress/
That's a really low bar. Why not static pages? Why even use a framework at all if you're thinking of using Astro?
Using a framework has upsides over writing static pages manually. Most notably, you can decompose your website into reusable components which makes your implementation more DRY. Also, you can fluently upgrade to a very interaction-heavy website without ever changing tech or architecture. But that's just what I value. I whole-heartedly recommend trying it out.
If you use static pages, how do you make sure that shared UI like navbars all update if you decide to make a change?
<html>
{% include "components/head.html" %}
<body>
{% include "components/navbar.html" %}
...
</body>
</html>
Some even allow you to pass variables, so something like: {% include "components/button.html" text="example" url="https://example.com" %}
[1] https://htmx.org/essays/template-fragments/#known-template-f...
Yes, I've used stuff like Templ for Go or Razor Pages for .NET.
Even if the raw HTML rendering performance is significantly better, there are other factors to consider in terms of DX.
1) Most backend languages will not hot reload modules in the client which is what Vite gives you.
Very often the whole backend application needs to be recompiled and restarted. Even with something like the .NET CLI, which does have a hot reload feature (and it's absolute garbage, btw), the whole page needs to be reloaded.
PHP has an advantage here since every request typically "runs the whole application".
But even with PHP, JS and CSS assets do not have hot reload unless you're also running Vite in parallel (which is what Laravel does).
With Astro you can run a single Vite dev server which takes care of everything with reliable and instant hot reload.
2) With Astro you will get islands which are simply not feasible with any non-JS backend. Islands are so much more powerful than old school progressive enhancement techniques. When we were using eg jQuery 15+ years ago it was a massive pain to coordinate between backend dynamic HTML, frontend JS code, and CSS. Now with islands you can encapsulate all that in a single file.
3) You also get CSS co-location, meaning you can write an Astro server component with its own CSS scoped to that particular piece of markup. Again, CSS co-location is a huge win for DX. These days I write vanilla CSS with PostCSS, but with Astro it's trivial to integrate any other CSS workflow: Tailwind, SCSS, etc.
4) Finally, you have to consider bundling of frontend assets. I don't think it's an exaggeration to say that solutions like Vite are really the best you can get in this space. Internally it uses Go and Rust but it's all abstracted for you.
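To make points 2 and 3 concrete, here's a minimal sketch of an Astro component (the component name, file path, and class name are made up): the markup renders to plain HTML on the server, the CSS is scoped to that markup, and a single framework island is hydrated on the client.

```astro
---
// Frontmatter runs on the server only.
// Counter is a hypothetical React/Preact/Svelte island component.
import Counter from '../components/Counter';
---
<section class="hero">
  <h1>Rendered to plain HTML on the server</h1>
  <!-- client:visible defers hydration until the element scrolls into view -->
  <Counter client:visible />
</section>

<style>
  /* Scoped by Astro to this component's markup only */
  .hero {
    padding: 2rem;
  }
</style>
```

The `client:*` directives (`client:load`, `client:idle`, `client:visible`, ...) are what make islands opt-in: everything outside `<Counter />` ships as zero-JS HTML.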
If you have a use case where you really need exceptional HTML rendering performance in a monolithic application, Astro (or really anything in JS) is definitely a bad fit. But you can easily run an Astro server app on e.g. Cloudflare Workers, which would work in many of those use cases too, reducing latency and adapting dynamically to load.
Edit: Ah, finally, it loaded after about 30 seconds.
Edit 2: Fairly neat.
Thank you! Appreciate you sticking around and trying it again :) I am fairly proud of it, even in its simplicity.
> Does it make it easier to throw in necessary JS (e.g. for comments)?
With Astro you can combine HTML, CSS and JS in a single file (.astro). You write plain JS (or TypeScript) within a <script> tag. There you can, e.g., import your comment library, point to a separate .js/.ts file, or write whatever client-side logic you want.
See the docs for example JS usage in Astro components:
https://docs.astro.build/en/guides/client-side-scripts/#web-...
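As a rough sketch (the element id and text are made up), a client-side `<script>` in a `.astro` file looks like this; Astro processes it with Vite and ships it to the browser as a bundled module:

```astro
---
// Frontmatter runs on the server; nothing here reaches the client.
const title = "Comments";
---
<h2>{title}</h2>
<div id="comment-box"></div>

<script>
  // Plain client-side JS/TS, bundled by Astro.
  const box = document.getElementById('comment-box');
  if (box) box.textContent = 'No comments yet.';
</script>
```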
You should try it out rather than just comparing.
Speed is probably the same as Jekyll, but relative to my React/Vite and Next.js apps it's about 10 times faster.
I would definitely use Astro for more complicated websites than content driven - but would probably return to nextjs or more hefty full stack solutions for complicated web apps.
Potentially the heuristics would be about the level of user state management - e.g. if you're needing to do various workflows vs just presenting content.
But if my "website" is an application, Javascript makes the whole user experience better, if implemented well. It doesn't matter that the user will wait for 1 more second if they will have to spend the entire day working on it.
How else can you fully grasp what's possible on that platform and the costs of different abstractions?
That said, Astro also seems to be developed under a venture-backed company. Is it still less likely to end up like Next.js and React under Vercel's influence?
This is satire, right? If only there was any other server side language that could do the same and produce static compliant super-light HTML-first pages!
I'm aware there's a new PHP web framework that's somewhat similar to Astro, but I can't recall the name.
Astro gives you sensible defaults out of the box. It’s designed for modern web development, so things like partial hydration, automatic image optimisation, and using components from different frameworks just work.
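For instance, the built-in image optimization is just a component import. A minimal sketch (the asset path and alt text are examples):

```astro
---
// astro:assets is Astro's built-in image pipeline.
import { Image } from 'astro:assets';
import hero from '../assets/hero.png'; // example local asset
---
<!-- Astro fills in width/height and generates optimized formats at build time -->
<Image src={hero} alt="Product hero shot" />
```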
And, also, "php" in your question could be ruby, go, C or anything else that runs on the server.
I prefer htmx or, better yet, Datastar which are both small, backend agnostic, js libraries for making ssr html interactive to varying degrees. You could, in theory, use astro with them but probably better to just use something else.
It’s php for javascript devs?
Astro needs to run on a server that can run Node, etc.
And php can equally have its html cached.
It needs to run on your computer to generate the HTML, but you can just run npm run build and then copy the contents of the dist folder to your Apache server, or wherever you want to host it.
At least, that's how I do it.
I haven't used PHP for about 20 years, so I'm sure it's changed a lot.
Do you know how you can do this in spring? Let's say I used Thymeleaf, is there a maven target I can use to walk over a database and generate every iteration of a html website?
I guess I'd argue "Traditional Frameworks" were the ones that never stopped doing this. Laravel, Django, Rails etc. Then the SPA frameworks came along and broke everything.
Also - what on earth is "f*"? I originally assumed it was shorthand for "fuck" but is "fuck dream" a common expression? And wouldn't you normally write it as "f***"?
I was thinking "F**ing", to delve deeper into the meta discussion.
Can it be reliable for production use? Yes.
Can non-techy make it reliable for production use? Who knows.
E-commerce and marketing sites are at opposite ends of the complexity spectrum.
Astro would be perfect for a marketing page (a non-techy could approach that) and doable for e-commerce (for an experienced dev).
Whether it SHOULD be used for e-commerce would be another question.
I prefer htmx and, better yet, datastar as they're backend-agnostic.
Datastar does everything htmx does and much more. And, iirc, is also smaller. Just explore their site, docs, essays etc
Seriously. This is how things are done in most non-JS frameworks.
Basically, not suitable for anything complex.
What makes it so great is not that it serves a particular niche (like "content-driven websites") but that it provides a developer experience that makes it incredibly easy to scale from a static website to something very complex and interaction-heavy without compromising UX.
Same thing happened with microservice architecture.
I can't with this goddamn LLM blog posts, it just drowns everything.
Sucks when everything you write sounds like a bot because you're autistic.
The fact that LLMs write like that is proof that people write like this too, since LLMs produce statistical averages of the writing they were trained on.
I'm not sure why em dashes are so popular, though. I don't think I've ever seen human writing that had as many em dashes as LLMs use.
Feeling less-than-human isn't great.
>With Astro you're not locked into a single way of doing things. Need React for a complex form? Chuck it in. Prefer Vue for data visualisation? Go for it. Want to keep most things as simple Astro components? Perfect.
>What struck me most after migrating several projects is how Astro makes the right thing the easy thing. Want a fast site? That's the default. Want to add interactivity? Easy, but only where you need it. Want to use your favourite framework? Go ahead, Astro won't judge.
>Developer experience that actually delivers
I'm getting downvoted, so I guess I'm wrong. It's just bland and formulaic in the way ChatGPT usually outputs. Sorry to the author if I'm wrong.