The reason it's naive is that although you can use Datastar to drive SVG, a canvas, or even a game engine, the minute you do, people think you're doing magic game-dev sorcery and dismiss your demo. I wanted to show that your average CRUD app with a bunch of divs is going to do just fine.
I break it down in this post.
https://andersmurphy.com/2025/04/07/clojure-realtime-collabo...
If you open Chrome's DevTools and throttle the site to 3G, it will still run fine.
Rendering on the server like this will be faster for low-end devices than rendering on the client, as the client doesn't have to run or simulate the game; it just gets raw HTML to render.
Effectively, the bulk of the work on the client will be done by the browser native rendering code and native compression code.
The other thing that might not be obvious is that Brotli compression is not set to 11; it's set to 5, so the CPU cost is similar to gzip. The compression advantage comes from compressing the SSE stream as a whole. Tuning the shared window size costs memory on both client and server, but gives you a compression ratio of 150-250:1 (vs 30:1), at the cost of 263kb on both server and client (for context, gzip has a fixed window of 32kb). This not only saves bandwidth and makes the game run smoothly on 3G, it also massively reduces CPU cost on both client and server. So it can run on lower-end devices than a client-heavy browser app.
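The win from compressing the SSE stream as one continuous context (rather than each message alone) is easy to demonstrate. Here's a minimal sketch using Python's stdlib zlib as a stand-in for Brotli -- same principle, and Brotli's larger tunable window just amplifies the effect:

```python
import zlib

# Hypothetical SSE-style frames: lots of near-identical HTML fragments,
# which is exactly what a morph-over-SSE app sends.
frames = [
    f'event: update\ndata: <div id="cell-{i}">on</div>\n\n'.encode()
    for i in range(1000)
]
raw = sum(len(f) for f in frames)

# Compressing each message on its own: tiny inputs barely compress.
per_message = sum(len(zlib.compress(f, 5)) for f in frames)

# One streaming compressor over the whole connection: repeats across
# frames are deduplicated against the shared window.
c = zlib.compressobj(5)
streamed = len(b"".join(c.compress(f) for f in frames) + c.flush())

print(raw, per_message, streamed)  # streamed is far smaller than either
```

Brotli at quality 5 with a larger window (e.g. lgwin=18, roughly the 263kb mentioned above) pushes the same idea much further, since more of the stream's history stays referenceable.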
So server driven web apps are better for low end devices. The same way you can watch YouTube on a low end phone but not play some games.
I'm not sure about that - it's a hypothesis with merit, but as an anecdote, my Firefox on a new reasonably beefy Android gets quite laggy and unresponsive.
I'm sure you can write hyper-specific, hand-tuned code for this example that will run better, but you'll lose all of the flexibility (and you have to load that hand-tuned code first, too). I could send down purely a string of data and have either an expression swap classes or wrap it in a web component (you can do both in Datastar). But, in short, passing JSON and running your logic on the client adds up.
Like anything, though: test and measure.
If I didn't know better, I'd say this was an April Fool's joke.
edit: damn, purple civilization got hands
If this is "the Future", I'm branching off to the timeline where local-first wins.
There's built-in cross-compilation for building a static binary across Windows/Mac/Linux.
It's the number 1 feature in Go, lol.
Stop strawmanning.
I assume you are talking about WebAssembly/WASM/etc?
Foreword: I'm not super up to date on the state of these things. I'll refer to it as WASM from here on out, but if that's not the right term, substitute "whatever it is that lets you write code and compile it to something that runs in the browser that isn't JS".
I don't think the future is clear at all. Honestly, I'd expect to see some kind of "Compile your JS ahead of time to this WASM bundle" before we see web developers switching over in droves to some other language that can be compiled to WASM.
Unless you take over full rendering, my understanding is you have to provide some kind of WASM<->DOM bridge to interact with the page; my knowledge may be dated.
I write web apps in Javascript (Typescript), real "apps", not "everything should be a SPA just because", and would be interested in anything that improves performance and/or developer experience. There are some data-crunching operations that might run faster in something WASM and/or some aspects that I'd love to share between client and server (and the server can't run JS in this case). That said, everything I have seen is a significant downgrade in developer experience for something that is semi-supported.
I look forward to WASM support maturing and the developer experience improving. To my knowledge there is not a Vue/React-WASM-type framework out there yet or any framework for building web apps in WASM (without starting from a blank canvas).
Not sure if these qualify, but these Rust web frameworks use wasm:
Ripley: These techs are here to protect you. They're frameworks.
Newt: It won't make any difference.
Also, multiplayer for free on every page due to SSE (if you want it).
I tried HTMX and I found that it is really, really hard to manage complexity once the codebase gets big.
Is there an example of Datastar being used with Go in a highly interactive application that goes beyond just a TODO app so I could see how the project structure should be organized?
However, you quickly realise the limitations. You can even see this in the Turbo 8 demo (see this issue https://github.com/basecamp/turbo-8-morphing-demo/issues/9). You can try to fix this with `data-turbo-permanent`, but you'll then run into another issue: you can't clear that field without resorting to JavaScript. Which brings me to the next thing: I found I was still writing quite a bit of JavaScript with Turbo. Like HTMX pushes you toward alpine.js/hyperscript, Turbo pushes you toward Stimulus.js.
Turbo.js is not push based; it's mostly polling based. Even when you push a refresh event, it prompts the client to re-fetch the data. Sure, this is elegant in that you re-use your regular handlers, but it's a performance nightmare as you stampede your own server. It also prevents you from sharing renders between clients (which is what opens up some of the really cool stuff you can do with Datastar).
I was using turbo.js with SSE, so no complaints there. But most Turbo implementations use websockets (which, if you have any experience with websockets, is just a bad time: messages can be dropped, no auto-reconnect, not regular HTTP, proxies and firewalls can block it, etc.).
Finally, according to the docs Turbo Native doesn't let you use stream events (which is what gives you access to refresh and other multiplayer features).
I like Turbo; I'd use it over React if I were using Rails. I use it for my static blog to make the navigation feel snappy (Turbo Drive). It gives you a lot without you having to do anything. But the minute you start working on day-2 problems and you're not using Rails, the shine fades pretty quickly. There are three ways to do things: frames, streams and morph. None of them are enough to stop you from having to import Stimulus or Alpine, and honestly it's just a bit of a mess.
If you need help with Turbo, the best blog posts are from Radan Skoric (https://radanskoric.com/archives/).
Specifically these:
https://radanskoric.com/articles/turbo-morphing-deep-dive-id...
https://radanskoric.com/articles/turbo-morphing-deep-dive
I think he's also got a book on turbo he's releasing soon (if you go with turbo it's probably worth getting).
Those posts helped me grok Turbo 8 morph and ultimately sold me on Datastar. Morph, signals and SSE are all you need.
As for mobile, I'll just wrap it in a webview (as an ex-native mobile dev, I can tell you it will lead to a lower-maintenance app than native or React Native).
TLDR: Datastar solves all the problems I ran into with Turbo and more. It's faster, smaller, and simpler, with more examples and better docs.
It'd be hilarious if it weren't so deeply discouraging and tragic.
You might have heard of Eroom's Law, which is Moore's Law backwards. It states that software bloat will soak up all gains from Moore's Law.
Well, with cloud we now have a whole industry with an economic incentive to put Eroom's Law into practice, since cloud makes more money the more inefficient things can become. So to do what a simple local app could do in a minute must now be done across five different services with microservice backends, etc.
Potentially this could be solved with some client-side cache, but still..
I really enjoy HTMX, and it's a blessing for my small-scale reactive web interfaces, but I can immediately tell: "Well, this is hard to organize in a way that will scale well with complexity. It works great now, but I can tell where the limits are." And when I had to add alpine.js to do client-side reactivity, it immediately became obvious that I'd love to have both sides (backend and frontend) unified.
I still need more time and opportunities to roll out some stuff with Datastar in it, but at the moment I'm convinced Datastar is the way to go.
For reference, my typical "web tech stack": Rust, axum, maud, datastar, redb.
> SSE enables microsecond updates, challenging the limitations of polling in HTMX.
How is this true? SSE is just the server sending a message to the client. If server and client are in opposite sides of the world, it will not be a matter of microseconds...
Say your ping is 100 (units are irrelevant here). It will take 100 before you see your first byte, but if the server is sending updates down that connection, you will have data at whatever rate the server can send it. Say the server sends every 10.
Then you will have updates on the client at 100, 110, 120, 130, etc.
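The arithmetic here can be sketched directly (the numbers are the hypothetical ones from the comment, not measurements):

```python
ping = 100       # time before the first byte arrives
interval = 10    # server pushes an update down the open SSE connection every 10

# After the initial latency, arrival times are paced by the server's send
# rate, not by a fresh round trip per update (unlike request/response polling).
arrivals = [ping + i * interval for i in range(4)]
print(arrivals)  # [100, 110, 120, 130]
```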
It's not quite right. You'll never have updates in microseconds even if your ping is, say, 7ms.
At best you can be ~2-4x as fast as long polling on HTTP/1 -- an order of magnitude is a ridiculous statement.
In a way, you can with optimistic updates. That requires having a full front end stack, though, and probably making the app local-first if you really wanted to hammer that nail.
There's always the cost of the round trip to verify, which means planning a solid roll-back user experience, but it can be done.
But I'm unfamiliar with any polling pattern where poll requests are expected to overlap. If updates take microseconds, does that mean I can comfortably run 10,000 of these in a second?
I even think datastar looks cool, but I just think that quote is misleading, and I still think that.
I'd like to see some realistic results, like the kind measured by this[1] type of benchmark. "Microsecond" updates sounds like microbenchmarks with very carefully crafted definitions.
[1]: https://krausest.github.io/js-framework-benchmark/2025/table...
I found this talk really interesting. It's a cool framework for very interactive applications.
For people who are looking for HTMX alternatives, I think Alpine AJAX is another choice if you are already using AlpineJS
If this is the framework of the future, cyber criminals are going to have a bright future!
I have ideas for ways around this, but it would mean per-language template middleware.
I also didn't have a problem with CSP and HTMX.
Nor with SvelteKit.
I'm not sure why you think these are all equivalent to DataStar's hard requirement on unsafe-eval.
FYI, this is the reason I didn't try out DataStar.
unsafe-eval constrained to function constructors, without inline scripts, is only a concern if you are rendering user-submitted HTML (the most common case I see is markdown). Regardless of your CSP configuration, you should be sanitizing that user-submitted HTML anyway.
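For reference, the kind of policy being discussed might look like this (a sketch, not a recommendation; the directives beyond script-src are placeholders to adjust for your app):

```
Content-Security-Policy: script-src 'self' 'unsafe-eval'; object-src 'none'
```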
Web development may not be your thing.
HTMX's OOB swap appears to be an afterthought.
It seems like Datastar is doing away with that entirely and binding the UI more tightly to JavaScript to function correctly.
What are they looking for (in your experience)?
In my experience, most people use an app (website) to solve some problem (buy something, pay taxes, whatever). They care more about functionality than how smooth the loading animation and transition was. Progressive enhancement seems like a very good way to build something people actually use (and rely on).
There's nothing wrong with using JavaScript. There's nothing wrong with depending on JavaScript for specific functionality. However, I don't think it's acceptable to completely break down if that JavaScript fails to load.
With SPAs and "modern" web development practices, it's all or nothing. Either everything works, or nothing works.
Not with things I build.
I think you may be overthinking the appearance aspect of a failure state. The application doesn't need to look the same when CSS fails to load. However, the application should be functional.
This thinking carries over to the backend as well. My application server doesn't require all services to be up and running. Instead, it's able to query and tell what is working and what isn't. That information is then bubbled down to the UI in one way or another. That may mean certain functionality is unavailable (ex, if solr isn't reachable then search is disabled).
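That pattern -- service health gating individual features rather than the whole app -- can be sketched like so (the service names and mapping are hypothetical, echoing the solr example above):

```python
# Hypothetical health report gathered by the application server.
services = {"solr": False, "postgres": True, "smtp": True}

# Map each user-facing feature to the services it needs; a feature stays
# enabled only if all of its dependencies are up, so one outage degrades
# one feature instead of taking the whole UI down.
feature_deps = {
    "search":   ["solr"],
    "checkout": ["postgres"],
    "signup":   ["postgres", "smtp"],
}
features = {
    name: all(services[dep] for dep in deps)
    for name, deps in feature_deps.items()
}
print(features)  # search is disabled, the rest keep working
```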
I've never liked where those code bases ultimately end up, which seems to be a twisted maze of hacky solutions to make a bunch of poorly aligned code work together.
This is not a fresh perspective. I used to be on "team everything on the server", but it's a mistake to insist on that today.
The parent comment wanted a way to "serve" requests on the client device, so I suggested a way to do that that is also compatible with normal d* usage...
How to get the data to the client and keep it in sync is an entirely other problem that they'd face with any framework/approach. Would be foolish for some applications and perfectly possible for others. Offline/local-first apps are getting considerable attention these days, for good reason.
I'm a big fan and cheerleader of everything you've done. Keep up the great work.
[0] https://www.markus-lanthaler.com/hydra/
It's great seeing a rise of web stacks that embrace small libraries and native web technologies, and reject mega monolithic frameworks. It's about time the industry moved away from the React/Vue/npm insanity.
I'm also intrigued by Nue, but Datastar fits nicely in a full stack solution. The choice of SSE is brilliant. It's great tech that's generally underutilized.
What Datastar is great for is this: throw all that overcomplicated frontend junk away and concentrate on the real innovation, or simply get things done. Not to forget: ignore the dependency hell of every nodejs-based project you encounter. After the low-code initiative, now it's time for the "no-deps" initiative. Deprecate npmjs.com.
I feel if anything Datastar is targeted at veteran devs who are done with endless ecosystem churn, who want to use their favorite backend language and make performant, full-stack, realtime collaborative apps.
If you haven't run the gauntlet you probably won't see the appeal.
convinced me to maybe try datastar out next prototype where it's applicable. reminds me of htmx with hyperscript, if hyperscript wasn't kind of a joke. to clarify, the author of hyperscript calls it a sort-of joke and i'm not trying to slam it.
i've used htmx and hyperscript for prototyping because it's entertaining and the novelty is motivating. i found similar issues as the author of this post, where i talked myself out of using htmx for the product by the end of prototyping.
all that said, we've been evaluating a react sdk provided by ESRI (experience builder), and diving into that makes me stare longingly at datastar, where it seems like i could use signals to update client-side data from 3rd-party apis
> Backend Setup
> Data star uses Server-Sent Events (SSE) to stream zero or more events from the web server to the browser. There’s no special backend plumbing required to use SSE, just some syntax. Fortunately, SSE is straightforward and provides us with some advantages.
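The SSE "syntax" the quote refers to really is just newline-delimited text on the wire. A minimal sketch of an event formatter (the event name and payload here are placeholders, not Datastar's actual protocol events):

```python
def sse_event(event: str, data: str) -> str:
    """Format one Server-Sent Event: an `event:` line, one `data:` line
    per line of payload, and a blank line to terminate the event."""
    lines = [f"event: {event}"]
    lines += [f"data: {chunk}" for chunk in data.splitlines()]
    return "\n".join(lines) + "\n\n"

print(sse_event("update", '<div id="x">hi</div>'))
```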
As a Django developer, this is very far from true. With HTMX I need almost no backend changes (mainly in template code), whereas Datastar would require me to rewrite things, and it may not be possible to implement at all.
If laravel can do it django can.
Looks good though, like Remix except without those pesky route handlers. Then again, I didn't get around to using the RR version. I wish the docs had a "differences with RR" section.
Original thinking is sorely lacking in the majority of the web dev community.
There was a funny convo about this a bit
It took us a while back in the day after the XHTML arc, but for sure it'll be ok.
I know this looser SGML universe might feel a little kooky, but trust me it wears baggy trousers, rocks gifs with a hard g and offers great <hugs>.
<thank><you>
<nothankyou/>
You might be using a template system for that. E.g. Jinja2, mustache, askama, templ, etc., depending on your backend language and libraries.
[1] https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
In fact, a lot of the patterns in the likes of HTMX will be standardised.
Not sure about the negativity. It's a superset of HTMX and it's 40% smaller with more features. Can you please tell me the issue? I'm too dumb dumb grug, please teach me senpai
it's another word for event
(or, if you wish, a stream where you have an Option<Event> at each timestamp)
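That view of a signal -- a value that may or may not be present at each tick -- can be sketched in Python, using None for the "no event" case (the timeline below is made up for illustration):

```python
from typing import Optional

# A discrete timeline: at each timestamp there is either an event or None,
# the Python analogue of Option<Event>.
timeline: list[Optional[str]] = [None, "click", None, None, "keyup", None]

# Filtering out the None entries recovers the plain event-stream view.
events = [e for e in timeline if e is not None]
print(events)  # ['click', 'keyup']
```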