% curl https://xslt.rip/
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/index.xsl" type="text/xsl"?>
<html>
<head>
<title>XSLT.RIP</title>
</head>
<body>
<h1>If you're reading this, XSLT was killed by Google.</h1>
<p>Thoughts and prayers.</p>
<p>Rest in peace.</p>
</body>
</html>
The author is a frontend designer and has a nice website, too: https://dbushell.com/
I like the personal, individual style of both pages.
Heh, I honestly thought the domain name stood for "D-Bus Hell" and not their own name.
Anecdotal, but I don't measure my productivity, because it's immeasurable. I don't want to be reduced to lines of code produced or JIRA tickets completed. We don't even measure velocity, for that matter. Plus when I do end up with a task that involves writing something, my productivity depends entirely on focus, energy levels and motivation.
It felt like it got in the way about half the time. The only place I really liked it was for boilerplate SQL code... when I was generating schema migration files, it did pretty good at a few things based on what I was writing. Outside that, I don't feel like it helped me much.
For the Google search results stuff, Gemini, I guess... It's hit or miss... sometimes you'll get a function or a few snippets that look like they should work, but with no references to the libraries you need to install/add, and even then the code may contain errors.
I watched a friend who is really good with the vibe coding thing, but it just seemed like a frustrating exercise in feeding it the errors/mistakes and telling it to fix them. It's like having a brilliant 10-year-old with ADD as a junior developer.
And doesn’t bother you when the tab is closed.
I can see why a lot of high school and college kids are going to need to claw.
Now, I could see a single person potentially managing 2-3 AI sessions across different projects as part of a larger application. Such as a UI component/section along with one or more backend pieces. But then, you're going to need 2-3x the AI resources, network use, etc. Which is something I wouldn't mind experimenting with on someone else's dime.
He’s not only lying to you, he’s also lying to himself.
Recent 12-month studies show that less than 2% of AI users saw an increase in work velocity, and those were only the very top-skilled workers. Projection also indicated that of the other 98%, over 90% of them will never work faster with AI than without, no matter how long they work with AI.
TL;DR: the vast majority of people will only ever be slower with AI, not faster.
I agree it is a clever way. But it also shows exactly how hard it is to use XML and XSLT in a "proper way": formally, everything is fine done this way (except that the server is sending 'content-type: application/xml' for /index.xsl; it should be 'application/xslt+xml').
Almost all implementations in XML and XSLT that I have seen in my career showed a nearly complete lack of understanding of how they were intended to be used and how they should work together. Starting with completely pointless key/value XMLs (I'm looking at you, Apple and Nokia), through call-template orgies (IBM), to "yet-another-element-open/-close" implementations (almost every in-house application developed in PHP, Java or .NET).
I started using XSLT before the first specification had been published. Initially, I only used it in the browser. Years later, I was able to use XSLT to create XSDs and modify them at runtime.
It was not rendering that killed other browsers. Rendering isn't the hard part. Getting most of rendering working gets you about 99% of the internet working.
The hard part, the thing that killed alternative browsers, was javascript.
React came out in 2013, and everyone was already knee-deep in earlier-generation JavaScript frameworks by then. Google had already shipped the V8 engine, which brought the sluggish web back to some sense of usable. Similarly, Mozilla had to spend that decade engineering their JavaScript engine to claw itself out of the "Firefox is slow" gutter that people insisted it was in.
Which is funny because if you had adblock, I'm not convinced firefox was ever slow.
A modern web browser doesn't JUST need to deal with rendering complexity, which is manageable and doable.
A modern web browser has to do that AND spin up a JIT compiler engineering team to rival Google or Java's best. There's also no partial credit, as javascript is used for everything.
You can radically screw up rendering a page and it will probably still be somewhat usable to a person. If you get something wrong about javascript, the user cannot interact with most of the internet. If you get it 100% right and it's just kind of slow, it is "unusable".
Third party web browsers were still around when HTML5 was just an idea. They died when React was a necessity.
If you want to start a new browser project, and you're not interested in writing a JS engine from scratch, there are three off-the-shelf options there to choose from.
It’s the extend part of embrace, extend, extinguish. The extinguish part comes when smaller and independent players can’t keep up with the extend part.
A more direct way of saying it is: adopt, add complexity cost overhead, shake out competition.
Thing is, you couldn't swing a dead cat in the '00s without hitting XML. Nearly every job opening had XML listed in the requirements. But since the mid-2010s you can live your entire career without the need to work on anything XML-related.
COBOL code is also still there.
Sorry, web frontend is not the "whole XML tech stack", despite popular belief.
And yes, all of the above are mainstream in their respective industries.
I don’t know how many times I had to manually write <![CDATA[ … ]]>
I know all markup languages have their quirks, but XML could become impressively complex and inscrutable.
... but the legibility and hand-maintainability was colossally painful. Having to spell out matching closing tags, even though the language semantics required that the next closing tag close the current context, was an awful, awful amount of typing.
I now wonder if XSLT is implemented by any browser that isn't controlled by Google (or derived from one that is).
Edge IE 11 mode is still there for you. Which also supports IE 6+ like it always did, presumably. They didn’t reimplement IE in Edge; IE is still there. Microsoft was all in on xml technologies back in the day.
I personally don't quite believe it's all that black and white, just wanted to point out that the "open web" argument is questionable even if you accept this premise.
But I think that this website is being hyperbolic: I believe that Google's stated security/maintenance justifications are genuine (but wildly misguided), and I certainly don't believe that Google is paying Mozilla/Apple to drop XSLT support. I'm all in favour of trying to preserve XSLT support, but a page like this is more likely to annoy the decision-makers than to convince them to not remove XSLT support.
[0]: https://www.maxchernoff.ca/tools/Stardew-Valley-Item-Finder/
[1]: https://www.maxchernoff.ca/atom.xml
[2]: https://github.com/whatwg/html/pull/11563#issuecomment-31909...
[3]: https://github.com/gucci-on-fleek/lua-widow-control/blob/852...
You are on some very, very small elite team of web standards users, then.
For my Atom feed, sure. I'm already special-casing browsers for my Atom feed [0], so it wouldn't really be too difficult to modify that to just return HTML instead. And as others mentioned, you can style RSS/Atom directly with CSS [1].
For my Stardew Valley Item Finder web app, no. I specifically designed that web app to work offline (as an installable PWA), so anything server-side won't work. I'll probably end up adding the JS/wasm polyfill [2] to that when Chrome finally removes support, but the web app previously had zero dependencies, so I'm a little bit annoyed that I'll have to add a 2MB dependency.
[0]: https://github.com/gucci-on-fleek/maxchernoff.ca/blob/8d3538...
There is actually an example of such a situation. Mozilla removed the Adobe PDF plugin support a long time ago and replaced it with pdf.js. It's still a slight performance regression for very large PDFs, but it is enough for most use cases.
But the bottom line is "it's actually worth doing because people are using it". They won't actively support a feature that few people use, because they don't have the people to support it.
Companies always cut too deep. If only they were making enough money to properly support Chrome.
/sarcasm
To be extra clear: I want to have <a href="feed.xml">My RSS Feed</a> link on my blog so everyone can find my feed. I also want users who don't know about RSS to see something other than a wall of plain-text XML.
As I mention in my other comment to you, I don't know why you want an RSS file to be viewable. That's not an expected behavior. RSS is for aggregators to consume, not for viewing.
> I don't know why you want an RSS file to be viewable.
Well, spend two seconds thinking about it. Or just, like, read what they wrote.
<link rel="alternate" type="application/rss+xml" title="Blog Posts" href="/feed.xml">
Someone who wants to subscribe can just drop example.com/blog into the feed reader and it will do the right thing. The interactive "RSS Feed" link could then go to an HTML web page with instructions for subscribing and/or a preview.
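For what it's worth, the discovery side is easy to sketch. A rough, purely illustrative example of how a reader might resolve a blog URL to its feed via that <link> element (not any real reader's code, and it ignores CORS, redirects and other MIME types):

// Given a blog URL, find the advertised feed URL via <link rel="alternate">.
async function discoverFeed(pageUrl) {
  const html = await fetch(pageUrl).then((r) => r.text());
  const doc = new DOMParser().parseFromString(html, "text/html");
  const link = doc.querySelector(
    'link[rel="alternate"][type="application/rss+xml"], ' +
      'link[rel="alternate"][type="application/atom+xml"]'
  );
  // Resolve a relative href like "/feed.xml" against the page URL.
  return link ? new URL(link.getAttribute("href"), pageUrl).href : null;
}

discoverFeed("https://example.com/blog").then(console.log);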
Intentionally in a humourous way, yes
Where it lost me was:
>RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by multiple government sites. Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
I mean, yes, Google lobbies, and certainly can lobby for bad things. And though I personally didn't know much of anything about XSLT, from reading a bit about it I'm certainly ready to accept the premise that we want it. But... is Google lobbying for an XSLT law? Does "control legislation" mean deprecating a tool for publishing info on government sites?
I actually love the cheeky style overall, and would say it's a brilliant signature style to get attention, but I think implying this is tied to a campaign to control laws is rhetorical overreach even by its own intentionally cheeky standards.
I believe the intended meaning, in context, is "... for publishing the literal text of laws on government sites".
But that leaves us back where we started, because characterizing that as "control the laws" is an instance of the rhetorical overreach I'm talking about, strongly implying something like literal control over the policy-making process.
At least, this is how I read that part.
It would be ridiculous to suggest that anyone's access to published legislation would be threatened by its deprecation.
This is probably the part where someone goes "aha, exactly! That's why it's okay to be deprecated!" Okay, but the point was supposed to be what would a proponent of XSLT mean by this that wouldn't count as them engaging in rhetorical overreach. Something that makes the case against themselves ain't it.
Also towards the bottom of the site:
> Tell your friends and family about XSLT.
It's hard enough telling them to also get off Instagram and Whatsapp and switch to Signal to maintain privacy. I'm going to have a hard time explaining what XSLT is!
You cannot “convince decision-makers” with a webpage anyway. The goal of this one is to raise awareness on the topic, which is pretty much the only thing you can do with a mere webpage.
I guess I'm not seeing how that follows. It can still be complementary to the overall goal rather than a failure to understand the necessity of persuasion. I think the needed alchemy is a serving of both, and I think it actually is trying to persuade at least to some degree.
I take your point with endangered animal awareness as a case of a cause where more awareness leads to diminishing returns. But if anything that serves to emphasize how XSLT is, by contrast, not anywhere near "save the animals" level of oversaturation. Because save the animals (in some variation) is on the bumper sticker of at least one car in any grocery store parking lot, and I don't think XSLT is close to that.
Google has used its weight to build a technically better product, won the market, and is now driving the whole web platform forward the way it likes.
This has nothing to do with the cost of maintaining the browser for them.
I'm not sure what else it would be about - I don't see why they would especially care about removing XSLT support if cost isn't a factor.
There is this weird idea that wealthy people & corporations aren't like the rest of us, and no rules apply to them. And to a certain extent it's true that things are different if you have that type of wealth. But at the end of the day, everyone is still human, and the same restrictions still generally apply. At most they are just pushed a little further out.
I am sure they've got good reasons they want to do this: them having the same problems as an unstaffed open source project getting vocal user requests is not one of them.
You're completely right in your literal point quoted above, but note what I was emphasizing. In this example, "save the animals" was offered as an example of a problem oversaturated in awareness to a point of diminishing returns. If you don't think animal welfare illustrates that particular idea, insert whatever your preferred example is: Free Tibet, stop the diamond trade, don't eat too much sodium, Nico Harrison shouldn't be a GM in NBA basketball, etc.
I think everyone on all sides agrees with these messages and agrees that there's value in broadcasting them up to a point, but then it becomes not an issue of awareness but willpower of relevant actors.
You also may well be right that developers would react negatively, honestly I'm not sure. But the point here was supposed to be that this pages author wasn't making the mistake of strategic misunderstanding on the point of oversaturating an audience with a message. Though perhaps they made the mistake in thinking they would reach a sympathetic audience.
I don't think many do.
It's just that raising awareness is the first step (and likely the only one you'll ever see anyway, because for most topics you aren't in a position where convincing *you* in particular has any impact).
Rational arguments come later, and mostly behind closed doors.
That's why the other side usually tries to smear protests as crazy mobs who would never be happy. The moment you convince uninvolved people of this, the protestors lose most of their power.
> Rational arguments come later, and mostly behind closed doors.
I disagree with this. Rational arguments behind closed doors happen before resorting to protest, not after. If you're resorting to protest, you are trying to leverage public support into a more powerful position. That's about how much power you have, not the soundness of your argument.
No, that's the exception rather than the rule. That's a convenient thing to teach the general public, and that's why people like MLK Jr. and Gandhi are celebrated, but most movements that make actual policy changes do so while disregarding bystanders entirely (or even actively hurting bystanders; that's why terrorism, very unfortunately, is effective in practice).
> which usually involves how rational the protestors are precieved as
I'm afraid most people don't really care about how rational anyone is perceived as. Trump wouldn't have been elected twice if that were the case.
> Decision makers are affected by public sentiment, but public sentiment of the uninvolved public generally carries more weight.
They only care about the sentiment of the people that can cause them nuisance. A big crowd of passively annoyed people will have much less bargaining power than a mob of angry male teenagers doxxing and mailing death threats: see the gaming industry.
> I disagree with this. Rational arguments behind closed doors happen before resorting to protest not after.
Bold claim that contradicts the entire history of social conflicts…
They should probably be called "decision-maders"
Why? Last time this came up, the consensus was that libxslt was barely maintained, never intended to be used in a secure context, and full of bugs.
I'm fully in favour of removing such insecure features that barely anyone uses.
I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust. But good luck with that.
Sure, I agree with you there, but removing XSLT support entirely doesn't seem like a very good solution. The Chrome developer who proposed removing XSLT developed a browser extension that embeds libxslt [0], so my preferred solution would be to bundle that by default with the browser. This would:
1. Fix any libxslt security issues immediately, instead of leaving it enabled for 18 months until it's fully deprecated.
2. Solve any backwards compatibility concerns, since it's using the exact same library as before. This would avoid needing to get "consensus" from other browser makers, since they wouldn't be removing any features.
3. Be easy and straightforward to implement and maintain, since the extension is already written and browsers already bundle some extensions by default. Writing a replacement in Rust/another memory-safe language is certainly a good idea, but this solution requires far less effort.
This option was proposed to the Chrome developers, but was rejected for vague and uncompelling reasons [1].
> I think if the XSLT people really wanted to save it the best thing to do would have been to write a replacement in Rust.
That's already been done [2], but maintaining that and integrating it into the browsers is still lots of work, and the browser makers clearly don't have enough time/interest to bother with it.
[0]: https://github.com/mfreed7/xslt_extension
[1]: https://github.com/whatwg/html/issues/11523#issuecomment-315...
>>> To see how difficult it would be, I wrote a WASM-based polyfill that attempts to allow existing code to continue functioning, while not using native XSLT features from the browser.
>> Could Chrome ship a package like this instead of using native XSLT code, to address some of the security concerns? (I'm thinking about how Firefox renders PDFs without native code using PDF.js.)
> This is definitely something we have been thinking about. However, our current feeling is that since the web has mostly moved on from XSLT, and there are external libraries that have kept current with XSLT 3.0, it would be better to remove 1.0 from browsers, rather than keep an old version around with even more wrappers around them.
The bit that bothers me is that Google continue to primarily say they’re removing it for security reasons, although they have literally made a browser extension which is a drop-in replacement and removes 100% of the security concerns. The people that are writing about the reasons know this (one of them is the guy that wrote it), which makes the claim a blatant lie.
I want people to call Google specifically out on this (and Apple and Mozilla if they ever express it that way, which they may have done but I don’t know): that their “security” argument is deceit, trickery, dishonest, grossly misleading, a bald-faced lie. If they said they want to remove it because barely anyone uses it and it will shrink their distribution by one megabyte, I would still disagree because I value the ability to apply XSLT on feeds and other XML documents (my Atom and RSS feed stylesheets are the most comprehensive I know of), but I would at least listen to such honest arguments. But falsely hiding behind “security”? I impugn their honour.
(If their extension is not, as their descriptions have implied, a complete, drop-in replacement with no caveats, I invite correction and may amend my expressed opinion.)
Even projects like Linux deprecate old underused features all the time. At least the Internet has real metrics about API usage which allows for making informed decisions. Folks describing how they are part of that small fraction of users doesn't really change the data. What's also interesting is that a very similar group of people seem to lament about how it's impossible to write a new browser these days because there are too many features to support.
Adding the support back via an extension isn't cost free.
But when it “isn’t cost-free”… they’ve already done 99.9% of the work required (they already have the extension, and I believe they already have infrastructure to ship built-in functionality in the form of Web Extensions—definitely Firefox does that), and I seem to recall hearing of them shifting one or two things from C/C++ to WASM before already, so really it’s only a question of whether it will increase installer/installed size, which I don’t know about.
And yeah Chrome is really strict about binary size these days. Every kB has to be justified. It doesn't support brotli compression because it would have added like 16kB to the binary size.
it just has slightly less chance of affecting something else
I think you can recognize that the burden of maintaining a proven security nightmare is annoying while simultaneously getting annoyed for them over-grabbing on this.
It's like removing JPEG support because libjpeg is insecure!
Being this is HN, did anyone suggest rewriting it in rust? :)
Was SOAP a bad system that misunderstood HTTP while being vastly overarchitected for most of its use cases? Yes. Could overuse of XML schemas render your documents unreadable and overcomplex to work with? Of course. Were early XML libraries well designed around the reality of existing programming languages? No. But also, was JSON's early implementation of "you can just eval() it into memory" ever good engineering? No, and by the time you've written a JSON parser that beats that, you could equally have produced an improved XML system while retaining the much greater functionality it already had.
RIP a good tech killed by committees overembellishing it and engineers failing to recognise what they already had over the high of building something else.
This is based on my personal experience of having to parse XML in Ruby, Perl, Python, Java and Kotlin. It is a pain every time, and I have run into parser bugs at least twice in my career, while I have never experienced a bug in a JSON parser. Implementing a JSON parser correctly is way simpler. And they are also generally more user-friendly.
XmlReader -> (XmlDocument or XmlSerializer) generally hits all use cases for serialization well. XmlReader is super-low-level streaming, when you need it. XmlDocument is great when you need to reason with Xml as the data structure, and XmlSerializer quickly translates between Xml and data structures as object serialization. There are a few default options that are wrong; but overall the API is well thought out.
In Newtonsoft I couldn't find a low-level JsonReader; then in System.Text.Json I couldn't find an equivalent of a mutable JObject. Both are great libraries, but neither is comprehensive the way System.Xml is.
My favorite is when people start reimplementing schema ideas in json. Or, worse, namespaces. Good luck with that.
Here is where you lose me
The JSON spec fits on two screen pages https://www.json.org/json-en.html
The XML spec is a book https://www.w3.org/TR/xml/
It absolutely does not. From the very first paragraph:
> It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999.
which is absolutely a book you can download and read here: https://ecma-international.org/publications-and-standards/st...
Furthermore, JSON has so many dangerously incompatible implementations that the errata for JSON implementations fill multiple books, such as advice to "always" treat numbers as strings, popular datetime "extensions" that know nothing of timezones, and so on.
> The XML spec is a book https://www.w3.org/TR/xml/
Yes, but that's also everything you need to know in order to understand XML, and my experience implementing APIs is that every XML implementation is obviously correct, because anyone making a serious XML implementation has demonstrated the attention span to read a book, while every JSON implementation is going to have some fucking weird thing I'm going to have to experiment with, because the author thought they could "get the gist" from reading two pages on a blog.
JSON as a standalone language requires only the information written on that page.
JSON.parse("{\"a\":9999999999999999.0}")
Either no browsers implement JSON as written on that page, or you need to read ECMAScript-262 to understand what is going on.

If you write a JSON parser in Python, say, then you will need to understand how Python works instead.
In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.
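To make that concrete, here is roughly what the call above does in any JS engine (a quick sketch; the rounding comes from IEEE 754 doubles, which is ECMAScript's business, not the JSON grammar's):

// The literal parses fine, but is silently rounded to the nearest double.
const parsed = JSON.parse('{"a":9999999999999999.0}');
console.log(parsed.a);                       // 10000000000000000
console.log(Number.isSafeInteger(parsed.a)); // false

// Integers above Number.MAX_SAFE_INTEGER (2^53 - 1) need to travel as
// strings or BigInt if exactness matters.
console.log(BigInt("9999999999999999").toString()); // "9999999999999999"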
Thankfully XML specifies what a number is, and anything that gets this wrong is not implementing XML. Very simple. No wonder I have fewer problems with people who implement XML.
> In other words, I think you are confusing "json, the specified format" and "the JSON.parse function as specified by ECMAScript-262". These are two different things.
I'm glad you noticed that after it was pointed out to you.
The implications of JSON.parse() not being an implementation of JSON are serious though: If none of the browser vendors can get two pages right, what hope does anyone else have?
I do prefer to think of them as the same thing, and JSON as more complicated than two pages, because this is a real thing I have to contend with: the number of developers who do not seem to understand that JSON is much, much more complicated than they think.
If we go with the XML Schema definition of a number (say an integer), then even then we are at the mercy of different implementations. An integer according to the specification can be of arbitrary size, and implementations need to decide themselves which integers they support and how. The specification is a bit stricter than JSON's here and at least specifies a minimum precision that must be supported, and that implementations should clearly document the maximum precisions that they support, but this puts us back in the same place we were before, where to understand how to parse XML, I need to understand both the XML spec (and any additional specs I'm using to validate my XML), plus the specific implementation in the parser.
(And again, to clarify, this is the XML Schema specification we're talking about here — if I were to just use an XML-compliant parser with no extensions to handle XSD structures, then the interpretation of a particular block of text into "number" would be entirely implementation-specific.)
I completely agree with you that there are plenty of complicated edge cases when parsing both JSON and XML. That's a statement so true, it's hardly worth discussion! But those edge cases typically crop up — for both formats — in the places where the specification hits the road and gets implemented. And there, implementations can vary plenty. You need to understand the library you're using, the language, and the specification if you want to get things right. And that is true whether you're using JSON, XML, or something else entirely.
This is not my experience. Just this week I encountered one that doesn’t decode entity/character references in attribute values <https://news.ycombinator.com/item?id=45826247>, which seems a pretty fundamental error to me.
As for doctypes and especially entities defined in doctypes, they’re not at all reliable across implementations. Exclude doctypes and processing instructions altogether and I’d be more willing to go along with what you said, but “obviously-correct” is still too far.
Past what is strictly the XML parsing layer to the interpretation of documents, things get worse in a way that they can’t with JSON due to its more limited model: when people use event-driven parsing, or even occasionally when they traverse trees, they very frequently fail to understand reasonable documents, due to things like assuming a single text node, ignoring the possibilities of CDATA or comments.
Try not to confuse APIs that you are implementing for work to make money, with random "show HN AI slop" somebody made because they are looking for a job.
> [...] serious XML implementation [...]
You are cherry-picking here
FFS, have your parser fail on inputs it cannot handle.
Anyway, the book defining XML doesn't tell you how your parser will handle values you can't represent on your platform either. And it also won't tell you how your parser will read timestamps. Both are completely out of scope there.
The only common JSON issue that entire book covers is comments.
The SOAP specification does tell you how to write timestamps. It's not a single book, and doesn't cover things like platform limitations, or arrays. If you want to compare, OpenAPI's spec fills a booklet:
I wish browser developers would understand that.
JSON.parse("9007199254740993") === 9007199254740992The beloved minimalist spec. . No way anything could be wrong with that: https://seriot.ch/projects/parsing_json.html
Turns out there are at least half a dozen more specs trying and failing to clarify that mess.
Basically the difference is that underlying data structures are different.
JSON supports arrays of arbitrary items and dictionaries with string keys and arbitrary values. It aligns well with commonly used data structures.
An XML node supports a dictionary with string keys and string values (attributes), one dedicated string property (the name), and an array of child nodes. This is a very unusual structure and requires dedicated effort to map to programming-language objects and structures. There were even so-called "OXM" frameworks (Object-XML Mappers), analogous to ORMs.
Of course in the end it is possible to build a mapping between array, dictionary and DOM. But JSON is much more natural fit.
XML is meant to write phrase-like structures. Structures like this:
int myFunc(int a, void *b);
This is a phrase. It is not data, not an array or a dictionary, although technically something like that will be used in the implementation. Here it is written in a C-like notation. The idea of XML was to introduce a uniform substrate for notations. The example above could look like this:

<func name="myFunc">
<data type="int"/>
<args>
<data type="int"/>
<addr/>
</args>
</func>
This is, of course, less convenient to write than a specific notation. But you don't need a parser, and you can have tools to process any notation. (And technically a parser can produce its results in XML; it is a very natural form, basically an AST.) Parsers are usually part of a tool and do not work on their own, so first there is a parser for C, then an indexer for C, then a syntax highlighter for C and so on: each does some parsing for its own purpose, thus doing the same job several times. With XML, the processing scenario is not limited to anything: the above example can be used for documentation, indexing, code generation, etc.

XML is a very good fit for niche notations written by few professionals: interface specifications, keyboard layouts, complex drawings, and so on. And it is being used there right now, because there is no other tool like it, aside from a full-fledged language with a parser. E.g. there is an XML notation that describes numerous bibliography styles. How many people need to describe bibliography styles? Right. With XML they start getting usable descriptions right away and can fine-tune them as they go. And these descriptions will be immediately usable by generic XML tools that actually produce these bibliographies in different styles.
Processing XML is like parsing a language, except that the parser is generic. Assuming you have no text content it goes in two steps: first you get an element header (name and attributes), then the child elements. By the time you get these children they are no longer XML elements, but objects created by your code from these elements. Having all that you create another object and return it so that it will be processed by the code that handles the parent element. The process is two-step so that before parsing you could alter the parsing rules based on the element header. This is all very natural as long as you remember it is a language, not a data dump. Text complicates this only a little: on the second step you get objects interspersed with text, that's all.
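A rough sketch of that two-step processing in a browser, using the <func> example above and DOMParser (the convert() helper and the object shape are purely illustrative):

// Step 1 reads the element header (name and attributes); step 2 receives
// children that have already been converted into our own objects.
const source = `<func name="myFunc">
  <data type="int"/>
  <args>
    <data type="int"/>
    <addr/>
  </args>
</func>`;

const doc = new DOMParser().parseFromString(source, "application/xml");

function convert(el) {
  const node = { tag: el.tagName, attrs: {}, children: [] };
  for (const attr of Array.from(el.attributes)) node.attrs[attr.name] = attr.value;
  for (const child of Array.from(el.children)) node.children.push(convert(child));
  return node;
}

console.log(JSON.stringify(convert(doc.documentElement), null, 2));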
People cannot author data dumps. E.g. the relational model is a very good fit for internal data representation, much better than JSON. But there is no way a human could author a set of interrelated tables aside from tiny toy examples. (The same thing happens with state machines.) Yet a human can produce tons of phrase-like descriptions of anything without breaking a sweat. XML is such an authoring tool.
I'm glad to have all sorts of specialists on our team, like DBAs, security engineers, and QA. But we had XSLT specialists, and I thought it was just a waste of effort.
Hope I can quote this about the Transformer architecture one day.
This is also why I dislike AI browsers in general. They generate a view for the user that may not be real. They act like a proxy gate, intercepting things willy-nilly. I may be old school, but I don't want governments or corporations to jump in as middlemen and deny me information and opportunities of my own choosing. (Also, Google Suck, I mean Google Search, has sucked for at least 5 years now. That was not accidental - that was deliberate on Google's part.)
Lina Khan had the right idea and mandate, but she was too fucking slow.
When the Dems swing back into power, the gutting of big tech needs to be swift and thorough. The backbone needs to be severed. I'm screaming at my representatives to do this.
Google took over web tech, turned the URL bar into their Search product. They force brands to buy ads for their own brand names - think about how much money they make by selling ads on the keywords "Airpods" or "Nintendo Switch". They forced removal of ad-blocking tech unilaterally. They buy up all the panes of glass they don't already own. They don't allow you to install your own software on mobile anymore. And you have to buy ads for your app too, otherwise your competitor gets installed. If you develop software, you're perpetually taxed and have to do things their way. They're increasingly severing the customer relationship. They're putting themselves in as middlemen in the payments industry, the automotive industry, the entertainment industry...
Look at how many products they've built and thrown away in the game of trying to broker your daily life.
I could go on and on and on... They're leeches. Giant, Galactus-sized leeches.
The bulk of the money they make is from installing themselves as middlemen.
And anyone thinking they're your friends - they conspired to suppress wages, and they're actively cutting jobs and rebuilding the teams in India. Congrats, they love you. They're gutting America and are 100% anti-American. I love India and have nothing against its people; I'm just furious that this domestic company - this giant built on the backs of American labor and its population - hates its own country so much. (You know they hate us because they're still stuffing Corporate Memphis down our throats.)
Edit: I have to say one thing positively because Google makes me so negative. This website is beautiful. I was instantly transported back in time. But it's also a nice modern reinterpretation of retro web design. I love it so much.
Why did everything consolidate terribly in the '80s and '90s? Because we basically stopped enforcing antitrust in the '70s, due to Chicago School jackasses influencing policy and jurisprudence.
We need to undo their fake-pro-markets horse-shit and get back to having robust markets in every sector, not just software (but yes, certainly in software too). That'll require a spree of breaking up big companies across the economy.
One of the things that startled me when working for Google is how much of their decisionmaking actually looks like "This sucks and we don't want to be responsible for it... But there isn't anyone else who can be, so I guess it's us."
I'm not saying this is optimal or that it should be the way it is, but I am saying there are problems with alternative approaches that need to be addressed.
To give a comparison: OpenGL tried a collaborative and semi-open approach to governance for years, and what happened was they got more-or-less curb-stomped by DirectX, so much so that it drove Windows adoption for years as "the architecture for playing videogames." The mechanism was simple: while OpenGL's committee tried to find common ground among disparate teams with disparate needs, Microsoft went
1) we control this standard; here are the requirements you must adhere to
2) we control the "DirectX" trademark, if you fail to adhere to the standards we decertify your product.
As a result, you could buy a card with "DirectX" stamped on it, slap it into your Windows machine, and it would work. You couldn't do anything like that with OpenGL hardware; the standard was so loose (and enforcement so nonexistent) that companies could, via the "gestalt" feature-detection layer, claim a feature was supported if they had polyfilled a CPU-side software renderer for it. Useless for games (or basically any practical application), but who's gonna stop them from lying?
Browsers aren't immune to market forces; a standard that is too inflexible or fails to reflect the actual implementation pressures and user needs will be undercut by alternative approaches.
I'm not saying current governance of the web is that bad, but I bring up the history of OpenGL as an example of why an open, cooperative approach can fail and the pitfalls to watch out for. In the case of this specific decision regarding XSLT, it appears from the outside looking in that the decision is being made in consensus by the three largest browser engine developers and maintainers. What voice is missing from that table, and who should speak for them?
(Quick side-note: Apple managed to dodge a lot of the OpenGL issues by owning the hardware stack and playing a similar card to Microsoft's with different carrots and sticks: "This is the kernel-level protocol you must implement in hardware. We will implement OpenGL in software. And if your stuff doesn't work we just won't sell laptops with your card in them; nobody in this ecosystem replaces their graphics hardware anyway").
It is entirely possible to preserve XSLT and other web features, e.g. by wrapping them in built-in (potentially even standardized) polyfills, but that kind of work isn't incentivized over new features and big flashy refactors.
When you are the biggest organization in a space, it's your space whether you feel qualified to lead or not. The right course of action is "get qualified, fast." The top-level leadership did not strike me as willing to shoulder that responsibility.
My personal preferred outcome to address the security concerns with XSLT would probably be to replace the native implementation with a JavaScript-sandboxed implementation in-browser. This wouldn't solve all issues (such an implementation would almost certainly be operating in a privileged state, so there would still be security concerns), but it would take all the "this library is living at a layer that does direct unchecked memory manipulation, with all the consequences therein" off the table. There is, still, a case to be made perhaps that if you're already doing that, the next logical step is to make the whole feature optional by jettisoning the sandboxed implementation into a JavaScript library.
That said, I never used XSLT for anything, and I don’t see how is its support in browsers tied to RSS. (Sure you could render your page from your rss feed but that seems like a marginal use case to me)
I don't have firm evidence and haven't worked at Google, though, so you likely know the company dynamics better than I do.
This really is just a storm in a teacup. Nothing like the tens or hundreds of thousands of Flash- and Java-applet-based web pages that went defunct when we deprecated those technologies.
Sure, but Flash and Java were never standards-compliant parts of the web platform. As far as I'm aware, this is the first time that something has been removed from the web platform without any replacements—Mutation Events [0] come close, but Mutation Observers are a fairly close replacement, and it took 10 years for them to be fully deprecated and removed from browsers.
[0]: https://developer.mozilla.org/en-US/docs/Web/API/MutationEve...
Again: this is nothing like Flash or Java applets (or even ActiveX). People seriously considered Apple's decision not to support Flash on the iPhone a strategic blunder due to the number of sites using it. Your local news station probably had video or a stock-market ticker using Flash. You didn't have to hunt for examples.
I've spent the last several years making a website based on XML and XSLT. I complain about the XML/XSLT deprecation from browsers all the time. And the announcements in August that Google was exploring getting rid of XSLT in the browser (which, it turned out, wasn't exploratory at all, it was a performative action that led to a foregone conclusion) was so full of blowback that the discussion got locked and Google forged ahead anyway.
> Is it possible that installing an XSLT processor on the server is not as big a hassle as everyone pretends?
This presumes that everyone interested in making something with XML and XSLT has access to configure the web server it's hosted on. With support in the browser, I can throw some static files up just about anywhere and it'll Just Work(tm)
I don't have any desire to learn JavaScript (or use someone else's script) just to do some basic templating.
XML Parsing Error: mismatched tag. Expected: </item>.
Location: https://example.org/rss.xml
Line Number 71, Column 3:
</channel>
--^
Chrome shows a useless white void.

I enabled the nginx XSLT module on a local web server to serve the files to myself that way. Now when it fails I can check the logs to see what instruction it failed on. It's a bad experience, and I'm not arguing otherwise, but it's just about the only workaround left.
It's a circular situation: nobody wants to use XSLT because the tools are bad and nobody wants to make better tools because XSLT usage is too low.
Up until a few years ago, I could debug basic stuff in Firefox. If Firefox encountered an XSLT parsing error, it would show an error page with a big ASCII arrow pointing to the instruction that failed. That was a useful clue. Now it shows a blank page, which is not useful at all.
In the golden old days of 2018, browsers at least applied some styling https://evertpot.com/firefox-rss/
You can still manually apply styling using xslt https://www.cedricbonhomme.org/blog/index.xml
Unless I'm using XSLT without knowing, you can do this with the xml-stylesheet processing instruction
Link: </style.css>; rel=stylesheet
(Yes, this works even without the <?xml-stylesheet?> PI others have mentioned.)

I think the best strategy for Google is to support this and simultaneously ditch XSLT. This way nothing is truly lost.
[1] You can test your browser from: https://annevankesteren.nl/test/html-element/style-header.ph...
XSLT does much more than CSS.
Random example: https://lepture.com/en/feed.xml
This is useful because feed URLs look the same as web page URLs, so users are inclined to click on them and open them in a web browser instead of an RSS reader. (Many users these days don't even know what an RSS reader is). The stylesheet allows them to view the feed in the browser, instead of just being shown the XML source code.
And we have done it for other formats: PDF is now quite well supported in browsers without plugins/etc.
It's a format intended to be consumed like an API call. It's like JSON. The link is something you import into an aggregator.
RSS feeds shouldn't even be displayed as XML at all. They should just be download links that open in an aggregator application. The same way .torrent files are imported into a torrenting client, not viewed.
1. This is pretty difficult for someone who doesn't know about RSS. How would they ever learn what to do with it?
2. Browsers don't do that. There used to be an icon in the URL bar when they detected an RSS feed. It would be wonderful if browsers did support doing exactly what you suggest. I'm not holding my breath.
I'm not looking to replicate my blog via XSLT of the RSS feed: that's what the blog's HTML pages are. I just don't want to alienate non-RSS users.
I don't think you need to worry about "alienating" non-RSS users. If somebody clicks on an RSS link without knowing what RSS is and sees gibberish, that's not really on you. They can just look it up. Or, if you want to educate people, you can put a little question-mark icon next to the RSS link. But mostly, for feeds and social media links, people just ignore the icons/acronyms they don't recognize.
And: Because it exists/existed and thus people relied upon it.
With the number of sites on the web, even a small share relying on these features, each with just a handful of users, adds up to a big number of people impacted.
And XSLT in that context is interesting: one can ship the RSS file, the web browser renders it with XSLT into something human-readable, and a smart browser can do smart things with it. All from the same file.
They chose to kill off a spec and have it removed from every browser because they don't like it. They choose to keep maintaining AMP because it's their pet project and spec. It's as simple as that; it has nothing to do with limited resources forcing them to trim features rather than maintain or improve them.
You can have a document without CSS but you can’t style it.
You can have a document without JavaScript, but only a static one (still interactive, but only through forms).
On the other hand, you can replace XSLT with server side rendering, or JavaScript. It does not serve a truly unique function.
What? CSS didn't come around until several years after HTML did. And you can certainly style an HTML document without CSS.
> On the other hand, you can replace XSLT with server side rendering, or JavaScript.
You can also execute JavaScript on the server to make browsers more secure, but I don't see browser makers clamoring to remove JavaScript support.
> It does not serve a truly unique function.
It does, though. It lets someone do some basic programming of some web pages without having to become a developer
> You can also execute JavaScript on the server to make browsers more secure, but I don't see browser makers clamoring to remove JavaScript support.
JS is not there just for client side static DOM rendering. Something like Google Maps or an IRC chat would be a much poorer experience without it.
Sometimes browsers are asked to render HTML documents that were written decades ago to conform to older specs and are still on the internet. That still works
> JS is not there just for client side static DOM rendering. Something like Google Maps or an IRC chat would be a much poorer experience without it.
Of course they would. That's most of the point. You can do a lot more damage with JavaScript than you currently can with XSLT, but XSLT has to go because of 'security concerns'
Imagine if you opened a direct link to a JPEG image and instead of the browser rendering it, you'd have to save it and open it in Photoshop locally. Wouldn't that be inconvenient?
Many browsers do support opening web-adjacent documents directly because it's convenient for users. Maybe not Microsoft Word documents, but PDF files are commonly supported.
Or can't you polyfill this / use a library to parse this?
In theory you could do the transformation client side, but then you'd still need the server to return a different document in the browser, even if it's just a stub for the client-side code, because XML files cannot execute Javascript on their own.
Another option is to install a browser extension but of course the majority of users will never do that, which minimizes the incentive for feed authors to include a stylesheet in the first place.
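For what it's worth, here is a rough sketch of that client-side transformation using the browser's built-in XSLTProcessor (the file paths are hypothetical, and it has to run from an HTML stub page, since a bare XML document can't run scripts):

// Fetch the feed and the stylesheet, transform in the browser, inject the result.
// Once native XSLT goes away, a WASM/JS polyfill would have to stand in for
// XSLTProcessor here.
async function renderFeed() {
  const parser = new DOMParser();
  const [xmlText, xslText] = await Promise.all([
    fetch("/feed.xml").then((r) => r.text()),
    fetch("/feed.xsl").then((r) => r.text()),
  ]);
  const xml = parser.parseFromString(xmlText, "application/xml");
  const xsl = parser.parseFromString(xslText, "application/xml");

  const processor = new XSLTProcessor();
  processor.importStylesheet(xsl);
  document.body.replaceChildren(processor.transformToFragment(xml, document));
}

renderFeed();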
You need a server to serve JSON as well. Basically, see XML as a data format.
RSS readers are not chrome, so they have their own libraries for parsing/transforming with XSLT.
It's also worth noting that the latest XSLT spec actually supports JSON as well. Had browsers decided to implement that spec rather than remove support altogether, you'd be able to render JSON content to HTML entirely client-side without JS.
"Tell your friends and family about XSLT. Keep XSLT alive! Add XSLT to your website and weblog today before it is too late!"
https://www.rfc-editor.org/rfc/rfc7231#section-5.3.2
Checking the HTTP headers is the HTTP standard.
This would also break the workflow I have for my site, where I build it as a static directory locally during development and point Python's trivial HTTP server at it to access the content over localhost.
And it's totally insulting because the people removing this have created a (memory safe!) browser extension that lets you view XSLT documents, and put special logic in the browser to show users a message telling them to download that extension when an XSLT-styled document is loaded. They should bundle the extension with the browser instead of breaking my website and telling users where to fix it.
If your argument is that you don't want to use JavaScript because it's Turing complete and insecure and riddled with bugs and security holes, then why the fuck are you using XSLT?
Handwaving that vulnerabilities are "highly unlikely" is dangerous security theater. It doesn't matter how unlikely you guess and wish they are, they just have to be possible. And the fact that the XSLT 1.0 implementations built into browsers are antique un-sandboxed memory-unsafe C++ code make vulnerabilities "highly likely", not "highly unlikely", which the record clearly proves.
Browsers only natively support the ancient version of XSLT 1.0, so if you need a less antiquated version, you should use a modern memory safe sandboxed polyfill, or process it on the server side, or more safely not use XSLT at all and simply use JavaScript instead (simply transforming RSS to HTML directly with JavaScript is a MUCH smaller and harder attack surface than the massive overkill of including an entire sandboxed general purpose Turing complete XSLT processor), instead of foolishly relying on non-sandboxed old untrustworthy poorly maintained C++ code built into the browser.
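For the sake of argument, the kind of transform I mean is tiny. A rough sketch (the feed URL and element names assume a plain RSS 2.0 document; this is illustrative, not production code):

// Parse the feed with DOMParser and build a list of links directly,
// no XSLT engine involved.
async function rssToHtml(feedUrl) {
  const text = await fetch(feedUrl).then((r) => r.text());
  const feed = new DOMParser().parseFromString(text, "application/xml");
  const list = document.createElement("ul");
  for (const item of feed.querySelectorAll("item")) {
    const a = document.createElement("a");
    a.href = item.querySelector("link")?.textContent ?? "#";
    a.textContent = item.querySelector("title")?.textContent ?? "(untitled)";
    const li = document.createElement("li");
    li.append(a);
    list.append(li);
  }
  document.body.append(list);
}

rssToHtml("/feed.xml");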
Of course all versions of XSLT are Turing complete, as you can easily confirm on Wikipedia, and which is quite obvious if you have ever read the manual and used it. It has recursive template calls, conditionals, variables and parameters, pattern matching and selection, text and node construction, unbounded input and recursion depth, etc. So how could it possibly not be Turing complete, since it has the same expressive power of functional programming languages? And that should be quite obvious to anyone who knows XSLT and basic CS101, at a glance, without a formal proof.
https://en.wikipedia.org/wiki/XSLT
>While XSLT was originally designed as a special-purpose language for XML transformation, the language is Turing-complete, making it theoretically capable of arbitrary computations.
Do you recall the title of Chrome's web page explaining why they're removing XSLT? "Removing XSLT for a more secure browser" (aka "Bin Ladin Determined To Strike in XSLT" ;). Didn't you read that article, and the recent HN discussion about it? You can't just claim nobody warned you, like GW Bush tried to do.
https://news.ycombinator.com/item?id=45823059
https://developer.chrome.com/docs/web-platform/deprecating-x...
>Why does XSLT need to be removed?
>The continued inclusion of XSLT 1.0 in web browsers presents a significant and unnecessary security risk. The underlying libraries that process these transformations, such as libxslt (used by Chromium browsers), are complex, aging C/C++ codebases. This type of code is notoriously susceptible to memory safety vulnerabilities like buffer overflows, which can lead to arbitrary code execution. For example, security audits and bug trackers have repeatedly identified high-severity vulnerabilities in these parsers (e.g., CVE-2025-7425 and CVE-2022-22834, both in libxslt). Because client-side XSLT is now a niche, rarely-used feature, these libraries receive far less maintenance and security scrutiny than core JavaScript engines, yet they represent a direct, potent attack surface for processing untrusted web content. Indeed, XSLT is the source of several recent high-profile security exploits that continue to put browser users at risk. The security risks of maintaining this brittle, legacy functionality far outweighs its limited modern utility. [...]
Your overconfidence in XSLT's security in browsers is unjustified and unsupported by its track record and reputation, its complexity is extremely high, it's written in unsafe un-sandboxed C/C++, it gets vastly less attention and hardening and use than JavaScript, and its vulnerabilities are numerous and well documented.
Examples:
CVE‑2025‑7425: A heap use-after-free in libxslt caused by corruption of the attribute type (atype) flags during key() processing and tree-fragment generation. This corruption prevents proper cleanup of ID attributes, enabling memory corruption and possibly arbitrary code execution.
CVE‑2024‑55549: Another use-after-free in libxslt (specifically xsltGetInheritedNsList) disclosed via a Red Hat advisory.
CVE‑2022‑22834: An XSLT injection vulnerability in a commercial application (OverIT Geocall) allowing remote code execution from a “Test Trasformazione XSL” feature. Shows how XSLT engines/processors can be attack surfaces in practice.
CVE-2019-18197: (libxslt 1.1.33) In the function xsltCopyText (file transform.c) a pointer variable isn’t reset in certain flows; if the memory area was freed and reused, a bounds check could fail and either write outside a buffer or disclose uninitialised memory.
CVE-2008-2935: buffer overflows in crypto.c for libexslt.
CVE-2019-5815: type confusion in xsltNumberFormatGetMultipleLevel, repeated memory safety flaws (heap/stack corruption, improper bounds checks, pointer reuse) in the library over many years.
What's the point of having an Atom feed if I can't give people a link to it? Do you just expect me to write “this website has an atom feed” and have only the <link> element invisibly pointing at it? That is terrible UX. And then what if I want to include a link to my feed in a message to share it with someone?
>Handwaving that vulnerabilities are "highly unlikely" is dangerous security theater
No it isn't. There are memory safe XSLT implementations. Not so for JavaScript. This is because XSLT is a simple language and JavaScript a complicated one. You are trying to make the case that XSLT is inherently unsafe because poor implementations of it exist, yet it is actually much safer because safe implementations exist and are easy to write. It can initiate no outgoing internet connections, cannot read from memory directly, cannot do any of the things that makes JavaScript inherently dangerous.
>simply transforming RSS to HTML directly with JavaScript is a MUCH smaller and harder attack surface than the massive overkill of including an entire sandboxed general purpose Turing complete XSLT processor
Firstly, you can't include JavaScript tags in RSS or Atom, so my website would not conform to any web standard. Secondly, by using JavaScript, I'm demanding that my users enable a highly dangerous web feature that has been the basis for many attacks. By using XSLT, I'm giving them the option to use a much smaller interface with safer implementations available. How many CVEs have there been in JavaScript runtimes compared with XSLT? And finally, browser developers should just bundle one of these JavaScript polyfills and activate it for documents with stylesheets if they are so easy to use. Demanding that users deviate from web standards to get simple features like XML styling is ridiculous, and it would clearly be little effort for them to silently append a polyfill script to documents with XSLT automatically. If that's the only way they can make it secure, that's what they should do.
>Your overconfidence in XSLT's security in browsers is unjustified and unsupported by its track record and reputation, its complexity is extremely high, it's written in unsafe un-sandboxed C/C++, it gets vastly less attention and hardening and use than JavaScript, and its vulnerabilities are numerous and well documented.
I have no confidence at all in browsers' implementations of XSLT because they admit they use a faulty library. I have absolute confidence that it would be little effort to replace the faulty library with a correct one, and that doing so would be miles safer than expecting users to enable JavaScript.
>Of course all versions of XSLT are Turing complete, as you can easily confirm on Wikipedia
Do not quote Wikipedia as a source. In this case, the source provided on the Wikipedia page claims only that version 2.0 is Turing complete, and that claim is erroneous: it is based on a proprietary extension of certain XSLT processors, not the one used in Chrome.
http://tkachenko.com/blog/archives/000275.html
It is quite frankly ridiculous to me that people are bending over backwards to suggest that XSLT is somehow an inherent security risk when you can include a JavaScript fragment in pages to trigger an XSLT processor. Whatever risk is posed by XSLT is a clear subset of that posed by JavaScript for this reason alone. You will never see a complete JavaScript implementation in XSLT because it isn't possible. One language is given greatly more privileged access to the resources and capabilities of the user's computer than the other.
The decision of browser vendors to include faulty XSLT libraries when safe ones exist is the source of risk here. And now these same people, who have been putting users at risk in a billion different ways over the years, come to me and suggest that I have to remove a completely innocuous feature from my website and replace it with a more dangerous one, breaking standards compliance, because they can't be bothered to switch from an unsafe implementation to a safe one.
The google graveyard is for products Google has made. It's not for features that were unshipped. XSLT will not enter the Google graveyard for that reason.
>We must conclude Google hates XML & RSS!
Google Reader was shut down due to declining usage and Google's unwillingness to keep investing resources in the product. It's not that Google hates XML and RSS. It's that end users and developers don't use XSLT and RSS enough to warrant investing in them.
>by killing [RSS] Google can control the media
The vast majority of people in the world do not get their news via RSS. It never would have taken over the media complex. There are other outlets for news, like X, that Google does not control; Google is not the only place where news surfaces.
> Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
It is quite a reach to say that Google removing XSLT will give them control over government legislation. They are completely unrelated.
>How much did Google pay for this support?
Google is not paying for support. These browsers essentially have revenue-sharing agreements with Google; the payments are for the traffic they provide to Google.
Keeping links to the original announcements for future reference:
1) <https://groups.google.com/a/chromium.org/g/blink-dev/c/CxL4g...>
2) <https://developer.chrome.com/docs/web-platform/deprecating-x...>
I know that every such feature adds significant complexity and maintenance burden, and most people probably don't even know that many browsers can render XSLT. Nevertheless, it feels like yet another interesting and niche part of the web, still used by us old-timers, is going away.
I strongly encourage building a website, titled something like keepXSLTAlive.tld, to advocate for XSLT the way the other guys did with https://keepandroidopen.org/ for Android (https://news.ycombinator.com/item?id=45742488), or keeping this current site (https://xslt.rip/) but updating the UI a little bit to better reflect the protest vibe.
But that does not mean XSLT should be kept alive just because of that. It should be judged on its own merits.
Google judged a 25-year-old spec that is now 2 major versions out of date.
Most people seem to think it is bad because it is Google who want to remove it. Personally I just see Google finally doing something good.
Not only that, Google engineer Mason Freed has shown pretty forcefully that he will not listen to defense, reason, or logic. This is further evidenced by Google repeatedly trying to kill it for 25 years.
Personally I just see you licking Google’s boot.
Just kidding, Canvas is obsolete technology, this should obviously be done with WebGPU
Chrome supports it on Windows and macOS, Linux users need to explicitly enable it. Firefox has only released it for Windows users, support on other platforms is behind a feature flag. And you need iOS 26 / macOS Tahoe for support in Safari. On mobile the situation should be a bit better in theory, though in my experience mobile device GPU drivers are so terrible they can't even handle WebGL2 without huge problems.
i still remember when tables were forced out of fashion by hordes of angry div believers! they became anathema and instantly made you a pariah. the arguments were very passionate but never made any sense to me: the preaching was separating structure from presentation, mostly to enable semantics, and then semantics became all swamped with presentation so you could get those damned divs aligned in a sensible way :-)
just don't use (or abuse) them for layout but tables still seem to me the most straightforward way to render, well, tabular content.
> Smaller browser vendors already pick and choose the features they support.
If there weren't a gazillion features to support, maybe there would be more browsers. I think criticizing Google and other vendors for _adding_ tons of bloat would be a better use of time.
I think it's interesting because XSLT, based on DSSSL, is already Turing-complete and thus the XML world lacked a "simple" sub-Turing transformation, templating, and mapping macro language that could be put in the hands of power users without going all the way to introduce a programming language requiring proper development cycles, unit testing, test harnesses, etc. to not inevitably explode in the hands of users. The idea of SGML is very much that you define your own little markup vocabulary for the kind of document you want to create at hand, including powerful features for ad-hoc custom Wiki markup such as markdown, and then create a canonical mapping to a rendering language such as HTML; a perspective completely lost in web development with nonsensical "semantic HTML" postulates and delivery of absurd amounts of CSS microsyntax.
However, processing fully compliant SGML, before you even introduce DSSSL into the picture, was a nightmare. With only one parser (nsgmls) that was both open source and fully compliant, and which was hard to build on contemporary systems, let alone run, really using SGML for anything was an exercise in frustration.
As an engineering mind, I loved the fact you could create documents that are concise yet meaningful, and really express the semantics of your application as efficiently as possible. But I created my own parsers for my subset, and did not really support all of the features.
HTML was also redefined to be an SGML application with 4.0.
I originally frowned on XML as a simplification to make it work for computers vs for humans, but with XML, XSLT, Xpath... specs, even that was too complex for most. And I heavily used libxml2 and libxslt to develop some open source tooling for documentation, and it was full of landmines.
All this to say that SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity. And going for "semantic HTML" in lieu of SGML + DSSSL or XML + XSLT was really an attempt to find that balance of meaning and simplicity.
It's the common cycle as old as software engineering itself.
Nope, it was intended as SGML from the get go; cf [1].
> SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity
HTML (and thus SGML) is the most used document language there ever has been, by far.
While HTML is clearly the most used document markup language there has ever been, almost nobody is using an SGML-compliant parser to parse and process it, and most are not even bothering with the DTD itself; not to mention that HTML5 does not provide a DTD and really can't even be expressed with an SGML DTD.
So while HTML used to be one of SGML's "applications" (a document type with a formal definition), on the web it was never treated as such, but as a very specific language that is merely inspired by SGML and only loosely follows the spec (since day 1, all browsers have accepted "invalid" HTML, and they still do).
Ascribing the success to SGML is completely backwards, IMHO: HTML was successful despite being based on SGML, and for all intents and purposes, the majority never really cared about the relationship.
The enormous proliferation of syntax and super-complicated layout models doesn't stop markup haters from crying wolf about entities (text macros) being a security risk in markup, however; go figure.
It is all well and good to talk about theoretical alternatives that would have been better but we are talking here about a concrete attempt which never worked beyond trivial examples. Why should we keep that alive because of something theoretical which in my opinion never existed?
Countless websites on Geocities and elsewhere looked just like that. MY page looked like that (but more edgy, with rotating neon skull gifs). All those silly GIFs were popular and there were sites you could find and download some for personal use.
>It's like how people remember the 80s as neon blue and pink when it was more of a brownish beige.
In North Platte or Yorkshire maybe. Otherwise plenty of neon blue and pink in the 80s. Starting from video game covers, arcades, neon being popular with bars and clubs, far more colorful clothing being popular, "Memphis" style graphic design, etc.
Gray backgrounds were also popular, with bright blue for unvisited links and purple for visited links. IIRC this was inspired by the default colors of Netscape Navigator 2.
"Inspired" is an interesting word for "didn't set custom values." And I believe Mosaic used the same colors before. I'm not even sure when HTML introduced the corresponding attributes (this was all before CSS ...)
I once got into a cab in NYC on Halloween and the driver said to me, hey, you really nailed that 80s hairstyle, thinking I had styled it for Halloween. I had to tell him dude, I’m from the 80s.
https://geocities.restorativland.org/Area51/
> was more of a brownish beige.
Did you never watch MTV?
If there is no white 1x1 pixel that is stretched in an attempt to make something that resembles actual layout, or multiple weird tables, I always ask: are they even trying?
In all seriousness- they got quite a good run with xslt. Time to let it rest.
In the 90s, sites did kinda look like that.
What came later was the float layout hell- sorry, "solution".
I could add a polyfill, but that adds multiple MB, making this approach heavyweight.
I'm looking at: https://github.com/mfreed7/xslt_polyfill
Which uses: https://github.com/DesignLiquido/xslt-processor/tree/main
But they don't look that heavy. Am I missing something? Megabytes of JS would be enormous.
Just for a start. It's a tiny polyfill for a tiny subset of the thing that is XSLT 1.0.
https://github.com/mfreed7/xslt_polyfill/blob/main/xslt-poly...
Of course you can achieve similar effects with JS, by downloading data files and rendering them into whatever HTML you want. But that cuts users without enabled JS.
Not a huge loss, I guess, given the lack of popularity of these technologies. But loss nonetheless. One more step to bloated overengineered web.
Users who disable JS are insane and hypocritical if they don't also disable XSLT, which is even worse. So I wouldn't bend over too far backwards to support insane hypocrites. There aren't enough of them to matter, they enjoy having something to complain about, and they're much louder and more performative than the overwhelming majority of users. Not a huge loss cutting them out at all.
I had a good chuckle at the idea of sitting around the dinner table at Christmas telling my parents and in-laws all about XSLT.
It's just direct browsing support for rendering using XSLT that's removed.
> For over a decade, Chrome has supported millions of organizations with more secure browsing – while pioneering a safer, more productive open web for all.
… and …
> Our commitment to Chromium and open philosophy to integration means Chrome works well with other parts of your tech stack, so you can continue building the enterprise ecosystem that works for you.
Per the current version of https://developer.chrome.com/docs/web-platform/deprecating-x..., by August 17, 2027, XSLT support is removed from Chrome Enterprise. That means even Chrome's enterprise-targeted, non-general-web browser is going to lose support for XSLT.
Surprisingly, the "hyperlinked documents" structure was universal enough to allow rudimentary interactive web applications like shops or reservation forms. The web became useful to commerce. At first, interactive functionality was achieved by what amounted to hacks: nav blocks repeated at every page, frames and iframes, synchronous form submissions. Of course, web participants pushed for more direct support for application building blocks, which included Javascript, client-side templates, and ultimately Shadow DOM and React.
XSLT is ultimately a client-side template language too (can be used at the server side just as well, of course). However, this is a template language for a previous era: non-interactive web of documents (and it excels at that). It has little use for the current era: web of interactive applications.
If you can use it to generate HTML, you can use it to generate an interactive experience.
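A rough sketch of what I mean, assuming a made-up <page> document with a <title> child - the stylesheet emits ordinary HTML, including a script element, so the result is interactive even though it came out of XSLT:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- match the (hypothetical) document element and emit plain HTML -->
  <xsl:template match="/page">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <button id="hello">Say hello</button>
        <!-- nothing stops the generated markup from carrying behaviour -->
        <script>
          document.getElementById('hello').onclick =
            function () { alert('generated by XSLT, interactive anyway'); };
        </script>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>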
As for money: Remind me what was Google's profit last year?
As for usage: XSLT is used on about 10x more sites [1] than Chrome-only non-standards like USB, WebTransport and others that Google has no trouble shoving into the browser
[1] Compare XSLT https://chromestatus.com/metrics/feature/timeline/popularity... with USB https://chromestatus.com/metrics/feature/timeline/popularity... or WebTransport: https://chromestatus.com/metrics/feature/timeline/popularity... or even MIDI (also supported by Firefox) https://chromestatus.com/metrics/feature/timeline/popularity...
Browsers should try things. But if after many years there is no adoption they should also retire them. This would be no different if the organization is charity or not.
Google themselves have a document on why killing anything in the web platform is problematic: e.g. Chrome stats severely under-report corporate usage. See "Blink principles of web compatibility" https://docs.google.com/document/d/1RC-pBBvsazYfCNNUSkPqAVpS...
It has great examples for when removal didn't break things, and when it did break things etc.
I don't know if anyone pays attention to this document anymore. Someone from Chrome linked to this document when they wanted to remove alert/prompt, and it completely contradicted their narrative.
Last i checked, google isn't a charity.
Last I checked, Google isn't supposed to be able to unilaterally decide how the World Wide Web is supposed to work
Besides, xkcd #2347 [1] is talking about precisely that situation - there is a shitload of very small FOSS libraries that underpin everything and yet, funding from the big dogs for whom even ten fulltime developer salaries would be a sneeze has historically lacked hard.
Google does contribute to software that it uses. When I say Google is not a charity, I mean: why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.
An awful lot of stuff depends on xslt under the hood. Web frontend, maybe not much any more, that ship has long since sailed. But anything Java? Anything XML-SOAP? That kind of stuff breathes XML and XSLT. And, at least MS Office's new-generation file formats are XML... and I'm pretty sure OpenOffice is just the same.
I'd also assume the java world is using xalan-j or saxon, not libxslt.
Neither do huge complicated standards that Chrome pushed in recent years.
> that is why google is removing it instead of fixing it.
And yet Google has no issues supporting, deploying and fixing features that see 10x less usage. Also, see this comment: https://news.ycombinator.com/item?id=45874740
> i mean why would they continue to use a library that is not useful to them, just so they can have an excuse to contribute to it? It makes very little sense.
They took upon themselves the role of benevolent stewards of the web. According to their own principles they should exercise extreme care when adding or removing features to the web.
However, since they dominate the browser market, and have completely subsumed all web-related committees, they have turned into arrogant uncaring dictators.
Apple and Firefox agree with them. They did not do this unilaterally. By some accounts it was actually Firefox originally pushing for this.
The main guy pushing it didn't even know about RSS sites as used by podcasts until people flooded the issue with examples and requests not to remove it. E.g. https://github.com/whatwg/html/issues/11523#issuecomment-315...
[1] Reaction was "cautiously agree" btw. In that same issue, https://github.com/whatwg/html/issues/11523#issuecomment-314... and https://github.com/whatwg/html/issues/11523#issuecomment-314...
[2] Same as with alert/prompt. While all browsers would want to remove them, Chrome not only immediately decided to remove them with very short notice, but literally refused to even engage with people pointing out issues until a very large public outcry: https://gomakethings.com/google-vs.-the-web/#the-chrome-team...
There's a difference between "we agree on principle" and "we don't care, remove/ship/change YOLO"
[1] https://github.com/pizlonator/fil-c/tree/deluge/projects/lib...
It doesn’t seem dramatic at all:
> Finding and exploiting 20-year-old bugs in web browsers
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
— https://www.offensivecon.org/speakers/2025/ivan-fratric.html
— https://www.youtube.com/watch?v=U1kc7fcF5Ao
> libxslt -- unmaintained, with multiple unfixed vulnerabilities
— https://vuxml.freebsd.org/freebsd/b0a3466f-5efc-11f0-ae84-99...
For $0? Probably not. For $40m/year, I bet you could create an entire company that just maintains and supports all these "abandoned" projects.
No sane commercial entity will dump even a cent into supporting an unused technology.
You have better luck pitching this idea to your senator to set up an agency for dead stuff - it will create tens or hundreds of jobs. And what's $40mm in the big picture?
Funny you should mention that. US Title Code uses XSLT.
It is supported technology. That's all it is. And it will be no more.
No one is stopping you from rendering your XML to HTML server side using XSLT.
Counterpoint: google hates XML and XSLT. I've been working on a hobby site using XML and XSLT for the last five years. Google refused to crawl and index anything on it. I have a working sitemap, a permissive robots.txt, a googlebot html file proving that I'm the owner of the site, and I've jumped through every hoop I can find, and they still refused to crawl or index anything except a snippet of the main index.xml page and they won't crawl any links on that.
I switched everything over to a static site generator a few weeks ago, and Google immediately crawled the whole thing and started showing snippets of the entire site in less than a day.
My guess is that their usage stats are skewed because they've designed their entire search apparatus to ignore it.
Game sites and other "desperate-for-attention" sites have the animated gifs all over, scrolling or blinking text, dark background with bright multi-colored text with different font sizes and types and sound as well, looking pretty chaotic.
Just browsing around on a geocities website you can find pages like https://geocities.restorativland.org/CollegePark/Lounge/3449... and https://geocities.restorativland.org/Eureka/1415/ (audio warning on both)
If anything, this retro site is a bit too modern for having translucent panels, the background not being badly tiled, and text effects being too stylish.
When it comes to killing web technology, Google is mostly killing their own weird APIs that nobody ended up using or pruning away code that almost nobody uses according to their statistics.
It has RSS feeds for individual channels. It does not _support_ RSS in any meaningful way.
Dissenting opinions will be marked as abuse!
The XML Priesthood will immediately jump down your throat about "XSL 3 Fixes All Things" or "But You're Not Doing It Correctly", and then point towards a twenty-year-old project that has five different proprietary dependencies, only two of which even still have a publicly listed price. "Email Jack for Pricing".
And all this time, the original publishing requirement for these stone age pipelines is completely subsumed by the lightweight markup ecosystem of the last decade, or, barring that, that of TeX. So much complexity for no reason whatsoever, I am watching man-centuries go up in frickin' smoke, to satisfy a clique of semantic academics who think all human thought is in the form of a tree.
The horror, the horror.
What they can do is remove support for XSLT in Chrome and thus basically kill XSLT for websites. Which until now I didn't even know was supported and used.
XSLT can be used in many other areas as well, e.g. for XSL-FO [2]
[1] https://www.w3.org/TR/xslt-30/ [2] https://en.wikipedia.org/wiki/XSL_Formatting_Objects
Browsers can render XHTML, which is also valid XML.
So it's pretty natural to use XSLT to convert XML into XHTML, which the browser then renders. Of course you can do it on the server side, but client-side support enables some interesting use cases.
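For illustration, the client-side wiring is just a processing instruction in the data document pointing at a stylesheet (books.xsl here is hypothetical); the browser fetches it and renders the transformed result:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="books.xsl" type="text/xsl"?>
<books>
  <book><title>First book</title><year>2001</year></book>
  <book><title>Second book</title><year>2002</year></book>
</books>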
Also: "the needs of users and authors (i.e. developers) should be treated as higher priority than those of implementors (i.e. browser vendors), yet the higher priority constituencies are at the mercy of the lower priority ones": https://dev.to/richharris/stay-alert-d
I guess the fact that it’s obscure knowledge that browsers have great, fast tools for working directly with XML is why we’re losing nice things and will soon be stuck in a nothing-but-JavaScript land of shit.
Lots of protocols are based on XML and browsers are (though, increasingly, “were”) very capable of handling them, with little more than a bridge on the server to overcome their inability to do TCP sockets. Super cool capability, with really good performance because all the important stuff’s in fast and efficient languages rather than JS.
One point from one of the linked threads I find particularly puzzling:
> I think the issue with XSLT isn't necessarily the size of the attack surface, it's the lack of attention and usage.
> I.e. nearly 100% of sites use JS, while 1/10000 of those use XSLT. So all of the engineering energy (rightfully) goes to JS, not XSLT.
XSLT is a finished standard. Not everything needs to evolve. If the implementation works and is safe, what speaks against keeping it?
If people continue to use XML-supporting technology, these open standards will continue to thrive.
I'm sure this site will be supported eventually by the Ladybird Web browser - can't wait to switch to it next August.
- WSDL files that were used to describe Enterprise services on a bus. These were then stored and shared in the most convoluted way in a Sharepoint page <shudders>
- XSD definitions of our custom XML responses to be validated <grimace>
- XSLTs to allow us to manipulate and display XML from other services, just so it would display properly on Oracle Siebel CRM <heavy sweats>
Fuck Google you tyrants, all the dissenting opinions in this thread about XSLT are clearly Google employees.
Btw, I love this page! Highly entertaining, yet at the same time a real use of XSLT.
There is a reason the lead Google engineers initials are “MF”.
https://tomi.vanek.sk/ is a WSDL viewer implemented as a set of XSLT transformations that translate the original XML definitions into HTML.
They can also avoid wasting resources on a format only used by “Raspberry Pi guys”.
“I just made an app that tracks local tandem bikes in the San Francisco Bay area”
I don't like seeing any backward compatibility loss on the web though, so I do wish browsers would reconsider and use a js-based compatibility shim (as other comments have mentioned).
There is nothing like it in the modern web stack; such a pity.
They moved everything into a wiki later.
EDIT: Oh, their developers' manual is still done like that: https://github.com/gentoo/devmanual into https://devmanual.gentoo.org/
I specifically want it to be served as XML so it can still be an RSS feed: I don't even need the HTML to look that great: I have the actual website for that.
But all I want is XSLT on live DOM nodes. Nothing fancy.
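Concretely, the pattern is roughly this (a sketch only; the stylesheet name feed.xsl is made up, and it assumes a plain RSS 2.0 feed without extra namespaces): the feed stays plain RSS with a single xml-stylesheet instruction, feed readers ignore it, and browsers render something readable. The stylesheet can be as small as:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- render the channel title and a simple list of item links -->
  <xsl:template match="/rss/channel">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <ul>
          <xsl:for-each select="item">
            <li><a href="{link}"><xsl:value-of select="title"/></a></li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>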
I used to generate a blog and tumblelog entirely from XML files using an XSLT processor, it will not be missed.
And how does it break RSS? (Which I at least heard of people using it before)
And I truly believe it's time to retire this monstrosity.
Edit: and for a slightly calmer response: Google has like, a bajillion dollars. They could address any security issues with XSLT by putting a few guys on making a Rust port and have it out by next week. Then they could update it to support the modern version in two weeks if it being out of date is a concern. RSS feeds need XSLT to display properly, they are a cornerstone of the independent web, yet Google simply does not care.
Say what you will about how this is technically allowed in open source, it is nothing short of morally despicable. A real https://xkcd.com/2347/ situation.
It would cost Google practically nothing to step up and fix all security issues, and continue maintenance if they wanted to. To say nothing of simply supporting the original maintainer financially.
But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?
I think https://xkcd.com/1172/ is more fitting.
> But IMO the more important topic within this whole saga is that libxml2 maintenance will also end this year. Will we also see support for XML removed?
No, because xml has meaningful usage on the web. The situations are very different.
They're really not. If "meaningful usage" was a factor, Google should stop maintaining AMP, USB, WebTransport, etc.[1]
If security and maintenance are a concern, then they should definitely also remove XML, since libxml2 has the same issues as libxslt.
> Similar to the severe security issues in libxslt, severe security issues were recently reported against libxml2 which is used in Chromium for parsing, serialization and testing the well-formedness of XML. To address future security issues with XML parsing In Chromium we plan to phase out the usage of libxml2 and replace XML parsing with a memory-safe XML parsing library written in Rust
Perhaps there are some Rust gurus out there that can deliver an XSLT crate in a similar fashion, which other folks can then integrate?
The problem seems to be that the current libxslt library is buggy due to the use of C, an unsafe language (use after free, etc.).
[BTW, David R. Hanson's old book "C: Interfaces and Implementations" demonstrated how to code in C in a way that avoids use after free: use pointers to pointers instead of plain pointers and set them to zero upon freeing memory blocks; e.g.
/* source: https://github.com/drh/cii/blob/master/src/arena.c */
void Arena_dispose(T *ap) {
    assert(ap && *ap);
    Arena_free(*ap);
    free(*ap);
    *ap = NULL; /* avoid use after free */
}
]
Even if one existed right now, I would be surprised if that changed Google's mind.
Meaningful usage being a factor does not mean it is the only factor.
I think it goes without saying that Google isn't going to remove support for XML (including things like SVG) anytime soon.
Hopefully YES.
Let the downvotes come, I know there are XML die hard fans here on HN.
Also, doesn't Excel use XSLT or am I thinking of something else?
Lots of Comic Sans and animated GIFs (which means that I still have XSLT, I guess).
The gaudy retro amateur '95 design of this page might suggest the idea "anyone only cares about this for strange nostalgia reasons".
Content-wise, I think this argument is missing a key piece:
> Why does Google hate XML?
> RSS is used to syndicate NEWS and by killing it Google can control the media. XSLT is used worldwide by [multiple government sites](https://github.com/whatwg/html/issues/11582). Google are now trying to control LEGISLATION. With these technologies removed what is stopping Google?
Google wanting RSS/Atom dead, presumably for control/profit reasons, is very old news. And it's old news that Big Tech eventually started playing ball with US-style lobbying (to influence legislation) after resisting for a long time.
But what does the writer think is Google's motivation for temporarily breaking access to US Congress legislative texts and misc. other gov't sites in this way (as alleged by that `whatwg` issues link)? What do they want, and how does this move advance that?
We can imagine conspiracy theories, including some that would be right at home on a retro site with animated GIFs and a request to sign their guestbook, but the author should really spell out what they are asserting.
These points should be addressed first on the website.
What the hell is Mozilla doing with that money? How useless are all those people?
(IIRC her salary increased something like 10 folds over the past 15 years or so)
Edit: It has jumped from $490k[1] to $6.25M[2] from 2009 to 2024.
Edit 2: by looking the figures up, I learned that she's gone at last, good riddance (though I highly doubt her successor is going to take a 12-fold pay cut)
[1]: https://static.mozilla.com/foundation/documents/mf-2009-irs-... page 8
[2]: https://assets.mozilla.net/annualreport/2024/b200-mozilla-fo... page 8 as well.
The page styling is harkening back to the style of some EARLY early personal amateur niche sites. It reminds me of like Time Cube <https://web.archive.org/web/20150506055228/http://www.timecu...> or like Neocities pages, even TempleOS in the earlier days.
It's really taking me back, I'm actually getting a little emotional...
Good old DSSSL days, sigh.
I cannot tell if this is satire or not, very well done
wtf is XSLT?
Would have been great if it had been open-sourced, but they paid for all the development and owned the codebase. They wanted to use it to dynamically generate content for every page for every unique device and client that hit their server. They had the infrastructure to do that for millions of users. The processing could be done on the server for plain web browsers or embedded inside a client binary app, so live rendering to native could be done on-device.
Back then, it was trivial to generate XML on-the-fly from a SQL-based database, then send that back, or render it to XHTML or any other custom presentation format via XSLT. Through XSD schema, the format was self-documenting and could be validated. XSLT also helped push the standardizing on XHTML and harness the chaos of mis-matched HTML versions in each browser. It was also a great way to inject semantic web tags into the output.
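For instance (purely illustrative, assuming a trivial made-up <order> document), the XSD half of that setup looked something like:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- a self-documenting, validatable description of the payload -->
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" type="xs:string"/>
        <xs:element name="quantity" type="xs:positiveInteger"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>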
But I always thought it got dragged down with the overloaded weight of SOAP. Once REST and Node showed up, everyone headed for new pastures. Then JS in browsers begat SPAs, so rendering could be done in the front-end. Schema validation moved to ad-hoc tools like Swagger/OpenAPI. Sadly, we don't really have a semantic web alternative now and have to rely on best guesses via LLMs.
For a brief moment, it looked like the dream of a hyper-linked, interconnected, end-to-end structured, realtime semantic web might be realizable. Aaand, then it all went poof.
TL;DR: The XML/XSLT stack nailed a lot of the requirements. It just got too heavy and lost out to lighter-weight options.
Because I wasted my time finding working ways to get results by scripting, it left me no time to think about it any other way (for example, to try out the next few things I saw coming to the client side soon after, which I already knew from eXist-db). It took me some time, much later, to learn about a few incredible things that, had they worked, would have made my job basic - if only, again, a few things described as bugs had been fixed at the time.
Without that, this is what happens: if you want the results, you code it yourself - with or without regard for the few bugs that turn simple things into hard corner cases with interoperability problems that can't be solved.
Since then, I understand that with JavaScript it's just easier to keep fixing things ad hoc, not worrying too much about standards or implementations, than to keep asking for a few things or key bugs to be fixed - for more than 20 years - and never see it happen.
The legacy is that we can no longer get to the point where simple things just interoperate (is that old school now?), while a generation later, not even aware of why, has such an imperative mindset of micromanagement that they cannot imagine not re-implementing something over and over - just because, in another world, after a long road, it was abstracted once, yet never implemented once to work in the same consistent way, and as intended, across browsers.
From that point of view it's quite easy not to worry about standards, or to abolish them - you can't do much about implementations elsewhere or their bugs - but you can do whatever you want with your own code (as long as no one reminds you; will it last when other things change?).
That's sad, actually, as I see it: JavaScript document programmers keep repeating, and will keep repeating, the same work, unaware of the reason for it - a few bugs here and there, not fixed once in 20 years, or not fixed in the same common way.
But how "random" were all the things leading to this point - "with JavaScript everything is possible and everything else is redundant" (only a hammer can work?)? Then look at this example: https://news.ycombinator.com/item?id=45183624 - what there looks like the simplest abstract form, and what looks redundant?
P.S. RIP WWW
(?) (JS is not a W3 standard)
I have and I've always hated it. I still to this day will never touch an IBM DataPower appliance, though I'm more than capable because of XSLT.
They (IBM) even tried to make it more appealing by allowing Javascript to run on DataPower instead of XSLT to process XML documents.
It's a crap language designed for XML (which is too verbose) and there are way better alternatives.
JavaScript and JSON won because of their simplicity. The JavaScript ecosystem, however (Node.js, npm, yarn, etc.), is what takes away from an otherwise excellent programming language.