It would be significantly more practical for the output element to have a "type" attribute, in the same way the input element does.
I did experiment with output|type in my Sciter and added these:
type="text" - default value, no formatting
type="number" - formats content as a number using the user's locale settings
type="currency" - formats content as a currency using the user's locale settings
type="date" - as a date, no TZ conversion
type="date-local" - as a date in the user's format, UTC datetime converted to local
type="time" - as a time
type="time-local" - as a local time, value treated as a UTC datetime
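For what it's worth, the behavior of those proposed types can be approximated in userland with Intl today. A rough sketch (the type names come from the proposal above, not any standard attribute, and the hardcoded USD currency is an assumption, since a real design would need a currency code from somewhere):

```javascript
// Sketch: format a raw value per the proposed <output type="..."> values.
function formatOutput(type, raw, locale) {
  switch (type) {
    case "number":
      return new Intl.NumberFormat(locale).format(Number(raw));
    case "currency":
      // A real design would also need the currency code; USD is assumed here.
      return new Intl.NumberFormat(locale, {
        style: "currency", currency: "USD",
      }).format(Number(raw));
    case "date": // no TZ conversion
      return new Intl.DateTimeFormat(locale, { timeZone: "UTC" }).format(new Date(raw));
    case "date-local": // UTC datetime shown in the user's zone and format
      return new Intl.DateTimeFormat(locale).format(new Date(raw));
    case "time":
      return new Intl.DateTimeFormat(locale, {
        timeStyle: "short", timeZone: "UTC",
      }).format(new Date(raw));
    case "time-local":
      return new Intl.DateTimeFormat(locale, { timeStyle: "short" }).format(new Date(raw));
    default:
      return raw; // "text": no formatting
  }
}
```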
This way the server can provide data without needing to know the user's locale.

> The output element represents the result of a calculation performed by the application, or the result of a user action.
<output> is for changing content. It's the ARIA semantics that matter. The content gets announced after page updates.
You can put whatever you want inside the <output> to represent the type. "text" is the default. You can represent dates and times with the <time> element. And while there is currently no specific number formatting element, since Intl has arrived there have been many requests for this.
For example:
<output>The new date is <time datetime="2025-10-11">Oct 11</time></output>
IOW, <output> should not have to handle all these types when it handles HTML, and HTML needs to represent the types anyway.

It's sad that this is still the case for so many elements in 2025. A good chunk of them can be blamed on Safari.
Probably the most extreme example of this is <input type="date"> which is supposedly production-ready but still has so many browser quirks that it's almost always better to use a JS date picker, which feels icky.
I then proceeded to spend the next week crying while trying to get a JS date picker to work as well as the native one did in my browsers.
It's actually a little surprising to me since these are somewhat basic controls that have been around in UI toolkits for decades. It's in that weird sweet spot where the building blocks are almost usable enough to build rich applications, but it's just out of reach.
<output for=input>
<!-- bring your own time-locale component -->
<time is=time-locale datetime=2001-02-03>2001-02-03</time>
</output>
With the component replacing the value depending on locale. I don't think having HTML/CSS fiddle around with making fake content is a great idea; it already causes issues when trying to copy content injected by CSS's ::before/::after pseudo-elements, let alone having a difference between the DOM's .innerText and, well, the inner text.

Not saying decisions can't be made about these things, just that making those decisions will pretty much turn a single element into a dedicated DSL (dependent on input, desired kind of output (absolute or relative), other data sent alongside (type of currency, does it need to be a "real" currency?)). Since instead of just calling something in mutable/overridable JS, it's now part of the HTML processing, something that can't directly be touched.
There have been a bunch of requests for Intl-driven elements in HTML, and I expect them to be added at some point.
<form id="my-form">
<input name="number" type="number">
<output name="result"></output>
</form>
<script>
const myForm = document.getElementById("my-form");
const inputField = document.elements.namedItem("number");
const outputField = document.elements.namedItem("result");
outputField.textContent = inputField.valueAsNumber ** 2;
</script>
- const inputField = document.elements.namedItem("number");
- const outputField = document.elements.namedItem("result");
+ const inputField = myForm.elements.namedItem("number");
+ const outputField = myForm.elements.namedItem("result");
<output type="currency"> uses the same convention as "Intl.NumberFormat/style=currency": https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
They are correct in that if you're displaying a currency value, you have to know which currency it is in, right? It wouldn't make sense for the server to be "unaware" of the locale of the value.
That said, your comment sidesteps that issue and addresses how the number itself is displayed, since ultimately the value itself is a number, but different locales display numbers differently.
So the person you're responding to is asking: since the server ostensibly already knows which currency it's in, shouldn't it already be formatting the value appropriately? That's more a question of where one thinks localization formatting should ultimately live in a web-app context.
If you see a price in Euros and there's a chance the browser converts the number to my locale, then the price becomes completely ambiguous. Information is lost unless I change my locale just to see if the number changed.
If, on the other hand, the browser doesn't apply such formatting, then the number is probably the number.
What's more, wouldn't you need to specify an origin locale so the browser knows how to correctly interpret the value?
If you want a specific country's format, you can use lang:
<output type="currency" lang="de-DE">123456.00</output>
Currency conversion is not a subject of a browser.
€1,000.48 = 1.000,48€
A payment, bill, price, etc has a particular currency.
For example, 59.95 Australian dollars:
In en-AU locale, that is $59.95.
In en-US locale, that is 59.95 AUD or AU$59.95.
Either way, the quantity and units are the same, but they are presented differently.
In some cases, there may be currency exchange services. That will depend on the current exchange rate, and possibly exchange fees. And yes, that will take some more work. But that’s a fundamentally distinct concept than pure presentation of a monetary amount.
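Intl already implements exactly this distinction: same quantity and unit, different presentation per viewer locale. A quick illustration:

```javascript
// Format 59.95 Australian dollars for readers in different locales.
const aud = (locale) =>
  new Intl.NumberFormat(locale, { style: "currency", currency: "AUD" }).format(59.95);

console.log(aud("en-AU")); // "$59.95" (home locale, plain dollar sign)
console.log(aud("en-US")); // "A$59.95" (disambiguated for a US reader)
```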
<input type="range" id="example_id" name="example_nm" min="0" max="50">
<output name="example_result" for="example_id"></output>
And it would just show you the input value. Maybe with a "type" specifier like the one talked about. Maybe via the ::before or ::after CSS, allowing content: updates or something.

There's a bunch of <input> types with a reasonable case for this, especially if it allowed for formatting. Did you put in the type="tel" value the way you believed it should look? It prints it out formatted.
'checkbox, color, date, datetime-local, file, month, number, radio, range, tel, time, url, week' might all have possible uses. Some of the text cases might have uses in specific conditions. 'email, text, url'
It would also be nice if the for="" attribute actually did very much. The attachment seems mostly irrelevant in the examples seen. Most examples just use a variation on:
<output name="result">
<form oninput="result.value=...">
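Filled in, that canonical pattern looks like this (ids and names taken from the range example above; the inline handler resolves `example_result` via the form's named elements):

```html
<form oninput="example_result.value = example_id.value">
  <input type="range" id="example_id" name="example_nm" min="0" max="50">
  <output name="example_result" for="example_id"></output>
</form>
```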
const value = 1234.5; // example value to format
const outputEl = document.getElementById("my-output");
const currencyFormat = new Intl.NumberFormat("default", {
  style: "currency",
  currency: "ISK",
});
outputEl.textContent = currencyFormat.format(value);
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Formatting output values to the user's locale has nothing to do with currency exchange rates. And JavaScript does the former rather excellently (except when the locale is not supported [ahem, Chromium]).
form.value = { transAmount: 12345n, transDate: new Date() };
where form is <form>
... <output type="currency" name="transAmount" />
... <output type="date-local" name="transDate" />
</form>
Yeah, count me on with those who don't even know it exists. I'm adding this to my TIL.
> When I searched GitHub public repos, it barely showed up at all.
> That absence creates a feedback loop: if no one teaches it, no one uses it.
This triggered an instant question in my head: do LLMs actually use it when generating code, or are they not well-trained on this specific tag?
AI software development models and agents can be "taught" to go look at the official version documentation for languages via the prompt (just as one example) without needing to modify their weights.
One call to getSpecification('Python','3.14') or some similar tool call and they know exactly what they are working with, even if the language version did not exist when the model was trained.
What we should have had instead is a `document.speak(text, priority)` method, which would just pass the text straight through to the screen reader.
Sure, that approach is less pure, yet it is far more pragmatic and practical, as you only need ARIA when you heavily use JS anyway.
Aria live regions, as they currently stand, basically encourage three anti-patterns:
First is the "if it changes, it should be announced" anti-pattern, think e.g. a constantly changing stock ticker on a news website. People make these into aria live regions because "it's live, we want to be good citizens and do semantics, right?". Sites that do this are almost impossible to use, as you're constantly distracted by the ticker. Another instance of this are timers that change every second. With a `document.speak` method, it would be far more obvious that you don't want the screen reader to announce the ticker every time it changes.
The second anti-pattern is the one of outputting the same message twice. Think a calculator where the user executes two different expressions which both produce a result of 0. The result of the second expression won't be announced, as there will be no change to the live region contents. With `document.speak`, this wouldn't be a problem, as the method would just be executed twice.
The third anti-pattern is actually somewhat specific to the `output` tag. It's very tempting to use it for terminal, log and chat outputs, and that is a mistake. Those processes append to the output instead of replacing it, but with the aria status of atomic, the entire output area would be announced every time the output changes.
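A sketch of that third anti-pattern (the `<output>` element maps to role=status, which is implicitly aria-live="polite" and aria-atomic="true", so every append re-announces the whole accumulated log):

```html
<!-- Anti-pattern: appending log lines into an atomic live region. -->
<output id="log"></output>
<script>
  const log = document.getElementById("log");
  function appendLine(text) {
    // Each call changes the region; with aria-atomic, the screen
    // reader announces the ENTIRE log again, not just the new line.
    log.append(text, document.createElement("br"));
  }
</script>
```

A plain `<div aria-live="polite">` (without aria-atomic) announces only the additions, which is usually what a log or chat view actually wants.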
Waiting for support to improve on a 17 year old tag that is barely used anymore?
It’s obviously the screen readers’ fault.
https://developer.mozilla.org/en-US/docs/Web/Accessibility/A...
It's fun to play around with things like this, but if you're a developer you have a responsibility to build things that work for your users using the existing tools and ecosystem. Don't use semantic HTML tags that aren't widely used, just do the thing that works.
> And because it has been part of the spec since 2008, support is excellent across browsers and screen readers. It also plays nicely with any JavaScript framework you might be using, like React or Vue.
So what makes you think this isn’t a “thing that works”?
HTML isn't just "browsers". I've been doing a lot of EPUB work, and semantic pages make everything easier and better.
Any screen reader users able to comment on whether this is worth doing? I suspect this would be such a rarity online that screen reader users wouldn’t be familiar with it, but it depends on the UX of the software
That said, I imagine it's more useful to do the opposite, label the output itself e.g.
<label for="output">Total</label> <output id="output">£123.45</output>
That way it will be announced like "Total, £123.45" rather than a random number with no context
This is handy for testing with screen readers, and includes links to the appropriate spec (for output and all elements):
https://stevefaulkner.github.io/AT-browser-tests/test-files/...
The browser may add a reference from one to the other in the accessibility tree, but whether a screenreader announces it is another matter. I'd be surprised if it's supported in any meaningful way here. Happy to be shown otherwise!
(Actually, the dodgy GenAI calculator image at the top primed me for even more failure, making the excellent content that followed even more surprising. But I soon forgot about it and only remembered when I scrolled back to the start for no particular reason when done.)
It appears human beings are already forgetting the even more dodgy images some of us created before AI allowed us to reduce said dodginess. Or actually get a picture we could post without too much shame. :)
And in this case, IMHO, the image has a significant amount of dodgy vintage tech charm.
Not every use of AI replaces a professional artist.
1. The techniques, inspiration and creativity of skilled human artists.
2. The personality of art by unskilled artists.
3. The use of generative AI to replace generic clip art. A little whimsy to dress up plain text.
Is anyone really crusading to protect generic clip art? No?
There should be something like rule 34 for social and cultural movements. For every movement, the ideological version will get performatively or knee jerk expressed, with shame throwing, in benign situations.
> Actually, the dodgy GenAI calculator image at the top primed me for even more failure, making the excellent content that followed even more surprising.
This was poor spirited, personal itch scratching. We can just compliment a writer on their writing, if they are a writer, presenting themselves as a writer, not a visual artist.
I'm fairly confident that this article is primarily AI-written as well
Also "ARIA" stands for Accessible Rich Internet Applications and it's "a set of HTML attributes that make web content more accessible to people with disabilities."
You'd be surprised how many people barely know it exists... I was a TA for my uni's Web Engineering and Ethics in CS courses and accessibility never even came up in either course.
That is genuinely baffling to me. How does a university teach web engineering without even mentioning accessibility? It’s not just best practice—it’s often a legal requirement for public-sector sites in many countries. Even outside government work, major companies (FAANG included) publicly invest in accessibility to avoid both reputational and legal fallout. Ignoring it entirely sends the wrong message to students about professional responsibility and real-world standards.
It’s why ‘self taught’ in many disciplines is very doable too, if someone focuses on what people actually want/need.
They might not be good at articulating the differences between fizzbuzz and bubble sort, but they can get shit done that works.
Every PhD that I know that went from Academia to Industry immediately had their stress levels decrease 10x and their pay roughly double too - because they could finally do a thing, see if it worked or not, and if it did, get paid more.
Instead of insane constant bullshitting and reputation management/politics with a hint of real application maybe sprinkled in. Few ‘knives’ have to be as sharp as the academics, in my experience.
For example: You don’t realize how absolutely abysmal voice control is for computers until you have to use it.
There are so many assumptions about the world that cause things like neurodivergence to become a disability instead of a difference.
Fair. I might’ve read more snark in the “Apparently,” than the commenter intended to convey.
For what it’s worth, the comment you read is the toned down version of what I had initially come up with. I really don’t think being dismissive of accessibility concerns is good style.
>> The first rule of ARIA use is "If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so."
https://developer.mozilla.org/en-US/docs/Web/Accessibility/A...
Maybe I'm jaded, I was all in on semantic xhtml and microformats before we got HTML5, but this seems like being overly-pedantic for the sake of pedantry rather than for a11y.
Around the time that abbreviation became fashionable, using a lot of DIV elements did too, but that wasn't what the "D" stood for.
Descriptivism usually reflects some reality no matter the intended prescriptives.
See also the dictionary fallacy, and again descriptivism vs prescriptivism.
Additionally, even leaving alone the div/dynamic language issue, there really isn’t a point in usage history where DHTML came without JS — believe me, I was doing it when the term first came into usage. JS was required for nearly all dynamic behavior.
DHTML is an acronym that expands to: Dynamic HyperText Markup Language.
There is no dictionary fallacy or descriptivism vs prescriptivism or defined meaning. It was simply an industry standard way to shorten all those words.
Changing one of the letters to stand for something else reassigns the pointer to something else entirely, or is the making of a joke, which I think the above may have been.
We had table based layouts and then divs when CSS started to take off, mostly used by artists rather than companies at first.
Javascript had vanishingly limited uses at first, too. I don't remember exactly how long it took us to get XHR but before that we had "Comet frames", before iframe security was given much focus. Javascript couldn't do that for a while. It was also dodgy and considered bad practice for quite a while, too.
I don't remember when the term javascript was even really used in regular vernacular but DHTML was not so much referring to CSS as it was the myriad of weird mechanisms introduced to make pages dynamic. It was never "Div-based HTML" or whatever, the div craze came way later once CSS was Good Enough to eschew table layouts - after which, Dreamweaver died and photoshop's slice tool finally got removed, and we started inching toward where the web sits today.
I also do distinctly recall needing a doctype for DHTML for some browsers.
It wasn't as fast or as usable as it is today, but Javascript has been in every mainstream browser since before Microsoft started pushing "DHTML".
Interestingly, in my memory, it seemed like we had JS for a long time before DHTML, but it was only a couple years between Eich writing it and IE4, which was the start of the "DHTML" moniker. Looking back at the timeline, everything seems much more compressed than it felt at the time.
Divs weren’t a “craze”. They were popularized by the (brand new) XHTML spec, which did have its own doctype.
2004 or 2005. Gmail and Google Maps were a "holy crap this is actually possible?" for a lot of people, both technical and non, and was when javascript switched from mostly-ignored* to embraced.
*Just minor enhancements, outside of technical people mostly only known to MySpace users who wanted to add a little flair to their page. XmlHttpRequest was almost entirely unknown even in technical spaces until gmail showcased interaction without page refreshes.
It's clear that you are sighted and never use reader mode.
I think the parent has a good point: browsers don't do anything with these tags for sighted users, who are unfortunately the majority of developers. If they were to notice benefits to using semantic tags, maybe they'd use them more often.
I use reader mode by default in Safari because it’s essentially the ultimate ad blocker: it throws the whole site away and just shows me the article I want to read.
But this is in opposition to what the website owners want, which is to thwart reader mode and stuff as many ads in my way as they can.
It’s possible good accessibility is antithetical to the ad-driven web. It’s no wonder sites don’t bother with it.
[1] - eg https://picocss.com/
Sure, it allows the browser to do that. GP is complaining that even though browsers are allowed to do all that, they typically don't.
Why don't we just have markup for a table of contents in 2025?
But if you are a developer you should see value in <article> and <section> keeping your markup much much nicer which in turn should make your tests much easier to write.
You have to remember, this is an industry that thinks having code without syntax errors was too unreasonable a requirement for XHTML, there is no reason to expect them to know anything beyond div and maybe a dozen other tags.
a
abbr
address
area
article
aside
audio
b
base
bdi
bdo
blockquote
body
br
button
canvas
caption
cite
code
col
colgroup
data
datalist
dd
del
details
dfn
dialog
div
dl
dt
em
embed
fieldset
figcaption
figure
footer
form
h1
h2
h3
h4
h5
h6
head
header
hgroup
hr
html
i
iframe
img
input
ins
kbd
label
legend
li
link
main
map
mark
menu
meta
meter
nav
noscript
object
ol
optgroup
option
output
p
picture
pre
progress
q
rp
rt
ruby
s
samp
script
search
section
select
slot
small
source
span
strong
style
sub
summary
sup
table
tbody
td
template
textarea
tfoot
th
thead
time
title
tr
track
u
ul
var
video
wbr
Not that this is problematic per se; everybody's mileage may vary, and we're all out here to learn. But if I told one of them about the output tag they probably wouldn't even understand why it would be important.
Maybe it's because like most things html/css related, it's a semi-broken implementation of a half-feature?
Then no one checked, and the javascript train had already left the station.
In before comments - not advocating for div only development, just that the nature of www moved from html with some js to well ... only js.
Another is structuring your form names to align with how they're going to be used in the backend, so you don't have to use JavaScript to gather all the data or do a lot of restructuring of the request data.
This is an oversimplified example but now even if you submit with JS, you just have to submit the form and the form data is already there.
<input name="entity[id]">
<input name="entity[relation]">
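The nesting those bracketed names imply can be sketched as a small helper (this mirrors what Rails/PHP-style backends do when parsing such names; the function is illustrative, not a real library):

```javascript
// Turn [["entity[id]", "7"], ["entity[relation]", "owner"]] into
// { entity: { id: "7", relation: "owner" } }, the way bracket-aware
// backends interpret form field names.
function nestBracketNames(pairs) {
  const out = {};
  for (const [name, value] of pairs) {
    const keys = name.replace(/\]/g, "").split("[");
    let node = out;
    for (let i = 0; i < keys.length - 1; i++) {
      node = node[keys[i]] ??= {}; // descend, creating objects as needed
    }
    node[keys[keys.length - 1]] = value;
  }
  return out;
}
```

In the browser, `new FormData(form)` yields exactly such name/value pairs, so the form can be submitted as-is with no restructuring.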
I understand that there are some accessibility benefits to some semantic HTML tags.
You could, but then you wouldn't be gaining any of the accessibility wins the article is discussing...
Now, the bottleneck is entirely the database first and the framework second. Those can be switched if the framework code is extra garbage. When those are taken out of the equation I am seeing text update to the screen in about 5-15ms in response to a user interaction that requires execution on the localhost server, 45ms for networked server.
At that speed you don't need to alert the user of any content changes. You only need to structure the content such that walking the DOM with a screen reader, from point of interaction to area of output, is direct, logical, and expected for a human.
> It’s been in the spec for years. Yet it’s hiding in plain sight.
Almost as if we're... blind to it?
No? Too on the nose?
Is there a way to search by code?
The output of any actions will be shoved into any N random elements. So every `<div>` will have `<output>`? Why? Waste of payload size and CPU cycles in parsing.
The designers of semantic tags truly live in ivory towers.
Imagine you place a text field in a precise spot on the screen, with your choice of font, and it renders the way it renders on your screen on every client device, everywhere.
Oh wait, we had, it was called Flash.
Because we don’t need another fucking tag, that’s why.