As for using extended grapheme clusters, it sounds a little bit iffy—maybe possible to use correctly, maybe not, because they’re not stable over time. That style of thing has created some fascinating bugs, like (a few years ago) index corruption in PostgreSQL due to collation changes.
Unicode scalar values are technically safe: you can’t introduce invalid Unicode. But you can definitely still end up with nonsense.
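A minimal sketch of that "valid but nonsense" failure mode, assuming any modern JS runtime; the flag emoji is just an illustrative choice:

// The German flag is two scalar values (regional indicators U+1F1E9, U+1F1EA).
// Splitting between them never creates invalid UTF-16, but the meaning is gone.
const flag = "\u{1F1E9}\u{1F1EA}";   // 🇩🇪
const scalars = [...flag];           // spread iterates by code point, not code unit
console.log(scalars.length);         // 2
console.log(scalars[0], scalars[1]); // 🇩 🇪, each a valid string on its own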
> We made emoji an atomic node type.
That avoids problems for emoji, but leaves the underlying hazard untouched. I imagine it could still theoretically occur with other text, probably CJK. But probably only theoretically.
> This splits by grapheme clusters rather than code units. No orphaned surrogates, no split emoji. It's what .slice() should have been doing all along, but of course UTF-16 predates emoji by decades.
I do not agree that slice() should operate on extended grapheme clusters. Don’t lump the grapheme cluster/scalar value split in with the sins of UTF-16 and its unreliable code point/code unit split.
UTF-16 was an unforced error (and I still can’t work out why it wasn’t obvious from the start that UCS-2 would never be enough). But the concept of multiple scalars contributing to one logical unit was always inevitable.
Surely certain people did know, but those people weren't in a position to do anything about it.
Specifically, there were surely people who knew that, because historical Chinese place names, Japanese nicknames, and so on were not included in the original "Unicode" (it wasn't called UCS-2 yet), it was insufficient for fully expressing Asian languages.
There were also many people who objected to Han unification, which is a different problem.
But all of these objections were discarded because of the overwhelming mandate for a fixed-width encoding. The original "Unicode" was conceived as a "16-bit" initiative. Its 16-bit-ness was an essential aspect of the design and the Unicode Consortium did what they had to do to fit all scripts and characters "in modern use" into 16 bits.
From the Wikipedia article on Han Unification[1]:
> Some of the controversy stems from the fact that the very decision of performing Han unification was made by the initial Unicode Consortium, which at the time was a consortium of North American companies and organizations (most of them in California), but included no East Asian government representatives. The initial design goal was to create a 16-bit standard, and Han unification was therefore a critical step for avoiding tens of thousands of character duplications.
It's also worth noting that the original goal of Unicode wasn't to be able to faithfully represent all text, but rather to faithfully represent existing character sets. Only later do you get the impetus to actually include everything, as people become a lot less tolerant of "computer can't actually represent <X>" scenarios. Note too that a lot of the Han unification criticisms basically fall into the same bucket as those of medievalists, who want to preserve certain details of their source texts more faithfully than was the norm for computer systems in the 1980s.
Maybe a simpler argument against this idea is that the definition of an extended grapheme cluster changes between versions of Unicode. The relevant standard is on its 47th revision (not all of which change extended grapheme clusters, but many do): https://www.unicode.org/reports/tr29/
ISO 10646 (“Universal Coded Character Set”) planned for 31-bit code points from the start (128 groups of 256 planes of 256 rows of 256 cells, with UCS-4 as a four-byte encoding), around 1989. Unicode, on the other hand, was a parallel effort initiated by Xerox and Apple a few years earlier, with more pragmatic aims, defining a 16-bit character set (but no encoding) that would allow round-tripping of existing character sets. For Unicode 1.1, it was decided to align it with ISO 10646 and make it coincide with the latter’s first plane (the BMP) and UCS-2. In Unicode 2.0, surrogate pairs and the UTF-16 encoding were introduced to allow future expansion to additional planes, in a way that would be compatible with existing implementations. Only with Unicode 3.1 in 2001, five years after Unicode 2.0 and ten years after Unicode 1.0, were actual characters assigned beyond the BMP.
History is complicated; aims, incentives, and constraints change over time.
Yeah, I think that's fair. I didn't really think this through as I was writing it.
I'm not even so sure "ending up with nonsense" here is the worst outcome. It might be unavoidable with this approach, and if that had been the only problem, this bug might have been less memorable.
The real problem, which I didn't articulate or emphasize particularly well, was that these invalid surrogate pairs were getting passed into `encodeURIComponent` somewhere deep in the stack, which choked catastrophically on them. That was the "real" bug at the end of the day, but the invalid surrogate pairs, and the way they were being created along the way, were a fun journey to untangle.
- https://george.mand.is/invalid-surrogate-pairs/
I thought it was something that's easier to play with and get a feel for than to just read about.
We were expanding our product to a new language that used non-ASCII code points. Part of the system involved invoking binaries using text as input.
Locally, everything worked great. Once deployed, we got corrupted text output. As soon as we SSH’d onto the server to inspect, everything started working again.
It turns out that SSH servers can modify the LANG environment variable. The default value on our servers didn’t support Unicode, but LANG was updated as soon as we connected via ssh. It was a head scratcher for sure.
I recently ported a program from python to rust and the original author used string regexes. Input and output document encoding mattered but the characters that needed to be matched were always lower ASCII. The python program could have used binary regexes, but instead forced an input encoding (UTF-8) and made the user choose an output encoding. When the input comes from an unknown process or legacy data, however, you don’t always get the luxury of assuming the encoding. Switching to binary regexes and ignoring encoding altogether simplified logic, eliminated classes of errors, and made the program work in scenarios it couldn’t earlier. Getting rid of the last decoding/encoding code gave me so much relief, especially when all of the whacky encoding tests I had already written continued to work.
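A rough sketch of the same idea in JS (my own illustration, not the commenter's Rust code), assuming the input arrives as raw bytes and the markers you care about are lower ASCII:

// Match lower-ASCII marker bytes directly on the raw bytes, with no decoding step.
function indexOfAsciiByte(bytes, asciiChar) {
  return bytes.indexOf(asciiChar.charCodeAt(0)); // Uint8Array.prototype.indexOf
}

const raw = new Uint8Array([0xff, 0xfe, 0x3c, 0x61, 0x3e]); // arbitrary bytes around "<a>"
console.log(indexOfAsciiByte(raw, "<")); // 2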
If I'm remembering correctly, we briefly explored a solution where we told Python "this is a UTF-16LE encoded string" so the count would match, but I think we learned/realized the endianness is actually dictated by the client's machine (going from memory here). Ultimately we just changed the solution so the client was the source of truth about lengths and counts.
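For a feel of why counts disagree in the first place, here's a tiny illustration (mine, not from the original bug) of one string having three different "lengths" depending on which unit each layer counts:

const s = "\u{1F920}";                           // 🤠
console.log(s.length);                           // 2: UTF-16 code units (what JS counts)
console.log([...s].length);                      // 1: code point / scalar value
console.log(new TextEncoder().encode(s).length); // 4: UTF-8 bytes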
These threads are surfacing all kinds of things I forgot about and didn't add in that blog post. Maybe I need to write another, haha.
Because invalid UTF-16 strings could show up in various places within Windows, someone made a UTF-8 variant called "WTF-8", which allows unpaired surrogates to survive a round trip.
It was already bad enough that, instead of bytes, we have to worry about code points. Now even that isn’t enough?
It would have been expensive, but all characters should have been fixed size 64bit values.
It would have been a non-starter, and then we'd all be dealing with Shift-JIS, BIG5, and FSM knows how many different codepages to this day. UTF-8 is about as elegant as it gets, though Java and JS still managed to fuck that up too (they both encode every codepoint outside the BMP as surrogate pairs in UTF-8)
I can’t comment on Java, but JS I know reasonably well and I can’t think of any place it uses CESU-8.
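For what it's worth, the standard JS encoder emits genuine UTF-8 for astral characters; a quick check in any runtime with TextEncoder:

const bytes = new TextEncoder().encode("\u{1F920}"); // 🤠, U+1F920
console.log([...bytes].map(b => b.toString(16)));    // ["f0", "9f", "a4", "a0"]
// CESU-8 would instead encode the two surrogates separately, giving six bytes.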
You're making the same mistake that numerous people made before you: thinking that it's as simple as using arrays of large enough numbers. First they thought that two bytes per symbol would be enough, then four. Spoiler alert: it wasn't. And eight won't work either.
"character" turns out to be too vague an idea to correspond to some specific fact about the software. If you co-worker says his Uncle is "conservative" does he mean like "Believes Right To Work laws are a good idea" conservative or "Believes Joe Biden is a Communist" conservative ?
https://en.wikipedia.org/wiki/Character_(symbol) gives you some idea about this rabbit hole. Suffice to say, no, you can't have operations on "characters" until you've nailed down exactly what it was you meant by that.
Author went for Intl.Segmenter too: https://github.com/cheeaun/phanpy/issues/1491
Fun Java/macOS quirk: macOS normalizes file names, so you can't have two files called ü in the same directory, one written as a single precomposed character and one as combining characters. Unfortunately, this only happens on write, not on read: if you type an ü on a German keyboard (which produces the single precomposed character) into a file name in Java source code, the file gets saved under the decomposed name, but later it won't be found when you try to open it with the single-character name.
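The composed/decomposed distinction at the heart of that quirk is easy to poke at from JS (this only illustrates the two forms, not the macOS behaviour itself):

const composed = "\u00FC";    // ü as one precomposed code point
const decomposed = "u\u0308"; // u followed by COMBINING DIAERESIS
console.log(composed === decomposed);                  // false
console.log(composed.normalize("NFD") === decomposed); // true
console.log(decomposed.normalize("NFC") === composed); // true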
[0] But everyone disagrees as to what indexing a string means, so you need to make an actual choice if you want anything involving indexing to match across languages.
Java did not get the memo. Since the char type is fixed at 16 bits, it uses surrogates to encode everything outside the BMP, regardless of the encoding.
It was really `encodeURIComponent` that didn't handle it gracefully.
If you just type this into the console (surrogate pair for cowboy smiley face emoji), you see it encodes it ("%F0%9F%A4%A0"):
encodeURIComponent("\uD83E\uDD20")
If you give it an invalid surrogate pair, it will throw an actual error:
encodeURIComponent("\uDD20\uD83E")
Before I'd looked that up I was going to say: "don't allow an invalid Unicode string to exist at all" feels like a separate/bigger problem to me than "handle it fine" when one does get created. To the extent that I can hand JavaScript an invalid combination of code units in a variety of other scenarios, returning a � felt fine.
e.g.
// valid
String.fromCodePoint(0xd83e, 0xdd20)
// invalid, but "�" is ... fine?
String.fromCodePoint(0xdd20, 0xd83e)
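If you do want `encodeURIComponent` not to throw, one option (assuming a runtime with the ES2024 String.prototype.isWellFormed / toWellFormed methods) is to scrub lone surrogates first:

const bad = "\uDD20\uD83E";
console.log(bad.isWellFormed());                     // false
console.log(encodeURIComponent(bad.toWellFormed())); // "%EF%BF%BD%EF%BF%BD" (two U+FFFDs)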
Emojis are sequences of Unicode codepoints producing a single grapheme. Splitting in the middle of a grapheme will produce two valid strings, but with some funky half-baked emoji. So for a text editor it makes sense to split at grapheme boundaries.
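A quick sketch of both halves of that claim, in runtimes that support Intl.Segmenter:

const family = "\u{1F468}\u200D\u{1F469}\u200D\u{1F467}"; // 👨‍👩‍👧, five scalars, one grapheme
console.log([...family].slice(0, 2).join(""));            // "👨\u200D": a valid string, half-baked emoji

const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
console.log([...seg.segment(family)].map(x => x.segment)); // ["👨‍👩‍👧"]: one unsplittable grapheme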
21-bit, actually. It was supposed to be 32-bit, but UTF-16 caps out at 21-bit, so they lopped eleven bits of potential from Unicode (and UTF-8, so no more six-byte encoding).
> at some point before Unicode
No, in the early days of Unicode.
> run length encodes
Um… what? RLE is a data compression thing, UTF-16 has nothing to do with it.
Although, conveniently, this means that UTF-8 bytes 0xF8 through 0xFF are always nonsense, so the third-party Rust type `ColdString` uses leading bytes 0xF8 through 0xFF in its 8 bytes of representation to indicate "I am an inline UTF-8 string, but the UTF-8 starts in the next byte with a total length of N bytes", where N = byte - 0xF8.
This leaves the continuation marker bits alone, so ColdString can use those in that front byte to indicate "I am not actually inline data, I'm a pointer; rotate me so these indicator bits are my LSB and zero them out to make me a 4-byte-aligned pointer".
Which leaves all other 8-byte values for valid UTF-8 strings, which all begin with either ASCII or a byte between 0xC2 and 0xF4 inclusive.
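A quick check of those byte-range invariants (this is just verifying the UTF-8 facts, not the ColdString layout):

for (const s of ["hello", "héllo", "日本語", "\u{1F920}"]) {
  const bytes = new TextEncoder().encode(s);
  console.log(
    bytes.every(b => b < 0xF8),                                 // no byte is ever 0xF8..0xFF
    bytes[0] <= 0x7F || (bytes[0] >= 0xC2 && bytes[0] <= 0xF4)  // first byte is ASCII or 0xC2..0xF4
  );
}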
> 21-bit, actually
Less than that. https://en.wikipedia.org/wiki/Code_point#In_character_encodi...:
“The Unicode code space is divided into seventeen planes (the basic multilingual plane, and 16 supplementary planes), each with 65,536 (= 2¹⁶) code points. Thus the total size of the Unicode code space is 17 × 65,536 = 1,114,112”
That makes it log(1,114,112)/log(2) bits, which is about 20.09.
(https://www.unicode.org/versions/Unicode17.0.0/ assigns 159,801 of them to characters)
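The arithmetic, for anyone who wants to check it:

console.log(17 * 65536);         // 1114112 code points
console.log(Math.log2(1114112)); // ≈ 20.087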
I would argue that Unicode v2 onward, with the Unicode Consortium and ISO/IEC working together (circa 1991), is what anybody knows as Unicode: codepoints 0 to 1_114_111, easily manipulated as a 32-bit value.
I meant variable-length encoding; RLE indeed encodes a number of successive repetitions.