When I shared computers with my parents I had to switch languages back and forth all the time. This helped me learn English rather quickly, but I find it a huge accessibility and software-design issue.
If your program depends on letter cases, that is a badly designed program, period. If a language ships a toUpper or toLower function without a mandatory language field, it is badly designed too. The only slightly better option is making toUpper and toLower ASCII-only and throwing an error for any other character set.
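Java shows the trap nicely (a minimal sketch, not anyone's production code; Locale.forLanguageTag("tr-TR") stands in for whatever the user's machine happens to be set to): the zero-argument overload silently consults the process-wide default, while the overload with an explicit locale behaves the same everywhere.

    import java.util.Locale;

    public class CaseDemo {
        public static void main(String[] args) {
            // Pretend the user's machine is set to Turkish.
            Locale.setDefault(Locale.forLanguageTag("tr-TR"));

            // Implicit default locale: result depends on the machine.
            System.out.println("title".toUpperCase());            // TİTLE

            // Explicit locale: deterministic everywhere.
            System.out.println("title".toUpperCase(Locale.ROOT)); // TITLE
        }
    }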
While half of the language design of C is questionable and outright dangerous, all popular OSes making its functions locale-sensitive was an avoidable mistake. Yet everybody did that. The very existence of this behavior is a reason I would like to get rid of anything GNU-based in the systems I develop today.
I don't care if Unicode releases a conversion map. Natural-language behavior should always require natural-language metadata too. Even modern languages like Rust did a crappy job of enforcing it: https://doc.rust-lang.org/std/primitive.char.html#method.to_... . Yes, it is significantly safer, but converting 'ß' to 'SS' in German definitely has gotchas too.
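That ß gotcha is easy to demonstrate in Java (a sketch; the mapping comes from Unicode's case tables, so most Unicode-aware libraries behave the same way): uppercasing changes the string's length, and the round trip does not give the original back.

    import java.util.Locale;

    public class Eszett {
        public static void main(String[] args) {
            String s = "straße";
            String upper = s.toUpperCase(Locale.ROOT);     // "STRASSE": one char became two
            String back  = upper.toLowerCase(Locale.ROOT); // "strasse", not "straße"
            System.out.println(s.equals(back));            // false: casing is not reversible
        }
    }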
Rust did the only sensible thing here. String handling algorithms SHOULD NOT depend on locale, and reusing LATIN CAPITAL LETTER I arguably was a terrible decision on the Unicode side (I know there were reasons for it, but I believe they should've bitten the bullet here), same as Han unification.
POSIX requires that many functions account for the current locale. I'm not sure why you are blaming GNU for this.
Isn't the choice of language and date and unit formats normally independent?
> Isn't the choice of language and date and unit formats normally independent?
You would hope so, but no. Quite a bit of software ties the language setting to the locale setting. If you are lucky, they will provide an "English (UK)" option (which still uses miles, and FFS, WTF is a stone!).
On Windows you can kinda select the units easily. On Linux, let me introduce you to the journey of the LC_* environment variables: https://www.baeldung.com/linux/locale-environment-variables . This doesn't mean the websites or the apps will obey them. Quite a few of them don't and just use LANGUAGE, LANG or LC_CTYPE as their setting.
My company switched to Notion this year (I still miss Confluence). It was hell until last month since they only had "English (US)" and used M/D/Y everywhere with no option to change!
It's just English (I don't know when it's US and when it's UK; it's UK for Poland), but with the date / temperature / currency / unit preferences of whatever locale you actually live in.
Maybe there are some exceptions if we speak globally, hence I'm limiting myself to Europe. But I assume it is the same deal.
It's actually a pretty good weight for measuring humans (14lb). Your weight in pounds varies from day to day but your weight in (half-)stones is much more stable.
I propose 614 stones to the rock, 131 pebbles to the stone, and 14707 grains to the pebble. Of course.
An English imperial measurement. Measurements were originally based on actual stones and were mainly used for weighing agricultural items such as animal meat and potatoes. We also used tons and pounds before we adopted the metric system of Europe.
It wasn’t a mistake for local software that is supposed to automatically use the user’s locale. It’s what made a lot of local software usefully locale-sensitive without the developer having to put much effort into it, or even necessarily be aware of it. It’s the reason why setting the LC_* environment variables on Linux has any effect on most software.
The age of server software, and software talking to other systems, is what made that default less convenient.
There are a few fundamental problems with it:
1. The locale APIs weren't designed very well, and things were added over the years that do not play nice with them.
So, as an example, what should `int toupper(int c)` return? (By the way, the parameter `c` is really an unsigned char; if you pass anything but a single byte here, you get undefined behavior.) What if you're using a multibyte encoding? You only get one byte back, so that doesn't really help there either.
Many of the functions were clearly designed for the "1 character = 1 byte" world, which is a key assumption of all of these APIs. That's fine if you're working with ASCII, but it blows up as soon as you change locales.
And even then, it creates problems wherever you try to use it. Say I have a "shell" where all of the commands are internally stored as uppercase, but you want to be compatible. If you allow anything outside of ASCII with locales, you can't just store the command list in uppercase form, because the commands won't match when you do a string comparison with the obvious function for it (strcmp). You have to use strcoll instead, and sometimes you still might not get a match for multibyte encodings.
2. The locale is global state.
The worst part is that it's actually global state (not even faux-global state like errno). This means it's wildly thread-unsafe: you can have thread 1 running toupper(x) while another thread, possibly in a completely different library, calls setlocale (as many library functions do, to guard against the semantics of a lot of standard library functions changing unexpectedly). And boom, instant undefined behavior, with basically nothing you could reasonably do about it. You'll probably get something out of it, but the pieces are probably going to display weirdly unless your users are from the US, where the C locale is pretty close to the US locale.
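(The global-default footgun isn't unique to C, for what it's worth. Here's a minimal Java sketch of the analogous hazard; Java's default locale is process-global too, though at least mutating it is not undefined behavior, you just silently get different strings:)

    import java.util.Locale;

    public class GlobalDefault {
        public static void main(String[] args) {
            Locale.setDefault(Locale.US);
            System.out.println("quit".toUpperCase()); // QUIT

            // Imagine a library, possibly on another thread, swapping the default...
            Locale.setDefault(Locale.forLanguageTag("tr-TR"));

            // ...and the very same call now produces a different string.
            System.out.println("quit".toUpperCase());                // QUİT
            System.out.println("quit".toUpperCase().equals("QUIT")); // false
        }
    }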
This means any of the functions in this list[1] is potentially a bomb:
> fprintf, isprint, iswdigit, localeconv, tolower, fscanf, ispunct, iswgraph, mblen, toupper, isalnum, isspace, iswlower, mbstowcs, towlower, isalpha, isupper, iswprint, mbtowc, towupper, isblank, iswalnum, iswpunct, setlocale, wcscoll, iscntrl, iswalpha, iswspace, strcoll, wcstod, isdigit, iswblank, iswupper, strerror, wcstombs, isgraph, iswcntrl, iswxdigit, strtod, wcsxfrm, islower, iswctype, isxdigit.
And there are some important ones in there too, like strerror. Searching through GitHub as a random sample, it's not uncommon to see these functions being used[2], and really, would you expect `isdigit` to be thread-unsafe?
It's a little better with POSIX, as it defines a bunch of "_r" variants of functions like strerror which at least give some thread safety (and uselocale at least is a thread-local variant of setlocale, which lets you safely do the whole "switch to the C locale around library calls" dance). But Windows doesn't support uselocale, so you have to use _configthreadlocale instead.
It also creates hard-to-trace bug reports. Saying you only support ASCII or whatever is, well, not great today, but it's at least somewhat understandable, and is commonly seen as the lowest common denominator for software. Sure, ideally we'd all use byte strings where we don't care and UTF-8 where we actually want to work with text (and maybe UTF-16 on Windows for certain things), but that's just a feature that doesn't exist; whereas memory corruption when you do something with a string, but only for people in a certain part of the world in certain circumstances, is not a great user experience, or developer experience for that matter.
The thing is, I actually like C in a lot of ways. It's a very useful programming language and has incredible importance even today and probably for the far future, but I don't really think the locale API was all that well designed.
[1]: Source: https://en.cppreference.com/w/c/locale/setlocale.html
[2]: https://github.com/search?q=strerror%28+language%3AC&type=co...
Irish English locale uses a dot.
For what it's worth, I think almost all European keyboard layouts have key combos for € and $ defined (many have £ as well), while on en_US you can only type $ (without messing with settings). Europe of course has more currencies than just €, but they use two-letter abbreviations instead of a special symbol.
(The Polish Ł is typically not easily typable on non-Polish keyboards.)
There is a deeper bug within Unicode.
The Turkish letter TURKISH CAPITAL LETTER DOTLESS I is represented as the code point U+0049, which is named LATIN CAPITAL LETTER I.
The Greek letter GREEK CAPITAL LETTER IOTA is represented as the code point U+0399, named... GREEK CAPITAL LETTER IOTA.
The relationship between the Greek letter I and the Roman letter I is identical in every way to the relationship between the Turkish letter dotless I and the Roman letter I. (Heck, the lowercase form is also dotless.) But lowercasing works on GREEK CAPITAL LETTER IOTA because it has a code point to call its own.
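You can watch this asymmetry directly from Java (a small sketch; Locale.ROOT is the language-neutral locale): iota lowercases correctly with no language information at all, while the shared Latin I needs to know whether the text is Turkish.

    import java.util.Locale;

    public class Iota {
        public static void main(String[] args) {
            // Iota has its own code point (U+0399), so no locale is needed.
            System.out.println("Ι".toLowerCase(Locale.ROOT)); // ι (U+03B9)

            // Latin I (U+0049) is shared, so the answer depends on the language.
            System.out.println("I".toLowerCase(Locale.ROOT));                    // i
            System.out.println("I".toLowerCase(Locale.forLanguageTag("tr-TR"))); // ı
        }
    }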
Should iota have its own code point? The answer to that question is "no": it is, by definition, drawn identically to the ASCII I. But Unicode has never followed its principles. This crops up again and again and again, everywhere you look. (And, in "defense" of Unicode, it has several principles that directly contradict each other.)
Then people come to rely on behavior that only applies to certain buggy parts of Unicode, and get messed up by parts that don't share those particular bugs.
One important goal of Unicode is to be able to convert from existing character sets to Unicode (and back) without having to know the language of the text that is being converted. If they had invented a separate code point for I in Turkish, then when converting text from existing ISO character encodings [0], you’d have to know whether the text is Turkish or English or something else, to know which Unicode code point to map the source “I” into. That’s exactly what Unicode was designed to avoid.
[0] https://en.wikipedia.org/wiki/ISO/IEC_8859-7
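To make that concrete, here's a Java sketch using ISO 8859-9 (Latin-5, the Turkish sibling of Latin-1; standard JDKs ship this charset): the shared byte 0x49 decodes to the same U+0049 that English text uses, no language knowledge required, while the distinctly Turkish letters have bytes of their own.

    import java.nio.charset.Charset;

    public class Latin5 {
        public static void main(String[] args) {
            // 0x49 -> U+0049 (shared I), 0xDD -> İ (U+0130), 0xFD -> ı (U+0131)
            byte[] bytes = { 0x49, (byte) 0xDD, (byte) 0xFD };
            String s = new String(bytes, Charset.forName("ISO-8859-9"));
            System.out.println(s); // Iİı
        }
    }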
> in "defense" of Unicode, it has several principles that directly contradict each other
Unicode wants to do several things, and they aren't mutually compatible. It is premised on the idea that you can be all things to all people.
> It’s not a bug, it’s a feature.
It is a bug. It directly violates Unicode's stated principles. It's also a feature, but that won't make it not a bug.
Great. So now we have to know the locale for handling case conversion for probably centuries to come, but it was totally worth it to save a bit of effort in the relatively short transition phase. /s
I believe that even addition of emojis was completely unnecessary despite the pressure from Japanese telecoms. Today's landscape of messengers only confirms that.
ISO (or RFC....) date time, UTF-8 default (maybe also an alternative with ISO 8859-1), decimal point in numbers and _ for thousands, metric paper / A4, ..., Unicode neutral collation
but keeps US-English language
I'm not a native English speaker btw. I learned it as I was learning programming as a kid 20 years ago
If you only work in English, you will test in English and avoid use cases like the one described in the article.
Did you know that many towns and streets in Canada have a ' in their name? And that many websites reject any ' in their text fields because they think it's SQL injection?
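(The proper fix is to treat the apostrophe as data instead of banning it. A minimal JDBC sketch, assuming an open Connection conn and a hypothetical addresses table:)

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class SafeInsert {
        // The driver handles quoting; the apostrophe is just another character.
        static void insertStreet(Connection conn, String street) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO addresses (street) VALUES (?)")) {
                ps.setString(1, street);
                ps.executeUpdate();
            }
        }
        // Usage: insertStreet(conn, "L'Anse-Saint-Jean"); // no injection risk
    }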
(I'm sure there's a good reason, but I find it odd that compiler message tags are invariably uppercase, yet in this problem code they lowercased the tag to do a lookup from an enum of lowercase names. Why isn't the enum uppercase, like the things you're going to look up?)
Another question: why does the log record the string you intended to look up, instead of the string you actually did look up?
The bug here was the default Java implementation that Kotlin uses on JVM. On kotlin-js both toLowerCase() and lowercase() do exactly the same thing. Also, the deprecation mechanism in Kotlin is kind of cool. The deprecated implementation is still there and you could use it with a compiler flag to disable the error.
@Deprecated("Use lowercase() instead.", ReplaceWith("lowercase(Locale.getDefault())", "java.util.Locale"))
@DeprecatedSinceKotlin(warningSince = "1.5", errorSince = "2.1")
@kotlin.internal.InlineOnly
public actual inline fun String.toLowerCase(): String = (this as java.lang.String).toLowerCase()
/**
* Returns a copy of this string converted to lower case using Unicode mapping rules of the invariant locale.
*
* This function supports one-to-many and many-to-one character mapping,
* thus the length of the returned string can be different from the length of the original string.
*
* @sample samples.text.Strings.lowercase
*/
@SinceKotlin("1.5")
@kotlin.internal.InlineOnly
public actual inline fun String.lowercase(): String = (this as java.lang.String).toLowerCase(Locale.ROOT)
Also, this is the last remaining major system-dependent default in Java. They made strict floating point the default in 17, and UTF-8 the default encoding in 18; only the locale remains. I hope they make ROOT the default in an upcoming version.
FWIW, in the Scala.js implementation, we've been using UTF-8 and ROOT as the defaults forever.
I have no idea what `Locale.ROOT` refers to, and I'd be worried that it's accidentally the same as the system locale or something, exactly the sort of thing that will unexpectedly change when a Turkish-speaker uses a computer or what have you.
The API docs clearly specify that Locale.ROOT “is regarded as the base locale of all locales, and is used as the language/country neutral locale for the locale sensitive operations.”
Isn't it kind of strange to say that Locale.US is too US-centric, and therefore we'll invent a new, fictitious locale, the contents of which are all the US defaults, but which we'll call "the base locale of all locales"? That somehow seems even more US-centric to me than just saying Locale.US.
Setting the locale as Locale.US is at least comprehensible at a glance.
import java.text.DateFormat;
import java.text.NumberFormat;
import java.util.Date;
import java.util.Locale;

DateFormat dateFormat = DateFormat.getDateInstance(DateFormat.DEFAULT, Locale.ROOT);
System.out.println(dateFormat.format(new Date()));
dateFormat = DateFormat.getTimeInstance(DateFormat.DEFAULT, Locale.ROOT);
System.out.println(dateFormat.format(new Date()));
NumberFormat numberFormatter = NumberFormat.getNumberInstance(Locale.ROOT);
System.out.println(numberFormatter.format(12.34));
NumberFormat currencyFormatter = NumberFormat.getCurrencyInstance(Locale.ROOT);
System.out.println(currencyFormatter.format(12.34));
2025 Oct 13
10:12:42
12.34
¤ 12.34
Even POSIX C is less American than I expected, with a metric paper size and no currency symbol defined (¤ isn't in ASCII). Only the American date format.

As the article demonstrates, the error manifests in a completely inscrutable way. But once I saw the bug from a couple of users with Turkish-sounding names, I zeroed in on it. And cursed a few times under my breath whoever messed up that character table so badly.
map[name] = "box${primitiveType.javaKeywordName.capitalize(Locale.US)}"
[…] In September 2020, nearly a year after the coroutines bug had been fixed and forgotten
[…]
When they came to fix this issue, the Kotlin team weren’t leaving anything to chance. They scoured the entire compiler codebase for case-conversion operations—calls to capitalize(), decapitalize(), toLowerCase(), and toUpperCase().
Bloody late, I would say. If something like this happened in OpenBSD, I think they would have done that, and more (the article doesn't mention tooling to detect the introduction of new similar bugs, or adding warnings to documentation), at the first spotting of such a bug.
How come no reviewer mentioned such things when the first fix was reviewed?
Also, why are they using Locale.US, and not Locale.ROOT (https://docs.oracle.com/javase/8/docs/api/java/util/Locale.h...)?
Really, this bug is nothing but programmers failing to take into account that not everybody writes in English.
I don't know... I understand the history and reasons for this capitalization behavior in Turkish, and my native language isn't English either; it had to use a lot of strange encodings before the introduction of UTF-8.
But messing around with the capitalization of ASCII (code points <= 127) is a risky business, in my opinion. These code points are explicitly named:
"LATIN CAPITAL LETTER I" "LATIN SMALL LETTER I"
and requiring them to not match exactly during capitalization/lowercasing sounds very risky.
This bug is the exact opposite of that. The program would have worked fine had it used pure ASCII transforms (±0x20); it was the use of library functions that did in fact take Turkish into account that caused the problem.
More broadly, this is not an easy issue to solve. If a Turkish programmer writes code, what is the expected behaviour for metaprogramming and compilers? Are the function names in English or Turkish? What about variables, object members, struct fields? You could have one variable name that references some government ID number using its native Turkish name, right next to another variable name that uses the English "ID". How does the compiler know what locale to use for which symbol?
Boiling all of this down to 'just be more considerate' is not actually constructive or actionable.
The whole problem is that the compiler has no idea about the locale of any strings in the system, that's why it's on the programmer to specify them.
Lowercasing/uppercasing a string takes an (infuriatingly) optional locale parameter, and the moment that gets involved, you should think twice before using it for anything other than user data processing. I would happily see Oracle deprecate all string operations lacking a locale in the next version of Java.
I cannot square your earlier assertion that we should be more mindful "that not everybody writes in English", with your current assertion that all code must only ever contain English, for simplicity's sake. Either is a cogent position on its own, just not both at the same time.
This bug arose because the programmers made incorrect assumptions about the result of a case-changing operation. If you impose English case rules on Turkish symbol names, this exact bug would simply arise in reverse.
More problematically, as I alluded to earlier, Turkish code may contain a mix of languages. It may, for example, be using a DSL to talk to a database with fields named in Turkish, as well as making calls to standard library functions named in English. Which half of the code is your proposed invariant locale going to break?
It's like they decided that the uppercase of "a" is "E" and the uppercase of "e" is "A".
There is no reason to assume that the English representation is in general "correct", "standard", or even "first". The modern script for Turkish was adopted around the 1920's, so you could argue perhaps that most typewriters presented a standard that should have been followed. However, there was variation even between different typewriters, and I strongly suspect that typewriters weren't common in Turkey when the change was made.
Not quite. In English, 'i' and 'I' are two allographs of one grapheme, corresponding to many phonemes, based on context. (Using linguistic definitions here, not compsci ones.) The 'i's in 'kit' and 'kite' stand for different phonemes, for example.
> There is no reason to assume that the English representation is in general "correct", "standard", or even "first".
Correct, but the I/i allography is not exclusive to English. Every Latin script functions that way, other than Turkish and Turkish-derived scripts.
No one is saying Turkish cannot break from that convention - they can feel free to do anything they like - but the resulting issues are fairly predictable, and their adverse effects fall mainly on Turkish speakers in practice, not on the rest of us.
I don't think it's fair to call it predictable. When this convention was chosen, the problem of "what is the uppercase letter of I" was always bound to the context of language. Now it suddenly isn't. Shikata ga nai ("it can't be helped"). It wasn't even an explicit assumption that can be reflected upon; it was an implicit one that just happened.
You're right, apologies my linguistics is rusty and I was overconfident.
> Correct, but the I/i allography is not exclusive to English. Every Latin script functions that way, other than Turkish and Turkish-derived scripts.
I think my main argument is that the importance of standardizing to i/I was much less obvious in the 1920's. The benefits are obvious to us now, but I think we would be hard-pressed to predict this outcome a priori.
It does in literally any language using a latin alphabet other than Turkish.
Also, we don't have serifs in our I. It's just a straight line. So, it's not even related to your Ii pair in English. You can't dictate how we write our straight lines, can you?
The root cause of the problem is in the implementation and standardization of computer systems. Computers were originally designed with only the English alphabet in mind, then patched to support other languages over time, poorly. Computers should obey the language rules, not the other way around.
The assumption that letters come in universal pairs is wrong. That assumption is the bug. You can’t assume that capitalization rules must be the same for every language implementing a specific alphabet. Those rules may change for every language. They do.
And not just capitalization rules. Auto complete, for instance, should respect the language as well. You can’t “correct” a French word to an English word. Localization is not optional when dealing with text.
That depends on the font.
>So, it's not even related to your Ii pair in English.
Modern Turkish uses the Latin script, of course it's related.
>You can't dictate how we write our straight lines, can you?
No, I can't. I just want to understand why the Turks decided to change this letter, and this letter only, from the rest of the standard Latin script/diacritics.
Because Turkish uses a phonetic alphabet suited to Turkish sounds, based on Latin letters. There are 8 vowels, which come in two subsets:
AIOU and EİÖÜ.
When you pair them up with zip(), the pairs are phonetically related sounds but totally different letters at the same time. Turkish also uses suffixes for everything, and the vowels in these suffixes sometimes change between these two subgroups.
This design lets me write any word uniquely and almost correctly using the Turkish alphabet.
Dis dizayn lets mi rayt ani vörd yüniğkli end olmost koreğtkli yuzing dı törkiş alfabet.
Ö is the dotted version of O. İ is the dotted version of I. Related but different. Their lowercase versions are logically (not by historical convention): öoiı. So we didn't just want to change I, and only I. We just added dots. Since no other language has an Oö pair, our OoÖö vowels didn't get the same attention. Same for our Ğğ and Şş.
I hope this answers the question.
Computers are originally designed for no alphabet at all. They only have two symbols.
ASCII is a set of operating codes that includes instructions to physically move different parts of a teleprinter. It was already a mistake when it was used for computer displays.
Where is it broken in German script? Do you mean small ß and capital ẞ?
Ö and ü were already borrowed from the German alphabet. The umlaut-added variants 'ö' and 'ü' have a similar effect on 'o' and 'u' respectively: they bring a back vowel to the front. See: https://en.wikipedia.org/wiki/Vowel . Similarly, removing the dots brings them back.
Turkish already had the i sound and its back variant, which is a schwa-like sound: https://en.wikipedia.org/wiki/Close_back_unrounded_vowel . It has the same relation in the IPA as 'ö' has to 'o' and 'ü' has to 'u'. Since the makers of the Turkish variant of the Latin alphabet had the rare chance of building a regular pronunciation system with the state of the language, and since removing the dots had the effect of making a front vowel a back vowel, they simply copied this feature from ö and ü to i:
Just remove the dots to make it a back vowel! Now we have ı.
When it comes to capitalization, ö becomes Ö and ü becomes Ü. So it is only logical to make the capital of i İ, and the lowercase of I ı.
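(The scheme really is perfectly regular; a small Java sketch using the tr-TR locale:)

    import java.util.Locale;

    public class TurkishPairs {
        public static void main(String[] args) {
            Locale tr = Locale.forLanguageTag("tr-TR");
            System.out.println("o".toUpperCase(tr)); // O
            System.out.println("ö".toUpperCase(tr)); // Ö
            System.out.println("ı".toUpperCase(tr)); // I  (dotless stays dotless)
            System.out.println("i".toUpperCase(tr)); // İ  (dotted stays dotted)
        }
    }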
Of course the Latin capital I is dotless, because originally the lowercase Latin "i" was also dotless. The dot was added later to make text more legible.
Does that reflect the Turkish terminology? Ordinarily you would call o and u "high" while a and e are "low". The distinction between o/u and ö/ü is the other dimension: o/u are "back" while ö/ü are "front".
Yes. The Turkish terms are "kalın ünlü" and "ince ünlü". They literally translate to "thick vowel"/"thin vowel" (or "low pitch vowel"/"high pitch vowel") in this context.
There is a second vowel harmony rule [1] (called the lesser vowel harmony) that makes the distinction you pointed out. The letters a/e/ı/i are called flat vowels, and o/ö/u/ü are called round vowels.
[1] https://georgiasomethingyouknowwhatever.wordpress.com/2015/0...
The latinization reform of the Turkish language predates computers, and it was hard to foresee the woes that future generations would have with that choice.
Note that the vowel /i/ cannot umlaut, because it's already a front vowel. The ï you cite comes from French, where the two dots represent diaeresis rather than umlaut. When umlaut is a feature of your language, combining the notation like that isn't likely to be a good idea.
A better solution would have been to leave i/I as they are (similar to j/J), and introduce a new lowercase/uppercase letter pair for "ı", such as Iota (ɩ/Ɩ).
In C#, setting every letter to its uppercase form is ToUpper, and I think capitalise is perfectly reasonable for setting the first character. I'm not sure I've ever referred to uppercasing a string as capitalising it.
> The code is part of a class named CompilerOutputParser, and is responsible for reading XML files containing messages from the Kotlin compiler. Those files look something like this:
"Oh."
"... Seriously?"
As if I didn't hate XML enough already.
>match
>[Definition: (Of strings or names:) Two strings or names being compared are identical. Characters with multiple possible representations in ISO/IEC 10646 (e.g. characters with both precomposed and base+diacritic forms) match only if they have the same representation in both strings. No case folding is performed.]
I'm quite fond of XML myself, and this is not an issue in XML.
Unrelated, but a month ago I found a weird behaviour where in a kotlin scratch file, `List.isEmpty()` is always true. Questioned my sanity for at least an hour there... https://youtrack.jetbrains.com/issue/KTIJ-35551/
Ramazan Çalçoban sent his estranged wife Emine the text message:
Zaten sen sıkışınca konuyu değiştiriyorsun.
"Anyhow, whenever you can't answer an argument, you change the subject."
Unfortunately, what she thought he wrote was:
Zaten sen sikişınce konuyu değiştiriyorsun.
"Anyhow, whenever they are fucking you, you change the subject."
This led to a fight in which the woman was stabbed and died, and the man committed suicide in prison.

https://gizmodo.com/a-cellphones-missing-dot-kills-two-peopl...
You can also change the default culture to the invariant culture and save all the headaches. Save the localized number conversion and such for situations where you actually need to interact with localized values.
Though linters will routinely catch this particular issue FWIW.
1. Simple one-to-one mappings -- E.g. `T` to `t`. These are typically the ones handled by `lower()` or similar methods, as they work on single characters and so can modify a string in place (the length of the string doesn't change).
2. More complex one-to-many mappings -- E.g. German `ß` to `ss`. These are covered by functions like `casefold()`. You can't modify the string in place so the function needs to always write to a new string buffer.
3. Locale-specific mappings -- This is what this bug is about. In Turkish `I` maps to `ı`, whereas in other languages/locales it maps to `i`. You can only implement this by passing the locale to the case function, irrespective of whether you are also doing (1) or (2).
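All three categories are visible in Java's standard API, for instance (a minimal sketch; `lower()` and `casefold()` above are Python's names, Java routes the one-to-many and locale-specific cases through the String overloads):

    import java.util.Locale;

    public class ThreeKinds {
        public static void main(String[] args) {
            // 1. One-to-one: per character, length never changes.
            System.out.println(Character.toLowerCase('T')); // t

            // 2. One-to-many: 'ß' uppercases to "SS", so only the String
            //    version can represent the result; a char version cannot.
            System.out.println("ß".toUpperCase(Locale.ROOT)); // SS

            // 3. Locale-specific: same input, different locale, different output.
            System.out.println("I".toLowerCase(Locale.ROOT));                    // i
            System.out.println("I".toLowerCase(Locale.forLanguageTag("tr-TR"))); // ı
        }
    }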
See also: https://stackoverflow.com/questions/19030948 where someone sought the locale-sensitive behaviour.
Defaulting to ROOT makes a lot of sense for internal constants, like in the example in this article, but defaulting to ROOT for everything just exposes the problems that caused Sun to use the user locale by default in the first place.