The exploits largely revolve around supplying an unusual code point that “vibes” into, say, a slash, a hyphen, or a quote. These code points are typically evaluated one way (correct full Unicode handling) inside a modern programming language, but when passed to shell commands or other Win32 APIs they get vibes-downed to the ANSI code page. Crucially, this happens after you've checked them, at the point where you've already passed control.
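A minimal sketch of that downgrade (assuming a CP1252 “ANSI” code page; the exact mapping depends on the system locale):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* U+FF02 FULLWIDTH QUOTATION MARK: validates as harmless text,
       but has no representation in classic ANSI code pages. */
    const wchar_t wide[] = L"\uFF02";
    char narrow[8];

    /* Default flags: Best-Fit is ON, so U+FF02 silently becomes '"' (0x22). */
    WideCharToMultiByte(CP_ACP, 0, wide, -1, narrow, sizeof narrow, NULL, NULL);
    printf("best-fit:    0x%02X\n", (unsigned char)narrow[0]);

    /* WC_NO_BEST_FIT_CHARS: unmappable characters become the default '?'
       instead of a dangerous lookalike. */
    WideCharToMultiByte(CP_ACP, WC_NO_BEST_FIT_CHARS, wide, -1,
                        narrow, sizeof narrow, NULL, NULL);
    printf("no best-fit: 0x%02X\n", (unsigned char)narrow[0]);
    return 0;
}
```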
To quote the curl maintainer, “curl is a victim” here — but who is the culprit? It seems certain that curl will, sooner or later, be used by some server to automatically retrieve user-supplied data. When that server handles user input one way for validation and another way when passing it to system libraries, you're going to have a problem.
It seems to me that the solution might be to provide an opt-out of “best fit” munging in the Win32 space, but I'm not a Windows guy, so I speculate. At least then open-source providers could just add the opt-out to best practices, and be done with the many terrible problems that things like a Unicode wide variant of “ or \ deliver to them.
And of course even if you do that, you’ll interact with officially shipped APIs and software that has not opted out.
I'm not sure why the non-Unicode APIs are still so commonly used. I can't imagine it's out of a desire to support Windows 98 or Windows 2000.
[1] https://learn.microsoft.com/en-us/windows/apps/design/global...
An explanation I heard ~10 years ago is that doing so exposed bugs in the CRT, and nobody wanted to fix them.
What I've heard is that the issue is not with the CRT, but with applications using fixed-size byte buffers. IIRC, converting from UTF-16 to any of the traditional Windows code pages requires at most two bytes for each UTF-16 code unit, while the UTF-8 "code page" can need three bytes. That would lead to buffer overflows in these legacy applications if the "ANSI" code page was changed to UTF-8.
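For a concrete instance of the arithmetic: U+20AC (the euro sign) is one UTF-16 code unit, one byte in CP1252, but three bytes in UTF-8. A sketch, including the sizing pattern that avoids the fixed-buffer assumption:

```c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const wchar_t euro[] = L"\u20AC";  /* one UTF-16 code unit */

    /* Ask each conversion how many bytes it actually needs. */
    int cp1252 = WideCharToMultiByte(1252, 0, euro, 1, NULL, 0, NULL, NULL);
    int utf8   = WideCharToMultiByte(CP_UTF8, 0, euro, 1, NULL, 0, NULL, NULL);
    printf("CP1252: %d byte(s), UTF-8: %d byte(s)\n", cp1252, utf8); /* 1 vs 3 */

    /* The safe pattern: query, allocate, convert; never "len * 2". */
    int needed = WideCharToMultiByte(CP_UTF8, 0, euro, -1, NULL, 0, NULL, NULL);
    char *buf = malloc((size_t)needed);
    if (buf) {
        WideCharToMultiByte(CP_UTF8, 0, euro, -1, buf, needed, NULL, NULL);
        free(buf);
    }
    return 0;
}
```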
You can read the MinGW-w64 source to see how many hacks they had to do to make it work.
It's very easy to make a Win32 program without the UCRT filesystem APIs so long as you don't mind being platform-specific (or making your own cross-platform wrappers).
Using C++ is another matter.
So now it seems even if you think your app is fully Unicode, still do this just in case? :)
Windows filenames are not guaranteed to be valid UTF-16, so A functions with the UTF-8 code page can still fail to access some files. If you want 100% compatibility, you need to realize that Windows is a WTF-16 system and write your own compatibility wrappers for the W functions under that assumption.
You can use W functions and convert the WTF-16 strings you get to WTF-8 and use that in std::string without problems.
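A sketch of the conversion half of such a wrapper (a hand-rolled WTF-8 encoder; the names are mine): it is ordinary UTF-8 except that unpaired surrogates are encoded rather than rejected, so any Windows filename round-trips.

```c
#include <stddef.h>
#include <stdint.h>

/* Encode one code point (or lone surrogate) as generalized UTF-8. */
static size_t put_cp(uint32_t cp, unsigned char *out)
{
    if (cp < 0x80) {
        out[0] = (unsigned char)cp;
        return 1;
    }
    if (cp < 0x800) {
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    }
    if (cp < 0x10000) {  /* includes lone surrogates: that's the WTF-8 part */
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    }
    out[0] = (unsigned char)(0xF0 | (cp >> 18));
    out[1] = (unsigned char)(0x80 | ((cp >> 12) & 0x3F));
    out[2] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
    out[3] = (unsigned char)(0x80 | (cp & 0x3F));
    return 4;
}

/* WTF-16 -> WTF-8. 'out' needs at most 3 bytes per input unit. */
size_t wtf16_to_wtf8(const uint16_t *in, size_t n, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i++) {
        uint32_t cp = in[i];
        if (cp >= 0xD800 && cp <= 0xDBFF && i + 1 < n &&
            in[i + 1] >= 0xDC00 && in[i + 1] <= 0xDFFF) {
            /* Valid surrogate pair: combine into a supplementary code point. */
            cp = 0x10000 + ((cp - 0xD800) << 10) + (in[i + 1] - 0xDC00);
            i++;
        }
        /* Unpaired surrogates fall through and are encoded as-is. */
        o += put_cp(cp, out + o);
    }
    return o;
}
```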
On top of that, how many new gotchas do these “modern” Windows functions hide, and how many fix cycles would be required to polish them to the required level?
Yes, it would have required numerous fix cycles, but curl in my mind is such a polished product that they would have bitten the bullet.
However, in neither case is this a shortcoming of curl. They'd be responding to a complicated problem caused by the platform they're running on.
Because they care. That's it.
> I've never paid for curl.
I'm sure the people who develop it aren't in it for money and fame; they're just doing what they like. However, curl has commercial support contracts if you need one.
> Who even develops it?
Daniel Stenberg et al. Daniel can be found at https://daniel.haxx.se.
> Sounds like a thankless job to fix obscure worstfit bugs.
It may look thankless, but it's not. curl is critical infrastructure at this point. While https://xkcd.com/2347/ applies squarely to cURL, it's actually nice that the lead developer is making some money out of his endeavor.
They fix bugs because they simply want their product to be better, if I were to take a guess. Like, I'm sure curl's contributors have worked on OS-specific problems before, and this wouldn't be the last.
> to fix obscure worstfit bugs.
Again, my premise is "if there are actually common problems caused by it". This specific bug doesn't sound like that, at least not for now.
Even argv is affected on Windows. That's part of the C and C++ standard, not really a Windows API. Telling all C/C++ devs they need to stop using argv is kind of a tough ask.
Oh, and wmain is a VisualC thing. It isn't found on other platforms. Not standardised.
Also, it's not UCS-2. Also not UTF-16. Windows uses WTF-16 internally and if you want 100% compatibility that's what you need to target.
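For reference, a sketch of sidestepping the narrow argv entirely on Windows: GetCommandLineW gives you the raw WTF-16 command line, and CommandLineToArgvW splits it in-process (link against Shell32).

```c
#include <windows.h>
#include <shellapi.h>   /* CommandLineToArgvW */

int main(void)
{
    int wargc;
    /* The wide command line is untouched by any ANSI/Best-Fit conversion;
       the argv splitting happens here, inside our own process. */
    wchar_t **wargv = CommandLineToArgvW(GetCommandLineW(), &wargc);
    if (!wargv)
        return 1;
    /* ... use wargv[0..wargc-1] instead of the narrow argv ... */
    LocalFree(wargv);
    return 0;
}
```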
Nowadays, it's either for historical reasons (code written back when supporting Windows 9x was important, or even code migrated from Windows 3.x), or out of a desire to support non-Windows systems. Most operating systems use a byte-based multi-byte encoding (nowadays usually UTF-8) as their native encoding, instead of UTF-16.
> I'm not sure why the non-unicode APIs are still so commonly used
I think because the rest of the C world uses char* with utf-8, so that is what people are habituated to. Setting the ACP to CP_UTF8 would have solved a lot of problems, but I believe that's only been supported for a short period of time, bafflingly.
It wouldn't solve all encoding problems though because most Windows APIs can store/return invalid UTF-16 which you can't represent in CP_UTF8 - you'd need a CP_WTF8 for that which doesn't even exist so you have to use the W APIs and do the conversion yourself.
A lot of the uses are indirectly via standard C API functions that are effectively broken on Windows but work just well enough (i.e. work with ASCII) that their use goes unnoticed when someone ports something to Windows.
Simple: portable code meant to run on Unix (where UTF-8 is king) and Windows -> want to use UTF-8 codepage on Windows and the "A" APIs.
A build of Windows 10 released long ago did this automatically, so no adjustments are needed anymore; ~32k characters is the max....
...except for Office! It can't handle long paths. But Office has always been hacky (the title bar, for example).
I think what is also needed is some kind of linting - there is usually no need to call ANSI WinAPI functions in a modern application (unless you set the locale to UTF-8 and only use the 8-bit functions, but I don't know how well that works). I think there are also a couple of settings and headers to include to make everything "just work" - meaning argv, printf and std::cout work with UTF-8, you get no strange conversions, and you just have functions to convert between UTF-8 and UTF-16 to use WinAPI. I'm pretty sure I have a Visual Studio project lying around somewhere where it works. But all those steps necessary need to be documented and put in one place by MS.
Worse than that, even reading relevant registry keys will be faked.
Security vulnerability or not, it's a bug in curl on Windows, as it doesn't correctly handle Unicode arguments.
Microsoft should just never have created Best-Fit -- it's a disaster. If you have to lose information, use an ASCII character to denote loss of information and be done. (I hesitate to offer `?` as that character.) Or fail to spawn the process with an error indicating the impossibility of transcoding. Failure is better actually.
For "they", I mean MS could have made BestFit work as follows: if an input string contains characters not in the user's code page, then return a new string with characters replaced by with the name of that code point as assigned by the Unicode consortium (and maybe also the textual code point number U+<number>). This requires a new allocation and copies of the parts of the string not needing replacement, but loses no information and creates no security holes.
Windows is like the card game Munchkin, where a whole bunch of features can add up to a completely, unbelievably random over-powered exploit because of unintentional synergy between random bits.
I'm happy to see that they are converting the ANSI subsystem to UTF-8, which should, in theory, mitigate a lot of these problems.
I wonder if the Rust team is going to need YetAnotherFix to the process spawning API to fix this...
Of course, Rust can't control what happens on the other side of a process boundary. So if an application invoked by Rust uses ANSI APIs, then they'll have a problem. But also, that's their responsibility.
This is actually necessary because Rust cannot assume it owns the entry point. E.g. a Rust library could be called from a C++ application or in a DLL, etc. So when someone calls `std::env::args` it asks the OS directly for the arguments instead of getting them from C.
This has been Microsoft's official position since NT 3.5, if I remember correctly.
Sadly, one of the main hurdles is the way Microsoft's own C/C++ runtime library (msvcrt.dll) is implemented. Its non-standard "wide" functions like _wfopen(), _wgetenv(), etc. internally use W-functions from Win API. But the standard, "narrow" functions like fopen(), getenv(), etc., instead of using the "wide" versions and converting to-from Unicode themselves (and reporting conversion failures), simply use A-functions. Which, as you see, generally don't report any Unicode conversion failures but instead try to gloss over them using best-fit approach.
And of course, nobody who ports software, written in C, to Windows wants to rewrite all of the uses of standard functions to use Microsoft's non-portable functions because at this point, it becomes a full-blown rewrite.
As for my application, any wchar conversions being done by the runtime are a drop in the bucket compared to the actual compute.
OTOH you need ~twice as much memory / copy ~twice as much data around than if you converted to WTF-8 internally.
- The wide APIs accept and/or produce invalid UTF-16 in some places (like filesystems). There's no corresponding UTF-8 for invalid UTF-16. Meaning there are cases that lead to loss of information and that you simply cannot handle.
- You have no control over all the DLLs loaded in your process. If a user DLL loads that can't handle UTF-8 narrow APIs, you're just praying it won't break.
- Some APIs simply don't have narrow versions. Like CommandLineToArgvW() or GetFileInformationByHandleEx() (e.g., FILE_NAME_INFO). You will not avoid wide APIs by doing this if you need to use enough of the APIs; you're just going to have to perform conversions that have dubious semantics anyway (see point #1 above).
- Compatibility with previous Windows versions, obviously.
- Performance
I want to emphasize this point. From what I've heard, on Windows it's very common for DLLs from who knows where to end up loaded in your process. Not only the things you'd also find on other operating systems like the user-space component of graphics APIs like OpenGL and Vulkan, but also things like printer drivers, shell extensions, "anti-malware" stuff, and I've even heard of things like RGB LED control software injecting their DLLs into every single process. It's gotten so bad that browsers like Firefox and Chrome use fairly elaborate mechanisms to try to prevent arbitrary DLLs from being injected into their sandbox processes, since they used to be a common source of crashes.
There's WTF-8 - too bad that's not what Microsoft chose to use for their universal 8-bit codepage.
Using UTF-8 with the "A" Windows APIs is the only reasonable solution, and Microsoft needs to commit to that.
> - The wide APIs accept and/or produce invalid UTF-16 in some places (like filesystems). There's no corresponding UTF-8 for invalid UTF-16. Meaning there are cases that lead to loss of information and that you simply cannot handle.
This is also true on Unix systems as to `char`. Yes, that means there will be loss of information regarding paths that have garbage in them. And again, if you want to write code for Windows _and_ Unix, using wchar_t won't spare you this loss on Unix. So you're damned if you do and damned if you don't, so just accept this loss and say "don't do that".
> - You have no control over all the DLLs loaded in your process. If a user DLL loads that can't handle UTF-8 narrow APIs, you're just praying it won't break.
In some cases you do have such control, but if some DLL unknown to you uses "W" APIs then.. it doesn't matter because if it's unknown to you then you're not interacting with it, or if you are interacting with it via another DLL that is known to you then it's that DLL's responsibility to convert between char and wchar_t as needed. I.e., this is not your problem -- I get that other people's bugs have a way of becoming your problem, but strictly speaking it's their problem not yours.
> - Some APIs simply don't have narrow versions. Like CommandLineToArgvW() or GetFileInformationByHandleEx() (e.g., FILE_NAME_INFO). You will not avoid wide APIs by doing this if you need to use enough of the APIs; you're just going to have to perform conversions that have dubious semantics anyway (see point #1 above).
True, but these can be wrapped with code that converts as needed. This is a lot better from a portability point of view than to fork your entire code into Windows and Unix versions.
> - Compatibility with previous Windows versions, obviously.
Sigh. At some point people (companies, contractors/consultants, ...) need to put their feet down and tell the U.S. government to upgrade their ancient Windows systems.
> - Performance
The performance difference between UTF-8 and UTF-16 is in the noise, and it depends greatly on context. But it doesn't matter. UTF-8 could be invariably slower than UTF-16 and it would still be better to move Windows code to UTF-8 than to move Unix to UTF-16 or lose portability between Windows and Unix.
In case you and others had not noticed Linux has a huge share of the market on servers while Windows has a huge share of the market on laptops, which means that giving up on portability is not an option.
The advice we give developers here has to include advice we give to developers who have to write and look after code that is meant to be portable to Windows and Unix. Sure, if you're talking to strictly-Windows-only devs, the advice you give is alright enough, but if later their code needs porting to Unix they'll be sad.
The reality is that UTF-8 is superior to UTF-16. UTF-8 has won. There's just a few UTF-16 holdouts: Windows and JavaScript/ECMAScript. Even Java has moved to UTF-8. And even Microsoft seems to be heading in the direction of making UTF-8 a first-class citizen on Windows.
The problem is that you can't round-trip all filenames. CP_UTF8 doesn't solve that, it only pretends to. For a full solution you need to use the W functions and then convert between WTF-16 and WTF-8 yourself.
> At least in the context of Unix utilities portable to Windows. We are NOT going to be forking those to use wchar_t on Windows and char on Unix -that's a non-starter- and we're also not going to be switching to wchar_t on both because wchar_t is a second-class citizen on Unix.
Those aren't the only options. You (or someone) could also write your own compatibility layers for the APIs that avoid some of the problems I mentioned (e.g., by producing errors on inconvertible characters, by being compatible with former Windows versions, by not affecting other DLLs in your process, etc.)
Or you could e.g. get upstream to start caring about their users on other platforms, and play ball.
> This is also true on Unix systems as to `char`. Yes, that means there will be loss of information regarding paths that have garbage in them. And again, if you want to write code for Windows _and_ Unix, using wchar_t won't spare you this loss on Unix.
Er, no. First, if you're actually writing portable code, TCHAR is the solution, not wchar_t. Second, if you can't fork others' code, at the very least you can produce errors to avoid silent bugs (see above). And finally, "this problem also exists with char" is just wrong. In a lot of cases the problem doesn't exist as long as you're using the same representation and avoiding lossy conversion, whatever the data type is. If (say) the file path is invalid UTF, and you save it somewhere and reuse it, or pass it to some program and then have it passed back to you, you won't encounter any issues -- the data is whatever it was. The issues only come up with lossy conversions in any direction.
> if some DLL unknown to you uses "W" APIs then.. it doesn't matter because if it's unknown to you then you're not interacting with it, or if you are interacting with it via another DLL
I don't think you're understanding the problem here. Interaction is not part of the picture at all. You might not be loading the DLL yourself at all. DLLs get loaded by the OS and the user for all sorts of reasons (antiviruses, shell extensions, etc.) and they easily run in the background without anything else in the process "knowing" anything about them at all. Your program is declaring that everything in the process is UTF-8 compatible, but those DLLs might not be compatible with that, and so you're just praying that they don't use -A functions in an incompatible manner.
> Sigh. At some point people (companies, contractors/consultants, ...) need to put their feet down and tell the U.S. government to upgrade their ancient Windows systems.
USG? Ancient? These are systems less than 10 years old. We're not talking floppy-controlled nukes here.
> The performance difference between UTF-8 and UTF-16 is in the noise, and it depends greatly on context.
"Depends greatly on the context" kinda makes my point. It can turn a zero-copy program into single- or double-copy. Generally not a showstopper by any means, but it sure as heck can impact some programs. And if that program is a DLL people use - well now you can't work around. (Yes, there's a reason I listed this last. But there's a reason I listed it at all.)
> The reality is that UTF-8 is superior to UTF-16. UTF-8 has won.
The reality is Windows isn't UTF-16 and *nix isn't UTF-8, which was the crux of most of my points.
TCHAR is a Microsoftism, it's NOT portable at all.
That's akin to writing a partial C library. If MSFT makes UTF-8 as the codepage work well enough I'd rather use that.
> Or you could e.g. get upstream to start caring about their users on other platforms, and play ball.
The upstream is often not paid for this. Even if they get a PR, if the PR makes their code harder to work on they might reject it.
Microsoft has to make UTF-8 a first-class citizen.
> I don't think you're understanding the problem here. Interaction is not part of the picture at all. You might not be loading the DLL yourself at all. DLLs get loaded by the OS and the user for all sorts of reasons (antiviruses, shell extensions, etc.) and they easily run in the background without anything else in the process "knowing" anything about them at all. Your program is declaring that everything in the process is UTF-8 compatible, but those DLLs might not be compatible with that, and so you're just praying that they don't use -A functions in an incompatible manner.
You mean changing the codepage for use with the "A" functions? Any DLL that does that must go on the bonfire. There's a special place in Hell for developers who build such DLLs.
> "Depends greatly on the context" kinda makes my point. It can turn a zero-copy program into single- or double-copy. Generally not a showstopper by any means, but it sure as heck can impact some programs. And if that program is a DLL people use - well now you can't work around. (Yes, there's a reason I listed this last. But there's a reason I listed it at all.)
I'm assuming you're referring to having to re-encode at certain boundaries. But note that nothing in Windows forces or even encourages you to use UTF-16 for bulk data.
> The reality is Windows isn't UTF-16 and *nix isn't UTF-8, which was the crux of most of my points.
Windows clearly prefers UTF-16, and its filesystems generally use just-wchar-strings for filenames on disk (they don't have to though). Unix clearly prefers UTF-8, and its filesystems generally use just-char-strings on disk.
> That's akin to writing a partial C library. If MSFT makes UTF-8 as the codepage work well enough I'd rather use that.
I found out about activeCodePage thanks to developers of those compatibility layers documenting the option and recommending it over their own solutions.
> The upstream is often not paid for this. Even if they get a PR, if the PR makes their code harder to work on they might reject it
The project I work on is an MFC application stemming from 9x and early XP and abandoned for 15 years. Before I touched it it had no Unicode support at all. I'm definitely not being paid to work on it, let alone the effort to convert everything to UTF-16 when the tide seems to be going the other direction.
> Your program is declaring that everything in the process is UTF-8 compatible, but those DLLs might not be compatible with that, and so you're just praying that they don't use -A functions in an incompatible manner.
Programs much, much, much more popular than mine written by the largest companies in the world, and many programs you likely use as a developer on Windows, set activeCodePage to UTF-8. Notwithstanding the advice in the article to set it globally for all applications (and it implies it already is the default in some locales). Those DLLs will be upgraded, removed, or replaced.
And since Windows-centric developers, when porting their apps to Linux, are generally willing to accommodate Linux-specific idiosyncrasies (that's what porting is about, after all) if they care about that platform enough, the dynamic will generally stay the same: people porting from Windows to Linux will keep making compatibility shims, people porting from Linux to Windows will keep telling you "build it with MinGW or just run it in WSL2, idgaf".
Not really. It's just writing an encoding layer for the APIs. For most APIs it doesn't actually matter what they're doing at all; you don't have to actually care what their behaviors are. In fact, you could probably write compiler tooling to automatically analyze the APIs and generate code for most functions, so you don't have to do this manually.
> If MSFT makes UTF-8 as the codepage work well enough I'd rather use that.
"Well enough" as in, with all the warts I'm pointing out? Their current solution is all-or-nothing for the whole process. They haven't provided a module-by-module solution and I don't expect them to. They haven't provided a way to avoid information loss and I don't expect them to.
> You mean changing the codepage for use with the "A" functions? Any DLL that does that must go on the bonfire. There's a special place in Hell for developers who build such DLLs.
"Changing" the code page? No, I'm just saying any DLL that calls FooA() without realizing FooA() can now accept UTF-8 could easily break. You're just praying that they don't.
> I'm assuming you're referring to having to re-encode at certain boundaries. But note that nothing in Windows forces or even encourages you to use UTF-16 for bulk data.
Nothing? How do you say this with such confidence? What about, say, IDWriteFactory::CreateTextLayout(const wchar_t*) (to give just one random example)?
And literally everything that interacts with other apps/libraries/etc. that use Unicode (which at least includes the OS itself) will have to encode/decode. Like the console, clipboard, or WM_GETTEXT, or whatever.
The whole underlying system is based on 16-bit code units. You're going to get a performance hit in some places, it's just unavoidable. And performance isn't just throughput, it's also latency.
> Windows clearly prefers UTF-16, and its filesystems generally use just-wchar-strings for filenames on disk (they don't have to though). Unix clearly prefers UTF-8, and its filesystems generally use just-char-strings on disk.
Yes, and you completely missed the point. I was replying to your claim that "UTF-8 has won" over UTF-16. I was pointing out that what you have here is neither UTF-8 on one side nor UTF-16 on the other. Going with who "won" makes no sense when neither is the one you're talking about, and you're hitting information loss during conversions. If you were actually dealing with UTF-16 and UTF-8, that would be a very different story.
I really would like to learn otherwise, but when I have to suggest fixes, my old opinion stands: dropping any C runtime use and going from the API macros or A versions to W is the solution to all the weird and hard-to-repro problems on platforms from MS.
That does not sound like a good solution.
If you can't embed it for some reason, then you can distribute the application manifest side-by-side with the executable by appending ".manifest" to the binary filename. In this case you probably already have defensive checks for other resources not being found if a user copies just the exe; if not, you can add one and exit.
This does mean I can't just use char* and unadorned string literals, so I define a tchar type (which is char on Linux and wchar_t on Windows) and an _T() macro for string literals.
This mostly works without thinking about it.
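Roughly like this (a sketch of the shim described above; tfopen is an illustrative name, not a standard one):

```c
/* Portable tchar shim: wide on Windows, narrow everywhere else. */
#ifdef _WIN32
  typedef wchar_t tchar;
  #define _T(s) L ## s
  #define tfopen _wfopen      /* illustrative: map per-API as needed */
#else
  typedef char tchar;
  #define _T(s) s
  #define tfopen fopen
#endif

/* Usage: FILE *f = tfopen(_T("data.txt"), _T("r")); */
```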
This has been superseded by the Universal C runtime (UCRT)[1] which is C99-compliant.
https://learn.microsoft.com/en-us/cpp/c-runtime-library/comp...
[1]: https://devblogs.microsoft.com/cppblog/c11-atomics-in-visual...
[2]: https://devblogs.microsoft.com/cppblog/c11-threads-in-visual...
Not as bad as Apple nowadays though; quite far from the Inside Macintosh days.
Glad to know about C23 features, as they went silent on C23 plans.
C++23 looks quite bad for anything that requires frontend changes, there are even developer connection issues for us to tell what to prioritise, as if it wasn't logically all of it. There is another one for C++26 as well.
Personally, I think that with the improvements in low-level coding and AOT compilation from managed languages, we are reaching a local optimum, where C and C++ are good enough for the low-level glue; C23 and C++23 (eventually C++26, due to static reflection) might be the last ones that are actually relevant.
Similar to how, although the COBOL and Fortran standards keep being updated, how many compilers compliant with the ISO 2023 revisions are you going to find for portable code?
That's really unfortunate.
> Not as bad as Apple nowadays though; quite far from the Inside Macintosh days.
Funny story: I know a guy who wanted to write a personal Swift project for an esoteric spreadsheet format, and the quality of the SwiftUI documentation made him ragequit. After that, he switched to Kotlin Native and GTK and he is much happier.
> Personally, I think that with the improvements in low-level coding and AOT compilation from managed languages, we are reaching a local optimum, where C and C++ are good enough for the low-level glue; C23 and C++23 (eventually C++26, due to static reflection) might be the last ones that are actually relevant.
I agree on the managed-language thing but, I mean, the fact that other languages are getting more capable with low-level resources does not mean that improvements in C/C++ are a bad idea and will not be used. In fact, I think that features like the transcoding functions in <stdmchar.h> in C2y (ironically, those are relevant to the current HN post) are useful to those languages too! So even if C, C++ and Fortran end up just being used for numerical kernels, emulators, hardware stuff, glue code and other "dirty" code, advancements made to them are not going to waste.
It is possible to request for a directory to contain case-sensitive files though, and the filesystem will respect that. And if you use the NT Native API, you have no restrictions on filenames, except for the Backslash character. You can even use filenames that Win32 doesn't allow (name with a ":", name with a null byte, file named "con" etc), and every Win32 program will break badly if it tries to access such a file.
It's also possible to use unpaired surrogate characters (D800-DFFF without the matching second part) in a filename. Now you have a file on the disk whose name can't be represented in UTF-8, but the filename is still sitting happily in the filesystem. So people invented "WTF-8" encoding to allow those characters to be represented.
AFAIK, it's even worse: it uses the rules for the locale which was in use when the filesystem was created (it's stored in the $UpCase table in NTFS, or its equivalent in exFAT). So you could have different case-insensitivity rules in a single system, if it has more than one partition and they were formatted with different locales.
IMO, case-insensitive filesystems are an abomination; the case-insensitivity should have been done in the user interface layer, not in the filesystem layer.
Implementing case-insensitivity in a file picker or something is OK, but doing that throughout your app's runtime is insane since you'd have to hook every file open and then list the directory, whereas in a file picker you're probably listing the directory anyways.
What's the NT Object Namespace? You can use "WinObj" from SysInternals to see it.
The NT Object Namespace uses its own special paths called NT-Native paths. A file might be "C:\hello.txt" as a Win32 path, but as an NT-Native path, it's "\??\C:\hello.txt". "\??\" isn't a prefix, or an escape, or anything like that. It's a real directory sitting in the NT Object Namespace named "\??", and it's holding symbolic links to all your drive letters. For instance, on my system, "\??\C:" is a symbolic link that points to "\Device\HarddiskVolume4".
Just like Linux has the "/dev/" directory that holds devices, the NT Object Namespace has a directory named "\Device\" that holds all the devices. You can perform File IO (open files, memory map, device IO control) on these devices, just like on Linux.
"\??\" in addition to your drive letters, also happens to have a symbolic link named "GLOBALROOT" that points back to the NT-Native path "\".
Anyway, back to "\\?\". This is a special prefix that when Win32 sees it, it causes the path to be parsed differently. Many of the checks are removed, and the path is rewritten as an NT-Native path that begins with "\??\". You can even use the Win32 Path "\\?\GLOBALROOT\Device\HarddiskVolume4\" (at least on my PC) as another way to get to your C:\ drive. *Windows Explorer and File Dialogs forbid this style of path.* But 7-Zip File Manager allows it! And regular programs will accept a filename as a command line argument in that format.
Another noteworthy path in "\??\" is "\??\UNC\". It's a symbolic link to "\Device\Mup". From there, you can add on the hostname/IP address, and share name, and access a network share. So in addition to the classic UNC path "\\hostname\sharename", you can also access the share with "\\?\UNC\hostname\sharename" or "\\?\GLOBALROOT\Device\Mup\hostname\sharename".
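For instance, a sketch of opening a file through the GLOBALROOT spelling (HarddiskVolume4 is from my system above; yours will differ):

```c
#include <windows.h>

int main(void)
{
    /* "\\?\GLOBALROOT\Device\HarddiskVolume4\hello.txt": the same file as
       "C:\hello.txt" here, but addressed through the NT Object Namespace. */
    HANDLE h = CreateFileW(
        L"\\\\?\\GLOBALROOT\\Device\\HarddiskVolume4\\hello.txt",
        GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE)
        CloseHandle(h);
    return 0;
}
```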
On Unix the reason for this is that the kernel has no idea what codeset you're using for your strings in user-land, so filesystem-related system calls have to limit themselves to treating just a few ASCII codepoints as such (mainly NUL, `/`, and `.`).
Actually, everyone does use Latin-based scripts extensively. Maybe not exclusively, but almost all of your text-like data intended to be consumed by programs will mainly be Latin-based scripts. So even for languages whose characters need 3 bytes in UTF-8 but two in UTF-16, you can still end up saving memory with UTF-8, because all the boilerplate syntax around your fancy characters is ASCII.
One way is with a manifest file, and it works as of a particular build of Windows 10. This can also be applied to any EXE after building it. So if you want a program to gain UTF-8 support, you can hack it in. Most useful for console-mode programs.
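The manifest fragment looks roughly like this (the activeCodePage setting is documented by Microsoft and requires Windows 10 1903 or later):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <application>
    <windowsSettings>
      <!-- Make CP_ACP (and therefore the "A" APIs) UTF-8 for this process -->
      <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings">UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>
```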
The other way is to use the hacks that "App Locale" type tools use. One way involves undocumented function calls from NTDLL. I'm not sure exactly which functions you need to call, but I think it might involve "RtlInitNlsTables" and "RtlResetRtlTranslations" (not actually sure).
I don’t know how likely this is. There are a lot of old applications that assume a particular code page, or assume 1 byte per character, that this would break. There are also more subtle variations of this, like applications assuming that converting from wide characters to ANSI can’t increase the number of bytes (and hence an existing buffer can be safely reused), which isn’t the case for UTF-8 (but for all, or almost all, existing code pages). It can open up new vulnerabilities.
It would probably cause much less breakage to remove the Best-Fit logic from the win32 xxxA APIs, and instead have all unmappable characters be replaced by a character without any common meta semantics, like “x”.
[0] https://tambre.ee/blog/adobe_after_effects_windows_utf-8/
Another not-so-reasonable thing would be to drop all mappings to "special" ASCII characters from the Best-Fit mappings, though this wouldn't help apps linked statically with the CRT. Not so reasonable because it doesn't fix the vulnerabilities.
Sometimes security vulnerabilities motivate backwards-compatibility breakage.
[1] https://learn.microsoft.com/en-us/dotnet/fundamentals/code-a...
You don't need to convert everything from char * to wchar *. You can instead convert the wide characters you received to UTF-8 (or to something like Rust's WTF-8, if you want to also allow invalid sequences like unpaired surrogates), and keep using "char" everywhere; of course, you have to take care to not mix ANSI or OEMCP strings with UTF-8 strings, which is easy if you simply use UTF-8 everywhere. This is the approach advocated by the classic https://utf8everywhere.org/ site.
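A minimal sketch of that helper (the name is mine; WC_ERR_INVALID_CHARS makes unpaired surrogates fail loudly instead of silently becoming U+FFFD):

```c
#include <windows.h>
#include <stdlib.h>

/* Convert a wide string to a malloc'd UTF-8 string; NULL on failure. */
char *utf8_from_wide(const wchar_t *w)
{
    int n = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS,
                                w, -1, NULL, 0, NULL, NULL);
    if (n <= 0)
        return NULL;  /* invalid UTF-16 (e.g. unpaired surrogate) or error */
    char *s = malloc((size_t)n);
    if (s)
        WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS,
                            w, -1, s, n, NULL, NULL);
    return s;
}
```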
Bit of a shame that you can't fully opt in to UTF-8 with the *A APIs for your own apps. But for the issues highlighted in the post, I think it would still be a valid workaround/defence-in-depth measure.
[0] https://learn.microsoft.com/en-us/windows/apps/design/global...
[1] Yes, I know there is a UTF-8 code page (65001). That was really unusable for a long time and still is suffering compatibility issues to this day.
TXR Lisp uses wchar_t strings, and the "W" functions on Windows. So that's well and good. But it does start with a regular C main, relying on the Cygwin run-time for that.
If that's vulnerable, I will hack it to have its own argument parsing, using the wide char command line.
Maybe I should ask this on the Cygwin mailing list.
Unfortunately, they're using it on "__argv", not "__wargv".
Which means they are probably vulnerable to this.
[0] https://cygwin.com/cgit/newlib-cygwin/tree/winsup/cygwin/ker...
https://github.com/cygwin/cygwin/blob/main/winsup/cygwin/ker...
If we jump to the very bottom, we see that GetCommandLineA is one of the functions implemented in this file. It uses something called RtlUnicodeStringToAnsiString.
Microsoft declares that one in <wdm.h> and places it in NTOSKRNL.EXE.
That very function is mentioned in the submitted article as having the problem!
If Cygwin handles arguments through this function in preparation for main(), and it's the Microsoft one, it has the issue.
In my program, I could fix it (if necessary) by taking the WCHAR command line and parsing it myself, using my own UTF8 encoding routines to make the multi-byte strings.
Luckily, I maintain a fork of the Cygwin DLL for my project called Cygnal, where I can make improvements. The focus has not been security up to now, but rather restoring some "native like" behaviors in the Cygwin run-time that are "too POSIXy" for native Windows users.
I could hack the CRT stuff in the Cygnal fork to avoid pitfalls in producing the main() arguments.
At the moment, I have only looked at this issue fairly superficially, so I don't have a full picture of which pieces are doing exactly what in this area.
Fundamentally this boils down to bugs in functions that are supposed to transform untrusted input into trusted output, like the example they gave:
`system("wget.exe -q " . escapeshellarg($url));`
`escapeshellarg` does not produce trusted output for certain inputs: a URL containing U+FF02 (fullwidth quotation mark) passes the ASCII-quote escaping untouched, and Best-Fit then turns it into a real `"` in the child process, splitting the argument.
vs. posix that just dumps the arguments directly into argv
Windows technically just works on the principle of an executable name + a single argument. And it does this for compatibility with DOS.
So you end up with the stupid escaping rules you've described: compatibility conventions at the kernel level with earlier implementations of Windows, which in turn maintained compatibility with MS-DOS, all while providing a C abstraction that's compatible with POSIX.
Which is just one of many reasons why it’s a nightmare to write cross platform shells that also target Windows.
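A sketch of what that looks like at the API level: the parent hands over one string, and the child splits it by whatever rules its own runtime uses.

```c
#include <windows.h>

int main(void)
{
    STARTUPINFOW si = { sizeof si };
    PROCESS_INFORMATION pi;
    /* The entire "argv" is this one string; CreateProcessW requires a
       writable buffer. How it gets split is entirely up to the child. */
    wchar_t cmd[] = L"wget.exe -q \"http://example.com/\"";

    if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
```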
This is a bold claim.
Is it not possible? Or not easy to do correctly?
And then every program handles it in whatever way it feels is best.
As examples: Go/Java/Python all process arguments slightly differently.
Even Microsoft's libc changes handling between versions.
Given that it's not possible to know which parser a specific target program is going to use, it's not possible to generically serialise an array safely.
Unfortunately, the parsing is a two-step operation, with the application calling GetCommandLineA itself first and passing that to the parser, so a fix would need a hack to correlate the versions of the command line input without breaking when it's given a different string.
Always use W functions and do your own conversions (ones that can round-trip invalid UTF-16, like WTF-8) if you want to use an 8-bit encoding internally.
Most (all?) of the exploits here are already bugs, because the applications don't properly handle Unicode data.
It directly mentions: "Setting BestFitMappingAttribute parameters in this manner provides an added measure of security."
A reasonably sane solution would be for it to reject command line arguments on Windows that contain non-ASCII characters or ASCII characters that aren’t portable across code pages (not all code pages are a superset of US-ASCII), by default, and to support an optional parameter to allow the full range, documenting the risk.
This might be uglier and slightly change behaviour, but only for vulnerable applications.
For those who don't know the reference: https://en.wikipedia.org/wiki/Bush_hid_the_facts it's a vaguely related issue, in which a Windows component misinterprets a sequence of ASCII characters as a sequence of UTF-16 characters. Windows just seems full of these sorts of character-handling bugs, in part due to its long history as a descendant of the codepage-using MS-DOS and 16-bit Windows operating systems.
What happens if the standard library updates its shell escaping to also escape things like the Yen character and any other character that has a Best-Fit translation into a quote or backslash? Which is to say, what does Windows do for command-line splitting if it encounters a backslash-escaped nonspecial character in a quoted string? If it behaves like sh and the backslash simply disables special handling of the next character, then backslash-escaping any threat characters should work.
If the executable is linked statically with the CRT then nothing changes until you re-link it with the newer CRT. If it links with the UCRT then if the UCRT changes its rules then the program will too.
I find encoding to be a mostly solved problem in any software that supports /UTF-[0-9]+/. I can't remember the last time I ran into an encoding issue. Maybe it was when I ran PHP on a Windows server years and years ago, which defaulted to passing query parameters in ISO-8859-15 or something, which didn't work with HTML escaping (it returned a blank string, so it failed in a safe and obvious way, IIRC) if you didn't specify the character set. I've personally converted, or set, everything I create or use to UTF-8 by now.
https://devblogs.microsoft.com/oldnewthing/20040324-00/?p=40...
To guard against this nasty issue that is going to take years to fix you can enable global UTF-8 support by doing
Settings > Time & language > Language & region > Administrative language settings > Change system locale, and check Beta: Use Unicode UTF-8 for worldwide language support. Then reboot the PC for the change to take effect.
[0] https://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/Windo...
> The loosey-goosey mapping of code points to characters has always bothered me about Unicode.
but the ones you posted are Microsoft's Best-Fit mappings. I'm going to conclude that GP referred to just that.
The comment you responded to is weird because of exactly this reason. Very terse without a whole lot of substance, coming off as interaction bait / similar. All too often do I see comments on social media where people post something that will obviously lead others into being cornered arguing something that was misleading in the first place. Best not to entertain these unless confidence can be had that they're being genuine and are phrasing like this by mistake / not knowing better.
Imagine no emojis, Just letters, plain and true, No accents to confuse us, No glyphs in Sanskrit too. Imagine all the programs, Running clean and fast…
You may say I’m a dreamer, But I’m not the only one. I hope someday you’ll join us, And encoding wars will be done.
I do too, which is why I hate the "unicode only for everything everywhere" narrative that's taken hold. My language can't be written properly in Unicode, so support for traditional codepages and encodings is really important!
Speak ASCII or Die [0]