I'm glad they tracked it down even further to figure out exactly why.
https://randomascii.wordpress.com/2016/10/17/vestibular-dysf...
I documented it this time :sigh: https://github.com/MatthewJohn/terrareg/commit/2231ba733a7f5...
i need a hug.
He has many better ones but that's the latest one I've seen
As an additional protection, new stack frames are implicitly zeroed as they are created. I assume this is done by filling the CPU cache with zeros for those addresses before continuing to execute the called function. No need to wait for actual zeros to be written to main memory.
Consider the case of a system call, such as `read`. You’re in user space and you have some stack frames on the stack as usual. You allocate a buffer on the stack (there’s a cpu instruction for that; it basically just extends your “turf¹” to include more of the stack page, and zeros it as mentioned) to hold the data you want to read. You then call `read` with the `call` instruction, including the address of the buffer and the buffer size as arguments. So far everything is very straight–forward.
But `read` is actually in a different protection domain; it’s part of the kernel. The CPU uses metadata previously set up by the kernel to turn this into a “portal call”. After the portal call your thread will be given a different protection domain. In principle this is the kernel’s protection domain, but in reality the kernel might split that up in many complicated ways. What is relevant here is that the turf of this protection domain has been modified to include this new stack frame. From the perspective of `read`, the stack has just started; there are no prior frames. The reality is that this stack frame is still part of the stack of the caller, it’s only the turf that has changed. Those prior stack frames still exist, but they are unreadable. Worse, the buffer is also unreadable; it’s located at an address that is not part of the kernel’s turf.
So obviously there needs to be another set of instructions for modifying turfs. The full set of obvious modifications are available, but the relevant one here is a temporary grant of read and/or write permissions to a function you are about to call. You would insert a `pass` instruction to pass along access to the buffer for the duration of the call. This access is automatically revoked after the call returns. (Ideally you wouldn’t actually have to do this manually for every portal call; instead you would call a non–portal `read` function in libc. This function’s job is to make the portal call, and whoever wrote it makes sure to include the `pass` instruction.)
¹ A turf is the set of addresses that a given thread running in a given protection domain can read and/or write.
It'd be expensive though; every context switch would require its own stack and pushing / restoring one more register. There's a GOOD reason programs don't work that way and are supposed to not rely on values outside of properly initialized (and not later clobbered) memory.
What's expensive is the (very slow, for modern CPUs) operation of _writing_ that changed value out to memory, at its distant and slow speed compared to the speed the CPU operates at, as well as the overhead of synchronizing that write with any other caches of those memory locations.
Maybe you're thinking of the trick where a brand new page of memory-mapped memory is 'zeroed' but is in reality just a special 'all zeros' page in the virtual-to-physical memory lookup table? Those pages still need to be zeroed by real writes at some point, if they're ever used.
For reference, the actual proposal that was accepted into C++26 is [2]. It discusses performance only in general, and it refers to an earlier analysis [3] for more details. This last reference describes regressions of around 0.5% in time and in code size. Earlier prototypes suggested larger regressions (perhaps even "horrendous") but more emphasis on compiler optimizations has brought the regression down considerably.
Of course one's mileage may vary, and one might also consider a 0.5% regression unacceptable. However, the C++ committee seems to have considered this to be an acceptable tradeoff to remove a frequent cause of undefined behavior from C++.
[1]: https://herbsutter.com/2024/08/07/reader-qa-what-does-it-mea...
[2]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p27...
[3]: https://open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2723r1...
This compiler option causes the compiler to emit a call to a stack probe function to ensure that a sufficient amount of stack space is available.
Rather than just probe once for each stack page used, you can substitute a function that *FILLS* the stack frame with a particular value - something like 0xBAADF00D - and you could set the value to anything you wanted at runtime.
This would get you similar behaviour to gcc/clang's -ftrivial-auto-var-init
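For anyone unfamiliar with the flag, here's a minimal sketch of what it buys you (assuming a reasonably recent clang or gcc; the exact fill value is an implementation detail, 0xAA bytes on recent clang):

    // demo.cpp - build with:  clang++ -ftrivial-auto-var-init=pattern demo.cpp
    // Without the flag, reading wheel_count below is classic uninitialized-read UB
    // and may happen to "work" depending on stack garbage. With the flag, the
    // local is filled with a recognizable byte pattern, so the bug shows up
    // loudly and reproducibly instead of only on unlucky runs.
    #include <cstdio>

    int main() {
        unsigned wheel_count;                   // deliberately left uninitialized
        std::printf("%08x\n", wheel_count);     // garbage without the flag; the fill pattern with it
        return 0;
    }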
Windows has started to auto-initialize most stack variables in the Windows kernel and several other areas.
The following types are automatically initialized:
Scalars (arrays, pointers, floats)
Arrays of pointers
Structures (plain-old-data structures)
The following are not automatically initialized:
Volatile variables
Arrays of anything other than pointers (i.e. array of int, array of structures, etc.)
Classes that are not plain-old-data
During initial testing where we forcibly initialized all types of data on the stack we saw performance regressions of over 10% in several key scenarios.
With POD structures only, performance was more reasonable. Compiler optimizations to eliminate redundant stores (both inside basic blocks and between basic blocks) were able to further drop the regression caused by POD structures from observable to noise-level for most tests.
We plan on revisiting zero initializing all types (especially now that our optimizer has more powerful optimizations), we just haven’t gotten to it yet.
see https://web.archive.org/web/20200518153645/https://msrc-blog...

> Nov 22, 2012 — Perl 5.18 will introduce per process hash randomization and almost certainly will feature a new hash function.
> This is an interesting lesson in compatibility: even changes to the stack layout of the internal implementations can have compatibility implications if an application is bugged and unintentionally relies on a specific behavior.
I suppose this is why Linux kernel maintainers insist on never breaking user space.
With a sufficient number of users of an API,
it does not matter what you promise in the contract:
all observable behaviors of your system
will be depended on by somebody.
If you promise randomization, then somebody will depend on that :) And then you can never remove it!
I know it's easier said than done everywhere, just found it to be an interesting parallel.
You don't. You say the order is undefined.
Check the table at https://docs.adacore.com/spark2014-docs/html/ug/en/usage_sce..., look for "SPARK builds on the strengths of Ada to provide even more guarantees statically rather than dynamically.".
More reading:
https://docs.adacore.com/spark2014-docs/html/ug/en/tutorial....
https://learn.adacore.com (many books for learning Ada and SPARK) available in PDF, EPUB, and HTML format.
Also, scanf should be deprecated. Terrible API. Never use scanf or sscanf etc. We managed to get "gets()" deprecated, time to spread that to other parts of the API.
atoi() or atof() etc. work OK, but really you need a parser.
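As a small illustration of the difference (a hedged sketch, not a drop-in replacement; parse_int is a made-up helper name): atoi() gives you no way to tell "0" apart from garbage, while strtol() at least reports where parsing stopped and whether the value overflowed.

    #include <cerrno>
    #include <cstdlib>

    // Returns true only if the whole string is a valid base-10 integer in range.
    bool parse_int(const char *s, long *out) {
        char *end = nullptr;
        errno = 0;
        long value = std::strtol(s, &end, 10);
        if (end == s || *end != '\0') return false;  // no digits, or trailing junk
        if (errno == ERANGE) return false;           // overflow / underflow
        *out = value;
        return true;
    }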
Most modern languages do that as part of hashdos mitigation. Python did that until it switched to a naturally ordered hashmap, then made insertion order part of the spec. Importantly, iteration order remains consistent within a process (possibly on a per-hashmap basis).
Notably, Go will randomise the starting point of hashmap iteration on each iteration.
It turned out there were a few places that had assumed a predictable - not just stable, but deterministic - hash key iteration order. Mostly this showed up as tests that failed 50% of the time, which suggested to me that how annoying an error is to track down is inversely correlated with how often the error appears in tests.
(Other issues were mostly due to the fact that Perl 5 is all but abandoned by its former community: a few CPAN modules are just gone, some are so far out of date that they can't be coerced to still work with other modules that have been updated over time.)
What compiler error would you expect here? Maybe not checking the return value from scanf to make sure it matches the number of parameters? Otherwise this seems like a data file error that the compiler would have no clue about.
Yes it would. -fsanitize=address does a bunch of instrumentation - it allocates shadow memory to keep track of which addresses in main memory are valid to access, and it checks every read and write address against the shadow memory. It is a combination of compile-time instrumentation and run-time checking. And yes, it is expensive, so it should be used for debugging and not the final release.
https://clang.llvm.org/docs/AddressSanitizer.html , https://learn.microsoft.com/en-us/cpp/sanitizers/asan?view=m...
There's no use-after-free, use-after-return, use-after-scope, or OOB access here. It's a case of "an allocated stack variable is dynamically read without being initialized only in a runtime case," which afaik no standard analyzer will catch.
The best way to identify this would be to require all locals to be initialized as a matter of policy (very unlikely to fly in a games studio, especially back then, due to the perceived performance overhead), or to debug with a form of stack initialization enabled, like "-ftrivial-auto-var-init=pattern", which, while it doesn't catch the issue statically, does make it appear pretty quickly in QA (I tested).
I only use UBSan and ASan on my own programs because I tend not to make mistakes about initialization. So my knowledge is incomplete with respect to auditing other people's code, which can have different classes of errors than mine.
Thank goodness that every language that is newer than C and C++ doesn't repeat these design mistakes, and doesn't require these awkward sanitizer tools that are introduced decades after the fact.
The simpler policy of "don't allow uninitialized locals when declared" would also have caught it with the tools available when the game was made (though a bit ham-fisted).
int x, y, z;
int n = scanf("%d %d %d", &x, &y, &z);
At compile time, you can make no inferences about which of x, y, and z are defined, because that depends on the returned value n. There are many ways to branch out from this. One is to insist on definite assignment - so if we cannot prove all of them are always assigned, then we can treat them as "possibly undefined" and err out.
Another way is to avoid passing references and instead allow multiple returns, like Python (this is pseudocode):
x, y, z = scanf("%d %d %d")
In that case, if the hypothetical `scanf()` returns a tuple that is less than 3 elements or more than 3 elements, then the unpacking will fail at run time and crash exactly at that line. Another way is like Java, which insists that the return value is a scalar, so it can't do what C and Python can do. This can be painful on the programmer, of course.
int n = scanf("%d %d %d", &x, &y, &z);
Would be caught, because it takes references to uninitialized variables. To be allowed, the programmer would have to initialize the variables beforehand. It's probably what the PR resolving the issue I linked to does, though I didn't check.
But because the whole line is parsed in a single sscanf call, the compiler's static analysis is forced to assume they have now been initialised. There doesn't seem to be any generic static analysis approach that can catch this bug.
Though... you could make a specialised warning just for scanf that forced you to either pass in pre-initialized values or check the return result.
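Absent such a warning, the usual defensive pattern looks something like this sketch (parse_line and the field names are made up; the point is initializing the outputs and treating anything other than a full match as an error):

    #include <cstdio>

    // Parse one line of three integers; returns false unless all fields matched.
    bool parse_line(const char *line, int *x, int *y, int *z) {
        *x = *y = *z = 0;                                    // defined values even on failure
        return std::sscanf(line, "%d %d %d", x, y, z) == 3;  // require a full match
    }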
But AI won't replace them, nor did the past 50+ years of software development innovation. There are millions (tens of millions?) of higher-level language developers who don't know the difference between stack and heap besides maybe some theory they half remember from school, but they don't care because they don't have to think about it for their day job.
Win9x video games that made bad assumptions about the stack were a theme I saw. One of the differences between Win9x and NT-based Windows is that kernel32 (later kernelbase) is now a user mode wrapper atop ntdll, whereas in the olden days kernel32 would trap directly into the kernel. This means that kernel32 uses more user mode stack space in NT. A badly behaving app that stored data to the left of the stack pointer and called into kernel32 might see its data structures clobbered in NT and not in 9x. So there were compatibility hacks that temporarily moved the stack pointer for certain apps.
IIRC, they had a significant lab and tons of infrastructure for exercising and identifying compatibility issues in thousands of popular and less popular software packages. It all got distilled into a huge database of app fingerprints and corresponding compatibility shims to be applied at runtime.
It blows my mind that these languages allow you to leave variables uninitialized, which has caused countless bugs (including production bugs that I have seen first hand), and that you often need to rely on additional compiler flags or static analysis tools/valgrind etc. to catch them. Even though newer languages often use a different solution (default zero value, or requiring a variable to be initialized before use), people still go back to C/C++ all the time.
https://web.archive.org/web/20250423144746/https://cookieplm...
while (this->m_fBladeAngle > 6.2831855) { this->m_fBladeAngle = this->m_fBladeAngle - 6.2831855; }
Like, "let's just write a while loop that could turn into an infinite loop coz I'm too lazy to do a division"
But knowing they were able to blow up GTA5's loading time by 5 minutes just by parsing json with sscanf, I don't have much hope.
Writing some simple code that works with the data you expect to have, without bothering with optimizations, is fine - if anything this is one of the actual cases of "premature optimization": even with profiling, no real time is spent on that code with your data, and you should avoid wild guesses since chances are you'll be wrong (even if in this case it could have been a correct guess, that'd be like a broken clock guessing the time is always 13:37).
The actual issue with that code was that, after they reused it for GTA Online and it started becoming a performance issue over time as they added more objects, nobody thought to try and see what was wrong.
The second error of deduplicating values by linear scanning an array was way more egregious.
I think someone estimated that error cost them millions in revenue? I'm pretty sure a fraction of that could afford an engineer who knows how fast computers ought to be.
Like, even though it's pretty critical to the initial user experience, initial loading time is generally what gets disregarded the most.
> I'm pretty sure a fraction of that could afford an engineer who knows how fast computers ought to be.
It can, if someone cares enough or realises it's an issue, and then someone is motivated enough to dig into it, or has the time to.
There is absolutely no way this could turn into an infinite loop. It could underflow, but for that to happen the angle would have to be less than 2*pi, and then the loop would already have exited.
When you subtract a small float from a very large float, the value doesn't change. This is because the "steps" between float values increase with the size of the value (i.e. floats have coarser resolution for larger magnitudes)
To see this in action, try running the following in a JavaScript interpreter:
console.log(1_000_000_000_000_000_000 - 1);
It will “never” become big.
So why check? It’s unnecessary.
Thus the bug.
I guess the most robust code handling both performance and unexpected input would be one iteration of this (leveraging the assumption that angles either always stay within bounds, or go out of bounds by a small amount for one frame), followed by an fmod if that assumption turns out to be totally off.
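Something along these lines, perhaps (just a sketch of that idea; kTwoPi and NormalizeBladeAngle are invented names):

    #include <cmath>

    constexpr float kTwoPi = 6.2831855f;

    float NormalizeBladeAngle(float angle) {
        if (angle >= kTwoPi) {
            angle -= kTwoPi;                      // cheap path: one frame of overshoot
            if (angle >= kTwoPi)
                angle = std::fmod(angle, kTwoPi); // assumption broken: fall back to fmod
        }
        return angle;
    }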
(so, for example, this bug would have never been created by Rust unless it was deeply misused)
Though, I really like the _mm_undefined_ps() intrinsics for SSE that make it clear that you're purposefully not initialising a variable. Something like that for ints and floats would be pretty sweet.
When I think of the "no runtime cost" mentality of C/C++ I don't think that normally extends to ignoring errors in I/O functions.
> The performance impact is negligible (less that 0.5% regression) to slightly positive (that is, some code gets faster by up to 1%). The code size impact is negligible (smaller than 0.5%). Compile-time regressions are negligible. Were overheads to matter for particular coding patterns, compilers would be able to obviate most of them.
> The only significant performance/code regressions are when code has very large automatic storage duration objects. We provide an attribute to opt-out of zero-initialization of objects of automatic storage duration. We then expect that programmer can audit their code for this attribute, and ensure that the unsafe subset of C++ is used in a safe manner.
> This change was not possible 30 years ago because optimizations simply were not as good as they are today, and the costs were too high. The costs are now negligible.
[1] https://github.com/cplusplus/papers/issues/1401
[2] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p27...
A trick we were using with SSE was something like
    __m128 zero = _mm_undefined_ps();
    zero = _mm_xor_ps(zero, zero);
Now we were really careful with viewing our ops as data dependencies to reason about pipelining efficiency. But our profiling tools were not measuring this.
We did avoid _mm_set_ps(0.0f) which was actually showing up as cache misses.
I wonder if we were actually slower because cache misses are something we can measure?!
As a very high level example, take sorting. Rust's standard library provides you both a stable and unstable sort, as does your C++ standard library.
The C++ standard promises these sorts have O(n log n) performance. It's unclear in modern C++ whether having a nonsensical ordering† is Undefined Behaviour (as it was in older versions) or outright IFNDR (much worse than UB), but the real-world effect will be similar anyway.
Rust promises that these sorts work as expected. If you provide a nonsensical ordering, obviously it can't very well "sort" things the way you asked, but we don't need to kill your neighbour's cats and wipe the hard disk either; so it will either give you back the same things in... some order, or it will report the fatal error in your software.
The Rust option here is clearly much safer right? So, how much performance is this costing? Actually, it's faster. So C++ is choosing slower and worse. What's the upside?
† For example what about if I insist that Red < Green, but also Green < Red, and furthermore Red == Green is true, but so is Red != Green, however neither Green == Red nor Green != Red are true!
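For the C++ side of that footnote, here's a minimal sketch of one such nonsensical ordering - a comparator that is not a strict weak ordering, which is all it takes to land std::sort in UB/IFNDR territory (a hardened/debug standard library may abort; a release build may crash, hang, or quietly "succeed"):

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> v{3, 1, 2, 2, 5, 4};
        // "<=" makes equal elements compare as "less than" in both directions,
        // violating the strict weak ordering std::sort requires.
        std::sort(v.begin(), v.end(), [](int a, int b) { return a <= b; });
        return 0;
    }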
And that's not to mention the uncomfortable truth that while doing this correctly in something like Rust may very well take less effort overall than in C++, that is not the bar we are aiming to clear. They wanted to implement something that was correct-enough, and given that this bug wasn't hit for 20+ years and that the game was a roaring success on all the major platforms - I think that was the right decision.
In video games you can go back and try another option, but life isn't like that, so we can only suppose what might have happened.
I know what you’re saying - you can’t really know what might have been in an alternate reality. But in that alternate reality they’d have had to come up with something truly monumental to outdo themselves here.
I think you’re just being a wee bit picky about me using the words “the right decision”. If we’re honest with ourselves there probably wasn’t a Rust-like language in the conversation when they set out to build GTA3, Vice City or San Andreas so this is all kind of moot unless we're suggesting that Rockstar should have started out by building that language...
I wouldn't consider it "Rust evangelism" as much as "not C/C++/any language that makes it trivially easy to write undefined-behavior bugs, evangelism".
I'd be just as much a fan of Roc, but they're not yet mature and actually in the middle of a compiler rewrite (as it so happens, from Rust to Zig, lol) https://www.roc-lang.org/
The best engineers I know are open to everything and played with almost every tool/language/whatever to form (sorry) informed opinions about them. They often know what they are talking about, and they choose the best tool for the job.
So the person in question is irritated at an interesting blog post about a 20+ year old game being used as another opportunity to push Rust. For starters, Rust obviously wasn't around at the time the game was developed, so it's not like Rockstar made the wrong call in implementing this using C++.

But more importantly, I don't think Rust is currently in a state where studios can justify using it to develop AAA games. They'd need big teams of developers with Rust experience who are well-versed in the sort of problems encountered during game development. You'd need battle-tested build/deployment processes that allow you to produce the binaries for PlayStation/Xbox (not too dissimilar CPU/GPU-wise, but each with their own platform-level quirks no doubt) and Switch hardware - potentially across multiple generations. You'd need the various platforms' OS hooks and network-service APIs available.

Additionally, you'd need to convince the guys with the money that instead of spending $projected on a game, you'd need to spend $projected+$mystery_number when they take the plunge and write their first game in Rust with new tools etc. rather than C++ and everything they currently use. The gaming industry is nothing if not ruthless at making money: if it made financial sense they'd be moving to Rust already - and if it will make sense in the future, they'll be planning to do it.
You've been charitable in your read of the original comment, taking it as "this family of problem does not exist in Rust" - and for what it's worth I agree and really value this. However, this other commenter has presumably seen it as a bit more naive and missing the bigger picture, and in combination with other similar experiences is questioning the value of these glowing testimonies.
In addition, a lot of people saying "this is great, this is the future!" doesn't necessarily make something good automatically. For about 5+ years here on HN we had legions of people responding "blockchains will fix this" to almost every problem and very confidently declaring the rest of us are luddites for not getting it. I'm obviously not saying Rust is the same, I'm just trying to show that not following the crowd doesn't automatically mean you're the kind who will always fall behind.
As for how to avoid this? I dunno if you can undo the zillions of RIIR comments that have been floating around since Rust appeared on the scene, but if I was evangelising or even just strongly recommending it I'd just keep in mind that my target audience is maybe sick of seeing the same kinds of comments and would be a bit more creative and/or sensitive in approaching the topic.
That's one hell of a language!
int k; // C makes an uninitialized variable named k - probably bad idea
let k: i32 = unsafe { MaybeUninit::uninit().assume_init() }; // Rust, same bad idea
If we say "I will initialize it - later" that's fine in Rust and you just write the name (and where appropriate type) of the variable and go about your day. The compiler will reject your program if, in fact, it can't see why you're fulfilling that promise, and sometimes that might be because the compiler is dumb (but often it's because you are), but there's no problem technically with this, and if the compiler agrees that we do, in fact, initialize it later then it compiles and works and everybody is happy.

But to actually make a variable and not initialize it, as we saw above, is a lot of extra work in Rust because, like... that's a bad idea, why would you be setting out to do that?
This is such a bad idea that Rust's unsafe std::mem::uninitialized, which is how they did this before MaybeUninit existed, was de-fanged (giving it poor performance by actually writing a pattern to RAM every time) and deprecated, so you get a warning if you try to use it even though it was already marked unsafe. See, people (and I'm sure many C programmers are like this) tend to imagine it's OK for, say, an integer to be uninitialized because surely any possible value is OK, right? Nope. Your operating system knows that data was never written, and so it feels entitled to fuck you about if you expect it to stay unchanged, because it never promised that will work - as a result, rarely but sometimes, you get kicked in the head by the OS and you get a seemingly impossible bug.
It's true that C may be unique-ish in this regard though- this bug also couldn't happen in Ruby, which is not a functional language, but Ruby certainly still makes undefined behaviors much more possible than in other languages like Elixir.
This sentence is the real takeaway point of the article. Undefined behavior is extremely insidious and can lull you into the belief that you were right, when you already made a mistake 1000 steps ago but it only got triggered now.
I emphasized this point in my article from years ago (but after the game was released):
> When a C or C++ program triggers undefined behavior, anything is allowed to happen in the program execution. And by anything, I really mean anything: The program can crash with an error message, it can silently corrupt data, it can morph into a colorful video game, or it can even give the right result.
> If you’re lucky, the program triggering UB will show an appropriate error message and/or crash, making you immediately aware that something went wrong. If you’re unlucky, the program will quietly mangle data, and by the time you notice the problem (via effects such as crashes or incorrect output) the root cause has been buried in the past execution history. And if you’re very unlucky, the program will do exactly what you hoped it should do, until you change some unrelated code / compiler versions / compiler vendors / operating systems / hardware platforms – and then a new bug becomes visible, and you have no clue why seemingly correct code now fails to work properly.
-- https://www.nayuki.io/page/undefined-behavior-in-c-and-cplus...
As I wrote in my article, this point really got hammered into me when a coworker showed me a patch that he made - which added a couple of innocuous, totally correct print statements to an existing C++ program - and that triggered a crash. But without his print statements, there was no crash. It turned out that there was a preexisting out-of-bounds array write, and the layout of the stack/heap somehow masked that problem before, and his unlucky prints unmasked the problem.
Okay so then, how can we do better as developers today?
0) Read, understand, and memorize what actions in C or C++ are undefined behavior. Avoid them in your code at all costs. Also obey the preconditions of any API you use, whether in the standard library, operating system, etc.
1) Compile your application in Debug mode and compare its behavior to Release mode. If they differ by anything other than speed, then you have a serious problem on your hands.
2) Compile and run with sanitizers like -fsanitize=undefined,address to catch undefined behavior at runtime (see the sketch after this list).
3) Use managed languages like Java, C#, Python, etc. where you basically don't have to worry about UB in normal day-to-day code. Or use very well-designed low-level languages like Rust that are safe by default and minimize your exposure to UB when you really need to do advanced things. Whereas C and C++ have been a bonanza of UB like we have never seen before in any other language.
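To make point 2 concrete, a small sketch (the out-of-bounds write is deliberate; build with something like `g++ -g -fsanitize=undefined,address demo.cpp && ./a.out`):

    // ASan reports the write below as a heap-buffer-overflow with a stack trace;
    // UBSan would similarly flag things like signed overflow or misaligned loads
    // at the point they happen, instead of letting the damage surface much later.
    #include <vector>

    int main() {
        std::vector<int> wheels(4);
        wheels[4] = 1;   // one past the end: undefined behaviour, caught at runtime by ASan
        return 0;
    }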
[1]: https://herbsutter.com/2024/08/07/reader-qa-what-does-it-mea...
This reminds me of an excellent article I read a while back, the gist of it was that, given sufficient success, there's no such thing as a private API.
A little piece of technology made sense in the original context, but then it got moved to a different context without realizing that move broke the contract. Specifically in this case a flying boat became an airplane.
---
I recently worked a bug that feels very similar:
A Linux CUPS printer would not print to the selected tray; instead it always requested manual feed.
Ok. Try a bunch of command line options, same issue.
Ok. Make the selection directly in the PPD (postscript printer definition) file. Same issue.
Ok! Decompile the PXL file. Wrong tray is set in pxl file... why?
Check Debug2 log level for cups - Wrong MediaPosition is being sent to ghostscript (which compiles the printer options into the print job) by a cups filter... why?
Cups filter is translating the MediaPosition from the PPD file... because the philosophy of cups is to do what the user intended. The intention inferred from MediaPosition in the PPD file (postscript printer definition) is that the MediaPosition corresponds to the PWG (Printer Working Group) MediaPosition, NOT the vendor MediaPosition (or local equivalent - in this case MediaSource).
AHA!! My PPD file had been copied from a previous generation of server, from a time when that cups filter did NOT translate the MediaPosition, so the VENDOR MediaSource numbers were used. Historically, this makes sense. The vendor tray number is set in the vendor ppd file because cups didn't know how to translate that.
Fast forward to a new execution context, and cups filters have gotten better at translating user intention, now it's translating a number that doesn't need to be translated, and silently selecting the wrong tray.
TLDR; There is no such thing as a printer command, only printer suggestions.
(a component being reused in a new context where a contract is broken, not bad CUPS drivers)
I'm ignorant about game development, virtual machines and system programming but from the little I understand it seems a sensible choice to make.
While there is an initial price to pay, modeling 99% of the game to be implemented on a user-implemented stack seems a sensible approach to me.
Generally, game console "debug" configurations aren't "true" debug like most people think of -- optimizations are still globally enabled, but the build generally has a number of debug systems enabled that naturally require the use of a devkit. Devkits, especially back then, generally had 2-3x as much memory as retail systems -- so you'd happily sacrifice framerate during feature development to have those systems enabled.
Debugging was (and still is) generally done on optimized builds and, once you know the general area of the problem, you simply disable optimizations for that file or subsystem if you can't pinpoint the issue in an optimized build.
The biggest performance hit, in general, comes from disabling optimizations in the compiler. I say "in general" because there are systems that might be used to find this kind of thing that DO make a game wholly unplayable, such as a stomp allocator. Of course, you wouldn't generally enable a stomp allocator across all your allocations unless you're desperate, so you could still have that enabled to find this kind of bug and end up with a playable game.
The more likely reason here is that no one noticed or cared. GTA:SA is 21 years old and this bug doesn't affect the Xbox or other versions.
> (with checks for things like this enabled)
You can (and could) easily compile an optimized build with debug symbols to track down sources of issues, but catching a bug like this would likely take a dynamic checker like Valgrind or MSan, which do not allow for any optimizations if you want to avoid false negatives, and add even more overhead on top of that. (Valgrind with its full processor-level virtualization, and MSan with its shadow state on every access. But MSan didn't exist at the time, and Valgrind barely existed.)
At minimum, fine-grained stack randomization might have exposed the issue, but only if it happened to be spotted in playtests on the debug build.
MSan didn’t exist at the time and valgrind doesn’t work on a ps2.
Neither of those are necessary to find this bug as it could be found using a stomp allocator if you’re a developer on the project at the time.
At no point is there an OOB access, just a failure to initialize stack variables. And to catch that, you'd need either MSan-style shadow state that didn't exist, thorough playtesting with fine-grained stack randomization, or some sort of poisoning that I don't think existed.
Really this is more a story about poor development practice than it is an interesting bug.
I spent hours looking for a badger.
https://static.wikia.nocookie.net/gta-myths/images/f/f1/Egg_...
IMHO this shows the downfall of Microsoft. Why did they do that? Critical sections have been there for many decades and should be basically bug-free by now. My best guess is someone thought they'd "improve" things and rewrote it, then made some microbenchmark that maybe showed the dubious improvement.
The other comment here mentions Raymond Chen, who wrote this article about why backwards-compatibility is very important (and arguably what got Microsoft into the position it's in today):
https://devblogs.microsoft.com/oldnewthing/20031224-00/?p=41...
and also this memorable case: https://news.ycombinator.com/item?id=2281932
That's a problem for the party trying to sell operating system updates.
FTA:
I have a likely explanation for why Rockstar made this specific mistake in the data to begin with – in Vice City, Skimmer was defined as a boat, and therefore did not have those values defined by design! When in San Andreas they changed Skimmer’s vehicle type to a plane, someone forgot to add those now-required extra parameters. Since this game seldom verifies the completeness of its data, this mistake simply slipped under the radar.
So the original code (or at least a working code + data version) in GTA Vice City had no visible problems, at least with the Skimmer object, since the vehicles.ide file had the correct number of values for the Skimmer boat object.

Someone changed the Skimmer object from a boat to a plane for GTA San Andreas, BUT they DID NOT update the object to have the REQUIRED wheel values for a plane object.
Now the GTA code is expecting more values than it gets.
The vehicles.ide wasn't validated for correctness after the Skimmer object change to plane. Maybe there are more gotchas in that file...
At least users can fix the problem with a text editor instead of waiting and hoping that Rockstar would fix the problem and release an update.
Mitigations exist - ASLR, NX pages, stack-smashing protection etc. but nothing comprehensively stops reads of stale data beyond the stack.
Thought experiment for a moment: what if the hardware ensures the unused part of a stack region cannot be read or written?
There are many ways to skin this cat; here's one, based around tracking each stack's start address A, size S, and current depth D:
1. Add an instruction to inform the CPU there is a stack at address A of size S. Its depth D is initially 0.
2. Add a jump instruction which reserves N bytes on the stack at address A, growing depth D to (D+N). Maybe this can be its own “reserve” instruction so as not to need a new jump instruction.
3. Give existing return instructions stack awareness. If returning to an address inside a stack, un-reserve the bytes reserved by the most recent jump, making the new depth (D-N).
4. Fail reads or writes to the stack region beyond its current depth. In other words fail all reads and writes between A+S-D and A+S.
5. The arithmetic is reversed on architectures whose stacks grow downwards.
Downsides I can see:
It cements one calling convention.

The CPU memory manager will need a lot of state per stack, of which there are many per process: address A, size S, current depth D, plus a reservation stack - i.e. the sizes of each frame's stack memory. That's a lot of bookkeeping! It's far from zero cost.

The limits of how much bookkeeping the CPU can do impose limits on how deep a stack can go and how many stacks are supported - so when there are too many stacks or one goes too deep, either the CPU needs to signal failure or engage a fallback mode and revert to behaving as CPUs do today. And of course fallback puts things back to the start. It'd therefore only mitigate situations in which an attacker cannot control the depth of the stack / a bug always happens inside the max depth the CPU can bookkeep for.
That said, stacks are ubiquitous! Hardware stack awareness opens up all kinds of new mitigations.
Why isn’t this a common idea? Has it been tried?
I’m proposing the memory of the fresh stack frame initially reads as zeroes until written to.
A real update should fix both (note: I don't believe the later releases did, they also just added defaults to the parser) but for SilentPatch: a mod is not a real update, and being as simple as possible to remove & reducing conflicts with other mods is more important here than a fix that digs as deep as possible.
Devs need to be aware that the following C++ initialiser exists, which zeroes data structures for you:
MyStruct s = { };
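A quick sketch of the difference this makes compared to a bare declaration (member names made up for illustration):

    #include <cstdio>

    struct MyStruct { float frontWheelScale; float rearWheelScale; };

    int main() {
        MyStruct a = { };   // value-initialized: both members are 0.0f
        MyStruct b;         // members are indeterminate; reading them is UB
        std::printf("%f %f\n", a.frontWheelScale, a.rearWheelScale);
        (void)b;
        return 0;
    }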
They wrote a JSON “parser” using sscanf. sscanf is not bulletproof! Just use an open source library instead of writing something yourself. You will still be a real programmer, but you will finish your game sooner and you won't have embarrassing stories written about you.
Nothing has changed appreciably. If they would let you login to a private invite-only lobby that would likely speed things up greatly— but it’ll never happen.
> If they would let you login to a private invite-only lobby that would likely speed things up greatly— but it’ll never happen.
Did they remove this option in the last couple years?
However, it's not even remotely "like crack". Crack is really really really really fun, period, no "just enough of the time" about it. The reason people get hooked on crack is because it's guaranteed to be fun.
If I had to choose a substance that most closely mirrored variable ratio reinforcement conditioning, it'd probably be ketamine.
If you don't know what “good” looks like, take a look at [Serde](https://serde.rs/). It’s for Rust, but its features and overall design are something you should attempt to approach no matter what language you’re writing in.
The only C code that I have recently interacted with uses a home–grown JSON “library” that is actually pretty good. In particular it produces good error messages. If it were extracted out into its own project then I would be able to recommend it as a library.
Apart from that, many of us thought that Java serialization was good if just used correctly, that IE's XML parsing capabilities were good if just used correctly, and so on. We were all very wrong. And a 3rd party library would be just some code taken from the web, or some proprietary solution where you'd once again have to trust the vendor.
> And a 3rd party library would be just some code taken from the web, or some proprietary solution where you'd once again have to trust the vendor.
Open source exists for a reason, and had already existed for ~15 years by the time this game was begun. 20 years later there are even fewer excuses to be stuck using some crappy code that you bought from a vendor and cannot fix.
But also keep in mind in 2004 the legality of many open source projects was not really tested very well in court. Pretty sure that was right around the time one of the bigger linux distros was throwing its weight around and suing people. So you want to ship on PS2 and XBOX and PC and GameCube. Can you use that lib from inside windows? Not really. Can you build/vs buy? Buy means you need the code and probably will have to port it to PS2/GameCube yourself. Can you use that opensource lib? Probably, but legal is still dragging its feet, and you get to port it to PS2. Meanwhile your devs need a library 3 weeks ago and have hacked something together from an older codebase that your company owns and it works and means you can hit your gold master date.
Would you do that now? No. You would grab one of the multitudes of decent libs out there and make sure you are following the terms correctly. Back then? Yeah I can totally see it happening. Open source was semi legally grey/opaque to many corporations. They were scared to death of losing control of their secret sauce code and getting sued.
I don't follow. What would the reasons be?
(There is no way to prevent changes by a knowledgeable person with time or tools, so that's not a goal)
It's only now that single player progress is profitable to sell that video games have made save game encryption the default.
It's so stupid.
One of the anecdotes from Titan Quest developed by Iron Lore is that their copy protection had multiple checks, crackers removed the early checks to get the game running but later 'tripwires' as you progress through the game remained and the game appeared to crash. So the game earned a reputation for being buggy for something no normal user would hit running the game as intended.
What? No. What even are you suggesting? Hell, games with OFFICIAL MODDING SUPPORT still require you submit bug reports with no mods running.
Editing game files has always been "you are on your own"; even editing standard Unreal config files is something you won't get support for, and those are trivial human-readable files with well known standards.
> One of the anecdotes from Titan Quest
Any actual support for this anecdote? Lots of games have anti-piracy features that sneakily cause problems, and could even fire accidentally. None of those games get a reputation for being buggy. Games like Earthbound would make the game super hard and even delete your save game at the very end. Batman games would nerf your gliding ability. Game Dev Tycoon would kill your business due to piracy.
None of these affected the broad reputation of the game. Most of them are pretty good marketing in fact.
On top of that, the hardware requirements (256MB of system RAM, and the PlayStation 2 only had 32MB) made it enough of a challenge to get the game running at all. Throwing in a heavyweight parsing library for any of these three languages was out of the question.
Most of the time, the programmers who do this do not follow the simple rule that Stroustrup stated, which is to define or initialize a variable where you declare it (i.e. don't declare it until you can give it a value), and which would prevent a lot of bugs in C++.
struct test {
int my_int = 0;
int* my_ptr = nullptr;
};
Or is this something more recent? You cannot initialize them with a different value unless you also write a constructor, but that's not the issue here (since you are supposed to read them from the file system).
struct test {
int my_int;
int *my_ptr;
test() : my_int(0), my_ptr(NULL) {}
};
The standards "artists" have are artificial and snobby.
How can Deadmau5/whatever EDM artist sell so much?
I worked in gamedev around the time this game was made and this would have been very much an ordinary, everyday kind of bug. The only really exceptional thing about it is that it was discovered after such a long time.
Yeah, but we're talking about a 2004 game that was pretty rushed after 2002's Vice City (and I wouldn't be surprised if the bug in the ingestion code existed there as well, it just wasn't triggered due to the lack of planes except that darn RC Chopper and the RC plane from that bombing run mission). Back then, the tooling to spot UB and code smell didn't even exist or, if it did, was very rudimentary, or the warnings that did come up were just ignored because everything seemed to work.
https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...
Besides, the complaint about not having a heavyweight parser here is weird. This is supposed to be "trusted data", you shouldn't have to treat the file as a threat, so a single line sscanf that's just dumping parsed csv attributes into memory is pretty great IMO.
Definitely initialize variables when it comes to C though.
This isn't that uncommon - look at something like Diablo 2 which has a huge amount of game data defined from text files (I think these are encoded to binary when shipped but it was clearly useful to give the game a mode where it'd load them all from text on startup).
To be honest, I just don't like how you disparaged the programmer out-of-context. Talk is cheap.
But it is even more important for today’s game studios to see and understand the mistakes that yesterday’s studios made. That’s the only way to avoid making them all over again.
And in 2004, didn't have a published specification, or much use outside of webdev (which hadn't eaten the world yet).
> and SAX parsers are 22
And, especially at the time, pretty much exclusive to Java, right?
Put another way, which are the high-quality open-source implementations of those formats that the developers should've considered while working on SA in 2003 and 2004? Or for that matter, in the 2001-2002 timeframe, when the parsing code was probably actually written for use in VC?
Your average hire for the time might have been self-taught with the occasional C89 tutorial book and two years of Digipen. Today’s graduates going into games have fallen asleep to YouTube lectures of Scott Meyers and memorized all the literature on a fixed timestep.