But apart from that, Rust is basically a bag of sensible choices. Big and small stuff:
- Match needs to be exhaustive. When you add something to the enum you were matching, it chokes. This is good. (There's a small sketch of this after the list.)
- Move by default. If you came from C++, I think this makes a lot of sense. If you have a new language, don't bring the baggage.
- Easy way to use libraries. For now it hasn't splintered into several ways to build yet, I think most people still use cargo. But cargo also seems to work nicely, and it means you don't spend a couple of days learning cmake.
- Better error handling. There are a few large firms that don't use exceptions in C++. New language with no legacy? Use the Ok/Err/Some/None thing.
- Immutable by default. It's better to have everything locked down and have to explicitly allow mutation than just have everything mutable. You pay every time you forget to write mut, but that's pretty minor.
- Testing is part of the code; it doesn't seem tacked on like it does in C++.
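A tiny sketch of the exhaustiveness and mutability points (the enum and names are made up):

    enum State {
        Idle,
        Running,
        // Stopped,   // <- uncomment this and the match below stops compiling
    }

    fn describe(s: &State) -> &'static str {
        match s {
            State::Idle => "idle",
            State::Running => "running",
        }
    }

    fn main() {
        // Immutable by default: without `mut`, the reassignment below won't compile.
        let mut current = State::Idle;
        println!("{}", describe(&current));
        current = State::Running;
        println!("{}", describe(&current));
    }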
When I see people mention C++ with MISRA rules, I just think -- why do we need all these extra rules, often checked by a separate static analysis tool and enforced manually (which comes down to an audit/compliance requirement), when they make perfect sense and could be done by the compiler? Missing switch cases happen often when an enum is modified to include one extra entry and people don't update all the code that uses it. Making the check mandatory at the compiler level is an obvious choice.
-Wswitch
Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. (The presence of a default label prevents this warning.) case labels that do not correspond to enumerators also provoke warnings when this option is used, unless the enumeration is marked with the flag_enum attribute. This warning is enabled by -Wall.
<https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#inde...>
The compiler can do that... And it's included in -Wall. It's not on by default but is effectively on in any codebase where anyone cares...
Please don't argue "but I don't need to add a flag in Rust" -- this isn't Rust, there are reasons the standards committee finds valid for why it is the way it is, and honestly you're welcome to implement your own compiler that turns it on by default, just like the Rust compiler, which has no standard because "the compiler is the standard".
MISRA requires that you explicitly write the default, rejecting anything unexpected. So -Wswitch doesn't get it done, even though I agree that if C had standardized this requirement (which it did not), that would get you what you need.
C also lacks Rust's non_exhaustive attribute. If the person publishing a Goose type says it's non-exhaustive, then in their own code nothing changes -- all their code needs to account for all the values of type Goose as before -- but everybody else using that type must accept that the author said it's non-exhaustive, so they cannot account for all values of this type except by writing a default handler.
So e.g. if I publish an AmericanPublicHoliday type when Rust 1.0 ships in 2015, and I mark it non-exhaustive since by definition new holidays may be added, you can't write code that just handles each of the holidays separately; you must have a default handler. When I add Juneteenth to the type, your code is fine: that's a holiday you must handle with your default handler, which you were obliged to write.
On the other hand IpAddr, the IP address type, is an ordinary exhaustive type: if you handle both the V4 (Ipv4Addr) and V6 (Ipv6Addr) cases you've got a complete handling of IpAddr.
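A minimal sketch of that holiday example, with made-up variant names (inside the defining crate the attribute changes nothing, which is the "nothing changes for the author" part):

    // In the crate that defines the type:
    #[non_exhaustive]
    pub enum AmericanPublicHoliday {
        NewYearsDay,
        IndependenceDay,
        Thanksgiving,
        // Juneteenth,  // can be added later without breaking downstream crates
    }

    // In a downstream crate, a match without a wildcard arm is rejected,
    // because the author reserved the right to add variants:
    pub fn is_in_summer(h: &AmericanPublicHoliday) -> bool {
        match h {
            AmericanPublicHoliday::IndependenceDay => true,
            _ => false, // the obligatory default handler
        }
    }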
You can always use -Wswitch-enum then.
Why not? It's a big issue. You say it's "on in any codebase where anyone cares", and I agree with that but in my experience most C++ developers don't care.
I regularly have to work with other people's C++ where they don't have -Wall -Werror. It's never an issue in Rust.
Also I don't buy that they couldn't fix this because it would be a breaking change. That's just an excuse for not bothering. They've made backwards-incompatible changes in the past, e.g. removing dynamic exception specifications, changing `auto`, changing the behaviour around operator==. They can just gate it on the standard version, just like Rust uses Editions.
Of course they won't, because the C++ standards committee is still very much "we don't need seatbelts, just drive well like me".
To be fair, -Werror is kind of terrible. The set of warnings is very sensitive to the compiler version, so as soon as people work on the project with more than one compiler or even more than one version of the same compiler, it just becomes really impractical.
An acceptable compromise can be that -Werror is enabled in CI, but it really shouldn't be the default at least in open-source projects.
Not even that. -Wall -Werror should be limited to local builds, and should never touch any build config that is invoked by any pipeline.
I think you inadvertently showed why this sort of thing is simply bad practice and a notorious source of problems. With -Wall -Werror you can turn any optional nit remark into a blocked pipeline requiring urgent maintenance. I know it because I had to work long hours in a C++ project that suddenly failed to build because a moron upstream passed -Wall -Werror as transitive build flags. We're talking about production pipelines being blocked due to things like function arguments being declared but not used.
Sometimes I wonder if these discussions on the virtues of blindly leaning on the compiler are based on solid ground, or are instead opinionated junior devs passing off their Skinner box as some kind of operational excellence.
> GCC "overflow analysis" warnings
I think I've seen this with `fmt`, and it was a GCC compiler bug. Not much you can do about that.
The C and C++ standards are quite minimal and whether or not an implementation is "compliant" or not is often a matter of opinion. And unlike other language standards (e.g. Java or Ada) there isn't even a basic conformance test suite for implementations to test against. Hence why Clang had to be explicitly designed for GCC compatibility, particularly for C++.
Merely having a "language standard" guarantees very little. For instance, automated theorem proving languages like Coq (Rocq now, I suppose)/Isabelle/Lean have no official language standard, but they are far more defined and rigorous than C or C++ ever could be. A formal standard is a useful broker for proprietary implementations, but it has dubious value for a language centered around an open source implementation.
Then why is this a MISRA rule by itself? Shouldn't it just be "every codebase must compile with -Wall or equivalent"?
Not all compilers have a -Wall equivalent. GCC, Clang and MSVC do, but RANDOM_EMBEDDED_CHIP's custom compiler might not, and that is a valid target for MISRA compliance.
I doubt every single thing that needs MISRA gets compiled with an industry-standard compiler; I wouldn't be surprised if GCC is the exception for most companies targeting MISRA compliance.
Firstly, in terms of what the rules require. Some MISRA rules are machine checkable. Your compiler might implement them or, more likely, a MISRA auditing tool you bought does so. Some MISRA rules need human insight in practice. Is this OK, how about that? A good code review process should be able to catch these, if the reviewers are well trained. But a final group are very vague, almost aspirational, like the documentation requirements, at their best these come down to a good engineering lead, at their worst they're completely futile.
Secondly in terms of impact, studies have shown some MISRA rules seem to have a real benefit, codebases which follow these rules have lower defect rates. Some are neutral, some are net negative, code which followed these MISRA rules had more defects.
Thirdly in terms of what they do to the resulting software. Some MISRA rules are reasonable choices in C, you might see a good programmer do this without MISRA prompting just because they thought it was a good idea. Some MISRA rules prohibit absolute insanity. Stuff like initializing a variable in one switch clause, then using it in a different clause! Syntactically legal, and obviously a bad idea, nobody actually does that so why write a whole rule to prohibit it? But then a few MISRA rules require something no reasonable C programmer would ever write, and for a good reason, but it also just doesn't really matter. Mostly this is weird style nits, like if your high school English essay was marked by a NYT copy editor and got a D minus because you called it NASCAR not Nascar. You're weird NYT, you're allowed to be weird but that's not my fault and I shouldn't get penalized.
I think this is still very much a debatable point. There are disadvantages to exceptions, mostly around code size and performance. But they are still the only error handling mechanism that anyone has found that defaults to adding enough context to errors to actually be useful (except of course in C++, because C++ doesn't like having useful constructs).
Rust error handling tends towards not adding any kind of context whatsoever to errors - if you use the default error mechanisms and no extra libraries. That is, if you have a call stack three functions deep that uses `?` for error handling, at the top level you'll only get an error value, you'll have no idea where the value originated from, or any other information about the execution path. This can be disastrous for actually debugging hard to reproduce errors.
Put differently: unless you manually add context to the error (or use a library that does something like this for you on top of the default ? behavior), you won't get any information about where an error occurred at all.
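A minimal sketch of that claim, with made-up file names -- the error that reaches main() is just the leaf io::Error, carrying no record of which call produced it:

    use std::fs;
    use std::io;

    fn level3(path: &str) -> Result<String, io::Error> {
        fs::read_to_string(path) // fails with e.g. "No such file or directory"
    }

    fn level2(path: &str) -> Result<String, io::Error> {
        level3(path) // just forwards the bare error value
    }

    fn level1() -> Result<String, io::Error> {
        let a = level2("config/a.toml")?;
        let b = level2("config/b.toml")?;
        Ok(a + &b)
    }

    fn main() {
        // Prints only "No such file or directory (os error 2)":
        // nothing says which file, which function, or which call chain.
        if let Err(e) = level1() {
            eprintln!("{e}");
        }
    }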
Sure, with exceptions, you don't know statically where an exception might happen. But at runtime, you do get the exact information. So, if the error is hard to reproduce, you still have information about where exactly it occurred in those rare occasions where it happened.
OK, so, if I write the canonical modern C++ Hello World, execute it against an environment where the "standard output" doesn't exist, where does this stack trace get recorded? Maybe it depends on the compiler and standard library implementation somehow?
My impression is that in reality C++ just ignores the problem and carries on, so actually there was no stack trace, no logging, it just didn't work and too bad. Unsurprisingly people tasked with making things work prefer a language which doesn't do that.
If you're executing against a POSIX-compatible environment, then stdin, stdout, and stderr are expected to exist and be configured properly if you want them to work[1].
If you're executing against some other environment, like webassembly or an embedded system, then you'll already (hopefully) be using some logging and error handling approach that sends output to the correct place. Doesn't matter if you're using C, C++, .NET, Rust, Zig, etc.
For example, webassembly is an environment without stdio streams. It's your responsibility to make sure there is a proper way to record output, even if it's just a compatibility layer that goes to console.log.
[1]: https://pubs.opengroup.org/onlinepubs/9799919799/functions/s...
In the specific case of "Hello, World" it's more embarrassing. The Rust Hello World does indeed experience and report errors if there are any, the canonical C just ignores them, as does the C++.
Can you give an example for each of those?
I don't think it's a bug if, like in the C example, you don't handle the return value of the function you are calling. The strace shows that the function returned an error, but the code doesn't check it. Not a language flaw.
In fact, in most of the languages that "don't have the bug", the runtime is automagically capturing the issue and aborting the program. Like an exception. Rust just "doesn't have the bug" because the compiler forces you to handle the error. All the .NET languages do the same thing at runtime and force you to handle the I/O error... with an exception handler.
Unfortunately, your talking points just seem like more Rust fanaticism trying to discredit any other language. This happens in every single discussion about any language other than Rust, especially C/C++. I'm not going to engage any further.
Now on to your specific question.
First of all, I explicitly called out C++ exceptions as not having this useful property. C++ exceptions don't collect a stack trace, and the C++ runtime simply exits with an error code if an exception is thrown without a handler.
Now, moving to any other language with exceptions. What happens by default if executing in an environment without stdout will depend on details of the runtime of that language for that environment.
But let's assume that the runtime is not written to handle this gracefully. Here's the entirety of the code you need to add to your exception-based program to handle a lack of stdout and still get stack traces, in pseudo-code:
int main() {
try {
return oldMain();
} catch (Exception e) {
with(File f = openFile("my-log.log")) {
f.write("Unhandled exception:");
e.printStackTrace(f);
}
}
}
Where oldMain() is the main() you'd write for the same program if you did have stdout.
Rust can store backtraces in value objects as well [0]. It's opt-in (capturing a stack trace at the error value's creation may be expensive if that error is eventually handled), but with the anyhow crate you get a decent compromise: a stack trace is captured at the boundary of your program and libraries during the conversion, and then shown only if the error bubbles up to main().
And you get the bonus of storing both the stack trace, and relevant context where needed, e.g. to show values of parameters. Here's how that playground example above fails:
Error: Second try
Caused by:
0: Parsing 'forty-two' as number
1: invalid digit found in string
Stack backtrace:
0: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
at ./.cargo/registry/src/index.crates.io-6f17d22bba15001f/anyhow-1.0.94/src/backtrace.rs:27:14
1: <core::result::Result<T,F> as core::ops::try_trait::FromResidual<core::result::Result<core::convert::Infallible,E>>>::from_residual
at ./.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/result.rs:2009:27
2: playground::parse_number
at ./src/main.rs:25:8
3: playground::parse_and_increment
at ./src/main.rs:18:18
4: playground::main
at ./src/main.rs:7:19
...
[0] https://play.rust-lang.org/?version=stable&mode=debug&editio...
I did know about anyhow, that was exactly the library I was mentioning. But that requires manually adding context at all places where the error is passed.
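To make the "manually adding context at all places" point concrete, here's a minimal sketch using anyhow (the function names and messages are made up):

    use anyhow::{Context, Result};

    fn parse_number(s: &str) -> Result<i64> {
        s.parse::<i64>()
            .with_context(|| format!("Parsing {s:?} as number")) // context added here...
    }

    fn load_setting(raw: &str) -> Result<i64> {
        parse_number(raw).context("Reading the retry-count setting") // ...and here...
    }

    fn main() -> Result<()> {
        // ...and here; skip any layer and that part of the story is missing.
        let n = load_setting("forty-two").context("Loading configuration")?;
        println!("{n}");
        Ok(())
    }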
In Java, you can disable stack traces altogether, which massively reduces the cost (this is what e.g. Crafting Interpreters suggests -- it's a good course, but the author is both wrong and actively misleading about the cost model of the implementations covered in parts 1 and 2 because of this), but few codebases do this.
This is both a blessing and a curse. Seeing the Rust docs require 561 crates makes it clear that Rust/cargo is headed down the same path as node/npm:
Downloaded 561 crates (50.7 MB) in 5.21s (largest was `libsqlite3-sys` at 5.1 MB)
It's a whole web service with crates.io webhooks to build and update new documentation every time a crate gets updated; it tracks state in a database and stores data on S3, etc. Obviously if you just want to build the docs for one crate yourself you don't need any of that. The "rustdoc" command has a much smaller list of dependencies.
* Rust has a strong type system, with good encapsulation and immutability by default, so the library interfaces are much less fragile than in JS. There's tooling for documenting APIs and checking SemVer compat.
* Rust takes stability more seriously than Node.js. Node makes SemVer-major releases regularly, and for a long time had awful churn from unstable C++ API.
* Cargo/crates-io has a good design, and a robust implementation. It had a chance to learn from npm's mistakes, and avoid them before they happened (e.g. it had a policy preventing left-pad from day one).
And the number of deps looks high, but it isn't what it seems. Rust projects tend to split themselves into many small packages, even when they are all part of the same project written by the same people.
Cargo makes all transitive dependencies very visible. In C you depend on pre-built dynamic libraries, so you just don't see what they depend on, and what their dependencies depend on.
For example, Rust's reqwest shows up as 150 transitive dependencies, but it has fewer supported protocols, fewer features, and less code overall than the single dependency that libcurl appears as.
There's an argument to be made that there are too many packages from too many authors to trust everything. I don't find the argument to be too convincing, because we can play what-if games all day long, and if you don't want to use them, you get to write your own.
Really nice macro system.
First class serde.
First class sync/send
Derives!
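A minimal sketch of what the derive story looks like in practice, assuming the third-party serde (with its derive feature) and serde_json crates are listed in Cargo.toml:

    use serde::{Deserialize, Serialize};

    // Derives generate the boilerplate: debug printing, cloning, comparison,
    // and (via serde's derive macros) serialization in and out.
    #[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
    struct Point {
        x: i32,
        y: i32,
    }

    fn main() -> Result<(), serde_json::Error> {
        let p = Point { x: 1, y: 2 };
        let json = serde_json::to_string(&p)?;
        println!("{json}"); // {"x":1,"y":2}
        let back: Point = serde_json::from_str(&json)?;
        assert_eq!(p, back);
        Ok(())
    }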
What do you mean? `Serialize` and `Deserialize` are not part of std.
Because Rust's package ecosystem is more robust it's less anxious about the strict line between things everybody must have (in the standard library) and things most people want (maybe or maybe not in the standard library). In C++ there's a powerful urge to land everything you might need in the stdlib, so that it's available.
For example the FreeBSD base system includes C++. They're not keen on adding to their base system, so for example they seem disinclined to take Rust, but when each C++ ISO standard bolts in whatever new random nonsense well that's part of C++ so it's in the base system for free. Weird data structure a game dev wants? An entire linear algebra system from Fortran? Comprehensive SI unit systems? It's not up to the FreeBSD gatekeepers, a WG21 vote gets all of those huge requirements into FreeBSD anyway.
There are two significant barriers to Rust in FreeBSD base -- first, cultural: it's just a bunch of greybeards opposed to anything and everything new; and second, technical: Rust just doesn't (or didn't) have compiler backends for the same subset of platforms FreeBSD does (or did). (This situation is improving as FreeBSD finally drops official support for obsolete SPARC, 32-bit ARM, MIPS, and 32-bit PowerPC platforms, but obviously cultural barriers remain.)
https://rmod-files.lille.inria.fr/Team/Texts/Papers/Blac03a-...
Traits, as a CS concept, are part of the OOP paradigm.
The whole FP vs OOP distinction makes little sense these days, as it has mostly been shown that each concept from the one can fit neatly within the other, and vice versa.
Reverse Uno!
https://www.haskellforall.com/2013/02/you-could-have-invente...
And someone called Samuel the Bloggy Badger happens to have another blog post on how comonads are really more like neighbourhoods:
https://gelisam.blogspot.com/2013/07/comonads-are-neighbourh...
...so it might all just be a scam!
The only big difference is how implementation is mapped into the trait specification.
"Classes, Jim, But Not as We Know Them — Type Classes in Haskell: What, Why, and Whither"
https://www.microsoft.com/en-us/research/publication/classes...
"Adventure with Types in Haskell"
https://www.youtube.com/watch?v=6COvD8oynmI
https://www.youtube.com/watch?v=brE_dyedGm0
In the first lecture he discusses how Haskell relates to OOP with regard to subtyping and generic polymorphism, and how, although different on the surface, they share those CS concepts in their own ways.
From slide 40:
> So the links to intensional polymorphism are closer than the links to OOP.
From the first bullet of slide 43:
> No problem with multiple constraints
> f :: (Num a, Show a) => a -> ...
From the second bullet:
> Existing types can retroactively be made instances of new type classes (e.g. introduce new Wibble class, make existing types an instance of it):
> class Wibble a where
> wib :: a -> Bool
> instance Wibble Int where
> wib n = n+1
From slide 46:
> In Haskell you must anticipate the need to act on arguments of various type
> f :: Tree -> Int
> vs
> f’ :: Treelike a => a -> Int
> (in OO you can retroactively sub-class Tree)
From slide 50:
> In Java (ish):
> inc :: Numable -> Numable
> from any sub-type of Numable to any super-type of Numable
> In Haskell:
> inc :: Num a => a -> a
> Result has precisely same type as argument
I appreciate you sharing informative links even though they prove you wrong. I haven't seen this set of slides before but I find it a very good concise explanation of why Haskell classes are not traditional OOP classes or interfaces.
Think about it: if the Rust trait system were highly similar to Java interfaces, why would people rave about it?
Mostly yes. In C/C++, the defaults are usually in the less safe direction for historical reasons.
For some cases you can make an argument that the right default would have been safer: mutability and implicit deductions are both sometimes footguns. But in other cases the right default isn't so much safer as just plain better; single-argument constructors should default to explicit, for example, and all the functions which qualify as constexpr might as well be constexpr by default -- there's no benefit remaining for the contrary.
My favourite wrong default is the memory ordering. The default memory ordering in C++ is Sequentially Consistent. This default doesn't seem obviously wrong, what would have been better? Surely we don't want Relaxed? And we can't always mean Release, or Acquire, and in some cases the combination Acquire-Release means nothing, so that's bad too. Thus, how can Sequentially Consistent be the wrong default? Easy - having a default was wrong. All the options were a mistake, the moment the committee voted they'd already fucked up.
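For contrast, a minimal sketch of the Rust atomics API, where there is no default and every operation has to name its ordering:

    use std::sync::atomic::{AtomicUsize, Ordering};

    static COUNTER: AtomicUsize = AtomicUsize::new(0);

    fn main() {
        COUNTER.fetch_add(1, Ordering::Relaxed); // the ordering must be spelled out
        let n = COUNTER.load(Ordering::SeqCst);  // here too
        println!("{n}");
    }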
There’s a reason why ML and Haskell compilers generally have that as a warning by default and not an error: when you need a pipeline of small transformations of very similar languages, the easiest way to go is usually declare one tree type that’s the union of all of them, then ignore the impossible cases at each stage. This takes the problem entirely out of the type system, true, but an ergonomic alternative for that hasn’t been invented, as far as I know. Well, aside from the micropass framework in Scheme, I guess, but that requires exactly the kind of rich macros that Rust goes out of its way to make ugly. (There have been other attempts in the Haskell world, like SYB, but I haven’t seen one that wouldn’t be awkward.)
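A rough Rust rendering of that pattern, with made-up names -- one shared tree type, and each pass spelling out the "impossible here" cases that ML or Haskell would let you leave to a warning:

    #[allow(dead_code)] // `Sugar` only exists between two of the passes
    enum Expr {
        Lit(i64),
        Add(Box<Expr>, Box<Expr>),
        Sugar(Box<Expr>), // removed by an earlier desugaring pass
    }

    // A pass that runs after desugaring still has to "handle" Sugar somehow.
    fn eval(e: &Expr) -> i64 {
        match e {
            Expr::Lit(n) => *n,
            Expr::Add(a, b) => eval(a) + eval(b),
            Expr::Sugar(_) => unreachable!("desugared before this pass"),
        }
    }

    fn main() {
        let e = Expr::Add(Box::new(Expr::Lit(2)), Box::new(Expr::Lit(3)));
        println!("{}", eval(&e));
    }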
Came from C++ and this is my least favorite part of the language ergonomics.
The move assignment semantic you see in Rust was also retrospectively termed "destructive" move because after the assignment A = B not only is the value from B now in A - that value is gone from B, B was in some sense "destroyed". If we write code which does A = B and then print(B) it won't compile! B is gone now.
Programmers actually really like that, it feels natural (with appropriate compiler support of course) and it doesn't have unexpected horrors to be uncovered.
In C++ they couldn't make that work (without destroying compatibility with existing C++ 98 code) so they invented their own C++ 11 "move" which is this more fundamental move plus making a new hollow object to go in B. This new hollow object allows the normal lifecycle of C++ 98 objects to happen as before - B goes out of scope, it gets destroyed.
So in C++ A = B; print(B) works - but it's not defined to do anything useful, you get some ready to clean up object, if B was a string maybe it's the empty string, if B was a remote file server then... maybe it's an "empty" remote file server? That's awkward.
It's worth understanding that the nicer Rust move isn't a novelty, or something people had no idea they wanted when C++ 11 was standardized, the "destructive" move already existed and was known to be a good idea - but C++ couldn't figure out a way to deliver it.
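A minimal sketch of the destructive move being described (the same E0382 error as in the Box example further down the thread):

    fn main() {
        let b = String::from("hello");
        let a = b;           // the value moves out of `b` into `a`
        println!("{a}");     // fine
        // println!("{b}");  // uncomment: error[E0382]: borrow of moved value: `b`
    }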
std::move and std::forward are neat, though somewhat cumbersome compared to Rust. C++ scope and lifetime rules, plus the fact that std::move doesn't actually move, are real footguns.
There have been attempts to add destructive moves (Circle) but it's a long way from Rust's ergonomics.
I concur with the OP that the default move semantics are where Rust shines.
> Immutable by default.
In C++, these two fight each other. You can't (for the most part) move from something that's immutable.
How does Rust handle this? I assume it drops immutability upon the move, and that doesn't affect optimizations because the variable is unused thereafter?
Mutability in Rust is an attribute of a location; not a value, so you can indeed move a value from an immutable location into a mutable one, thus "dropping immutability". (But you can only move out of a location that you have exclusive access to -- you can't move out of an & reference, for example -- so the effect is purely local.)
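A minimal sketch of "dropping immutability" by moving into a mutable location:

    fn main() {
        let v = vec![1, 2, 3];  // immutable binding: `v.push(4)` would not compile
        let mut w = v;          // move the value into a mutable location
        w.push(4);              // now mutation is allowed
        // `v` is no longer usable here; the value lives in `w`
        println!("{w:?}");
    }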
Or most languages! Many could easily imitate it too. I'd love a pytest mode or similar framework for python that looked for doc tests and has a 'ModTest' or something class.
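For reference, a minimal sketch of the Rust version being wished for here (the crate name my_crate is made up): the doc example is extracted and run by cargo test, and unit tests live in the same file as the code.

    /// Adds two numbers.
    ///
    /// The example in this doc comment is compiled and run by `cargo test`:
    ///
    /// ```
    /// assert_eq!(my_crate::add(2, 2), 4);
    /// ```
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn adds_negatives() {
            assert_eq!(add(-1, 1), 0);
        }
    }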
Google: https://google.github.io/styleguide/cppguide.html#Exceptions
> Given that Google's existing code is not exception-tolerant, the costs of using exceptions are somewhat greater than the costs in a new project.
> ...Things would probably be different if we had to do it all over again from scratch.
It's quite ironic to cite the Google C++ Style Guide as somehow supporting the case against exceptions. It's saying the opposite: we would probably use exceptions, but it's too late now, and we can't.
Somehow people miss this...
Those types of systems-y code can avoid exceptions if they want. Nobody said exceptions are a panacea. The alternative error models have their own performance and other problems, and those can manifest differently to other types of codebases.
Just from a quick glance: I see he's talking about things like stack overflows and std::bad_alloc. In a discussion like this, those two are probably the worst examples of exceptions. They're the most severe exceptions, the ones the fewest people care to actually catch, and the ones that error codes are possibly the worst at handling anyway. (Do you really want an error returned from push_back?) The most common stuff is I/O errors, permission errors, format errors, etc., which aren't well represented by resource exhaustion at all, much less memory exhaustion.
P.S. W.r.t. "the top C++ gurus/leaders" - Herb is certainly talented, but I should note that the folks who wrote Google's style guide are... not amateurs. They have been involved in the language development and standardization process too. And they're just as well aware of the benefits and footguns as anyone.
As a specific example -- and this is something that's been a problem in the std lib before -- when you code something that needs to maintain an invariant, e.g. a length field for an unsafe operation, that invariant has to be upheld on every path out of your function.
In the absence of exceptions, you just need to make sure your length is correct on returns from your function.
With exceptions, exits from your function are now any function call that could raise an exception; this is way harder to deal with in the general case. You can add one exception handler to your function, but it needs to deal with fixing up your invariant wherever the exception occurred (e.g. if the fix-up operation that needs to happen is different based on where in your function the exception occurred).
To avoid that you can wrap every call that can cause an exception so you can do the specific cleanup that needs to happen at that point in the function... But at that point what's the benefit of exceptions?
That's the wrong way to handle this though. The correct way (in most cases) is with RAII. See scope guards (std::experimental::scope_exit, absl::Cleanup, etc.) if you need helpers. Those are not "way harder" to deal with, and whether the control flow out of the function is obvious or not is completely irrelevant to them -- in fact, that's kind of their point.
In fact, they're better than both exception handling and error codes in at least one respect: they actually put the cleanup code next to the setup code, making it harder for them to go out of sync.
Huh? I don't get it. This:
stack.push_back(k);
absl::Cleanup _ = [&] { assert(stack.back() == k); stack.pop_back(); };
if (foo()) {
printf("foo()\n");
return 1;
}
if (bar()) {
printf("bar()\n");
return 2;
}
baz();
return 3;
is both easier, more readable, and more robust than:
stack.push_back(k);
if (foo()) {
printf("foo()\n");
assert(stack.back() == k);
stack.pop_back();
return 1;
}
if (bar()) {
printf("bar()\n");
assert(stack.back() == k);
stack.pop_back();
return 2;
}
baz();
assert(stack.back() == k);
stack.pop_back();
return 3;
as well as:
stack.push_back(k);
auto pop_stack = [&] { assert(stack.back() == k); stack.pop_back(); };
if (foo()) {
printf("foo()\n");
pop_stack();
return 1;
}
if (bar()) {
printf("bar()\n");
pop_stack();
return 2;
}
baz();
pop_stack();
return 3;
and unlike the others, it avoids repeating the same code three times.
(Ironically, I missed the manual cleanups before the final returns in the last two examples right as I posted this comment. Edited to fix now, but that itself should say something about which approach is actually more bug-prone...)
The gnarliest scenario I recall was a ring-buffer implementation that relied on a field always being within the valid length, and a single code path not performing a mod operation, which was only observably a problem after a specific sequence of reserving, popping, and pushing.
EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?
The stack length (and contents, too). It pushes, but ensures a pop occurs upon returning. So the stack looks the same before and after.
> I was imagining a function that manipulated a collection, and e.g. needed to decrement a length field to ensure the observable elements remained valid, then increment it, then do something else.
That is exactly what the code is doing.
> EDIT: oh, I think I see; is your code validating the invariant, or maintaining it?
Both. First it manipulates the stack (pushing onto it), then it does some stuff. Then before returning, it validates that the last element is still the one pushed, then pops that element, returning the stack to its original length & state.
> The gnarliest scenario I recall was a ring-buffer implementation that [...]
That sounds like the kind of thing scope guards would be good at.
E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.
> E.g. in the case you provided, if pop & push couldn't fail, that would just be two calls in sequence.
I have no idea what you mean here. Everything in the comment would be exactly the same even if stack.push_back() was guaranteed to succeed (maybe due to a prior stack.reserve()). And those calls aren't occurring in sequence, one is occurring upon entrance and the other upon exit. Perhaps you're confused what absl::Cleanup does? Or I'm not sure what you mean.
I think you're going to have to give a code example if/when you have the chance, to illustrate what you mean.
But also, even if you find "a counterexample" where something else is better than exceptions, that just means you finally found a case where there's a different tool for a (different) job. Just like how me finding a counterexample where exceptions are better doesn't mean exceptions are always better. You simply can't extrapolate from that to exceptions being bad in general, which is kind of my whole point.
The problem with exceptions (as implemented in C++), and the reason there's a counter-example to them, is that they're not something you can opt in or out of where it makes sense. At least as I understand it, there's no way for foo/bar/baz to guarantee to you that they can't throw an exception, so that you can rely on it (e.g. in a way that if this changes, you get a compiler error telling you something you were relying on has changed). noexcept just results in the process being terminated on exception, right?
First, I think you're making an incorrect assumption -- the assumption that "if (foo())" means "if foo() failed". That's not what it means at all. They could just as well be infallible functions doing things like:
if (tasks.empty()) {
printf("Nothing to do\n");
return 1;
}
or:
if (items.size() == 1) {
return items[0];
}
Second, even ignoring that, you'd still need the cleanup block! The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.
Finally, your foo() and bar() calls can actually begin to fail in the future when more functionality is added to them. Heck, they might call a user callback that performs arbitrary tasks. Then your code will break too. Whereas with the cleanup block, it'll continue to be correct.
What you're doing is simplifying code by making very strong and brittle -- not to mention unguaranteed in almost all cases -- assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code. In that context, putting them together seems "unnecessary", yeah. But point-in-time programming is not software engineering. The situation is radically different when you factor in what can go wrong during updates and maintenance.
In a language without exceptions, I'm also assuming that a function conveys whether it can fail via its prototype; in Rust, changing a function from "returns nothing" to "returns a Result" will result in a warning that you're not handling it.
> What you're doing is simplifying code by making very strong assumptions on how it looks during initial authorship, and assuming the code will remain static throughout the lifetime of your own code.
But this is where the burden of exceptions is most pronounced; if you code as if everything can fail, there's no "additional" burden, you're paying it all the time. The case you're missing is in the simpler side, where it's possible for something to not fail, and that if that changes, your compiler tells you.
It can even become quite a great boon, because infallibility is transitive; if every operation you do can't fail, you can't fail.
To be very clear, I was explaining why, even if you somehow have a guarantee here that absolutely nothing ever fails, this code:
stack.push_back(k);
absl::Cleanup _ = [&] { assert(stack.back() == k); stack.pop_back(); };
foo();
bar();
baz();
return 3;
is still better than this code w.r.t. maintainability and robustness:
stack.push_back(k);
foo();
bar();
baz();
assert(stack.back() == k);
stack.pop_back();
return 3;
The reason, as I explained above, is the following:
>> The fact that it is next to the setup statement (i.e. locality of information) is extremely important from a code health and maintainability standpoint. Having setup & cleanup be a dozen lines away from each other makes it far too easy to miss or forget one of them when the other is modified. Putting them next to each other prevents them from going out of sync and diverging over time.
Fallibility is absolutely irrelevant to this point. It's about not splitting the source of truth into two separate spots in the code. This technique kills multiple birds at once, and handling errors better in the aforementioned cases is merely one of its benefits, but you should be doing it regardless.
Do you see what I mean?
For instance, this is the the scenario I expect to be harder to manage with exceptions & cleanup:
this.len += 1;
foo();
this.len += 1;
bar();
this.len += 1;
baz();
return ...;
Without infallibility, you need a separate cleanup scope for each call you make. With this, the change to the private variable is still next to the operation that changes it, you just don't need to manage another control flow at the same time.
EDIT: sorry, had the len's in the wrong spot before
They're not. I've done this all the time, in the vast majority of cases it's perfectly fine. It sounds like you might not have tried this in practice -- I would recommend giving it a shot before judging it, it's quite an improvement in quality of life once you're used to it.
But in any large codebase you're going to find occasional situations complicated enough to obviate whatever generic solution anyone made for you. In the worst case you'll legitimately need gotos or inline assembly. That's life, nobody says everything has a canned solution. You can't make sweeping arguments about entire coding patterns just because you can come up with the edge cases.
> Without infallibility, you need a separate cleanup scope for each call you make.
So your goal here is to restore the length, and you're assuming everything is infallible (as inadvisable as that often is)? The solution is still pretty darn simple:
absl::Cleanup _ = [&, old_len = len] { len = old_len; };
foo();
this.len += 1;
bar();
this.len += 1;
baz();
this.len += 1;
return ...;
No need for a separate cleanup for every increment.
Your parenthetical is kind of my point though. It's rare to need mid-function cleanups that somehow contradict the earlier ones (because logically this often doesn't make sense), and when that is legitimately necessary, those are also fairly trivial to handle in most cases.
I'm happy to just agree to disagree and avoid providing more examples for this so we can lay the discussion to rest, so I'll leave with this: try all of these techniques -- not necessarily at work, but at least on other projects -- for a while and try to get familiar with their limitations (as well as how you'd have to work around them, if/when you encounter them) before you judge which ones are better or worse. Everything I can see mentioned here, I've tried in C++ for a while. This includes the static enforcement of error handling that you mentioned Rust has. (You can get it in C++ too, see [1].) Every technique has its limitations, and I know of some for this, but overall it's pretty decent and kills a lot of birds with one stone, making it worth the occasional cost in those rare scenarios. I can even think of other (stronger!) counterarguments I find more compelling against exceptions than the ones I see cited here, but even then I don't think they warrant avoiding exceptions entirely.
If there's one thing I've learned, it's that (a) sweeping generalizations are wrong regardless of the direction they're pointed at, as they often are (this statement itself being an exception), and (b) there's always room for improvement nevertheless, and I look forward to better techniques coming along that are superior to all the ones we've discussed.
There are specific scenarios that are a major issue, yes. But as the title of the video implies, the problem with exceptions runs far deeper. Imagine being a C++ library author who wants to support as many users as possible: you simply couldn't use exceptions even if you wanted to, even if most of your users are using exceptions. The end result is that projects that use exceptions have to deal with two different methods of error handling, i.e. they get the worst of both worlds (the binary footprint of exceptions, the overhead of constantly checking error codes, and the mental overhead of dealing with it all).
C++ exceptions are a genuinely useful language feature. But I wish the language and standard library wasn't designed around exceptions. C++ has managed to displace C almost everywhere except embedded and/or kernel programming, and exceptions are a big reason for that.
I'm pretty sure that (much) less than 50% of the C++ code out there is "a C++ library that wants to support as many users as possible" -- I imagine most code is application code, not even C++ library code in the first place. It's perfectly fine to throw e.g. a "network connection was closed" or "failed to write to disk" exception and then catch it somewhere up the stack.
> The end result is that projects that use exceptions have to deal with two different methods of error handling. i.e. they get the worst of both worlds
No, that's not true. You might get a bit of marginal overhead to think about, but it's not the worst of both whatsoever. If you want to use exceptions and your library doesn't use them, all you gotta do is wrap the foo() call in CheckForErrors(foo()), and then handle it (if you want to handle it at all) at the top level of your call chain. It's not the worst of both worlds at all -- in fact it's literally less work than simply writing
std::expected<Result, std::error_code> e = foo();
and on top of that you get to avoid the constant checking of error codes and modifying every intermediate caller, leaving their code much simpler and more readable.
And of course if you don't want to use exceptions but your library does use them, then of course you can do the reverse:
std::expected<Result, std::error_code> e = CallAndCatchError(foo()).
Nobody is claiming every error should be an exception. I'm just saying you're exaggerating and extrapolating the arguments too far. A sane project would have a mix of different error models, and that would very much still be the case if none of the problems you mentioned existed at all, because they're different tools solving different problems.
For most people, no, you definitely want it to just work or explode, which is indeed what happens in normal Rust, and, not coincidentally, the actual effect when this exception happens in your typical C++ application after it is done with all the unwinding and discovers there is no handler (or that the handler was never tested and doesn't actually somehow cope).
But, sometimes that is what you wanted, and Linus has been very clear it's what he wants in the kernel he created.
For such purposes Rust has Vec::try_reserve() and Vec::push_within_capacity(), which let us express the idea that we'd like more room and to know if that wasn't possible, and also, if there was no room left for the thing we pushed, that we want back the thing we were trying to push -- which otherwise we don't have any more.
There is no analogous C++ API, std::vector just throws an exception and good luck to you AFAIK.
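For reference, a minimal sketch of that fallible-push idea built on the stable try_reserve API (the helper name is made up):

    /// Try to push without ever aborting on allocation failure;
    /// on failure the caller gets the value back.
    fn push_or_give_back<T>(v: &mut Vec<T>, value: T) -> Result<(), T> {
        // try_reserve reports allocation failure instead of aborting the process
        if v.try_reserve(1).is_err() {
            return Err(value);
        }
        v.push(value); // capacity is guaranteed, so this cannot allocate
        Ok(())
    }

    fn main() {
        let mut v: Vec<u64> = Vec::new();
        match push_or_give_back(&mut v, 42) {
            Ok(()) => println!("pushed, len = {}", v.len()),
            Err(value) => println!("out of memory, still own {value}"),
        }
    }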
Sure, yes. It's trivial to change to try_reserve if that's what you want. (There are other solutions for that as well, but they're more complicated and better for other situations.)
> Your function takes a reference and then tries to copy the referenced object into the growable array. But of course nobody said this object can be copied - after all we want it back if it won't fit so perhaps it's unique or expensive to make
Just extend it to allow moves then? It's pretty trivial. (Are you familiar with move semantics in C++?)
I can't see how to make that work, but I also can't say for sure it's impossible; all I can tell you is that I was genuinely trying, and all I got for my trouble was a segfault that I don't understand and couldn't fix.
Edited to add: In case it helps the signature we want is:
pub fn push_within_capacity(&mut self, value: T) -> Result<(), T>
If you're not really a Rust person: this takes a value T -- not a reference, not a magic ultra-hyper-reference, nor a pointer, it's taking the value T, and the value is gone now, which just isn't a thing in C++ -- then it returns either Ok(()), which signifies that this worked, or Err(T), thus giving back the T because we couldn't push it.
I agree that your new code does roughly what you'd do in C++ if you wanted this, but you get to the same place as before -- if for example you try commenting out your allocation failure boolean, the code just blows up now.
There are lots of APIs like this which make sense in Rust but not in C++ because if you write them in Rust the programmer is going to handle edge cases properly, but in C++ the programmer just ignores the edge cases so why bother.
Er... doesn't this blow up in Rust? https://godbolt.org/z/eaaq43voT
pub fn main() {
let mut vec = Vec::new();
return vec.push_within_capacity(1).unwrap();
}
"But I can write correct C++" is trivially true because it's a Turing Complete language, and at the same time entirely useless unless you're playing "Um, actually".
I'm sorry, what? How in the world did you go from "exceptions are worse than error codes" to "that's why Linus doesn't like C++, he wants to write push_within_capacity() in C++ without exceptions and it's impossible" to "oh but your version doesn't move" to "oh I guess moving is possible too... but if you modified it to be buggy then it would crash" to "oh I see Rust would crash too... but it's OK because Rust programmers wouldn't actually let .unwrap() through code review"?? Aren't there .unwrap() calls in the standard library itself, never mind other libraries? So next we have "Oh I guess .unwrap() actually does get through code review... but it's OK because Rust programmers wouldn't write such bugs, unlike C++ programmers"?
Among the things Linus doesn't like about C++ are its quiet allocations and its hidden control flow, both of which are implicated here - I think those are both bad ideas too, but in this case I'm just the messenger, I didn't write an OS kernel (at least, not a real one people use) so I don't need a way to handle not being able to push items onto a growable array.
The problem isn't that "if you modified it to be buggy then it would crash" as you've described, the problem is that only your toy demo works, once we modify unrelated things like no longer setting that global to true the demo blows up spectacularly (Undefined Behaviour) whereas of course the Rust just reported an error.
> Aren't there .unwrap() calls in the standard library itself
Unsurprisingly an operating system kernel does not use std, only core and some of alloc. So we're actually talking only about core† and alloc, not the rest of std. There are indeed a few places where core calls unwrap() -- cases where we know that'll do what we meant, so if you wrote what you meant by hand, Clippy (at least if we weren't in core) would say you should just write unwrap there instead.
† As a C++ person you can think of core as equivalent to the C++ standard library "freestanding" mode. This is more true in the very modern era because reformists got a lot of crucial improvements into this mode whereas for years it had felt abandoned. So if you mostly work with say C++ 17, think "freestanding" but actually properly maintained.
We can't write unwrap here because it's not what we meant, so that's why it shouldn't pass review.
How are they a foot gun? It's not like C++ is the only language with exceptions. So what is particularly dangerous about C++ exceptions?
> trying to find some new solution
C++23 already has std::expected (= result type).
This is a major part of why I like languages like rust. I can do some pretty fearless refactoring that looks something like:
- Oh hey, there’s a string in this struct but I really need an enum of 3 possible values. Lemme just make that enum and change the field.
- Cargo tells me it broke call sites in these 5 places. This is now my todo list.
- At each of the 5 places, figure out what the appropriate value is for the enum and send that instead of the string.
- Oh, one of those places needs more context to know what to send, so I’ll add another parameter to the function
- That broke 3 other places. That’s now my to-do list.
Repeat until it compiles, and 99.9% of the time you’re done. (A rough sketch of that first enum change is below.)
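A rough sketch of the kind of change being described, with made-up names -- after swapping the String field for an enum, every call site that still passes a string is a compile error, and that error list is the todo list:

    struct Job {
        // was: priority: String,
        priority: Priority,
    }

    enum Priority {
        Low,
        Normal,
        High,
    }

    fn submit(queue: &mut Vec<Job>, priority: Priority) {
        queue.push(Job { priority });
    }

    fn main() {
        let mut queue = Vec::new();
        // was: submit(&mut queue, "high".to_string()); and so on
        submit(&mut queue, Priority::High);
        submit(&mut queue, Priority::Low);
        submit(&mut queue, Priority::Normal);

        for job in &queue {
            let label = match job.priority {
                Priority::Low => "low",
                Priority::Normal => "normal",
                Priority::High => "high",
            };
            println!("queued a {label}-priority job");
        }
    }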
With non-statically-typed languages you’re on your own trying to find the todo list above. If you have 100% test coverage you may be okay but even then it may miss edge cases that the type checker gets right away. Oh and even then, it’s likely that your 100% test coverage is spent writing a ton of useless tests that a type checker would give you automatically.
As nice as weakly/dynamically typed languages are to prototype greenfield code in, they lose very quickly once you have to maintain/refactor.
Importantly, no tests are required to guarantee that the refactor is safe - although no guarantees that it’s logically correct.
On the other hand, doing this exercise in a different low-level language involves a lot more “thinking” instead of just following the compiler’s complaints :)
In my free time I code 90+% in Rust, but for some areas, like OR (SAT, MILP, CSP), ML, or CAS, Python seems to be the better choice because types don't matter too much and if your code works, it works.
You can change your tsconfig to ignore the strictness but I don’t.
Having a type system from the start that cannot be disabled, and that forces you to always think of types instead of letting you sprinkle 'as any' around when the code works but doesn't compile (which is a major annoyance), is a huge benefit in my opinion.
This is pretty much not the case these days, the packages people use mostly have types.
And this sometimes holds for even fairly popular libraries, like d3.js which I sometimes use for visualization. The idiosyncratic API design for object manipulation, selecting DOM nodes by string id and doing stuff based on their associated data, just doesn't really work in a strongly-typed context without 50% of the code being unreadable casts. And d3 is still trying at least to be somewhat type-safe, unlike other libraries.
Heh I just did this today. Rust is really good language to prototype and refactor in.
It scares me how good C# is these days. Every killer feature of Rust and Lisp is already in C# or started there. Visual Studio makes VSCode look like a 90s shareware tool. Even the governance, by MS of all entities, is somehow less controversial than Rust’s.
Sum types are a must-have for me. I don't want to write software without sum types. In C# you can add third party libraries to mostly simulate sum types, or you can choose a style where you avoid some of the worst pitfalls from only having product types and a simple enumeration, but either is a poor shadow to Rust having them as a core language feature.
Also VS is a sprawling beast, I spend almost as much time in the search function of Visual Studio finding where a solution I've seen lives as I do hand solving a similar problem in Vim. I spend the time because in Vim the editor won't get in my way when I solve it by hand, while VS absolutely might "helpfully" insert unrelated nonsense as I type if I don't use the "proper" tools buried in page 4 of tab 6 of a panel of the Option->Config->Preferences->Options->More Options->Other section or whatever.
Visual Studio is what would happen if Microsoft asked 250 developers each for their best idea for a new VS feature and then did it, every year for the past several decades, without fail. No need for these features to work together or make sense as a coherent whole, they're new features so therefore the whole package is better, right? It's like a metaphor for bad engineering practice for every Windows programmer to see.
I would rather use a more raw, unrefined version of tech that is open source, so my code and DX are not at the whim of some corporate overlord! And given MS's track record, I do.
I always considered C#, F# and Rust as languages complementary to each other since each has their own distinct domain and use cases despite a good degree of overlap. Much less so than Java/Kotlin and Golang or any interpreted language (except Python and JS/TS in front-end) which are made obsolete by using the first three.
The only issue in C# is that structs (a) come with a default parameterless constructor (which can be overridden) and (b) can be default-initialized (default(T)) without the compiler complaining by default. These two aspects are not ideal and cannot be walked back, as they were introduced in ancient times, but they are rarely an issue in practice (and you can further use an analyzer to disallow either).
F# is more strict about it however and does not have such gaps.
And then none of those techniques work as well as manually typing out a required constructor, which hard-enforces that required data be provided upon object initialization.
I understand required vs optional immediately a la Rust and F# (ignoring for a moment F#’s null awareness) but as a 17 year C# dev, I’ve had to create an initialization chart to keep straight all of the C# techniques.
F# has units of measure which are quite a bit more powerful: https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
The process you describe (with “compile” replaced with “typecheck”) works fine for me in Python, with Pylance (and/or mypy) in VSCode.
> With non-statically-typed languages you’re on your own trying to find the todo list above.
This would be more accurately “in workflows without typechecking”, it’s not really about language features except that, long ago, it was uncommon for languages where running the code didn't rely on a compilation step that made use of type information to have typechecking tools available for use in development environments, and lots of people seem to be stuck in viewpoints anchored in that past.
The problem with python, ruby, JavaScript, and similar languages is that while yes, they have optional type checkers you can use… they were invented after the fact, and not everyone uses them, and it’s not mandatory to use them. The library you want to use may not have type information, etc. It’s a world of difference when the language has it mandatory from the start.
And that’s not even getting into how (a) damned good rust’s type checker is (b) the borrow checker, which makes the whole check process at least twice as valuable as type checking alone.
- Best mutability ergonomics of any language. E.g. `&mut` in a function parameter means the function can mutate it; `&` means it can't. This might be my favorite part of Rust, despite sounding obvious. Few languages have equivalents. (C++ and D are exceptions.) See the small sketch after this list.
- Easy building and dependency management
- No header files
- Best error messages of any language (This is addressed explicitly in the article)
- Struct + Enums together are a fantastic baseline for refactorable, self-consistent code.
- As fast as any
- Great overall syntax tradeoffs. There are things I don't like (e.g. Having to manually put Clone, Copy, and PartialEq on each simple enum, and having to manually write `Default` if I need a custom impl on one field), but it overall is better than alternatives.
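A minimal sketch of that first point about `&` vs `&mut` being visible both in the signature and at the call site:

    fn total(v: &[i32]) -> i32 {
        // shared borrow: can read, cannot mutate
        v.iter().sum()
    }

    fn push_twice(v: &mut Vec<i32>, x: i32) {
        // exclusive borrow: mutation is visible in the signature
        v.push(x);
        v.push(x);
    }

    fn main() {
        let mut v = vec![1, 2, 3];
        push_twice(&mut v, 4); // and visible again at the call site
        println!("{}", total(&v));
    }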
Rust enthusiasts online are often unpleasant, and it's perhaps their fault people are put off by the language. They repeat things like "fearless concurrency", "if it compiles, it works", and "that code is unsafe/unsound" without critically thinking. Or they overstate Rust as a memory-safety one-trick, while ignoring the overall language advantages.
Tangent: async Rust is not my cup of tea for ergonomics and compatibility reasons. I have reason to believe that many people who like it think async is synonymous with concurrent processes and nonblocking code.
I guess this is also a bit in the eyes of the beholder. It seems that any group that is enthusiastic about something new is “unpleasant” nowadays.
I would disagree with this, personally. Due to being tacked onto the language after the fact, the design of Rust's async made a number of concessions in order to fit into the language (for example, it had to work around the pre-existing restriction that all types are moveable by default).
But you're correct that no current popular language has yet developed anything better.
I would say Rust is still better than C++ here, because in Rust const is the default. In C++, people often either forget to write const, or intentionally don't write const because writing it everywhere clutters up the code.
I don't often like being judgemental (at least publicly!), but I'd argue that's just people being very bad developers...
You could argue having to add '&mut' at call sites everywhere (i.e. opposite to the way C++ does const in terms of call site vs target site) also clutters up the code in terms of how verbose it is, but it's still largely a good thing.
The Ruby community seems the nicest, I wonder why that is.
Fortunately the "return" keyword is allowed so I include it in my code to make it explicit. I just have to remember to look for it in any other code I'm reviewing.
I think the other part of it is that it is just part of a cohesive language design where everything is an expression, including things like ifs, matches, etc. that would be control-flow statements in other languages. It would be a little weird to say that functions are the only thing with different semantics.
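A small sketch of that "everything is an expression" point:

    // `if` and `match` are expressions, so the last expression
    // of a block (or of a function body) is its value.
    fn parity(n: i32) -> &'static str {
        if n % 2 == 0 { "even" } else { "odd" } // no `return` needed
    }

    fn main() {
        let label = match parity(7) {
            "even" => "divisible by two",
            _ => "not divisible by two",
        };
        println!("{label}");
    }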
Speed is great but let me just focus on the business problems and write something durable.
JetBrains also keeps a list of other data analysis libraries for Kotlin (and Java) as well: https://kotlinlang.org/docs/data-analysis-libraries.html
I would never consider sending PRs in another language; how would I know if I am wasting the project's time by contributing bad code? With Rust though, I have clippy and the compiler helping me along the way, like pair programming, so I can be fairly confident I'm sending something useful.
My understanding of Rust memory management is that move semantics and default lifetime-checked pointers are used for single threaded code, but for multi-threaded code Rust uses smart pointers like C++, roughly Arc = shared_ptr, Weak = weak_ptr, Box = unique_ptr.
My question is: what extra static checks does Arc have over shared_ptr? Same for Weak over weak_ptr, and Box over unique_ptr.
The following program is obviously incorrect to someone familiar with smart pointers. The code compiles without error, and the program crashes as expected.
% cat demo.cpp
#include <iostream>
#include <memory>
int main() {
std::unique_ptr<std::string> foo = std::make_unique<std::string>("bar");
std::unique_ptr<std::string> bar = std::move(foo);
std::cout << *foo << *bar << std::endl;
}
% clang -std=c++2b -lstdc++ -Weverything demo.cpp
warning: include location '/usr/local/include' is unsafe for cross-compilation [-Wpoison-system-directories]
1 warning generated.
% ./a.out
zsh: segmentation fault ./a.out
The equivalent Rust code fails to compile.
% cat demo.rs
fn main() {
let foo = Box::new("bar");
let bar = foo;

println!("{foo} {bar}")
}
% rustc demo.rs
error[E0382]: borrow of moved value: `foo`
--> demo.rs:5:13
|
2 | let foo = Box::new("bar");
| --- move occurs because `foo` has type `Box<&str>`, which does not implement the `Copy` trait
3 | let bar = foo;
| --- value moved here
4 |
5 | println!("{foo} {bar}")
| ^^^^^ value borrowed here after move
help: consider cloning the value if the performance cost is acceptable
|
3 | let bar = foo.clone();
| ++++++++
Not only does Rust emit an error, but it even suggests a fix for the error.
That's not the whole story. There are also the Send and Sync marker traits, and the move-by-default semantics make RAII constructs like Mutex<T> less error-prone to use.
In “very rough c++ish”, stuff in a shared ptr is immutable unless it is also protected by a mutex.
`let lock = Arc::new(Mutex::new(0_u32));`
Doesn't this mean that Mutex introduces one more pointer?
For example, in Java every Object has a built-in mutex, adding some memory overhead in order to remove one extra layer of pointer dereferencing. As far as I understand, Rust introduces an extra layer of pointer indirection with Mutex, which can hurt performance significantly with cache misses.
So, the layout of Mutex<T> is the same as T and then some lock (well, obviously).
>Rust introduces an extra layer of pointer indirection with Mutex, which can hurt performance significantly with cache misses.
Why would there be an extra pointer dereference? There isn't.
No. That syntax is roughly equivalent to the following C++:
auto const lock = std::make_shared<std::pair<std::mutex, uint32_t>>(
std::piecewise_construct,
std::make_tuple(),
std::make_tuple(0));
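If it helps, here's a quick way to convince yourself there's no extra hop in the Rust version (exact sizes are platform-dependent, so this just prints them rather than asserting anything):

    use std::mem::size_of;
    use std::sync::Mutex;

    fn main() {
        // Mutex<T> keeps the T inline next to the lock state; the Arc in
        // `Arc::new(Mutex::new(0_u32))` is the single heap allocation, and
        // locking hands back access to the u32 inside that same allocation.
        println!("u32:        {} bytes", size_of::<u32>());
        println!("Mutex<u32>: {} bytes", size_of::<Mutex<u32>>());
    }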
1. The language and its future are heavily intertwined with JetBrains and their motivations. It's difficult to say whether this is a good thing or bad, but issues like the one discussed at https://discuss.kotlinlang.org/t/any-plan-for-supporting-lan... don't inspire confidence.
2. Java seems to be moving ahead at a rapid pace and is slowly absorbing many of the features that once distinguished Kotlin. This makes it difficult to justify introducing Kotlin at a company where Java is heavily used.
Isn't it comparing apples and oranges? Is there any good reason to use Rust if you can live with a GC?
The one thing I do enjoy in Rust is how you don't need an excessive amount of tests to ensure it runs fairly correctly. I've spent more time writing tests in the last ten years using Python/JS than writing actual code. Such a waste of productivity.
Django was (is?) the default tool for Python, but I don't remember it being discussed as widely.
IME fastapi has eroded some of Django’s hold but it’s still chugging along nicely. Hype has certainly died down because it is ancient by today’s standards. Still a very good tool and quite a lot of work available around it.
Even though Rust is more verbose, and SeaORM has a few quirks, I am making faster progress in Rust than my existing mature Typescript + Node + apollo-graphql + ReactJS setup. Once I was over the initial setup & learning curve (about a week), I find myself able to spend more time on business logic and less time hunting runtime bugs and random test failures. There's something almost magical about being able to refactor code and getting it up and running in a matter of minutes/hours, compared to days for similar operations in Typescript.
It's definitely still a young ecosystem that desperately needs a Django equivalent (loco.rs is worth keeping an eye on, but it's not there yet). But I'm willing to tackle a bit of immaturity & contribute upstream to avoid the constant needless churn of the JS world.
Go doc had a bunch of puzzling things going on / barely working; Rust doc is pretty much as described in the book and reference.
Although in the C ecosystem, Doxygen is pretty nice, the docs there have to be 3d to account for the way C code bases can work.
Surely you recognize the benefit in that sort of thing being pushed to a compiler error?
Instead the null propagates somewhere else where you assume non-null, and you get the panic there.
That's bad error reporting, and it only happens because Go lacks a proper nullable/option type.
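A minimal sketch of the difference, with a made-up lookup function:

    // Hypothetical lookup: the Option in the signature forces the caller to
    // deal with "not found" at the source, instead of letting a nil flow
    // onward and blow up in unrelated code later.
    fn find_user(id: u32) -> Option<String> {
        if id == 1 { Some("alice".to_string()) } else { None }
    }

    fn main() {
        match find_user(42) {
            Some(name) => println!("found {name}"),
            None => println!("no such user"), // handled right here
        }
        // Skipping the check is still possible, but it has to be explicit:
        // find_user(42).unwrap() panics at *this* call, not somewhere far away.
    }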
My two main problems now are 1) that AI code assistants (Claude 3.5 Sonnet) + IDE support (VS Code / Cursor AI) are still much worse than with frontend frameworks like React/VueJS. The AI suggestions are mostly terrible. 2) Compilation is really really slow. There is no hot reload, and it often takes about 1 or 2 minutes for my new code to be live in my dev server. It's a real flow-state killer. It's a bit ironic as all the Python/Javascript frameworks are now super fast because they've been rebuilt on Rust.
The tone of the podcast seems needlessly incendiary.
EDIT: He actually quotes a part of the doc where they say that "memory leaks are hard but still possible". It's disappointing that Cosmic has leaks; but it's more about Cosmic than the language IMO.
It's true if you stick to the standard library, but I assume that the Cosmic frameworks adds a layer of complexity. I'm not familiar with their codebase, but I can easily see how an Rc cycle can appear behind a system with lots of shared references (e.g. callbacks, GUI components with backlinks, etc.). You can also get memory leaks through caches without a clear eviction policy. You can also get cases of "memory amplification" where you hold an Rc to a larger struct despite only needing a tiny amount of data from it.
Basically, I agree that the standard library tries to steer you away from memory leaks. However, I also understand that it's not foolproof and can see how you can get in a situation with leaks when your approach is to take some tech debt to avoid short-term delays.
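A contrived sketch of the Rc-cycle case, to make it concrete (the Node type is made up):

    use std::cell::RefCell;
    use std::rc::Rc;

    // Two nodes that each hold a strong reference to the other.
    struct Node {
        other: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { other: RefCell::new(None) });
        let b = Rc::new(Node { other: RefCell::new(None) });
        *a.other.borrow_mut() = Some(Rc::clone(&b)); // a -> b
        *b.other.borrow_mut() = Some(Rc::clone(&a)); // b -> a
        // When a and b go out of scope the strong counts never reach zero,
        // so neither Node is freed: a leak in 100% safe code. The usual fix
        // is to make one direction a Weak reference.
    }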
It does sound like the kind of factoid that should be super upfront though. Half the "better than c" comments I see here always seem to hint about c memory leaks being the big problem that rust comes to fix. So if it's not that, I honestly don't know what is being referred to by memory safety in this context then.
Safe Rust has no undefined behavior. Memory leaks are bad but they don't cause undefined behavior (your program might use more memory than it needs, but an attacker can't gain remote code execution from a memory leak). So Rust has tools (such as RAII) to help prevent accidental memory leaks, but it doesn't guarantee absolute freedom from memory leaks.
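For what it's worth, leaking is even exposed as a safe operation in the standard library; a tiny sketch:

    fn main() {
        // Both of these leak and are perfectly safe (no undefined behavior):
        std::mem::forget(vec![1, 2, 3]); // the destructor simply never runs
        let s: &'static mut String = Box::leak(Box::new(String::from("leaked")));
        println!("{s}");
    }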
- No data races: it is impossible in safe Rust for two threads to simultaneously mutate memory without appropriate guards
- No reads/writes from memory that is not owned/allocated
- Any location in memory may only have a single mutable reference at a time (and any number of immutable references)
Combined, you eliminate a large class of memory-related bugs, including use-after-frees, double frees, and buffer under/overflows.
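As a tiny illustration of the last rule, this classic snippet is rejected at compile time; in C++ the equivalent push could reallocate the buffer and leave `first` dangling:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // immutable borrow of v
        v.push(4);         // rejected: cannot borrow `v` as mutable while it
                           // is also borrowed as immutable (E0502)
        println!("{first}"); // the immutable borrow is still live here
    }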
So a panic is when something happens that shouldn't and you want the app to just die. But the problem is that third party libraries can do this as well. And there is no way to wrap this behaviour.
For example, I used a PDF library that would panic when the file was doing something not in the spec. And rather than me being able to put up a dialog that said "this PDF is invalid" my entire process would die. Not great for a desktop app.
It is one of the more insane situations I've ever seen in programming in 30+ years. You literally have to beg third party developers to consider what is best for you rather than them.
> And there is no way to wrap this behaviour. [..]
As a sibling comment mentioned, this is possible with std::panic::catch_unwind. That is prominent in the std::panic documentation (literally the first function for std::panic) and if you Google "rust stop panics", the first Stack Overflow result (third down on the page for me) describes this directly. Just about anyone who had put in a modicum of good-faith effort would have found this quickly.
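Roughly what that looks like for the PDF case; `parse_pdf` here is a made-up stand-in for the third-party call, not a real library API:

    use std::panic;

    // Made-up parser that panics on malformed input, like the library described.
    fn parse_pdf(path: &str) -> String {
        if path.starts_with("broken") {
            panic!("malformed PDF");
        }
        format!("document at {path}")
    }

    fn main() {
        let result = panic::catch_unwind(|| parse_pdf("broken.pdf"));
        match result {
            Ok(doc) => println!("parsed: {doc}"),
            Err(_) => eprintln!("this PDF is invalid"), // show a dialog instead of dying
        }
    }

The panic message still goes to stderr via the default panic hook, but the process stays alive and you decide what to do next.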
> You literally have to beg third party developers to consider what is best for you rather than them.
I'm assuming this means third-party developers that you're paying and have signed a support contract with? Because if you mean any of the three Rust PDF libraries that I just looked at, those are written by open source developers who have no obligation to consider what is best for you instead of them, owe you exactly nothing, and for whom you should be, if anything, only thanking for doing some of the initial legwork that allows you to use that library at all. If you'd like a change, make a pull request or fork the library.
> It is one of the more insane situations I've ever seen in programming in 30+ years.
Great. You've been in the field a while; nothing written above should surprise you.
I'll bet $20 USD to the open source project of your choice that the authors of whatever PDF library was being referenced here did not go out of their way to abort on panic, and that it's just a normal unwind.
I can legitimately want my app to fail if it’s in a bad state but not have third party libraries do this on my behalf.
> Presume is used when someone is making an informed guess based on reasonable evidence. Assume is used when the guess is based on little or no evidence.
I'm not assuming they don't know what they're talking about, I'm asserting (or presuming) that they don't know what they're talking about based on supporting evidence showing that it is possible to catch panics. Similarly, I didn't say that they didn't know how to Google. I presumed it was likely they didn't put in a good-faith effort to do so, because in my judgement if they had, it would have been trivial to find the aforementioned information per my experience having just done the same.
But the point is that I need to now do this with every use of a third-party library. And for example with pdf-rs it was happening on relatively minor things, e.g. an incorrect date format. And what if I want to set panic=abort on my app to prevent data corruption in my code?
Setting panic in an app shouldn’t mean it is applied globally.
Well, yes. You have to manage your dependencies (by either catching potential panics or forking/modifying them to meet your needs) or accept their behavior. You're using someone else's code for free; this is no one's responsibility but yours, nor is your convenience guaranteed. "This software is provided as is, without warranty" and whatnot.
> And what if I want to set panic=abort on my app to prevent data corruption in my code.
I obviously don't have direct insight into your application, but you could likely use std::process::abort if you feel that data corruption is a risk in a given circumstance (to be fair, I've never personally seen data corruption caused by an unwinding that would have been prevented with an aborting panic instead). Globally setting panic=abort is not necessarily the only approach to achieving your desired behavior.
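A sketch of that kind of targeted abort (the ledger invariant is invented purely for illustration):

    use std::process;

    // Hypothetical critical section: if our own invariant is violated, abort
    // immediately rather than unwind past half-written state.
    fn commit(balance_before: i64, balance_after: i64, amount: i64) {
        if balance_after != balance_before - amount {
            eprintln!("ledger invariant violated; aborting to avoid corruption");
            process::abort(); // no unwinding, no destructors: the process dies here
        }
        // ... persist the transaction ...
    }

    fn main() {
        commit(100, 90, 10); // fine
        commit(100, 95, 10); // aborts
    }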
> Setting panic in an app shouldn’t mean it is applied globally.
You could make a case for a more granular approach to specifying panic behavior. Sure. I don't even disagree with this. But do you see how that's moving the goalposts on your original comment? From "there's no way to wrap this behavior" to "It's possible, but I wish managing this was more convenient for my particular situation."
And my point is that I have never had to do this with other languages before.
Rust is the first where I need to actively worry about dependencies.
And there is no way for me to wrap this behaviour in all cases, e.g. if I set panic=abort, or if the library exposes types that don't implement UnwindSafe.
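For the UnwindSafe part specifically, std does ship an escape hatch, std::panic::AssertUnwindSafe, though whether using it is wise is a separate question; a sketch with a deliberately non-UnwindSafe capture:

    use std::panic::{self, AssertUnwindSafe};

    fn main() {
        let mut state = String::from("ok");
        let state_ref = &mut state; // &mut captures are not UnwindSafe

        // AssertUnwindSafe is the explicit "I accept the consequences" wrapper
        // that lets this closure cross catch_unwind anyway.
        let result = panic::catch_unwind(AssertUnwindSafe(|| {
            state_ref.push_str(", then the library panicked");
            panic!("third-party panic");
        }));
        assert!(result.is_err());
        println!("{state}"); // prints "ok, then the library panicked"
    }

It doesn't help with panic=abort, of course; nothing catches an abort.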
Which of those is the case for the desktop app described by the parent?
I typically don’t control every type that I am interacting with.