It's a snowclone based on the meme, "Mom, can we get <X>? No, we have <X> at home." : https://www.google.com/search?q=%22we+have+x+at+home%22+meme
In other words, Raymond is saying... "We already have Java's 'finally' feature at home in the C++ refrigerator, and it's called 'destructor'."
To continue the meme analogy, the kid's idea of <X> doesn't match mom's idea of <X>, and the kid disagrees that they're equivalent. E.g. "Mom, can we order pizza? No, we have leftover casserole in the fridge."
So some kids would complain that the C++ destructor/RAII philosophy requires creating a whole "class X { public: ~X() }", which is sometimes inconvenient, so it doesn't exactly equal "finally".
[...] please use the original title, unless it is misleading or linkbait; don't editorialize.
It is much worse, I think, to regularly and drastically change the meaning of a title automatically until a moderator happens to notice and change it back, than to allow the occasional somewhat exaggerated original post title.

As it stands, the HN title suggests that Raymond thinks the C++ 'try' keyword is a poor imitation of some other language's 'try'. In reality, the post is about a way to mimic Java's 'finally' in C++, which the original title clearly (if humorously) encapsulates. Raymond's words have been misrepresented here for over 4 hours at this point. I do not understand how this is an acceptable trade-off.
Relying on somebody to detect the error, email the mods (significant friction), and then hope the mods act (after discussion has already been skewed) is not really a great solution.
Anyway, going forward, if anything like this happens again, folks should simply shoot an email to the mods immediately, and if the topic is interesting enough to deserve more discussion, they can always ask the mods to keep the post on the front page longer via the second-chance pool, etc.
It just takes a minute or two of one's time, and hence it's not worth getting het up over.
Again, this post was misrepresenting Raymond's words for over 7 hours. That's most of its time on the front page. The current system doesn't work.
Edit: A deep research run by Gemini 3.0 Pro says the origin is likely stand-up comedy routines between 1983–1987, and it particularly mentions Eddie Murphy: the socioeconomic precursor "You ain't got no McDonald's money" in Delirious (1983), culminating in the meme in Raw (1987). So Eddie might very well be the true origin.
Those figurative kids would be stuck in a mental model where they try to shoehorn their ${LanguageA} idioms onto applications written in ${LanguageB}. As the article says, C++ has had destructors since the "C with Classes" days. Complaining that you might need to write a class is specious reasoning, because if you have a resource worth managing, you already use RAII to manage it. And RAII is one of the most fundamental and defining features of C++.
It all boils down to whether one knows what they are doing, or even bothers to know what they are doing.
I don't think you understand.
If you need to run cleanup code whenever you need to destroy a resource, there is already a special member function designed to handle that: the destructor. Read up on RAII.
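To make that concrete, here is a minimal sketch (the FileHandle wrapper is hypothetical, not from the article):

#include <cstdio>
#include <stdexcept>

// Sketch: the destructor plays the role of `finally` on every exit path.
class FileHandle {
public:
    explicit FileHandle(const char* path) : f_(std::fopen(path, "rb")) {
        if (!f_) throw std::runtime_error("open failed");
    }
    ~FileHandle() { if (f_) std::fclose(f_); } // cleanup runs even on exceptions
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};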
If somehow you failed to understand RAII and basic resource management, you can still use one-liners. Read up on scope guards.
If you are too lazy to learn about RAII and too lazy to implement a basic scope guard, you can use one of the many scope guard implementations around. Even Boost has those.
https://www.boost.org/doc/libs/latest/libs/scope/doc/html/sc...
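For the record, a bare-bones scope guard is only a few lines; a sketch (not Boost's actual interface):

#include <utility>

// Sketch of a scope guard: runs a callable on scope exit, like a `finally` block.
template <typename F>
class ScopeGuard {
public:
    explicit ScopeGuard(F f) : f_(std::move(f)) {}
    ~ScopeGuard() { if (armed_) f_(); }
    void dismiss() { armed_ = false; } // cancel the cleanup on the happy path
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
private:
    F f_;
    bool armed_ = true;
};

// Usage (the one-liner mentioned above): ScopeGuard g([&] { release(resource); });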
So, unless you are lazy and want to keep mindlessly writing Java in ${LANGUAGE} regardless of whether it makes sense or not, there is absolutely no reason at all to use finally in C++.
To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.
[1]: https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...
You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
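Sketching what that class might look like (AtomicFileWriter is a hypothetical name; details are illustrative):

#include <cstdio>
#include <stdexcept>
#include <string>

// Sketch: write to a temp file, rename into place on success,
// and unlink the temp file in the destructor if commit() never ran.
class AtomicFileWriter {
public:
    explicit AtomicFileWriter(std::string path)
        : path_(std::move(path)), tmp_(path_ + ".tmp"),
          f_(std::fopen(tmp_.c_str(), "w")) {
        if (!f_) throw std::runtime_error("cannot open temp file");
    }
    void write(const std::string& data) {
        if (std::fwrite(data.data(), 1, data.size(), f_) != data.size())
            throw std::runtime_error("write failed");
    }
    void commit() {
        std::fclose(f_);
        f_ = nullptr;
        if (std::rename(tmp_.c_str(), path_.c_str()) != 0)
            throw std::runtime_error("rename failed");
        committed_ = true;
    }
    ~AtomicFileWriter() {
        if (f_) std::fclose(f_);
        if (!committed_) std::remove(tmp_.c_str()); // error path: unlink the temp file
    }
private:
    std::string path_, tmp_;
    std::FILE* f_;
    bool committed_ = false;
};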
The result is an exception tree that reflects the failures that occurred in the call tree following the first exception.
For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?
If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
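That "just log it in the destructor" approach is simple enough; a sketch, not a prescription:

#include <cstdio>

// Sketch: report a failed close instead of throwing, so cleanup
// never masks the exception that is already propagating.
class LoggingFile {
public:
    explicit LoggingFile(std::FILE* f) : f_(f) {}
    ~LoggingFile() {
        if (f_ && std::fclose(f_) != 0)
            std::fprintf(stderr, "warning: fclose failed\n"); // log, don't throw
    }
    LoggingFile(const LoggingFile&) = delete;
    LoggingFile& operator=(const LoggingFile&) = delete;
private:
    std::FILE* f_;
};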
You need to read the article again, because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is let those exceptions escape uncaught, because per the standard an uncaught exception will immediately terminate the application.
It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".
Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.
Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.
> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.
I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)
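To illustrate that style, a minimal std::expected sketch (C++23; parse_port is a made-up example):

#include <expected>
#include <string>

// Sketch: the error travels in the return type instead of unwinding the stack.
std::expected<int, std::string> parse_port(const std::string& s) {
    try {
        int port = std::stoi(s);
        if (port < 1 || port > 65535)
            return std::unexpected("port out of range");
        return port;
    } catch (const std::exception&) {
        return std::unexpected("not a number");
    }
}

// Usage: if (auto p = parse_port(arg)) use(*p); else report(p.error());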
Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.
Which languages am I missing which have both try..finally and destructors?
using (var foo = new Foo())
{
}
// foo.Dispose() gets called here, even if there is an exception
Or, to avoid nesting:

using var foo = new Foo(); // same, but scoped to the closest enclosing scope
There is also `await using` in case the cleanup is async (`await foo.DisposeAsync()`).

I think Java has something similar, called try-with-resources:
try (var foo = new Foo()) {
}
// foo.close() is called here.
I like the Java method for things like files, because if there's an exception during the close of a file, the regular `IOException` block handles that error the same way it handles a read or write error.

void bar() {
try (var f = foo()) {
doMoreHappyPath(f);
}
catch(IOException ex) {
handleErrors();
}
}
File foo() throws IOException {
File f = openFile();
doHappyPath(f);
if (badThing) {
throw new IOException("Bad thing");
}
return f;
}
That said, I think this is a bad practice (IMO). Generally speaking I think the opening and closing of a resource should happen at the same scope. Making it non-local is a recipe for an accident.
*EDIT* I've made a mistake while writing this, but I'll leave it up there because it demonstrates my point. The file is left open if a bad thing happens.
In C++ and Rust, that rule doesn't make sense. You can't make the mistake of forgetting to close the file.
That's why I say that Java, Python and C#'s context managers aren't remotely the same. They're useful tools for resource management in their respective languages, just like defer is a useful tool for resource management in Go. They aren't "basically RAII".
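The C++ side of that claim is easy to see with std::ofstream, whose destructor closes the file on every exit path (a sketch; bad_thing is a hypothetical failure condition):

#include <fstream>
#include <stdexcept>
#include <string>

bool bad_thing() { return true; } // hypothetical failure condition

void save(const std::string& path) {
    std::ofstream out(path);
    if (!out) throw std::runtime_error("open failed");
    out << "data\n";
    if (bad_thing()) throw std::runtime_error("bad thing happened");
    // no close() call anywhere: ~ofstream closes the file, exception or not
}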
But you can make a few mistakes that can be hard to see. For example, if you put a mutex in an object you can accidentally hold it open for longer than you expect since you've now bound the life of the mutex to the life of the object you attached it to. Or you can hold a connection to a DB or a file open for longer than you expected by merely leaking out the file handle and not promptly closing it when you are finished with it.
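A sketch of that mutex pitfall (names are hypothetical):

#include <mutex>

std::mutex m;

// Pitfall sketch: the lock's lifetime is now tied to the object's lifetime.
struct Worker {
    std::unique_lock<std::mutex> lock{m}; // acquired at construction
    // ... other members ...
};                                        // released only in ~Worker

void run() {
    Worker w; // takes the lock
    // long-running work that no longer needs the lock,
    // yet the mutex stays held until w goes out of scope
}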
Keeping a resource's open and close in the same scope is an ownership thing. Even in C++ or Rust, I'd consider it not great to leak RAII resources out of the scope that acquired them. When you spread that sort of ownership throughout the code, it becomes hard to conceptualize what the state of the program is at any given location.
The exception is memory.
In addition, if the caller is itself a long-lived object, it can remember the disposable object and implement Dispose by delegating. Then the user of the long-lived object can manage it.
That doesn't help. Not if the function that wants to return the disposable object in the happy path also wants to destroy the disposable object in the error path.
readonly record struct Result<TResult, TDisposable>(TResult? IfHappy, TDisposable? Disposable): IDisposable where TDisposable : IDisposable
{
public void Dispose() => Disposable?.Dispose();
}

using (var result = foo.GetSomethingIfLucky())
{
if (result.IfHappy is {} success)
{
// do something
}
}

I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.
Java is actively removing its finalizers.
They are fundamentally different concepts.
See Destructors, Finalizers, and Synchronization by Hans Boehm - https://dl.acm.org/doi/10.1145/604131.604153
Sure, destructors are great, but you still want a "finally" for stuff you can't do in a destructor.
You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.
If you can't, it's not remotely "basically the same as C++ RAII".
C++ has A LOT of syntax: init rules, consts, references, move, copy, templates, special cases, etc. It also includes most of C, which is small but has so many basic language design mistakes that "C puzzles" is a book.
What I mean is that in C++ all the numerous language features are exposed through little syntax/grammar details, whereas in Lisps syntax and grammar are primitive, and this is why macros work so well.
By far the worst in this respect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs, etc. There seems to be very little agreement on how the language should be used. Much, much less than with C++.
I wish we had something like JavaScript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.
I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.
> whether C++ syntax ever becomes readable when you sink more time into it,
Yes, and the easy approach is to learn as you need/go.
Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.
Secondly, if you write
foo
defer revert_foo
then, when scanning the code, it's easier to verify that you didn't forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.

A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that's more than worth it, though.
Defer is more flexible/requires less boilerplate to add callsite specific handling. For an example, see https://news.ycombinator.com/item?id=46410610
busy = true
Task {
defer { busy = false }
// do async stuff, possibly throwing exceptions and whatnot
}

func atomic_get_and_inc() -> Int {
sem.wait()
defer {
value += 1
sem.signal()
}
return value
}

struct PrintOnDrop;
impl Drop for PrintOnDrop {
fn drop(&mut self) {
println!("dropped");
}
}
fn main() {
let p = PrintOnDrop;
return println!("returning");
}
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.

It gets even better in Swift, because you can put the return statement in the defer, creating a sort of named return value:
func getInt() -> Int {
let i: Int // declared but not
// defined yet!
defer { return i }
// all code paths must define i
// exactly once, or it’s a compiler
// error
if foo() {
i = 0
} else {
i = 1
}
doOtherStuff()
}

The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:
func callFoo() -> FooResult {
let fooParam: Int // declared, not defined yet
defer {
// fooParam must get defined by the end of the function
foo(fooParam)
otherStuffAfterFoo() // …
}
// all code paths must assign fooParam
if cond {
fooParam = 0
} else {
fooParam = 1
return // early return!
}
doOtherStuff()
}
Blame it on it being years since I've coded in Swift; my memory is fuzzy.

#include <iostream>
#define RemParens_(VA) RemParens__(VA)
#define RemParens__(VA) RemParens___ VA
#define RemParens___(...) __VA_ARGS__
#define DoConcat_(A,B) DoConcat__(A,B)
#define DoConcat__(A,B) A##B
#define defer(BODY) struct DoConcat_(Defer,__LINE__) { ~DoConcat_(Defer,__LINE__)() { RemParens_(BODY) } } DoConcat_(_deferrer,__LINE__)
int main() {
{
defer(( std::cout << "Hello World" << std::endl; ));
std::cout << "This goes first" << std::endl;
}
}

In a similar vein, care must be taken when calling arbitrary callbacks while iterating a data structure, because the callback may well change the data structure being iterated (the classic example is a one-shot event handler that unsubscribes when called), which will break naïvely written code.
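One defensive pattern for that case is to snapshot the callback list before invoking (a sketch; the Event type is made up):

#include <functional>
#include <vector>

// Sketch: iterate over a copy so a handler may unsubscribe (mutating
// `handlers`) without invalidating the iterators of the loop.
struct Event {
    std::vector<std::function<void()>> handlers;
    void fire() {
        auto snapshot = handlers;     // copy before iterating
        for (auto& h : snapshot) h(); // safe even if handlers changes
    }
};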
Pet peeve of mine: all these languages got it wrong. (And C++ got it extra-wrong.)
The error you want to log or report to the user is almost certainly the original exception, not the one from the finally block. The error from the finally block is probably a side effect of the original exception. Reporting the finally exception obscures information about the root cause, making it harder to debug the problem.
Many of these languages do attach the original exception to the new exception in some way, so you can get at it if you need to, but whatever actually catches and logs the exception later has to go out of its way to make sure to log the root cause rather than some stupid side effect. The hierarchy should be reversed: the exception thrown by `finally` should be added as an attachment to the original exception, perhaps placed in a list of "secondary" errors. Or you could even just throw it away, honestly the original exception is almost always all you care about anyway.
(C++ of course did much worse by just crashing in this scenario. I imagine this to be the outcome of some debate in the committee where they couldn't decide which exception should take priority. And now everyone has internalized this terrible decision by saying "well, destructors shouldn't throw" without seeming to understand that this is equivalent to saying "destructors shouldn't have bugs". WELL OF COURSE THEY SHOULDN'T BUT GOOD LUCK WITH THAT.)
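For completeness: since C++17 a destructor can at least detect whether it is running during stack unwinding and choose its behavior; a sketch of that well-known technique:

#include <exception>

// Sketch: remember how many exceptions were in flight at construction.
// If the count has grown by destruction time, we are unwinding and
// must not throw again (that would terminate the program).
class Transaction {
public:
    Transaction() : count_(std::uncaught_exceptions()) {}
    ~Transaction() noexcept(false) {
        if (std::uncaught_exceptions() > count_) {
            // unwinding: log or swallow the cleanup failure
        } else {
            // normal exit: safe to throw here, e.g. on a failed commit
        }
    }
private:
    int count_;
};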
how old is this post that 3.2 is "now"?
In Java the following is perfectly valid:
try {
    throw new IllegalStateException("Critical error");
} finally {
    return "Move along, nothing to see here";
}
The existence of two different patterns, each with their own pitfalls, is why we can't have nice things. Finally shouldn't return a value; it should simply be a void expression. Exception-driven APIs need to be snuffed out.
If your method throws, mark it as such and force me to handle the exception if it does; do not return a non-value value in a finally.
Using Java as the example shows just how far we have come with this thinking, why old school Java style exception handling sucks and why C++ by proxy does too.
It’s difficult to break old mental habits but it’s easier when the compiler yells at you for doing bad things.