> The user now has an interface value error that the only thing
> they can do is access the string representation of ... The only
> resort the consumer of this library has is to parse the string
> value of this error for useful information.
This shows a lack of understanding about the `error` interface, `errors.Is`, `errors.As`, error wrapping, etc.

Personally, I think Go errors are fantastic, just the right sprinkling of structure on top of values. And panic/recover for truly exceptional circumstances.
I think it's a shame that Go doesn't have sum types, exhaustiveness checking, and pattern matching, because it would make its error model more enforceable and concise (see Gleam!) but given that it doesn't for whatever reason, I think the solution it's gone with is genuinely the best possible second option that gives you 90% of the benefits of result and option (out of band errors, structured error data, errors as values so you can directly and in line decide what to do with them without special control flow, no invisible or non-local control flow, it's immediately obvious when anything can return an error, etc).
And I say this as someone who started out hating go.
It's nice when you understand how to do it well and move on from, say, printing errors directly where they happen rather than creating, essentially, a stack of wrapped errors that gets dumped at an interface boundary, giving you much more context.
It wouldn't be hard — rather easy, even — to write a static analyzer for that if you constrained your expectations to cases where sum types could practically be used. No reason to not do it right now!
But sum types don't really solve for a lot of cases. Even in Go you can add additional constraints to errors to get something approaching sum types. You don't have to use a naked `error`. But you are bound to start feeling pain down the road if you try. There is good reason why even the languages that support defined error sets have trended back to using open-ended constructs.
It does sound great in theory, no doubt, but reality is not so kind.
Rust does it that way & has never trended in any other direction. The generally-accepted wisdom is "thiserror¹ for libraries, anyhow² for applications"—i.e., any error that's liable to be handled by a machine should be an enum of exhaustively-matchable error kinds, whereas anything that's just being propagated to the user should be stringified & decorated with context as it bubbles up.
Certainly, unified error types which wrap and erase more specific errors are sometimes desirable, but equally they often are not. Languages which support exhaustive matching support both, permitting us to choose based on context.
I have my own complaints against Rust's error handling, but I can't bring myself to praise what Go has been doing.
One problem with the error interface is that you can't know exactly which error type will be returned, and the Go authors may add new error types from time to time. `net.OpError` itself is a great example of that, being a newer type than `net.Error`.
This style of error signalling is best used when the error handling is binary: either "No error, continue" or "Errored out, abort", but not a branched path like "If encountered error A, do this. If encountered error B, do that. Otherwise, abort".
Rust's error handling encourages such branched handling by default, but if you want to do the same in Go, there is a nightmarish manual type discovery and tracking operation waiting for you.
So for me, I rather use a dedicated signal code to tell which error branch I should do next rather than relying on the error, i.e:
    if next, err := doWork(); err != nil { // if errored, abort
        return err
    } else if next == condition1 {
        return op1()
    } else if next == condition2 {
        return op2()
    } else {
        panic("unsupported condition")
    }

Is this a problem with Go the language, Go the standard library, or Go the community as a whole? Hard to say. But if the standard library uses errors badly, it does provide rather compelling evidence that the language design around them wasn't that great.
There’s no way for me to know, or even check, what possible errors this function can return. Sure, sometimes a comment in the library might be illuminating, but sometimes not.
I agree that errors as values that I can handle at the call site rarely feel useful.
Some of the `Is`/`As` ergonomics have improved, but damn if coding agents don’t love to just `fmt.Errorf` any code they write, thus hiding the useful information about the error in a string.
"The user now has an interface value error that the only thing they can do is access the string representation of."
This is false. Didn't used to be, but that was many years ago.
It's mostly false.
Technically, if you use `fmt.Errorf` without the `%w` verb, then the caller can't get anything useful out of your errors.
Types are promises. All the `error` interface promises is a string representation. I acknowledge that most of the time there is more information available. However, from the function signature _alone_, you wouldn't know. I understand that this is more of a theoretical problem than a practical problem, but it still holds (in my opinion).
(Rust then improves drastically on the situation with pattern matching, which would simply improve Go with no tradeoffs I can really discern, just so we're clear that I'm not saying Go's error story is at parity with Rust's. But I'd also point out that Rust error handling at a type level is kind of a painful mess as well.)
it's only present when you downcast though?
Lots of different fine grained error types with complex logic spread out over several call layers.
ime it’s better to aim for simpler handling, which seems to match go
He's definitely wrong on how the error types can help account for that, but it would be best in class if we could properly chain them AND use all the greatness from interfaces.
When I use modern languages like Go or Rust I don't have to deal with all the stuff bolted onto other languages over the past 20 years, like Unicode support, unit testing, linting, or concurrency; it's built in from the start.
I use Go where the team knows Java, Ruby or TypeScript but needs performance with low memory overhead. All the normal stuff is right there in the stdlib, like JSON parsing, ECC/RSA encryption, or image generation. You can write a working REST API with zero dependencies. Not to mention that so far all Go programs I've ever seen still compile fine, unlike those Python or Ruby projects where everything is broken because it's been 8 months.
However, I'd pick Rust when the team isn't scared of learning to program for real.
I don't like that for fairly basic things one has to quickly reach for crates. I suppose it allows the best implementation to emerge and not be concerned with a breaking change to the language itself.
I also don't like how difficult it is to cross-compile from Linux to macOS. zig cc exists, but quickly runs into a situation where a linker flag is unsupported. The rust-lang/libc crate also (apparently?) insists on adding a flag related to iconv for macOS, even though it's apparently not even used?
But writing Rust is fun. You kind of don't need to worry so much about trivialities because the compiler is so strict, and you can focus on the interesting stuff.
Everything is literally built-in. It's the perfect scripting language replacement with the fast compile time and tiny language spec (Java 900 pages vs Go 130 pages) making it easy to fully train C-family devs into it within a couple weeks.
Too bad null/nil is here to stay, since there's no Go 2.
Or maybe they would? IIRC Go 1.22 technically had a breaking change related to for loops (loop variables became per-iteration). If only it were possible to have migration tooling. I guess too large of a change.
    type Result[T, E any] struct {
        Val   T
        Err   E
        IsErr bool
    }

    type Payload string

    type ProgError struct {
        Prog   string
        Code   int
        Reason string
    }

    func DoStuff(x int) Result[Payload, ProgError] {
        if x > 8 {
            return Result[Payload, ProgError]{
                Err:   ProgError{Prog: "ls", Code: 1, Reason: "no directory"},
                IsErr: true,
            }
        }
        return Result[Payload, ProgError]{Val: "hello"}
    }

It is also disingenuous to say that whatever it ships with is huge.
The common misconception in the industry that AOT is optimal and desirable for server workloads is unfortunate. The deployment model (single slim binary vs many files vs host-dependent) is completely unrelated to whether the application utilizes JIT or AOT. Even with a carefully gathered profile, Go produces much worse compiler output for something as trivial as a hashmap lookup in comparison to .NET (or the JVM for that matter).
Buuut with Go one generally tends to reach less for dependencies, so you're less likely to run into this, and cgo is not Go ;) https://go-proverbs.github.io
But for cross-compiling I actually ended up filtering out the iconv flag with a bash wrapper and compiling a custom zig cc version with support for exported_symbols_list patched in; things appear to work.
Should look into cross-rs I suppose. Hope it's not one of those "download the macOS SDK from this unofficial source" setups that people seem to do. Apparently that's not allowed by Apple.
Go is generally fine for crosscompiling.
edit: what gave me pain with Rust for a cli was clap (with derive, the default). Go just worked.
Unlimited access to a bunch of third party code is great as you're getting started.
Until it isn't and you're swimming in a fishing net full of code you didn't write and dependencies you do not want. Everything you touch eventually brings all of tokio along with it. And 3 or 4 different versions of random number generators or base64 utilities, etc. etc.
I've been learning Rust. It's elegant, and I am enjoying it.
The Rust people, however, are absolutely annoying. Never have I seen a worse group of language zealots.
I obviously enjoy programming Rust and I like many of the choices it made, but I am well aware of the tradeoffs Rust has made and I understand why other languages chose not to make them. Nor do I think Rust functions equally as well in every single use case.
I imagine most Rust users think like this, but unfortunately there seems to be a vocal minority who hold very dogmatic views of programming who have shaped how most people view the Rust community.
Is it perfect for everything? no. Is it the fastest compiled language out there? no. But, it'll do most things very well, and for me that's good enough. I choose go because when I need to make something, it steps aside and lets me build, and for that I have great respect and appreciation for it.
I will be defering all detractors and negative comments ;)
What does this mean? Go's stdlib seems tiny compared to the JDK. Any time I review some Go code from an adjacent team I'm turned off by weird stuff like append and slices everywhere, as well as a bunch of strange string packages.
When I think of a massive stdlib I think of a language like groovy
- context
- net/http as a full server framework, not just a request library
- net/http/pprof
- runtime/pprof
- runtime/trace
- embed
- testing as the canonical, required test framework
- net/rpc
- net/http/cgi
- net/http/fcgi
- net/http/httptest
- os/signal with integrated channel-based delivery
- sync/atomic with language-aligned memory model semantics
- runtime as a documented and supported API surface
It seems to be very relative depending on where you come from.
Literally the simplest way to deal with errors (cognitively and character-wise). Since AI autocomplete entered the scene, typing this repetitive (for a reason) pattern became no problem at all (and I'm not even talking about the post-Claude Code era).
> The only resort the consumer of this library has is to parse the string value of this error for useful information.
Well, no. See https://go.dev/blog/go1.13-errors for the wrap/unwrap functionality.
> In Go, errors are values. They just aren’t particularly useful values.
In his example the author could easily use his `progError` type instead.
Gosh, why is it so tempting to write a post about a bad language instead of just reading the docs or an article about idiomatic usage?
Code is read 10x more than it is written. The noise this pattern introduces inhibits reading and rapid comprehension.
Things are getting marginally better now that go has errors.Is and errors.As, and also now that go is starting to get some functional iterators. But go is one of the least quickly-understandable of the modern languages currently in use.
Here is the thing... it's NOT 'noise'. Until you stop seeing the error path as noise, you will be trapped in searching for a magic-bullet language solution to hide it. We've all been there.
When it takes five times as long to figure out how a function actually works (not only due to error handling, to be fair), the language has a problem.
In Rust, it is explicitly clear when and where errors can come up, and what types of errors I have to deal with. It is far less clear in golang, since there is no help from the type system when I’m using errors.Is and errors.As. Not being verbose doesn’t make error handling any less explicit.
Nobody is upset about having to pay attention to the unhappy path. This is entirely a straw man that gophers fall back to in order to feel like they’re somehow enlightened, shut down conversation, and to avoid having to consider what other people are saying. We are desperately hoping to help you realize that there are other, better ways that don’t come at the ridiculous readability costs inflicted by the anemic golang approach.
It really is okay to stop making excuses and accept that your preferred language has warts. I promise. Nobody will think less of you.
You would rather dismiss other perspectives out of hand than actually reflect on how your chosen language might evolve.
In my perspective, this has been the entire historical progression of golang. Everyone vehemently denies that there’s a problem, eventually the language authors address it, and then everyone insists that it’s always been known that they needed to deal with the issue.
I don’t know why this seems to be so endemic to the golang community. Other languages’ adherents seem more than willing to accept the shortcomings of their preferred tools. But gophers continually insist that all the problems are actually strengths right up until it’s fixed and then they retcon their previous beliefs. It’s extremely off-putting.
I agree with you. I don't have problems with the `if err!=nil` syntax.
From my post:

> It’s become something of a meme to bemoan the supposed difficulty of writing if err != nil. I actually won’t make that point because I think it is a very surface level point that has been talked about to death and *I don’t think the ‘verbosity’ of this code snippet is such an issue.*
I apologize if my language was ambiguous on whether or not I had a problem with that syntax.
>In his example author could easily use his `progError` type instead.
I agree, but it was my impression that type erasure was more idiomatic. I should have made it clear that I was criticizing that idiom, not the language.
>Well, no. See for wrap/unwrap functionality https://go.dev/blog/go1.13-errors
You are right. I should have done my research on that before writing that point. I still think that downcasting isn't the most elegant solution because it defeats the point of using the opaque return type but I agree that it is nowhere near as much of a problem as I thought it was.
Writing `pub` is personally more annoying to me than writing `if err != nil`. I understand that most people don't like how capitalization determines visibility, but I think it actually makes sense when you think about it.
That, and the fact you can accidentally silently ignore it without it being a compiler error.
I agree with a lot of the sentiment of this post; the weirdness with “tuples”, the weirdness of the error types, and etc. It’s not a “bad” language, just one that I don’t enjoy using.
Historically to get something similar to Go-style concurrency I would use Clojure with core.async, but more recently I have started using Rust and Tokio.
I also really like Julia; it has a channel mechanism and it has a very nice macro syntax. It also is obscenely fast at number crunching tasks and I find the language pretty pleasant to use with a nice syntax. I use Julia whenever I need to do anything involving CPU bound number crunching stuff (which admittedly isn’t too often).
All that said, I have been mostly favoring Rust because the memory footprint is so much smaller and I still have a fair amount of fun with it :).
Also, depending on the task, I get a fair bit of mileage out of ZeroMQ, which can bolt on channel-like semantics in a pinch. It’s not as nice as having it built directly into the language, but it does support more flexible patterns. Whenever I have to use Java I almost always end up importing JeroMQ the moment the built-in BlockingQueues stop being sufficient.
The error story is not ideal but less bad than that most of the time, as you can downcast to access extra error data. Still, harder than it needs to be.
Overall, I've grown to like using the language even despite its warts.
I get by without it, but Go enums are an inferior representation of the same logical concepts. Sure, I can have a (kind, value) pair and cast things for a hacky sum type with some kind enum. But Go lacks closed enums and exhaustive matching.
You can at least validate the match arms with things like type switches and marker interfaces, but they're still not exhaustive and they're terribly verbose.
And, again, I can get by without them! But I miss them because Rust-style enum representation comes up _so often_, even if you don't like the rest of Rust.
I mean, you have atomic and compound data types. Atomic ones represent single values, like "a string" or "an integer", and compound ones represent multiple atomic types combined in some way, like a struct or an enum. Enums are useful for the same reason structs are useful, they do the same core thing, just model it in a different way. It's the difference between "and" and "or", which are both useful tools.
How do sum types address the problem that the underlying type (int or string or whatever) is totally capable of describing values other than those your code was compiled with? I'm mostly thinking of version skew with something like a dynamic library or plugin, although casting would probably also have the same effect.
The in memory and ABI representation of enums/sum types is language dependent.
In Rust, an enum is equivalent to a C union where there's a discriminant value which is an integer of sufficient size to cover all variants (a u8 is sufficient for 256 different variants), followed by as many bytes as the largest variant (a tagged union). Mucking around with the underlying bytes to change the discriminant and cause type confusion is UB. All the same strategies you'd take in C for versioning and ensuring binary backwards compatibility would apply. When exposing these you would likely want a higher-level abstraction/self-describing wire format if you don't have control over both sides of the ABI boundary.
Other higher-level languages do this for you already. I believe Swift is one of those, where it has a COM-like ABI layer that the compiler injects transparently to correctly interact with a dynamic library, handling versioning for you.
I am of the opinion that we need a new attempt at a multi-platform, language-agnostic, self-describing ABI, "COM for the modern world". I think some people have talked about using WASM for this.
Keep in mind that the concept (pattern matching, sum types) is not tied to how they are represented (Scala has both, but they are represented differently from what I described earlier; IIRC it uses inheritance to represent the variants).
Could I ask what hardware this is on? Even when building LLVM from scratch too as part of building the toolchain (which hasn't been the default for a while now, but could see Gentoo doing so), it never took that long on any hardware I own. (Granted, I never tried on the EEE PC I've got on some drawer, and its 2G of RAM would likely kill the whole thing before it got started.)
$ time ./x.py build
... snip ...
Build completed successfully in 0:21:24
real 21m24.736s
user 67m48.195s
sys 1m54.906s
Subsequent builds during development are faster (as 1. it doesn't build the same compiler multiple times, you can use the stage 1 build as normal, stage 2 is not strictly necessary, and 2. if you modify one of the constituent crates of the compiler, the ones that are lower in the dep tree don't get recompiled). I've used this laptop on and off for rustc development for the last 10 years. Nowadays I spend more time using a cloud desktop that has much faster IO, but still use it during travels sometimes.

From the sound of it, I suspect that your issue might be that you don't have enough RAM and your build is swapping a lot.
Are you joking? It's trivially easy to drop the err or just not handle it accidentally, in ways that are essentially impossible in Rust. Especially when people re-use the `err` variable.
That said, compile times are great, the concurrency is dead simple, it's performant, and it's still easy to be really productive in it so it's not like I'd never consider it. Many other languages have many of the same issues, anyway.
Awful compared to ... what? `private` and `public` keywords? Ugly hacks like Python's `_` and `__`?
> it just feels very hacked together
> the wonky generics
What exactly about the generics is "wonky"? "Wonky" is not a term defined in any programming textbook I ever read. And languages are not designed on feelings, especially when the design goal is to be as pragmatic as possible, as is the case in Go.
> the lack of useful types like tuples and enums,
Need a tuple? Use an array and don't change it.
- [2]string: string 2-tuple
- [5]int: int 5-tuple
- [1]any: empty-interface 1-tuple
And btw. 99% of the time tuples are used, it's as a stand-in for multiple returns. E.g. Python does that. Go simply has... multiple returns.

> and enums,
Outside of language-enthusiasm with matching and whatnot (which more often than not is used because it looks cool rather than being useful), the most common (and again, 99%) use of enums, is to give names to magic values. Go has that covered:
    type Color int

    const (
        RED Color = iota
        GREEN
        BLUE
    )
> the bolted on module system

Pray tell, what exactly is "bolted on" about modules? They are simply an extension of the import system, nothing more, nothing less.
> the annoying error handling
The "annoying" thing about it is that it's explicit and forced. Both of which are positives as far as I'm concerned, because I AM FREKKIN DONE with shitty 10-mile stack traces because some jokester's library threw an "exception" 400 layers down in some sub-sub-sub-sub transient dependency.
You can't use an array for different types.
Matching _is_ useful, no one uses matching just because it looks "cool".
You can have explicit forced AND exhaustive error handling without exceptions. Go actually lacks this.
And I think `public` and `private` keywords are a verbose mess that adds nothing to a language.
> You can't use an array for different types.
Yes I can. I even provided an example for exactly that: `[4]any` can hold references to any type.
> Matching _is_ useful
A lot of things are useful, doesn't mean they are used for that useful case most of the time.
> You can have explicit forced AND exhaustive error handling without exceptions
Go has wrapped and typed errors, covering exactly that.
I want to clarify that I think very highly of Go as a language. I think it gets most things right. It's hard to introduce an abstraction into a language while still making sure that it's not abused. Rust's trait and type system, while powerful have been abused to create some absolutely incomprehensible and long types. Go, on the other hand, doesn't suffer from this issue. If I was too harsh against Go, that was certainly a mistake.
I think that the criticisms of my section on error handling are, for the most part, spot on. My impression was that type erasure for errors was the idiomatic solution, which is patently untrue. When writing the blog post, I was unaware of the functions in the `errors` module as well as the syntactic sugar and general acceptance of downcasting errors. While I would still maintain that Rust errors are more convenient to work with, I must admit that there really isn't any good excuse for my ignorance on this.
When it comes to enums, I stand by my statements on them.
I still believe that it is useful to restrict the values a type might have. And no, I am not using the term enum to mean a rust-style "enum" (tagged union). I am talking about classical enums, more or less as they appear in C.
As an aside, I am not terribly surprised that my website doesn't work well on some browsers. I hacked together the website including the HTML and CSS a couple of years ago and some of the hacks I used I ought to be ashamed of.
> All of these examples involve assigning to a constant a value known at compile time but none of them will work
Maps are not known at compile time. Hash functions are randomized based on a seed only known at execution time. The hashed value of "HELLO" is actually different each time the program runs. Even if the hash function weren't random, the runtime has to allocate buckets for map values on the heap, which involves calling the OS to get memory addresses for those buckets, etc.
In Go, `const` means "the compiler can completely evaluate this expression and store the final bytes in the executable," which has the effect of making them non-reassignable, but protection from reassignment is not an actual feature of the language the way it is in Java and C++ (goes back to the maintainers wanting to keep it simple).
Unless a const is literally a compile time constant inserted through the program, it's likely able to be changed somehow in most languages.
I don't like the term "enums" because of the overloading between simple integers that indicate something (the older, more traditional meaning) and using them to mean "sum types" when we have the perfectly sensible term "sum type" already, that doesn't conflict with the older meaning. If you want sum types, a better approach is to combine the sort of code structure defined here: https://appliedgo.net/spotlight/sum-types-in-go/ with a linter to enforce it https://github.com/alecthomas/go-check-sumtype , which is even better used as a component of golangci-lint: https://golangci-lint.run/
I'd also add my own warnings about reaching for sum types when they aren't necessary, in a language where they are not first class citizens: https://jerf.org/iri/post/2960/ but at the same time I'd also underline that I do use them when appropriate in my Go code, so it's not a warning to never use them. It's more a warning that sum types are not somehow Platonically ideal. They're tools, subject to cost/benefit analysis just like anything else.
I'm currently of the opinion that where you truly need this type of thing, write tests that use the ast package to validate that your expectations hold. That way you don't need to do anything strange with the code, and logic failure will show up alongside all of your other logic failures.
While it does venture into implementation details that shouldn't be tested, Go offers a discriminator between your actual tests and throwaway tests (i.e. `package foo_test` vs. `package foo`), so as long as you've clearly marked the intent I find this to be an acceptable tradeoff. As the implementation changes and you no longer need that validation, others will know that your throwaway tests are intended as such.
I disagree with this. I'm old as hell, and I learned programming in a context where enums were always ints, but I remember being introduced to int enums as "we're going to use ints to represent the values of our enum," not "enums are when you use ints to represent a set of values." From the very beginning of my acquaintance with enums, long before I encountered a language that offered any other implementation of them, it was clear that enums were a concept independent of ints, and ints just happened to be an efficient way of representing them.
The type the link was struggling to speak of seems to be a tagged union. Often tagged union implementations use enums to generate the tag value, which seems to be the source of confusion. But even in tagged unions, the enum portion is not a type. It remains just an integer value (probably; using a string would be strange, but not impossible I guess).
It’s incredibly useful to be able to easily iterate over all possible values of a type at runtime or otherwise handle enum types as if they are their enum value and not just a leaky wrapper around an int.
If you let an enum be any old number or make the user implement that themselves, they also have to implement the enumeration of those numbers and any optimizations that you can unlock by explicitly knowing ahead of time what all possible values of a type are and how to quickly enumerate them.
What’s a better representation: letting an enum with two values be “1245927” or “0”, or maybe even a float or a string, whatever the programmer wants? Or should they be 0 and 1, or directly compiled into the program in a way that allows the programmer to only ever need to think about the enum values and not the implementation?
IMO the first approach completely defeats the purpose of an enum. It’s supposed to be a union type, not a static set of values of any type. If I want the enum to be tagged or serializable to a string that should be implemented on top of the actual enumerable type.
They’re not mutually exclusive at all, it’s just that making enums “just tags” forces you to think about their internals even if you don’t need to serialize them and doesn’t give you enumerability, so why would I even use those enums at all when a string does the same thing with less jank?
Exactly. Like before, in the context of compilers, it refers to certain 'built-in' values that are generated by the compiler; which is done using an enumerable. Hence the name. It is an implementation detail around value creation and has nothing to do with types. Types exist in a very different dimension.
> It’s supposed to be a union type
It is not supposed to be anything, only referring to what it is — a feature implemented with an enumerable. Which, again, produces a value. Nothing to do with types.
I know, language evolves and whatnot. We can start to use it to mean the same thing as tagged unions if we really want, but if we're going to rebrand "enums", what do we call what was formerly known as enums? Are we going to call that "tagged unions", since that term now serves no purpose, confusing everyone?
That's the problem here. If we already had a generally accepted term to use to refer to what was historically known as enums, then at least we could use that in place of "enums" and move on with life. But with "enums" trying to take on two completely different, albeit somewhat adjacent due to how things are sometimes implemented, meanings, nobody has any clue as to what anyone is talking about and there is no clear path forward on how to rectify that.
Perhaps Go even chose the `iota` identifier in place of "enum" in order to try and introduce that new term into the lexicon. But I think we can agree that it never caught on. If I, speaking to people who have never used Go before, started talking about iotas, would they know what I was talking about? I expect the answer is a hard "no".
Granted, more likely it was done because naming a keyword that activates a feature after how the feature is implemented under the hood is pretty strange when you think about it. I'm not sure "an extremely small amount" improves upon the understanding of what it is, but at least tries to separate what it is from how it works inside of the black box.
So while you can theoretically argue it makes sense to call them an "enum", I don't like it precisely because "enumerating" the "enum" types (being sum types here), in general, is not a sensible operation. It is sensible in specific cases, but that's not really all that special. We don't generally name types by what a small percentage of the instances can do or are; we name them by what all instances can do or are. A degenerate sum type "type Value = Value" is still a sum, albeit a degenerate one of "1", but nobody ever enumerates all values of "type Email = Email { username :: String, domain :: String }". (Or whatever more precise type you'd like to use there. Just the first example that came to mind.) There are also cases where you actively don't want users enumerating your sum type, e.g., some sort of token that indicates secure access to some resource that you shouldn't be able to get, even in principle, by simply enumerating across your enum.
If it's called an "enum" I want to be able to "enum"erate it.
I’m not sure the definition of "enum" enforces how things are identified. Random choice would be as good as any other, theoretically. In practice, as it relates to programming, random choice is harder to implement due to collision possibilities. Much simpler is to simply increment an integer, which is how every language I've ever used does it; even Rust, whose implementation is very similar to Go's implementation.
But it remains that the key takeaway is that the enum is a value. The whole reason for using an enum is for the sake of runtime comparison. It wouldn't even make sense to be a type as it is often used. It is bizarre that it keeps getting called one.
As with any language, Go has its own philosophy, and you have to first understand why it was created and what it brings to the table. To summarize it in one phrase: "Less is more".
I think this post should be a mandatory reading for everybody learning Go. If this resonates with you it's possible you're gonna like the language:
https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
I'm not saying these are "good", just wondering what alternatives look like?
This is what ZIO's type-safe version looks like: https://zio.dev/reference/stm/ Scala's for-comprehension is syntactic sugar for calls to flatMap, map, and withFilter, similar to Haskell's do-notation.
here's an example that uses a field from your blog example error type: https://go.dev/play/p/SoVrnfXfzZy
Also "In Go, errors are values. They just aren’t particularly useful values"
...sometimes user mistakes are due to the language being too complicated, maybe for no benefit, but I don't think that's the case here. It's a very good thing that you can just slap .Error() on any error to print it quickly, and it's not too crazy to say an error can be any normal type you can use, as long as it also implements Error() string.
If I have

type S struct{ X int }
func (s *S) Error() string { ... }

but I return it as an error:

func DoStuff() error

then all the caller gets is an error interface. Without downcasting (`.(T)`/`errors.As`), you can only compare it to other instances (if the library author provided them) or see the string representation:

if myErr, ok := err.(*S); ok {
    // handle *S case
}
...
But you should always expect that there could be errors you don't expect to exhaustively handle with special logic.

> firefox doesn't show
I'm using the Firefox Developer edition on Windows and everything renders correctly for me. I tested Firefox on Android and everything is present there as well.

(Disclaimer: I formerly worked at Google and used proto/grpc/Go there, and now run my own startup, github.com/accretional/collector, which tries to address this problem with a type registry and fully reflective API. Not privy to the full history, just reasoning.)
Proto is designed so that messages can be deserialized into older/previous proto definitions by clients even if the server is responding with messages of a more recent version. Field numbers are what let you start serializing new fields (add a new field with an unused/the next number) or safely stop setting fields in proto responses (reserve a field) without risking older clients misinterpreting the data as belonging to some existing field they know about. This requires encoding the field numbers alongside the field data in the proto wire format.
Two major problems: nothing in proto itself enforces that field numbers are assigned sequentially, because there is no single source of truth for the proto schema (you can still have one of your own, but it’s not a “thing” in proto). Also, the whole point of field numbers is that they can be selectively missing/reserved/ignored and allow you to deserialize messages without special handling for version changes in your code at runtime.
So, field numbers aren't a dense, easily enumerable range of numbers; they're basically tags that can be any number between 1 and 536,870,911, except for the reserved 19,000-19,999 block. That block smells like serious tech debt / a design flaw that completely closes the door on ever fixing this at Google or anywhere else, because it sits arbitrarily in the middle of the range of field numbers and is a leaked implementation detail from the internals. You couldn't build your own dense field-number management/sequential-enforcement system on top of proto without ripping that part out. But your existing proto usage relies on that part, and changing it would break existing clients because you'd be removing field numbers, which is the whole fucking point of proto. That makes it difficult to roll out even if you did fix it yourself.
So, representing union/enumerable types in proto is impossible. For proto enums to have forward compatibility, they have to handle adding new enum values over time, or need to remove and reserve old ones. So, proto enums end up being basically just field numbers. That’s exactly what you see in Golang enums and I don’t think it’s a coincidence: Google has no good way to serialize/deserialize/operate on enumerable enums or union types anywhere they use proto/grpc. Golang inherits this “enums” implementation from protobuf because it’s the context in which it was created.
I agree with the point about sequential allocation, but that can also be solved by something like a linter. How do you achieve compatibility with old clients without allowing something similar to reserved field numbers to deal with version skew ambiguity?
I view an enum more as an abstraction to create subtypes, especially named ones. “Enumerability” is not necessarily required and in some cases is detrimental (if you design software in the way proto wants you to). Whether an enum is “open” or “closed” is a similar decision to something like required vs optional fields enforced by the proto itself (“hard” required being something that was later deprecated).
One option would be to have enums be “closed” and call it a day - but then that means you can never add new values to a public enum without breaking all downstream software. Sometimes this may be justified, but other times it’s not something that is strictly required (basically it comes down to whether an API of static enumerability for the enum is required or not).
IMO the Go way is the most flexible and sane default. Putting aside dedicated keywords etc., the "open by default" design means you can add enum values when necessary. You can still do dynamic closed enums with extra code; static ones are still not possible without codegen. However, if the default were closed enums, you wouldn't be able to opt out when you wanted an open one, and would have to set it up the way Go does now anyway.
It still seems to me that you're addressing a completely separate issue from having a specific field that is an enum - not an enum of a field number, but an enum of something else, like encryption algorithm or SHA type or something.
Am I missing your point?
Hence, proto enums are essentially non-enumerable wrappers around numeric values. And this is (almost certainly) why Golang’s enums are structured the same way, as transparent wrappers around values that are not necessarily sequential or enumerable.
enum mask {
    BIT_0    = 0x1,
    BIT_1    = 0x2,
    BIT_2    = 0x4,
    BIT_3    = 0x8,
    NIBBLE_0 = 0xF,
    ...
};
C++ does sparse enums just fine. Are you saying that those are not "real" enums because they're sparse? Or that C++ doesn't have "real" enums because it allows that? Or what?

But yes, enums are so much nicer in Kotlin than in Go. True, it doesn't impact productivity much, but he has a point.
Over the last 25 years in the SaaS world, I have never seen Python evolve into a system that is easy to reason about and debug. It lets you do too many things. In over 30 cases, I have seen teams deliver better software faster in Go after replacing their Python.
Then I got to the second half of the post, with the things he didn't like about go.
No enums? Wtf? Not having sum types is a lesser evil imo, it places you in the mediocre, but mediocre is mostly ok so that's kinda fine. But no enums? We're stuck with magic numbers? Hell naw, we banished that demon in the nineties, I ain't signing up to bring it back!
In go you don't strictly need magic numbers because you can define constants:
type Status int
const (
IoProblem Status = iota // iota starts at zero and counts up from there
JsonParse
...
)
The problem is that there is no exhaustiveness with these constant groups. The type Status is not a closed set of just IoProblem and JsonParse like an enum in C; it is just an int.

Obviously, though, the only way this happens is if you know a priori what actually failed, because otherwise it might be some other error type. And you don't, obviously, because it's an error! If your behavior is deterministic then just return whatever you want in your own API. If it's not, you need to parse/extract/inspect the error.
Basically it's entirely contrived. This never happens, and to the extent it does it's a terrible bug where you have code making assumptions about runtime error state without fully inspecting that state.
The second failure is that Go does in fact have runtime type inspection facilities ("type assertions" is the particular jargon) and if you want you can absolutely "cast" that error into a derived type to get the data out. So it's not even a problem in the language as it exists.
func rootInfo(root, p string) (has bool, isDir bool, err error) {
p = path.Clean(p)
info, err := os.Stat(root + "/" + p)
if info != nil {
has, isDir = true, info.IsDir()
return
}
if errors.Is(err, os.ErrNotExist) || errors.Is(err, syscall.ENOTDIR) {
err = nil
}
return
}

Can someone with experience in both Go and Elixir compare the two? I'm sure I could have GPT whip up a comparison and see the syntax differences, but I'm curious what the real experience "in the trenches" is like.
I recall hearing that Jose was making progress on types. Not sure where that landed.