I agree. I find that Options are more desirable for API design than for making function bodies easier to understand/maintain. I'm kind of surprised I don't use Maybe(T) more frequently in Odin for that reason. Perhaps it's something to do with my code scale or design goals (I'm at around 20k lines of source), but I'm finding multiple returns are just as good as, if not better than, Maybe(T) in Odin... it's also a nice bonus to easily use or_return, or_break, or_continue, etc., though at this point I wouldn't be surprised if Maybe(T) were compatible with those constructs. I haven't tried it.
But as you say, it is more common to do the multiple return value thing in Odin and not need `Maybe` for a return value at all. `Maybe` is more common for input parameters or for annotating foreign code, as you have probably noticed.
1. I was a mobile dev, and I operated at the framework-level with UIKit and later SwiftUI. So much of my team's code really was book-keeping pointers (references) into other systems.
2. I was splitting my time with some tech-stacks I had less confidence in, and they happened to omit Option types.
Since then I've worked with Dart (before and after null safety), C, C++, Rust, Go, TypeScript, Python (with and without type hints), and Odin. I have a hard time not seeing all of this as preference, but one where you really can't mix them to great effect. Swift was my introduction to Options, and there's so much support in the language syntax to help combat the very real added friction, but that syntax support can become a sort of friction as well. To see `!` at the end of an expression (or `try!`) is a bit distressing, even when you know today the unlikelihood (or impossibility) of that expression yielding `nil`.
I have come to really appreciate systems without this stuff. When I'm writing my types in Odin (and other languages which "lack" Optionals) I focus on the data. When I'm writing types in languages which borrow more from ML, I see types differently: as containers with valid/invalid states, inseparably paired with the initializers that operate on their machinery. My mental model for a more featureful type system takes more energy to produce working code. That can be a fine thing, but right now I'm enjoying the low-friction path which Odin presents, where the data is dumb and I get right to writing procedures.
I used to be an "initialize everything" partisan, but meaningful zero-values have grown on me. I still don't think everything should be zero-initialized, though (that is, valid when initialized to zero). I'd prefer it if there were a two-colour system where a type is only zero-initialized if it doesn't contain a pointer, either directly or transitively.
The trick would be that zero-initialized sum types have a default variant, and only that variant needs to be zero-initialized. So a type that requires explicit initialization can be made zero-initialized by wrapping it with Optional<T>, whose default value is the zero-initialized None value. So even though you end up with coloured data types, they're easily contained & they do not spread virally.
I think this offers the best of both worlds. It gives you explicit nullability while still permitting something like `make([]SomeStruct, n)` to return a big block of zeroes.
> Languages like Rust were designed from day zero around explicit individual-element based initialization
> Ownership is a constant concern and mental overhead when thinking in an individual-element mindset. Whilst ownership is obvious (and usually trivial) in the vast majority (99+%) of cases when you are in the grouped-element mindset.
I think there's a sort of convergence here, because one of the tips that's often recommended for dealing with lifetimes issues in Rust (e.g. cyclic data structures) is grouping elements together in a parent structure and referring to them with handles instead of references. Often this is done with integer handles in a simple Vec<T>, but libraries like slotmap exist which make that pattern safer & more ergonomic, if desired. The language features of Rust naturally encourage grouped-element thinking by making individual-element strategies high friction. In fact, I wouldn't be surprised if a reliance on individual-element strategies were part of the reason so many people struggle with Rust when they're first learning the language.
Ownership and lifetimes are a source of essential complexity in programming. I don't see any contradiction between managing them with a grouped-element mindset and managing them with language features; in fact, I think the two go hand-in-hand quite nicely.
Rust by default doesn't really encourage the grouped-element mindset, and people have to work against the language in order to adopt it. What usually happens is that people work around the borrow checker as a result, through the use of a handle system. And they do this to get the lifetime benefits, the performance benefits, and also to get around the problems of the ownership semantics.
Of course you can do this in Rust, but the point is that the default approach in Rust does not encourage it at all. The default Rust mindset is the individual-element mindset.
It's true that you don't see stuff like arena allocators very often, but Rust's lifetime/ownership semantics mean it's often best for related pieces of data to be owned by a single parent, which unifies their lifetimes such that they all start and stop being valid at the same time. I suppose that's only part of the equation, since an arena has you allocate everything at once in a contiguous block, which has performance advantages.
Still, I can't help but see an underlying conceptual symmetry there. I feel like the scoped arena school of memory management would be nicely complemented by compiler lifetime analysis. Arenas definitely simplify lifetime issues, but that feels like an opportunity to experiment with additional static compiler checks, not a reason to eschew them. I'd love to see a language which required explicit deallocation but enforced it with linear types.
> *TL;DR* null pointer dereferences are empirically the easiest class of invalid memory addresses to catch at runtime, and are the least common kind of invalid memory addresses that happen in memory unsafe languages. The trivial solutions to remove the “problem” null pointers have numerous trade-offs which are not obvious, and the cause of why people think it is a “problem” comes from a specific kind of individual-element mindset.