Gratuitous allocations are gratuitous.
The whole "prevent double free" claim is completely bogus. Setting a variable to `NULL` only works for cases where there is one, obvious, owner, which is not the circumstance under which double free is prone to happening in the first place. Actually preventing double free requires determining ownership of every object, and C sucks at that.
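To make the objection concrete, here's a minimal sketch (illustrative names) of why NULL-ing the variable doesn't help once a second reference exists:

    #include <stdlib.h>

    int main(void) {
        char *a = malloc(16);
        char *b = a;     /* second reference, e.g. stashed in some struct */

        free(a);
        a = NULL;        /* "protects" only this one variable */

        free(b);         /* double free anyway: b still holds the old address */
        return 0;
    }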
That old thing again...
The _t postfix is reserved only in the POSIX standard, not in the C standard (and C and POSIX are entirely different things - outside the UNIX bubble at least).
It's unlikely that POSIX changes anymore, but if you get a name collision in a new POSIX version it's still just a simple name collision, and it's up to the user code to fix that.
And it's not like symbol collision problems are limited to POSIX, the world won't end because some piece of C code collides with a symbol used in a dependency, there are always ways to isolate its usage.
Also, it's good practice in the C world to use a namespace prefix for libraries, and such a prefix will also make sure that any _t postfix will not collide with POSIX symbols. (The question is of course why POSIX couldn't play nice with the rest of the world and use a posix_ prefix - historical reasons, I guess, but then just going ahead and squatting on the _t postfix for all eternity is a bit rich.)
> A potentially reserved identifier becomes a reserved identifier when an implementation begins using it or a future standard reserves it, but is otherwise available for use by the programmer.
Which, in practice, does mean that using _t is likely to cause you problems, as it may become a reserved identifier when an implementation like POSIX begins using it.
What POSIX reserves or doesn't reserve doesn't affect code that follows only the C standard but doesn't care about POSIX compatibility, and especially _t is so widely used in C libraries that POSIX's opinion obviously doesn't matter all that much in the real world.
> Other identifiers may be reserved.
If an implementation of C uses it... Just... Don't. The standard won't save you here, because it's happy to let an implementation do whatever it feels like.
Is your point "why did posix not establish a prefix_ ... _suffix combo", and maybe, even better, some reserved "prefix_" namespace?
which --- I think --- for better or worse leads to the reality that C doesn't have a namespace mechanism, like, say, Java.
The problem with C++ style namespaces as a language feature is that they require name mangling, which opens up a whole new can of worms.
In the end, the POSIX _t just means "don't blame us when your library names collide with POSIX names", and that's fine. Other platforms have that problem as well, but the sky hasn't fallen because of an occasional type or function name collision.
to all of this I agree:
if the linker doesn't have namespaces (and it doesn't, unlike, say, the Java class loader, or, even more extravagant, the OSGi bundle loading mechanism), you need to flatten names into one name space. which means, as you say, name mangling. and that, even without overloading, is a major PITA.
and indeed, not prescribing a prefix and just blocking a useful suffix was also an idea others hopefully took as inspiration for how not to do things...
wrt prefixes in the C stdlib, I'd strictly prefer the prefix to be '#define'-able, so _if_ you need to move the stdlib to a namespace, you #define the prefix before #include-ing the library. needs a statically linked trampoline, though, or some other nasty link-time mechanism. meh. there is a reason languages come with namespaces from the start, these days ...
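a rough sketch of the idea (entirely hypothetical -- no real libc works like this):

    /* mylib.h -- hypothetical: the namespace prefix is a macro the user may override */
    #ifndef MYLIB_NS
    #define MYLIB_NS mylib_
    #endif

    #define MYLIB_CAT2(a, b) a##b
    #define MYLIB_CAT(a, b)  MYLIB_CAT2(a, b)
    #define MYLIB_NAME(n)    MYLIB_CAT(MYLIB_NS, n)

    int MYLIB_NAME(open)(const char *path);  /* expands to mylib_open by default */

    /* user code: #define MYLIB_NS app_ before the #include -> declares app_open,
       which then needs a trampoline or link-time alias to the real symbol */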
The way I interpreted the author's intent was that, the logic of error handling (something C sucks even more at) can be greatly simplified if your cleanup routine can freely be called multiple times. At the moment an error happens you no longer have to keep track of where you are in the lifecycle of each local variable, you can just call cleanup() on everything. I actually like the idea from that standpoint.
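A sketch of that error-handling shape (with a hypothetical XFREE helper macro):

    #include <stdlib.h>

    /* free and re-NULL in one step so cleanup is safe to run at any point */
    #define XFREE(p) do { free(p); (p) = NULL; } while (0)

    int do_work(void) {
        char *buf = NULL, *tmp = NULL;
        int rc = -1;

        buf = malloc(64);
        if (!buf) goto cleanup;

        tmp = malloc(32);
        if (!tmp) goto cleanup;   /* no need to track which allocations exist */

        rc = 0;
    cleanup:
        XFREE(buf);               /* free(NULL) is a no-op, so this is always safe */
        XFREE(tmp);
        return rc;
    }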
I was reading something a day or two ago where they were talking about using freed memory, and their 'solution' to the problem was, basically: if the memory location wasn't reassigned after it was freed, it was 'resurrected' as a valid memory allocation. I'm fairly certain that won't ever lead to any impossible-to-diagnose bugs...
I understand exactly why it was necessary, but to my mind that highlighted an urgent need to provide actual namespacing so that we don't need to rope off whole categories of identifiers for exclusive use by the stdlib, with the implication that every single library will need to do the same. This should have been addressed last century IMO.
Some newer parts of the standard library use a stdc_ prefix now (https://en.cppreference.com/w/c/numeric/bit_manip.html).
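For example, the C23 bit-manipulation functions in <stdbit.h> (needs a C23 toolchain):

    #include <stdbit.h>  /* C23 */
    #include <stdio.h>

    int main(void) {
        unsigned int x = 0xF0u;
        /* stdc_count_ones etc. are type-generic macros over the unsigned types */
        printf("%u\n", (unsigned) stdc_count_ones(x));       /* 4 */
        printf("%u\n", (unsigned) stdc_trailing_zeros(x));   /* 4 */
        return 0;
    }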
"Completely" means "for all". Are you seriously claiming that "for all instances of double-free, setting the pointer to NULL after freeing it would not help"?
Not in the case of bogosity. Completely bogus things might occasionally work under some very particular circumstances, but unless those particular circumstances just happen to be the circumstances you actually care about, complete bogosity can still obtain.
> setting the pointer to NULL
There is no such thing as setting a pointer to null. You can set the value of a variable (whose current value is a pointer) to null, but you cannot guarantee that there isn't a copy of the pointer stored somewhere else except in a few very particular circumstances. This is what the GP meant by "setting a variable to `NULL` only works for cases where there is one, obvious, owner". And, as the GP also pointed out, this "is not the circumstance under which double free is prone to happening in the first place." Hence: complete bogosity.
From experience though I've found that wrapping all data in newtypes adds too much ceremony and boilerplate. If the data can reasonably be expressed as a primitive type, then you might as well express it that way. I can't think of a time where newtype wrapping would have saved me from accidentally not validating, or from accidentally passing the wrong data as a parameter. The email example especially is quite weak, with ~30 lines of code just being ceremony due to wrapping a string, and most likely it's just going to be fed as-is to various CRUD operations that will cast the data to a string immediately.
Interacting with Haskell/elm libraries that have pervasive use of newtypes everywhere can be painful, especially if they don't give you a way to access the internal data. If a use-case comes up that the library developer didn't account for, then you might have no way of modifying the data and you end up needing to patch the library upstream.
From that perspective, there is a clear trade-off on the size of the parsing–logic interface. Introducing more granular, safer validated types may give you better functionality, but it forces you to expand that interface and create coupling.
I think there is a middle ground, which is that these safe types should be chunked into larger structures that enforce a range of related invariants and hopefully have some kind of domain meaning. That way, you shrink the conceptual surface area of the interface so that working with it is less painful.
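For example (a sketch in C, with hypothetical types): instead of a separate wrapper per invariant, parse once into a domain struct that carries all the related guarantees:

    /* Exists only if every signup invariant held; parse_signup is the
       only way to construct one. */
    typedef struct {
        char email[254];   /* syntactically valid email */
        char name[64];     /* non-empty, length-checked */
        int  age;          /* within the allowed range */
    } signup_t;

    int parse_signup(const char *form_data, signup_t *out);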
So you say, okay, I'll make an `email_to_string` function. Does it return a copy or a reference? Who frees it? etc, etc, and you're back to square one again. The idea is to keep char* and friends at "the edge", but I've never found a way to really achieve that.
Could just be my limitations as a C programmer, in which case I'd be thrilled to learn better.
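One partial answer I've seen (a sketch, not a full escape from the problem): have the accessor return a borrowed, read-only view, so the ownership question never arises:

    /* email.h -- sketch */
    typedef struct email email_t;   /* opaque */

    /* Borrowed view: valid only while the email_t lives; caller must not
       free or modify it. No copy, so "who frees it?" has a fixed answer. */
    const char *email_as_cstr(const email_t *e);

    /* at the edge:  printf("%s\n", email_as_cstr(e));  */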
I'm thinking of that recent git bug that occurred because the round-trip of `string -> type -> string` had an error (stripping out the CR character). Using a specific type for a value that is being round-tripped means that a bugfix needs to only be made in the parser function. Storing the value as simple strings would result in needing to put your fix everywhere.
> The trouble I have with this approach (which, conceptually, I agree with) is that it's damned hard to do anything with the parse results.
You're right - it is damn hard, but that is on purpose; if you're doing something with the email that boils down to "treat it like a `char *`" then the potential for error is large.
If you're forced to add in a new use-case to the `email_t` interface then you have reduced the space of potential errors.
For example:
> Want to print that email_t? Then you're right back to char, unless you somehow write your own I/O system that knows about your opaque conventions.
is a bug waiting to surface, because it's an email, not a string, and if you decide to print an `email_t *` that was read as a `char *` you might not get what you expect.
It's all a trade-off - if you want more flexibility with the value stored in a variable, then sure, you can have it but it comes at a cost: some code somewhere almost certainly will eventually use that flexibility to mismatch the type!
If you want to prevent type mismatches, then a lot of flexibility goes out the window.
“Serialization” is the act of taking an internal data structure (of whatever shape and depth) and outputting it for transmission or storage. The opposite is “deserialization,” restoring the original shape and depth.
I'm afraid the point was not some childish and immature comparison of C with modern languages.
The point was to demonstrate what type safety there is, and how to use it. The advantages of modern languages are even acknowledged:
> Much to the surprise of, well, everybody, C actually has type safety. Sure, it isn’t as enforceable as (for example) Rust… and, sure, if you are willing to do extra work you can bypass it,
The entire point of TFA is actually in TFA:
> The problem isn’t that C lacks type safety (it clearly enforces most types in most expressions), it’s that raw pointers do not encode semantics (e.g., a char * doesn’t tell you if it’s an email, a name, or a filename).
The benefit is to avoid treating a char* as an email_t, not to avoid treating an email_t as a char*.
And as I just saw, Python 3.10 also introduced a NewType[2] wrapper. I'll have to see how that feels to handle.
1: https://blog.nelhage.com/2010/10/using-haskells-newtype-in-c...
2: https://typing.python.org/en/latest/spec/aliases.html#newtyp...
If you're suggesting getting around this by casting an email_t* to char* then I wish you good luck on your adventures. There's some times you gotta do stuff like that but this ain't it.
While the article does hide the internal char*, that's not strictly necessary to get the benefit of "parse, don't validate". Hide implementation details sure, but not everything is an implementation detail.
Examples where I've used it in the past: ValidatedEmail, which is a special form of Email, one that has been validated by the user.
We can have actions that require a `PrivilegedUser`, which can be created from a `User`. That creation validates ONCE whether your user is privileged.
This saves you from a whole bunch of .is_privileged() calls in your admin panel.
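A C-flavored sketch of the same pattern (names made up; in C you'd keep the struct opaque so the checked constructor is the only way in):

    typedef struct user user_t;
    typedef struct privileged_user privileged_user_t;  /* opaque */

    /* Checks privilege exactly once; returns NULL if the user isn't privileged. */
    privileged_user_t *privileged_user_from(const user_t *u);

    /* Admin operations demand the proof-of-check type, not a bare user. */
    void delete_account(const privileged_user_t *admin, int account_id);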
The post 'Boolean Blindness' [0] talks about much of the same issues.
[0]: https://existentialtype.wordpress.com/2011/03/15/boolean-bli...
I've also had a negative experience with using types to encode privileges. Types work well in simple situations, but they scale very badly with additional complexity. Something like PrivilegedUser works fine as long as privilege is binary and one-dimensional, but the need for new types will very quickly grow out of hand for only a modest increase in the complexity of requirements. Encoding privileges as data handles a combinatorial explosion of possibilities much more gracefully, and it is much more straightforward for checking rules that are stored outside the codebase.
E.g., requiring that a string be base64, have a certain fixed length, and be provided by the user.
E.g., requiring that a file have the correct MIME type, not be too large, and contain no EXIF metadata.
If you really always need all n of those things then life isn't terrible (you can parse your data into some type representing the composition of all of them), but you often only need 1, 2, or 3 and simultaneously don't want to duplicate too much code or runtime work, leading to a combinatorial explosion of intermediate types and parsing code.
As one possible solution, I put together a POC in Zig [0] with one idea, where you abuse comptime to add arbitrary tagging to types, treating a type as valid if it has the subset of tags you care about. I'm very curious what other people do to appropriately model that sort of thing though.
"Parse, don't validate" doesn't mean that you must encode everything in the type system -- in fact I'd argue you should usually only create new types for data (or pieces of data) that make sense for your business logic.
Here the type your business logic cares about is maybe "file valid for upload", and it is perfectly fine to have a function that takes a file, perform a bunch of checks on it, and returns a "file valid for upload" new type if it passes the checks.
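In C that might look like this (sketch; the helpers and limit are hypothetical):

    #define MAX_UPLOAD_BYTES (10L * 1024 * 1024)   /* hypothetical limit */

    long file_size(const char *path);              /* assumed helpers */
    int  mime_type_allowed(const char *path);

    typedef struct {
        const char *path;
        long size;
    } valid_upload_t;   /* only ever produced by parse_upload below */

    /* All upload rules checked in exactly one place. */
    int parse_upload(const char *path, valid_upload_t *out) {
        long size = file_size(path);
        if (size < 0 || size > MAX_UPLOAD_BYTES) return -1;
        if (!mime_type_allowed(path)) return -1;
        out->path = path;
        out->size = size;
        return 0;
    }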
https://lean-lang.org/doc/reference/latest/Basic-Types/Subty...
Of course, it would be reasonable to claim that the accept/reject step is validation, but I believe “Parse, don’t validate” is about handling input, not an admonition to never perform validation.
> I believe “Parse, don’t validate” is about handling input, not an admonition to never perform validation.
It's about validation happening at exactly one place in the code base (during the "parse" - even though it's not limited to string-processing), so that callers can't do the validation themselves - because callers will validate 0 times or n>1 times.
You don't need that. A practical solution is a generic `error` type that you return (with a special value for "no error") and `name` or `email` output arguments that only get set if there's no error.
It doesn't mean you should completely eliminate `if` statements and error checking.
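Concretely (a sketch, reusing the thread's email_t):

    typedef enum { ERR_OK = 0, ERR_PARSE } error_t;

    /* out is written only when the return value is ERR_OK */
    error_t parse_email(const char *input, email_t *out);

    /* caller: */
    email_t email;
    error_t err = parse_email(untrusted_input, &email);
    if (err != ERR_OK) return err;
    /* email is known-good from here on */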
The "practical" part really bugged me because the entire post is trying to explain exactly why it is not.
The only way to make C reasonably safe is to encode information via the newtype pattern. Wrap `char *` inside a struct that has a proper name, and include the size in there as well.
Basically, there should be ZERO pointers except at creation and consumption by outside libraries (open, write, etc)
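e.g. (sketch):

    #include <stddef.h>

    /* Structurally identical, but distinct types to the compiler: */
    typedef struct { char *data; size_t len; } email_t;
    typedef struct { char *data; size_t len; } name_t;

    void send_mail(email_t to);

    /* send_mail(some_name) no longer compiles -- mixing them up is now
       a type error instead of a silent char* swap. */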
I solve for this by using reflection and auto-generating all value objects (inheriting by default from base types) and auto-generating all accessor/controller classes or methods into the domain model. Therefore I model in base types, override the generated value object constructors for validation (if required), and all of the boundaries use value objects. The internal code generally works with the underlying base types, because boxing/unboxing the value objects can have a non-negligible performance impact when serializing a lot of data (which tends to be common in web applications... SQL > JSON > HTML).
I'm a huge fan, but I think ymmv. Web applications tend to have a wide interface (much of the domain model is user accessible). I think it's ideal for this case because of the number of fields a user can ultimately set and reused across many places.
I do remember you? You had/joined a startup in NZ, correct? Warm regards Lycium :-)
I saw you're doing consultancy last years, cool :) Lemme know if you wanna chat on Discord or so, cheers!
> I saw you're doing consultancy last years, cool :) Lemme know if you wanna chat on Discord or so, cheers!
I do wanna chat and perhaps catchup :-) Although I have a discord account, I don't remember ever using it.
it is against the rules to call someone dumb on this server.
email_t theEmail = parseEmail(untrustedInput);
if (theEmail == PARSE_ERROR) {
return error;
}
An email_t is not a parse error, and a parse error is not one of the emails, so this shouldn't compile (and I don't take 'pseudocode' as an excuse).

Weird hill to die on, since neither email_t nor PARSE_ERROR were defined in the sample snippets. How do you know PARSE_ERROR is not email_t?
This pseudocode is "Validate" for at least 3 reasons:
Forgetting to check:
this check is fragile: it’s extremely easy to forget. Because its return value is unused, it can always be omitted, and the code that needs it would still typecheck.
Repeatable/redundant checks: First, it’s just annoying. We already checked that the list is non-empty, why do we have to clutter our code with another redundant check?
Second, it has a potential performance cost. Although the cost of the redundant check is trivial in this particular example, one could imagine a more complex scenario where the redundant checks could add up, such as if they were happening in a tight loop.
Not using the type system: Use a data structure that makes illegal states unrepresentable. Model your data using the most precise data structure you reasonably can. If ruling out a particular possibility is too hard using the encoding you are currently using, consider alternate encodings that can express the property you care about more easily. Don’t be afraid to refactor.
> How do you know PARSE_ERROR is not email_t

It has to be for it to compile, right? Which means that email_t is the type which represents both valid and invalid emails. How do you know if it's valid? You remember to write a check for it. Why not just save yourself some keystrokes and use char* instead. This is validate, not parse.
I feel this kind of fundamentalism is letting the perfect be the enemy of the good.
The only fundamentalism involved in PdV is: if you have an email, it's actually an email. It's not arbitrary data that may or may not be an email.
Maybe you want your emailing methods to accept both emails and not-emails in your code base. Then it's up to each method to validate it before working on it. That is precisely what PdV warns against.
As established, head is partial because there is no element to return if the list is empty: we’ve made a promise we cannot possibly fulfill. Fortunately, there’s an easy solution to that dilemma: we can weaken our promise. Since we cannot guarantee the caller an element of the list, we’ll have to practice a little expectation management: we’ll do our best to return an element if we can, but we reserve the right to return nothing at all. In Haskell, we express this possibility using the Maybe type
^ Weaken the post-condition. In some contexts null might be close enough for Maybe. But is Maybe itself even good enough? Returning Maybe is undoubtedly convenient when we’re implementing head. However, it becomes significantly less convenient when we want to actually use it! Since head always has the potential to return Nothing, the burden falls upon its callers to handle that possibility, and sometimes that passing of the buck can be incredibly frustrating.
This is where the article falls short. It might be good (the enemy of perfect), but it ain't PdV.

They write the non-pseudo variant later. There, the return value is a pointer and the check is against NULL. Which is fairly standard for C code, albeit not always desirable.
email_t theEmail = parseEmail(untrustedInput);
if (theEmail.error != PARSE_OK) {
return error;
}
You made an email-or-error type and named it email_t and then manually checked it.
PDV returns a non-error email type from the check method.
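i.e., something closer to (sketch):

    /* There is no email_t value that means "error": you either get a
       valid email or a NULL pointer. */
    email_t *parse_email(const char *untrusted_input);

    email_t *email = parse_email(untrustedInput);
    if (!email) return error;   /* the NULL check is the whole "parse" boundary */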
But I can spot when code is doing exactly what the cited article says not to do.
This line is the "validate" in the expression "parse, don't validate":
if (theEmail.error != PARSE_OK)
You might like it, but that's not my business. Maybe this C article should have been "parse, then validate".

You'd be better off reading the original: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...