I'm surprised the copy editor was more comfortable using git than using a web-based review tool to leave comments, especially given that she was reviewing a Go book and didn't seem to know what Go was.
How does that even happen? It seems bizarre that Manning had this copy editor at all.
I recently had a negative experience with Manning. I sent them an email saying that I'm in the process of writing a book, and I'm self-publishing it, but I was curious about the possibility of applying to Manning for a second edition. I asked whether they accept second editions after a self-published first edition and what document formats they accept.
I got back a form letter telling me that they'd rejected my proposal. When I followed up and said I didn't send a proposal but was asking preliminary questions, they told me that they understood I hadn't sent a proposal, but they were going off of the table of contents on my book's website. I guess they decided to pre-emptively reject me?
They also only mentioned Google Docs as a document format, but based on this blog post, they clearly accept AsciiDoc.
This is pretty off topic, but did you test how your book works on an e-reader? I checked a sample chapter, and there were a lot of pictures and colors used to distinguish information; that will probably not work very well on my Kindle.
The first few chapters, I've been primarily targeting web and not testing on e-readers. I figured that until I knew whether people actually wanted to read it, I should just focus on making the web excerpts look decent and try to avoid over-optimizing for web.
Now that the book is officially a go, the PDF version is a first-class citizen, and I'll be testing e-reader experience on my rm2.
I mean all mainstream word processing applications have a 'commentary' / 'review' mode where someone can leave comments and suggest edits.
If you browse around Go's stdlib use of sync.Pool, you'll see a variety of tiered pools with fixed sizes, and many drop anything over a large enough size (sometimes gigantic! as much as 16KB!): https://cs.opensource.google/go/go/+/refs/tags/go1.24.0:src/...
It's a pretty well-established gotcha, sadly, and https://github.com/teivah/100-go-mistakes/blob/master/src/12... falls right into it.
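For readers unfamiliar with the gotcha being referenced: putting arbitrarily large buffers back into a sync.Pool can pin them in memory indefinitely. A minimal sketch of the size-capping pattern, assuming a byte-slice pool; the names and the 16 KiB cap here are illustrative, not taken from any particular stdlib call site:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands out reusable byte slices. (Illustrative; the stdlib often
// pools *[]byte to avoid an allocation on Put, which is omitted here for
// brevity.)
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 1024) },
}

const maxPooledCap = 16 * 1024 // 16 KiB, an assumed cap

// putBuf returns a buffer to the pool, but drops oversized ones so a single
// huge allocation can't stay pinned in the pool indefinitely.
func putBuf(b []byte) bool {
	if cap(b) > maxPooledCap {
		return false // let the GC reclaim it
	}
	bufPool.Put(b[:0])
	return true
}

func main() {
	small := make([]byte, 0, 512)
	big := make([]byte, 0, 64*1024)
	fmt.Println(putBuf(small)) // true: pooled
	fmt.Println(putBuf(big))   // false: dropped
}
```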
After copy editing multiple chapters, they sent it back to me with all the content on a single line. I was so incredibly upset that they ditched all my painstaking formatting that I almost abandoned the project then and there.
It sounds like, from your experience, it has barely changed. I ended up moving to self-publishing so I have greater control over the whole process. I wrote it up long-form here: https://ryanbigg.com/2015/08/my-self-publishing-success-stor...
Someone likely loaded that into some tool, made changes and saved and likely never even looked at the XML.
Why would anyone care what the XML looks like?
Might as well write it in Word if that's the case.
A little secret about the book is a lot of the "mistakes" are introductions to some aspect of Go worded as a mistake. "Not using fuzzing" and "Not using errgroup" are a couple of examples.
Now that I'm starting another big Go project I'm going to look at it again.
What I like most about this book is it feels like it's all "real world" stuff. You can tell the author has built a lot with Go and can save you time by telling you where the potholes are. Great stuff!
That was the funny part from the intro of the article - the author had not, in fact, built a lot with Go at the time.
But it proves you don't actually need to in order to become knowledgeable about a subject and/or write a book.
So many programming books are like that, and usually it shows.
Is there a reason the common mistake is about goroutines specifically? If I instead just made function closures without launching off goroutines, would they all refer to the same 'i' variable? (I assume it's maybe just that the mistake tends to go hand in hand with goroutine-launching code in a pattern like that).
I'd presume the book would say right after the example :)
But otherwise: the author gets serious respect from me for going through that process, taking feedback, and learning how to communicate, i.e. taking "how do I make a good book?" very seriously and trying their best. And also for things like putting their foot down with the problematic copyeditor. I'm almost interested in the book, not for learning about Go but for learning what writing looks like when there's serious intent behind it to communicate clearly.
Thank you very much for your comment, though. It means a lot.
You give me the feeling you really care about the craft and about just making a good, useful resource, which is what I respect. I looked around the site and bookmarked it as a technical writing example I might go read now and then.
I sometimes teach coding or general computing things (but hands-on, less about writing) and I've come to appreciate that sometimes it is extraordinarily difficult to educate on or communicate complicated ideas.
Quoting you: To give you a sense of what I mean by improving the book “over and over”, keep in mind that between feedback from my DE, external reviewers, and others, there are parts of the book that I rewrote more than ten times.
I also do rewriting, especially with content I intend to be a resource or educational piece. Obsessively rewrite. Make it shorter. Clearer. Oops, that reads like crap, let's start over. IMO, having an urge to do this and address all feedback or your own itches means you care about your work. I just have to remind myself that perfect is the enemy of good (or something like that; I forget if that's exactly how the expression goes).
"We might expect this code to print 123 in no particular order" should really say "exactly" or "in order", since it's proved in the next paragraph to be inconsistent.
And that would be the layman's explanation of concurrency resulting in things added sequentially happening out of order.
And assuming FIFO on async execution, akin to running everything in series, is probably the first mistake anyone will make when they encounter concurrency for the first time.
1) In Go, the 'i' variable in the for loop is the same 'i' for each round of the iteration, meaning closures created inside the loop all refer to that same 'i' variable instead of getting their own copy of it. It's very easy to accidentally think all the closures have their own copy of 'i'. Goroutines are only mentioned because in Golang this mistake tends to come up with goroutines due to a common code pattern.
OR
2) Goroutines themselves either behave differently or have some weird lexical scoping rules in a way I don't know about; it doesn't really have to do with closures but with some concept entirely foreign to me that I cannot see, and this is why the book example mentioned goroutines alongside the mistake.
I rarely write Go myself so I was curious :) It looks like it was 1) unless I am bad at reading, and I think the Go 1.22 change is good. I could easily imagine myself and others making that mistake even with the knowledge to be careful around this sort of code (the link shows a more complicated example when scrolling down that IMO is a good motivating example).
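To make option 1 concrete, here is a small sketch. Note that since Go 1.22 the `for i := ...` form gives each iteration its own variable, so to reproduce the shared-variable behavior on any toolchain, `i` is declared outside the loop here. The point is that closures capture the variable itself, not its value at creation time, and no goroutines are needed to observe it:

```go
package main

import "fmt"

// makeClosures builds three closures over a single shared variable i. This
// mimics how `for i := ...` loops behaved before Go 1.22: every closure sees
// the same i, so by the time any of them runs, i has already reached 3.
func makeClosures() []func() int {
	fns := make([]func() int, 0, 3)
	var i int // one variable, shared by every closure below
	for i = 0; i < 3; i++ {
		fns = append(fns, func() int { return i })
	}
	return fns
}

func main() {
	for _, fn := range makeClosures() {
		fmt.Println(fn()) // prints 3 three times, no goroutines involved
	}
}
```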
You think you're working with a value but you're actually working with a reference to a value under the hood.
Such a nice place to work, where you can just decide "Let's implement thing A in a completely new stack for us that shows promise" and then, after some time, say, "Ah... this is too hard, bad decision though. Let's try another one"
I have a hard time with this point. It feels to me like a lot of books have A LOT of unnecessary padding all over the place.
The example of taking 28 words and turning them into 120 is pretty good at showing this. The first paragraph is totally pointless: we are reading a book about the 100 most common mistakes, obviously this mistake is very common, so how did this increase the value?
Then we have another line explaining what happens in the code, which is totally useless because the code is super trivial.
Then the code, with more explanations on the side as if the previous line was not clear.
And only after that we get to the crux of the issue.
I understand that book publishers feel they need to justify the price of a book by reaching the 300-page mark in some way or other, but in my view this only makes the book worse.
> The first paragraph is totally pointless: we are reading a book about the 100 most common mistakes, obviously this mistake is very common, so how did this increase the value?
There are different levels of common mistakes, and this one is probably one that all devs have made at some point. So I think highlighting the fact that it's a frequent one does make sense.
> Then we have another line explaining what happens in the code, which is totally useless because the code is super trivial.
I have a rule: always explain the intention of the code. Even if it's 5 lines of code, it helps the reader better understand what we want to highlight.
> Then the code, with more explanations on the side as if the previous line was not clear.
The explanations on the side don't impact the size of the book, so the argument doesn't hold. I did this in many code snippets to highlight where the reader needs to focus.
> I understand that book publishers feel they need to justify the price of a book by reaching the 300p mark in some or other way
This is more about guiding the readers, making sure the expectations are crystal clear and that they can follow me throughout an explanation. You judge it as a criterion to justify the price of the book, but that's not the real reason. At least not for my book, and I'm sure it's the same for many others :)
Sure, but this holds true for the blog version as well, right?
To be clear, I'm not advocating for The Little Schemer version, and am not arguing that the blog version is the best it can be, but surely we can agree that the book-padding phenomenon does exist.
By the way, I have read parts of your book over at O'Reilly Learning, and I do think it is a good book. So I'm not trying to take a dump on your work. My criticism is aimed at publishers.
Instead, my DE told me multiple times that it's better to favor just-in-time teaching over just-in-case teaching. Meaning, multiple times, he made me drop certain sections because they weren't really serving the chapter. They were "perhaps helpful," and he made me drop all of those.
I guess it also depends on who you're working with and which publisher. On this aspect, Manning was fair, imo.
I've worked with editorial teams and I'd rather have that than PDFs and/or Word files without version control.
On a different note, I definitely feel your pain regarding the copyeditor.
I usually avoid them by not using go. Or waiting until missing features (generics) are added.
That means you can't find a language with no mistakes. But across all languages, there are some with astronomically more gotchas and mistakes than others.
Golang is one of the astronomical ones. I mean, just compare it to a language with zero runtime errors.
Probably not. All those languages failed.
It's definitely proof that software can be written in such a regime, though, and I hope we see something similarly dogmatic some day.
The proof is in the pudding. Here's a quote from Rob Pike, the creator of Golang:
“The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
Basically, in a nutshell, he's saying they dumbed down Golang so it's usable by beginners. Golang is a step backwards. A failure in language development but a success in popularity.
You dumb down a language to the point where it hits the largest demographic. You are part of that demographic. It's similar to the demographic that voted for Trump because he's not fakeish like all the other candidates.
I think Pike is acknowledging the practical realities of engineering at scale, and intentionally designed Go with simplicity in mind, which leads to more maintainable code and faster onboarding for new devs.
I'll also add that outside of the popularity metric, Go is not all bad. Fast compile times, readability, excellent standard library and toolchain, backward compatibility, to name a few things.
Doubt it. Read what he wrote. He's literally referring to people without much experience in programming. The stuff you said is literally NOT what he said.
>I'll also add that outside of the popularity metric, Go is not all bad. Fast compile times, readability, excellent standard library and toolchain, backward compatibility, to name a few things.
I agree with readability and fast compile times.
Your argument that Go is a step backward because it was intentionally designed to be simple for novice programmers seems flawed. Its design was a deliberate tradeoff to address a specific problem. While I don't think it is a language that should be used for everything, it is good at the things it is good at.
What is it about Go that you have a problem with, specifically?
1. Around the 20 minute mark: https://learn.microsoft.com/en-us/shows/lang-next-2014/from-...
> The "brilliant" language he refers to here is C++, which I'm sure you're aware has many of its own downsides.
No. The brilliant language he’s referring to is a hypothetical one he could have created. Instead he created Golang because he needed a language catered to people with less experience. That is what he is saying. I find it strange that you can literally read what is written and also reference the video and still not understand what was literally said/written.
Maybe you’re just making up meaning subconsciously to cater to your own biases rather than facing the cold hard truth that Pike created Go to be not “brilliant”.
> What is it about Go that you have a problem with, specifically?
Oh, there’s tons of stuff. One is that errors don’t have stack traces. You create an error that can’t be handled, and it bubbles up the stack until the only way to handle it is with a panic. You see the panic in your logs, but now you have no idea where the error came from because there's no stack trace. You get the trace of the panic but no trace of the error. The whole thing is just poorly thought out.
I've been a lot of places and done a lot of things but I've never had somebody liken me to a Trump voter because I like a programming language. Is this the new Godwin's law? Did panic() and nil kill your grandpappy?
He's saying that developers can't handle Coq (the brilliant language), so they had to build a language that is like every other commonly used language, for better or worse.
> A failure in language development
As brilliant as Coq truly is, he's not wrong, is he? It is no coincidence that nobody is using Coq to build web servers. Which is, after all, what Pike said Go was designed for – that it was not intended to be a general purpose programming language. The vast majority of developers, even outside of Google, truly can't grasp it... And even the scant few developers who can will tell you that the tradeoffs aren't worth it for something like a run-of-the-mill CRUD web server.
Your perspective, as a researcher who does understand Coq, is interesting from an academic angle, but Pike's point is that you don't understand the realities of engineering. This "Use Coq or you are no better than a Trump supporter" shows he was exactly on point. Cry as you might, nobody is going to be using Coq to build web servers, and for good reason.
Stands to reason. It is true that I do spend my time on the engineering side of the industry. While I have great appreciation for the brilliant languages, they don't offer a whole lot for practical production work after you've weighed the tradeoffs. Especially in the particular niche Go is designed for. You are going to use a blub language like Rust for those types of problems, and for good reason.
> And we both know Pike is not even referring to coq.
Lean, then? The brilliant list isn't terribly long. We do know he isn't talking about Scala and Haskell, at least. He lumps them in with C++ and Java – albeit he has expressed that they are more beautiful. Not that anyone would consider them brilliant anyway. Well, maybe if you consider Trump to be also brilliant... There is always that guy.
Exactly. It follows the same basic "loop, variable, function" programming model as Java, C++, Haskell, PHP, Ruby, Rust, Python, LISP, Smalltalk – basically every language you've seen production code written in, and one that is familiar to early career beginners who have come out of traditional learning paths (e.g. college). Once you understand one of them, you can jump into another with minimal overhead. None of these languages is brilliant, but they are useful. Which is where he said they wanted Go to fit as well: a language that is useful and familiar.
That isn't what researchers and language theory enthusiasts want. They are enthralled by languages that think about programming in an entirely different way. The key point here is that it wasn't built for them. That is what he said.
You also talk about looping for languages with no loops. And additionally, Pike never brought up looping at all. You just made that part up out of thin air. Your evidence is made up. He never said or referenced any of the things you said.
> The key point here is that it wasn't built for them. That is what he said.
No, read what he said again. He didn’t say Golang was not designed for language experts. He said it was designed for programmers just out of school with barely any experience with programming languages. He did not say he designed Golang for an average engineer who is not a PL expert; he said he designed Golang for people literally just out of school.
I don’t know how you can make stuff up out of thin air like this. Read what he literally said.
In fact, I would argue that the gotchas are an intentional part of Go's design philosophy. I think it is strange to work in Go when coming from another language because of this, which leads people to think Go sucks.
I mean, Go does suck. Maybe a language should be accommodating to bad designs. But still, those shouting "Go sucks" from the rooftops never seem to be willing to bring introspection into where they might have failed. It's always someone else's fault. (Something not limited to the case of Go, of course)
Why have functions return err, nil? Why even allow for a runtime error here? It's a really simple fix. You don't even have to make the language complex to support this. Instead, the entire program is littered with holes and if statements you have to check in order to prevent an actual crash.
Why not? It doesn't make any difference in practice. Without a complete type system you must write tests to ensure that error conditions (to stay with your example, although this applies broadly) do what you need of them. If you somehow introduced a runtime error there, your tests couldn't fail to notice. Whether your compiler cries or your test suite cries when you screw up is not a meaningful difference.
> You don't even have to make the language complex to support this.
A complete type system is insanely complex to implement and even harder to write against.
Without a complete type system, all you can have is silly half-measures. Maybe the error becomes an optional/result type with forced unwrapping, for example, but you still haven't asserted in the types what needs to happen with the error. So you still need to write the same tests that you had to write anyway. So, other than moving where you discover the problem – from your tests to the compiler – nothing has changed.
The half-measures are a cute party trick, I'll give you that, but they make no real difference when actual engineering is taking place. They might, however, give a false sense of security. They might even convince you that you don't need to write tests (you do). Maybe those make for desirable traits?
To prevent a runtime error. You say it doesn't make any difference in practice, meaning you've never had a runtime error while running Go? Impossible.
>A complete type system is insanely complex to implement and even harder to write against.
Who says you need a complex type system? You just need exhaustive evaluation of sum types. That's one feature, that's it.
Removing runtime errors doesn't mean building the most complex type system in the world.
I am not sure I have written enough Go to comment there, but I have worked extensively in other languages where runtime errors are possible, similar to Go in that regard. I have encountered runtime errors in said tests now and then, sure, but then you know about it and deal with it... So, in practice, no different than if the compiler told you that there is a possible runtime error.
> Who says you need a complex type system?
It is needed if you want to avoid the need for said tests. With a complete type system the type system can become your test suite, so to speak. But the languages people normally use, even those with "advanced" type systems, are nowhere near expressive enough for that. Meaning that you have to write the tests anyway. And then you'll know if there are any runtime errors as soon as you run your tests because how could the tests run without encountering the runtime error too? It is not like a CPU magically changes how it works if it detects that a test is being run. So, in practice, the type system doesn't change the outcome. But it is a cool party trick. I'll give you that.
That said, aside from these hand-wavy, make-believe stories, you are still very right that Go would benefit from sum types. For the reason that they map to the human model of the world very well, succinctly communicating structures that are often needed to be expressed. Languages are decidedly for humans. You can sort of work in the same basic idea in Go using interfaces, but it is far more confusing to read and understand than sum types would be. For a language that claims to value readability...
If your program has runtime errors then that means you can deploy it to production and catch your errors in production.
If your compiler catches all possible runtime errors and refuses to compile, then you will have no runtime errors in production, guaranteed by proof. The program cannot even exist with runtime errors; it can only exist without them.
So no difference catching errors in production vs. compile time? I beg to differ. Big fucking difference imho.
> It is needed if you want to avoid the need for said tests.
I’m referring to the fact that you don’t need a complex type system to design a language that will absolutely never have any runtime errors. You’re going off on a tangent about how you need a type system to have fewer tests, which is completely different from what I’m talking about. This entire paragraph reads like you’re responding to an irrelevant topic.
> I am not sure I have written enough Go to comment there
Honestly, it seems that you haven’t just written too little Go, but barely any programming language. It seems that you’re not clear about runtime errors, and you seem to have only encountered these types of errors during tests. So yes, you don’t have much experience, IMHO, and Rob Pike deliberately targeted the language at people like you.
That raises the question: why are you allowing your programs to be deployed when tests are failing? This is not a realistic scenario in the real world. Yes, you can invent contrived hypotheticals all day long, but it is meaningless. We've been clear that we are referring to practical settings.
But, but, what if there is a bug in your compiler that lets the runtime error slip through??? Who gives a shit? In some imagined world it may be possible, but it is not realistic. Not worth talking about.
> I’m referring to the fact that you don’t need a complex type system to design a language that will absolutely never have any runtime errors. You’re going off on a tangent here
What you are referring to is clear, but it cannot be considered in a vacuum. The alternative is to see the program keep plodding along, but do the wrong thing. In that case, who cares if the program crashes instead? You're getting incorrect behaviour either way.
What you actually need is assurances that the program won't do the wrong thing top to bottom. That requires either a complete type system or, more realistically in the real world, testing. If you go the testing route, you'll know about any runtime errors when you run your tests.
> Honestly it seems that you haven’t just not written enough go.
There is nothing unique to Go here. Many popular languages suffer the same problem. But, if we want to place extensive Go experience as a requirement to speak to this then we have to defer to your experience. Perhaps you can choose an example of where you wrote code in Go that produced a runtime error, show us your tests, and explain how the condition evaded your checks and balances? – I'm fascinated to learn how your code ran perfectly while under test but then blew up in production.
Tests don’t catch everything. You can have a billion tests and there can still be uncaught runtime errors.
If you had a language that provably does not have runtime errors, you don’t even need one test. Your program cannot fail in that way.
I honestly don’t think you know what you’re talking about. I didn’t make up a single hypothetical. This is real. Production errors can happen in spite of tests. Are you not familiar with this happening? It just means one thing: no experience.
Your compiler having a bug or not is orthogonal to the topic. Again, you don’t know what you are talking about. If a compiler allows for runtime errors but is fully correct, then no amount of tests can guarantee a runtime error will never happen. Even with a fully correct compiler, Golang can never be guaranteed free of runtime errors by tests.
> What you are referring to is clear, but it cannot be considered in a vacuum. The alternative is to see the program keep trodding along, but do the wrong thing. In that case who cares if the program crashes instead? You're getting incorrect behaviour either way.
You’re writing this because you don’t have experience with programs that can never crash. A program that doesn’t crash doesn’t mean you never exit the program. The program can exit if you want it to; you just need to deliberately tell it to. In Golang, if you do a division by zero, the program crashes. If you had sum types, all divisions would return an optional. Both paths of the optional must be handled by exhaustive matching, so you must handle the case where the division yields a number and the case where it's undefined. If you want the program to exit when it is undefined, you can do so. In Golang, the compiler doesn’t force you to handle both outcomes; it just crashes. It’s the same with out-of-bounds access of an array.
Again, real-world testing doesn’t guarantee shit. A “complete type system” can be as extensive as dependent types, like Coq's, or much simpler, like Rust's, where you just have sum types and exhaustive pattern matching.
> Perhaps you can choose an example of where you wrote code in Go that produced a runtime error, show us your tests, and explain how the condition evaded your checks and balances?
Oh, easy. We had a function that calculates velocity from a stream of input data. That’s (p2 - p1 / t2 - t1). Our integration tests and unit tests had dozens of cases that never yielded an error, and we never saw an error in production for years. Then we switched to a new IoT device that sometimes sent identical measurements to our system. Division by zero. We had a crash in production.
> I'm fascinated to learn how your code ran perfectly while under test but then blew up in production.
You’re inexperienced; that’s why you’re fascinated. If you have a formula involving velocity, there's an almost infinite number of parameter combinations that will never produce a runtime error and an infinite number that do. True full coverage that completely proves the function works with tests would require infinite tests. Better to prove the function works via proof, with a simple extension to the type system: sum types.
Again, they will catch your runtime errors if your behaviour is covered. If your behaviour isn't covered, then you're just shifting the problem to the program doing the wrong thing instead of crashing. That is not a win. It might even be worse! So, this doesn't matter in practice. Your purely academic view of the world doesn't work with the discussion taking place, I'm afraid.
> If you had a language that provably does not have runtime errors you don’t even need one test.
Go on.
Here, let's use your example:
> That’s (p2 - p1 / t2 - t1).
Traditionally, the calculation is (p2 - p1) / (t2 - t1). I'll assume you had a non-standard situation that necessitated a different formula, but this could have equally been a mistake. It wouldn't be too hard to forget the inner parentheses. We'll assume that divide by zero was already eliminated by the type system, but now show us how sum types would avoid someone from making that mistake.
Maybe you did need tests after all...
> You’re writing this because you don’t have experience with programs that can never crash.
Not so. I spend most of my days writing code in programming languages that do provide such guarantees. It is a cool party trick, but doesn't really matter at the end of the day because they still don't offer the expressiveness to ensure that the program does what is expected of it. I still have to write tests, and once I've written the necessary tests to ensure all the behaviour is correct, there is no practical way you can miss a crash situation.
> Our integration tests and unit tests have dozens of tests that never yielded an error and we never saw an error in production for years.
And had you avoided the crash you'd get erroneous results from the function instead. You're not really any farther ahead. You still need assurances that the function actually behaves correctly. And if you had those assurances, you'd have caught the divide by zero condition.
You're not going to convince me that a complete type system is academically better. I already agree with that. But absent a complete type system, you're going to have to resort to tests. Once you've written those tests, you're going to uncover the runtime errors anyway.
> Division by zero. We had a crash in production.
Your fuzz tests never tried passing in values that would lead to division by zero? For such a simple function that has many possible states that can lead to that condition, that seems completely inconceivable. Hell, I just tried it for fun and it found the issue in less than 100 tries! This must have been the time you were talking about where you deployed to production without running the tests?
But let me try to be clear: the compiler warning you that you haven't considered a division by zero case does not mean you've handled it correctly. Absent a complete type system, you still need tests to ensure that the behaviour is consistent with expectations even in those edge cases. But with those tests, runtime errors can't go unnoticed anyway, so you didn't really need the type system.
> Better to prove the function works via proof
Agreed. Complete type systems are unquestionably better theoretically. Writing tests is tedious. But it remains that with the languages people actually use, even those with "advanced" type systems, you can't prove much. You have to fall back to testing, and at that point you're going to uncover the runtime errors too.
> You’re inexperienced that’s why you’re fascinated.
There's a good way to change that. Let's see your code!
So you're saying write tests that cover every possible behavior. Makes sense, right? It's like saying "write code without any bugs." Simple! You're not getting it. You can run around telling people to write tests that eliminate 100% of bugs, but if you think that will eliminate all bugs from the world, well, you're just not experienced.
> Go on. To continue with the original example, I have a function that tries to write to a file. If that fails, the caller is to try to write to a file on a different device. If the caller does anything else the program is broken with serious consequences and should not be shipped to production. Express that expectation using sum types. Hell, express it using any type construct available in popular languages. Good luck!
You can do this in Rust. Literally, it's the core of the Rust sum type system. Good luck? Have you done basic programming with Rust? Here's some pseudocode:
match getFile(fileName) {
    Some(file) => do something,
    Error => match getFile2(fileName2) {
        Some(file) => do something,
        Error => exit(),
    },
}
The above is pseudocode. The thing with the match operator is that the program does NOT compile if you do not handle both SOME and ERROR. In Go you can handle the Some alone, and then it crashes if there's a problem. You aren't required to explicitly handle it.

> Not so. I spend most of my days writing code in programming languages that do provide such guarantees. It is a cool party trick, but doesn't really matter at the end of the day because they still don't offer the expressiveness to ensure that the program does what is expected of it. I still have to write tests, and once I've written the necessary tests to ensure all the behaviour is correct, there is no way you can miss a crash situation.
There is a way. You're just not getting it. There are about infinite ways to crash a program that has runtime errors.
> Your fuzz tests never tried passing in values that would lead to division by zero? For such a simple function that has many possible states that can lead to that condition, that seems inconceivable. Hell, I just tried it for fun and it found the issue in less than 100 tries! This must have been the time you were talking about where you deployed to production without running the tests?
Hell, I used a programming language with no runtime errors and I didn't write a single test. Amazing! There are tons of functions complex enough that your fuzz test will miss the issue. Again, we had this code working for years because we implicitly assumed those devices would never pass duplicate data.
Also, we don't write fuzz tests. We just do basic testing. Fuzz testing is something our startup doesn't have time for. We would prefer guarantees without the need for extra testing/work/time in this area.
> Traditionally, the calculation is (p2 - p1) / (t2 - t1). I'll assume you had a non-standard situation that necessitated a different formula, but it does serve as a great example of how behaviour is your real concern. One could easily input the formula you gave where they expected (p2 - p1) / (t2 - t1) and sum types wouldn't care one bit.
Nope, your formula is correct. I just assumed you were intelligent enough to know what I meant even though I didn't put in all the parentheses (I'm typing on my phone after all). I thought wrong.
> Agreed. Complete type systems are cool. Writing tests is tedious. But we await your proof to my case for a realistic setting where one uses a typical production programming language. If all you have is silly half-measures that only cover a small number of cases, you're not really proving much. All you are doing is giving yourself a false sense of security.
Yes, Rust. Jesus. You're so inexperienced you don't even know it when it's standing right in front of your face. You don't need the borrow checker from Rust. You only need the sum type. Then take the sum type and apply it to division by zero, out-of-bounds array access, and all IO calls. Boom, that's it. No more runtime errors.
> There's a good way to change that. Let's see your code!
Just look at Elm, man. You don't even know what I'm talking about because you literally don't have experience. You want to see code that never crashes? Get some experience with Elm and you'll see why it never crashes, and you'll see it doesn't take a "complete" type system to make it that way. Elm's type system is woefully simplistic.
Rust is like 80% of the way there... the reason people don't use it is lifetimes, the borrow checker, and the complexity associated with them. Additionally, Rust left some holes so it can crash (like division by zero), but it has all the primitives needed to prevent it.
Have you? I am sorry that the good luck did not shine upon you. With the types remaining intact, I modified the (pseudo)code:
match getFile(fileName) {
    Some(file) => do something,
    Error => do something else unintended,
}
It still compiles. You failed.

> The thing with the match operator is that the program does NOT compile if you do not handle both SOME and ERROR.
But as you come down off your hubris, you can now see above that it will compile even when you screw up the error handling. So you haven't gained anything. You still need to write tests to ensure that you are actually doing the right thing. And once you've ensured you're doing the right thing, how do you think crashes are going to get past that? Right. Not going to happen.
> Fuzz testing is something our start up doesn't have time for.
Testing is testing is testing. If you have time to write tests, you have time to write fuzz tests where they are appropriate. To throw a kind of test out the window just because it has a slightly different execution model (provided by the language; not something you have to build yourself) is bizarre. In fact, in your case it seems it could have supplanted the other tests you wrote, saving you time not only while writing the code, but also later, when you had to waste time dealing with the issue. Time clearly isn't as constrained as you let on.
> For golang you can handle the Some and then it crashes if there's a problem. You aren't required to explicitly handle it.
Technically, in Go 'Some' should always be valid, regardless of whether or not there is an error. That is a fundamental feature of the Go language: given (T, error), the two values are not dependent. You don't need to explicitly handle the error. That is a huge misunderstanding. The same would not be true in Rust, which does treat them as dependent by design, but Go is a completely different language. You can't think of Go as Rust with different syntax. There is a lot more to languages than the superficial.
Your modification makes no sense. "Do something else unintended"? Wtf does that even mean? What are you doing? Why don't you spell it out? Because in Go you can do this:
v, _ := getFile(fileName) // error discarded; Go compiles this happily
v.read()
And that's a fucking crash. You understand examples are used to illustrate a point, right? And your example shows you missed the point. Hey, why don't I insert some pseudocode called "blow up the earth" in my program, and that disproves every point ever made by anyone, and I'm right. Genius.

> But as you come off your hubris, you can now see above it will compile even when you screw up the error handling.
Think of it like this. The point I'm illustrating is that in Rust, you have to handle an error or the program won't compile. In Go, you can forget to handle an error and your program will compile. You're going to have to write a bunch of tests to only POSSIBLY catch a missed error-handling case. Understand? I don't think you do.
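A small compilable Rust version of the point above; `get_file` and its behavior are invented for illustration:

```rust
// Stand-in for a fallible file lookup; the name and the String error type
// are made up for this example.
fn get_file(name: &str) -> Result<String, String> {
    if name.is_empty() {
        Err("no file name given".to_string())
    } else {
        Ok(format!("contents of {name}"))
    }
}

fn main() {
    // Deleting the Err arm below is a compile error (non-exhaustive match),
    // so "forgot about the error case" cannot slip through silently.
    // Ignoring the error is still possible, but it has to be written down.
    match get_file("data.txt") {
        Ok(contents) => println!("{contents}"),
        Err(e) => eprintln!("fell back because: {e}"),
    }
}
```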
>Testing, is testing, is testing. If you have time to write tests, you have time to write fuzz tests where they are appropriate. To throw a test out the window just because it has a slightly different execution model (as provided by the language; not something you have to build yourself) is bizarre. In fact, in your case it seems it could have supplanted the other tests you wrote, actually saving you time not only while writing the code, but also later when you had to waste time dealing with the issue. Time clearly isn't as constrained as you let on.
It's not bizarre. It's, again, lack of experience on your end. Why would I want to spend time writing generic test code that executes off of fuzzed input? I can write test-specific code for specific use cases, and that's much faster than attempting to write tests that work for generic cases.
Also, how about not writing tests altogether? I mean, that's the best solution, right? Honestly, not to be insulting here, but it's not at all bizarre that you're not seeing how a better type system is better than tests that check for runtime errors. The root of it is that you're just stupid. Why jump through a bunch of hoops and call what I'm saying "bizarre"? Just be straight with me. We're both mature, right? If I think you're truthfully stupid and you think the same of me, just say it. We can take it. Why dance around it by calling my points "bizarre"? No, your points aren't "bizarre." They are stupid and wrong.
> Technically, in Go 'Some' should always be valid, regardless of whether or not there is an error.
That's why Go is bad. You don't need to handle the error when err is not nil, and v will be nil here. And you know the only thing you can do with a nil besides check whether it's nil? Crash the program. Literally. With Rust, you can do this:
match getFile(fileName) {
    Some(file) => do something,
    Error => {}
}
and do nothing, which has the same effect as in Go. But Rust at least makes you acknowledge it explicitly.

> You can't think of Go as Rust with different syntax. There is a lot more to languages than the superficial.
It's not about what I think of the language. It's about the intention of the designers. Go was made for people without much experience. Straight from the horse's mouth. Pike said he designed it for you.
> That is a fundamental feature of the Go language.
I think you're kind of not getting it. Seriously, the feature of Go is to allow you to unintentionally crash the program, and you think that's a good thing?
v, err := getFile(fileName)
_ = err // the error is never actually checked
doSomething(v)
Take some time to think here. I know you think you're smart, but you need to hit the brakes for a second. Think: what is the purpose of the above code? If err actually is not nil, and v ends up being nil, what is the purpose of this type of logic even existing? Is it for v to crash somewhere in doSomething? Are you saying that it's a fundamental feature of Go to crash somewhere inside doSomething? Really think about this. You literally said it's a feature of Go to not handle an actual error and for v to still be "valid." So if err is not nil, v is nil. What happens here? You think this is a feature? Or are you just not thinking straight? Just pause for a second.
Another thing to help you along: you know the inventor of the null/nil reference called it his billion-dollar mistake, right? Have you thought about why it's a huge mistake? Here's a hint: you can't do anything with a null/nil except check whether it's null, or crash the program by using it improperly. The existence of nil/null signifies the existence of a feature that you can only use to crash your program unintentionally.
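To make that concrete, here's how Rust's Option<T> replaces a nullable value; `find_user` and its data are invented for the example:

```rust
// Hypothetical lookup: Option<T> is the sum-type replacement for a
// possibly-nil pointer. There is no inner value to dereference until
// the None case has been dealt with.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    // find_user(2) is an Option, not a nullable &str; calling a string
    // method on it directly would not compile.
    let greeting = match find_user(2) {
        Some(name) => format!("hello, {name}"),
        None => String::from("no such user"), // the "null" case, spelled out
    };
    assert_eq!(greeting, "no such user");
    println!("{greeting}");
}
```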
Hopefully you get it now. If not I can't help you.
The reason the books don't exist for <highbrow language of choice> is that there are only 50 programs written in it, 49 of which are tooling for the language.
And languages with no room for mistakes have their own issues, like readability or productivity. But I don't have any experience with those; what language(s) are you thinking of? I don't know Rust myself, but it seems more "bolted down" when it comes to that aspect.
There is no honesty in praising yourself, no matter what you did and achieved.
Honesty is when you pay tribute to someone else by saying he is a source of inspiration.