I also dislike TDD but for a different reason: it incorrectly assumes that spec comes before code. Writing code is a design act too. I talk about that in Street Coder.
1. write the test code first (possibly with a skeleton implementation) if you want to get an idea/feel for how the class/code is intended to be used;
2. write the code first if you need to;
3. ensure that you have at least one test at the point where the code is minimally functional (a rough sketch of points 1 and 3 follows below).
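A rough sketch of points 1 and 3 in Python, pytest-style (the `Slugifier` class and its API are invented for illustration, not from any real project):

```python
# Hypothetical example: the test is sketched first to get a feel for how the
# class is meant to be used, then a minimal implementation is filled in so
# that at least one test passes once the code is minimally functional.

class Slugifier:
    def slugify(self, title: str) -> str:
        # Minimal implementation -- just enough for the first test to pass.
        cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
        return "-".join(cleaned.split())


def test_slugify_basic_title():
    # Written before/alongside the implementation; it documents intended usage.
    assert Slugifier().slugify("Hello, World!") == "hello-world"
```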
More generally:
1. don't aim for 100% code coverage (around 80-90% should be sufficient);
2. test a representative example and appropriate boundary conditions;
3. don't mock classes/code you control... the tests should be as close to the real thing as possible; otherwise, when the mocked code changes, your tests will break and/or fail to pick up the changes to the logic -- Note: if wiring up service classes, try to use the actual implementations where possible;
4. use a fan in/out approach where relevant... i.e. once you have tests for the various states/cases in class A (e.g. lexing a number: '1000', '1e6', '1E6'), you only need to test the cases that are relevant to class B (e.g. the token types -- integer/decimal/double -- rather than every lexical variant); see the first sketch after this list;
5. test against publicly accessible APIs, etc... i.e. wherever possible, don't test/access internal state; look for/test publicly visible behaviour (e.g. don't check that the start and end pointers are equal, check that is_empty() is true and length() is 0; see the second sketch after this list) -- Note: testing against internals is subject to implementation changes, whereas public API changes should be documented/properly versioned.
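First sketch, illustrating the fan in/out idea from point 4 (the `Lexer`/`Parser` classes and token kinds here are hypothetical and heavily simplified):

```python
# Hypothetical lexer/parser pair to illustrate fan in/out.
import re
from dataclasses import dataclass

@dataclass
class Token:
    kind: str   # e.g. "INT", "FLOAT"
    text: str

class Lexer:
    def lex_number(self, text: str) -> Token:
        if re.fullmatch(r"\d+", text):
            return Token("INT", text)
        if re.fullmatch(r"\d+(\.\d+)?[eE]\d+|\d+\.\d+", text):
            return Token("FLOAT", text)
        raise ValueError(f"not a number: {text!r}")

class Parser:
    def parse_number(self, token: Token):
        return int(token.text) if token.kind == "INT" else float(token.text)

# Class A (Lexer): test the lexical variants exhaustively here...
def test_lexer_number_variants():
    lx = Lexer()
    assert lx.lex_number("1000").kind == "INT"
    assert lx.lex_number("1e6").kind == "FLOAT"
    assert lx.lex_number("1E6").kind == "FLOAT"

# ...Class B (Parser): only the token kinds matter here, not every spelling.
def test_parser_number_kinds():
    p = Parser()
    assert p.parse_number(Token("INT", "1000")) == 1000
    assert p.parse_number(Token("FLOAT", "1e6")) == 1_000_000.0
```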
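Second sketch, illustrating point 5: assert on publicly visible behaviour rather than reaching into internal state (the `RingBuffer` is again an invented, simplified example):

```python
# Hypothetical ring buffer to illustrate testing behaviour, not internals.
class RingBuffer:
    def __init__(self, capacity: int):
        self._data = [None] * capacity
        self._start = 0   # internal detail: representation may change later
        self._end = 0
        self._count = 0

    def push(self, item):
        # Simplified: overflow handling omitted for brevity.
        self._data[self._end] = item
        self._end = (self._end + 1) % len(self._data)
        self._count += 1

    def is_empty(self) -> bool:
        return self._count == 0

    def length(self) -> int:
        return self._count


def test_new_buffer_is_empty():
    buf = RingBuffer(capacity=4)
    # Good: observable behaviour via the public API.
    assert buf.is_empty()
    assert buf.length() == 0
    # Avoid: assert buf._start == buf._end -- this breaks the moment the
    # internal representation changes, even if the behaviour does not.

def test_push_makes_buffer_non_empty():
    buf = RingBuffer(capacity=4)
    buf.push("x")
    assert not buf.is_empty()
    assert buf.length() == 1
```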
There is software for which writing code is a design act, and there is software for which you write specs before anything. I don't know if a) they are the same, b) they are different, c) one is better than the other.
Recognizing and understanding that there's a larger problem with discounts is systems thinking. Fixing the code so that all discounts are applied in a predictable order, rather than just fixing the specific issue reported by a user, is systems thinking. Ditching the individual tests that independently cover the user-reported bug's input/output, and replacing them with a test that covers the discount application ordering actually intended, expected, and (hopefully) implemented by the code, is systems thinking.
Maybe that doesn't (or does?) illustrate the "Stop Hunting In Tests" concept, but I thought it was important nonetheless.
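To make the testing part concrete, something along these lines (the `apply_discounts` function, the ordering rule, and the discount shapes are all invented for illustration, not anyone's actual code):

```python
# Hypothetical discount pipeline: the point is to test the *ordering rule*
# the code is supposed to implement, not one reported input/output pair.
def apply_discounts(price: float, discounts: list) -> float:
    # Assumed rule for this sketch: percentage discounts apply before
    # fixed-amount discounts, regardless of the order they arrive in.
    ordered = sorted(discounts, key=lambda d: 0 if d[0] == "percent" else 1)
    for kind, value in ordered:
        if kind == "percent":
            price *= (1 - value)
        else:  # "fixed"
            price -= value
    return max(price, 0.0)


def test_percentage_applies_before_fixed_regardless_of_input_order():
    # One test for the systemic property, instead of a pile of tests
    # pinned to individual user-reported cases.
    a = apply_discounts(100.0, [("percent", 0.10), ("fixed", 5.0)])
    b = apply_discounts(100.0, [("fixed", 5.0), ("percent", 0.10)])
    assert a == b == 85.0
```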
The author looks legit - or at least has contributions for over a year.
But GitHub is free & I don't know if they scan user repos for malware
Are .pdfs and .epub safe these days?
Ty for sharing your book, it's pretty fun
Depends on the viewer. Acrobat Reader? Probably not. PDF.js in some browser? Probably safe enough unless you are extremely rich.
But if you are insinuating AI made all this up on its own, I have to disappoint you. My points and my thoughts are my own, and I am very much human.
No worries, I am not a native English speaker myself. I was genuinely interested in whether commercial LLMs would use "bad" words without some convincing.
For comparison, I have also tried the smaller Mistral models, which have a much more complete vocabulary, but their writing sometimes lacks continuity.
I have not tried the larger models due to lack of VRAM.