I developed a lot of my problem solving skills in semiconductor manufacturing where the cost of a bad assumption tends to be astronomical. You need to be able to determine exactly what the root cause is 100% of the time or everything goes to hell really fast. If there isn't a way to figure out the root cause, you now have 2 tickets to resolve.
I'll throw an entire contraption away the moment I determine it has accumulated some opacity that antagonizes root cause analysis. This is why I aggressively avoid use of non-vanilla technology stacks. You can certainly chase the rabbit over the fence into the 3rd party's GitHub repo, but I find the experience gets quite psychedelic as you transition between wildly varying project styles, motivations and scopes.
Being deeply correct nearly all of the time is probably the fastest way to build a reputation. The curve can be exponential over time with the range being the value of the problem you are entrusted with.
But, most frameworks and libraries aren't built to be audit-grade robust, don't have enterprise-level compatibility promises, can't guarantee that there won't be surprise performance impacts for arbitrary use cases, etc.
Sometimes, a third party library (like SQLite) makes the cut. But frameworks and libraries that reach the bar of "this will give me fewer complications than avoiding the dependency" are few and far between.
A guy like you on a mission-critical team at a cutting-edge company is a godsend and will be a big part of why the project/company succeeds. The guy who wants to build his own ORM for his no-name company's CRUD app is wasting everyone's time.
I once unfortunately joined a project where an off-the-shelf ORM had been selected, but once development was well underway, the deep edge cases started to reveal serious design flaws in the ORM library. A guy wanting (perhaps not in a joyful sense, but more not seeing any other choice) to build his own ORM that was mostly API-compatible was what saved the project.
This was a long time ago. The state of ORM libraries is probably a lot better today. But the advice of ensuring that a library is SQLite-grade before committing to it does ring true even for simple CRUD ORMs. Perhaps especially so.
One of my favorite features of Entity Framework from my .NET days is that it's very easy to just break out of the ORM functionality, even from within an EF-specific function, or to have multiple instances with slightly different configuration (I never had to do that last bit but I know it was possible a decade ago).
But in your example, even if an ORM doesn't provide native breakout functionality, it should be obvious that you can maintain a bespoke path to the database for cases where the ORM doesn't fit. Where that isn't obvious to someone, perhaps 'creating their own ORM' isn't the waste of time you make it out to be, but is actually necessary education?
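To sketch what I mean: Entity Framework is .NET, but the same escape hatch exists in JPA/Hibernate, which matches the Java example further down the thread. A minimal sketch, with table and column names invented:

```
import javax.persistence.EntityManager;

public class MovieEscapeHatch {
    private final EntityManager em;

    public MovieEscapeHatch(EntityManager em) {
        this.em = em;
    }

    // Break out from within the ORM: a native query runs handwritten SQL
    // while still reusing the ORM's connection and transaction handling.
    public int releaseYear(long movieId) {
        Object year = em.createNativeQuery(
                "SELECT release_year FROM movie WHERE id = ?")
            .setParameter(1, movieId)
            .getSingleResult();
        return ((Number) year).intValue(); // driver may hand back Integer, Long, or BigInteger
    }
}
```

And if the ORM offered no such hatch, the bespoke path is just a second data-access class holding a plain connection to the same database.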
If you are No-name CRUD Company you're probably not hiring the million dollar per year devs who have learned all the lessons. You kind of have to accept that you are working with budget workers and thus learning on the job is going to need to take place, even if you wish that weren't the reality you face.
At the risk of going off on a tangent, the median dev salary is something like $100-150k/yr. So half of devs in the country make less than that. Gergely Orosz has a great discussion of this.[0] $1m/yr TC is the top 0.01% of the top tier of companies. Some FAANG-level tech firms are here but otherwise it's almost entirely IB, HFT, hedge funds, that sort of thing. I'd be shocked if anyone making close to $1m/yr TC is ever touching an ORM in their day job.
[0] https://newsletter.pragmaticengineer.com/p/trimodal-nature-o...
But, as I said, even if it isn't built-in, it doesn't make any difference, does it? Either way there is no reason to throw a perfectly good ORM out the window just because in some cases it isn't the right tool for the job. Surely you agree? That was my interpretation of the intent of your earlier comment.
While it may be true that ORMs today are of SQLite quality, the original commenter's point still stands: You need to make sure that is the case, else you are going to quickly wish that you did write it yourself.
> So half of devs in the country make less than that.
You may take things a bit too literally, but if you want to go down this road, do you truly believe that half of all devs have learned all the lessons there are to learn in software? Color me skeptical. 0.01% is likely even pushing it.
We slowly replaced it with Dapper and handwritten SQL, a simple migration versioning system, and database seeding with validation. Once that was done, startup time was cut by more than 10 seconds on a standard SSD and about 30 on CFast. Even finally replacing the database connection with SQLite standard libraries shaved off 2 seconds.
Entity Framework may be useful, but it drags down performance when the time to start using the software matters.
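A "simple migration versioning system" really can be simple. Here is a minimal sketch of the common version-table pattern, in Java rather than C# to match the other code in this thread; the schema_version table and the script map are my assumptions:

```
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Map;
import java.util.SortedMap;

public class Migrator {
    // Runs each numbered SQL script at most once, in ascending order.
    public void migrate(Connection c, SortedMap<Integer, String> scripts) throws Exception {
        try (Statement st = c.createStatement()) {
            // Dialect permitting; otherwise check the catalog before creating.
            st.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT PRIMARY KEY)");
            int current = 0;
            try (ResultSet rs = st.executeQuery("SELECT MAX(version) FROM schema_version")) {
                if (rs.next()) current = rs.getInt(1); // 0 when the table is empty
            }
            // Apply only the scripts newer than the recorded version.
            for (Map.Entry<Integer, String> e : scripts.tailMap(current + 1).entrySet()) {
                st.execute(e.getValue());
                st.execute("INSERT INTO schema_version VALUES (" + e.getKey() + ")");
            }
        }
    }
}
```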
Even back when it was launched EF was miles ahead of most mature ORMs of today, and I believe your 95% number. But other than EF plus a handful of other mature ORMs, the 95% number looks more like 50%.
I would even argue that new-ish ORMs are virtually useless for anything that's not CRUD, and that the CRUD part can be 100% replaced seamlessly by something like PostgREST/Supabase or Hasura without losing much.
I don't disagree with the feeling in general, but I feel like we are making mistakes by putting so much faith in modern ORMs, and even in libraries in general. Veeeeeeery few things even come close to being 1% as good as Entity Framework, ASP.NET, Rails, Postgres or SQLite.
I have a side project that uses Clerk for auth but basically every other Supabase product there is, and it really is great for smaller use cases. I don't know how it stacks up once you start needing really fine-tuned database permissions or functionality though.
I find that if you accept Supabase as-is, it can get you pretty far and save a lot of time and money.
And for edge cases, it's like you said above about ORMs, we don't have to throw it out, we just handle those cases separately.
There are a large number of fundamental impedance mismatches between relational data and object based data. Any ORM can fix some of them at the cost of ignoring others, but the fundamental character of ORMs is such that taking an opinionated line on tough tradeoffs is as good as you can hope for.
This is why ORM guy is wasting everyone's time - his problem is almost definitely not going to give him a unique or even valuable perspective on all of those tradeoffs.
Whether your application should map objects and relations at all isn't usually a question you get to ask, unless it doesn't do much or lives on its own private island. Whether you should do it yourself or lean on a toolkit to help is the question you actually have to contend with.
I believe Active Record is a more specific implementation of something that is ORM-like. We can stop speaking of Active Record since my point holds for the more generic ORM, and therefore holds for Active Record as well.
To clarify my point, there is a fundamental impedance mismatch between object mapping of data and relational database mapping of data. One implication of this is that you cannot use the database as a service. Interactions with the database must instead be gated behind the ORM, and the ORM controls the database interaction.
I'll note that database as a service is very powerful. For example, when an API contract exposes a value that is powered by some raw-dog SQL and the database changes, anything using the API does not need to change. Only the SQL changes. In contrast, when an ORM exposes an object, an attribute might sometimes be loaded, sometimes not. A change to load or not load that attribute ripples through everything that uses that object. That type of change in ORM-land is the stuff of either N+1 problems or null-pointer errors.
To back up a bit, let me re-iterate a bit about the impedance mismatch. Wikipedia speaks of this [1]: "By contrast, relational databases, such as SQL, group scalars into tuples, which are then enumerated in tables. Tuples and objects have some general similarity... They have many differences, though"
To drive the point home - in other words, you can't do everything in object world that you can do in a database, 1:1. A consequence of this is that the ORM requires the application to view the database as a persistence store (AKA: data-store, AKA: object store, AKA: persistence layer). The ORM controls the interaction with the database; you can't just use the database as a data service.
I believe this point is illustrated most easily from queries.
To illustrate, let's pull some query code [3] from Java's Hibernate, a prototypical ORM.
```
public Movie getMovie(Long movieId) {
    EntityManager em = getEntityManager();
    // movieId is already a Long; no need to re-box it
    Movie movie = em.find(Movie.class, movieId);
    // detach so the entity stops tracking the persistence context
    em.detach(movie);
    return movie;
}
```
So, getting a release year might look like this:
```
long movieId = 123; // long, so it autoboxes to the Long the method expects
Movie m = orm.getMovie(movieId);
return m.getReleaseYear();
```
In contrast, if we put some raw-dogged SQL behind a method, we get this code:
```
long movieId = 123;
return movieDao.getMovieReleaseYearByMovieId(movieId);
```
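For completeness, a sketch of what could sit behind that method: plain JDBC against an invented schema. The point is that the SQL stays an implementation detail behind the method signature:

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class MovieDao {
    private final DataSource ds;

    public MovieDao(DataSource ds) { this.ds = ds; }

    // Callers see only this contract; the schema can change underneath
    // as long as the query is updated to match.
    public int getMovieReleaseYearByMovieId(long movieId) throws Exception {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                 "SELECT release_year FROM movie WHERE id = ?")) {
            ps.setLong(1, movieId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next(); // sketch assumes the movie exists
                return rs.getInt(1);
            }
        }
    }
}
```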
Now let's take it further with the example of finding the release year of the highest-grossing movie. As a service, that looks like this:
```
return dao.findReleaseYearOfHighestGrossingMovie();
```
In contrast, with an ORM, you might have to load all Movies and then iterate. Maybe the ORM has some magic sugar to get a 'min/max' value, though. But we can go on: say we want to get the directors of the top 10 grossing movies. An ORM will almost certainly require you to load all movies and then iterate, or to start creating objects specifically to represent that data. In all cases, an ORM presents the contract as an object rather than as an API call (AKA, a service).
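For contrast, here is the SQL a DAO could hide behind those service methods (schema invented; LIMIT syntax varies by database). The ordering, limiting, and joining all happen in the database, so application code never loads and iterates Movie objects:

```
// Hypothetical queries behind findReleaseYearOfHighestGrossingMovie()
// and a findDirectorsOfTop10GrossingMovies() variant.
String topYearSql =
    "SELECT release_year FROM movie ORDER BY gross DESC LIMIT 1";

String topDirectorsSql =
    "SELECT d.name FROM director d " +
    "JOIN movie m ON m.director_id = d.id " +
    "ORDER BY m.gross DESC LIMIT 10";
```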
For the update case, ORMs often do pretty well. ORMs can get into trouble with the impedance mismatch when doing things like trying to update joined entities, for example, "update all actors in movie X". Further, ORMs (and objects) create issues of stale/warm caches, nullity, mutability, performance, and more... What is worse, all of this is intrinsic: relational data and objects are fundamentally different.
[1] https://en.wikipedia.org/wiki/Object%E2%80%93relational_mapp...
ORM and entity manager are different things – the latter, in turn, being a query builder combined with a few other features. Your code is really focused on the latter. While it is true that the entity manager approach is not the same as active record, the bounds between query building and ORM are, I think, even clearer here. In fact, your code makes that separation quite explicit. I can at least understand how ORM and query building get confused under active record.
> We can stop speaking of Active Record
While I agree in theory, since we are talking about ORM only, if we go by Wikipedia we cannot, as it ends up confusing active record and ORM as being one and the same. That is a mistake. But as my teachers, and presumably yours too, told me in school: don't trust everything you read on Wikipedia.
But we don't need to go to Wikipedia here anyway. Refreshingly, ORM literally tells you what it is right in its name. All you need to do is spell it out: Object-Relational Mapping.
Which is no doubt why most newer applications I see these days have trended towards carrying relations as far as they can go, only mapping with objects at the points where it is absolutely necessary.
> It's also rarely practical to take on the project of making a slightly different set of compromise
I suppose that is the other benefit of delaying mapping until necessary. What needs to be mapped will be more limited in scope and can be identified as such. You don't have to build a huge framework that can handle all conceivable cases. You can reduce it to only what you need, which is usually not going to be much, and can determine what tradeoffs suit it best. In this type of situation, using an ORM library is likely going to be the bigger waste of time, honestly.
My experience tells me that the largest among these impedance mismatches is the inability for OOP languages to express circular dependencies without resorting to messy hackarounds. Developers often fail to realize how far they are into the dragon's den until they need to start serializing their object graphs.
But as the idea and the project cement themselves, you start to see exactly where the biggest flaws are, and you might draw the conclusion that a lot of problems could be fixed at the ORM layer, so you opt to work on that.
Maybe it would have been obvious from the beginning, but chances are the people working on the codebase initially had a very different idea of what exactly is the ideal design, compared to later on in the lifetime of the project.
I would say it's not. Sure old ORMs still have their features, but newer ORMs and especially ORMs in newer languages have a fraction of the features of something like ActiveRecord or Entity Framework.
That's a bit orthogonal. Even if you use an ORM library, you'd be remiss to not put it behind a DAL. But from your DAL if you emit/accept objects of your own transformation: Congratulations, you've just invented an ORM. You can emit/accept relations, which is quite justifiable, but even then you are bound to have to map it to objects at some point. e.g. interfacing with third-parties that require objects. There is really no escaping ORM in any reasonably complex real-world application.
Generally, you can go a long way with simple, combined types which are closer to maps/hashes/dicts than to "objects" other than syntax (.attr vs ["attr"]).
And really, that would be my preference: combine a query builder (some ORMs have great ones too) with native types representing the data read from the database.
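A minimal sketch of that preference, assuming plain JDBC: the query text can come from anywhere (handwritten or a query builder), and rows come back as plain maps rather than mapped entities:

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RowQuery {
    // Each row becomes a column-name -> value map; no entity classes involved.
    public static List<Map<String, Object>> rows(Connection c, String sql) throws Exception {
        try (PreparedStatement ps = c.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            ResultSetMetaData md = rs.getMetaData();
            List<Map<String, Object>> out = new ArrayList<>();
            while (rs.next()) {
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.put(md.getColumnLabel(i), rs.getObject(i));
                }
                out.add(row);
            }
            return out;
        }
    }
}
```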
Agreed. Of course, strictly speaking, a relation is specifically a set of tuples. But if you are working with a SQL database, which has been implied, you are already long past that idea, so it is understood that we're speaking of the concept somewhat more loosely. An instance of a class with a set of basic properties nestled in an array would still reasonably be considered a relation as it pertains to this discussion, as far as I am concerned, and seemingly you too. Fair to say you haven't meaningfully changed the semantics of the data in that.
But that doesn't mean you won't need to map to objects. You almost certainly will at some point in a reasonably complex application, even if only to interface with third-parties.
It's overkill for small projects and not expressive enough if you're doing really complicated stuff. Even if you do have a good use case for an ORM currently, as your requirements grow it gets harder to hack on the stuff you need.
A lot of it is pretty basic: checking every single return code, testing every single branch, verifying that the external environment is doing what it claims or should be doing. All of this is muscle memory now, I find it difficult to write a throwaway python script without doing this. I also don’t feel like the degree of thoroughness I put into it significantly slows down development either compared to other developers that YOLO it a bit more; I spend a bit more time writing it, they spend a bit more time debugging it in test. And in prod, the former approach has lower defect rates.
It doesn’t need to be safety critical embedded software, which has a somewhat niche set of problems. Even fairly high-level data infrastructure has many of these same challenges.
We respond to incentives. If a developer's only incentive is "we reward shipping as fast as possible" then they will carelessly ship slop as fast as they can type it. If that incentive is removed, they can learn a better way...
However, to add onto this, I'm consistently shocked at how often it is much CHEAPER to "roll your own." We've done some reviews on systems after a few years, and the number of bugs and security vulnerabilities we experience with code based around packages is much, MUCH higher. It's hard to put a number to it because the time cost of fixing those issues is variable, but it's substantial. It's also amazing to me that it can be cheaper to build your own vs. using a 3rd party vendor for something that would appear to be highly specialized - of course, opportunity cost is a real thing.
The library space has become competitive, and people are running them as businesses. The goal is not to be correct or even good, but to be a "first mover" and sell tutorials, books, GitHub sponsorships, Patreon subscriptions...
It's bad not only in terms of security, but also in terms of developer experience.
I am constantly amazed at how little documentation things have, at how many BASIC cases they don't cover (let alone edge cases) and how many security holes those libraries have, and the number of dependencies just keeps getting bigger and bigger.
Another issue is that newer developers are being taught just the newfangled library and have zero experience with the foundations. The number of React devs who don't know how to use HTML forms without a library is nuts.
How could you be shocked? Everything that's happened in the software industry outside of medical/DoD has been about delivering features as fast as you can, quality be damned.
I have qualified my statement with "modern".
They are helpful if you have the same problems as that other company, but I worry about software that uses frameworks because the programmers don't feel confident building their own. It means when it comes time to evolve beyond the off-the-shelf architecture, programmers keep plowing ahead far past when they should have added another level of abstraction & encapsulation.
On the other hand, I also see applications where people dogmatically avoid using pre-existing architectures but also don't take the time to build their own. You end up with references to one table scattered all across the code base in hand-composed SQL.
I'd much rather take an outgrown-framework over a spaghetti mess.
Other than that, time and crypto are two things I also wouldn't code myself; both are just too easy to mess up.
Also, there are higher-level libraries that are sometimes good, because why reinvent the wheel every time you make something? Pixijs is one I use a lot. Also, Chart.js. jQuery. Moment.js. Good ol' Bootstrap 4 alpha's CSS handles a ton of use cases, and I never need to think about it again. There's very little in those that I haven't at one time or another rolled myself, but having a toolkit that you know and that doesn't change much saves you a lot of time. The danger is more getting into libraries that aren't already dead, and are still in active development ;)
Do you have a recommendation to replace Moment for that use case?
I’m hopeful for the future with JavaScript Temporal.
Until we got word back from packaging post-test. Every single die failed for excessive current draw. Several hundred thousand dollars' worth of scrap. I was correct, but I wasn't deeply correct.
What surprises me in retrospect is that everybody signed off on this. It’s not like we didn’t have processes, I just somehow managed to talk a bunch of people who should have known better into doing it anyway.
I was with you until this line. I've never seen a codebase where Not Invented Here Syndrome resulted in a stack that "antagonizes root cause analysis" in any way. I once worked at a C++ shop that had NIH'ed a supposedly thread safe string class, and it wasn't pretty.
There's plenty of mature, robust tech out there, and the chance that your own inventions are as free of quirks and edge cases as battle-tested frameworks/libraries/databases that have had 1000s of eyeballs on them sounds quite unlikely to me, regardless of your programming skill.
1) look for all the problems we were going to need to solve and look for 3rd party libs that solve those problems
2) I would do a set of PoCs using just that 3rd party library and see what the dev experience was like. I'd build it from source, read the code, look at the code hygiene, etc.
3) Everything would get checked in as source into our repo, a "full build" would build the 3rd party libraries, you would get source level debugging into everything. You could make invasive changes into libs as part of development, etc.
Every dependency had to earn its place, you didn't just pull in a bunch of things because you needed one function.
When you need this capability is exactly the wrong time for your build and dev process to take on this work. People are panicking, shit's broken, no one knows what is going on. If you have everything lined up, you can still do solid engineering using the scientific method, fix your problem, and move on.
> Don’t Guess
I find that, when working with a new "thing," I often like to guess for about an hour or so before I really do a deep dive into the reference. Or, I'll read a stackoverflow answer or two, play around with it, and then go to reference.
Why?
Often there's a lot of context in the reference that only makes sense once I've had some hands-on time with whatever the reference is describing.
This is especially the case when learning a new language or API: I'll go through a tutorial / quickstart; "guess" at making a change; and then go back and read the reference with a better understanding of the context.
BTW: This is why I like languages and IDEs that support things like intellisense. It's great to be able to see little bits of documentation show up in my IDE to help me in my "guess" stage of learning.
>> Don’t Guess
> I find that, when working with a new "thing," I often like to guess for about an hour or so before I really do a deep dive into the reference. Or, I'll read a stackoverflow answer or two, play around with it, and then go to reference.
I think that's fair. I've definitely seen "not best" programmers only guess and only read Stack Overflow, over and over, forever, and never read the reference. They have no idea what's going on and just spin, making a mess, until something sticks. I kinda read that item as a response to people like that.
In practice, so many times I've spun my wheels thinking I just didn't understand the reference, only to find out that there was a bug or change that invalidated the reference. Nowadays, if I must interface with a product built by someone who doesn't understand the user, I'll go straight to the source code if guessing fails, or resort to probing the system if code isn't available. Not only is it faster, but you'll gain a better understanding of what is going on than some stumbling attempt to describe it in natural language will ever be able to communicate.
Yet, we had engineers who would still just hunt and peck and stumble guess until they either accidentally got it working or they'd ask a more senior guy for help (who would always first say: did you read the documentation?) There was no excuse back then, the documentation was good and accurate, and if you followed it, it worked. Were there hidden dark corners of Win32? Sure, but the 99 percentile developer never needed to even go near them.
The fact that I remember this is evidence that docs being good overall was not an everyday occurrence.
I think we're talking about different kinds of guessing. I'm not talking about skilled educated guessing, I'm talking about dumb, ignorant guessing. Like "I don't know anything, so I'm just going to try "stuff" I find online without really understanding. Those people do that even with the most beautiful interfaces with the best documentation.
But even with the best designed interfaces, not everything is discoverable (e.g. another fantastically designed but orthogonal interface in the same library that solves your problem).
A reasonable place to start. But fair that you can't stop there if it isn't working. Next step, in my opinion, is to look at the interface more closely to see if it provides any hints. It will most of the time if it is well designed.
> But even with the best designed interfaces, not everything is discoverable
Sure. That's what the test suite is for, though: to document full intent and usage for users. You're still not going to go to a reference for that. As an added bonus, it is self-validating, so none of the "is it me or is the reference incorrect?" rigamarole.
> A reasonable place to start. But fair that you can't stop there if it isn't working.
It's not a reasonable place to start. You're basically talking about copy-paste coding. Google search, stack overflow, paste in the first answer. Afterwards, ask the dev if they know what they did and why it works, and they won't be able to answer because they don't know.
> Next step, in my opinion, is to look at the interface more closely to see if it provides any hints. It will most of the time if it is well designed.
The people I'm talking about can't and won't do that.
> Sure. That's what the test suite is for, though: to document full intent and usage for users. You're still not going to go to a reference for that. As an added bonus, it is self-validating, so none of the "is it me or is the reference incorrect?" rigamarole.
I'm getting an "I don't need comments because code is self-documenting" vibe here. I disagree with that. Prose is a better way to express many, many things related to code than the code itself or even its test.
Sure, the code is the most authoritative place to find what was implemented, but it's not the best way to find the why or the concepts and thought behind it.
Why not? If it works it works. Not everyone is concerned with receiving the award for best programmer.
> they won't be able to answer because they don't know.
I do understand that you are thinking of a specific person here, but broadly, you will know how it works more or less because you'll already know how you would implement it yourself if you had to. But since someone else's code already did, there's no need to think about it further. This remains a reasonable place to start.
> but not the why
If you are not capturing "why" in your tests, what are you testing, exactly? The "what" is already captured in the implementation. You don't need that written down twice. Worse, if you do end up testing "what" you are bound to have to deal with broken tests every time you have to make a change. That is a horrid situation to find yourself in.
I do agree that writing useful tests is really hard, at least as hard as writing good reference material, and thus beyond the skill of most. But if you have to work with something built by the unskilled, all bets are off no matter which way you look.
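To make it concrete, this is the kind of test I mean. Everything here (Invoice, the rounding rule, the finance rationale) is invented for illustration; the point is that the name and comment carry the "why" while the implementation carries the "what":

```
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class Invoice {
    private final long cents;
    Invoice(long cents) { this.cents = cents; }
    long totalWithTaxCents(int taxPercent) {
        return cents + (cents * taxPercent + 50) / 100; // half-up integer rounding
    }
}

class InvoiceRoundingTest {
    @Test
    void taxRoundsHalfUpBecauseFinanceReconcilesAgainstTheLedger() {
        // Why: the ledger rounds half-up per item; any other rule breaks reconciliation.
        assertEquals(1010, new Invoice(1000).totalWithTaxCents(1)); // 10.00 + 1% -> 10.10
        assertEquals(51, new Invoice(50).totalWithTaxCents(1));     // the 0.5-cent case rounds up
    }
}
```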
> Why not? If it works it works. Not everyone is concerned with receiving the award for best programmer.
Ok, that clarifies things: programmers who avoid reading the docs to guess, or follow the "Google search, stack overflow, paste in the first answer" cycle are mediocre programmers. If they don't want to be good programmers (which what the article is talking about), they can keep doing what they're doing.
> If you are not capturing "why" in your tests, what are you testing, exactly?
You can't capture why in code. Your tests are a demonstration of the "what."
That depends on the beholder.
- A programmer who applies a laundry list of what they do to determine who makes for a "best" programmer, who doesn't guess themselves, is likely to exclude anyone who does.
- A business person is apt to consider someone who successfully delivers a product quickly by using someone else's code among the "best".
> You can't capture why in code.
Then you can't capture it in natural language either, making this whole thing moot. But I disagree with that idea.
> Your tests are a demonstration of the "what."
You have a point that some testing frameworks carve out a special declaration for "example" tests that are marked for inclusion in generated API docs. There might be a time and place for that kind of documentation, but that isn't the kind of testing I was thinking of. That isn't representative of the vast majority of the tests you will write. If it is, you're doing something wrong – or at very least aren't being fair to those who will consume your tests later.
In my laundry list, concern for the next guy is what separates the "best" programmers from the mediocre. But I understand why your laundry list differs.
I'm surprised this is controversial; engineering ostensibly follows the scientific method. Without forming hypotheses and testing them, there is no scientific method. Unless we want to nitpick the difference between guessing and hypothesizing, making guesses is a critical part of programming.
I always come back to Peter Naur’s essay "Programming as Theory Building". It's 40 years old now but still nails the essence of programming. The value produced by programming is a shared working theory of the system, not the source code itself. I can't see how you would develop a sufficient theory without first forming hypotheses.
Once I got to architecture, I had a general direction and relied on googling (e.g. there is a lot of nuance to whether I should use SwiftData/CoreData/SQLite, etc., much of it found in conversation threads).
You should have a strong sense of the model that a tool or library presents to you as the consumer. And you should use that model to "guess" about the behavior of the tool. You should choose tools that are coherent, so that your guesses are more accurate than not, and avoid using libraries/tools with many special cases that make it hard to "guess" what they do.
The best programmers do not double check the docs or implementation for every function that they call. They are good at writing tests that check lots of their assumptions at once, and they are good at choosing tools that let them guess reliably, and avoiding tools that cause them to guess incorrectly.
Leverage in programming comes from the things you don't have to understand and the code you don't have to read in order to accomplish a goal.
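One concrete form of "tests that check lots of assumptions at once" is a small suite that pins down your model of a dependency. A sketch, with java.time standing in for whatever tool you happen to be leaning on:

```
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
import java.time.LocalDate;

class DateLibraryAssumptionsTest {
    // Pins the guess "adding a month clamps to the shorter month's end"
    // so a wrong mental model fails loudly instead of shipping.
    @Test
    void plusMonthsClampsToEndOfShorterMonth() {
        assertEquals(LocalDate.of(2024, 2, 29),
                     LocalDate.of(2024, 1, 31).plusMonths(1)); // leap year
        assertEquals(LocalDate.of(2023, 2, 28),
                     LocalDate.of(2023, 1, 31).plusMonths(1)); // non-leap year
    }
}
```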
That's why examples are also critical.
I don't know if I do the same thing with programming.
It's a bit like math books. I dreaded reading formal math during my engineering -- always read accessible text. Got a little better in my master's and could read dense chapters which got to the point quickly. At least now I can appreciate why people write terse references, even Tutte books.
Some references are a pleasure to use. For Rust crates, I always go to docs.rs and search there. It's just fantastic. I can search for a function that returns a particular type or accepts a particular type, etc. Hoogle from Haskell was lovely too when I took a functional programming course in college. cppreference is also pretty good -- thanks for adding examples.
Today I was reading boto3 python library docs, and I immediately missed docs.rs!
> It's a bit like math books. I dreaded reading formal math during my engineering -- always read accessible text. Got a little better in my master's and could read dense chapters which got to the point quickly. At least now I can appreciate why people write terse references, even Tutte books.
I don't think that's what he means by the advice. I think it's more about systematic knowledge vs. fragmented knowledge. Someone who "learns" through an LLM or Stack Overflow is not going to have the overall knowledge of the tool to be able to reason what's available, so will tend to use it in very stereotyped ways and do things in harder ways because they don't know what's possible. You can still get that systematic knowledge through an accessible text.
I have been playing around with Zig a lot lately, and their doc system is quite nice too. I particularly like how they will embed the source of how what you are looking at is implemented, and often an example of how it is expected to be used. Being able to see the language in action all over the docs has helped with making it super easy to pick up. Being able to search based on type signature a la Hoogle would really be killer though.
Being able to jump to a definition of library code lets you really quickly move from your code to some function you're trying to figure out. With code editor support this is a seamless experience that can happen without a real context switch.
Without this, you might leave your code editor, Google for the project it's related to, find it on GitHub, open up the "dev" version of GitHub (hitting . when logged in on a repo's home page) so you can explore the project, then do a project search for that function and wade through a bunch of results until you find it.
That, or find the code locally where your package manager might have saved it. But if your app is in Docker, that could be a problem: it might not be volume-mounted, so you won't be able to explore it from the comfort of your local code editor.
I'm happy this person works with quality software. I haven't always been as lucky.
http://literateprogramming.com/
The two best programmers I worked with had well-thumbed copies of Knuth's TAoCP....
For the uninitiated, boto3 is the official AWS python library. To get a client for, say, S3, you do `boto3.client('s3')` - instead of the sane thing, like, you know, `boto3.s3.S3Client()`
That's how you can work against the normalization of deviance. Never dismiss new people commenting on what you may be doing wrong for no reason. Yes, you've been doing X in an unusual way and still no accident has happened; but there's a reason you should not do it this way, and it may cost a lot to relearn that by experiencing it.
And same thing with old rules for which no one has an idea of why they exist but are still followed. Any rule should have an explanation for its existence and their relevance checked periodically.
Things change really fast, more so with AI tools, so it's important to have people question why we do it a certain way.
"Don’t go to Stack Overflow, don’t ask the LLM, don’t guess, just go straight to the source. Oftentimes, it’s surprisingly accessible and well-written."
It has been, I think, close to 15 years that I have been actively coding professionally. I am always learning. When I started my career, I spent a fair bit of time answering questions on Stack Overflow rather than asking them. That helped a lot, as it felt like a "real-world challenge" to solve someone else's problem. So it totally depends on how you use Stack Overflow.
With LLMs, I don't use them for "vibe coding" as the kids do these days. That is, IMHO, the wrong way to use LLMs. LLMs are great for integrations into software you are building, where they have to analyze realtime events and produce summaries, or even for automating mundane things. But an LLM is definitely not a replacement for a programmer. At least not in its current incarnation. The way to use an LLM is to ask it to provide a birds-eye/10,000 ft view of a topic you want to understand/explore. Why? Because sometimes you don't even know how something works because you have no idea what it is called (technical terminology/jargon). That's where LLMs help. Once you know the terms/jargon, you can refer to official documentation/papers rather than relying on the LLM. This, IMHO, is an underrated superpower of LLMs.
I learn by doing, not reading. If I read something but don't actually use it I'm liable to forget it. My brain seems to classify it as "not practical, not needed". If I do actually use it, however, I tend to learn it quickly.
So for me documentation is pretty terrible, reading how something works doesn't help, I need to see examples. When I see something in action, I actually learn it. Even copy/pasting works as I'll poke at the copied code, changing variables, playing with params, add/remove comments, etc. No code is ever just copied, it's always manipulated, cleaned up, unnecessary cruft removed.
And there's a whole load of documentation out there that has no examples, or really poor examples that don't relate to how you want to use something.
And for me, with an API that doesn't make "sense", I find it really hard to ever remember. Like SQL Server's OVER clause: I've used it intermittently over the years, and every time I come to use it, I have to re-learn it again. I find those sorts of APIs really frustrating.
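Since a concrete example is the thing I always end up needing, here is the shape of it (table and column names invented; standard window-function syntax): OVER turns an aggregate into a per-row computation instead of collapsing rows.

```
// Running total per customer; each order row keeps its identity.
String sql =
    "SELECT order_id, customer_id, amount, " +
    "       SUM(amount) OVER (PARTITION BY customer_id ORDER BY order_date) " +
    "         AS running_total " +
    "FROM orders";
```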
I don't think you can answer SO questions as a hobby any more. I used to do it over my morning coffee, but at some point it got full of professional reputation-growers who answered everything 30 seconds before it got posted. And when you do find an unanswered question, the mods jump on you for giving "teach the man how to fish" answers instead of ready-to-copy/paste code.
Now that they've gotten the hug of death they'll probably plan for it next time.
Good engineers build things that eliminate failure modes, rather than just plan for "reasonable traffic". Short of DDoS, a simple blog shouldn't be able to die from reaching a rate limit. But given the site is dead, I can't tell, maybe it's not just a blog.
Yes, but not all failure modes, only the ones in scope for the goals of the system. From the outside you can't tell what the goals are.
There is no such thing as eliminating all failure modes, which was exactly the point I was making in my post above. The best you can do is define your goal clearly and design a system to meet the constraints defined by that goal. If goals change, you must redesign.
This is the core of engineering.
Is basic availability not a goal of a blog?
Phrased differently: given two systems, one that fails if a theoretically possible, but otherwise "unpredictable" number requests arrive. And one without that failure mode. Which is better?
> From the outside you can't tell what the goals are.
I either don't agree, not even a tiny bit, or I don't understand. Can you explain this differently?
> This is the core of engineering.
I'd say the core of engineering is making something that works. If you didn't anticipate something that most engineers would say is predictable, and that predictable thing instead of degrading service, completely takes the whole thing down, such that it doesn't work... that's a problem, no?
He was identifying the best programmers he knows (as is obvious from the title). I don't think it is unreasonable at all for even a semi-technical person to be able to do that.
Also, it is highly likely that the author never expected their article to receive a high volume of web traffic, and allocated resources to it with that assumption. That doesn't say a thing about their technical abilities. You could be the best programmer in the world and make an incorrect assumption like that.
Not going to speak for the author, but some of us just want to be able to write a blog post and publish it in our free time. We're not trying to "maintain systems" for fun.
Some of those posts get zero views, and some of them end up on the front page of Hacker News.
Moreover, the author appears to be a lot more serious than just a free time blogger:
https://web.archive.org/web/20250405193600/https://endler.de...
> My interests are scalability, performance, and distributed systems
> Here is a list of my public speaking engagements.
> Some links on this blog are affiliate links and I earn a small commission if you end up buying something on the partner site
> Maintaining this blog and my projects is a lot of work and I'd love to spend a bigger part of my life writing and maintaining open source projects. If you like to support me in this goal, the best way would be to become a sponsor
> If you are owner of this website, prevent this from happening again by upgrading your plan on the Cloudflare Workers dashboard.
Looking into it, my hypothesis is that the owner's page is SSR'd using Cloudflare Workers and they reached the daily limits.
And a few companies have been very successful in this effort.
I wonder about this often: If you want to have impact/solve problems/make money, not just optimizing killing your JIRA tickets, should you invest a given hour into understanding the lowest code layer of framework X, or talk to people in the business domain? Read documentation or a book on accessibility in embedded systems? Pick up yet another tech stack or simply get faster at the one you have that is "good enough"?
Not easy to answer, but worth keeping in mind that there is more to programming than just programming.
We can look at a software developer as a craftsperson, and appreciate their skill and their craft, and we can look at them as a business asset, and those are two different things.
Both of which have their merits, but this article is clearly focused on the craftsperson side and that's enough. We don't need to make everything about business and money, and we definitely don't need to reduce the beauty and craft of writing code to Jira tickets.
I treat it as a craft, and do it for personal fulfillment and learning. I enjoy learning, and solving problems. I also enjoy creating stuff that I find aesthetically pleasing.
For example, I write iOS apps, and I’m working on a new version of a timer app that I’ve had in the App Store, for over a decade. I had added a Watch app to it, and had gotten to the point where it was ready for the App Store, but I kept having sync issues. It wasn’t a “showstopper,” but it was aesthetically not pleasing.
I determined that it was an issue that could be addressed by improving the fundamental design of the app, which had basically been constant for many years.
So I'm rewriting it completely.
That’s not something that makes “commercial” sense, but it’s what I want to do. I’ll also take the opportunity to redesign the basic UI.
I enjoy having that kind of freedom.
I also like to write about my work. I know that very few people are interested in reading it, but I do it, because it helps me to learn (the best way to learn, is to teach), and it helps me to focus my thoughts.
This is manifest in management methodologies: developers are largely interchangeable cells in a spreadsheet. I'm not saying this is a good thing.
The reasons for this are complex, but generally, business people want us to solve the technical problems they can't handle themselves, they don't want us to "relieve" them of product management, customer relationships, and industry knowledge. Why would they? It would devalue them.
One aspect might be that a developer who engages in "business" effectively stops being "subordinate". Management decisions need to be justified on a different level to maintain legitimacy.
It's one of the reasons I went back for a business degree and then re-entered tech. No, of course nobody in Silicon Valley cares about the "MBA" title (HN sees it as a negative), but everywhere I've interviewed/worked they've appreciated that we could talk about the economic and business impact of the software, and not just the algorithms and data structures.
I've found it possible to migrate to a less top-down Desert style just by finding executives who are frustrated by those problems and saying, "I have an idea I've seen help" and then getting the team together and saying, "hey, it turns out the executives would like us to write software well. What should we try first?"
Product has plenty of work remaining: they should be handling whatever subset of strategy, prioritization, analytics, BI, QA, facilitation, design and contracts that they have the skills for. But it requires engineers to actually collaborate with them as a peer, rather than engage in power struggles, and that requires everyone on the team to understand what we are building, for whom, and why.
Chicken/egg imho.
These roles require wildly different skills and knowledge.
Usually the outcomes are better if you combine two people who are good at their jobs rather than hoping one person can do it all.
Why does it make sense for them to be a single person? Often, "changing requirements" really comes from an engineer learning new things (this framework does not provide this, this external dep is going to be late, I'd need to learn 2 new things so will need more time...), and really, an engineer is the first one who'll know of some of the challenges and what's even feasible!
Now, the skills an engineer needs to develop to be a good PM are good communication, the ability to document things at the right level, and lots of empathy for the customer and the business person (so they can "walk in their shoes"). Arguably, all things that will make a great engineer even better.
I've been in teams where we've had a very senior, experienced PM tell us that he's looking for another position in the company because our team does not need them: we already did the stuff they were hired to do. That was a sign of a great PM who did not try to actively wrestle control out of our hands when the team was chugging along just fine.
Scoping tickets is more of a project management skill. Again, not a dev skill.
Estimating effect on user experience - requires empathy, again not a dev skill.
If you redefine the dev job as including PM skills then sure, PM skills are dev skills.
But they're not.
>Why does it make sense for them to be a single person? Often, "changing requirements" really comes from an engineer learning new things
So? Happens to me too. I can tell the PM these things I learned. That's a hell of a lot easier than managing all stakeholder interactions and empathizing with and balancing their demands.
It only really makes sense to combine the two roles if the project is inherently very straightforward, a salary can be saved, and the person doing both roles is sufficiently qualified for both.
If you are not doing that, you are being micromanaged and I feel for you in your engineering job.
And trust me, non-technical PMs are ill-equipped to figure out an incremental path to that North Star product or feature you want to ship — how you split branches and deliver value incrementally is something only a good engineer can do (well).
If you do not consider how an implementation will affect the user, you might just satisfy the requirement with an actually terrible experience (but the ticket never said it needs to load in under 30s and with no visible redraws and jumping elements): a good engineer will implicitly consider all of these, even if unspecified in a task (and many more, I only used an outrageous example to make a point).
Breaking down problems is certainly a life skill, but engineers are inherently good at it: it's the very definition of an engineer, and you can't be one without it. I have however seen PMs who mostly channel and aggregate customer experiences and stakeholder requests without an ability to consider (broken down, stepwise, incremental) paths to completion.
If you are good at all of these, you'd likely be a good engineer too: this does not mean that one can't be good at PM without being an engineer, just that a great engineer is very close to being a great PM too.
I am not against the division of labour and different motivations driving where each person invests their time, but if we are talking about a great either PM or engineer, they are pretty much of the same mindset with focus on different parts of the job that needs to be done — 90/10 split vs 10/90 split (and anything in between).
And finally, whether you are a great craftsman at engineering (or PMing), it is slightly different from a great engineer.
There is a very low cap on career growth if you are purely focused on programming.
So yes, if you want to climb the corporate ladder or run your own business, programming is a fraction of the skills required.
I think though it's okay to just focus on coding. It's fun and why many of us got into the industry. Not everyone likes the business side of things and that's okay.
There is no inherent value to producing software, as there may be in producing car tires or bananas. The best software is no software.
And then who is the better programmer, the one who knows more about how to make software, or the one who knows more about what software to make?
There is an inherent value in programming, just like there is one in gardening, woodworking, producing art, or playing a musical instrument.
The value is in the joy that the activity brings. (Note that this tends to be a different kind of value than business value.)
Do you imagine that we just somehow evolve capabilities beyond it? or do we eventually produce universally perfect software solutions and leave it at that?
If I hire you to make software for me, I don't really want software; I want a problem to go away, a money stream built, a client to be happy. Of course, that probably requires you to build software, unless you invent a magic wand. But if you had the magic wand, I'd choose it every single time over software.
Not so with food, furniture or a fancy hotels, where I actually want the thing.
The magic wand argument doesn't make sense. Then you can also get everything else.
Eh, I disagree. I like a lot of the software I'm using. There's inherent value to producing music with Ableton, cutting videos with Final Cut Pro, or just playing Super Mario for entertainment. Those are all more software than no software.
As a side note, this is what I keep pointing out when people talk about code generated by LLMs. As an activity, this is just one thing that programmers do.
I think the answer to your question (a good question indeed) is "both", or rather to balance development of both capabilities. The decision of how to spend time won't be a single decision but is repeated often through the years. The Staff+ engineers with whom I work _mostly_ excel at both aspects, with a small handful being technical specialists. I haven't encountered any who have deep domain knowledge but limited technical depth.
(edit: formatting)
The trap to avoid is the business-impact folks who demonstrate an unwillingness to get better at actual programming, which, ironically, would increase their impact.
Edit: an example is fixing a problem without understanding its cause.
I think talking to people in the business domain is the most important thing you can do in SWE or IT in general. The business is the entire reason you write every line of code; the more you understand, the better you will be at your job.
I do find drilling down into lower layers of your software stack helpful, and can make you a better programmer, but in a much more specific way.
> Pick up yet another tech stack or simply get faster at the one you have that is "good enough"?
Both of these are programming skills and less important, IMO. Trends and technologies come and go; if they're useful/sticky enough, you'll end up having to learn them in the course of your job anyway. Tech that's so good/sticky it sticks around (e.g. react) you'll naturally end up working with a lot and will learn it as you go.
It's definitely good to have a solid understanding of the core of things though. So for react, really make sure you understand how useState, useEffect work inside and out. For Java it'll be other things.
It's actually not the entire reason I write or have written every line of code.
It may be surprising to some people on this website for entrepreneurs but there are in fact people who enjoy writing code for the sake of it.
I think your question is most interesting in terms of long term skill mix or "skill portfolio" a.k.a the career viewpoint, while the parent's is more interesting on a day-to-day basis as you navigate the state space of bringing a project to completion. On a given day, understanding the business may not be the most valuable thing to do, but to your point over the course of a job or career it probably is.
(For example, I can say that I already have sufficient business context to do my programming task for tomorrow. Asking more questions about the business would be wasteful: I need to go update the batch job to achieve the business outcome.)
EDIT: I might go one step further and say the most valuable skill is not understanding the business but understanding how to match and adapt technologies to the business (assuming you want a career as a programmer). Ultimately the business drives income, but presumably you have a job because their business requires technology. So the most valuable skill is, as efficiently as possible, making the technology do what the business needs. That's more of a balance / fit between the two than just "understanding the business."
I found Lean Startup to be very good too.
I feel like the way universities teach data structures and algorithms isn't a great way to instill the joy of problem solving.
That being said, not everyone has that spark. And you can always lose it.
Otherwise rate limited
I don't blame the author at all - we should do "works most of the time" projects more often and stop caring that much about SLAs. But downtime on an article that focuses on "Know Your Tools Really Well" is a hilarious way of showing that dev and ops are rather different skill sets.
They have a history of using static hosting (GH Pages) but prob decided it wasn't necessary when they switched to CF. And whipping up your own lil scheme using compute is more fun and it let them mirror the request to their analytics service.
I don't blame them: I'm so used to infinite free tiers especially from CF that without seeing my blog saturate the worker limit, I wouldn't have considered it.
- When I do something, I did it understanding that's what I had the limited time/attention for and you should give me grace, especially when I'm dicking around with my personal blog on a Saturday afternoon.
- When other people do something, they engineered what they thought was the necessary + perfect solution, and they chose that exact impl after every option was evaluated to its end, and I will criticize them accordingly.
[0]: https://web.archive.org/web/20250328111057/https://endler.de...
Or at least it used to be the answer when I still cared about analytics. Nowadays, friends send me a message when they find my stuff on social media, but I long stopped caring about karma points. This isn't me humblebragging, but just getting older.
The longer answer is that I got curious about Cloudflare workers when they got announced. I wanted to run some Rust on the edge! Turns out I never got around to doing anything useful with it and later was too busy to move the site back to GH pages. Also, Cloudflare workers is free for 100k requests, which gave me some headroom. (Although I lately get closer to that ceiling during good, "non-frontpage" days, because of all the extra bot traffic and my RSS feed...)
But of course, the HN crowd just saw that the site was down and assumed incompetence. ;) I bury this comment here in the hope that only the people who care to hear the real story will find it. You're one of them because you did your own research. This already sets you apart from the rest.
Remember when nginx was written in 2002 to solve the C10K problem?
So what are you talking about?
This is a surprising stumbling block for a lot of developers when they encounter a problem. Most times the solution is hiding in plain sight (albeit at least one level of abstraction lower sometimes) and reading what the error was can help to quickly solve an issue.
Anecdotal evidence: We use `asdf` for managing Python, Go and NodeJS versions for our main project. On a fresh Fedora/Ubuntu install, running `asdf install` fails to compile Python, as it is missing a few dependencies that are required for Python's standard library. The output that is provided when the `asdf` command fails is pretty self-explanatory IF you care to read it.
Like just today I got a nebulous-ass error trying to compile some old cpp package, threw that into 4o and in a few seconds I get an in-depth analysis back and a one line correction that turned out to fix the entire thing. Literal hours saved lmao.
Some industry standard tools, like Jackson, don't even have documentation but instead point you to various tutorials that are written by others on how to use it: https://github.com/FasterXML/jackson-docs
One of the nice things that LLMs have done is taken some of that mess of missing/poor/convoluted/scattered-to-the-winds documentation and distilled it into something accessible enough that you either get a useful answer or get a useful direction on how to narrow your search for the answer.
I think "read the docs" is good advice; but even the article itself doesn't go so far as to say *all* documentation is good.
I wonder if AI generated stuff would pass our existing checks, e.g. linters, test coverage, sonar, etc.
It's how they use the AI. If they see it as a glorified StackOverflow where you paste a big chunk of code and ask "why does it not work", they'll be in trouble. If they are able to narrow-down their problems to a specific context, express them well and take the output of the AI with a grain of salt, they'll be 10x programmers compared to what we were in the 2000s, for example.
Fixing someone else's code is a great exercise, so maybe they're actually learning useful skills by accident? :)
With a good combination of Cursor, NotebookLM, flashcards (I use RemNote), and practice, you can greatly accelerate your learning.
Nothing stops you from reading specs, docs and having AI assist you doing so.
I understand the power of flash cards and SRS in general. But was wondering how you decide when to put something into an SRS when learning something new. Especially in a tech/programming context.
It's a bit random; over time I end up dropping some topics.
>Don’t Guess
If you are working on critical software, like code running in a rocket or a medical device, sure, "never guess" is mandatory.
But I and many other people can be in a hurry. We have to, or want to, move fast where it matters. I don't have the time to research every single detail, nor am I interested in knowing every single detail.
I am mostly interested in building something or solving a problem; I don't care about implementation details as much. Sure, sometimes details do matter a lot, but it's part of the job to have an understanding of which details matter more and which matter less.
So, I don't guess out of laziness, but because I have things that are more important and more interesting to do and time is a finite resource.
Many decisions can be reverted with minimal loss if they later prove wrong. Bugs can be fixed with ease.
I'm not saying to move fast and break things, but learning how to make the right trade-offs and educated guesses is a valuable tool.
So I would add another assertion to the list:
Learn to value time, don't procrastinate, avoid analysis paralysis.
Also don't guess when it's easy to test. "Maybe divisorValue is zero" well slap a printf() in there and see if it is! Often you don't have to guess. I've seen discussions go round in circles for hours asking "what if it's X" when it'd take 2 minutes to find out if it's X or not.
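To make that concrete, here is roughly what the two-minute check might look like in Python; the function and variable names are made up for illustration.

```
# Hypothetical example: instead of debating whether divisor_value can be zero,
# probe it at the point of use and run the failing scenario once.
def compute_ratio(numerator, divisor_value):
    print(f"DEBUG compute_ratio: divisor_value={divisor_value!r}")  # temporary probe
    # Or fail loudly right here instead of three stack frames later:
    assert divisor_value != 0, "divisor_value is zero -- hypothesis confirmed"
    return numerator / divisor_value

print(compute_ratio(10, 4))  # prints the probe line, then 2.5
```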
Clearly the left end is dangerous but the right end can also be, due to opportunity costs. Making a judgment on where the slider should be for each decision/change is a key skill of the craft.
And I find that this skill of organising is the limit on how large/complex a system I can build before it becomes unmaintainable. This limit has increased over time, thankfully.
[1] By reading more of the code, RTFM, observing logs, tracing with a debugger, varying inputs, etc.
I've been reluctant to learn frontend development and our framework of choice is apparently fairly well documented, but I don't even understand large parts of the terminology, why certain solutions work, or why some are more correct. So I guess, and I learn, but you need to keep iterating for that to work.
If you just guess, or ask an LLM, and don't question your guess later, then I can see the point of simply not recommending guessing at all. Mostly, though, I think flat-out recommending against guessing ignores how a large percentage of us learn.
Reading the source code is also a great idea. You'll always pick up some good ideas from other people's code. I learned this the hard way, but also kind of the easy way... at Google, sometimes we had good documentation. Eventually I realized it was often out of date, and so just got in the habit of reading the server code that I was interacting with. Suddenly, everything was much clearer. Why does it return an error in this exact case with this exact set of circumstances? There is a condition for just that case. Set this useless flag and that block is skipped. You'll never see that in the documentation, the exact details behind why something works how it's working exists in one person's mind at one point in time, and the code is the only thing that remembers. So, ask the code.
In both cases, some easy-to-use text searching tool is helpful. I always use rg (and deadgrep inside emacs), but there are many.
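In the same spirit, a minimal sketch of "asking the code" in Python: search a source tree for the exact error string you were shown and read the condition around it. The path and message below are invented; in practice rg does this faster, but the idea is the same.

```
# Hypothetical sketch: find where a mysterious error string is produced,
# so you can read the exact condition behind it.
from pathlib import Path

def find_error_origin(root, message):
    """Print file:line for every occurrence of `message` under `root`."""
    for path in Path(root).rglob("*.py"):
        try:
            lines = path.read_text(encoding="utf-8", errors="replace").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            if message in line:
                print(f"{path}:{lineno}: {line.strip()}")

# Usage (invented path and message):
# find_error_origin("server/", "quota exceeded")
```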
> Most developers blame the software, other people, their dog, or the weather for flaky, seemingly “random” bugs.
> The best devs don’t.
> No matter how erratic or mischievous the behavior of a computer seems, there is always a logical explanation: you just haven’t found it yet!
I don't see how you can conclude from that that real issues would be overlooked? I interpret this to be the opposite.
I don't think that addresses complaining, but rather redirecting blame to something nobody has control over instead of digging into the issue and finding the root cause
Or could, at any rate; after a bizarre hiring boom, the market seems to have quieted right down again.
I talk regularly to recruiter friends and there seems to be a bimodal distribution going on, with some developers finding jobs straight away vs unlucky ones staying unemployed for months on end.
I recently helped a couple devs that were laid-off in 2023 and still haven’t found anything else.
Archive available at: https://archive.ph/0GcBe
Trying to read some documentation like a book, from cover to cover, will be a waste of time.
Many people seem to have an irrational aversion to reading documentation and manuals. They'd rather speculate and guess and state false things than to just open the docs to understand how the thing works.
I'd say if you use the tool daily for years, it makes sense to invest into actually reading the docs.
In this way, LLMs may passively discourage discovery by providing immediate, specific answers. Sometimes there is value in the journey.
There is no binary choice between "only SO and the LLM shortest path" and "always read the full documentation" - there are always more sources, like blog posts and walkthroughs, that can be a much better investment of time.
Sometimes you need someone to explain the architecture of the tool quite differently from how the documentation describes it, when the documentation is dry. Sometimes you need better examples, better aligned with what you are trying to do. You need different approaches to grasp the thing.
Two programmers who start at similar levels in the things OP listed, but where one is able to stay focused and productive while the other jumps to YouTube or HN half the day (yes, that's me in many periods), are going to have very different impact and progression over time.
And it's all about "getting in the zone". Once I'm there, things fly, but the sputtering engine of editing 3 lines then alt-tabbing to youtube for 20 minutes makes progress very slow.
It's weird, I know exactly what to do, or what to look into, but just "don't want to?"
I have noticed one thing, namely that evenings are my golden hours. I'm trying to do all of my shopping, chores, etc. during regular working hours, and then actually working 5pm-?am. Maybe this will fix it.
I tend to distract myself under stress/boredom; when I realize this is happening, I get back to work and resist the urge to distract myself.
In time, you get better; you need to train it till it starts being more natural.
Now I’m the most focused person in the office. But it took a very very long time.
Yes yes! Although, in an interview, there is not always an easy way to separate out people that can’t say “I don’t know” during day to day work, from those that can. But what a difference. A strong need to hide a lack of omniscience is wildly irritating to me, in co-workers and people in general. It causes such problems!
For your head of SRE or something, sure that's what you want. But I'd argue that for founders especially, it's often better to get good enough to get to effectiveness, even if you're not able to perfectly debug every edge case.
Those can be two different things, in the same way a sous chef might be much more proficient at cooking/preparing several dishes but not have the skills to create a menu and run a successful kitchen in a successful restaurant/hotel.
Of course the developers still needed some domain knowledge, but much less.
All the other topics bring great wisdom, but 'go touch the code' is no longer the responsible thing for senior devs. Junior devs working on minor things, sure. However, senior devs should rigorously follow a process for 'getting their hands dirty'. When senior devs are tinkering with business-critical or life-critical code, they are usually unaware that the weight of responsibility for everything the software does is now theirs. That's why process exists.
https://en.wikipedia.org/wiki/Software_safety#Process_adhere...
I imagine that, if an engineer at Pacemaker Incorporated is given a task that's touching an area they're not comfortable working on, the author would suggest that "sorry, talk to Alice" is not a good attitude while "okay, but I'll need to go through some training, do you mind if I reach out to Alice for mentorship?" is.
They are not afraid to touch it.
They never say “that’s not for me”
Instead, they just start and learn.
Why didn’t I hand it off? Because I need to know how this subsystem works. It’s important and quite complicated and this is not going to be the last bug we have, or the last change it needs. It’s part of what I work on so I should understand it. The payoff wasn’t just fixing the bug, it was learning enough to know how to fix it.
```
You cannot access this site because the owner has reached their plan limits. Check back later once traffic has gone down.

If you are owner of this website, prevent this from happening again by upgrading your plan on the Cloudflare Workers dashboard.
```
"Language" is omitted from this list; the author is "Rust consultant".
This could be a coincidence.
> don’t ask the LLM
To be fair, you can use LLMs to get to the reference. Gemini is pretty good at that. ie. You can ask it how to do such-and-such in Python and to provide a link to the official documentation.
If you want to solve nasty, hard Heisen-bug problems you need to write. Ask questions, write down the answers. Follow the trail.
Sometimes you need to be more formal. Learning how to write maths is invaluable. Distributed systems, security, performance at scale… all benefit from a solid understanding of some pretty simple maths.
Writing is thinking! Challenge yourself, examine your thoughts, and learn to write well. It pays off in spades.
100% !!! I get so annoyed when engineers get an error message and just copy/paste it in Slack saying "I got this error: xyz" and nothing else... No inspection of the error, no reading docs or at least putting it in a search engine. No "This is what I tried so far...". Just a brain-dead copy/paste, shoving it onto someone else's plate to perform all those steps.
I'd love to do this, but (if I may play the victim for a second), I have real trouble reading something long form; I glaze over after a few paragraphs or just fall asleep. I don't find it compelling reading, especially not if I realize the vast majority won't be relevant.
I don't know if this is from a broken attention span thanks to spending 25 years online, or simply from experience - there's so many tools I've used for one project and never again, or questions I've only ever had once, and "studying" feels wasteful.
This goes back to when I learned Java in college, second year; we had a book, and I think I did like the first two pages, but after that I just winged it and googled stuff when I needed it (which was probably the other huge change in the software development world at the time: Google and easily searched/found online documentation).
> its history: who created it? Why? To solve which problem?
> its present: who maintains it? Where do they work? On what?
I'm not clear on that. Why does the person matter? So you can check if their political views align with those of your tribe, or what?
But then, I never read the user names on HN comments I reply to, and I guess the article author does. Maybe even keeps files on them...
Similarly, it helps to know who maintains the code you depend on. Is there even a maintainer? What is the project roadmap? Is the tool backed by a company or otherwise funded? What is the company's mission? Without those details, there is a supply chain risk, which could lead to potential vulnerabilities or future technical debt.
That may have been relevant 15 years ago, whatever Eich wanted to do with JS, it's been out of his hands for a long time.
And you could also do the reverse and form an opinion about him based on JS :) Might not be very flattering.
All this will brand you as a 'super nerd' and limit your career earning potential unless you are planning to become a principal engineer at Google. Don't be known for nerdy shit.
This was a problem _before_ LLMs were so widely used, but they have compounded it 100 fold.
In the end, I think it always comes back to RTFM. But that's the hard path, and users have been conditioned to think of the Internet as a tool that lets them jump to the end of that path, immediately heading to Stack Overflow, Reddit, Quora, etc. Admittedly, it is almost always easier to have someone tell you how to solve a problem than to take the time to understand what the problem is, apply what you know, and troubleshoot. But it will leave you stagnant: hardly able to grow as a developer, exercising no creativity, and demonstrating a lack of understanding.
I'm a terrible programmer. I know I am. But every time I slog through a problem in my weird little projects, solving it in a way that makes my coding buddies go "uh, huh...well, it _does_ work..." I learn something, not just about solving that specific problem, but about how the system I'm working in functions.
RTFM culture had it right back in the day, though it annoyed my younger self to no end. As a wee lad, I'd jump on those local BBSs and just start pushing questions about computers to the greybeards, rarely getting straight answers and typically being pointed to the manual, which I clearly hadn't read. Started listening to them after a while. Glad I did. The value of reading the docs prior to asking my questions extends well beyond code and even computing. Do it with your car, your home appliances, business models, heck, do it with your legal system and government. The point is, RTFM is the foundation on which the House of Understanding is built. It promotes self-sufficiency, greater familiarity with the system in which you are working, and the intimacy required for more complex and creative problem-solving later on. Most importantly, it promotes growth.
Now, that's all assuming the documentation is good...which is a different conversation altogether.
- know what you don’t know.
- don’t be an asshole.
If you’re known, and known as somebody you want on the team life will be easier. Sorry if I’m repeating others, but life tends to be a team sport and you should play well.
> Don’t Guess
If you work with anything but a very simple program, you often must guess what could be the cause(s) of your issue in order to know where to look. The more experienced you are, the more accurate your guesses will be.
Really well written and insightful.
Although I will say the "Status Doesn't Matter" thing is a symptom, not a cause. If there isn't any question that you're one of the best, then status doesn't matter - people know you're the best and there is no need to remind them. The people who fight strongly for status are the insecure ones who think they need to remind people, or else they'll lose their place in the pecking order. Typically people who are a few steps shy of the best - silver medallists, if you will - are the ones who need to spend a lot of time reminding people they are almost the best.
```
Error 1027
This website has been temporarily rate limited
Please check back later

You cannot access this site because the owner has reached their plan limits. Check back later once traffic has gone down.

If you are owner of this website, prevent this from happening again by upgrading your plan on the Cloudflare Workers dashboard.
```
A guess might be your best opportunity to test a theory about a bug you don't understand. That's particularly true where you're a newcomer to an area and the documentation is written for someone with different experience.
A series of guesses and tests can narrow down the problem, and ultimately help you fix a bug on your own terms, a bit like tracer fire.
I much _prefer_ to build a mental model, and to spot the bug as a deviation from that model. And unguided guesswork can be a sign that you're flailing. But guessing can be a strategy to build that model.
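A minimal sketch of that guess-and-test narrowing, assuming the failure is monotone in some parameter such as input size (the numbers below are invented):

```
# Hypothetical sketch of "guess, test, narrow": binary-search for the first
# input size at which a function starts failing, one cheap experiment per guess.
def first_failing(lo, hi, fails):
    """Assume `fails` is monotone: False below some threshold, True at and above it."""
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(mid):
            hi = mid        # failure reproduced: the threshold is at or below mid
        else:
            lo = mid + 1    # no failure: the threshold is above mid
    return lo

# Toy bug that appears at size 1000:
print(first_failing(0, 10_000, lambda n: n >= 1000))  # -> 1000
```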
> Read the Reference
> If there was one thing that I should have done as a young programmer, it would have been to read the reference of the thing I was using. I.e. read the Apache Webserver Documentation, the Python Standard Library, or the TOML spec.
> Don’t go to Stack Overflow, don’t ask the LLM, don’t guess, just go straight to the source. Oftentimes, it’s surprisingly accessible and well-written.
This is underrated. When I was a beginner at programming, I read books like "Sams Teach Yourself Java in 21 Days" and various O'Reilly books, such as the one on CSS. But over time, I drifted over to reading online documentation, mostly from primary sources such as the documentation that comes with the language. For example, I read Sun/Oracle's Javadoc web pages and Python's standard library web pages instead of any third-party summary of them; I read the Java Language Specification as well as new improvements in JSR/JEP pages; I read MDN web pages instead of books (I'm aware of W3C spec pages, but they are too dense for daily use; I only consult them for rare edge cases and ambiguity). I learned Rust from first-party tutorials ( https://doc.rust-lang.org/book/ ) and not someone else's rehash; the first-party material is excellent and hard to beat.
This doesn't apply to everything, though. I don't think there is a good first-party tutorial or reference for C and C++. I still don't know what the best tutorials are out there, but my favorite reference is https://en.cppreference.com/w/ , which is clearly a third party.
Also, as I get more experienced over the years, I drift into reading the source code of the applications and libraries that I use, and sometimes read mailing lists / issues / pull requests to see or participate in the software development process.
> An expert goes in (after reading the reference!) and sits down to write a config for the tool of which they understand every single line and can explain it to a colleague. That leaves no room for doubt!
I work under this philosophy, but it can be extremely hard. I still have trouble understanding what every line of a Maven XML file (Java build tool) does. Ant (for Java) was similarly opaque. I've seen way too many tutorials that have an ethos of "just copy this example code and modify a few bits without questioning the rest, trust us about it".
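As a toy illustration of that ethos (my own sketch, standing in for the Maven XML case): a config small enough that every line can be explained to a colleague, using Python's logging module.

```
# A minimal, fully-explained config, as a stand-in for the Maven example:
# nothing cargo-culted, every line has a reason you can state out loud.
import logging

logging.basicConfig(
    level=logging.INFO,  # suppress DEBUG noise; we only want operational events
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",  # when, how bad, where, what
)
log = logging.getLogger("billing")  # one named logger per subsystem
log.info("config loaded")           # smoke test: prove the logging pipeline works
```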
> Read The Error Message
Users failing to read computer error messages has been a source of IT jokes since forever.
> Most awesome engineers are well-spoken and happy to share knowledge. The best have some outlet for their thoughts: blogs, talks, open source, or a combination of those.
Check. I have a website and repositories of open-source code authored by me.
> Never Stop Learning; Build a Reputation; Don’t Be Afraid to Say “I Don’t Know”
Check to all.
> Clever engineers write clever code. Exceptional engineers write simple code.
This is something I always strive to do - make the code as simple as possible while still solving the problem (such as functional and performance requirements). I sometimes find myself de-optimizing code to sacrifice a bit of speed in exchange for shorter code and better human comprehension.
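A toy illustration of that trade-off (my own example, not the author's):

```
# "Clever": dense and correct, but it relies on dict insertion-order trivia
# the reader has to know before they can trust it.
def dedupe_clever(items):
    return list(dict.fromkeys(items))

# Simple: a few more lines, but the intent is readable at a glance.
def dedupe_simple(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

assert dedupe_clever([3, 1, 3, 2]) == dedupe_simple([3, 1, 3, 2]) == [3, 1, 2]
```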
## Read the reference
It is very difficult to dance from written instructions, but I think one hugely underappreciated resource is watching very intently as great instructors dance the basic movements. I did this in my early days, because it is very enjoyable entertainment when you appreciate this dance, and it was, I think, a large part of why I progressed rather quickly.
I could go on about this point, but I think something similar is going on with humans and our mirror neurons when we watch others do something, akin to how we acquire spoken languages, and to the recent wave of input-based language learning movements.
Another way to interpret this point might be to know the history of the dance, of Argentina, the broader dance communities and movements across the world, and the culture in general. The main advantage to this I think is to contextualize a lot of what you learn, and that the dancer becomes more immersed in the activity.
## Know your tools really well
Dancing involves some tools external to the dancer, like clothing and shoes, the dance floor of course, perhaps talcum powder to reduce friction, and most importantly the music.
While there is considerable advantage to be gained from wearing an outfit suited for dancing, there's a quick and hard cutoff as to how much knowing more about these things improves your dance. The same applies to the floor surface and so on.
But of these "tools", I think the biggest gain is found in familiarizing oneself with the music. Both gaining intuition about the structure of songs, melodies, and rhythms, but also gaining deeper insight and access to the emotions at play. Dancing is an opportunity to interpret all of these aspects of the song, and being familiar with the music, the medium that connects one with ones partner and also the thing you are trying to represent through movement, goes hand in hand with being able to execute the movements of the dance at all.
All of the points of the article apply here: the history of the music inform you of what the tango is all about, and of the different sounds and movements that are available to us today; the present, in the sense of what music is currently popular, and played live; limitations, in the sense of what different styles of tango music work with what sorts of movements and feelings; finally, the ecosystem is like a summary of all of the above, and something that people discuss at length in every milonga, like which orchestra they prefer, or which contemporary groups they like.
However, one thing that I think qualifies as a tool, although somewhat subtly, is the dancer's own body. I have not pursued this avenue very far yet, and am thrilled to realize that this is something I really ought to do. I know only a little bit about human anatomy, after strength training years ago. And as for my own body specifically, perhaps something like yoga, pilates, or calisthenics would be valuable.
## Read the error message / Break down problems
While there are no error messages in dancing, you definitely feel when something isn't quite working out. If you're in a class and are trying to learn a step, it is crucial to be able to be critical of your own performance and look for faults or inconsistencies.
Maybe a step feels a little off, like there's something awkward going on between you and your partner.
One thing I have noticed is that if you are trying to go over a sequence of steps A-B-C-D, and something isn't quite working out at point C of the sequence, the source of the error is usually somewhere in either point B, or perhaps already at point A.
This might remind some of looking at a stack trace of an error, and trying to figure out at which turn things went sideways. The problem is frequently not located exactly at the point where the error was actually raised.
## Don't be afraid to get your hands dirty
One of the dangers for any learner is calcifying into bad habits that were adopted at an early stage of learning. In order to break out of these, you have to be willing to abandon old guardrails, be uncomfortable over again, and to learn something over. This might be analogous to refactoring some kind of broken legacy code.
Growth is also possible through experimentation, abandoning old patterns in search of something new and potentially interesting. This also requires courage, feels a lot like getting one's hands dirty, and applies to programming and dancing and probably many other things alike.
## Always help others / Write / Status doesn't matter / Build a reputation
Since dancing is a communal activity, it is not so vital to be writing in order to be heard. But I still think that communication in this space is hugely valuable.
From what I have seen, any healthy dance community has great communication between the more experienced dancers and the less experienced ones. The alternative, where there is a strong divide and exclusion from the top downward, is often referred to as snobbish, and I would characterize it as unhealthy. That sort of scene will gradually wane due to the high barrier of entry, and will wither and die if not already sufficiently large.
## Never stop learning / Have patience
Any tango dancer will tell you, no matter how experienced or accomplished they may be, that one never stops learning this dance. Even decades into one's journey, it is extremely common to hear dancers say that they're still working on their walk - which also happens to be more or less the very first thing you learn in your very first class.
## Never blame the computer
In a dance that famously requires two people, it is very easy to blame one's partner when something goes wrong. I think it is much more valuable to take the opposite approach, and always look at what you can improve in your own dancing, whether you are a leader or a follower, long before throwing accusations and corrections at your partner.
There may of course eventually come a breaking point, at which you want to raise some questions and explore together for a solution. But to immediately correct your partner, before they've even had a chance to correct themselves, is never a good approach in my opinion.
## Don't guess
I think this one is hard to stick to rigidly when learning how to dance. If you want to be completely sure of a movement before you try it out, you'll forever remain paralyzed. We all have to do some guessing in the beginning, trusting that our muscles move us through space in about the right way as the dance is supposed to be performed.
However, these guesses that we make are frequently wrong, and result in incorrect technique and bad habits which must be weeded out and corrected before they calcify too much.
So while I think not guessing at all is impossible, I think we really should not underestimate the value of any means available to us for correcting the incorrect guesses we have made and accumulated. These include feedback from someone more experienced whom we trust, or private lessons from tutors who know what they're talking about.
## Keep it simple
It is funny, but this exact piece of advice is also so very frequently heard in tango classes. As you progress and learn and acquire vocabulary in tango, speaking now mainly about leaders, it is very easy to want to use it all and throw every shiny new step and sequence you know at the poor follower that you've convinced to join you on the floor.
Many are also nervous and afraid of the silence that comes with not moving all the time, and keep leading a step on every beat so it never seems like they're running out of ideas.
But in actual fact, it can be wildly better to relax, and do simple steps well, with tasteful pauses aligned with the music, than to hurriedly toss around every step that you know.
## My own final thoughts
Despite the fact that code is run on computers and dance is performed by humans, I think this analogy holds really well. If you think about it, dancers are just meat robots performing fuzzy instructions written to their brains by themselves and their dance instructors, or whatever they've acquired by watching others dance. You could summarize the mapping in this analogy as follows:
Spec <-> The goal that the dance-student is aiming for
Code <-> Steps that have been acquired by a dancer (maybe imperfect)
Runtime <-> A night out on the dance floor
Error <-> Improper technique
Programming <-> Learning and improving as a dancer
Programmer <-> Learner/teacher
I think an interesting insight here is that both the learner and the teacher play a role as the "programmer". A learner who is totally passive and lacking in introspective ability will perhaps not learn as quickly. So, the points of the article are applicable to both of these parties. For any autodidacts out there, that last part is good motivation to reflect some more on the points of this blog post.
there is a version of this article already available at archive.org:
due to Cloudflare's:
"Error 1027
This website has been temporarily rate limited
..."
just my 0.02€
Rofl?
Shame the author doesn't mention the Swedish secret of snus. That's the best productivity hack I know bar none. Anyone else out there?