What surprised me was how much the ugly first version taught me that planning never could. You learn what users actually care about (often not what you expected), which edge cases matter in practice, and what "good enough" looks like in context.
The hardest part is giving yourself permission to ship something you know is flawed. But the feedback loop from real usage is worth more than weeks of hypothetical architecture debates.
Nice statement.
I think there is another equally pervasive problem: balancing between shipping something and strategizing a complete "operating system" but in the eyes of OTHER stakeholders.
I'm in this muck now. Working with an insurance co that's building internal tools. On one hand we have a COO who wants an operating model for everything and what feels like strategy/process diagrams as proof of work.
Meanwhile I am encouraging not overplanning and instead building stuff, shipping, seeing what works, iterating, etc.
But that latter version causes anxiety as people "don't know what you're doing" when, in fact, you're doing plenty but it's just not the slide-deck-material things and instead the tangible work.
There is a communication component too, of course. Almost an entirely separate discipline.
I've never arrived at acceptable comfort on either side of this debate but lean towards "perfect is the enemy of good enough"
Commenter's history is full of 'red flags': - "The real cost of this complexity isn't the code itself - it's onboarding" - "This resonates." - "What actually worked" - "This hits close to home" - "Where it really shines is the tedious stuff - writing tests for edge cases, refactoring patterns across multiple files, generating boilerplate that follows existing conventions."
> Commenter's history is full of 'red flags': - "The real cost of this complexity isn't the code itself - it's onboarding" - "This resonates."
Wow it's obvious in the full comment history. What is the purpose for this stuff? Do social marketing services maintain armies of bot accounts that just build up credibility by posting normal-ish comments, so they can be called on later like sleeper cells for marketing? On Twitter I already have to scroll down to find the one human reply on many posts.
And when the bots get a bit better (or people get less lazy prompting them; I'm pretty sure I could prompt to avoid this classic prose style) we'll have no chance of knowing what's a bot. How long until the majority of the Internet is essentially a really convincing version of r/SubredditSimulator? When I stop being able to recognize the bots, I wonder how I'll feel. They would probably be writing genuinely helpful/funny posts, or telling a touching personal story I upvote, but it's pure bot creative writing.
I understand that it's not the main point in your comment (you're trying to determine if the parent comment was written using an LLM), but yes, we do exist: I've spent years planning personal projects that remain unimplemented. Don't underestimate the power of procrastination and perfectionism. Oliver Burkeman ("Four Thousand Weeks", etc.) could probably explain that dynamic better than me.
My struggle is having enough patience to do any planning before I start building. As soon as there's even the remote hint of a half-baked idea in my head, it's incredibly tempting to just start building and figure out stuff as I go along.
I resist working like that because I am mega ignorant and I know I will encounter problems that I won't recognize until I get to them.
But, I also HATE having to rework my projects because of something I overlooked.
My (attempted) solution is to slog through a chat with an AI to build a Project Requirements Document and to answer every question it asks about my blindspots. It mostly helps build stuff. And sometimes the friction prevents me from overloading myself with more unfinished projects!
This particular piece is LinkedIn “copy pasta” with many verbatim or mildly variant copies.
Example: https://www.linkedin.com/posts/chriswillx_preparing-to-do-th...
And in turn, see: https://strangestloop.io/essays/things-that-arent-doing-the-...
Relatedly, LLMs clearly picked the "LinkedIn influencer" style up.
My guess is some cross-over between those who write this way on LinkedIn and those who engage with chatbot A/B testing or sign up for the human reinforcement learning / fine tuning / tagging jobs, training in a preference for it.
We currently live in the very thin sliver of time where the internet is already full of LLM writing, but where it's not quite invisible yet. It's just a matter of time before those Dead Internet Theory guys score another point and these comments are indistinguishable from novel human thought.
I don't think it will become significantly less visible⁰ in the near future. The models are going to hit the problem of being trained on LLM generated content, which will slow the growth in their effectiveness quite a bit. It is already a concern that people are trying to develop mitigations for, and I expect it to hit hard soon unless some new revolutionary technique pops up¹².
> those Dead Internet Theory guys score another point
I'm betting that us Habsburg Internet predictors will have our little we-told-you-so moment first!
--------
[0] Though it is already hard to tell when you don't have your thinking head properly on sometimes. I bet it is much harder for non-native speakers, even relatively fluent ones, of the target language. I'm attempting to learn Spanish and there is no way I'd see the difference at my level in the language (A1, low A2 on a good day) given it often isn't immediately obvious in my native language. It might be interesting to study how LLM generated content affects people at different levels (primary language, fluent second, fluent but in a localised creole, etc.).
[1] and that revolution will likely be in detecting generated content, which will make generated content easier to flag for other purposes too, starting an arms race rather than solving the problem overall
[2] such a revolution will pop up, it is inevitable, but I think (hope?) the chance of it happening soon is low
Remember back in the early 2000's when people would photoshop one animal's head onto another and trick people into thinking "science has created a new animal"? That obviously doesn't work anymore because we know that's possible, even relatively trivial, with photoshop. I imagine the same will happen here: as AI writing gets more common we'll begin a subconscious process of determining if the writer is human. That's probably a bit unfairly taxing on our brains, but we survived photoshop I suppose.
Not sure about this user specifically, but interesting that a lot of their comments follow a pattern of '<x> nailed it'
Psy-ops, astroturfing, now LLM slop.
Probably. I've been known to spend weeks planning something that I then forget and leave completely unstarted because other things took my attention!
> Commenter's history is full of 'red flags'
I wonder how much these red flags are starting to change how people write without LLMs, to avoid being accused of being a bot. A number of text checking tools suggested replacing ASCII hyphens with m-dashes in the pre-LLM-boom days¹ and I started listening to them, though I no longer do. That doesn't affect the overall sentence structure, but a lot of people jump on m-/n- dashes anywhere in text as a sign, not just in “it isn't <x> - it is <y>” like patterns.
It is certainly changing what people write about, with many threads like this one being diverted into discussing LLM output and how to spot it!
--------
[1] This is probably why there are many of them in the training data, so they are seen as significant by tokenisation steps, so they come out of the resulting models often.
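As a toy illustration of the kind of surface-pattern matching people do by eye, here's a sketch of a naive "red flag" counter. This is entirely my own invention for illustration, not a real detector: the patterns and the idea of scoring by counting matches are assumptions, and real LLM detection is far harder than this.

```python
import re

# Invented surface patterns that commenters often flag as "LLM-ish":
# an em dash anywhere, the "it isn't <x> - it's <y>" construction,
# and a couple of stock engagement phrases.
FLAGS = [
    re.compile(r"\u2014"),  # em dash
    re.compile(r"isn't\s+(?:\w+\s+)*\w+\s*[\u2014-]\s*it's", re.IGNORECASE),
    re.compile(r"This (?:resonates|hits close to home)", re.IGNORECASE),
]

def llmish_score(text: str) -> int:
    """Count how many of the flagged surface patterns appear in `text`."""
    return sum(1 for pattern in FLAGS if pattern.search(text))

print(llmish_score("It isn't the code\u2014it's the onboarding."))  # 2
print(llmish_score("Plain human sentence."))                        # 0
```

Of course, as the comment above notes, a heuristic this crude mostly measures punctuation habits, which is exactly why it would misfire on humans who picked up em dashes from style checkers.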
> "A typo or two also helps to show it’s not AI (one of the biggest issues right now)."
Ironically, I see this very often with AI/vibe coding, and whilst it does happen with traditional coding too, it happens with AI to an extreme degree. Spend 5 minutes on twitter and you'll see a load of people talking about their insane new vibe coding setup and next to nothing of what they're actually building
There is so much to be learned about a problem - and programming in general - by implementing stuff and then refactoring it into the ground. Most of the time the abstractions I think up at first are totally wrong. Like, I imagine my program will model categories A, B and C. But when I program it up, the code for B and C is kinda similar. So I combine them, and realise C is just a subset of B. And sometimes then I realise A is a distinct subset of B as well, and I rewrite everything. Or sometimes I realise B and C differ in one dimension, and A and B in another. And that implies there's a fourth kind of thing with both properties.
Do this enough and your code ends up in an entirely unrecognisable place from where you started. But very, very beautiful.
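The collapse described above can be sketched in code. Everything here is hypothetical, invented names and all, just to show the shape of the refactor: C starts as its own class, then turns out to be B with one field fixed.

```python
from dataclasses import dataclass

# First pass: B and C were separate classes with near-duplicate code.
@dataclass
class B:
    name: str
    weight: float

# After refactoring: C turned out to be "B with weight fixed at zero",
# so it collapses into a constructor function instead of a parallel class.
def make_c(name: str) -> B:
    return B(name=name, weight=0.0)

c = make_c("example")
print(isinstance(c, B))  # True
```

The point isn't the specific mechanics; it's that the second version only becomes obvious after writing the duplicated first one.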
Fred Brooks, author of “The Mythical Man Month” wrote an essay called “Plan to Throw One Away” in 1975.
He argues much what you’ve described.
Of course, in reality we seldom do actually throw away the first version. We’ve got the tools and skills and processes now to iterate, iterate, iterate.
The plumbing also needs iteration and prototyping, but sound, forward looking decisions at the right time pay dividends later on. That includes putting extra effort and thinking into data structures, error handling, logging, naming etc. rather earlier than later. All of that stuff makes iterating on the higher levels much easier very quickly.
Of course you’ll also maintain the satisfaction of doing something well.
One of my friends calls it "development-driven development".
Do a thing. Write rubbish code. Build broken systems. Now scale. Then learn how to deal with the pattern changing as domain-specific patterns emerge.
I watched this at play with a friend's startup. He couldn't get response times within the time period needed for his third party integration. After some hacking, we opted to cripple his webserver. Turns out that you can slice out mass amounts of the HTTP protocol (and with it server overhead) and still meet all of your needs. Sure it needs a recompile, but it worked and scaled far more than anything else they did. Their exit proved that point.
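The "cripple the webserver" idea can be sketched without a recompile: skip the HTTP stack entirely, ignore the request, and write one precomputed response to the socket. This is my own minimal, deliberately non-conformant illustration of the principle, not the actual system from the story.

```python
import socket

# One precomputed response; no parsing, no routing, no framework.
RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: 15\r\n"
    b"Connection: close\r\n\r\n"
    b'{"status":"ok"}'
)

def serve_once(port: int = 8080) -> None:
    """Accept a single connection, ignore the request, send the canned reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.recv(4096)  # read and discard the request bytes
            conn.sendall(RESPONSE)
```

It breaks keep-alive, pipelining, and most of the spec, which is exactly the trade: drop everything the integration doesn't need in exchange for latency.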
There is a difference between shipping something that works but is not perfect, and shipping something knowingly flawed. I’m appalled at this viewpoint. Let’s hope no life, reputation or livelihood depends on your software.
"I spent weeks planning" -- using the terminology from that book: No, you didn't spend weeks planning, you spent weeks building something that you _thought_ was a plan. An actual plan would give you the information you got from actually shipping the thing, and in software in particular "a model" and "the thing" look very similar, but for buildings and bridges they are very different.
Not saying this is you, but it's so easy for people to give up and sour into hyper-pragmatists competing to become the world's worst management. Their insecurities take over and they actively suppress anyone trying to do their job by insisting everything be rewritten by AI, or push hard for no-code solutions.
This one works for me, and I've learned it from a post on HN. Whenever I feel stuck or overthink how to do something, just do it first, even with all the flaws that I'm already aware of, and even if it feels almost painful to do it so badly. Then improve it a bit, then a bit more, and before I know it a clear picture starts to emerge... Feels like magic.
Got me through many a rough spot.
if you're worried about doing it well, you're a step or two ahead of where you need to be
Dan Harmon's advice on writer's block: https://www.reddit.com/r/Screenwriting/comments/5b2w4c/dan_h...
>You know how you suck and you know how everything sucks and when you see something that sucks, you know exactly how to fix it, because you're an asshole. So that is my advice about getting unblocked. Switch from team "I will one day write something good" to team "I have no choice but to write a piece of shit" and then take off your "bad writer" hat and replace it with a "petty critic" hat and go to town on that poor hack's draft and that's your second draft.
"The Gap" by Ira Glass: https://www.reddit.com/r/Screenwriting/comments/c98jpd/the_g...
>Your taste is why your work disappoints you... it is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions.*
'“One day, I’m gonna write that novel.” Pal? You better start tomorrow morning because the right time never happens. It’s when you boldly determine it. It’s like running on a rainy day. You’re fine once you get out there. The only difficulty is getting off the couch when you lace your shoes up.'
I learned this the bad way, but now I just lie and say it doesn't work until it's good enough for me
Remarkably common, but not inevitable. Thankfully there's plenty of workplaces which don't look like this.
And yeah, lying is certainly one way to get work done in a bad organisation. I'd much rather - if at all possible - find and fix the actual problem.
I hate this, but seems to be fairly normal practice.
In my own work, this often looks like writing the quick and dirty version (alpha), then polishing it (beta), then rewrite it from scratch with all the knowledge you gained along the way.
The trick is to not get caught up on the beta. It's all too tempting to chase perfection too early.
Funny how these things, when done by a human, are a positive and, when done by an LLM, a negative. According to all the anti-LLM experts... humans generate perfect code on the first pass every time and it's only LLMs that introduce bad implementations. And this isn't a callout of this user specifically. It's a generalization about the anti-AI sentiment on HN. If incremental improvement works, incremental improvement works.
> Humans generate perfect code on the first pass every time and it's only LLMs that introduce bad implementations.
That's not what the "anti-llm experts" are saying at all. If you think of LLMs as "bad first draft" machines, then you'll likely be successful in finding ways to use LLMs.
But that's not what is being sold. Altman and Amodei are not selling "this tool will make bad implementations that you can improve on". They are selling "this tool will replace your IT department". Calling out that the tool isn't capable of doing that is not pretending that humans are perfect by comparison.
[0]: https://strangestloop.io/essays/things-that-arent-doing-the-...
The contents are so similar that it cannot be a coincidence. It really seems like the author of this blog simply plagiarized the strangestloop post without referring to it at all...
They'd love to talk about problems, investigate them from all angles, make plans on how to plan to solve the problem, identify who caused it or how to blame for it, quantify how much it costs us or how much money we could make from solving it, everything and anything except actually doing something about it.
It was never about doing the thing.
Somewhat related, I've learned that when you're the one who ends up doing the thing, it's important to take advantage of that. Make decisions that benefit you where you have the flexibility.
especially the middle managers, i.e. engineering managers, senior engineering managers, directors of engineering, duh duh
there's less coordination to do - to keep managers up to date.
the most functional software orgs out there - don't have managers
The way to break through that is indeed to start doing. Forget about the edge cases. Handle the happy path first. Build something that does enough to deliver most of the value. Then refine it; or rebuild it.
Seriously. The cost of prototyping is very low these days. So try stuff out and learn something. Don't be afraid to fail.
One reason LLMs are so shockingly effective for this is that they don't do analysis paralysis; they start doing right away. The end results aren't always optimal or even good but often still good enough. You can optimize and refine later. If that is actually needed. Worst case you'll fail to get a useful thing but you'll have a lot better understanding of the requirements for the next attempt. With AI the sunk cost is measured in tokens. It's not free. But also not very expensive. You can afford to burn some tokens to learn something.
A good rule is to not build a framework or platform for anything until you've built at least three versions of the type of thing that you would use it for. Anything you build before that is likely to be under and overengineered in exactly the wrong places. These places make themselves clear when you build a real system.
Good enough is a self limiting fallacy.
A prototype failing to attract fans doesn't prove a lack of a market for the job the prototype attempts to perform. It only proves the prototype, as it stands, lacks something.
Beware quitting early. All good builders do.
At work we built something from a 2-page spec in 4 months. The competing team spent 8 months on architecture docs before writing code. We shipped. They pivoted three times and eventually disbanded.
Planning has diminishing returns. The first 20% of planning catches 80% of the problems. Everything after that is usually anxiety dressed up as rigor.
The article's right about one thing: doing it badly still counts. Most of what I know came from shipping something embarrassing, then fixing it.
"Preparation" isn't mentioned explicitly, but by my reading it would come firmly under "is not doing the thing".
How do you not be "toxic" after that? How do you retain a chipper attitude when you know for a rock-solid certainty that even if the project is successful it's likely by accident?
Or if you want another way of thinking about it, code isn't only useful for deployment. Its also a tool you can use during the planning process to learn more about the problem you're trying to solve. When planning, the #1 killer is unknown unknowns. You can often discover a lot of them by building a super simple prototype.
From the Red Dwarf book and quoted previously:
Pivoting to zero-planning, would also have a basket of flaws.
1. https://strangestloop.io/essays/things-that-arent-doing-the-...
What I am still on the fence about is when "design" or "architecture" type work counts as Engineering. There's a certain amount of design work that is valuable to do before coding and is part of the thinking process. But sometimes you get into a lot of abstract talking that is "not doing the thing".
In the GenAI era, "doing the thing badly without planning" has become so easy that some counterweight is needed.
The characters in the book are quick to cut non-productive discussions short, but it feels like the feel good discussions around "the thing" are about as far as many people want to go these days.
There are things that humans have to unfortunately do when working as a group of people. That's why we became the alpha predator. Not because we were the strongest ape. That includes:
- Filling in timesheets, quarterly, half yearly cycles, company meetings, team meetings is not doing the thing — as a solopreneur. But not as a member of a group.
- Writing tickets, reviewing PRs is not doing the thing — as a solopreneur.
- Commuting to work and back is not doing the thing — If I'm a solopreneur this doesn't even matter.
- Answering technical questions, analyzing data, attending to bugs is not doing the thing — If I'm a solopreneur especially on a greenfield stuff, I have zero baggage.
- Writing test cases and putting up alerts is not doing the thing — if it's only me judging me, I have nothing to judge.
I take it to mean: if you can just do the thing now (you are in the right place, healthy, with tools and prerequisites) and you choose not to because of (procrastination reasons) then you could be doing the task but you choose not to.
For corps: timesheets is one of the things.
I find that I don't have major issues doing a thing once I get started on it. The main problem is choosing from among many things that I could reasonably consider "the thing", and then feeling confident enough in that choice to start.
There are times where you obviously need to do the thing to understand the thing, to see the process of doing the thing. This allows for breaking the process down into better steps. Writing code that you think is doing the thing, but that proves not to do the thing when you actually do the thing, is common.
I needed this today. Currently questioning my career choices, as I hit my first wall where people are involved. Gave me quite the headache.
I like that this was included.
Whoever the guy from 'Strangest Loop' is, it's my impression that it's meant to resonate with self-starters; as if he's speaking from the vantage of, and for, hustle culture. The grinders. The movers. The seniors. The managers. The founders. [1]
I don’t get that vibe from this derivative and in fact I think it carries a slight affect of a neurotic employee while the original airs determination. Reading this brings one into the mind of an observer, the founder of a VC firm, watching OP wring over a Palo Alto brewed latte.
[1] Am I the only one who was unable to find out his actual name on this website?
This one hit me right in the feels, I have been buying more woodworking/DIY tools than the projects I've worked on with them.
But it's not good to lie to yourself about doing the thing while not doing the thing. If your joy comes from the result of doing the thing, but you're putting time into other things that aren't doing the thing, that joy is not getting any closer.
“Writing about writer’s block is better than not writing at all.”
I have found these articles on the exact same topic to be creating more actionable mindset.
1. The Cult of Done by No Boilerplate: https://youtu.be/bJQj1uKtnus?si=efV5OTF35LcDjuN3. Through the years, I have come back to this video many a time and even have the Cult of Done manifesto (snipped from this video) stuck on my wall.
2. High agency by George Mack: https://www.highagency.com/. This is a long form article and sitting and just reading it has helped me unblock myself. I have a bookmark of this on my favourites bar at all times.
I.e. by making sure that they're doing the right thing.
Life is tough like that
Doing the thing2 is doing the thing2
What do you gain by saying it isn't thing? You have to do it first either way.
I still believe there's a mise en place step before doing the thing, when quality counts.
Doing the thing is going to involve both direct steps, and indirect steps necessary to do the direct steps.
Not doing the thing involves doing things other than the shortest/safest/effective path to getting the thing done.
You can very much do the thing when it's not too costly to fuck up. For many important things, thinking about doing the thing is even more important than doing the thing.
Why not? If i need a saw to build a deck, buying a saw must be the first step?
Edit: Seems like a way to show they’re looking for roles, I guess.
But as a metaphor for other creative pursuits, my experience is that most of the time when people are "planning" or working on other things that they like to believe will help them do the thing... they are really just avoiding doing the thing.
People spend years doing "world-building" and writing character backgrounds and never write the damn book. Aspiring musicians spend thousands collecting instruments and never make a song.
As you say, if it's just for fun, that's all fine. But if the satisfaction you want comes from the result of the thing, you have to do the thing.
No it's not. Sometimes (or maybe most of the time) doing it badly means maybe it's not your thing.
I used to have a neighbour who liked to play the piano and sing. He was doing it consistently badly and he didn't have anyone to tell him that he should probably stop trying.
To your neighbor, doing it badly is still doing the thing.
Doing the thing isn't about judging other people. That doesn't contribute to your thing.
If someone is bothering you, making it hard to do your thing, then your thing involves talking to them about your problem. Without judging what they are doing.
the answer isn't to stop practicing, it's to practice the right thing and not practice doing it wrong.
they're probably still better off playing badly and enjoying it, vs just staring at an unplayed piano though