> AlphaFold solved protein structure prediction, a fifty-year problem, not in decades but in a fraction of the time traditional research would have required. Not by thinking like a biologist. By finding patterns at a scale no human could reach. That is a domain detonation. Not progress. A before-and-after. The same logic is now moving through radiology, legal research, financial analysis, drug discovery, software engineering.
If you have good ideas, good insights and good stories, they deserve your own words. If you can't respect your own ideas enough to spend time writing them down and forming them into paragraphs and sentences, why should I respect them any more?
What caught my eye, however, and what to me does reflect the lens through which the author sees the world (unless I am completely misunderstanding their point), is this:
"Most of the world's important problems have never been modelled at the precision AI requires to act on them. Pollution, traffic, healthcare, taxation, public infrastructure, water distribution."
Pollution, traffic, healthcare, and public infrastructure, however, are not really problems that require "clever" solutions; rather, they are problems of political will, of regulating industry, and of moving to cleaner energy sources. For example, we have known about human-caused climate change for decades, and carbon emissions are only now hitting their peak.
From his statement, it seems what he is really saying is that the granularity of the data is insufficient for an AI model to accurately or precisely evaluate a problem and then presumably solve it, assuming a solution exists at all, let alone a human-acceptable one.
As I mentioned, you can have the most precisely modeled problem in the world and it won’t make a difference if it’s not accurate. That matters especially because a very uncomfortable reality is starting to face us, at least in the West: all the little lies we were told and perpetuated, because we have been trained on them from birth, across generations now, are simply wrong. They have polluted our minds to such a degree that many people could never accept it if AI told them they’re wrong, that everything they believe they know and have known all their life is wrong.
On top of that, it shatters people’s narcissistic self-image of having been the good guy, because accepting what AI tells them is actually the truth means accepting that they were abusive to those who were right all along, meaning they are actually the bad guy.
And if we good guys definitely know anything, it’s that the majority is always right, because that is what we were taught is the democratic way. The majority is always right, and you always have to trust the minority that are experts! Right? Right!
I don't think this is true. Text is a lossy form of communication. There's no way to get the sum of my knowledge from my brain over to your brain purely through text.
Also, anyone who has ever had to deal with incomplete documentation knows that humans did not, in fact, write it all down.
Communication builds on simplified shared maps over the ineffable territory of human experience. It always presents a particular model, a necessarily wrong one (as all models are), good for one purpose but neutral or harmful for another.
However, models and maps are not the only way in which humans attend to reality. Even though it is compelling to talk only about this way (talking is communication; see above), we also have direct experience, which is impossible to convey. Over the past millennium or two, as humanity has become more of an interconnected anthill, this experiencing arguably takes an increasing backseat to a map-driven, communication-driven frame of attention, but it still exists and is part of what makes us human.
LLMs, as correctly noted, build only on our communication. What I don’t think is noted is that this means they build on those (inevitably faulty) models and maps; LLMs fundamentally have no access to the experiencing aspect, and the territory-to-map workflow is inaccessible to them. What happens when wrong maps overstay their welcome?
If anything, they've made my job much, much more stressful, because I'm dealing with 10x the amount of code to reason about compared to before. And again, if I weren't a proficient programmer, I'd be in the shit.
As in many such scenarios, they always think the victims will be other people.
Why hasn’t it happened yet? Why hasn’t the job market imploded? What’s missing? Why do my colleagues, my friends, and I still have our bullshit jobs? Why didn’t my company’s output explode through our unlimited Claude access? What about all the other companies?