2 points by signa11 3 hours ago | 2 comments
  • techblueberry 3 hours ago
    I like Ed, and I do think there's tons of fishy and downright illegal behavior in the AI ecosystem. While I want Ed to be right (a blind spot I'm trying to mitigate), I think he's missing two things.

    1. I don't know exactly what it is, maybe the circumstances are just different, but it feels like after 2008 the financial system somehow learned not to collapse. And this isn't a good thing. Inflation, rising unemployment rates, low investment, and few companies going public: maybe these are all symptoms of an economic system that needs to be cleaned out. And this is on top of the general advice that the market can stay irrational longer than you can stay solvent.

    It’s probably true that as soon as we get too comfortable and this expectation sets in, we’ll have the collapse, but that could be years off.

    2. Loosely speaking, there are two ways for a tool to be successful. One is to apply the tool to a problem, which is the lens Ed looks through; the second is to apply a problem to a tool. Even if LLMs "suck and are useless," as Ed's thesis goes, you can sort of set the expectation that you'll deal with the problem space in a shitty way. Lower expectations.

    Well-trained humans will always be better than LLMs at customer service. Who cares? Just lower the quality of customer service. LLMs aren't quite there in writing code? Just keep looping over the problem until they get something 98% of the way there. You can redefine the problem space in a way that makes LLMs the solution, then undercut the competition on price.

    We do this in manufacturing all the time. It’s really hard to build a machine that can assemble a whole car the way a small team of people can, but it’s relatively easy to build a machine that can put a door on a car.
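    The "keep looping until it's 98% of the way there" idea above can be sketched as a simple retry loop. This is a hypothetical illustration, not anything from the article: `llm_generate` and `check_quality` are stand-ins for a real model call and a real scorer (unit tests, a rubric, a judge model, etc.).

    ```python
    import random

    def llm_generate(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM API call."""
        return f"answer({random.random():.3f})"

    def check_quality(candidate: str) -> float:
        """Hypothetical stand-in scorer; returns a quality estimate in [0, 1].

        In practice this could be a test suite, a rubric, or a judge model.
        """
        return random.random()

    def loop_until_good_enough(prompt: str, threshold: float = 0.98,
                               max_tries: int = 50) -> str:
        """Re-run generation until a candidate clears the quality bar,
        keeping the best attempt seen so far as a fallback."""
        best, best_score = "", -1.0
        for _ in range(max_tries):
            candidate = llm_generate(prompt)
            score = check_quality(candidate)
            if score > best_score:
                best, best_score = candidate, score
            if best_score >= threshold:
                break
        return best
    ```

    The point of the sketch is that the quality bar lives in the loop, not the model: you redefine the problem as "anything that passes `check_quality`" and accept however many retries that costs.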

  • kwertyoowiyop 2 hours ago
    [2024]