5 points by archy_ 5 hours ago | 6 comments
  • 1970-01-01 5 hours ago
    Superintelligence beta. He is speaking in terms of market, not science.
  • archy_ 5 hours ago
    Full tweet (since it wouldn't fit in the title): Sam Altman: Superintelligence probably by end of 2028. So we got roughly 2 years left. Enjoy your job while you still can. Time is ticking.

    Non-Nitter link: https://x.com/kimmonismus/status/2024502735584780593

    • rvz 5 hours ago
      Of course.

      Just after the OpenAI IPO (which, to them, is AGI), and still plenty of time for everyone else to IPO right before another market crash.

      Why did he choose 'end of 2028', right after the 2028 election, when Trump will leave office in early 2029?

  • chrisjj 4 hours ago
    From someone who has yet to deliver Adequate Intelligence.
  • AnimalMuppet 4 hours ago
    Yeah... I seem to remember seeing this before, only it was by the end of 2027. So the schedule is slipping by one year per year. (In fairness, it might not have been Altman who made the previous prediction.)

    But one year slip per year is not the pattern of a successful project - it's the pattern of a floundering one. You see this sometimes in projects where they still haven't figured out what the spec is for what they're trying to build. (So, do they know the spec for building a superintelligence? I'm pretty sure that no, they don't.)

    What they have is evidence that they're making progress, and a completely-without-evidence idea of how much further ahead superintelligence might be, and an extrapolation based on progress continuing at the same rate. Well, the part that is the most suspect is the guess as to how far away superintelligence is. If that's wrong, the whole estimate is worthless.

    • techblueberry 4 hours ago
      What's amazing to me is that we used to have Steve Jobs' exaggerations, the "reality distortion field", and, correct me if I'm wrong, he basically delivered on the visions he had. Then Musk started to ratchet up the lies more and more, but I think Trump coming into office essentially supercharged the idea in Silicon Valley that lying is actually more profitable than telling the truth.

      As someone who's trusting, I do sort of listen to Altman or Amodei (who I think has been a bit more truthful in his predictions; "a year from AI writing all software" actually ended up closer to the truth than people thought, even if it isn't technically true), and I have this nagging voice in the back of my head telling me these people know something I don't. But then, looking at this clear leadership trend suggesting that lying is more profitable than trying to tell the truth, the whole picture definitely does not look clear.

  • butterNaN 4 hours ago
    However! "Financial experts warn OpenAI may go bankrupt by mid-2027": https://finance.yahoo.com/news/financial-experts-warn-openai...
  • jqpabc123 4 hours ago
    Translation: We need $billions more because we're literally lighting it on fire.