7 points by Bluestein 3 days ago | 2 comments
  • MountainMan1312 3 days ago
    AI will create more jobs because here in a few years we're all going to have to unfuck all this middle-manager-vibe-coded "software"
    • andsoitis 3 days ago
      > middle-manager-vibe-coded "software"

      Isn’t it overwhelmingly engineers vibe-coding, rather than managers? Isn’t it engineers creating the vibe-coding tools and advocating for them?

  • hammyhavoc 3 days ago
    Ah yes, Mark Cuban: the same guy who said in 2017 that people shouldn't go and learn computer science because AI would be writing all the code within a few years, and that they should go and get a liberal arts degree instead, on the basis that automating math is supposedly easier than automating critical thinking.

    The same dude who said he would rather have bananas than Bitcoin, then became another crypto chud in 2020, with crypto and NFT nonsense galore, because, despite being a billionaire, he fell for the FOMO.

    The same dude who claimed that crypto would make banks collapse and become irrelevant. Then the 2022 crypto winter hit.

    The same dude who also sold his Facebook shares shortly after the IPO.

    The same dude who said that if Donald Trump won the 2016 election then the markets would crash.

    Remember him letting Steve Nash go in 2004? The same Steve Nash who went on to win two MVPs?

    I think we can ignore Mark Cuban, and he should go and enjoy his billions. Quietly.

    • andsoitis 3 days ago
      Do you think he is wrong on this?
      • hammyhavoc 21 hours ago
        I don't think it matters whether he's right or wrong when he's not good at making tech-adjacent predictions, period. He's only still put on a pedestal because he's a billionaire; he's mostly irrelevant to any conversation in 2025.

        Anybody amassing billions of dollars in personal wealth has a serious character flaw and is a potential danger to the human race. You know what would create jobs? Circulating wealth instead of trying to pump-and-dump crypto shit and AI vaporware.

        That we value the thoughts of billionaires with zero relevance to the conversation at hand is exactly what is wrong here. Mark Cuban having thoughts on AI is celeb gossip. It's like Justin Bieber telling you that you're not clocking that the grass is green and the sky doesn't exist. Who cares? Total brainrot.

        The billionaire class, who will never be affected by the things they comment on from their ivory towers, are visibly uncomfortable on a level that multiple billions of dollars cannot soothe. These are the people building underground lairs in Hawaii and armoured weenie wagons with laughable bulletproof-window stage demos, and the same people who will tell you that you can empower yourself by buying the thing that's going to make them wealthier.

        Mark Cuban is really saying: "When those with less than me decide to eat the rich, please don't eat me for investing in this thing that I told you wouldn't hurt a bit. I tried to comfort you. I'm not a bad dude. You want to be wealthy and powerful too, right?"

        If it were someone more reputable, with an actual knack for predicting the socioeconomic effects of new technologies, their opinion would perhaps be worth valuing, considered alongside what many other experts say, but not on its own.

        People shouldn't ask me about volcanoes because I'm not a volcanologist, so we shouldn't ask billionaires with very few relevant skills and not much in the way of relevant thoughts to predict what will happen. It's like being a serf and asking a king if it's all going to be okay for the peasants: of course they will say it will be.

        Mark Zuckerberg was adamant that the Metaverse was going to be the next big thing, burned billions on it, and even renamed Facebook to Meta. Zuck and Cuban are saying AI is going to `xyz`, so I'd take what either of them says with a colossal pinch of salt.

        Personally? I think life will largely carry on as it was. Like crypto, the niche becomes increasingly full of snake oil, vaporware, con men, et al.; the public eventually starts treating the buzzword negatively; and the hype cycle begins all over again with something else. Most of what people are throwing LLMs at is inappropriate.

        I'm personally more interested in machine vision and machine learning. I hate the term "AI" being used to refer to LLMs and generative AI. AI is a big field, and all the interest is on what's basically a gimmick to me. The term is now so poisoned that it's like people believing anyone who works with cryptography has something to do with blockchain grifting. Likewise, people immediately assume decentralization must refer to blockchain. It's all been hugely unhelpful, with little in the way of meaningful technological breakthroughs.

        Okay, LLMs. Cool. What's the value here? They're shit for customer support. They're shit for writing new code that hasn't been done before, and still pretty shit for things that no-code solutions could knock out far more reliably. Their output is trite drivel as fiction, and it's diminishing the signal-to-noise ratio everywhere.

        People like Mark Cuban are worsening the signal-to-noise ratio too. We don't need to hear from Mark Cuban anymore. He's sat on billions of dollars in personal wealth and still wants more, just like Elon Musk, and he has a long history of being the loudest and most unjustifiably overconfident dude in the room.

        Like with cryptoshit, the "AI" buzzwordery is another king-making grift for the most part. The days of any semblance of good intentions or actual substance are long behind us. We are into the era of "we will reach AGI" borderline-religious bullshitting of ourselves as a species, because it is profitable to peddle lies for people to believe in. And all without admitting that what they're calling AGI is not what is classically defined as AGI.

        Elon Musk supposedly wants to make an LLM that finds "truth" (Grok), yet it fails spectacularly at that. If he ever did succeed in that pursuit, either he would destroy it or it would destroy him, one way or another, figuratively or literally. It can't reason; it's a next-word prediction model. To believe otherwise is unscientific and makes a person's thoughts on it irrelevant.
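
        For anyone who hasn't internalised what "next-word prediction" means mechanically, here's a toy sketch I made up for illustration (a frequency-counting bigram model; real LLMs are giant neural networks, but the generation loop has the same shape: predict the next token, append it, repeat, with no reasoning step anywhere):

          from collections import Counter, defaultdict

          # Tiny "training corpus"; a real model trains on trillions of tokens.
          corpus = "the cat sat on the mat because the cat was tired".split()

          # Count which word follows which.
          follows = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              follows[prev][nxt] += 1

          def generate(word, n=5):
              out = [word]
              for _ in range(n):
                  if word not in follows:
                      break
                  # Greedily pick the most frequent next word and continue.
                  word = follows[word].most_common(1)[0][0]
                  out.append(word)
              return " ".join(out)

          print(generate("the"))  # "the cat sat on the cat"

        Fluent-looking output, zero understanding. Scale that loop up by twelve orders of magnitude and you get something far more convincing, but the loop is still the loop.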

        When the dust settles, history books will see it for what it is: digital pet rocks and hand puppets. Some people are talented ventriloquists, but you shouldn't trust what the puppet LLMs tell you when their pre-prompting makes them follow a specific agenda. You only need to converse with the average LLM (ab)user on HN: they have an LLM summarise an article, argue something non-sequitur or outright false because the summary was so off the mark, and then usually admit they only skimmed the article (bollocks) or had it summarised.

        Personally, I think the fact that scaling up the tokens in a model leads to worse output means that AGI will not be reached via LLMs. It is a technological dead end and a gimmick, IMO. It's a cool trick, but it's all sizzle and no substance.

        If there were truly substance, where is it? Why aren't all the bugs and roadmap issues sorted for major FOSS projects? Why are companies that implement AI customer service not uncommonly ditching it after a while? And no, companies that offer AI-adjacent products don't count; that's selling pickaxes in an illusory gold rush, IMO.

        Copilot is on GitHub, but the quality of the code is increasingly dogshit. Claude Code can read and write repositories directly, but vibe-coded slop is now a meme.