170 points by ChrisArchitect 14 hours ago | 17 comments
  • incognito12414 hours ago
    Watched it a while ago. Made me seriously think about AI and what we should use it for. I feel like all the entertainment use cases (image and video gen) are a complete waste.
    • mattlondon14 hours ago
      The chatbots and image editors are just a side-show. The real value is coming in e.g. chemistry (Alpha fold etc all), fusion research, weather prediction etc.
      • epolanski7 hours ago
        ML has been used in weather prediction since the 80s and has been the backbone of it for almost a decade.

        Not sure what LLMs are supposed to do there.

        • danpalmer7 hours ago
          No one is suggesting using LLMs for weather. DeepMind is making significant progress on weather prediction with new AI models.
          • neumann2 hours ago
            oh god - please tell BoM in Australia. Either ML is not keeping up with climate change unpredictability, or SOTA is worse than what we had 10 years ago.
      • poszlem12 hours ago
        The real value is coming in warfare.
        • awaythrow99912 hours ago
          Right. More accurate predictions for metadata-based killings, as championed by the US in its war on terror.
          • walletdrainer11 hours ago
            Metadata-based killings are most likely a huge improvement over the prior state of affairs.
            • modeless10 hours ago
              Yeah. Let the leaders assassinate each other with drone strikes instead of indiscriminately bombing whole cities as they used to.
              • dylan6049 hours ago
                What gov't in the modern day would fall because the leader was assassinated? The next in line would just step up, now with a pissed-off population in favor of ratcheting things up beyond assassinations.
      • echelon13 hours ago
        None of that has reached the market yet. If it were up to the sciences alone, AI couldn't bear the weight of its own costs.

        It also needs to be vertically integrated to make money; otherwise it's a handout to the materials science companies. I can't see any of the AI companies stretching themselves that thin, so they give it away for goodwill or good PR.

        • incognito12413 hours ago
          That's not really true. Commercial weather prediction has reached the market, and a drug (sorry, can't find the news link) that was found by AI-accelerated drug discovery is now in clinical testing.
          • aoeusnth13 hours ago
            The reason why vertical integration is important for AI investment is that if AI is commoditized, then that AI acceleration will cost pennies for drugs that are worth billions.

            I don't see how OpenAI or Google can profit from drug discovery. It's nearly pure consumer surplus (where the drug companies and patients are the consumers).

    • cultofmetatron5 hours ago
      Unfortunately all this work on Sora has a very real military use case. I personally think all this investment in Sora by OpenAI is largely to create a digital fog of war. Now when a rocket splatters a 6-year-old Palestinian girl's head across the pavement like a Jackson Pollock painting, they will be able to claim it's AI-generated by state-sponsored actors in order to prevent disruption to the manufactured-consent apparatus.
    • modeless13 hours ago
      You might have said the same thing about GPUs for 20 years when they were mostly for games, before they turned out to be essential for AI. All the entertainment use cases were directly funding development of the next generation of computing all along.
    • threethirtytwo13 hours ago
      Why are images and video a complete waste? This makes no sense to me.

      Right now the generators aren’t effective but they are definitely stepping stones to something better in the future.

      If that future thing produces video, movies and pictures better than anything humanity can produce at a rate faster than we can produce things… how is that a waste?

      It can arguably be bad for society but definitely not a waste.

      • incognito12413 hours ago
        Let me phrase it a bit differently, then: AI-generated cats in Ghibli style are a waste; we should definitely do less of that. I did not hold that opinion before the documentary.

        Education-style infographics and videos are OK.

        • danielbln12 hours ago
          I'm glad you're not the sole arbiter for what is wasteful and what isn't.
          • dylan6049 hours ago
            Just because you disagree does not make them wrong though
        • threethirtytwo10 hours ago
          I'm not even talking about this. Those cat videos are just stepping stones for Academy Award-winning masterpieces of cinema like Dune. All generated by AI at a click, in one second.
      • QuantumGood12 hours ago
        Parent said "entertainment use cases" are a complete waste, not all uses of images and video. I don't agree, but do particularly find educational use cases of AI video are becoming compelling.

        I help people turn wire rolling shelf racks into the base of their home studio, and AI can now create a "how to attach something to a wire shelf rack" video from just a prompt, without me having to do all the space and rack and equipment and lighting and video setup. It's not close to perfect yet, but it's becoming useful.

        • dylan6049 hours ago
          > particularly find educational use cases of AI video are becoming compelling.

          Compelling graphics take a long time to create. For education content creators, this can be too expensive as well. My high school physics teacher would hand-draw figures on transparencies on an overhead projector. If he could have produced his drawings as animations cheaply and quickly using AI, it would have really brought his teaching style (he really tried to make it humorous) to another level. I think it would have been effective for his audience.

          Imagine the stylized animations for things like the rebooted Cosmos, NOVA, or even 3Blue1Brown on YT. There is potential for small teams to punch above their weight class with genAI graphics.

        • threethirtytwo10 hours ago
          If AI can produce movies, video and art better, aka “more entertaining”, than humans, then how is it a waste?
          • youngNed7 hours ago
            Because vast numbers of people find Coldplay entertaining. That doesn't mean it's a good thing.
            • threethirtytwo3 hours ago
              You lack imagination. When ChatGPT first came out, people were saying it could never code. Now if you aren’t using AI in your coding you’re biting the dust.

              Stop talking about the status quo… we are talking about the projected trendline. What will AI be when it matures?

              Second, you’re just another demographic. Smaller than the fans of Coldplay but equally generic, and thus an equal target for generated art.

              Here’s a prompt that will one day target you: “ChatGPT, create musical art that will target counter-culture posers who think they’re better than everyone just because they like something that isn’t mainstream. Make it so different they will worship that garbage like they worship Pearl Jam. Pretend that the art is by a human so that when they finally figure out they fell for it hook, line, and sinker, they’ll realize their counter-culture tendencies are just another form of generic trash fandom, no different than people who love Coldplay or, dare I say it, Taylor Swift.”

              What do you do then, when this future comes to pass and all content, even for posers, is replicated in ways that are superior?

              • plastic316940 minutes ago
                ”What a way to show them. You rock! Unfortunately I can’t create the musical art you requested as you reference multiple existing musical acts by name. How about rephrasing your request in a way that is truly original and unique to you”
          • wasmainiac9 hours ago
            But it’s not. I think most can agree that there really has not been any real entertainment from genAI beyond novelty crap like seeing Lincoln pulling a nice trick at a skate park. No one wants to watch genAI slop video, no one wants to listen to genAI video essays, and most people do not want to read genAI blog posts. Music is a maybe, based on leaderboards, but it’s not like we ever had a lack of music to listen to.
            • threethirtytwo2 hours ago
              Bro. You and your cohorts said the exact same thing about LLMs and coding when ChatGPT first came out. The status quo is obvious, so no one is talking about that.

              Draw the trendline into the future. What will happen when the content is indistinguishable and AI is so good it produces something that moves people to tears?

            • CamperBob24 hours ago
              Eventually it will be good enough that you won't know the difference.

              I have a feeling that's already happened to me.

    • tim33312 hours ago
      Practical things are probably treating diseases and a greater abundance of physical goods. More speculative/sci-fi is merging in some form with AI and maybe immortality, which I think is the more interesting bit.
    • jeffbee14 hours ago
      DeepMind's new [edit: apparently now old] weather forecast model is similar in architecture to the toys that generate videos of horses addressing Congress or cats wearing sombreros. The technology moves forward and while some of the new applications are not important, other applications of the same technology may be important.
      • incognito12413 hours ago
        Is it really similar? I was under the impression it's a GNN over a (really dense) polyhedron, not a diffusion model.
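        (For concreteness, a toy sketch of the kind of message-passing update I mean by "a GNN over a polyhedron mesh". This is nothing like DeepMind's actual code; the graph, weights, and numbers are all made up.)

          import numpy as np

          rng = np.random.default_rng(0)

          # Toy mesh: 12 nodes (think icosahedron vertices), each carrying a feature vector.
          num_nodes, dim = 12, 8
          x = rng.normal(size=(num_nodes, dim))                          # per-node weather state
          edges = [(i, (i + 1) % num_nodes) for i in range(num_nodes)]   # toy ring of edges

          # "Learned" weights (random here, since this only illustrates the shapes involved).
          W_msg = rng.normal(size=(dim, dim)) * 0.1
          W_upd = rng.normal(size=(2 * dim, dim)) * 0.1

          def message_passing_step(x):
              # Each node sums transformed messages from its neighbours, then updates its own state.
              agg = np.zeros_like(x)
              for src, dst in edges:
                  agg[dst] += np.tanh(x[src] @ W_msg)
              return np.tanh(np.concatenate([x, agg], axis=-1) @ W_upd)

          x_next = message_passing_step(x)   # one deterministic forecast-style update
          print(x_next.shape)                # (12, 8)

        A diffusion-style video model instead iterates a denoising step starting from random noise, which is a fairly different setup even if both are "just" big neural nets.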
  • someguy10101012 hours ago
    Reposting this from a YouTube comment:

    From 1:14:55-1:15:20, within the span of 25 seconds, the way Demis spoke about releasing all known sequences without a shred of doubt was so amazing to see. There wasn't a single second where he worried about the business side of it (profits, earnings, shareholders, investors) —he just knew it had to be open source for the betterment of the world. Gave me goosebumps. I watched that on repeat more than 10 times.

    • mNovak9 hours ago
      My interpretation of that moment was that they had already decided to give away the protein structures as charity; it was just a decision between releasing them all as a bundle vs fielding individual requests (a 'service').

      Still great of them to do, and as can be seen, it was worth it as a marketing move.

      • dekhn8 hours ago
        (As an aside, this is a common question when you have a good model: do you make a server that allows people to do one-off or small-scale predictions, or do you take a whole query set, run it in batch, and save the results in a database? It comes up a lot.)
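        A minimal sketch of the two options, with a hypothetical predict() standing in for the expensive model call (the table, scoring, and sequences below are made up for illustration):

          import sqlite3

          def predict(sequence: str) -> float:
              # Stand-in for an expensive model call.
              return float(len(sequence))

          # Option A: on-demand service, pay the model cost on every request.
          def serve_one(sequence: str) -> float:
              return predict(sequence)

          # Option B: batch precompute, run the whole query set once and save the
          # results, so every later lookup is just a database read.
          def precompute(sequences, db_path=":memory:"):
              con = sqlite3.connect(db_path)
              con.execute("CREATE TABLE IF NOT EXISTS preds (seq TEXT PRIMARY KEY, score REAL)")
              con.executemany("INSERT OR REPLACE INTO preds VALUES (?, ?)",
                              [(s, predict(s)) for s in sequences])
              con.commit()
              return con

          con = precompute(["MKTAYIAKQR", "GAVLIPFW"])
          print(con.execute("SELECT score FROM preds WHERE seq = ?", ("GAVLIPFW",)).fetchone())

        The public AlphaFold database release is, as I understand it, essentially option B taken to its limit.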
    • dekhn11 hours ago
      Another way to interpret this (and I don't mean it pejoratively at all): Demis has been optimizing his chances of winning a Nobel Prize for quite some time now. Releasing the data increased that chance. He also would have been fairly certain that the commercial value of the predictions was fairly low (simply predicting structures accurately was never the rate-limiting step for downstream things like drug discovery), and that he and his team would have a commercial advantage by developing better proprietary models and using them to make discoveries.
      • tim33310 hours ago
        Also, since selling DeepMind to Google, it's Google's shareholders' money really.
      • sgt1018 hours ago
        I think that's a rather conspiratorial way of framing it.

        I think it's more about someone trying to do the most good that was possible at that time.

        I doubt he cares much about prizes or money at this point.

        • dekhn8 hours ago
          It's hardly a conspiracy to use strategy and intelligence to maximize the probability of achieving the outcome you desire.

          He doesn't have to care much about prizes or money at this point: he won his prize and he gets all the hardware and talent he needs.

    • potsandpans10 hours ago
      I noticed this as well. Actually went back and watched it several times. It's an incredible moment. I keep thinking, "if this moment is real, this is truly a special person."
  • nightski13 hours ago
    In my experience all DeepMind content ends up being a puff piece for Dennis Hassabis. It's like his personal marketing engine lol.
    • ainch9 hours ago
      Perhaps they need more advertising around the correct spelling of his name.
    • stevenjgarner13 hours ago
      Is that a good thing or a bad thing? Demis is after all a co-founder and CEO.
      • Hacker_Yogi13 hours ago
        Makes it seem that AI is a one-man show while also feeding the hype cycle
    • ipnon5 hours ago
      He's the leading AI researcher at the 3rd largest company in the world in the middle of an AI boom. He's naturally going to have quite the marketing budget behind him!
  • ilaksh11 hours ago
    Greg Kohs and his team are brilliant. For example, the way the film captured the emotional triumph of the AlphaFold achievement. And a lot of other things.

    One of the smart choices was omitting the whole potential discussion about LLMs (VLMs) etc. and the fact that that part of the AI revolution was not invented in that group; the film just showed them using/testing it.

    One takeaway could be that you could be one of the world's most renowned AI geniuses and not invent the biggest breakthrough (like transformers). But also somewhat interesting is that even though he had been thinking about this for most of his life, the key technology (transformer-type architecture) was not invented until 2017. And they picked it up and adapted it within 3 years of it being invented.

    Also I am wondering if John Jumper and/or other members of the team should get a little bit more credit for adapting transformers into the Evoformer.

  • quirino11 hours ago
    Watched it this week. Pretty good.

    There are a couple parts at the start and the end where a lady points her phone camera at stuff and asks an AI about what it sees. Must have been mind-blowing stuff when this section was recorded (2023), but now it's just the bare minimum people expect of their phones.

    Crazy times we're living in.

    • HarHarVeryFunny8 hours ago
      I was ok with that as "fledgling AI" at the start of the movie/documentary, but thought that going back to it and having the chatbot suggest a chess book opening to Hassabis at the end was cheesy and misleading.

      They should have ended the movie on the success of AlphaFold.

  • dwroberts13 hours ago
    I want to watch it, but at the same time, it’s basically going to be an advert for Google. I’m not sure if I can put up with the uncritical fluff.

    I would love to see a real (i.e. outsider) filmmaker do this - e.g. an updated ‘Lo and Behold’ by Werner Herzog

    • ilaksh11 hours ago
      It was directed by Greg Kohs, who is a real filmmaker and does not work for Google.
      • dwroberts10 hours ago
        Yeah, I don’t mean to say they’re not a real filmmaker or are untalented etc., I mean more the context they’re doing it in: whether they’ve chosen to cover this topic themselves, and whether they would show critical angles of it and not just promo + hagiography.
      • lysace11 hours ago
        Are you saying this movie production wasn't paid for by Google? If it was, surely he did?
    • dist-epoch12 hours ago
      It's an advert for Demis Hassabis, not Google.
    • actionfromafar13 hours ago
      Where he speaks French.
  • jnwatson12 hours ago
    I caught it on the airplane a few days ago. I would have loved a little more technical depth, but I guess that's pretty much standard for a puff piece.

    It is interesting that Hassabis has had the same goal for almost 20 years now. He has a decent chance of hitting it too.

  • redbell13 hours ago
    Just watched it yesterday and enjoyed every second of it. The director put more focus on Demis Hassabis, who turns out to be a true superhero, and I have to confess that I probably admire him more than any other human in the tech industry.
  • dwa359213 hours ago
    Loved this documentary. People complaining - WTFV first.
  • dwarfpagent14 hours ago
    I find it funny that the YouTube link takes you to the film, but like an hour into it.
    • vmilner10 hours ago
      Yes, it made me think I'd already watched it and had forgotten about it...
  • circadian6 hours ago
    There are some funny comments going on in this thread. Understandably so. What could be more divisive an issue than AI on a Silicon Valley forum!?

    As a Brit, I found it to be a really great documentary about the fact that you can be idealistic and still make it. There are, for sure, numerous reasons to give DeepMind shit: Alphabet, potential arms usage, "we're doing research, we're not responsible". The Oppenheimer aspect is not to be lost; we all have to take responsibility for wielding technology.

    I was more anti-DeepMind than pro before this, but the truth is, as I get older it's nicer to see someone embodying the aspiration of wanton benevolence (for whatever reason) based on scientific reasoning than not. To keep it away from the US and acknowledge the benefits of spreading the proverbial "love" to the benefit of all (US included) shows a level of consideration that should not be under-acknowledged.

    I like this documentary. Does AGI and the search for it scare me? Hell yes. So do killer mutant spiders descending on Earth post nuclear holocaust. It's all about probabilities. To be honest: disease X freaks me out more than a superintelligence built by an organisation willing to donate the research to solve the problems of disease X. Google are assbiscuits, but DeepMind point in the right direction (I know more about their weather and climate forecasting efforts). This at least gave me reason to think some heart is involved...

  • ChrisArchitect14 hours ago
    Hard to discount the impact of AlphaFold in science work, but submitting this to a number of film festivals like Tribeca seems a bit like AI-washing.
    • llbbdd9 hours ago
      What is AI-washing?
  • beginnings12 hours ago
    i tried to watch it but like AI in general, it was extraordinarily boring. neural nets are really cool technically, but the whole AI thing is just getting old and I couldnt care less where its going

    we can guarantee that whether its the birth of superintelligence or just a very powerful but fundamentally limited algorithm, it will not be used for the betterment of mankind, it will be exploited by the few at the top at the expense of the masses

    because thats apparently who we are as a species

    • hbarka11 hours ago
      Hi, I’m genuinely curious about your writing style. I’m seeing this trend of no proper casing and no punctuation becoming vogue-ish. Is there a particular reason you prefer to write this way or is this writing style typical for a generation? Sincere question, not snark, coming from an older generation guy.
      • mystifyingpoi11 hours ago
        This is the writing style of this generation. I've just scrolled through 6 months of my conversation with a friend in his twenties. Not a single comma or period to be seen. I mean on his side.
      • aswegs811 hours ago
        If you grew up on the internet of the early 2000s, that's how we wrote online.
        • querez11 hours ago
          I grew up on the Internet at that time, and it's certainly not how I type. So you might want to be more specific about which sites or subcultures you think this style is representative of?
          • luma11 hours ago
            I’m certainly no authority but i tend to write the same way for casual communication, came from the 90s era BBS days. It was (and still is) common on irc nets too. Autocorrect fixes up some of it, but sometimes i just have ideas i’m trying to dump out of my head and the shift key isn’t helping that go faster. Emails at work get more attention, but bullshittin with friends on the PC? No need.

            I’ll code switch depending on the venue, on HN i mostly Serious Post so my post history might demonstrate more care for the language than somewhere i consider more casual.

    • tim33311 hours ago
      If you watch on, there's a bit where they decide to give away all the protein folding results for free when they could have charged (https://youtu.be/d95J8yzvjbQ?t=4497). Not everything is exploitation rather than the betterment of mankind.
    • AndrewKemendo11 hours ago
      Correct! I’m glad people are finally starting to get it
      • verisimi11 hours ago
        weekends are always better on hn
  • DrierCycle14 hours ago
    AlphaFold is optimization, not thinking. Propaganda 'r us.
    • fredoliveira14 hours ago
      Did you watch the documentary? You would probably fare better if you did, because it'd give you the context for the film title.
      • DrierCycle14 hours ago
        I'm an hour into it, unconvinced.

        The illusion that agency 'emerges' from rules like games is fundamentally absurd.

        This is the foundational illusion of mechanics. It's UFOlogy, not science.

        • fredoliveira14 hours ago
          Well, two things: it's the last sentence of the film; being an hour into something you're calling propaganda is brave.

          Anyways. I thought the documentary was inspiring. DeepMind is the only lab that has historically prioritized science over consumer-facing product (that's changing now, however). I think their work with AlphaFold is commendable.

          • DrierCycle14 hours ago
            It's science under the creative boundary of binary/symbols. And as analog thinkers, we should be developing far greater tools than these glass ceilings. And yes, having finished the film, it's far more propagandistic than it began as.

            Science is exceeding the envelope of paradox, and what I see here is obeying the envelope in order to justify the binary as a path to AGI. It's not a path. The symbol is a bottleneck.

            • Zigurd13 hours ago
              Everything between your ears is an electrochemical process. It's all math and there is no "creative boundary." There's plenty to criticize in AI hype that we're going to get to machine intelligence very soon. I suspect a lot of the hype is oriented towards getting favorable treatment from the government if not outright subsidies. But claiming that there are fundamental barriers is a losing bet.
              • DrierCycle13 hours ago
                It doesn't happen "between ears" and math is an illusion of imprecision. The fundamental barrier is frameworks, and computers will not be involved. There will be software, obviously. But it will never be computed.
          • amitport14 hours ago
            Plenty of *commercial* labs have frequently prioritized pure science over *immediate* consumer products, but none have done so out of charity. DeepMind included.
        • Zigurd13 hours ago
          Your mind emerges from a network of neurons. Machine models are probably far from enabling that kind of emergence, but if what's going on between our ears isn't computation, it's magic.
          • DrierCycle13 hours ago
            It's not magic. It's neural syntax. And nothing trapped by computation is occurring. It's not a model, it is the world as actions.

            The computer is a hand-me-down tool under evolution's glass ceiling. This should be obvious: binary, symbols, metaphors. These are toys (ie they are models), and humans are in our adolescent stage using these toys.

            Only analog correlation gets us to agency and thought.

        • MattRix14 hours ago
          Is there a fundamental difference between it and true agency/thought? I’m not so sure.
          • DrierCycle14 hours ago
            Agency will emerge from exceeding the bottleneck of evolution's hand-me-down tools: binary, symbols, metaphors. As long as these unconscious sportscasters for thought "explain" to us what thought "is", we are trapped. DeepMind is simply another circular hamster wheel of evolution. Just look at the status-propaganda the film heightens in order to justify the magic.
        • dboreham12 hours ago
          Why is it absurd? Because believing that would break some deep delusion humans have about themselves?
          • youngNed7 hours ago
            Quite honestly, it's about time the penny dropped.

            Look around you, look at the absolute shit people are believing; the hope that we have any more agency than machines is, to use the language of the kids, cope.

            I have never considered myself particularly intelligent, which, I feel, puts me at odds with much of the HN readership, but I do always try to surround myself with the smartest people I can.

            The number of them that have fallen down the stupidest rabbit holes I have ever seen really makes me think: as a species, we have no agency.

    • HarHarVeryFunny8 hours ago
      Sure, but AlphaFold is still probably the most impactful and positive thing to have come out of "Deep Learning" so far.
      • theturtletalks7 hours ago
        Didn’t the transformer model come from AlphaFold? I feel like we wouldn’t have had the LLMs we use today if it wasn’t for AlphaFold.
        • HarHarVeryFunny5 hours ago
          The Transformer was invented at Google, but by a different team. AFAIK the original AlphaFold didn't use a transformer, but AlphaFold 2.0 and 3.0 do.
    • Rochus12 hours ago
      Not sure why this is downvoted. The comment cuts to the core of the "Intelligence vs. Curve-Fitting" debate. From my humble perspective as a PhD in the molecular biology/biophysics field, you are fundamentally correct: AlphaFold is optimization (curve-fitting), not thinking. But calling it "propaganda" might be a slight oversimplification of why that optimization is useful.

      If you ask AlphaFold to predict a protein that violates the laws of physics (e.g. a designed sequence with impossible steric clashes), it will sometimes still confidently predict a folded structure, because it is optimizing for "looking like a protein", not for "obeying physics". The "Propaganda" label likely comes from DeepMind's marketing, which uses words like "Solved"; in reality, DeepMind found a way to bypass the protein folding problem rather than solve it.
      • dekhn10 hours ago
        If there's one thing I wish DeepMind did less of, it's conflating the protein folding problem with static structure prediction. The former is a grand challenge problem that remains 'unsolved', while the latter is an impressive achievement that really is optimization using a huge collection of prior knowledge. I've told John Moult, the organizer of CASP, this (I used to "compete" in these things), and I think most people know he's overstating the significance of static structure prediction.

        Also, solving the protein folding problem (or getting to 100% accuracy on structure prediction) would not really move the needle in terms of curing diseases. These sorts of simplifications are great if you're trying to inspire students into a field of science, but get in the way when you are actually trying to rationally allocate a research budget for drug discovery.

        • smj-edison7 hours ago
          I'm really curious about this space: what types of simulation/prediction (if any) do you see as being the most useful?

          Edit to clarify my question: What useful techniques 1. Exist and are used now, and 2. Theoretically exist but have insurmountable engineering issues?

          • dekhn6 hours ago
            The techniques that exist and are used right now are mostly around target discovery (identifying proteins in humans that can be targeted by a drug), protein structure prediction, and function prediction. Identifying sites on the protein that can be bound by a drug is also pretty common. I worked on a project recently where our goal was to identify useful mutations to make to an engineered antibody so that it bound to a specific protein in the body that is linked to cancer.

            If your goal is to bring a drug to market, the most useful thing is predicting the outcome of the FDA drug approval process before you run all the clinical trials. Nobody has a foolproof method to do this, so failure rates at the clinical stage remain high (and it's unlikely you could create a useful predictive model for this).

            Getting even more out there, you could in principle imagine an extremely high fidelity simulation model of humans that gave you detailed explanations of why a drug works but has side effects, and which patients would respond positively to the drug due to their genome or other factors. In principle, if you had that technology, you could iterate over large drug-like molecule libraries and just pick successful drugs (effective, few side effects, works for a large portion of the population). I would describe this as an insurmountable engineering issue because the space and time complexity is very high and we don't really know what level of fidelity is required to make useful predictions.

            "Solving the protein folding problem" is really more of an academic exercise to answer a fundamental question; personally, I believe you could create successful drugs without knowing the structure of the target at all.

            • smj-edison6 hours ago
              Thank you for the detailed answer! I'm just about to start college, and I've been wanting to research molecular dynamics, as well as building a quantitative pathway database. My hope is to speed up the research pipeline, so it's heartening to know that it's not a complete dead end!
      • HarHarVeryFunny8 hours ago
        It seems that to solve the protein folding problem in a fundamental way would require solving chemistry, yet the big lie (or false hope) of reductionism is that knowing the fundamental laws of the universe, such as quantum theory, gets you the higher levels for free; in fact it doesn't help that much with figuring out the laws/dynamics at higher levels of abstraction such as chemistry.

        So, in the meantime (or perhaps forever), we look for patterns rather than laws, with neural nets being one of the best tools we have available to do this.

        Of course, ANNs need massive amounts of data to "generalize" well, while protein folding only had a small amount available, due to the months of effort needed to experimentally determine how any one protein is folded. So DeepMind threw the kitchen sink at the problem, apparently using a diffusion-like process in AlphaFold 3 to first determine large-scale structure and then refine it, and using co-evolution of proteins as another source of data to address the paucity.

        So, OK, they found a way around our lack of knowledge of chemistry and managed to get an extremely useful result all the same. The movie, propaganda or not, never suggested anything different, and "at least 90% correct" was always the level at which it was understood the result would be useful, even if 100%, based on having solved chemistry / molecular geometry, would be better.

        • dekhn6 hours ago
          We have seen some suggestion that the classical molecular dynamics force fields are sufficient to predict protein folding (in the case of stable, soluble, globular proteins), in the sense that we don't need to solve chemistry but only need to know a coarse approximation of it.
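          For anyone wondering what "a coarse approximation" means here: a typical classical (AMBER/CHARMM-style) force field is, roughly, a sum of fitted, physics-inspired terms with no electrons and no bond breaking, something like

            E = \sum_{\text{bonds}} k_b (r - r_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{dihedrals}} \frac{V_n}{2}\bigl[1 + \cos(n\phi - \gamma)\bigr] + \sum_{i<j} \Bigl[ 4\varepsilon_{ij}\bigl((\sigma_{ij}/r_{ij})^{12} - (\sigma_{ij}/r_{ij})^{6}\bigr) + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \Bigr]

          i.e. harmonic bonds and angles, periodic dihedrals, and Lennard-Jones plus Coulomb non-bonded interactions, with parameters fit to experiment and quantum calculations.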
      • DrierCycle12 hours ago
        I'm concerned that coders and the general public will confuse optimization with intelligence. That's the nature of propaganda: using sleight of hand to create a false narrative.

        btw an excellent explanation, thank you.

        • autonomousErwin9 hours ago
          What's the difference between optimisation and intelligence?
          • HarHarVeryFunny8 hours ago
            For a start optimization is a process, and intelligence is a capability.
      • tim33310 hours ago
        I think if you watch the actual film you'd find they don't claim AlphaFold is thinking.
        • BanditDefender7 hours ago
          There is quite a bit of bait-and-switch in AI, isn't there?

          "Oh, machine learning certainly is not real learning! It is a purely statistical process, but perhaps you need to take some linear algebra. Okay... Now watch this machine learn some theoretical physics!"

          "Of course chain-of-thought is not analogous to real thought. Goodness me, it was a metaphor! Okay... now let's see what ChatGPT is really thinking!"

          "Nobody is claiming that LLMs are provably intelligent. We are Serious Scientists. We have a responsibility. Okay... now let's prove this LLM is intelligent by having it take a Putnam exam!"

          One day AI researchers will be as honest as other researchers. Until then, Demis Hassabis will continue to tell people that MuZero improves via self-play. (MuZero is not capable of play and never will be)

    • dwa359214 hours ago
      what is thinking?
      • DrierCycle14 hours ago
        Sharp wave ripples, nested oscillations, cohering at action-syntax. The brain is "about actions" and lacks representations.
      • __patchbit__14 hours ago
        Creatively peeling the hyper-dimensional space in the scope of symplectic geometry, Markov blankets and Helmholtz invariance????