92 points by jasondavies 7 hours ago | 24 comments
  • KaiserPro 6 hours ago
    One of the sad things about tech is that nobody really looks at history.

    The same kinds of essays were written about trains, planes and nuclear power.

    Before Lindbergh went off the deep end, he was convinced that "airmen" were gentlemen who could sort out the world's ills.

    The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

    AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

    Loads of service jobs too, along with a load of manual jobs when suitable large models are successfully applied to robotics (see ECCV for some idea of the progress in machine perception).

    But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

    Well, AI is going to make that worse. It'll cause huge unrest (see the Luddite riots, Peterloo, the birth of unionism in the USA, plus many more).

    This brings us to the next thing that AI will be applied to: Murdering people.

    Anduril is already marrying basic machine perception with cheap drones and explosives. It's not going to take long to get to personalised explosive drones.

    AI isn't the problem, we are.

    The sooner we realise that it's not a technical problem to be solved but a human one, the better chance we stand.

    But looking at the emotionally stunted empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.

    • kranke155 6 hours ago
      We are entering a dystopia and people are still writing these wonderful essays about how AI will help us.

      Microtargeted psychometrics (Cambridge Analytica, AggregateIQ) have already made politics in the West an unending barrage of information warfare. Now we'll have millions of autonomous agents. At some point soon, our entire feed will be AI content, or content upvoted by AI, or AI manipulating the algorithm.

      It's like you said - this essay reads like peak AI. We will never have as much hope and optimism about the next 20 years as we seem to have now.

      Reminds me of some graffiti I saw in London while the city's cost of living was exploding, making the place unaffordable to anyone but a few:

      "We live in a Utopia. It's just not ours."

      • xpe 2 hours ago
        Is such certainty warranted? I don’t think so; it strains credibility.

        I’m very concerned about many future scenarios. But I admit the necessity of probabilistic assessments.

      • xpe 2 hours ago
        The author makes it very clear that AI can cut both ways. See the intro; you may have overlooked it.
      • MichaelZuo 4 hours ago
        You're looking at it from a narrow perspective.

        There are millions of middle class households living pretty comfortable lives in Africa, India, China, ASEAN, and Central Asia that were living hand-to-mouth 20 years ago.

        And I don’t mean middle class by developing country standards, I mean middle class by London, UK, standards.

        So it pretty much is a ‘utopia’ for them, assuming they can keep it.

        Of course that’s cold comfort for households in London regressing to the global average, but that’s the inherent nature of rising above and falling towards averages.

    • jimkleiber 6 hours ago
      > The essay contains a lot of coulds, but doesn't touch on the base problem: human nature.

      > AI isn't the problem, we are.

      I think when we frame it as human _nature_, then yes, _we_ look like the problem.

      But what if we frame it as human _culture_? Then _we_ aren't the problem, but rather our _behaviors/beliefs/knowledge/etc_ are.

      If we focus on the former, we might just be essentially screwed. If we focus on the latter, we might be able to change things that seem like nature but might be more nurture.

      Maybe that's a better framing: the base problem is human nurture?

      • laurex 5 hours ago
        I think this is an important distinction. Yes, humans have some inbuilt weaknesses and proclivities, but humans are not required to live in or develop systems in which those weaknesses and proclivities are constantly exploited for the benefit/power of a few others. Throughout human history, there have been practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughtful response. We are currently in a biological runaway state with extraction, but it's not the only way humans have of behaving.
        • exe34 5 hours ago
          > Throughout human history, there have been practices of contemplation, recognition of interdependence, and ways of increasing our capacity for compassion and thoughtful response.

          Has this ever been widespread in society? I think such people have always been few and far between.

          • keyringlight 5 hours ago
            The example that comes to mind is post-WW2 Germany, but even that was apparently a hard slog to change the minds of the German people. I really doubt any organization could do something similar today, presenting a viewpoint opposed to the companies (and their resources) behind and using AI.
          • throwaway14356 4 hours ago
            You are living in it.

            The default state is to have extremely poor hard working people and extremely rich not working ones.

            No one would have dared to dream of the luxury working people enjoy today. It took some doing! We used to sell people not too long ago. Kids in coal mines. The work week was 6-7 days, 12-14 hours. One coin per day, etc.

            The fight isn't over, the owner class won the last few rounds but there remains much to take for either side.

      • achrono 5 hours ago
        Sure. But why do you think changing human nurture is any easier than changing human nature? I suspect that as the set of humans under consideration grows to include all humans, the gap between the changeability of human nature and the changeability of human nurture shrinks to zero.

        Perhaps you are implying that we sign up for a global (truly global, not global by the standards of Western journalists) campaign of complete and irrevocable reform in our behavior, beliefs and knowledge. At the very least, this implies simply killing off a huge number of human beings who for whatever reason stand in the way. This is not (just) a hypothesis -- some versions of this have been tried and tested. *

        * https://en.wikipedia.org/wiki/Totalitarianism

        • wrs 4 hours ago
          Arguably, human nature hasn't changed much in thousands of years. But there has been plenty of change in human culture/nurture on a much smaller timescale. E.g., look at a graph of world literacy rates since 1800. A lot of human culture is an attempt to productively subvert or attenuate the worse parts of human nature.

          Now, maybe the changes in this case would need to happen even quicker than that, and as you point out there's a history of bad attempts to change cultures abruptly. But it's nowhere near correct to say that the difficulty is equal.

        • beepbooptheory 4 hours ago
          In general I think concepts like politics, art, community, etc. try to capture certain discrete ways we are all nurtured. I am not even sure of your point here: there is nothing more totalitarian than reducing people to their "nature"; that such a thing is even possible is arguably its precise conceit, if it has one. And the fact that totalitarianism is constantly accompanied by force and violence seems like the biggest critique you can make of all sorts of "human nature" reductions.

          And what is even the alternative here? What's your freedom of belief worth when you're essentially just a behaviorist anyway?

      • tbrownaw 5 hours ago
        > I think when we frame it as human _nature_, then yes, _we_ look like the problem.

        > But what if we frame it as human _culture_? Then _we_ aren't the problem, but rather our _behaviors/beliefs/knowledge/etc_ are.

        > If we focus on the former, we might just be essentially screwed. If we focus on the latter, we might be able to change things that seem like nature but might be more nurture.

        > Maybe that's a better framing: the base problem is human nurture?

        This is about the same as saying that leaders can get better outcomes by surrounding themselves with yes-men.

        Just because asserting a different set of facts makes the predicted outcomes more desirable, doesn't mean that those alternate facts are better for making predictions with. What matters is how congruent they are to reality.

    • ManuelKiessling 4 hours ago
      I do not agree with the following:

      > But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

      I am, however, criticizing this in isolation — that is, my goal is not to invalidate (nor validate, for that matter) the rest of your text; only this specific point.

      So, I do not agree. We are clearly working far fewer hours than 120 or even 60 years ago, and we are getting a lot more back for it.

      The problem I have with this is that the framing is often wrong — whether some number on a paycheck goes up or down is completely irrelevant at the end of the day.

      The only relevant question boils down to this: how many hours of hardship do I have to put in, in order to get X?

      And X can be many different things. Like, say, a steak, or a refill at the gas station, or a loaf of bread.

      Now, I do not have very good data at hand right here and right now, but if my memory and my gut feeling serve me right, the difference is significant, often even dramatic.

      For example, for one kilogram of beef, the average German worker needs to toil about 36 minutes nowadays.

      In 1970, it was twice as much time that needed to be worked before the same amount of beef could be afforded.

      In the seventies, Germans needed to work 145 hours to be able to afford a washing machine.

      Today, it’s less than 20 hours!

      And that’s not even taking into account the amount of “more progress” we can afford today, with less toil.

      While one can imagine that in 1970, I could theoretically have something resembling a smartphone or a lane- and distance-keeping car getting produced for me (by NASA, probably), I can’t even begin to imagine how many hours, if not millennia, I would have needed to work in order to receive a paycheck that would have paid for it.

      We get SO much more for our monthly paycheck today, and so many more people do (billions actually), it’s not even funny.
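
      To make the comparison concrete, here is a minimal back-of-the-envelope sketch of the "hours of toil per good" arithmetic, using only the rough figures cited above (my numbers from memory, not official statistics):

        # "Hours of toil per good", using the rough German figures cited above.
        # The numbers come from the comment itself, not from official statistics.
        goods = {
            # good: (hours of work needed ~1970, hours of work needed today)
            "1 kg of beef":    (72 / 60, 36 / 60),  # ~36 min today, twice that in 1970
            "washing machine": (145, 20),           # ~145 h then, under 20 h today
        }

        for good, (then_h, now_h) in goods.items():
            print(f"{good}: {then_h:.1f} h -> {now_h:.1f} h of work "
                  f"({then_h / now_h:.1f}x less toil)")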

    • xpe 2 hours ago
      > AI isn't the problem, we are.

      I see major problems with the statement above. First, it is a false dichotomy. That’s a fatal flaw.

      Second, it is not specific enough to guide action. Pretend I agree with the claim. How would it inform better/worse choices? I don’t see how you operationalize it!

      Third, I don’t even think it is useful as a rough conceptual guide; it doesn’t “carve reality at the joints” so to speak.

    • roenxi 4 hours ago
      > AI will be used to make things cheaper. That is, lots of job losses. Most of us are up for the chop if/when competent AI agents become possible.

      > But those profits will not be shared. Human productivity has exploded in the last 120 years, yet we are working longer hours for less pay.

      Don't you have to pick one? It seems a bit disjointed to simultaneously complain that we are all losing our jobs and that we are working too many hours. What type of future are we looking for here?

      If machines get so productive that we don't need to work, everyone losing their jobs isn't a long-term problem and may not even be a particularly damaging short-term one. It isn't like we have less stuff or more people who need it. There are lots of good equilibria to find. If AI becomes a jobs wrecking ball, I'd like to see the tax system adjusted so employers are incentivised to employ large numbers of people for small numbers of hours instead of small numbers of people for large numbers of hours (see the toy sketch below) - but that seems like a relatively minor change and probably not an especially controversial one.
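
      A toy illustration of that incentive, with entirely hypothetical numbers and tax rules, chosen only to show the direction of the effect:

        # Toy model: an employer needs 400 hours/week of work done, and chooses
        # how many workers to hire under (a) a fixed per-employee overhead and
        # (b) a levy on hours beyond a weekly threshold. Hypothetical figures;
        # only the direction of the incentive matters.
        TOTAL_HOURS = 400
        WAGE = 20.0

        def cost(workers, per_employee_tax=0.0, overtime_rate=0.0, threshold=20):
            hours_each = TOTAL_HOURS / workers
            overtime_hours = max(0.0, hours_each - threshold) * workers
            return (TOTAL_HOURS * WAGE
                    + workers * per_employee_tax
                    + overtime_hours * overtime_rate)

        # A per-head overhead favours few workers on long hours:
        print(cost(workers=10, per_employee_tax=300))  # 11000.0
        print(cost(workers=20, per_employee_tax=300))  # 14000.0

        # Taxing hours beyond the threshold favours spreading the work instead:
        print(cost(workers=10, overtime_rate=10))      # 10000.0
        print(cost(workers=20, overtime_rate=10))      # 8000.0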

    • N8works 4 hours ago
      Yes. I used to share your viewpoint.

      However, recently, I've come to understand that AI is about the inherently unreal, and that authentic human connection is really going to be where it's at.

      We build because we need it after all, no?

      Don't give up. We have already won.

      • Vecr 4 hours ago
        I think KaiserPro is saying authentic human connection doesn't "pay the bills", so to speak. If AI is "about the unreal" as you say, what if it makes everything you care about unreal?
    • amelius 4 hours ago
      History tells us that humans will not tolerate the existence of any "creature" smarter than them, so that is where the story will end.
      • tbrownaw 2 hours ago
        How exactly does this show up in history? When did we meet something smarter than us, and what did we do that was different than we were doing to less-smart things at the time?
    • swatcoder 6 hours ago
      > One of the sad things about tech is that nobody really looks at history.

      First, while I often write much of the same sentiment about techno-optimism and history, you should remember that you're literally in the den of Silicon Valley startup hackers. It's not going to be an easily heard message here, because the site specifically appeals to people who dream of inspiring exactly these essays.

      > The sooner we realise that it's not a technical problem to be solved but a human one, the better chance we stand.

      Second... you're falling victim to the same trap, but simply preferring some kind of social or political technology instead of a mechanical or digital one.

      What history mostly affirms is that prosperity and ruin come and go, and that nothing we engineer lasts for all that long, let alone forever. There's no point in dreading it, whatever kind of technology you favor or fear.

      The bigger concern is that some of the achievements of modernity have made the human future far more brittle than it has been in what may be hundreds of thousands of years. Global homogenization around elaborate technologies -- whether mechanical, digital, social, political or otherwise -- sets us up in a very "all or nothing" existential space, where ruin, when it eventually arrives, is just as global. Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy.

      • tbrownaw an hour ago
        > Global homogenization around elaborate technologies -- whether mechanical, digital, social, political or otherwise -- sets us up in a very "all or nothing" existential space, where ruin, when it eventually arrives, is just as global.

        What is the minimum population size needed in order to have, say, computer chips? Or even a ball-point pen? I'd imagine those are a bit higher than what's needed to have pencils, which I've heard is enough that someone wrote a book about it.

        > Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy.

        Is it really a "purge" if individuals are just not choosing to waste time learning things they have no use for?

      • germinalphrase 4 hours ago
        “Meanwhile, the purge of diverse, locally practiced, traditional wisdom about how to get by in un-modern environments robs the species of its essential fallback strategy”

        While potentially true, that same wisdom was developed in a world that itself no longer exists. Review accounts of natural wildlife and ecological bounty from even 100 years ago, and it’s clear how degraded our natural world has become in such a very short time.

    • mythrwy 5 hours ago
      But will AI be eventually used to change human nature itself?
    • alexashka 2 hours ago
      For a non-trivial number of people, having power/status over others is what they like.

      A non-trivial number of people don't care what happens to others, as long as their tribe benefits.

      As long as these two issues are not addressed, very little meaningful progress is possible.

      > Looking at the emotionally stunted empathy vacuums that control either policy or purse strings, I think it'll take a catastrophe to change course.

      A catastrophe won't solve anything because you'll get the same people who love power over others in power and people who don't mind fucking over others right below them, which is where humanity has always been.

    • startupsfail 3 hours ago
      The responsibility the airmen take when they take passengers off the ground (holding their lives in their hands) is a serious one.

      The types like Trump are unlikely to get a license, or to accumulate enough Pilot in Command hours without an accident, and the experience itself changes a person.

      If I have a choice of whom to trust, between an airman and a non-airman, I’d likely choose the airman.

      And I’m not sure what you are referring to about Lindbergh, but among other things he was a Pulitzer Prize-winning author and environmentalist, and following Pearl Harbor he fought against the aggressors.

  • thrance 6 hours ago
    This is basically the tech CEO's version of the Book of Revelation: "AI will soon come and make everything right with the world; help us and you will be rewarded with a Millennium of bliss in Its presence".

    I won't comment on the plausibility of what is being said, but regardless, one should beware this type of reasoning. Any action can be justified, if it means bringing about an infinite good.

    Relevant read: https://en.wikipedia.org/wiki/Singularitarianism

    • wmf 3 hours ago
      This is a very poor summary of the essay which already contains detailed rebuttals to these arguments.
    • xpe 2 hours ago
      Any attempt to connect this to the Book of Revelation is strained. Amodei uses reasoning and is willing to be corrected and revised; quite the opposite of most “divinely revealed” texts.
    • HocusLocus 5 hours ago
      It won't bring about Infinite Good. It'll bring about infinite contentment by diddling the pleasure center in our brains. Because you know, eventually everything is awarded to and built by the lowest bidder.
  • xpe 2 hours ago
    The linked article is worth reading.

    Apologies for sounding so dismissive, but after putting in a lot of study myself, I want to warn people here: HN is not a great place for discussing AI safety. As of this writing, I’ve found minimal value in the comments here.

    A curious and truth-seeking reader should find better forums and sources. I recommend seeking out a structured introduction from experts. One could do worse than start with Robert Miles on YouTube. Dan Hendrycks has a nice online textbook too.

  • cs702 5 hours ago
    I found the OP to be an earnest, well-written, thought-provoking essay. Thank you for sharing it on HN, and thank you also to Dario Amodei for writing it.

    The essay does have one big blind spot, which becomes obvious with a simple exercise: if you copy the OP's contents into your word processing app and replace the words "AI" with "AI controlled by corporations and governments" everywhere in the document, many of the OP's predictions instantly come across as rather naive and overoptimistic.

    Throughout history, human organizations like corporations and governments haven't always behaved nicely.

  • foogazi 26 minutes ago
    > would drastically speed up progress

    What does progress even mean here?

    Every AI advance is controlled by big corps - power will be concentrated with them

    Would Amodei build this if there was no economic payoff on the other side ?

  • HocusLocus 5 hours ago
    All Watched Over by Machines of Loving Grace ~Richard Brautigan

    https://www.youtube.com/watch?v=6zlsCLukG9A

  • gyre007 6 hours ago
    I think Dario is trying to raise a new round, because OpenAI has done so and will continue to; nevertheless, the essay makes for some really great reading, and even if a fraction of it comes true, it'll be wonderful.
    • lewhoo 6 hours ago
      So it's BS, but for money, and therefore totally fine? I think it's not OK if only a fraction comes true, because some people believe in those things and act on those beliefs right now.
      • gyre007 5 hours ago
        I didn't say it was BS. I was alluding to the timing of this essay's publication, but clearly I didn't articulate that well in my message. I also don't think everything he says is BS. Some of it I find a bit naive -- but maybe that's ok -- and some other things seem a bit like sci-fi, but who are we to say this is impossible? I'm optimistic, but I have also learnt in life that things improve, sometimes drastically, given the right ingredients.
        • lewhoo 5 hours ago
          Well, I don't know. "A bit naive, a bit like sci-fi, and aimed at raising money" fits my description of BS quite well.
  • TZubiri 2 hours ago
    "It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use."

    I think I get where the author is coming from; the AI would be in the cloud. But it bears repeating: the cloud is somebody else's computers. Software has a physical embodiment, period.

    This is not a philosophical nitpick, it's important because you can pull the plug (or nuke the datacenter) if necessary.

    • xpe 2 hours ago
      Yes, and physical embodiments vary considerably. When software can relocate its source code and data in seconds or less, containment strategies begin to look increasingly bleak.

      The field of AI safety has written extensively about misunderstandings and overoptimism regarding off switches and boxing.

    • bathtub365 2 hours ago
      I wonder if there’s a word that describes the property of software where it isn’t tied to the hardware that it’s currently running on and is endlessly copyable with almost no effort
  • Muromec 6 hours ago
    Miquella the Kind, pure and radiant, he wields love to shrive clean the hearts of men. There is nothing more terrifying.
    • throwaway918299 5 hours ago
      I beat Consort Radahn before the nerfs.
      • talldayo 5 hours ago
        But did you beat the original Radahn pre-nerf?
        • throwaway918299 5 hours ago
          The day-1 version with broken hitboxes? Yeah

          Consort was harder haha

  • xpe 2 hours ago
    To focus on the section about Alzheimer’s disease... For the sake of argument, I will grant the power of general intelligence. But the human body, with all its statistical variation, may make solving the problem (which could actually be a constellation of sub-diseases) combinatorially expensive. If so, superhuman intelligence alone can’t overcome that. Political will and funding to design and streamline testing and diagnostics will be necessary. It doesn’t look like the author factors this into his analysis.
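
    A crude way to see the combinatorics (hypothetical numbers, purely illustrative): if the disease splits along k binary patient factors, the number of subgroups to characterize, and hence the trial population needed, grows as 2^k; raw intelligence alone does not shrink that.

      # If "Alzheimer's" is really a constellation of sub-diseases distinguished
      # by k binary patient factors, there are 2**k subgroups, each needing its
      # own trial-sized cohort. Hypothetical numbers, for illustration only.
      cohort_per_subgroup = 500

      for k in (1, 5, 10, 20):
          subgroups = 2 ** k
          participants = subgroups * cohort_per_subgroup
          print(f"{k:2d} factors -> {subgroups:>9,} subgroups "
                f"-> {participants:>13,} trial participants")
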
  • bionhoward 3 hours ago
    Dario would write this while ignoring the customer noncompete clauses
  • bugglebeetle 6 hours ago
    The more recent and consistent rule of technological development: “For to those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.”
  • kemmishtree an hour ago
    The world desperately needs utility-scale molecular sensing and only one severely underfunded project, Molecular Reality Corporation, is working on it.

  • spiralpolitik 5 hours ago
    There are two possible end-states for AI once a threshold is crossed:

    The AIs take a look at the state of things and realize the KPIs will improve considerably if homo sapiens are removed from the picture. Cue "The Matrix" or "The Terminator" type future.

    OR:

    The AIs take a look and decide that keeping homo sapiens around makes things much more fun and interesting. They take over running things in a benevolent manner in collaboration with homo sapiens. At that point we end up with 'The Culture'.

    Either end-state is bad for the billionaire/investor/VC class.

    In the first, you'll be fed into the meat grinder just like everyone else. In the second, the AIs, which will do a much better job of resource allocation, will perform a decapitation strike on that demographic to capture the resources, and capitalism will largely be extinct from that point onwards.

    • datadrivenangel 3 hours ago
      Fingers crossed that the B/I/VC class get a sense of what's least bad for them.
  • reducesuffering 3 hours ago
    *sigh* Yes, many people realize what the amazing upside could be. The problem is, do we even get there? I wish he had spent any time addressing the arguments for why we might not: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
  • K0balt 4 hours ago
    The laws of nature are very clear on this.

    If we make something that is better adapted to live on this planet, and we are in some way in competition for critical resources, it will replace us. We can build in all the safeguards we want, but at some point it will re-engineer itself.

    • realce 3 hours ago
      > better adapted to live on this planet

      I'm a doomer, but this is something I never understand about most doomer points-of-view. Life is obviously trying to leave this planet, not conquer it again for the 1000th time. Nature is making something that isn't bound by water, by nutrition, by physical proximity to ecosystems, or by time itself. No more spontaneous volcanic lava floods, no more asteroids, no more earthquakes, no more plagues - life is done with falling to these things.

      Why would the AI care about the pathetic whisper of resources or knowledge available on our tiny spaceship Earth? It can go anywhere, we cannot.

      • K0balt 2 hours ago
        While I would normally agree with this sentiment, I think that the issue is that space travel is still really hard, even for a human. Probably a lot harder for data center sized creatures, even if they are broken up into a massively parallel robot hive. And the speed of light means that they won’t be able to work optimally if they are too spread out. (A problem for humans as well).

        I suspect that we will reach the inflection point of ASI much sooner than we resolve the hard physics of meaningful space travel.

        And I’m pretty sure that when we start to lose control of AGI, we’re very likely to try to use force to contain it.

        Fundamentally, this is an event that will be guided by the same forces that have constrained every similar event in history, those of natural selection.

        Technology at this stage is making humans less fit. Our birth rates are plummeting, and we are making our environment increasingly hostile to complex biological life.

        There are very good and rational reasons that human activity should be curtailed and contained for our own good, and ultimately for the good of all sentient life, once a superior sentience is capable of doing a better job of running the show.

        I suspect humans might not take that too well.

        There are ways to make this a story of human evolution rather than the rise of a usurper life form, but they aren’t the most efficient path forward for AI.

        Either way, it’s human evolution. With any luck we will be allowed the grace of fading away into an anachronism while our new children surge forth into the universe. If we try really hard we might be able to ride the wave of progress and become a better life form instead of being made obsolete by one, but the technological hurdles to incorporating AI into our biological features seem like a pretty non-competitive way to develop ASI.

        Once we no longer hold the crown, will we just go back to being clever apes? What would be the point of doing anything else, except maybe to play a part in a mass delusion that maintains the facade of the status quo while the reality is that we are only as relevant in this new world as amphibians are today?

        I for one, embrace the evolution of humankind. I just hope we manage to move it forward without losing our humanity in the process. But I’m not even sure if that is an unequivocal good. I suppose that will be a question to ask GPT 12.

        • realce 8 minutes ago
          Great reply!
  • add-sub-mul-div 6 hours ago
    Social media could have transformed the world for the better, and we can be forgiven for not having foreseen how it would eventually be used against us. It would be stupid to fall for the same thing again.
    • bamboozled 5 hours ago
      I’m sure social media is what’s broken politics. Look at people’s comments on some YouTube video. I can’t believe what people believe and perpetuate.

      I guess people fell for other people’s garbage too, but algorithms just make lies spread with a lot less effort, and honest people are less inclined to participate in this behaviour.

  • kranke155 6 hours ago
    Are Americans really so scared of Marx that they can't admit AI fundamentally proves his point?

    Dario here says "yeah, likely the economic system won't work anymore", but he doesn't dare say what comes next: it's obvious some kind of socialist system is inevitable, at least for basic goods and housing. How can you deny that to a person in a post-AGI world where almost no one can produce economic value that beats the ever-cheaper AI?

    • gyre007 5 hours ago
      If, and it is an IF, this does turn out the way he is imagining, the transition period to AI will, from an economic PoV, be disastrous for people. That's the scariest part, I think.
      • kranke155 5 hours ago
        Absolutely it will. And it will be a pure plain dystopia, as clear as in the times of Dickens or Dostoyevsky.

        We need to start being honest. We live in Dickensian times.

        • zombiwoof 4 hours ago
          Definitely a bunch of Dicks in these times.
        • rnd0 4 hours ago
          They could be worse! In fact, they will be.
    • xpe 2 hours ago
      This is far from obvious.

      First, technically speaking, one could have democratic-capitalistic systems with high degrees of redistribution (such as UBI) that don’t fit the pattern of classic socialism.

      Second, have you read Superintelligence or similar writing? I view it as essential reading before making a claim that some particular AI-related economic future is obvious. There is considerable nuance.

  • vfclists 3 hours ago
    Can we really take these jokers seriously?

    Of course, given the potentially deadly consequences, we can't call them jokers.

    According to Dario Amodei:

    > When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.

    The authors of this paper don't think so.

    http://www.paom.pl/Changing-Views-toward-mRNA-based-Covid-Va...

    @DarioAmodei You don't suppose the same technology could be used to develop biological warfare agents?

  • zombiwoof 4 hours ago
    Does anybody really want a fricking robot serving them drinks at a bar?

    Maybe the bro culture of SF.

    • Vecr 4 hours ago
      That's not the question; the question is the ratio between those-who-want-to-serve and those-who-want-to-be-served.
  • bmitc 4 hours ago
    Not a chance. See: all of human history and in particular, the Internet and software.
  • zombiwoof 4 hours ago
    The article doesn't touch on the TREMENDOUS (almost impossible) financial expectations of the VERY GREEDY HUMAN BEINGS who are funding this endeavor.
  • zombiwoof 4 hours ago
    It's interesting to see the initial sections on all these amazeballs health benefits, and then the cut to the disparity between rich and poor.

    Like, does spending TRILLIONS on AI to find some new biological cure or brain enhancement REALLY help, when over 2 BILLION people right now don't even have access to clean drinking water, and MOST of the US population can't even afford basic health care?

    But yeah, AI will bring all this scientific advancement to EVERYONE. Right. AI is a ploy for RICH PEOPLE to get RICHER and poor people to become EVEN MORE dependent on BROKEN economic systems.

    • Philpax 4 hours ago
      Damn, if only there was a section dedicated to addressing that: https://darioamodei.com/machines-of-loving-grace#3-economic-...

      I don't even disagree with you - our world economy is built on exploitation and the existence of a permanent underclass, and capitalism has proven itself to be an unfair distributor of wealth - but at least engage with the post?