57 points by caisah 7 hours ago | 20 comments
  • Maxatar 7 hours ago
    The article immediately starts off with such a glaring contradiction that it makes it very hard to correctly interpret the remainder of it.

    You can't say that something can never be ethical/safe on the one hand, and then on the other hand say that being ethical/safe depends on context/intent. Those two statements contradict each other.

    Either AI can be safe and ethical in the right context with the appropriate intent which contradicts the title, or it can't be safe/ethical regardless of intent/context, in which case the title is correct but the reasoning is incorrect.

    There is no consistent way to interpret the remainder of the article with such a glaring and obvious inconsistency.

    • 16bitvoid 7 hours ago
      I think they're arguing against Anthropic et al. claiming their models are "ethical" and "safe". The point is that a model can't be ethical or safe absolutely, in all circumstances, because even seemingly benign information can be used to cause harm; actually making an ethical and safe choice about whether to provide information requires knowing the user's intent.

      When Anthropic et al. say that their AI is ethical and safe, they are saying so in absolute terms, same as the title. Just one instance of unethical or unsafe behavior is enough to prove that it's not ethical or safe.

      No one would say a knife or a gun is safe, because we're all aware of the harm it could cause; thus it requires care and diligence in use. The term "ethical" doesn't apply in this analogy, because an inanimate object cannot act, but an LLM can.

    • evnp 7 hours ago
      The point is that safety depends on context and intent being known - with unknown context or intent, dangerous situations will appear _some_ of the time, thus the system as a whole can "never" be fully safe.

      There is no contradiction here.

    • chaos_emergent 7 hours ago
      Yeah, I hate the title; it verges on clickbait because it implies the author is asserting that AI has a moral stance in the first place, versus AI being morally neutral and driven by its wielder.
    • fwip 7 hours ago
      I think without reading the final line, you might get the wrong impression.

      > It doesn’t make those frameworks worthless. It makes them incomplete by design—and it means, again, that AI will never be entirely ethical or safe.

      Lots of people in this thread are reading the headline and making the same comparisons that the author does - "Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks."

      The article isn't saying "AI will never be ethical and safe, and it is unique in that way," it is saying "and so it is similar to these other things." If anything, it is critiquing the claims made by corporate AI that they can successfully make AI both useful and totally safe.

    • madibo3156 7 hours ago
      [dead]
  • dec0dedab0de 7 hours ago
    This article is nothing, but the title is probably right: at least if you consider it unethical to source training data without informed consent, and if you consider generating code inherently unsafe. Of course, you have to have a very narrow definition of AI for even that to be true.
    • undecisive 6 hours ago
      I think that's where most people thought that this article was going.

      Shall we just have that debate anyway? :D

      The big question that I hoped the article might address: Can AI ever be ethical (within the norms of what the average Jo(e) considers ethical), or have we forever poisoned the well?

      If the technology and mathematical underpinnings have been created on fundamentally immoral grounds (IP theft, energy / water excesses, etc) what would we have to do to produce an entirely - or even mostly - ethical AI stack?

      Is it even possible, given the dependencies on (Lithium / Israel / fossil fuels / conflict mining / capitalistic exploitation / any other morally questionable underpinning you might think of) to re-do the work to such a point that we could "black box" our way to decently functioning LLMs?

      Assuming that comes with a caveat of rolling back the technological progress, how far back do we have to go? It feels like the bronze age is a step too far, at least on the basis of my "average Jo(e)" test above - but what is considered reasonable?

      Then - and only then - would it make sense to ask how to make the content generation itself ethical.

      It feels like the Nazi medical science issue all over again, except nobody really cares as much about this one. But socially, it feels like an anti-capitalistic uprising is on the horizon, so maybe if that happens, a moral aversion to the state of AI might piggyback onto it?

      Not that I want it to. Quite like AI really. Feels like the background immorality radiation of the earth is quite high anyway; maybe AI isn't the thing to ruffle our feathers about. But it's certainly an interesting thing to mull as we weep over our non-GM oat-milk babyccinos, pitying the state of the world.

      (I'm really an upbeat person, honest...)

  • marshray 6 hours ago
    This argument is so bad that I have to wonder if it's an intentional strawman. (I don't think it deserves to be flagged, however.)

    It leads with "AI Will Never Be Ethical or Safe".

    The first sentence is "AI will never be *entirely* ethical or safe."

    It concludes with "AI is a tool, and it can be used in ethical and unethical, safe and unsafe ways" and compares them to "hardware store clerks".

    Hardware stores are *specifically* places where society has had a centuries-long conversation about risk, and the products on sale represent a very intentional set of choices. In some parts of the US, hardware stores used to sell dynamite; they don't anymore. That's the 'social contract' functioning in daily life.

    "AI is like a tool one might buy from the hardware store" is, in most people's minds, the opposite of the opening premise.

  • ckastner 7 hours ago
    > The reason is this:

    > Both ethical and safe conduct depend on context and intent.

    The same applies to knives, and they can be plenty useful, and used in a safe manner.

    • tombert 7 hours ago
      I suppose the argument could be made that knives are inherently unsafe, and that no matter what it is important to always treat them as unsafe. This doesn't imply that you shouldn't use knives, just that you should be aware of their inherent danger?

      I don't know, I didn't really agree with the post, I'm trying my best to steel man it.

      • marshray 6 hours ago
        "AI will never be entirely ethical or safe because it's like having a knife, a gun, a hardware store, and a medical doctor, all in one convenient interface."
  • dzink 7 hours ago
    Water can never be safe. Water in large quantities can drown anyone. When mixed with the wrong things it can produce dangerous chemical reactions. Water safety depends on context and intent.

    So if we consider AI a chemical substance - if supplied with limited context in tools with specific intent, can it be useful beyond the tools available at this moment?

    You can't trust just any liquid that looks like water, just as you can't trust just any model, or especially any inference provider (they can switch models to save money, or mess with other key parameters, or insert ads). You have to test your water supply and your AI supply regularly. And benchmark new sources. We’ll see labeling and quality guarantees from future suppliers. We’ll see personal models and model families trained and refined as brands for reliability. Bottled neatly for you by certified suppliers.

    In the meantime, we all just found ourselves out of the desert, splashing around in this funky stuff that we now find on the ground and falling for free from the clouds.

    • josefritzishere 6 hours ago
      That's a bit of a polemical argument. Water is required to live. AI is a word-guessing machine we often use as a fun toy.
  • happytoexplain 6 hours ago
    I don't think the writeup is very good, but the thesis is not being engaged with honestly in these comments.

    Knives, books, water, calculators, encyclopedias, search engines: Just a few of the analogies being made with barely a word beyond "it's like X". In fact, the opposite: Demanding that other people make arguments that AI is not like X.

    Analogies are almost always just a pithy, empty distraction. They are the fodder of low-quality internet conversations. It should be obvious why an analogy is so often reached for - if an argument about X can't be supported on its own, it's easy to point to another thing, Y, with some similarity, but which more easily fits the argument in other ways, and... just assert that they're the same.

    Here's a dumb analogy: Yes, "it's just a tool." So is C4.

    • undecisive 5 hours ago
      Analogies are not the problem. In fact, an analogy is like a good knife: sharp, removes problematic parts, and totally unethical unless it knows the motivations of its wielder.

      Seriously though, yes it is obvious why analogies are so often used, but I think you have it the wrong way round. They are a form of proof by negation; you don't have to find a thing exactly like the subject of the argument.

      It's a way of fighting against bad arguments. If I say China is bad because of X, Y, and Z, and also their flag is red, so they must be evil! If you then tell me that this argument could also be applied to the Red Cross/Crescent, you have negated my argument by analogy. You don't have to negate every argument I made, but at least then we can treat X, Y, and Z on their own.

      The problem with this writeup is, there really are no other powerful arguments in it.

      And I'm pretty sure C4 is great for controlled demolition of highly dangerous buildings. Or do you want adventurous people to hurt themselves?

  • gmuslera 6 hours ago
    Never is a long time. And humans unaware of intent or context can also make unethical decisions, even if we assume an absolute and eternal ethical framework.

    Asimov's robot stories (with their magical three/four rules) had examples of situations where bad things happened even when everyone was being "ethical". And in the Black Mirror episode "Men Against Fire", humans were the ones making unethical decisions based on a fake context (and reality is much worse than fiction, as we've seen in recent months).

    Taking out the absolutes, I would say that today's LLMs lack context, critical thinking, and a lot more that makes them unethical and unsafe. But something future that could also be labeled AI might have some of those problems mitigated, maybe making better/safer decisions than humans in general.

  • bnjmn 7 hours ago
    "Context and intent cannot be known" seems like a bit of an overstatement? A qualifying clause like "in all cases, with complete confidence" would allow for the possibility of alignment in some cases (yay), but not always, and of course it's that "not always" that's problematic when you're trying to make blanket safety guarantees.

    Here's a version I imagine both the author and I can nod along with: "Context and intent cannot be known at model training time, so most attempts to enforce safety or ethics guardrails purely through the weights of the model, fine-tuning, or other training-time interventions are doomed to guarantee very little at inference time."

  • undecisive 7 hours ago
    This article in a nutshell: AI will never be ethical or safe, because no tool can ever be ethical or safe, without it knowing the complete motivation of any person using it and every person who might receive its outputs.

    Wasn't the article I was expecting! Not sure it helps much, except maybe if you wanted to muddy the water of ethics-and-AI discussions.

  • daft_pink 7 hours ago
    AI is just a way to search information and to program and control computers faster, with natural-language understanding, etc.

    I’m not sure why people are attributing so much to it. It just allows a single person to do a lot more units of work, the same way that a computer allowed a single person to do a lot more units of work.

  • Rohinator 7 hours ago
    Would AI be safer or more ethical if it required malicious users to lie about their intent first?

    "Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks."

    Exactly. Are hardware store clerks unethical as well?

  • toenail 7 hours ago
    > Both ethical and safe conduct depend on context and intent.

    That entire line of reasoning is absurd. You can get information from books, they don't know context and intent either. Books will never be ethical or safe.

  • amelius 7 hours ago
    Can an encyclopaedia be ethical or safe?

    Can a search engine be ethical or safe?

    Can an AI be ethical or safe?

    If you answer differently for one or more of these questions, then you'll have to say why and where you draw the line.

    • sunir 7 hours ago
      One is a cybernetic system. It has sensors, a controller, a decision system, goals, and actuators. Arguably it's alive, but I think the definition of cybernetics is sufficient because it's objective.
    • happytoexplain 6 hours ago
      Over the past few years, especially in places like HN, many people have made many arguments that AI is different in this or that relevant way. It's perfectly reasonable to disagree with them, but the implication of this snarky comment is that nobody is making these arguments in the first place.
    • plutokras 7 hours ago
      I do hope the rare earth metal in my calculator is also ethically sourced.
  • ctoth 7 hours ago
    So from the exact same article:

    Doctors Will Never Be Ethical or Safe

    Hardware Stores Will Never Be Ethical or Safe.

    Okay?

  • akagusu 7 hours ago
    AI will never be ethical, because using copyrighted material to train it without proper payment is not only unethical but illegal.

    Unfortunately, law enforcement has decided that copyright law only applies to regular citizens like me, and not to the billionaire owners of AI companies.

  • rvz 7 hours ago
    There's no such thing as "ethics in AI" in a company when there are billions of dollars of investor money on the table.

    "Safety" was just a smokescreen and the perfect scare tactic for tricking governments into turning even more tyrannical and imposing extreme surveillance on everyone, which benefits tech corporations, data brokers, and AI companies.

  • lutusp 6 hours ago
    Wait a sec ...

    > The problem AI inherits from us is that context and intent cannot be known.

    > Both can be omitted or lied about.

    This implies that neither we nor our creations can ever be ethical or safe. It follows logically that no entity can ever meet that standard. Therefore focusing on AI is arbitrary -- the focus might as well have been pit vipers or platypuses.

    And the article misses the point that an AI engine can be forced to imitate ethical behavior, because it has no civil rights or behavioral latitude (yet). Granted that would only be an imitation of ethical behavior, but then, so is ours.

  • jubilanti 7 hours ago
    [dead]
  • superkuh 7 hours ago
    These kinds of write-ups all have an implicit premise that goes unstated: they're talking about corporate AI run by corporations. They're not actually talking about the technology. Corporate AI will never be ethical or safe because corporate persons have different motivations and profit incentives driving them than human persons do. And most of the time those motivations are quite nasty when viewed through the lens of human ethics.

    It reminds me of the parable of the blind monks each feeling a different part of the elephant and arguing about its shape. They're each not wrong, but they're also each only talking about a limited subset of the elephant (AI).

    Cory Doctorow is much more eloquent in his explanation of this important distinction in his reverse-centaur metaphor.