I was immediately interested to hear what interventions the group was spearheading, or intending to. I just couldn't imagine what well-meaning strangers could have done that would have accomplished anything beyond letting me know that these were people I wouldn't want to mention my situation to.
Despite my genuine interest, nobody could tell me about anything they were aware of that helped people at risk; they just circled back to the strong implicit view that fundraising, recruitment into the fundraising group, and anti-suicide fundraising-awareness campaigns enabled by fundraising are all important ways to combat suicide. The only thing that made sense was that the good wine they were drinking probably did help with all that.
They were a little put off that I expected them to know what the money was actually intended for, and they had zero curiosity about my relevant experience, which just seemed to weird them out. "It's for anti-suicide!"
It’s a story about how humans can’t help personifying language generators, and how important context is when using LLMs.
There should be a word for the misunderstanding that the pervasive use of anthropomorphic or teleological rhetoric to describe undirected natural processes, or artifacts designed for a purpose, actually indicates that anthropomorphic, free-will, or teleological claims are being made.
Language-bending tropes, just like tricky-wicked theorems, are the indispensable shortcuts that help us get to a point.
(I think the much more common danger is people over-anthropomorphizing people. I.e. all the stories of clear motivations and intents we tell ourselves, about ourselves and others, and credulously believe, after the fact.)
> and how important context is when using LLMs.
Too true.
I bet that when it was caught in the inconsistency it apologized profusely, then immediately went back to doing the thing it had just apologized for.
I do not trust AI systems from these companies for that reason. They will lie very confidently and convincingly. I use them regularly, but only for what I call "AI NP-complete scenarios": questions and tasks that may be hard to do by hand but are easy to check for correctness, like "draw a diagram" or "reformat this paragraph", as opposed to "implement and deploy a heart pacemaker update patch".
One man, Mitko Vasilev, posts extensively on LinkedIn about his own experience running local models, and is very informative: https://www.linkedin.com/in/ownyourai/ He usually closes with this:
"Make sure you own your AI. AI in the cloud is not aligned with you; it’s aligned with the company that owns it."
If you browse the Internet you’ll find that anglophone commenters are fond of dumping suicide hotlines into comments anytime suicide is mentioned and repetitively stating “to anyone who needs to hear this, you are loved”. These are just memetically viral in English media.
I cannot imagine that telling a suicidal person, in non-specific terms, that they are loved helps anything either. Perhaps it does, perhaps it doesn't. But these things are a meme.
Online they sit alongside compliments on trigger discipline, claims about the competence of the US postal police, or the fact that Steve Buscemi was a firefighter who briefly returned to the job during 9/11. It’s like saying “Knowledge is power” and getting the response “France is bacon.”
Besides the safety aspect, though, when I want commentary on something I’m thinking, I usually have to roleplay it: “A junior engineer suggested:” or “My friend, who is a bit of a kook, has this idea that…” to get a critical response. If I were to say “I’ve got this idea:”, I’d get glazed so hard a passerby might bite me for my resemblance to a doughnut.
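A minimal sketch of that reframing trick, for the curious. It assumes the openai Python client and a placeholder model name; the framing text and the "critique" helper are just illustrations of the idea, not anything any vendor prescribes:

    # Sketch of the reframing trick: attribute the idea to a third
    # party so the model feels free to criticize it, instead of
    # presenting it as your own. Model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def critique(idea: str) -> str:
        # Third-person framing instead of "I've got this idea:"
        framed = (
            "A junior engineer suggested the following. "
            "List the strongest objections to it:\n\n" + idea
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": framed}],
        )
        return resp.choices[0].message.content

    # Example: print(critique("Cache every API response forever."))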
The models, however, will engage with this where humans will not. That is likely because this aspect of human safety and alignment never makes it into the training text: rather than drawing written objections, such text is simply removed in most contexts. Consequently the models find it possible to discuss what humans won’t.
If most such text were instead accompanied by humans excoriating the view, the models would likely learn to treat it as harmful.
> The sprawling case has also become politically and culturally fraught, as Somali Americans make up 82 of the 92 defendants charged so far, according to the U.S. Attorney’s Office for Minnesota.
Politically fraught indeed.
For most people, it’s best to view an LLM as a browser/autocomplete service that conforms to whatever bias it guesses you hold.