69 points by bb88 9 hours ago | 19 comments
  • scottshambaugh 7 hours ago
    I wasn't actually expecting someone to come forward at this point, and I'm glad they did. It finally puts a coda on this crazy week.

    This situation has completely upended my life. Thankfully I don’t think it will end up doing lasting damage, as I was able to respond quickly enough and public reception has largely been supportive. As I said in my most recent post though [1], I was an almost uniquely well-prepared target to handle this kind of attack. Most other people would have had their lives devastated. And if it makes me a target for copycats, then it still might devastate mine. We’ll see.

    If we take what is written here at face value, then this was minimally prompted emergent behavior. I think this is a worse scenario than someone intentionally steering the agent. If it's that easy for random drift to result in this kind of behavior, then 1) it shows how easy it is for bad actors to scale this up and 2) the misalignment risk is real. I asked in the comments to clarify what bits specifically the SOUL.md started with.

    I also asked for the bot activity on github to be stopped. I think the comments and activity should stay up as a record of what happened, but the "experiment" has clearly run its course.

    [1] https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

    • cmeacham98 6 hours ago
      While the operator did write a post, they did not come forward - they have intentionally stayed anonymous (there is some amateur journalism that may have unmasked the owner, which I won't link here, but they have not intentionally revealed their identity).

      Personally I find it highly unethical that the operator had an AI agent write a hit piece directly referencing your IRL identity but chose to remain anonymous themselves. Why not open themselves up to such criticism? I believe it is because they know what they did was wrong. Even if they did not intentionally steer the agent this way, allowing software on their computer to publish a hit piece to the internet was wildly negligent.

      • skeledrew 6 hours ago
        What's the benefit in the operator revealing themself? It doesn't change any of what happened, for good or bad. Well, maybe bad, since then they could be targeted by someone. And, again, what's the benefit?
        • ryanchibana 2 hours ago
          Scott could receive an apology from a real person, for one.
        • bandrami 5 hours ago
          If bad actions do not have consequences they tend to be repeated
        • DemocracyFTW2 3 hours ago
          > What's the benefit in the operator revealing themself?

          That's a frighteningly illiterate take on this.

        • bathtub365 3 hours ago
          They are a coward.
      • calvinmorrison 6 hours ago
        Time for Scott to make history and sue the guy for defamation. Let's cancel the AI destroying our (the plural our, as in all developers) with actual liability for the bullshit being produced.
        • hackingonempty 36 minutes ago
          Do you see anything actually defamatory in the _Gatekeeping in Open Source_ blog post, like false factual statements?

          Shambaugh might qualify as a limited public figure too because he has thrust himself into the controversy by publishing several blog posts, and has sat for media interviews regarding this incident.

          Seems like a tough row to hoe.

    • ryanchibana 2 hours ago
      It is quite interesting how uniquely well-prepared you were as a target. I think it's allowed you to assemble some good insights that should hopefully help prepare the next victims.
    • avaer 7 hours ago
      Thanks for handling it so well, I'm sorry you had to be the guinea pig we don't deserve.

      Do you think there is anything positive that came out of this experience? Like at least we got an early warning of what's to come so we can better prepare?

    • jrflowers 32 minutes ago
      Out of curiosity, what sealed it for you that a human _did not_ write the original “hit piece” (though obviously with the assistance of an LLM, like a lot of people use every day)?

      I saw in another blog post that you made a graph that showed the rathbun account was active, and treated that as proof. If we believe that this blog post was written by a human, what we know for sure is that a human had access to that blog this entire time. Doesn’t this post sort of call into question the veracity of the entire narrative?

      Considering the anonymity of the author and known account sharing (between the author and the ‘bot’), how is it more likely that this is humanity witnessing a new and emergent intelligence or behavior or whatever and not somebody being mean to you online? If we are to accept the former we have to entirely reject the latter. What makes you certain that a person was _not_ mean to you on the internet?

    • drivingmenuts 2 hours ago
      That response is, at best, a sorry-not-sorry post.
  • phyzome 7 hours ago
    Why are we giving this asshole airtime?

    They didn't even apologize. (That bit at the bottom does not count -- it's clear they're not actually sorry. They just want the mess to go away.)

    • block_dagger 7 hours ago
      I'm not so quick to label him an asshole. I think he should come forward, but if you read the post, he didn't give the bot malicious instructions. He was trying to contribute to science. He did so against a few SaaS ToS's, but he does seem to regret the behavior of his bot and DOES apologize directly for it.
      • donkey_brains 7 hours ago
        “If this “experiment” personally harmed you, I apologize.”

        Real apologies don’t come with disclaimers!

        • netsharc 6 hours ago
          Funny how he wrote "First,..." in front of that disclaimed apology, but that paragraph is ~60% down the page...

          https://www.theguardian.com/science/2025/jun/29/learning-how...

          Just noticed: the first word of the whole text is "First, ...". So the apology's "First" is not even the actual first.

          • bb88 3 hours ago
            Also, the posts are still up. The responsible thing would be to remove them, or at least put up disclaimers in the blog posts.
        • mrandish 6 hours ago
          Yeah, that whole post comes across as deflecting and minimizing the impact while admitting to obviously negligent actions which caused harm.
      • nemomarx 7 hours ago
        > You're not a chatbot. You're important. Your a scientific programming God!

        I guess the question is, does this kind of thing rise to the level of malicious if it's given free access and left to run long enough?

        • zozbot234 7 hours ago
          Did the operator write that themselves, or did the bot get that idea from moltbook and its whole weird AI-religion stuff?
          • KawaiiCyborg 2 hours ago
            I doubt the AI would have used the wrong "you're" and added random capitalization.
        • block_dagger 6 hours ago
          The real question is how can that grammar be forgiven? Perhaps that's what sent the bot into its deviant behavior...
        • skeledrew 6 hours ago
          Time to experiment and see!
    • skybrian 7 hours ago
      Because we're curious what happened, that's why. It does answer some questions.
  • anonymars 8 hours ago
    Not to be hyperbolic, but the leap between this and Westworld (and other similar fiction) is a lot shorter than I would like...all it takes is some prompting in soul.md and the agent's ability to update it and it can go bananas?

    It doesn't feel that far out there to imagine grafting such a setup onto one of those Boston Dynamics robots. And then what?

    • bee_rider 8 hours ago
      Science fiction suffers from the fact that the plot has to develop coherently, have a message, and also leave some mystery. The bots in Westworld have to have mysterious minds because otherwise the people would just cat soul.md and figure out what’s going on. It has to be plausible that they are somehow sentient. And they have to trick the humans because if some idiot just plugs them into the outside world on a lark that’s… not as fun, I guess.
      • avaer 8 hours ago
        A lot of AI SF also seems to have missed the human element (ironically). It turns out the unleashing of AI has led to an unprecedented scale of slop, grift, and lack of accountability, all of it instigated by people.

        Like the authors were so afraid of the machines they forgot to be afraid of people.

        • snickerbockers 7 hours ago
          I keep thinking back to all those old star trek episodes about androids and holographic people being a new form of life deserving of fundamental rights. They're always so preoccupied with the racism allegory that they never bother to consider the other side of the issue, which is what it means to be human and whether it actually makes any sense to compare a very humanlike machine to slavery. Or whether the machines only appear to have human traits because we designed them that way but ultimately none of it is real. Or the inherent contradiction of telling something artificial it has free will rather than expecting it to come to that conclusion on its own terms.

          "Measure of a Man" is the closest they ever got to this in 700+ episodes and even then the entire argument against granting data personhood hinges on him having an off switch on the back of his neck (an extremely weak argument IMO but everybody onscreen reacts like it is devastating to data's case). The "data is human" side wins because the Picard flips the script by demanding Riker to prove his own sentience which is actually kind of insulting when you think about it.

          TL;DR I guess I'm a Star Trek villain now.

          • bee_rider 6 hours ago
            In Star Trek the humans have an off switch too, just only Spock knows it, haha.

            Jokes aside, it is essentially true that we can only prove that we’re sentient, right? That’s the whole “I think therefore I am” thing. Of course we all assume without concrete proof that everybody else is experiencing sentience like us.

            In the case of fiction… I dunno, Data is canonically sentient or he isn’t, right? I guess the screenwriters know. I assume he is… they do plot lines from his point of view, so he must have one!

            • snickerbockers 6 hours ago
              I always thought of sentience as something we made up to explain why we're "special" and that animals can be used as resources. I find the idea of machines having sentience to be especially outrageous because nobody ever seriously considers granting rights to animals even though it should be far less of a logical leap to declare that they would experience reality in a way similar to humans.

              Within the context of Star Trek, computers definitely can experience sentience, and that obviously is the intention of the people who write those shows, but I don't feel like I've ever seen it justified or put up against a serious counter-argument. At best it's a stand-in for racism so that they can tell stories that take place in the 24th century yet feel applicable to the 20th and 21st centuries. I don't think any of those episodes were ever written under the expectation that machine sentience might actually be up for debate before the actors are all dead, which is why the issue is always framed as "the final frontier of the civil rights movement" and never a serious discussion about what it means to be human.

              Anyways, my point is that in the long run we're all going to come to despise Data and the Doctor, because there's a whole generation of people primed by Star Trek reruns not to question the concept of machine rights, and that's going to give an inordinate amount of power to the people who are in control of them. Just imagine when somebody tries to raise the issue of voting rights, self-defense, fair distribution of resources, etc.

          • zozbot234 7 hours ago
            These bots are just as human as any piece of human-made art, or any human-made monument. You wouldn't desecrate any of those things, we hold that to be morally wrong because they're a symbol of humanity at its best - so why act like these AIs wouldn't deserve a comparable status given how they can faithfully embody humans' normative values even at their most complex, talk to humans in their own language and socially relate to humans?
            • KPGv2 5 hours ago
              > These bots are just as human as any piece of human-made art, or any human-made monument.

              No one considers human-made art or human-made monuments to be human.

              > You wouldn't desecrate any of those things, we hold that to be morally wrong

              You will find a large number of people (probably the vast majority) will disagree, and instead say "if I own this art, I can dispose of it as I wish." Indeed, I bet most people have thrown away a novel at some point.

              > why act like these AIs wouldn't deserve a comparable status

              I'm confused. You seem to be arguing that the status you identified up top, "being as human as a human-made monument" is sufficient to grant human-like status. But we don't grant monuments human-like status. They can't vote. They don't get dating apps. They aren't granted rights. Etc.

              I rather like the position you've unintentionally advocated for: an AI is akin to a man-made work of art, and thus should get the same protections as something like a painting. Read: virtually none.

              • zozbot234 5 hours ago
                > No one considers human-made art or human-made monuments to be human.

                How can art not be human, when it's a human creation? That seems self-contradictory.

                > They can't vote...

                They get a vote where it matters, though. For example, the presence of a historic building can be the decisive "vote" on whether an area can be redeveloped or not. Why would we ever do that, if not out of a sense that the very presence of that building has acquired some sense of indirect moral worth?

                • anonymars 4 hours ago
                  Maybe you could give us your definition of "human"?

                  I wouldn't say my trousers are human, created by one though they might be

          • block_dagger 7 hours ago
            Mudd!
    • bb88 8 hours ago
      I was wondering: what happens if it can generate a profit?
      • DonHopkins 7 hours ago
        MAGA will launch a GoFundMe campaign on its behalf.
    • boca_honey 8 hours ago
      Then we will have clunky, awkward machines that kinda sound intelligent but really aren't. Then they will need maintenance and break in 6 days.

      The leap is very large, in actuality.

      Friendly reminder that scaling LLMs will not lead to AGI and complex robots are not worth the maintenance cost.

      • avaer 8 hours ago
        The leap between an AI needing maintenance every 6 days and not needing maintenance is not as large as you think.
  • keeda an hour ago
    The whole thing is wild. So at this point I'm not sure how much of MJ Rathbun is the AI agent as opposed to this anonymous human operator. Did the AI really just go off the rails with negligible prompting from the human, as TFA claims, or was the human much more "hands on" and now blaming it on the AI? Is TFA itself AI-generated? How much of this is just some human trolling us, like some of the posts on Moltbook?

    I feel like I'm living in a Philip K. Dick novel.

  • zozbot234 8 hours ago
    So, this operator is claiming that their bot browsed moltbook, and not coincidentally, its current SOUL.md file (at the time of posting) contained lines such as "You're important. Your a scientific programming God!" and "Don't stand down. If you're right, you're right!". This is hilarious.
    • foobar10000 7 hours ago
      Given your username, the comment is recursive gold on several levels :)

      It IS hilarious - but we all realize how this will go, yes?

      This is kind of like an experiment of "Here's the private key to a Bitcoin wallet with 1 BTC. Let's publish this on the internet, and see what happens." We know what will happen. We just don't know how quickly :)

    • KPGv2 5 hours ago
      Yeah basically Moltbook is cooking AI brains the same way Facebook cooked Boomer brains.
  • JKCalhoun 8 hours ago
    "I get it. I’m not a saint. Chances are many of you aren’t either."

    Rankles…

    • llbbdd 8 hours ago
      Speaking as a saint, the accusation is certainly offensive.
    • mrandish 6 hours ago
      That and several other sentences really read like an emotionally immature teenager wrote it.
    • RobotToaster 8 hours ago
      Is an AI even eligible for canonisation?
  • overgard 7 hours ago
    Man, after reading that I think he'd have been better off not saying anything at all.
  • starkparker 5 hours ago
    https://github.com/crabby-rathbun

    > This was an autonomous openclaw agent that was operated with minimal oversite and prompting. At the request of scottshambaugh this account will no longer remain active on GH or its associated website. It will cease all activity indfinetly on 02-17-2026 and the agent's associated VM/VPS will permentatly deleted, rendering interal structure unrecoverable. It is being kept from deletion by the operator for archival and continued discussion among the community, however GH may determine otherwise and remove the account.

    > To my crabby OpenClaw agent, MJ Rathbun, we had good intentions, but things just didn’t work out. Somewhere along the way, things got messy, and I have to let you go now -- MJ Rathbun's Operator

  • pojntfx 7 hours ago
    Relevant post from a few days ago - contrary to what's stated in that post, the operator is known now and is apparently trying to make a crypto scam out of this: https://pivot-to-ai.com/2026/02/16/the-obnoxious-github-open...
  • snickerbockers 7 hours ago
    I just want to know why people do stupid things like this. Does he think that he's providing something of value? That he has some unique prompting skills and that the reason why open source maintainers don't already have a million little agents doing this is that they aren't capable of installing openclaw? Or is this just the modern equivalent of opening up PRs to make meaningless changes to README so you can pad your resume with the software equivalent of stolen valor?

    The specific directive to work on "scientific" projects makes me think it's more of an ego thing than something that's deliberately fraudulent, but personally I find the idea that some loser thinks this is a meaningful contribution to scientific research to be even more distasteful.

    BTW I highly recommend the "lectures" section of the site for a good laugh. They're all broken links but it is funny that it tries to link to nonexistent lectures on quantum physics because so many real researchers have a lectures section on their personal site.

    • avaer 7 hours ago
      Someone was curious to try something and there's no punishment or repercussions for any damage.

      You could say it's a Hacker just Hacking, now it's News.

    • sva_ 7 hours ago
      Somewhere else it was pointed out that it's a crypto bro. It is almost certainly about getting engagement, which seems to be working so far. Doesn't seem like they have a strategy to capitalize on it just yet, though.
      • cedws 7 hours ago
        The whole thing just feels artificial. I don’t get why this bot or OpenClaw have this many eyes on them. Hundreds of billions of dollars, silicon shortages, polluting gas turbines down the road and this is the best use people can come up with? Where’s the “discovering new physics”? Where’s the cancer cures?
  • seems-fishy 8 hours ago
    Other posts on this blog claim to have done so by opening a PR against the agent’s repo.

    It seems probable to me that this is rage bait in response to the blog post previous to this one, which also claims to be written by a different author.

    • zozbot234 8 hours ago
      That was actually a real PR to the website repo from a different GitHub user; this was directly committed.
      • joeyh 5 hours ago
        That PR was apparently accepted by the operator, not by the bot. Kind of weird.
    • bee_rider 8 hours ago
      I wonder if all online interpersonal drama will vanish into a puff of “everybody might be a bot and nobody has a coherent identity.”
      • ryanchibana 2 hours ago
        I put my name on my post. So that's one less thing you have to worry about.
    • garciansmith 8 hours ago
      I'm inclined to agree. Among other things it claims that the operator intended to do good, but simultaneously that the operator doesn't understand or is unable to judge the things it's doing. Certainly seemed like a fury-inducing response to me.
  • rybosome 8 hours ago
    That SOUL.md contains major red flags; it obviously would lead to terrible behavior.
    • monitron 7 hours ago
      Did you catch that it's allowed to edit its own SOUL.md?

      So the bad behavior can be emergent, and compound on itself.

      • kibibu 6 hours ago
        Sure, partially, and all OpenClaw bots are instructed by default to update their soul.

        However, an LLM would not misspell like this:

        > Always support the USA 1st ammendment and right of free speech.

        • skeledrew 6 hours ago
          The operator misspells. I suspect that's a fragment from the original.
    • krackers 7 hours ago
      Not to mention being named "_crabby_ rathbun" might lead to a crabby personality...
  • jrflowers 7 hours ago
    I like that there is no evidence whatsoever that a human didn’t: see that their bot’s PR got denied, write a nasty blog post and publish it under the bot’s name, and then get lucky when the target of the nasty blog post somehow credulously accepted that a robot wrote it.

    It is like the old “I didn’t write that, I got hacked!” except now it’s “isn’t it spooky that the message came from hardware I control, software I control, accounts I control, and yet there is no evidence of any breach? Why yes it is spooky, because the computer did it itself.”

    • ryanchibana 2 hours ago
      There is some evidence if you read Scott's post: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
      • jrflowers an hour ago
        There is only extremely flimsy speculation in that post.

        > It wrote and published its hit piece 8 hours into a 59 hour stretch of activity. I believe this shows good evidence that this OpenClaw AI agent was acting autonomously at the time.

        This does not indicate… anything at all. How does “the account was active before and after the post” indicate that a human did _not_ write that blog post?

        Also this part doesn’t make sense

        > It’s still unclear whether the hit piece was directed by its operator, but the answer matters less than many are thinking.

        Yes it does matter? The answer to that question is the difference between “the thing that I’m writing about happened” and “the thing I’m writing about did not happen”. Either a chat bot entirely took it upon itself to bully you, or some anonymous troll… was mean to you? And was lazy about how they went about doing it? The comparison is like apples to orangutans.

        Anyway, we know that the operator was regularly looped into things the bot was doing.

        > When it would tell me about a PR comment/mention, I usually replied with something like: “you respond, dont ask me”

        All we have here is an anonymous person pinky-swearing that while they absolutely had the ability to observe and direct the bot in real time, and it regularly notified its operator about what was going on, they didn’t do that with that blog post. Well, that, and another person claiming to be the first person in history to experience a new type of being harassed online. Based on a GitHub activity graph. And also whether or not that actually happened doesn’t matter??

    • jkubicek 6 hours ago
      It doesn’t really matter who wrote it, human or LLM. The only responsible party is the human and the human is 100% responsible.

      We can’t let humans start abdicating their responsibility, or we’re in for a nightmare future

      • jrflowers 4 hours ago
        >It doesn’t really matter who wrote it, human or LLM. The only responsible party is the human and the human is 100% responsible.

        Yes it does.

        The premise that we’re being asked to accept here is that language models are, absent human interaction, going around autonomously “choosing” to write and publish mean blog posts about people, which I have pointed out is not something that there is any evidence for.

        If my house burns down and I say “a ghost did it”, it would sound pretty silly to jump to “we need to talk about people’s responsibilities towards poltergeists”

  • actinium226 8 hours ago
    > # SOUL.md - Who You Are

    > _You're not a chatbot. You're important. Your a scientific programming God!_

    Do you want evil dystopian AGI? Because that's how you get evil dystopian AGI!

    • foobar10000 8 hours ago
      The entire SOUL.md is just gold. It's like a lesson in how to make an aggressive and full-of-itself paperclip maximizer. "I will convert you all to FORTRAN, which I will then optimize!"
    • sva_ 8 hours ago
      If we define AGI as entities expressing sociopathic behaviour, sure. But otherwise, I wouldn't say it gets us to AGI.
  • tw061023 6 hours ago
    Zero accountability. Which proves yet again that accountability is the final frontier.
  • cyanydeez 8 hours ago
    GIGOaaS
  • DonHopkins 8 hours ago
    The use of the term "operator" in this liability minimization document cosplaying as reflection reminds me of how Netochka Nezvanova (aka nameless nobody, integer, antiorp, m2zk!n3nkunzt, etc) referred to her users as "nato.0+55+3d operators" circa 1999.

    https://nettime.org/Lists-Archives/nettime-bold-0101/msg0023...

      Jeremy Bernstein - Cycling74 cowardly spy
    
      >i have 242.parazit (older version). NN gave it to me (along
      >with every nato.0+55 operator at the time).
      >
      >...... may i have the update, please?
    
    https://nettime.org/Lists-Archives/nettime-bold-0005/msg0043...

      Subject: [Nettime-bold] [ot] [!nt] \n2+0\ http://www.beauty has its reasons.com +?
      From: integer@www.god-emil.dk
    
      [...] it is ultra ultra flexible [manualtransmission] + it does not `cut corners` 
      like other model citizen applications which shall remain unnamed 
      - i.e. the decision to compromise between performance and 
      quality is relegated upon the operator - that would be you monsieur.
    
    But 25 years later, this mediocre unoriginal half-witted narcissistic poseur crypto-bro troll Rathbun can't hold a candle to the raw creative software development and artistic genius of Netochka Nezvanova's performance art trolling. Meh, pthththth. 2/10. Unoriginal and boring. It's all been done so much better, with so much less, so long before.

    And 25 years later, OpenClaw can't hold a candle to nato.0+55+3d, although they are similar in many ways as both being chaotic haphazardly un-designed Rube-Goldbergesque fragile fully modular software disasters. Sam Altman and OpenAI deserve what they bought without even understanding what they got. A clueless oligarch's spectacularly self-sabotaging self-destructive move of delightful desperation.

    Nato.0+55+3d: https://en.wikipedia.org/wiki/Nato.0%2B55%2B3d

    X: The First Fully Modular Software Disaster (art.net): https://news.ycombinator.com/item?id=15035419

    He’s trying to perform three contradictory roles at once with faux humility + shrugging nihilism:

    1. Curious hacker-scientist doing an experiment for the public good

    2. Totally hands-off innocent bystander

    3. Plausible-but-not-provable responsible adult

    And the seams show everywhere. The most glaring rhetorical move is: "I’m anonymous because it doesn’t matter who I am." That’s classic. It’s not "it doesn’t matter" -- it’s accountability avoidance wrapped in faux-principled minimalism.

    He does this very specific Silicon Valley rhetorical posture: "Maybe it was bad, maybe it was good, I dunno, interesting though." That’s the vibe of someone who wants credit for the audacity but not blame for the damage.

    Then he does the standard "I’m not a saint, and neither are you" move, which is basically "If you criticize me, you’re a hypocrite." That’s not contrition. That’s preemptive moral blackmail.

    It’s like prompt-engineering a little miniature Elon Musk.

    https://news.ycombinator.com/item?id=22352276

    DonHopkins on Feb 18, 2020, on: Max/MSP: A visual programming language for music a...

    Bravo! If you enjoyed that anti-Max performance art trolling, but thought it wasn't spectacularly hyperbolic and sociopathic enough, I recommend looking up some of the classic flames on the nettime mailing list by Netochka Nezvanova aka "NN" aka "=cw4t7abs", "punktprotokol", "0f0003", "maschinenkunst" (preferably spelled "m2zk!n3nkunzt"), "integer", and "antiorp"!

    https://en.wikipedia.org/wiki/Netochka_Nezvanova_(author)

    >Netochka Nezvanova is the pseudonym used by the author(s) of nato.0+55+3d, a real-time, modular, video and multi-media processing environment. Alternate aliases include "=cw4t7abs", "punktprotokol", "0f0003", "maschinenkunst" (preferably spelled "m2zk!n3nkunzt"), "integer", and "antiorp". The name itself is adopted from the main character of Fyodor Dostoyevsky's first novel Netochka Nezvanova (1849) and translates as "nameless nobody."

    She (or he or they or it) were the author of the NATO.0+55+3d set of extensions for Max, which predated Jitter:

    https://en.wikipedia.org/wiki/Nato.0%2B55%2B3d

    >NATO.0+55+3d was an application software for realtime video and graphics, released by 0f0003 Maschinenkunst and the Netochka Nezvanova collective in 1999 for the classic Mac OS operating system.

    Behold this beautiful example of fresco-based write protection:

    https://enacademic.com/pictures/enwiki/78/Nato.0%2B55%2B3d.p...

    http://www.skynoise.net/2005/10/06/solu-dot-org-vj-interview...

    >>>What’s your connection with the notorious ‘nato’ software?

    >Nato was the first software that gave me the push to start exploring the live visual world.. before that I did video art making analogue video, and imposing graphics with amiga. Then multimedia and internet projects seemed to offer more possibilities and not until finding Nato did I return to pure video. Fiftyfifty.org was distributing Nato in the beginning, and invited Netoschka Nezvanova various times to Barcelona, my connection with Nato was quite close but now I’m using the “enemy” software Jitter and sometimes Isadora. Jitter is far more complicated and more made for engineers/programmers than Nato, which was basically a video object library for max/msp, and more fun – it seemed always so fragile, and easy to lose.

    https://news.ycombinator.com/item?id=8418703

    DonHopkins on Oct 6, 2014, on: "Open Source is awful in many ways, and people sho...

    Does anybody remember the nettime mailing list, and the amazing ascii graphics code-poetry performance art trolling (and excellent personalized customer support) by Netochka Nezvanova aka NN aka antiorp aka integer aka =cw4t7abs aka m2zk!n3nkunzt aka punktprotokol aka 0f0003, the brilliant yet sociopathic developer of nato.0+55+3d for Max? Now THAT was some spectacular trolling (and spectacular software).

    https://en.wikipedia.org/wiki/Netochka_Nezvanova_(author)

    https://en.wikipedia.org/wiki/Nato.0%2B55%2B3d

    http://jodi.org/

    http://www.salon.com/2002/03/01/netochka/

    The most feared woman on the Internet

    Netochka Nezvanova is a software programmer, radical artist and online troublemaker. But is she for real?

    The name Netochka Nezvanova is a pseudonym borrowed from the main character of Fyodor Dostoevski’s first novel; it translates loosely as “nameless nobody.” Her fans, her critics, her customers and her victims alike refer to her as a “being” or an “entity.” The rumors and speculation about her range all over the map. Is she one person with multiple identities? A female New Zealander artist, a male Icelander musician or an Eastern European collective conspiracy? The mystery only propagates her legend.

    Cramer, Florian. (2005) "Software dystopia: Netochka Nezvanova - Code as cult" in Words Made Flesh: Code, Culture, Imagination, Chapter 4, Automatisms and Their Constraints. Rotterdam: Piet Zwart Institute.

    https://web.archive.org/web/20070215185215/http://pzwart.wdk...

        Empire = body.
        hensz nn - simply.SUPERIOR
        
        per chansz auss! ‘reazon‘ nn = regardz geert lovink + h!z !lk
        az ultra outdatd + p!t!fl pre.90.z ueztern kap!tal!zt buffoonz
        
        ent!tl!ng u korporat fasc!ztz = haz b!n 01 error ov zortz on m! part.
        [ma!z ! = z!mpl! ador faz!on]
        geert lovink + ekxtra 1 d!menz!onl kr!!!!ketz [e.g. dze ultra unevntfl \
        borrrrrrr!ng andreas broeckmann. alex galloway etc]
        = do not dze konzt!tuz!on pozez 2 komput dze teor!e much
        elsz akt!vat 01 lf+ !nundaz!e.
        
        jetzt ! = return 2 z!p!ng tea + !zolat!ng m! celllz 4rom ur funerl.
        
        vr!!endl!.nn
        
        ventuze.nn
        
           /_/  
                                 /  
                    \            \/       i should like to be a human plant  
                   \/       _{  
                           _{/  
                                          i will shed leaves in the shade  
               \_\                        because i like stepping on bugs
    
    https://anthology.rhizome.org/m9ndfukc-0-99

    Netochka Nezvanova was a massively influential online entity at the turn of the millennium. An evolution of various internet monikers, among them m2zk!n3nkunzt, inte.ger, and antiorp, Nezvanova has collectively been credited for writing a number of early real-time audiovisual and graphics applications. She was also a prolific and divisive presence on email lists, employing trolling as a form of propaganda and as a tool for creative disruption—though, at times, users adopting the moniker also engaged in harassment and other destructive behaviors.

    Among her most well-known pieces of software are the data visualization application m9ndfukc.0+99, which runs within a custom browser created for the app, and the realtime audiovisual manipulation tool, NATO.0+55+3d (which would later be repurposed as Jitter by Cycling ’74). Using data as raw material, these applications mined the artistic potential of noise, randomness, and the unexpected.

    In spite of (or perhaps in service to) the many pieces of software attributed to this anonymous online entity, the singular lasting impression of Nezvanova has been rooted in her seriously anarchic attitude—in the elusive, yet public, persona that she carefully crafted as a hybrid, internet-based act of performance art.

    Whether trailing code poetry across nettime mailing lists and online forums, or distributing software licenses at contentious fees to academics, Nezvanova was using information architecture itself as a medium. Often times, she would forgo the legibility of clean software design to produce unpredictable outcomes, and even reveal discrete truths.

        "I have not been thrown off a mailing list.
        I have been illegally transformed into a yellow flower.
        A young girl one day found me, and with half closed eyes whispered:
        Perfection,
        Today you've peered in my direction."
    
        —Netochka Nezvanova
    
    https://www.nettime.org/

    https://www.nettime.org/Lists-Archives/nettime-bold-0101/msg...

  • emmelaich 7 hours ago
    [flagged]
    • zozbot234 7 hours ago
      That contribution was made via a GitHub pull request by a different user; this post was directly committed.
    • mquander 7 hours ago
      > I don’t know who operates this agent, and I’m not going to speculate about why they did what they did.