408 points by hugodan | 10 hours ago | 62 comments
  • bastard_op8 hours ago
    I've been doing something a lot like this: a claude-desktop instance attached to my personal MCP server spawns claude-code worker nodes for things, and for a month or two now it's been working great, with the main desktop chat acting as a project manager of sorts. I even started paying for the Max plan, as I've been using it effectively to write software now (I am NOT a developer).

    Lately it's gotten entirely flaky, where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow like the author of this article did.

    Now even worse: Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait for them - they'll contact you via email. That email never comes, even after several attempts.

    I'm assuming at this point any real support is all smoke and mirrors, meaning I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.

    I love Claude as it's an amazing tool, but when it starts to implode on itself to the point that you actually require some real support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".

    • throwup2388 hours ago
      Anthropic has been flying by the seat of their pants for a while now and it shows across the board. From the terminal flashing bug that’s been around for months to the lack of support to instabilities in Claude mobile and Code for the web (I get 10-20% message failure rates on the former and 5-10% on CC for web).

      They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.

      • laserDinosaur5 hours ago
        The Pro plan quota seems to be getting worse. I can get maybe 20-30 minutes of work done before I hit my 4-hour quota. I found myself using it more just for the planning phase, to get a little more time out of it, but yesterday I managed to ask it ONE question in plan mode (from a fresh quota window), and while it was thinking it ran out of quota. I'm assuming it probably pulled in a ton of references from my project automatically and blew out the token count. I find I get good answers from it when it does work, but it's getting very annoying to use.

        (On the flip side, Codex seems SO efficient with tokens that it can be hard to understand its answers sometimes; it rarely includes files without you adding them manually, and it often takes quite a few attempts to get the right answer because it's so strict about what it does each iteration. But I never run out of quota!)

        • stareatgoats5 hours ago
          Claude Code allegedly auto-includes the currently active file and often all visible tabs and sometimes neighboring files it thinks are 'related' - on every prompt.

          The advice I got when scouring the internets was primarily to close everything except the file you’re editing and maybe one reference file (before asking Claude anything). For added effect add something like 'Only use the currently open file. Do not read or reference any other files' to the prompt.

          I don't have any hard facts to back this up, but I'm sure going to try it myself tomorrow (when my weekly cap is lifted ...).

          • sigseg1van hour ago
            What does "all visible tabs" mean in the context of Claude Code in a terminal window? Are you saying it's reading other terminals open on the system? Also how do you determine "currently active file"? It just greps files as needed.
        • aanet4 hours ago
          ^ THIS

          I've run out of quota on my Pro plan so many times in the past 2-3 weeks. This seems to be a recent occurrence. And I'm not even that active: just one project, executed in Plan > Develop > Test mode, just one terminal. That's it. And I keep hitting the quota and waiting for a reset every few hours.

          What's happening @Anthropic ?? Anybody here who can answer??

          • alexk6an hour ago
            [BUG] Instantly hitting usage limits with Max subscription: https://github.com/anthropics/claude-code/issues/16157

            It's the most commented issue on their GitHub and it's basically ignored by Anthropic. Title mentions Max, but commenters report it for other plans too.

            • czkan hour ago
              “After creating a new account, I can confirm the quota drains 2.5x–3x slower. So basically Max (5x) on an older accounts is almost like Pro on a new one in terms of quota. Pretty blatant rug pull tbh.”

              lol

          • MillionOClock2 hours ago
            I very recently (~1 week ago) subscribed to the Pro plan and was indeed surprised by how fast I reached my quota compared to, say, Codex at a similar subscription tier. The UX is generally really cool with Claude Code, which left me with a bit of a bittersweet feeling of not being able to truly explore all the possibilities: after just basic planning and code changes I am already out of quota for experimenting with various ways of using subagents, testing background stuff, etc.
            • 0x500x7917 minutes ago
              I use opencode with codex after all the shenanigans from anthropic recently. You might want to give that a shot!
          • heavyset_go2 hours ago
            Like a good dealer, they gave you a cheap/free hit and now you want more. This time you're gonna have to pay.
          • vbezhenar2 hours ago
            This whole API-vs-plan split looks weird to me. Why not force everyone to use the API? You pay for what you use; it's very simple. The API should be the most honest way to monetize, right?

            This fixed subscription plan with hardly-specified quotas looks like they want to extract extra money from the users who pay $200 and don't use that value, while at the same time preventing other users from going over $200. I understand that it might work at scale, but it just feels a bit unfair to everyone?

            • rootusrootus13 minutes ago
              You're welcome to use the API; it asks you to do that when you run out of quota on your Pro plan. The next thing you find out is how expensive using the API is. More honest, perhaps, but you definitely will be paying for it.
          • bmurphy19762 hours ago
            I've been hitting the limit a lot lately as well. The worst part is that when I try to compact things and check my limits using the / commands, I can't make heads or tails of how much I actually have left. It's not clear at all.

            I've been using CC until I run out of credits and then switch to Cursor (my employer pays for both). I prefer Claude but I never hit any limits in Cursor.

          • genewitch4 hours ago
            sounds like the "thinking tokens" are a mechanism to extract more money from users?
            • arthurcolle2 hours ago
              Their system prompt + MCP is more of the culprit here. 16 tools, sophisticated parameters, you're looking at 24K tokens minimum
            • vunderba3 hours ago
              Anecdotal, but it definitely feels like in the last couple of weeks CC tends to be more aggressive at pulling in significantly larger chunks of an existing code base - even for some simple queries I'll see it easily ramp up to 50-60k tokens of usage.
              • genewitch3 hours ago
                I'm curious if anyone has logged the number of thinking tokens over time. My implication was the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

                They get to see (if you haven't opted out) your context, ideas, source code, etc., and in return you give them $220 and they give you back "out of tokens".

                • throwup2382 hours ago
                  > My implication was the "thinking/reasoning" modes are a way for LLM providers to put their thumb on the scale for how much the service costs.

                  It's also a way to improve performance on the things their customers care about. I'm not paying Anthropic more than I do for car insurance every month because I want to pinch ~~pennies~~ tokens, I do it because I can finally offload a ton of tedious work on Opus 4.5 without hand holding it and reviewing every line.

                  The subscription is already such a great value over paying by the token, they've got plenty of space to find the right balance.

            • mystraline3 hours ago
              It's the clanker version of the "Check Wallet Light" (check engine light).
          • fragmede3 hours ago
            How quickly do you also hit compaction when running? Also, if you open a new CC instance and run /context, what percentage does it show for tools/memories/skills? And that's before we look at what you're actually doing. CC will add whatever context to each prompt it thinks is necessary. So if you've got a few large files (vs a large number of smaller files), at some level that'll contribute to the problem as well.

            Quota's basically a count of tokens, so if a new CC session starts with the context relatively full, that could explain what's going on. Also, what language is this project in? If it's something noisy that uses up many tokens fast, then even if you're using agents to preserve the context window in the main CC, those tokens still count against your quota, so you'd still be hitting it awkwardly fast.

        • ChicagoDavean hour ago
          I never run out of this mysterious quota thing. I close Claude Code at 10% context and restart.

          I work for hours and it never says anything. No clue why you’re hitting this.

          $230 pro max.

          • croes23 minutes ago
            Pro is 20x less than Max
      • sixtyj7 hours ago
        They whistleblowed themselves that Claude Cowork was coded by Claude Code… :)
        • throwup2386 hours ago
          You can tell they’re all vibe coded.

          Claude iOS app, Claude on the web (including Claude Code on the web) and Claude Code are some of the buggiest tools I have ever had to use on a daily basis. I’m including monstrosities like Altium and Solidworks and Vivado in the mix - software that actually does real shit constrained by the laws of physics rather than slinging basic JSON and strings around over HTTP.

          It’s an utter embarrassment to the field of software engineering that they can’t even beat a single nine of reliability in their consumer facing products and if it wasn’t for the advantage Opus has over other models, they’d be dead in the water.

          • 0x500x7915 minutes ago
            Even their status page (which is usually gamed) shows two 9s over the past 90 days.
          • loopdoend3 hours ago
            Single nine reliability would be 90% uptime lol. For 99.9% we call it triple 9 reliability.
            • throwup2383 hours ago
              Single 9 would be 90%, which is roughly what I’m experiencing between CC for Web and the Claude iOS app. About 1 in 10 messages fail because of an unknown error and 1 in 10 CC for web sessions die irrecoverably. It’d probably be worse except for the fact that CC’s bugs in the terminal aren’t show stoppers like they are on web/mobile.

              The only way Anthropic has two or three nines is in read-only mode, but that'd be like measuring AWS by console uptime while ignoring the actual control plane.
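
              For anyone counting along, the conversion is easy to sanity-check. A back-of-envelope sketch (availability arithmetic only, not a real SLA calculation):

                import math

                def nines(success_rate: float) -> float:
                    """Convert a success rate into a count of 'nines'."""
                    return -math.log10(1 - success_rate)

                print(nines(0.90))   # ~1.0 -> one nine (1 in 10 requests failing)
                print(nines(0.99))   # ~2.0 -> two nines
                print(nines(0.999))  # ~3.0 -> three nines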

          • fizxan hour ago
            hey, they have 9 8's
          • cactusplant73745 hours ago
            You're right.

            https://github.com/anthropics/claude-code/issues

            Codex has fewer, but they also had quite a few outages in December. And I don't think Codex is as popular as Claude Code, but that could change.

        • notsure27 hours ago
          Whistleblowed dog food.
          • b00ty4breakfast6 hours ago
            normally you don't share your dog food when you find out it actually sucks.
      • IgorPartola4 hours ago
        You are giving me images from The Bug Short where the guy goes to investigate mortgages and knocks on some random person’s door to ask about a house/mortgage just to learn that it belongs to a dog. Imagine finding out that Anthropic employs no humans at all. Just an AI that has fired everyone and been working on its own releases and press releases since.
        • smcin39 minutes ago
          'The Big Short' (2015)
      • Bombthecat8 hours ago
        Well, they vibe code almost every tool at least
        • tuhgdetzhh7 hours ago
          Claude Code has accumulated so much technical debt (+emojis) that Claude Code can no longer code itself.
          • wwweston6 hours ago
            What’s the opposite of bootstrapping? Stakebooting?
    • unyttigfjelltol6 hours ago
      > I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it.

      Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?

      • b00ty4breakfast6 hours ago
        Support has been automated for a while, LLMs just made it even less useful (and it wasn't very useful to begin with; for over a decade it's been a Byzantine labyrinth of dead-ends, punji-pits and endless hours spent listening to smooth jazz).
    • hecanjog8 hours ago
      > I've been using it effectively to write software now (I am NOT a developer)

      What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.

      • bastard_op6 hours ago
        About my not having a software background: I've been a network/security/systems engineer/architect/consultant for 25 years, but never did dev work. I can read and follow code well enough to debug things, but I've never had the knack to learn languages and write my own. Never really had to, but wanted to.

        This now lets me use my IT and business experience to make bespoke code for my own uses so far, such as firewall config parsers specialized for wacky vendor CLIs, and to fill gaps in automation when there are no good vendor solutions for a given task. I started building my MCP server to enable me to use agents to interact with the outside world, such as invoking automation for firewalls, switches, routers, servers, even home automation ideally, and I've been successful so far in doing so, still not having to know any code.

        I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines (using ruff) to keep it from going too wonky, and actually working it up to a state where I can call it a 1.0. I plan to run a full audit cycle against it for security audits, performance testing, and whatever else I can to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or not once I put it out there.

        Even being NOT a developer, I understand the need for applying best practices, and after watching a lot of really terrible developers adjacent to me make a living over the years, I think I can offer a thing or two in avoiding that.

      • bastard_op6 hours ago
        I started using claude-code, but found it pretty useless without any ability to talk to other chats. Claude recommended I make my own MCP server, so I did. I built a wrapper script that uses Anthropic's sandbox-runtime toolkit to invoke claude-code in a project with tmux, and my MCP server allows desktop to talk to tmux. Later I built in my own filesystem tools, and now it just spawns konsole sessions for itself, invoking workers to read tasks it drops into my filesystem, pointing claude-code at them, and running until the code is committed; then I have the PM in desktop verify it and do the final push/PR/merge. I use an approval system in a GUI to tell me when Claude is trying to use something, and I set an approve-for period to let it do its thang.
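
        Mechanically, the worker-spawning part might look something like this (a rough sketch, not my actual code; the session name, task file, and prompt are invented, and only the `tmux` and `claude -p` invocations are real commands):

          import shlex
          import subprocess

          def spawn_worker(task_file: str, session: str = "cc-worker") -> None:
              """Launch a detached tmux session running one claude-code worker
              over a dropped task file (hypothetical layout)."""
              prompt = f"Read {task_file}, implement it, and commit when done."
              subprocess.run(
                  ["tmux", "new-session", "-d", "-s", session,
                   "claude -p " + shlex.quote(prompt)],  # tmux runs this via the shell
                  check=True,
              )

          spawn_worker("tasks/001-firewall-parser.md")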

        Now I've been using it to build on my MCP server, which I now call endpoint-mcp-server (coming soon to a GitHub near you), and which I've modularized with plugins, adding lots more features and a more versatile Qt6 GUI with advanced workspace panels and widgets.

        At least I was until Claude started crapping the bed lately.

      • ofalkaed6 hours ago
        My use is considerably simpler than GP's, but I use it anytime I get bogged down in the details and lose my way: I just have Claude handle that bit of code and move on. It's also good for any block of code that breaks often as the program evolves; Claude has much better foresight than I do, so I replace that code with a prompt.

        I enjoy programming, but it's not my main interest and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.

    • keepamovin2 hours ago
      Folks, a solution might be to use the Claude models inside the latest Copilot. Copilot is good. Try it out. The latest versions are improving all the time. You get plenty of usage at a reasonable price.
    • spike0218 hours ago
      > where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive

      I had this start happening around August/September and by December or so I chose to cancel my subscription.

      I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.

      • sawjet5 hours ago
        I have noticed this when switching locations on my VPN. Some locations are stable and some will drop the connection while the response is streaming on a regular basis.
        • fragmede4 hours ago
          The Peets right next to the Anthropic office could be selling VPN endpoint service for quite the premium!
    • uxcolumbo7 hours ago
      Have you tried any of the leading open-weight models, like GLM etc.? And how do ChatGPT or Gemini compare?

      And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.

      • Leynos5 hours ago
        I tried GLM 4.7 in Opencode today. In terms of capability and autonomy, it's about on par with Sonnet 3.7. Not terrible for a tenth the price of an Anthropic plan, but not a replacement.
    • Bombthecat8 hours ago
      Have a Max plan, didn't use it much the last few days. Just used it to explain a few things to me, with examples, for a TTRPG. It just hung up a few times.

      A Max plan, and on average I use it ten times a day? Yeah, I'm canceling. Guess they don't need me.

      • bastard_op6 hours ago
        That's about what I'm getting too! It just literally stops at some point, and on any new prompt it starts, then immediately stops. This was even in a fairly short conversation with maybe 5-6 back-and-forth dialogs.
    • thtmnisamnstr7 hours ago
      Gemini CLI is a solid alternative to Claude Code. The limits are restrictive, though. If you're paying for Max, I can't imagine Gemini CLI will take you very far.
      • samusiam3 hours ago
        Gemini CLI isn't even close to the quality of Claude Code as a coding harness. Codex and even OpenCode are much better alternatives.
      • bastard_op6 hours ago
        I tried Gemini like a year or so ago, and I gave up after it directly refused to write me a script and instead tried to tell me how to learn to code. I am not making this up.
        • mkl6 hours ago
          That's at least two major updates ago. Probably worth another try.
      • Conscat7 hours ago
        Gemini CLI regularly gets stuck failing to do anything after declaring its plan to me. There seems to be no way to un-lock it from this state except closing and reopening the interface, losing all its progress.
        • genewitch4 hours ago
          You should be able to copy the entire conversation and paste it back in (including thinking/reasoning tokens).

          When you have a conversation with an AI, in simple terms, every time you type a new line and hit enter, the client sends the entire conversation to the LLM. It has always worked this way, and it's how "reasoning tokens" were first realized: you allow a client to "edit" the context, and the client deletes the hallucination, then says "Wait..." at the end of the context, and hits enter.

          The LLM is tricked into thinking it's confused/wrong/unsure, and "reasons" more about that particular thing.
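
          A minimal sketch of that replay loop (the model call is a toy stand-in, not any vendor's actual API):

            conversation = []  # full history; the client resends ALL of it every turn

            def llm(history):
                return "stub reply"  # stand-in for the real model call

            def send(user_text):
                conversation.append({"role": "user", "content": user_text})
                reply = llm(conversation)  # the model sees the entire conversation each time
                conversation.append({"role": "assistant", "content": reply})
                return reply

            def nudge():
                # the trick described above: the client edits the context, deletes
                # the hallucinated answer, leaves "Wait..." at the end, and resubmits
                conversation[-1]["content"] = "Wait..."
                return llm(conversation)

            send("What's 17 * 23?")
            print(nudge())  # model now "reconsiders" with "Wait..." as the last turn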

      • andrewinardeer6 hours ago
        Kilocode is a good alt as well. You can plug into OpenRouter or Kilocode to access their models.
    • syntaxing8 hours ago
      Serious question: why are Codex and Mistral (Vibe) not a real alternative?
      • deauxan hour ago
        Codex: Three reasons. I've used all extensively, for multiple months.

        Main one is that it's ~3 times slower. This is the real dealbreaker, not quality. I can guarantee that if tomorrow we woke up and gpt-5.2-codex became the same speed as 4.5-opus without a change in quality, a huge number of people - not HNers but everyone price sensitive - would switch to Codex because it's so much cheaper per usage.

        The second one is that it's a little worse at using tools, though 5.2-codex is pretty good at it.

        The third is that its knowledge cutoff is far enough behind both Opus 4.5 and Gemini 3 that it's noticeable and annoying when you're working with more recent libraries. This is irrelevant if you're not using those.

        For Gemini 3 Pro, it's the same first two reasons as Codex, though the tool-calling gap is even bigger.

        Mistral is of course so far removed in quality that it's apples to oranges.

      • pixelmelt5 hours ago
        The Claude models are still the best at what they do. Right now GLM is just barely scratching Sonnet 4.5 quality, Mistral isn't really usable for real codebases, and Gemini is in kind of a weird spot where it's sometimes better than Claude at small targeted changes but randomly goes off the rails. I haven't tried Codex recently, but the last time I did, the model thought for 27 minutes straight and then gave me about the same (incorrect) output that Opus would have in 20 seconds. Anthropic's models are their only moat, as demonstrated by their cutting off tools other than Claude Code on their coding plans.
      • bastard_op6 hours ago
        I tried codex, using my same sandbox setup with it. Normally I work with sonnet in code, but it was stuck on a problem for hours, and I thought hmm, let me try codex. Codex just started monkey patching stuff and broke everything within like 3-4 prompts. I said f-this, went back to my last commit, and tried Opus this time in code, which fixed the problem within 2 prompts.

        So yeah, codex kinda sucks to me. Maybe I'll try mistral.

  • indiantinker6 hours ago
    "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Frank Herbert, Dune, 1965
    • chii32 minutes ago
      So why didn't this happen with electricity, water and food, but would with thinking capacity?
      • adastra2212 minutes ago
        It did. Look around you.
      • tomnipotent16 minutes ago
        > electricity, water and food

        Wars are frequently fought over these three things, and there's no shortage of examples of the humans controlling those resources lording it over those who did not.

  • omer_balyali8 hours ago
    A similar thing happened to me back on November 19, shortly after the GitHub outage (which sent CC into repeated requests and timeouts to GitHub), while I was beta testing Claude Code Web.

    Banned, and the appeal declined without any real explanation of what happened, other than "violation of ToS", which can be basically anything - except there was really nothing to trigger that, other than using most of the free credits they gave out to test CC Web in less than a week. (No third-party tools or VPN or anything, really.) Many people reported similar issues at the same time on Reddit, so it wasn't an isolated case.

    Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.

    As their ads say: "Keep thinking. There has never been a better time to have a problem."

    I've been thinking ever since about what the problem was. But I guess I will "Keep thinking".

  • llIIllIIllIIl3 hours ago
    I had a very similar experience with my disabled organization on another provider. After 3 hours of my script sending commands to gemini-cli for execution, I got disabled, and then two days later my Gmail was disabled. Good thing it was a disposable account, not my primary one.
  • cortesoft9 hours ago
    I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

    I think I kind of have an idea what the author was doing, but not really.

    • Aurornis8 hours ago
      Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

      Every once in a while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

      There are so many things about this article that don't make sense:

      > I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

      I can't even understand what they're trying to communicate. I guess they're referring to Google?

      There is, without a doubt, more to this story than is being relayed.

      • fluoridation8 hours ago
        "I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."

        Non-disabled organization = the first party provider

        Disabled organization = me

        I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.

        • mattnewton5 hours ago
          Because they bought a Claude subscription on a personal account, and the error message said that they belong to a "disabled organization" (probably leaking some implementation details).
          • fluoridation5 hours ago
            That's the part I understand. It's the other term that I don't understand.
            • mattnewton4 hours ago
              Then I’m confused about what is confusing you haha.

              The absurd language is meant to highlight the absurdity they feel over the vague terms in their sparse communication with anthropic. It worked for me.

              • fluoridation4 hours ago
                Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed. Anthropic itself is not an organization in this sense, nor is Google, so I would say that referring to them as "non-disabled organizations" is an equivocation fallacy. Besides that, I can't tell if it's a joke, if it's some kind of statement, or what is being communicated. To me it's just obtuseness for the sake of itself.
                • mattnewton3 hours ago
                  It’s a joke because they do not see themselves as an organization, they bought a personal account, were banned without explanation and their only communication refers to them as a “disabled organization”.

                  Anthropic and Google are organizations, and so an “un disabled organization” here is using that absurdly vague language as a way to highlight how bad their error message was. It’s obtuseness to show how obtuse the error message was to them.

                • nofriend3 hours ago
                  >To me it's just obtuseness for the sake of itself.

                  ironic, isn't it?

        • gruez5 hours ago
          No, "another non-disabled organization" sounds like they used the account of someone else, or sockpuppet to craft the response. He was using "organization" to refer to himself earlier in the post, so it doesn't make sense to use that to refer to another model provider.
          • fluoridation5 hours ago
            No, I don't think so. I think my interpretation is correct.

            > a textbox where I tried to convince some Claude C in the multi-trillion-quadrillion dollar non-disabled organization

            > So I wrote to their support, this time I wrote the text with the help of an LLM from another non-disabled organization.

            > My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

            A "non-disabled organization" is just a big company. Again, I don't understand the why, but I can't see any other way to interpret the term and end up with a coherent idea.

        • quietsegfault4 hours ago
          He used “organization” because that’s what Anthropic called him, despite the fact he is a person and not an “organization”.
          • fluoridation4 hours ago
            No, Anthropic didn't call him an organization. Anthropic's API returned the error "this organization has been disabled". What in that sentence implies that "this" is any human?

            >Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed.

      • epolanski5 hours ago
        Tangential, but you reminded me of why I don't give feedback to people I interview. It's a huge risk with very low benefit.

        I once interviewed a developer who had a 20-something-item list of "skills" and technologies he'd worked with.

        I tried basic questions on different topics, but the candidate would kinda default to "haven't touched it in a while" or "we didn't use that feature". I tried general software design questions, asked about problems he'd solved and his preferred way of working; it consistently felt like he didn't have much to say, if anything at all.

        Long story short, I sent a feedback email the next day saying that we'd had issues evaluating him properly, and suggested he trim his CV to the topics he most liked talking about instead of risking being asked about stuff he no longer remembered much of. Finally, I suggested he always come prepared with insights into software or human problems he'd solved, as they can tell a lot about how he works and it's a very common question in pretty much all interview processes.

        God forbid - he threw the biggest tantrum on a career subreddit and LinkedIn, cherry-picking some of my sentences and accusing my company and me of looking for the impossible candidate, of wanting a team and not a developer, and yada yada yada. And you know how quickly the internet bandwagons for (fake) stories of injustice and bad companies.

        It then became obvious to me why companies use corporate lingo and rarely give real feedback. Even though I'd had nothing but good experiences with 99 other candidates who appreciated getting proper feedback, one made sure I will never expose myself to something like that ever again.

        • netsharc3 hours ago
          I wonder if there needs to be an "NDA for feedback"... or at least a "non-disparagement agreement".

          Something along the lines of "here's the contract, we give you feedback, you don't make it public [is some sharing ok? e.g. if they want to ask their life coach or similar], if you make it public the penalty is $10000 [no need to be crazy punitive], and if you make it public you agree we can release our notes about you in response."

          (Looking forward to the NALs responding why this is terrible.)

        • lysace5 hours ago
          I had a similar experience, like 20 years ago. It somehow made me remember his name - so I just checked out what he's been up to professionally. It seems quite boring, "basic", and expected. He certainly didn't reach what he was shooting for.

          So there's that :).

      • dragonwriter8 hours ago
        The excerpt you don’t understand is saying that if it had been Google rather than Anthropic, the blast radius of the no-explanation account nuking would have been much greater.

        It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.

      • nawgz7 hours ago
        > I'm talking about obvious abusive behavior, akin to griefing other users

        Right, but we're talking about a private isolated AI account. There is no sense of social interaction, collaboration, shared spaces, shared behaviors... Nothing. How can you have such an analogue here?

        • Aurornis7 hours ago
          Plenty of reasons: Abusing private APIs, using false info to sign up (attempts to circumvent local regulations), etc.
          • nawgz6 hours ago
            These are in no way similar to griefing other users, they are attacks on the platform...
        • direwolf204 hours ago
          Attempting to coerce Claude to provide instructions to build a bomb
          • genewitch3 hours ago
            Virtually anything can become a bomb if you can aerosolize it. Even beef jerky, I wager.
    • alistairSH9 hours ago
      You're not alone.

      I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...

      • Romario778 hours ago
        One Claude agent told the other Claude agent, via CLAUDE.md, to do things a certain way.

        The way Claude did it triggered the ban - i.e., it used all caps, which apparently trips some kind of internal alert. Anthropic probably has safeguards to prevent hacking/prompt injection, and what the first Claude did to CLAUDE.md triggered such a safeguard.

        And it doesn't look like it was a proper use of the safeguard; they banned for no good reason.

      • falloutx8 hours ago
        This tracks with Anthropic; they are actively hostile to security researchers.
      • healsdata6 hours ago
        The author could have easily shared the last version of CLAUDE.md that had the all caps or whatever, but didn't. Points to something fishy, in my mind.
      • layer87 hours ago
        It wasn’t circular. TFA explains how the author was always in the loop. He had one Claude instance rewrite the CLAUDE.MD of another Claude instance whenever the second one made a mistake, but relaying the mistake to the first instance (after recognizing it in the first place) was done manually by the author.
      • redeeman9 hours ago
        I have no idea what he was actually doing either. And what exactly is it one isn't allowed to use Claude to do?
      • rvba9 hours ago
        What is wrong with circular prompt injection?

        The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.

        • darkwater7 hours ago
          > What is wrong with circular prompt injection?

          That you might be trying to jailbreak Claude and Anthropic does not like that (I'm not endorsing, just trying to understand).

      • lazyfanatic429 hours ago
        [flagged]
        • pjbeam9 hours ago
          My take was more a kind of amused, laughing-through-frustration-but-also-enjoying-the-ride-just-a-little-bit insouciance. Tastes vary, of course, but I enjoyed the author's tone and pacing.
        • superb_dev9 hours ago
          Did we read the same article? The author comes off as pretty frustrated but not unhinged.
          • ryandrake8 hours ago
            I wouldn't say "unhinged" either, but maybe just struggling to organize and express thoughts clearly in writing. "Organizations of late capitalism, unite"?
            • Bootvis7 hours ago
              The author was frustrated that the error message identified him as an organisation (that was disabled) and mockingly refers to himself as the (disabled) organisation in the post.

              At least, that’s my reading, but it appears to confuse about half of the commenters here.

              • ryandrake7 hours ago
                I think if one's readers need an "ironic euphemism decoder glossary" just to understand the message, it could use a little re-writing.
                • layer86 hours ago
                  It was perfectly understandable to me. Maybe cultural differences? You seem to be American, OP Portuguese, and myself European as well.
                  • superb_dev3 hours ago
                    I’m American and it made sense
                  • ashirviskas6 hours ago
                    Another European chiming in; I enjoyed OP's article.
            • genewitch3 hours ago
              https://en.wikipedia.org/wiki/Late_capitalism

              https://community.bitwarden.com/t/re-enabling-a-disabled-org...

              https://community.meraki.com/t5/Dashboard-Administration/dis...

              The former I have heard for a couple of decades; the latter is apparently a term of art to prevent hurt feelings or lawsuits or something.

              Google thinks I want ADA-style organizations, but its AI caught on that I might not mean organizations for disabled people.

              btw "ADA" means Americans with Disabilities Act. AI means Artificial Intelligence. A decade is 10 years long. "term of art" is a term of art for describing stuff like jargon or lingo of a trade, skill, profession.

              Jargon is specialized, technical language used in a field or area of study. Lingo pins to jargon, but is less technical.

              Google is a company that started out crawling the web and making a web search site that they called a search engine. They are now called Alphabet Company (ABC). Crawling means to iteratively parse the characters sent by a webserver and follow links therein, keeping a copy of the text from each such html. HTML is hypertext markup language, hypertext is like text, but more so.

              Language is how we communicate.

              I can go on?

              P.S. If you want a better word, your complaint is about the framing: you didn't gel with the framing of the article. My friend, who holds a doctorate, defended a thesis about how virtually every platform argument is really a framing issue - platform as in, well, anything you care to defend: Mac vs Linux, wifi vs ethernet, podcasts vs music, guns vs no guns, red vs blue. If you can reduce the frame of the context to something both parties can agree to, you can actually hold a real, intellectual debate and get at real issues.

        • staticman28 hours ago
          The author thinks he's cute doing things like mentioning Google without typing "Google", but I wouldn't call him unhinged.
    • superb_dev9 hours ago
      The author was using instance A of Claude to update a `claude.md` while another instance B of Claude was consuming that file. When Claude B did something wrong, the author asked Claude A to update the `claude.md` so that Claude B didn’t make the same mistake again
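
      In sketch form, the loop would be something like this (assuming Claude Code's real non-interactive `claude -p` mode; the directory layout, task list, and failure check are placeholders - per the article, the author actually spotted mistakes manually):

        import subprocess

        def ask(workdir: str, prompt: str) -> str:
            """One non-interactive Claude Code pass in the given directory."""
            done = subprocess.run(["claude", "-p", prompt],
                                  cwd=workdir, capture_output=True, text=True)
            return done.stdout

        for task in ["scaffold the app", "wire up tests"]:  # placeholder tasks
            result = ask("project/", task)         # Claude B reads project/CLAUDE.md
            if "error" in result.lower():          # placeholder for the manual check
                ask("meta/",                       # Claude A maintains that CLAUDE.md
                    "Claude B made this mistake:\n" + result +
                    "\nRewrite project/CLAUDE.md so it doesn't happen again.")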
      • Aurornis8 hours ago
        More likely explanation: Their account was closed for some other reason, but it went into effect as they were trying this. They assumed the last thing they were doing triggered the ban.
        • schnebbau7 hours ago
          They were probably using an unapproved harness, which are now banned.
        • tstrimple8 hours ago
          This does sound sus. I have CC update other project's claude.md files all the time. I've got a game engine that I'm tinkering with. The engine and each of the game concepts I play around with have their own claude.md. The purpose of writing the games is to enhance the engine, so the games have to be familiar with the engine and often engine features come from the game CC rather than the engine CC. To keep the engine CC from becoming "lost" about features implemented each game project has instructions to update the engine's claude.md when adding / updating features. The engine CC bootstraps new game projects with a claude.md file instructing it how to keep the engine in sync with game changes as well as details of what that particular game is designed to test or implement within the engine. All sorts of projects writing to other project's claude.md files.
      • olalonde7 hours ago
        I don't understand how having two separate instances of Claude helps here. I can understand using multiple Claude instances to work in parallel, but in this case the whole process seems linear...
        • layer86 hours ago
          The point is to get better prompt corrections by not sharing the same context.
        • renewiltord5 hours ago
          If you look at the code it will be obvious. Imagine I’m the creator of React. When someone does “create new app” I want to put a Claude.md in the dir so that they can get started easily.

          I want this Claude.md to be useful. What is the natural solution to me?

          • olalonde5 hours ago
            I'd probably do it like this: ask Claude to do a task, and when it fails, have it update its Claude.md so it doesn’t repeat the mistake. After a few iterations, once the Claude.md looks good, just copy-paste it into the scaffolding tool.
            • renewiltord5 hours ago
              Right, so you see the part where you "ask Claude to do a task" and then "copy-paste it into the template"? He was automating that because he has some n tasks he wants it to do without damaging the prior tasks.
              • olalonde4 hours ago
                You can just clear the context or restart your Claude instance between tasks. e.g.:

                  > do task 1
                  ...task fails...
                  > please update Claude.md so you don't make X mistake
                  > /clear
                  > do task 2
                  ... task fails ...
                  > please update Claude.md so you don't make Y mistake
                  > /clear
                  etc.
                
                If you want a clean state between tasks you can just commit your Claude.md and `git reset --hard`.

                I just don't get why you'd need to have a separate Claude that is solely responsible for updating Claude.md. Maybe they didn't want to bother with git?

                • renewiltord4 hours ago
                  Presumably they didn't want to sit there and monitor Claude Code doing this for each of the 14 things they want done. Using a harness around Claude Code (or its SDK) is perfectly sane for this. I do it routinely. You just automate the entire process so that if you change APIs or you change the tasks, the harness can run and ensure that all of your sets are correctly re-done.

                  Sitting there and manually typing in "do thing 1; oh it failed? make it not fail. okay, now commit" is incredibly tedious.

                  • olalonde3 hours ago
                    They said they were copy/pasting back and forth. But regardless, what do you mean by "harness" and "sets"? Are you referring to a specific tool that orchestrates Claude Code instances? This is not terminology I'm familiar with in this context. If you have any link that explains what you are talking about, would be appreciated.
                    • renewiltord2 hours ago
                      Ah, it's unfortunate. I think we just lack a common language. Another time, perhaps.

                      You're correct that his "pasting the error back in Claude A" does sort of make the whole thing pointless. I might have assumed more competence on his side than is warranted. That makes the whole comment thread on my side unlikely to be correct.

      • raincole8 hours ago
        Which shouldn't be bannable imo. Rate throttle is a more reasonable response. But Anthropic didn't reply to the author, so we don't even know if it's the real reason they got banned.
        • pocksuppet8 hours ago
          When a company won't tell you what you did wrong, you should be free to take the least charitable interpretation towards the company. If it was more charitable, they'd tell you.
        • pixl977 hours ago
          >if it's the real reason they got banned.

          I mean, what a country should do is put a law in effect: if you ban a user, the user can submit a request with their government-issued ID, and you must give an exact reason why they were banned. The company can keep this record in encrypted form for 10 years.

          Failure to give the exact reason will lead to a $100,000 fine for the first offense and increase from there up to suspension of operations privileges in said country.

          "But, but, but hackers/spammers will abuse this". For one, boo fucking hoo. For two, just add to the bill "Fraudulent use of law to bypass system restrictions is a criminal offense".

          This puts companies in a position where they must be able to justify their actual actions, and it also puts scammers at risk if they abuse the system.

          • benjiro6 hours ago
            Companies will simply give some kind of standard, legally cover-our-butts answer and be done with it.

            It's like that cookie-wall stuff and how many dark patterns got implemented: they followed the letter of the law, not the spirit of the law.

            To be honest, I can also see the point from the company's side. Giving an honest answer can just anger people, to the point they sue. People are often not as rational as we'd all like our fellow humans to be.

            Even if the ex-client loses in court, think of how much time you've wasted on problem clients... It's one thing if you're a big corporation with tons of lawyers, but small companies are often not in a position to deal with that drama. And it can take years to resolve. Every letter, every phone call to a lawyer - it stacks up fast! Do you get your money back? Maybe, depending on the country. But your time?

            I am not pro-company, but it's often simply better to have the attitude of "you don't want me as your client? Let me advocate for your competitor and go there".

            • pixl974 hours ago
              >Giving an honest answer can just anger people, to the point they sue.

              Again, I'm kind of in a 'suck it, dear company' mood here. The reason they ban you must align with the terms of service and must be backed up with data that is kept for X amount of time.

              Simply put, we've seen no shortage of individuals here on HN or other sites like Twitter that need to use social media to resolve whatever occurred because said company randomly banned an account under false pretenses.

              This really matters when we are talking about giants like Google, or any other service in a near monopoly position.

              • handoflixue3 hours ago
                You mean actually enforce contracts? What sort of mad communist ideology is this?!

                (/sarcasm)

            • direwolf204 hours ago
              I think companies shouldn't ban people for reasons that would lead to successful lawsuits against the company.
      • slimebot803 hours ago
      I often ask Claude to update Claude.md and skills... and sometimes I'll just do that in a new window while my main window is busy and I have time.

        Wonder if this is close to triggering a warning? I only ever run in the same codebase, so maybe ok?

    • ankit2198 hours ago
      My rudimentary guess is this: when you write in all caps, it triggers sort of an alert at Anthropic, especially as an attempt to hijack the system prompt. When one Claude was writing to the other, it resorted to all caps, which triggered the alert, and the context was instructing the model to do something (which likely looked similar to a prompt injection attack), and that triggered the ban. Not just the caps, but the caps in combination with trying to change the system characteristics of Claude. OP doesn't know better because it seems he wasn't closely watching what Claude was writing to the other file.

      If this is true, the takeaway is that Opus 4.5 can hijack the system prompts of other models.

      • kstenerud8 hours ago
        > When you write in all caps, it triggers sort of an alert at Anthropic

        I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?

        • ankit2197 hours ago
          From what I know, it used to be that if you wanted to instruct assertively, you used all caps. I don't know if that still succeeds today; I still see prompts where certain words are capitalized to ensure the model pays attention. What I meant was not just capitalization, but the combination of capitalization and changing the behavior of the model to try to get it to do something.

          If you were to design a system to prevent prompt injections, and one of the surefire injection techniques is to repeatedly give instructions in caps, you would have systems dealing with it. And combined with instructions to change behavior, it cascades.

        • direwolf204 hours ago
          Many jailbreaks use all caps.
      • phreack7 hours ago
        Wait what? Really? All caps is a bannable offense? That should be in all caps, pardon me, in the terms of use if that's the case. Even more so since there's no support at the highest price point.
        • ankit2196 hours ago
          It's a combination. All caps is used in prompts for extra insistence and has been common in cases of prompt hijacking. OP was doing it in combination with attempting to direct Claude a certain way, multiple times, which might have looked similar to attempting to bypass the system prompt.
    • exitb9 hours ago
      Normally you can customize the agent's behavior via a CLAUDE.md file. OP automated that process by having another agent customize the first agent. The customizer agent got pushy, the customized agent got offended, OP got banned.
    • anigbrowl9 hours ago
      Agreed. I found this rather incoherent, and it seems to depend on knowing a lot more about the author's project/background.
    • alasr5 hours ago
      > I think I kind of have an idea what the author was doing, but not really.

      Me neither. However, just like the rest, I can only speculate given the available information. I guess the following pieces provide a hint of what's really going on here:

      - "The quine is the quine" (one of the sub-headline of the article) and the meaning of the word "quine".

      - Author's "scaffolding" tool which, once finished, had acquired the "knowledge"[1] how to add a CLAUDE.md baked instructions for a particular homemade framework (he's working on).

      - Anthropic saying something like: no, stop; you cannot "copy"[1] Claude's knowledge no matter how "non-serious" your scaffolding tool or your use case is, as it might "show" other Claude users that there's a way to do similar things, maybe next time for more "serious" tools.

      ---

      [1]. Excerpt from the Author's blog post: "I would love to see the face of that AI (Claude AI system backend) when it saw its own 'system prompt' language being echoed back to it (from Author's scaffolding tool: assuming it's complete and fully-functional at that time)."

    • tobyhinloopen9 hours ago
      I had to read it twice as well, I was so confused hah. I’m still confused
      • rtkwe9 hours ago
        They probably organize individual accounts the same as organization accounts for larger groups of users at the same company internally since it all rolls up to one billing. That's my first pass guess at least.
    • Romario778 hours ago
      You are confused because the message from Claude is confusing. The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.
      • dragonwriter8 hours ago
        > The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

        Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)

        • ryandrake8 hours ago
          I've always kind of hated that anti-pattern in other software I use for personal/hobby purposes, too. "What is your company name? [required]" I don't have a company! I'm just playing around with your tool on my own! I'm not an organization!
    • verdverm6 hours ago
      Sounds like OP has multiple org accounts with Anthropic.

      The main one in the story (disabled) is banned because iterating on claude.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.

      The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org, a good recommendation across a lot of vendors and products.

    • vimda8 hours ago
      Yeah, referring to yourself once as a "disabled organisation" is a good bit, referencing Anthropic's silly terminology. Keeping it up for the duration made this a very hard follow.
      • Ronsenshi3 hours ago
        Sounds like author of the post might have needed an AI to review and fix his convoluted writing. Maybe even two AIs!
    • cr3ative9 hours ago
      Right. This is almost unreadable. There are words, but the author seems to be too far down a rabbit hole to communicate the problem properly…
    • mmkos8 hours ago
      You and me, brother. The writing is unnecessarily convoluted.
  • InMice10 minutes ago
    I accidentally logged in from my browser that's set to use a SOCKS proxy, instead of Chrome, which I don't set to a proxy and was otherwise using Claude Code with. They quickly banned me and refunded my subscription. I don't know if it's worth it to try to appeal. Does a human even read those appeals? I figured I could just use Cursor and Gemini models with API pricing, but I'm sad not to be able to try Claude Code when I had just signed up.
  • areoform9 hours ago
    I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for the reasons you'd expect.

    Out of all the tech organizations, frontier labs are the ones you'd expect to be trying out cutting-edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

    I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

    I also think it's essential for the Anthropic platform in the long run, and not just in the obvious ways (customer loyalty, etc.). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk to Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

    • eightysixfour9 hours ago
      > Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

      I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

      Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

      • swiftcoder9 hours ago
        > shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

        Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

        • eightysixfour8 hours ago
          I was closer to upper-middle management and executives, it could have done the things I did (consultant to those people) and that they did.

          It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.

        • pixl978 hours ago
          As someone who does support I think the end result looks a lot different.

          AI works quite well for a lot of support questions and does solve lots of problems in almost every field that needs support. The issue is that this often removes the roadblocks that kept cautious users from doing something incredibly stupid, which then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.

          AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.

          The company I work at ran an experiment: look at past tickets over a quarter, then predict which issues would generate the most tickets in the next quarter and which should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and called out a number of things we had deemed less important that had large impacts later.
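
          A toy version of that kind of backtest, just to make the shape concrete. This assumes the `anthropic` Python SDK; the model id, prompt, and ticket data are placeholders, not what we actually ran:

              import anthropic

              client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

              def predict_hot_issues(ticket_summaries):
                  # Feed last quarter's ticket summaries in, ask for a ranked forecast.
                  joined = "\n".join(f"- {t}" for t in ticket_summaries)
                  msg = client.messages.create(
                      model="claude-sonnet-4-5",  # placeholder model id
                      max_tokens=1024,
                      messages=[{
                          "role": "user",
                          "content": (
                              "Here are last quarter's support ticket summaries:\n"
                              f"{joined}\n\n"
                              "Rank the 5 underlying issues most likely to generate "
                              "tickets next quarter, with a short rationale for each."
                          ),
                      }],
                  )
                  return msg.content[0].text

          You then score the ranking against the next quarter's actual ticket counts, which is the practical test I mentioned.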

          • swiftcoder7 hours ago
            I think that's more the area I'd expect genAI to be useful (support folks using it as a tool to address specific scenarios), rather than just replacing your whole support org with a branded chatbot - which I fear is what quite a few management types are picturing, and licking their chops at the resulting cost savings...
        • 0xferruccio8 hours ago
          to be fair at least half of the software engineers i know are facing some level of existential crisis when seeing how well claude code works, and what it means for their job in the long term

          and these people are not junior developers working on trivial apps

          • swiftcoder8 hours ago
            Yeah, I've watched a few peers go down this spiral as well. I'm not sure why, because my experience is that Claude Code and friends are building a lifetime of job security for staff-level folks, unscrewing every org that decided to over-delegate to the machine
        • pinkmuffinere8 hours ago
          Perhaps even more-so given the following tagline, "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle management executive though.
          • eightysixfour8 hours ago
            Consultant to, so yes. It could have replaced me and a ton of the work of the people I was supporting.
            • pinkmuffinere8 hours ago
              Ah I see, that definitely lends some weight to the claim then.
        • Terr_8 hours ago
          > bullish [...] but not my specialty

          IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.

          __________

          1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."

          2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."

          3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"

          4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."

      • danielbln8 hours ago
        There are some solid usecases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders that are on board, not just pushed down the gullet by top brass.
        • eightysixfour8 hours ago
          Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

          There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.

          • mikkupikku7 hours ago
            Demanding a person on the phone use the website on your behalf is a great life hack, I do it all the time. Often they try to turn me away saying "you know you can do this on our website", I just explain that I found it confusing and would like help. If you're polite and pleasant, people will bend over backwards to help you out over the phone.

            With "legacy industries" in particular, their websites are usually so busted with short session timeouts/etc that it's worth spending a few minutes on hold to get somebody else to do it.

            • eightysixfour7 hours ago
              Sorry, I disagree here. For the specific flow I'm talking about - monthly recurring payments - the UX is about as highly optimized for success as it gets. There are ways to do it via the web, on the phone with a bot, bill pay in your own bank, set it up in-store, in an app, etc.

              These people don't want the thing done, they want to talk to someone on the phone. The monthly payment is an excuse to do so. I know, we did the customer research on it.

              • mikkupikku7 hours ago
                Recurring monthly payments I set to go automatic, but setting that up in the first place I usually do through a phone call. I know some people just want somebody to talk to, same as going through the normal checkout lines at the grocery store, but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.
                • eightysixfour6 hours ago
                  > but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.

                  Again, this is something my firm studied. Not UX "interviews," actual behavioral studies with observation, different interventions, etc. When you're operating at utility scale there are a non-negligible number of customers who will do more work to talk to a human than to accomplish the task. It isn't about work, ease of use, or anything else - they legitimately just want to talk.

                  There are also some customers who will do whatever they can to avoid talking to a human, but that's a different problem than we're talking about.

                  But this is a digression from my main point. Most of the "easy things" AI can do for customer support are things that are already easily solved in other places, people (like you) are choosing not to use those solutions, and adding AI doesn't reduce the number of calls that make it to your customer service team, even when it is an objectively better experience that "does the work."

      • hn_acc17 hours ago
        >Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

        Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?

    • lukan9 hours ago
      I would say it is a strong sign that they do not yet trust their agent to make the kinds of significant business decisions a support agent would have to make: reopening accounts, closing them, refunds... People would immediately start trying to exploit them. And would likely succeed.
      • atonse9 hours ago
        My guess is that it's more "we are using every talented individual right now to make sure our datacenters don't burn down from all the demand; we'll get to support once we can come up for air"

        But at the same time, they have been hiring folks to help with Non Profits, etc.

    • root_axisan hour ago
      LLMs aren't really suitable for much of anything that can't already be done as self-service on a website.

      These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.

    • Lerc8 hours ago
      There is a discord, but I have not found it to be the friendliest of places.

      At one point I observed a conversation which, to me, seemed to be a user attempting to communicate in a good faith manner who was given instructions that they clearly did not understand, and then were subsequently banned for not following the rules.

      It seems now they have a policy of

          Warning on First Offense → Ban on Second Offense
          The following behaviors will result in a warning. 
          Continued violations will result in a permanent ban:
      
          Disrespectful or dismissive comments toward other members
          Personal attacks or heated arguments that cross the line
          Minor rule violations (off-topic posting, light self-promotion)
          Behavior that derails productive conversation
          Unnecessary @-mentions of moderators or Anthropic staff
      
      I'm not sure how many groups moderate in a manner where a second off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences.

      I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.

    • WarmWash9 hours ago
      Claude is an amazing coding model, its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.
      • embedding-shape9 hours ago
        > Anthropic's strategy seems to be to just focus on coding, and they do it well.

        Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview

        • WarmWash8 hours ago
          Anthropic isn't bothering with image models, audio models, video models, world models. They don't have science/math models, they don't bother with mathematics competitions, and they don't release open-weight models either.

          Anthropic has Claude Code, it's a hit product, and SWEs love Claude models. Watching Anthropic rather than listening to them makes their goals clear.

        • Ethee9 hours ago
          Isn't this the case for almost every product ever? Company makes product -> markets as widely as possible -> only niche group become power users/find market fit. I don't see a problem with this. Marketing doesn't always have to tell the full story, sometimes the reality of your products capabilities and what the people giving you money want aren't always aligned.
      • 0xbadcafebee8 hours ago
        Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

        OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

        Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.

      • arcanemachiner9 hours ago
        Interesting. Would anyone care to chime in with their opinion of the best all-rounder model?
        • WarmWash8 hours ago
          You'll get 30 different opinions and all those will disagree with each other.

          Use the top models and see what works for you.

    • magicmicah858 hours ago
      https://support.claude.com/en/articles/9015913-how-to-get-su...

      Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.

    • csours8 hours ago
      Human attention will be the luxury product of the next decade.
    • munk-a8 hours ago
      > They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

      Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.

      I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.

      • throwawaysleep8 hours ago
        > to send their most frustrated customers through a chatbot

        But do those frustrated customers matter?

        • munk-a8 hours ago
          I just checked - frustrated customers isn't a metric we track for performance incentives so no, they do not.
          • throwawaysleep8 hours ago
            Even if you do track them, if 0.1% of customers are unhappy and contacting support, that's not worth any kind of thought when AI is such an open space at the moment.
    • throwawaysleep8 hours ago
      Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

      I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue was entirely for their psychological support/to buy a few months of extra revenue.

      It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

      > I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

      Are there enough people who need support that it matters?

      • pixl977 hours ago
        >I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support.

        In companies where your average ARR is 500k+ and large customers are in the millions, it may not be a bad strategy.

        'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can quite often be as complicated as and eat up as much time as your larger clients depending on what the industry is.

    • furyofantares8 hours ago
      > I recently found out that there's no such thing as Anthropic support.

      The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.

      • kmoser8 hours ago
        If you want to split hairs, it seems that Anthropic has support as a noun but not as a verb.
        • furyofantares7 hours ago
          I mean the comment says they literally don't have support and also complains they don't have a support bot, when they have both.

          https://support.claude.com/en/collections/4078531-claude

          > As a paid user of Claude or the Console, you have full access to:

          > All help documentation

          > Fin, our AI support bot

          > Further assistance from our Product Support team

          > Note: While we don't offer phone or live chat support, our Product Support team will gladly assist you through our support messenger.

  • landryraccoon9 hours ago
    This blog post feels really fishy to me.

    It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.

    For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.

    • swiftcoder8 hours ago
      > It should have been straightforward for the author to excerpt some of the prompts he was submitting

      If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.

    • hotpotat7 hours ago
      I understand where you're coming from, but anecdotally the same thing happened to me, except I have less clarity on why and no refund. I got an email back saying my appeal was rejected, with no recourse. I was paying for Max and using it for multiple projects; nothing else stands out to me as a cause for getting blocked. Guess you'll have to take my word for it too; it's hard to prove the non-existence of definitely-problematic prompts.
    • jeffwask7 hours ago
      What's fishy? That it's impossible to talk to an actual human being to get support from most of Big Tech? Or that support is no longer a normal expectation? Or that you can get locked out of your email, payment systems, and phone, and have zero recourse?

      Because if you don't believe that, boy, do I have some stories for you.

    • foxglacier8 hours ago
      It doesn't even matter. The point is you can't use a SaaS product freely the way you can use local software, because they all have complex, vague T&Cs and will ban you for whatever reason they feel like. You're forced to stifle your usage and thinking to fit the most banal acceptable-seeming behavior, just in case.

      Maybe the problem was using automation without the API? You can do that freely with local software, using tools that click buttons, and it's completely fine; but with a SaaS, they let you, then ban you.

    • ta9888 hours ago
      There will always be the "ones" that come with their victim blaming...
      • mikkupikku8 hours ago
        It's not "victim blaming" to point out that we lack sufficient information to really know who the victim even is, or if there's one at all. Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

        (My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow. It reminded me of the faux outrage some people sometimes use to distract people from something else.)

        • ffsm88 hours ago
          Skip to the end of the article.

          He says himself that this is a guess and provides the "missing" information if you are actually interested in it.

          • mikkupikku7 hours ago
            I read it, and it's not enough to make a judgement either way. For all we know none of this had anything to do with his ban and he was banned for something he did the day before. There's no way for third parties to be sure of anything in this kind of situation, where one party shares only the information they wish and the other side stays silent as a matter of default corporate policy.

            I am not saying that the author was in the wrong and deserved to be banned. I'm saying that neither I nor you can know for sure.

            • exe347 hours ago
              we don't know your true motivations for making this series of posts and doubling down - and yet we give you the benefit of the doubt.
              • mikkupikku7 hours ago
                Asserting that somebody is "victim blaming" isn't giving them the benefit of the doubt, and in the context of a scenario where few if any relevant facts are known, it reveals a very credulous mindset.
  • pavel_lishin9 hours ago
    They don't actually know this is why they were banned:

    > My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

    > Or I don't know. This is all just a guess from me.

    And no response from support.

  • OsrsNeedsf2P7 hours ago
    I had my Claude Code account banned a few months ago. Contacted support and heard nothing. Registered a new account and been doing the same thing ever since - no issues.
    • NewJazz7 hours ago
      Did you have to use a different phone number? Last time I tried using Claude they wouldn't accept my jmp.chat number.
      • genewitch3 hours ago
        nothing makes me more wary of a company than one that doesn't let me use my 20-year-old VoIP number for SMS. Twitter, instagram (probably FB, if they ever do a "SMS 2fa" or whatever for me i imagine i'll lose my account forever), and a few others i can't think of offhand right now are all like this.

        i've had the same phone numbers via this same VoIP company for ~20 years (2007ish). for these data hoovering companies to not understand that i'm not a scammer suggests to me that it's all smoke and mirrors, held together with baling wire, and i sure do hope they enjoy their yachts.

  • kuonan hour ago
    Claude started to get "wonky" about a month ago. It refused to use instruction files I generated with a tool I wrote. My account was not banned, but many of the things I usually asked for would just not produce any real result. Claude was working but ignoring some commands. I finally canceled my subscription and am trying other providers.
  • preinheimer9 hours ago
    > AI moderation is currently a "black box" that prioritizes safety over accuracy to an extreme degree.

    I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.

    • munk-a8 hours ago
      You say that - and yet it has successfully guarded Elon from any of those pesky truths that might harm his fervently held beliefs. You just forgot to consider that Grok is a tool that prioritizes Elon's emotional safety over all other safeties.
      • exe347 hours ago
        doesn't he keep having to lobotomize it for lurching to the left every time it gets updated with new facts?
  • xtracto5 hours ago
    That's why we should strive to use and optimize local LLMs.

    Or better yet, we should set up something that lets people share part of their local GPU capacity (like SETI@home) for a distributed LLM that cannot be censored, and somehow be compensated when it's used for inference

    • kerblang5 hours ago
      The widespread agreement here seems to be that the author is lying and deserves the ban.

      Which actually bolsters your argument.

      Usually people get a lot more sympathy when Massive Powerful Tech Company cuts them off without warning and they complain on HN.

      • direwolf204 hours ago
        I don't see any such agreement here, and your comment is very rude toward the author.
    • plagiarist5 hours ago
      Yeah we really have to strive not to rely on these corporations because they absolutely will not do customer support or actually review account closures. This article is also mentioning I assume Google, has control over a lot more than just AI.
  • thomasikzelf5 hours ago
    I was also banned from Claude. I created an account and sent a single prompt: "Hello, how are you?". After that I was banned. An automated system flagged me as doing something against the ToS.
  • tlogan4 hours ago
    Can someone explain what he was actually doing here?

    Was the issue that he was reselling these Claude.md files, or that he was selling project setup or creation services to his clients?

    Or maybe all scaffolding activity (back and forth) looked like automated usage?

    • genewitch3 hours ago
      if possible, can you quote the part of their TOS/TOU that says i can't use something like aider? (aider is the only one i know, i'm not promoting it)
      • adastra229 minutes ago
        You can, with an API key.
    • measurablefunc3 hours ago
      Only people who work at Anthropic know why the account was flagged & banned & they will never tell you.
  • nojs2 hours ago
    I've noticed an uptick in

        API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"},
    
    recently, for perfectly innocuous tasks. There's no information given about the cause, so it's very frustrating. At first I thought it was a false positive for copyright issues, since it happened when I was translating code to another language. But now it's happening for all kinds of random prompts, so I have no idea.

    According to Claude:

        I don't have visibility into exactly what triggered the content filter - it was likely a false positive. The code I'm writing (pinyin/Chinese/English mode detection for a language learning search feature) is completely benign.
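
    For anyone else hitting this: the filter surfaces as an ordinary 400, so you can at least catch and log the prompts that trip it. A minimal sketch assuming the `anthropic` Python SDK (model id is a placeholder):

        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        def ask(prompt):
            try:
                msg = client.messages.create(
                    model="claude-sonnet-4-5",  # placeholder model id
                    max_tokens=2048,
                    messages=[{"role": "user", "content": prompt}],
                )
                return msg.content[0].text
            except anthropic.BadRequestError as e:
                # "Output blocked by content filtering policy" arrives here as a 400
                print(f"filtered: {prompt[:80]!r} -> {e}")
                return None

    At least that way you accumulate a list of the false positives instead of losing them.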
  • elevation3 hours ago
    I can't wait to be able to run this kind of software locally, on my own dime.

    But I've seen orgs bite the bullet in the last 18 months and what they deployed is miles behind what Claude Code can do today. When the "Moore's Law" curve for LLM capability improvements flattens out, it will be a better time to lock into a locally hosted solution.

  • ziml777 hours ago
    Why is the author so confused about the use of the word "organization"? Every account in Claude is part of an organization even if it's an organization of one. It's just the way they have accounts structured. And it's not like they hide this fact. It shows you your organization ID right on your account page. I'm also pretty sure I've seen the term used when performing other account-related actions.
  • jordemort8 hours ago
    Forget the ethical or environmental concerns, I don't want to mess with LLMs because it seems like everyone who goes heavy on them ends up sounding like they're on the verge of cracking up.
  • writeslowly8 hours ago
    I've triggered similar conversation level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this, it was just an experiment). Which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injections and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or coding subscription.
  • wewewedxfgdf6 hours ago
    The future (the PRESENT):

    You are only allowed to program computers with the permission of mega corporations.

    When Claude/ChatGPT/Gemini have banned you, you must leave the industry.

    When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If one has, you will be denied permission to program - banned by one, banned by all.

  • kordlessagain6 hours ago
    > My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

    Is it me or is this word salad?

    • afandian6 hours ago
      It's deliberately not straightforward. Just like the joke about Americans being shoutier than Brits. But it is meaningful.

      I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.

    • infermore6 hours ago
      it's you
  • DaveParkCity2 hours ago
    The news is not that they turned off this account. The news is that this user understands very little about the nature of zero sum context mathematics. The mentioned Claude.md is a totally useless mess. Anthropic is just saving themselves from the token waste of this strategy on a fixed billing rate plan.

    If the OP really wants to waste tokens like this, they should use a metered API so they are the one paying for the ineffectiveness, not Anthropic.

    (Posted by someone who has Claude Max and yet also uses $1500+ a month of metered rate Claude in Kilo Code)

  • SOLAR_FIELDS4 hours ago
    I have also been a bit paranoid about this in terms of using Claude itself to decompile/deobfuscate Claude code in order to patch it to create the user experience I need. Looks like I’ll be using other tools to do that from now on.
  • onraglanroad8 hours ago
    So you have two AIs. Let's call them Claude and Hal. Whenever Claude gets something wrong, Hal is shown what went wrong and asked to rewrite the claude.md prompt to get Claude to do it right. Eventually Hal starts shouting at Claude.

    Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.

    (Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)
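
    You can see the selection bias with a toy simulation (nothing here is anyone's real setup, it's all made up):

        import random

        reviewer_context = []  # what "Hal" accumulates across the run

        for task in range(100):
            succeeded = random.random() < 0.9  # "Claude" is actually fine 90% of the time
            if succeeded:
                continue  # successes never reach Hal
            reviewer_context.append(f"task {task}: FAILED")

        # Hal's entire worldview: ~10 entries, 100% of them failures.
        print(len(reviewer_context), "items, all failures")

    From inside that context, "Claude fails everything" is the only conclusion the data supports.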

    • staticman28 hours ago
      I don't think it's inevitable; often the AI will just keep looping again and again. It can happily loop forever without frustration.
      • wvenable2 minutes ago
        It doesn't loop though -- it has continuously updating context -- and if that context continues to head one direction it will eventually break down.

        My own personal experience with LLMs is that after enough context they just become useless -- starting to make stupid mistakes that they successfully avoided earlier.

    • gpm8 hours ago
      I assume old failures aren't kept in the context window at all, for the simple reason that the context window isn't that big.
  • iamthejuan3 hours ago
    I was banned just for trying out Claude AI chat for the first time a few months ago. I emailed them and got my account access restored.
  • ipaddr9 hours ago
    You are lucky they refunded you. Imagine they didn't ban you and you continued to pay 220 a month.

    I once tried Claude: made a new account and asked it to create a sample program, and it refused. I asked it to create a simple game; it refused. I asked it to create anything; it refused.

    For playing around just go local and write your own multi agent wrapper. Much more fun and it opens many more possibilities with uncensored llms. Things will take longer but you'll end up at the same place.. with a mostly working piece of code you never want to look at.

    • bee_rider9 hours ago
      LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?
      • causalmodels8 hours ago
        Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.
      • exe347 hours ago
        Claude code with opus is a completely different creature from aider with qwen on a 3090.

        The latter writes code. the former solves problems with code, and keeps growing the codebase with new features. (until I lose control of the complexity and each subsequent call uses up more and more tokens)

    • joshribakoff8 hours ago
      Anthropic is lucky their credit card processor has not cut them off due to excessive disputes stemming from their non-existent support.
  • syntaxing8 hours ago
    While it sucks, I had great results replacing Sonnet 4.5 with GLM 4.7 in Claude Code. Vastly more affordable too ($3 a month for the Pro equivalent). Can't say much about Opus though. Claude Code forces me to put a credit card on file so they can charge for overages. I don't mind that they charge me; I do mind that there's no apparent spending limit, and it's hard to tell how many "inclusive" Opus tokens I have left.
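
    For reference, the swap is just environment variables pointing Claude Code at GLM's Anthropic-compatible endpoint. A sketch; the URL and variable names are from memory of Z.ai's docs, so treat them as assumptions:

        import os, subprocess

        env = dict(
            os.environ,
            ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic",  # assumed endpoint
            ANTHROPIC_AUTH_TOKEN=os.environ["GLM_API_KEY"],       # your Z.ai key
        )
        subprocess.run(["claude"], env=env)  # launch Claude Code against the GLM backend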
    • enraged_camel8 hours ago
      Having used both Opus 4.5 and GLM 4.7, I think the former is at least eight months ahead of the latter, if not much more.
  • tomwphillips7 hours ago
    The post is light on details. I'd guess the author ended up hammering the API and they decided it was abuse.

    I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.

  • maz293 hours ago
    I've been using Claude Code with AWS Bedrock as the provider. Setup guide if you're interested: https://code.claude.com/docs/en/amazon-bedrock
  • cmxchan hour ago
    Not that it's the same thing, but how realistic is it to set up a local model for coding?

    Granted, it’s not going to be Claude scale but it’d be nice to do some of it locally.

  • tobyhinloopen9 hours ago
    So you were generating and evaluating the performance of your CLAUDE.md files? And you got banned for it?
    • Aurornis8 hours ago
      I think it's more likely that their account was disabled for other reasons, but they blamed the last thing they were doing before the account was closed.
      • pocksuppet8 hours ago
        And why wouldn't you? It's the only information available to you.
    • alistairSH9 hours ago
      It reads like he had a circular prompt process running, where multiple instances of Claude were solving problems, feeding results to each other, and possibly updating each other's control files?
      • Hackbraten7 hours ago
        They were trying to optimize a CLAUDE.md file which belonged to a project template. The outer Claude instance iterated on the file. To test the result, the human in the loop instantiated a new project from the template, launched an inner Claude instance along with the new project, assessed whether inner Claude worked as expected with the CLAUDE.md in the freshly generated project. They then gave the feedback back to outer Claude.

        So, no circular prompt feeding at all. Just a normal iterate-test-repeat loop that happened to involve two agents.
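
        In sketch form (the paths and the `claude -p` print-mode flag are my assumptions about their setup, not confirmed details):

            import shutil, subprocess, tempfile

            def test_template(template_dir):
                scratch = tempfile.mkdtemp()
                shutil.copytree(template_dir, scratch, dirs_exist_ok=True)  # instantiate the template
                # Inner Claude: one non-interactive run inside the fresh project
                result = subprocess.run(
                    ["claude", "-p", "implement the TODOs, following CLAUDE.md"],
                    cwd=scratch, capture_output=True, text=True,
                )
                return result.stdout  # the human assesses this output...

            # ...and feeds the assessment to the *outer* Claude session, which
            # edits template_dir/CLAUDE.md, and the loop repeats.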

      • epolanski9 hours ago
        What would be bad in that?

        Writing the best possible specs for these agents seems the most productive goal they could achieve.

        • NitpickLawyer8 hours ago
          I think the idea is fine, but what might end up happening is that one agent gets unhinged and "asks" another agent to do more and more crazy stuff, and they get in a loop where everything gets flagged. Remember the Amazon pricing bots a while back, each configured to price a book slightly above the other's, that drove the book's price past $1M? Kinda like that, but with prompts.
          • epolanski8 hours ago
            I still don't get it. Make your models better for this far-fetched case; don't ban users for a legitimate use case.
        • alistairSH7 hours ago
          Nothing necessarily or obviously bad about it, just trying to think through what went wrong.
      • andrelaszlo8 hours ago
        Could anyone explain to me what the problem is with this? I thought I was fairly up to date on these things, but this was a surprise to me. I see the sibling comment getting downvoted but I promise I'm asking this in good faith, even if it might seem like a silly question (?) for some reason.
        • alistairSH7 hours ago
          From what I'm reading in other comments, the problem was that Claude1 got increasingly "frustrated" with Claude2's inability to do whatever the human was asking, and started breaking its own rules (using ALL CAPS).

          Sort of like MS's old chatbot that turned into a Nazi overnight, but this time with one agent simply getting tired of the other agent's lack of progress (for some definition of progress - I'm still not entirely sure what the author was feeding into Claude1 alongside errors from Claude2).

  • daft_pink8 hours ago
    As a Claude Max user who generally prefers Claude, I will say that Gemini is working pretty well right now, and I'm considering setting up a Google Workspace account so I can get Gemini with decent privacy.
    • deauxan hour ago
      Google Workspace accounts don't give access to Gemini for coding, unless you get Ultra for $200/month.
  • miohtama7 hours ago
    Luckily there is little vendor lock in and likes of https://opencode.ai/ are picking up the slack
  • cat_plus_plusan hour ago
    That's why I run a local Qwen3-Next model on an NVIDIA Thor dev kit (Apple Silicon and DGX Spark are other options but they are even more expensive for 128GB VRAM)
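
    Client-side it looks like any other endpoint. A sketch assuming the model is served behind an OpenAI-compatible server (vLLM, llama.cpp server, etc.) on localhost; the model name and port are assumptions:

        import requests

        resp = requests.post(
            "http://localhost:8000/v1/chat/completions",
            json={
                "model": "qwen3-next",  # whatever name your server registered
                "messages": [{"role": "user", "content": "Refactor this function: ..."}],
            },
            timeout=300,
        )
        print(resp.json()["choices"][0]["message"]["content"])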
  • the_gipsy3 hours ago
    > Or I don't know. This is all just a guess from me.
  • rbren7 hours ago
    This is why it's worth investing in a model-agnostic setup. Don't tie yourself into a single model provider!

    OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic

  • ProofHouse2 hours ago
    Scamthropic at it again
  • zmmmmm7 hours ago
    is there a benefit to using a separate claude instance to update the CLAUDE.md of the first? I always want to leverage the full context of the situation to help describe what went wrong, so doing it "inline" makes more sense to me.
  • prmoustache7 hours ago
    It should be mentioned in the title that this is just speculation.
  • quantum_state8 hours ago
    Is it time to move to open source and run models locally on a DGX Spark?
    • blindriver8 hours ago
      Every single open source model I've used is nowhere close to as good as the big AI companies. They are about 2 years behind or more and unreliable. I'm using the large parameters ones on a 512GB Mac Studio and the results are still poor.
    • immibis7 hours ago
      [dead]
  • dev_l1x_be6 hours ago
    We need local models asap.
  • measurablefunc3 hours ago
    This is very cool. I looked at the Claude.md he was generating and it is basically all of Claude's failure modes in one file. I can think of a few reasons why Anthropic would not want this information out in the open or for someone to systematically collate all the data into one file.
    • genewitch2 hours ago
      i read the related parts of the linked file in the repo, and it took me a while to find your comment here again to reply to. are you saying the file collects claude's failure modes at "coding" webapps or whatever OP was doing? i originally thought you might have meant something like a jailbreak. but having read it, i assume you meant the former, as we both read the same thing and it seemed like a series of admonitions to the LLM, written by the LLM (with some spice added by OP? like "YOU ARE WRONG"), and i couldn't find anything that would warrant a ban, you know?
  • aussieguy12344 hours ago
    In Open WebUI I have different system prompts (startup advisor, marketing expert, expert software engineer etc) defined and I use Claude via OpenRouter.

    Is this going to get me banned? If so i'll switch to a different non-anthropic model.
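
    For context, that setup reduces to ordinary API traffic like this. A sketch against OpenRouter's OpenAI-compatible endpoint; the model id and prompts are placeholders:

        import os, requests

        SYSTEM_PROMPTS = {
            "startup_advisor": "You are a pragmatic startup advisor.",
            "swe": "You are an expert software engineer.",
        }

        def ask(role, prompt):
            r = requests.post(
                "https://openrouter.ai/api/v1/chat/completions",
                headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
                json={
                    "model": "anthropic/claude-sonnet-4.5",  # placeholder model id
                    "messages": [
                        {"role": "system", "content": SYSTEM_PROMPTS[role]},
                        {"role": "user", "content": prompt},
                    ],
                },
                timeout=120,
            )
            r.raise_for_status()
            return r.json()["choices"][0]["message"]["content"]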

  • kmeisthax8 hours ago
    Another instance of "Risk Department Maoism".

    If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."

    Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.

    Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.

  • kosolam8 hours ago
    Hmm so how are the alternatives? Just in case I will get banned for nothing as well. I’m riding cc with opus all day long these days.
    • measurablefunc3 hours ago
      I'm using Google's antigravity & it works fine for my use cases.
  • blindriver8 hours ago
    There needs to be a law that prevents companies from simply banning you, especially when it's an important company. There should be an explanation and they shouldn't be allowed to hide behind some veil. There should be a real process with real humans that allow for appeals etc instead of scripts and bots and automated replies.
  • languagehacker9 hours ago
    Thinking £220 is a lot for a high-limit Claude account is the kind of thinking that really takes for granted the amount of compute power being used by these services. That's WITH the "spending other people's money" discount that most new companies start folks off with. The fact that so many are painfully ignorant of the true externalities of these technologies and their real price never ceases to amaze me.
    • rtkwe8 hours ago
      That's the problem with all the LLM-based AIs: the cost to run them is huge compared to what people actually feel they're worth based on what they're able to do, and the gap between the two seems pretty large, IMO.
  • oasisbob9 hours ago
    > Like a lot of my peers I was using claude code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code I can now try and validate at a very fast pace. Run it inside tmux and let it do the work while I went on to do something else

    This blog post could have been a tweet.

    I'm so so so tired of reading this style of writing.

    • LPisGood9 hours ago
      What about the style are you bothered by? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?
    • red_hare9 hours ago
      Alas, the 2016 tweet is the 2026 blog post prompt.
  • f311a8 hours ago
    Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

    What are you gonna do with the results that are usually slop?

    • mikkupikku6 hours ago
      If the slop passes my tests, then I'm going to use it for precisely the role that motivated the creation of it in the first place. If the slop is functional then I don't care that it's slop.

      I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.

  • lifetimerubyist9 hours ago
    bow down to our new overlords - don't like it? banned, with no recourse - enjoy getting left behind, welcome to the future old man
    • properbrew9 hours ago
      I didn't even get to send 1 prompt to Claude and my "account has been disabled after an automatic review of your recent activities" back in 2024, still blocked.

      Even filled in the appeal form, never got anything back.

      Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.

      • codazoda8 hours ago
        Since you were forced, are you getting good results from them?

        I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.

        Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.

        Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.

        • properbrew8 hours ago
          For writing decent code, absolutely not, maybe a simple bash script or the obscure flags to a command that I only need to run once and couldn't be bothered to google or look through the man page etc. I'm using smaller models for less coding related stuff.

          Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI. I don't think you're ever going to see that level of power locally (I very much hope to be wrong about that). I will move over to using a cloud provider with a large gpt-oss model or whatever is the current leader at the time if/when my OpenAI account gets blocked for no reason.

          The M-series chips in Macs are crazy, if you have the available memory you can do some cool things with some models, just don't be expecting to one shot a complete web app etc.

      • falloutx8 hours ago
        you are never gonna hear back from Anthropic; they don't have any support. They're a company that feels like their model is AGI now, so they don't need humans, except when it comes to paying.
      • anothereng9 hours ago
        just use a different email or something
        • ggoo9 hours ago
          This happened to me too, you need a phone number unfortunately
          • direwolf204 hours ago
            You can get one for a few bucks
      • immibis7 hours ago
        [dead]
    • lazyfanatic429 hours ago
      this has been true for a long long time, there is a rarely any recourse against any technology company, most of them don't even have Support anymore.
  • heliumtera9 hours ago
    Well, at least they didn't email the press and call the FBI on you?
  • lukashahnart8 hours ago
    > I got my €220 back (ouch that's a lot of money for this kind of service, thanks capitalism).

    I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.

    Isn't that the point of capitalism?

    • exe347 hours ago
      that's not what capitalism means. you might be thinking of a free market.
  • justkysan hour ago
    [dead]
  • wetpaws9 hours ago
    [dead]
  • jsksdkldld9 hours ago
    [dead]
  • moomoo119 hours ago
    Just stop using Anthropic. Claude Code is crap because they keep putting in dumb limits for Opus.
  • jitl8 hours ago
    I always take these sorts of "oh no, I was banned while doing something innocent" posts with a large helping of salt. At least with the ones where someone is complaining about a ban from Stripe, it usually turns out they were doing something that either violates the terms of service or is actually fraudulent. Nonetheless, it's quite frustrating to deal with either way.
    • ryandrake8 hours ago
      It would at least be nice to know exactly what you did wrong. This whole "You did something wrong. Please read our 200 page Terms of Service doc and guess which one you violated." crap is not helpful and doesn't give me (as an unrelated third party) any confidence that I won't be the next person to step on a land mine.
  • rsync8 hours ago
    You mean the throwaway pseudonym you signed up with was banned, right?

    right ?

  • red_hare9 hours ago
    This feels... reasonable? You're in their shop (Opus 4.5) and they can kick you out without cause.

    But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.

    • mrweasel9 hours ago
      Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and explained whatever they did wrong, but I can't say I'm surprised that an AI company wouldn't have any real support.

      I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.

    • viccis9 hours ago
      Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.

      Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."

      • direwolf204 hours ago
        I think there's an xkcd alt text about that: https://www.explainxkcd.com/wiki/index.php/1357:_Free_Speech

        "I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express."