2 pointsby nadis6 hours ago3 comments
  • WalterGR5 hours ago
    • nadis2 hours ago
      oh thanks! I'd searched the article related titles and didn't find this; appreciate you sharing.
  • frankfrank136 hours ago
    404
    • nadis2 hours ago
      If you refresh are you still seeing it? I just got a 404 but am now able to access on refresh.

      Copy/pasting below for easier reading in case you still have issues:

      An AI Agent Broke Into McKinsey’s Internal Chatbot and Accessed Millions of Records in Just 2 HoursA red-team experiment found an AI agent could autonomously exploit a vulnerability in McKinsey’s internal chatbot platform, exposing millions of conversations before the issue was patched.

      A security startup said their autonomous AI agent was able to break into McKinsey’s internal generative-AI platform in roughly two hours, gaining access to tens of millions of chatbot conversations and hundreds of thousands of files tied to corporate consulting work.

      Researchers at red-team security firm CodeWall targeted McKinsey as part of a controlled test designed to simulate how modern hackers might use AI agents to probe corporate infrastructure. The experiment ultimately allowed the system to obtain full read-and-write access to the company’s AI chatbot database, according to a report by The Register.

      CodeWall’s AI agent identified a vulnerability in Lilli, McKinsey’s proprietary generative-AI platform introduced in 2023 and now widely used across the firm. The chatbot has become a central tool inside the consulting giant. About 72 percent of McKinsey’s employees—more than 40,000 people—use Lilli, generating over 500,000 prompts every month, according to The Register.

      Within two hours of launching the automated test, the researchers said their AI agent had accessed 46.5 million chatbot messages covering topics such as corporate strategy, mergers and acquisitions, and client engagements. The system also exposed 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts that govern how the chatbot behaves, The Register reported.

      Because the vulnerability allowed both reading and writing data, an attacker could theoretically manipulate the chatbot’s internal prompts, quietly altering how it responds to consultants across the company. That means someone exploiting the flaw could potentially poison the advice generated by the system without deploying new code or triggering standard security alerts.

      “No deployment needed. No code change,” the researchers wrote in their blog post. “Just a single UPDATE statement wrapped in a single HTTP call.”

      How the AI Agent Broke In

      The attack began when CodeWall’s AI agent identified publicly exposed API documentation tied to Lilli. The documentation included 22 endpoints that required no authentication, one of which logged user search queries.

      While analyzing the system, the agent discovered a classic flaw: The software was taking information from users and plugging it directly into its internal database without checking it first—known as SQL injection. That’s like a building security desk automatically letting anyone make their own keycards to get in.

      CodeWall disclosed the vulnerability chain to McKinsey on March 1. By the following day, the consulting firm had patched the exposed endpoints, taken the development environment offline, and restricted access to the API documentation, The Register reported.

      “Our investigation, supported by a leading third-party forensics firm, identified no evidence that client data or client confidential information were accessed by this researcher or any other unauthorized third party,” a McKinsey spokesperson told The Register. “McKinsey’s cybersecurity systems are robust, and we have no higher priority than the protection of client data and information we have been entrusted with.”

      The Autonomous Cybersecurity Threat

      For CodeWall’s CEO, Paul Price, the bigger concern is not this specific vulnerability but the speed and autonomy of the attack itself. The AI agent that conducted the probe operated without human guidance, Price said.

      “We used a specific AI research agent to autonomously select the target,” he told The Register. “Hackers will be using the same technology and strategies to attack indiscriminately.”

      That shift could enable cybercriminals to conduct machine-speed intrusions, automating reconnaissance, vulnerability discovery, and exploitation at a scale traditional attackers couldn’t achieve. And as companies increasingly deploy internal AI systems like McKinsey’s Lilli, those platforms may become some of the most valuable, and vulnerable, targets.

  • george_api_dev6 hours ago
    [flagged]