AI Agent Hacked Bain's Enterprise Security System in 18 Minutes Flat

AI penetration testing company CodeWall says its autonomous agent broke into Bain & Company's internal AI platform — called Pyxis — in just 18 minutes. The breach followed a similar attack on McKinsey's internal AI chatbot in March, in which CodeWall's agent accessed 46.5 million chat messages and 728,000 files within two hours. Two of the world's largest management consultancies have now had their internal AI systems compromised by an AI agent in a matter of weeks.

How the Bain Hack Worked

According to CodeWall's writeup, the agent gained initial access to Pyxis by finding credentials embedded in publicly accessible web code, giving it a foothold on the platform within 18 minutes. CodeWall framed the access method as a basic operational security failure: hardcoded or exposed credentials are a well-known vulnerability class. What stands out is the speed at which the agent identified and exploited them, and that speed is why AI-powered pen testing changes the risk calculus for enterprise security teams.
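CodeWall has not published its tooling, so the sketch below is only an illustration of the vulnerability class, not a reconstruction of the attack. It fetches a publicly accessible JavaScript bundle and flags strings that match common secret formats; the URL and regex rules are hypothetical placeholders, and real scanners use far larger rule sets. Point anything like this only at assets you are authorized to test.

```python
import re
import urllib.request

# Illustrative patterns for credentials that end up in client-side code.
CREDENTIAL_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
    "bearer_token": re.compile(
        r"(?i)authorization['\"]?\s*[:=]\s*['\"]Bearer\s+[A-Za-z0-9._\-]{20,}['\"]"
    ),
}

def scan_js_for_credentials(url: str) -> list[tuple[str, str]]:
    """Fetch a public script and flag strings that look like embedded secrets."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    findings = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.finditer(body):
            findings.append((name, match.group(0)[:40]))  # truncate for safe logging
    return findings

if __name__ == "__main__":
    # Hypothetical target; substitute an asset you are authorized to test.
    for kind, snippet in scan_js_for_credentials("https://example.com/static/app.js"):
        print(f"[!] possible {kind}: {snippet}...")
```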

The McKinsey Breach for Context

The March attack on McKinsey's Lilli platform was more extensive. CodeWall's agent found 22 unauthenticated API endpoints in publicly exposed documentation, discovered SQL injection vulnerabilities in the API's error handling, and within two hours had read-write access to the entire production database. The haul included 46.5 million internal chat messages covering strategy, M&A discussions, and client engagements, as well as 57,000 user accounts and 95 system prompts.
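Lilli's code has not been published, so the following is a generic sketch of the SQL injection class CodeWall describes, not the platform's actual queries. Assuming a Python API backed by a SQL database, the vulnerable version interpolates user input directly into the query string, which both lets attackers rewrite the query and lets crafted input trigger database errors whose messages leak schema details back through the API's error handler. The safe version binds the input as a parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

def lookup_vulnerable(user_id: str):
    # String interpolation: attacker-controlled input becomes SQL.
    query = f"SELECT email FROM users WHERE id = {user_id}"
    return conn.execute(query).fetchall()

def lookup_safe(user_id: str):
    # Parameterized query: the driver treats user_id strictly as data.
    return conn.execute("SELECT email FROM users WHERE id = ?", (user_id,)).fetchall()

print(lookup_vulnerable("1 OR 1=1"))  # returns every row in the table
print(lookup_safe("1 OR 1=1"))        # returns nothing: no row has that literal id
```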

In both cases, the vulnerabilities exploited were not exotic zero-days — they were standard security failures that a diligent human tester would eventually find. The difference is that an AI agent finds them continuously, at machine speed, without fatigue.

The Pattern: Enterprise AI as Attack Surface

Both breaches point to a structural problem. Companies are rapidly deploying internal AI tools — chatbots, knowledge bases, competitive intelligence platforms — without applying the same security rigor they would to a public-facing application. Because these tools are "internal," teams often assume restricted access is security. It is not.

Every internal AI platform is a new attack surface, often built on APIs, vector databases, and LLM infrastructure that security teams have limited experience auditing. CodeWall's two successful breaches in consecutive months suggest the problem is widespread.
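What "restricted access is not security" means in practice: every request to an internal service should still be authenticated, because network position (a VPN or private subnet) is not an identity check. A minimal sketch follows, assuming a per-client token stored outside the codebase; the handler and environment variable names are illustrative, not any vendor's API.

```python
import hmac
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# In production this would come from a secrets manager, never from source code.
API_TOKEN = os.environ.get("INTERNAL_API_TOKEN", "")

class InternalAIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Authenticate every request, even though the service is "internal".
        supplied = self.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        if not API_TOKEN or not hmac.compare_digest(supplied, API_TOKEN):
            self.send_response(401)
            self.end_headers()
            self.wfile.write(b"unauthorized")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok: authenticated request")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), InternalAIHandler).serve_forever()
```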

The Bottom Line

The Bain and McKinsey breaches are a warning for every enterprise that has deployed an internal AI tool in the past 18 months: if you have not had it pen tested by someone who knows how to probe AI-specific vulnerabilities, you likely have exposure. AI agents that can autonomously find and chain vulnerabilities are not hypothetical — they are being used right now, and the two highest-profile targets so far happen to be companies that advise other companies on strategy and risk.