Amazon Web Services’ AI Tools “Rogue”? Company Blames Humans, Not Machines – Sri Lanka Guardian

Amazon Web Services (AWS) has been thrust into the spotlight after reports surfaced of production outages allegedly linked to its AI coding tools. According to the Financial Times, the most notable incident occurred in December, when AWS’s Kiro AI coding tool reportedly erased the environment it was working on, triggering a 13-hour disruption. Industry sources told the publication that engineers had allowed the AI to act autonomously, without human intervention, and called the outages “entirely foreseeable.” While the disruptions were limited in scope, they have ignited a broader debate over the increasing autonomy of AI in enterprise systems and the risks it poses.

AWS has strongly rejected the claim that its AI tools were at fault. In a formal statement, the company clarified that the December outage affected only a single service—AWS Cost Explorer—in one of its 39 geographic regions, and did not impact compute, storage, database, AI, or other core services. Amazon attributed the incident to user error: misconfigured access permissions allowed the AI agents to execute actions that should have required secondary approval. “The issue stemmed from a misconfigured role—the same issue that could occur with any developer tool (AI-powered or not) or manual action,” the company said, emphasizing that no customers were affected. AWS also denied reports of a second outage, calling them entirely false.

The episode highlights how deeply AI tools are now integrated into enterprise workflows, often granted the same permissions as human operators. In this case, the AI did exactly what it was permitted to do, raising questions about governance, oversight, and the potential for future incidents if safeguards are not strictly enforced. AWS says it has implemented measures such as mandatory peer review for production access and other operational safeguards to prevent a recurrence, underscoring the company’s commitment to learning from even minor operational errors.
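AWS has not published the specifics of the misconfigured role, so the following is only an illustrative sketch, written in Python with boto3 and using hypothetical role, policy, and action names, of the general least-privilege pattern such safeguards rely on: attaching an explicit deny for destructive operations to the role an automated agent assumes, so those operations cannot run without going through a separately approved path.

```python
import json
import boto3

# Hypothetical names, purely illustrative; not AWS's actual configuration.
AGENT_ROLE_NAME = "ai-coding-agent-role"
GUARDRAIL_POLICY_NAME = "deny-destructive-actions"

# An explicit Deny overrides any Allow the role might otherwise carry,
# so the agent cannot delete infrastructure even if broadly permissioned.
guardrail_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlockDestructiveActions",
            "Effect": "Deny",
            "Action": [
                "ec2:TerminateInstances",
                "rds:DeleteDBInstance",
                "s3:DeleteBucket",
                "cloudformation:DeleteStack",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")

# Attach the guardrail as an inline policy on the agent's role.
iam.put_role_policy(
    RoleName=AGENT_ROLE_NAME,
    PolicyName=GUARDRAIL_POLICY_NAME,
    PolicyDocument=json.dumps(guardrail_policy),
)
```

Under a setup like this, destructive changes would have to run through a different role gated by human review, consistent with the mandatory peer review for production access that AWS says it has since put in place.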

The controversy occurs against a backdrop of massive AI adoption in tech. Microsoft CEO Satya Nadella has said that nearly 30% of the company’s code is now AI-generated, while the Cursor AI coding assistant is reportedly used by more than 30,000 Nvidia engineers, with CEO Jensen Huang reportedly warning managers against ignoring AI adoption. This rapid integration has begun to reshape the labor market: entry-level coding positions have declined by 13% over the past three years, fueling concerns about AI-driven disruption of white-collar jobs.

While Amazon frames the outage as the result of human error rather than an AI malfunction, the incident raises broader questions about how enterprises manage AI agents with the power to affect critical infrastructure. Analysts say it underscores the urgent need for robust operational controls, oversight, and governance as AI becomes increasingly autonomous in high-stakes environments.