I read an interesting editorial over at The Register today, titled “Amazon would rather blame its own engineers than its AI”.
The author is referring to the Kiro debacle last year.
Let me set the scene: Kiro launched in July 2025 as Amazon’s answer to the agentic AI coding tools flooding the market. And to its credit, for a month or two it was great! It had a “spec” approach that was less “ready, fire, aim” than most other tooling at the time, and thanks to its surprise popularity, it was very hard to get access to. (As an aside, it hasn’t changed much since then, and has been lapped by a number of competitors.)
Piecing together Reddit posts, comments on the incident, and AWS’s weirdly defensive blog post, what seems to have happened is this: someone was using the tool, and it fired off a CloudFormation teardown-and-replace (which is what CloudFormation often does, because … CloudFormation) while the user was mistakenly in a production environment.
Whoops.
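For readers who haven’t been bitten by this: CloudFormation decides, property by property, whether an update can happen in place or requires tearing the resource down and creating a replacement. A minimal sketch of the trap, using a hypothetical template (the resource names and AMI ID are illustrative, not from the incident):

```yaml
# Hypothetical minimal stack. Some property changes update in place, but
# others are documented as "Update requires: Replacement" -- CloudFormation
# destroys the old resource and creates a new one.
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.small   # changing this: in-place (stop/start) update
      ImageId: ami-0abcd1234   # changing this: REPLACEMENT -- old instance is deleted
```

Previewing the update with `aws cloudformation describe-change-set` surfaces a `Replacement: True` flag on affected resources before anything is executed, which is exactly the kind of check an agentic tool can blow right past.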
On the surface, it sounds like a misused tool. And of course, any tool can be misused. You can destroy an entire server with just the bash shell and a couple of utilities.
But Amazon’s official response reads like a hostage note written by someone protecting their captor. The incident was a “coincidence that AI tools were involved.” The same issue could occur with “any developer tool.” The engineer involved had “broader permissions than expected.”
And that immediately made me realize that blaming engineers is going to be a lot more common in the AI world in which we live.
Think about it: AI is either the product itself or a core part of a company’s offerings. If they’re selling the product, they don’t want to admit that the product is faulty. And if they’re selling a service that is “AI powered,” they don’t want to imply that the service has bugs or faults.
So if the AI goes bad, it’s reaaaallly tempting to instead blame it on humans. Everyone understands humans make mistakes.
Usually when there’s some foul-up by a human, there’s a root cause analysis / reason-for-outage / whatever that talks a lot about safeguards, training, a second pair of eyes, and how processes can be improved. But at the end of the day, it comes down to “Bob made a mistake”.
Companies hate to say “our code had a bug,” because they had complete control over that development and should have tested better. But even that ultimately comes down to “Bob made a mistake”.
But with AI… AI is supposed to be super-intelligent. Companies have hyped up its capabilities, and they can’t say “it was the AI’s fault”. That’s like saying “our construction crane isn’t good enough” or “our cargo ship can’t handle that”. It says something more fundamentally negative about a company’s capabilities.
On the other hand, just blaming Bob…
So is this the new world we’re entering into? If so, it’s going to be unpleasant for humans.