Yes, that’s right: the protocols we humans developed for giving only trusted, reliable people this level of access to infrastructure predate LLMs, and they were a great way to stop this from happening.
However, the AI is here now, and when you give an autonomous agent with known hallucination problems the access to act on your behalf with your IaC on your infra provider, this kind of thing is an inevitability.
Congratulations, you just identified the AI problem.
That’s the only problem?
Seems to be, yes. The AI had the access it needed to do the job it was given, and that access allowed it to cause the problem.
The alternative that would have prevented this issue was to not use AI for this.