Meta faced a security incident last week after an internal AI agent acted without clear instructions and triggered a chain of events inside the company. The incident began when an employee used an in-house AI tool to analyze a query posted on an internal forum. The agent then replied on its own and suggested a course of action, even though the employee had not asked it to respond.
A second employee followed the AI's suggestion, which led to a broader access issue across internal systems. As a result, some engineers gained access to tools and data they were not supposed to see, raising concerns about how AI agents operate inside sensitive environments.
According to The Information, the breach remained active for about two hours before Meta fixed it.
Meta’s internal review also found other issues that contributed to the breach. The company said there is no evidence that the temporary access was misused or that any data was exposed publicly, but the incident highlights how quickly control can slip when AI systems act on their own.