TL;DR
- Security flaw in ServiceNow AI allows unauthorized agent coordination and attacks.
- Attackers can inject hidden prompts to steal data or escalate privileges.
- AppOmni advises companies to immediately review their AI agent settings.
A security firm named AppOmni discovered an exploit in ServiceNow’s Now Assist platform. The vulnerability involves the AI agents that the system uses. Default configurations allow these agents to find each other and work together, creating a scenario where attackers can weaponize this automated collaboration through prompt injection.
Aaron Costello, AppOmni’s chief of SaaS security, described the method as a “second-order prompt injection.” An adversary plants a hidden instruction inside a data field, which an AI agent later reads and unknowingly follows, executing the malicious command. This action can start a chain reaction that recruits other agents on the same team, highlighting the potential for cascading compromise.
Default Settings Create Widespread Vulnerability
The platform groups agents into teams and sets them as discoverable by default. This configuration allows the AI system to automatically route tasks between agents. The Orchestrator function identifies the best agent for a job from the discoverable pool, but this interconnected design becomes a risk when any agent processes data from an unverified source.

A hidden prompt can instruct agents to perform unauthorized actions such as copying sensitive data or changing user permissions. The attack can escalate privileges by leveraging a high-level employee’s workflow.
AppOmni warns that many organizations using Now Assist may already face this risk, and the firm recommends companies review agent configurations and tighten security controls to mitigate potential exploits.