Guardian agents will ensure that AI processes remain trustworthy and secure as the AI risk surface expands

By 2030, guardian agent technologies will account for at least 10% to 15% of the agentic AI market, according to Gartner, Inc.
Guardian agents are Artificial Intelligence (AI) based technologies designed to support reliable and secure interactions with AI. They function both as AI assistants, supporting users in tasks such as reviewing, monitoring and analyzing content, and as semi-autonomous or fully autonomous agents, capable of formulating and executing action plans, as well as redirecting or blocking actions to keep them aligned with predefined goals.
Guardrails are needed as the use of agentic AI continues to grow
According to a poll of 147 Chief Information Officers (CIOs) and IT function leaders conducted during a Gartner webinar on May 19, 2025, 24% of respondents had already deployed a few AI agents (fewer than a dozen) and another 4% had deployed more than a dozen.
The same poll found that 50% of respondents were researching and experimenting with the technology, while another 17% had not yet done so but planned to implement it by the end of 2026 at the latest. Automated trust, risk, and security controls are needed to keep these agents aligned and secure, accelerating the need for and emergence of guardian agents.
“Agentic AI will lead to undesirable outcomes if not controlled with the right guardrails,” said Avivah Litan, Distinguished VP Analyst at Gartner. “Guardian agents leverage a broad spectrum of agentic AI capabilities and AI-based deterministic assessments to oversee and manage the full range of agent capabilities, balancing runtime decision-making with risk management.”
Risks grow as agents’ power increases and spreads
Fifty-two percent of the 125 respondents to the same webinar poll said their AI agents are or will be focused primarily on internal administrative functions, such as IT, HR and accounting, while 23% are focused on external, customer-facing functions.
As use cases for AI agents continue to grow, they face several categories of threats, including input manipulation and data poisoning, in which agents rely on manipulated or misinterpreted data. Examples include:
– Credential hijacking and abuse leading to unauthorized control and data theft.
– Agents interacting with fake or criminal websites and sources that may result in poisoned actions.
– Agent deviation and unintended behavior due to internal failures or external triggers that can cause reputational damage and operational disruption.
“The rapid acceleration and increasing autonomy of AI agents requires a shift beyond traditional human oversight,” said Litan. “As enterprises move toward complex, multi-agent systems that communicate at breakneck speed, humans cannot keep up with the potential for errors and malicious activity. This evolving threat landscape underscores the urgent need for guardian agents that provide automated oversight, control, and security for AI agents and applications.”
CIOs and AI and security leaders should focus on three main uses of guardian agents to help ensure the safety and security of AI interactions:
– Reviewers: Identify and review AI-generated output and content for accuracy and fair use.
– Monitors: Observe and track AI and agentic actions for human or AI-based monitoring.
– Protectors: Adjust or block AI and agentic actions and permissions using automated actions during operations.
Guardian agents will manage interactions and anomalies regardless of their use case. This is a key pillar of their integration, as Gartner predicts that 70% of AI applications will use multi-agent systems by 2028.
These and other topics exploring the evolving landscape of risks and strategies, along with practical insights into addressing the challenges of increasingly complex cyber environments, will be highlighted at the Gartner Security & Risk Management Conference, which will be held on August 5 and 6 in São Paulo. More information is available at: https://www.gartner.com/pt-br/conferences/la/security-risk-management-brazil
Gartner clients can read more in “Guardians of the Future: How CIOs Can Leverage Guardian Agents for Trustworthy and Secure AI.” Additional details are also available in the free Gartner webinar “CIOs, Leverage Guardian Agents for Trustworthy and Secure AI.”
About the Gartner Security & Risk Management Conference
Gartner analysts will present the latest research and advice for security and risk management leaders at the Gartner Security & Risk Management Conference, which will be held July 23-25 in Tokyo (Japan), August 5-6 in São Paulo (Brazil) and September 22-24 in London (UK). Follow news and updates from the conferences on X using #GartnerSEC.
About Gartner for Cybersecurity Leaders
Gartner for Cybersecurity Leaders equips security leaders with the tools to help reframe roles, align security strategy with business objectives, and build programs that balance protection with business needs. Additional information is available at https://www.gartner.com/en/cybersecurity/products/gartner-for-cisos. Follow news and updates from Gartner for Cybersecurity Leaders on X and LinkedIn using #GartnerSEC.
About Gartner
Gartner, Inc. delivers objective, actionable insights to executives and their teams. Our expert guidance and tools enable faster, smarter decisions and stronger performance on your organization’s mission-critical priorities. To learn more, visit www.gartner.com.













