Omdia is part of Informa TechTarget

This website is owned and operated by Informa TechTarget, part of a global network that informs, influences and connects the world’s technology buyers and sellers. All copyright resides with them. Informa PLC’s registered office is 5 Howick Place, London SW1P 1WG. Registered in England and Wales. TechTarget, Inc.’s registered office is 275 Grove St. Newton, MA 02466.

navy background image

Mitigating Risks, Maximizing Potential: The Agentic AI Challenge

May 27, 2025 | Eden Zoller

blue black and white image of AI cube

Agentic AI stands as a powerful force that not only automates complex tasks with unprecedented autonomy but also intelligently adapts to changing conditions. Industry expert Eden Zoller shares Omdia's robust framework for defining, understanding, and assessing its true value. 

This blog provides you with insights from the Omdia report series, which identifies the most pressing issues facing agentic AI, and how to address them.

Agentic AI Revolution

 

Agentic AI agents can automate tasks to a high degree, and streamline processes in targeted and adaptive ways, with the potential to increase productivity and efficiency, and contribute to cost savings. But like other advanced, transformative technologies, agentic AI brings significant complexity. Agentic AI, particularly multi-agent systems, can place greater demands on infrastructure than more traditional AI deployments, which adds to complexity and costs. AI agents interact with diverse tools and data sources, heightening integration, security, and privacy challenges. Agentic AI’s capacity for reasoning and independent task execution raises concerns for reliability, transparency, and security risks. Overcoming these challenges is crucial for solution providers and enterprises to fully leverage agentic AI. In a comprehensive report series*, Omdia identifies the most pressing issues facing agentic AI, and how to address them.

Let’s dig deeper into two of the biggest, connected hurdles for agentic AI: reliability issues and security vulnerabilities. AI agents interact with a wide range of touch points (e.g., external APIs and data sources), creating more potential entry points for bad actors to access system infrastructure and sensitive data for malicious purposes. Multi-agent frameworks also amplify attack vectors through their interconnected operations and decision chains. 

AI agents can be manipulated into causing harm, for example, by manipulating agent goals with a malicious prompt injection or by poisoning data the AI agent uses to make decisions and interact with its environment. There is also the more fundamental issue of harm being caused by an AI agent’s goals not being properly aligned with human values or business objectives. Another security issue could be caused by a version of shadow IT, where employees use unsanctioned AI agents that lead to security breaches.  

Data privacy is an aspect of security, and it faces some specific challenges in context of agentic AI. AI agents can autonomously access external datasets that can contain sensitive and/or proprietary data. Agents can independently make decisions about data usage and share data with other systems, which raises the risk of data leakage, and/or unauthorized data access. 

Certain security risks are linked to reliability issues that stem from agentic AI’s capacity for sophisticated reasoning and high levels of autonomy. In contrast to deterministic rule-based agents that perform a specific, well-defined task, agentic AI involves autonomous agents making sequences of decisions over time. Unpredictability at each decision point can compound, leading to highly divergent and potentially unreliable long-term behaviors that are much harder to anticipate and control than a single, isolated prediction.  

The multi-step reasoning used by AI agents can be opaque, complicating reliability assessment. Agentic AI involves chains of reasoning and actions across time, making it necessary to trace an agent's entire decision history and interactions to understand why it reached an unreliable state. This increased opacity intensifies the black box problem associated with advanced AI and makes assigning responsibility for errors or harm considerably more difficult. 

While adaptable, AI agents can be vulnerable to disruptions in their environment, with negative outcomes.  Multi-agent systems engaged in complex tasks typically operate within dynamic and heterogeneous environments. Even minor, unanticipated environment changes can propagate through interconnected agents, triggering cascading failures.  

Verifying and validating agentic AI presents unique challenges. While testing traditional AI models often involves evaluating performance on fixed datasets, agentic AI requires assessment of agents acting autonomously over extended periods across various scenarios, including complex interactions with other agents and the environment. This significantly increases the complexity of ensuring reliable operation. 

Agentic systems, particularly multi-agent frameworks, can exhibit emergent behaviors arising from dynamic interactions between individual agents and various external tools and data sources. These system-level behaviors can emerge unexpectedly and present significant challenges for debugging and ensuring reliability.  

 

Seven Measures for Enterprise Implementation

 

However, enterprises and solution providers are not helpless in facing these challenges and there are a range of practical, actional steps they can take to address them. These include:

 

  • Adopt security by design practices, for example AI specific threat modelling, robust vulnerability testing (e.g., red-teaming), and zero-trust architectures.

  • Robust verification to strongly ground assumptions and outputs.

  • Unlike traditional software agents that follow strict rules, agentic AI agents might not verify outputs or check assumptions unless explicitly instructed to do so.

  • Implement strong authentication mechanisms to control access to AI agents, and to control actions performed by agents.

  • Continuous, adaptive monitoring can bolster the reliability of agentic AI systems by enabling real-time detection of errors, anomalies and distributional shifts.

  • Secure Multi-Party Computation (SMPC) could be useful for preserving privacy in multi-agent systems.

  • SMPC enables agents to collaboratively compute results based on their data but without revealing that data to each other.

  • Keep humans in the loop with mandated human approvals and checks for critical and/or sensitive consequential decisions and actions.

  • Explainable AI (XAI) tools and audit trails provide transparency into an AI agent's decision-making processes make it easier to understand actions and outcomes.

Understanding Agentic AI: Attributes, Architecture, and the Ecosystem   (May 2025)

Maximizing Agentic AI: Capturing Benefits, Addressing Challenges  (May 2025)

Omdia’s deep expertise in AI provides the strategic insights you need to stay ahead. Connect with our analysts to learn how our research can support your AI strategy and help you navigate the evolving landscape.

 

 
More from author
Eden Zoller
Chief Analyst, Applied AI

Eden has been immersed in digital media services and tech for over 20 years, focusing on strategy, innovation, monetization, and future trends. Eden’s primary focus is applied AI, specializing in generative AI, AI impacts on consumer services and industry verticals, responsible AI, and AI governance. Initiatives driven by Eden include the development of Omdia’s AI Maturity Assessment tool, a Big Tech Benchmark, an AI Innovation Tracker, Omdia’s rolling consumer AI survey program, an ongoing assessment of AI impacts across key verticals including games, TV & video, and commerce. Eden manages workshops and consultancy projects in the above areas, providing tailored advice and market intelligence.


Eden is a frequent media commentator and speaker at industry events, is a long-standing judge for the GSMA’s Global Mobile Awards, is an independent advisory board member for AI4ME, and is studying for a Masters in AI ethics at the University of Cambridge. Before her career as an analyst, Eden was a journalist and editor for a string of respected industry publications.


More from author
assess banner

Register here for full complimentary research reports and content.

Get ahead in your business and receive industry insider news, findings and trends from Omdia analysts.

Register
More from our experts View All
Let's Connect

More insights

Assess the marketplace with our extensive insights collection.

More insights

Hear from analysts

When you partner with Omdia, you gain access to our highly rated Ask An Analyst service.

Hear from analysts

Omdia Newsroom

Read the latest press releases from Omdia.

Omdia Newsroom

Solutions

Leverage unique access to market leading analysts and profit from their deep industry expertise.

Solutions
Person holding infinity symbol Contact us infinity symbol
Did you find what you were looking for?

If you require further assistance, contact us with your questions or email our customer success team.

Contact us