Agentic AI has shifted from academic theory to deployment at scale across enterprises. AI agents, capable of understanding intent, reasoning toward goals, and taking autonomous actions without human intervention, are the fastest-growing type of enterprise identity. However, despite their access to sensitive data, these identities are largely unmanaged and slip through traditional identity security and governance safeguards.
How AI agents differ from human and machine identities
Like humans and machines, AI agents have identities, although the nature and management of those identities differ significantly. Human, machine, and AI agent identities all access resources and must authenticate, but each demands a different approach to identity security and governance.
Human identities
- Include employees, contractors, partners, customers, and vendors.
- Operate in well-defined roles with access typically tied to their role, responsibilities, and attributes within an organization.
- Make decisions based on human judgment, experience, and ethical considerations.
Machine identities
- Represent applications, services, or devices.
- Follow pre-programmed, linear workflows with specific functions.
- Base decisions on pre-defined rules and configurations.
AI agent identities
- Represent autonomous software entities that perform tasks, make decisions, and interact with users or other systems without direct human intervention.
- Self-direct based on real-time inputs, drawing on vast datasets, artificial intelligence, and natural language processing.
- Learn, adapt, and evolve their behavior over time, making independent decisions based on their learned models and algorithms.
- Often deployed without clear ownership, identity assignment, or auditing controls.
While human, machine, and AI agent identities all access highly sensitive data to make decisions, an AI agent is estimated to make over one million decisions per hour, far outpacing any human in scale and speed and dramatically amplifying risk.
Governance and security gaps with autonomous AI agents
Traditional identity security and governance models were not designed to manage AI agents. Examples of how AI agent identities have outpaced the capabilities of these tools include:
- Role-based access control (RBAC) models were designed for human users and periodic review cycles; they cannot support AI agents' real-time, autonomous decision-making.
- Secrets management systems struggle with AI agents because they assume static access, not the dynamic, reasoning-driven access that AI requires.
- Compliance frameworks do not account for digital identities, like AI agents, that can cross trust boundaries even though many security teams do not know they exist.
Risks and threats posed by AI agents
The long-standing security maxim, "If you can't govern it, you can't secure it," holds for agentic AI. However, the elusive nature of AI agents makes them incredibly difficult to govern, creating massive operational, reputational, and financial risk and an expansive, vulnerable attack surface.
The AI agent identity crisis and security risks associated with AI agents stem from the fact that most organizations cannot answer basic questions about agentic AI in their enterprise, including:
- How many AI agents are currently active?
- What systems and data can they access?
- How can they be shut down if something goes wrong?
The resulting threats posed by AI agents are extensive. The most pervasive threats stemming from a lack of identity security and governance for AI agent identities include the following:
- Unauthorized access and privilege escalation from compromised AI agent credentials.
- Data breaches and data misuse, including leaking sensitive data to unauthorized parties, manipulating training data to cause the AI agent to make incorrect decisions or leak biased information, and inadvertently exposing sensitive data through logging, error messages, or other unintended channels.
- Generation of new sensitive information that is not governed correctly or secured.
- Adversarial attacks that use crafted inputs to trick the AI agent into making incorrect decisions or performing malicious actions.
- Denial of service attacks that overload AI agents with requests to disrupt availability.
- Malicious code injected into the AI agent's software to manipulate its behavior.
- Resource exhaustion caused by AI agents consuming excessive resources (e.g., CPU, memory, and network bandwidth), resulting in system instability or outages.
- Unauthorized modification of system configurations by compromised AI agents.
- Inadvertent or malicious damage to, or corruption of, system files or databases by AI agents.
A strategy to govern AI agents without slowing innovation
As with human and machine identities, AI agents need to be identified and authorized to access resources and perform actions. This is essential for security, auditing, and accountability.
To mitigate risks and threats from agentic AI, identity security and governance must be designed for real-time autonomy, not legacy controls. Organizations need to shift away from periodic, human-centric controls to identity-centric security and governance strategies that are continuous, dynamic, and context-aware.
Critical features to secure AI agents
To extend identity security and governance to AI agents, organizations need solutions that address the unique challenges these entities present and minimize their inherent risks. The following features and functions help ensure that:
- AI agents' identity and ownership are assigned at creation.
- Just-in-time, intent-aware access controls are implemented.
- Dynamic credentialing and revocation are enabled.
- Agent behavior is monitored in real time.
- Risky or anomalous agent actions can be detected, assessed, and addressed at machine speed.
- Policy-based guardrails, aligned with regulatory and internal standards, are enforced.
Visibility into agent behavior, access, and accountability, with:
- A centralized identity repository to manage AI agents as identities and gain a single view of their access rights, entitlements, and activity.
- Access data aggregated from various systems and applications that AI agents interact with to provide a comprehensive view of their access patterns.
- Reporting and analytics capabilities to track AI agent activity, identify trends, and detect anomalies.
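To make the centralized-view idea concrete, here is a minimal Python sketch that merges per-system access records into a single view per agent. The record fields, system names, and agent IDs are illustrative assumptions, not a reference to any specific product or schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRecord:
    agent_id: str
    system: str       # e.g., "crm", "data-warehouse" (hypothetical feeds)
    entitlement: str  # the right granted in that system
    last_used: str    # ISO 8601 timestamp of last activity

def build_agent_access_view(records):
    """Merge per-system access records into one consolidated view per agent."""
    view = defaultdict(list)
    for rec in records:
        view[rec.agent_id].append(rec)
    return dict(view)

# Hypothetical access data aggregated from two systems the agents interact with.
records = [
    AccessRecord("agent-042", "crm", "read:accounts", "2025-01-07T10:12:00Z"),
    AccessRecord("agent-042", "data-warehouse", "query:sales", "2025-01-07T10:13:05Z"),
    AccessRecord("agent-117", "crm", "write:tickets", "2025-01-07T09:55:41Z"),
]

for agent_id, entries in build_agent_access_view(records).items():
    print(agent_id, [(e.system, e.entitlement) for e in entries])
```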
Assignment of identity and ownership to AI agents at creation, with:
- Automated AI agent identity provisioning to ensure that each agent has a unique and verifiable identity from the outset.
- Definable custom attributes, such as agent type, purpose, and owner, for AI agent identities to facilitate governance and reporting.
- Ability to assign ownership and accountability for each AI agent to a specific individual or team.
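The sketch below illustrates one way provisioning at creation time might look: each agent receives a unique, verifiable ID and custom attributes, and provisioning fails fast if no owner is assigned. All names and attribute choices are hypothetical.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_type: str   # custom attribute: e.g., "invoice-processor"
    purpose: str      # custom attribute: why the agent exists
    owner: str        # accountable individual or team
    agent_id: str = field(default_factory=lambda: f"agent-{uuid.uuid4()}")
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def provision_agent(agent_type: str, purpose: str, owner: str) -> AgentIdentity:
    """Issue a unique identity at creation; refuse agents without an owner."""
    if not owner:
        raise ValueError("every AI agent must have an accountable owner")
    return AgentIdentity(agent_type=agent_type, purpose=purpose, owner=owner)

agent = provision_agent(
    "invoice-processor", "reconcile vendor invoices", "finance-platform-team"
)
print(agent.agent_id, agent.owner)
```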
Just-in-time, intent-aware access controls, with:
- Attribute-based access controls (ABAC) that enable granular access policies based on attributes of the agent, the resource, and the context, which can be adapted for intent.
- A policy engine to help organizations define and enforce access policies for AI agents to ensure that they have only the necessary privileges.
- Integrations with workflow and ticketing systems to automate access requests and approvals for AI agents, enabling just-in-time access and supporting management of AI agent lifecycles and role changes.
- Intent-aware access controls that facilitate the definition and enforcement of context-aware access policies that can be tailored to AI agent use cases.
- Segregation of duties (SoD) controls for AI agents to prevent them from performing conflicting tasks that could lead to fraud or errors.
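The following sketch illustrates how an ABAC-style check with a declared intent and a simple segregation-of-duties test might be expressed. The policy shape, attribute names, and the declared_intent field are illustrative assumptions, not a standard policy language.

```python
# A minimal ABAC sketch: access is granted only when attributes of the agent,
# the resource, and the request context all match a policy.

POLICIES = [
    {
        "agent_type": "invoice-processor",
        "resource": "erp:invoices",
        "action": "read",
        "allowed_intents": {"reconcile-invoices"},
    },
]

# SoD: pairs of entitlements a single agent must never hold together.
SOD_CONFLICTS = {frozenset({"erp:invoices:approve", "erp:payments:release"})}

def is_allowed(agent: dict, resource: str, action: str, context: dict) -> bool:
    """Return True only if some policy matches agent, resource, action, and intent."""
    for p in POLICIES:
        if (p["agent_type"] == agent["agent_type"]
                and p["resource"] == resource
                and p["action"] == action
                and context.get("declared_intent") in p["allowed_intents"]):
            return True
    return False

def violates_sod(entitlements: set) -> bool:
    """Flag any agent holding a conflicting pair of entitlements."""
    return any(pair <= entitlements for pair in SOD_CONFLICTS)

agent = {"agent_type": "invoice-processor"}
print(is_allowed(agent, "erp:invoices", "read", {"declared_intent": "reconcile-invoices"}))  # True
print(is_allowed(agent, "erp:invoices", "read", {"declared_intent": "export-data"}))         # False
print(violates_sod({"erp:invoices:approve", "erp:payments:release"}))                        # True
```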
Enforcement of dynamic credentialing and revocation, with:
- Integration with secrets management solutions to securely store and manage AI agent credentials.
- Automated AI agent credential rotation to reduce the risk of credential theft and misuse.
- Automated revocation of AI agent access rights in case of a security incident or policy violation.
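As a rough illustration of dynamic credentialing, the sketch below issues short-lived tokens, rotates them on demand, and revokes them immediately on incident. In practice this logic lives inside a secrets management solution; the in-memory store and the 15-minute TTL are assumptions for illustration.

```python
import secrets
import time

TTL_SECONDS = 900  # short-lived credentials limit the blast radius of theft
_store = {}        # agent_id -> (token, expires_at); illustrative in-memory store

def issue_credential(agent_id: str) -> str:
    """Mint a fresh token that expires automatically after the TTL."""
    token = secrets.token_urlsafe(32)
    _store[agent_id] = (token, time.time() + TTL_SECONDS)
    return token

def validate(agent_id: str, token: str) -> bool:
    """Accept only the current, unexpired token for this agent."""
    entry = _store.get(agent_id)
    return entry is not None and entry[0] == token and time.time() < entry[1]

def rotate(agent_id: str) -> str:
    """Invalidate the old token by issuing a replacement."""
    return issue_credential(agent_id)

def revoke(agent_id: str) -> None:
    """Immediate cut-off on a security incident or policy violation."""
    _store.pop(agent_id, None)

tok = issue_credential("agent-042")
assert validate("agent-042", tok)
revoke("agent-042")
assert not validate("agent-042", tok)  # revoked tokens stop working at once
```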
Real-time AI agent behavior monitoring, with:
- Continuous monitoring of AI agent activity, including data access, API calls, and system resource consumption.
- Integrations that support behavioral analytics to establish a baseline of normal AI agent behavior and detect anomalies.
- User activity monitoring (UAM) that tracks and audits AI agents' interactions with applications and systems.
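A simple way to picture behavioral analytics is a statistical baseline: learn an agent's normal activity rate, then flag observations that deviate sharply. The sketch below applies a z-score to API-call rates; real deployments baseline many signals, and the 3-standard-deviation threshold is an illustrative assumption.

```python
import statistics

def is_anomalous(baseline_calls_per_min: list[float],
                 observed: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates sharply from the learned baseline."""
    mean = statistics.mean(baseline_calls_per_min)
    stdev = statistics.stdev(baseline_calls_per_min)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > z_threshold

# Hypothetical learned "normal" API-call rate for one agent.
baseline = [42.0, 40.5, 44.1, 39.8, 41.7, 43.2, 40.9]
print(is_anomalous(baseline, 41.0))   # False: within normal range
print(is_anomalous(baseline, 380.0))  # True: likely runaway or compromised agent
```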
Risky or anomalous AI agent behavior detection and response, with:
- Ability to identify unusual or risky AI agent behavior.
- Automated incident response workflows based on alerts from Security Information and Event Management (SIEM) or other security systems.
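The sketch below shows what machine-speed response might look like: a SIEM-style alert is mapped to containment actions such as revoking credentials, suspending the agent, and notifying its owner. The alert fields and action names are hypothetical, not any particular SIEM's schema.

```python
def respond(alert: dict) -> list[str]:
    """Map an alert to containment actions, scaled to its severity."""
    actions = []
    if alert["severity"] in ("high", "critical"):
        # Contain first: cut credentials and halt the agent at machine speed.
        actions.append(f"revoke-credentials:{alert['agent_id']}")
        actions.append(f"suspend-agent:{alert['agent_id']}")
    else:
        actions.append(f"step-up-review:{alert['agent_id']}")
    # Accountability: the assigned owner is always informed.
    actions.append(f"notify-owner:{alert['owner']}")
    return actions

alert = {
    "agent_id": "agent-042",
    "owner": "finance-platform-team",
    "severity": "critical",
    "reason": "API call rate 9x above baseline",
}
for action in respond(alert):
    print(action)
```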
AI agents require a new governance framework for a new class of identity
Agentic AI has introduced a new category of autonomous digital identity that necessitates a new approach to identity security and governance. AI agents require specialized solutions that allow organizations to secure and govern them in the same way human and machine identities are governed: with assigned ownership, scoped access, real-time oversight, and accountability.
Delaying, or trying to get by with existing tools, is a recipe for disaster. It is imperative that organizations secure AI agents before these agents outpace governance.
DISCLAIMER: THE INFORMATION CONTAINED IN THIS ARTICLE IS FOR INFORMATIONAL PURPOSES ONLY, AND NOTHING CONVEYED IN THIS ARTICLE IS INTENDED TO CONSTITUTE ANY FORM OF LEGAL ADVICE. SAILPOINT CANNOT GIVE SUCH ADVICE AND RECOMMENDS THAT YOU CONTACT LEGAL COUNSEL REGARDING APPLICABLE LEGAL ISSUES.