
Agentic AI: Rethinking identity and governance in the enterprise

Agentic AI has shifted from academic theory to deployment at scale across enterprises. AI agents, capable of understanding intent, reasoning toward goals, and taking autonomous actions without human intervention, are the fastest-growing type of enterprise identity. However, despite their access to sensitive data, these identities are largely unmanaged and slip through traditional identity security and governance safeguards.

How AI agents differ from human and machine identities

Like humans and machines, AI agents have identities, although their nature and management differ significantly from those of other identity types. Human, machine, and AI agent identities all access resources and must be authenticated, but each calls for a different approach to identity security and governance.

Human identities

  • Include employees, contractors, partners, customers, and vendors.
  • Operate in well-defined roles with access typically tied to their role, responsibilities, and attributes within an organization.
  • Make decisions based on human judgment, experience, and ethical considerations.

Machine identities

  • Represent applications, services, or devices.
  • Follow pre-programmed, linear workflows with specific functions.
  • Base decisions on pre-defined rules and configurations.

AI agent identities

  • Represent autonomous software entities that perform tasks, make decisions, and interact with users or other systems without direct human intervention.
  • Self-direct based on real-time inputs, leveraging vast datasets, artificial intelligence, and natural language processing.
  • Learn, adapt, and evolve their behavior over time, making independent decisions based on their learned models and algorithms.
  • Often deployed without clear ownership, identity assignment, or auditing controls.

While human, machine, and AI agent identities all access highly sensitive data to make decisions, an AI agent is estimated to make over one million decisions per hour, far outpacing the scale and speed of any human and dramatically increasing risk.

Governance and security gaps with autonomous AI agents

Traditional identity security and governance models were not designed to manage AI agents. Examples of how AI agent identities have outpaced the capabilities of these tools include:

  • Role-based access control (RBAC) models that were designed for human users and periodic review cycles and cannot support AI agents' real-time, autonomous decision making.
  • Secrets management systems that struggle with AI agents because they assume static access, not the dynamic, reasoning-driven access that AI requires.
  • Compliance frameworks that do not account for digital identities, like AI agents, that many security teams do not know exist but that can cross trust boundaries.

Risks and threats posed by AI agents

The long-standing security maxim, "If you can't govern it, you can't secure it," holds for agentic AI. However, the elusive nature of AI agents makes them incredibly difficult to govern, creating massive operational, reputational, and financial risk and an expansive, vulnerable attack surface.

The AI agent identity crisis and security risks associated with AI agents stem from the fact that most organizations cannot answer basic questions about agentic AI in their enterprise, including:

  • How many AI agents are currently active?
  • What systems and data can they access?
  • How can they be shut down if something goes wrong?

The resulting threats posed by AI agents are extensive. The most pervasive threats stemming from a lack of identity security and governance for AI agent identities include the following.

  • Unauthorized access and privilege escalation from compromised AI agent credentials.
  • Data breaches and data misuse, including leaking sensitive data to unauthorized parties, manipulating training data to cause the AI agent to make incorrect decisions or leak biased information, and inadvertently exposing sensitive data through logging, error messages, or other unintended channels.
  • Generation of new sensitive information that is not governed correctly or secured.
  • Adversarial attacks that use unauthorized inputs to trick the AI agent into making incorrect decisions or performing malicious actions.
  • Denial of service attacks that overload AI agents with requests to disrupt availability.
  • Injection of malicious code into the AI agent's software to manipulate its behavior.
  • Resource exhaustion caused by AI agents consuming excessive resources (e.g., CPU, memory, and network bandwidth), resulting in system instability or outages.
  • Unauthorized modification of system configurations by compromised AI agents.
  • AI agents inadvertently or maliciously damaging or corrupting system files or databases.

A strategy to govern AI agents without slowing innovation

As with human and machine identities, AI agents need to be identified and authorized to access resources and perform actions. This is essential for security, auditing, and accountability.

To mitigate risks and threats from agentic AI, identity security and governance must be designed for real-time autonomy, not legacy controls. Organizations need to shift away from periodic, human-centric controls to identity-centric security and governance strategies that are continuous, dynamic, and context-aware.

Critical features to secure AI agents

To extend identity security and governance to AI agents, organizations need solutions that address the unique challenges these entities present and minimize their inherent risks. The following features and functions help ensure that:

  • AI agents' identity and ownership are assigned at creation.
  • Just-in-time, intent-aware access controls are implemented.
  • Dynamic credentialing and revocation are enabled.
  • Agent behavior is monitored in real time.
  • Risky or anomalous agent actions can be detected, assessed, and addressed at machine speed.
  • Policy-based guardrails, aligned with regulatory and internal standards, are enforced.

Visibility into agent behavior, access, and accountability, with:

  • A centralized identity repository to manage AI agents as identities and gain a single view of their access rights, entitlements, and activity.
  • Access data aggregated from various systems and applications that AI agents interact with to provide a comprehensive view of their access patterns.
  • Reporting and analytics capabilities to track AI agent activity, identify trends, and detect anomalies.

Assignment of identity and ownership to AI agents at creation, with:

  • Automated AI agent identity provisioning to ensure that each agent has a unique and verifiable identity from the outset.
  • Custom attributes for AI agent identities, such as agent type, purpose, and owner, to facilitate governance and reporting.
  • Ability to assign ownership and accountability for each AI agent to a specific individual or team.
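
As a rough illustration of what identity assignment at creation can look like, the sketch below registers an agent with a unique identifier, custom attributes, and a required owner. The record layout, field names, and provision_agent helper are hypothetical assumptions for illustration, not the API of any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class AgentIdentity:
    """Minimal identity record for an AI agent (illustrative schema only)."""
    agent_id: str
    agent_type: str          # e.g., "chatbot", "data-analysis", "automation"
    purpose: str             # intended function of the agent
    owner: str               # accountable individual or team
    created_at: str
    custom_attributes: dict = field(default_factory=dict)

def provision_agent(agent_type: str, purpose: str, owner: str, **attrs) -> AgentIdentity:
    """Create a unique, verifiable identity for a new agent at deployment time."""
    if not owner:
        raise ValueError("Every AI agent must have an assigned owner")
    return AgentIdentity(
        agent_id=f"agent-{uuid.uuid4()}",
        agent_type=agent_type,
        purpose=purpose,
        owner=owner,
        created_at=datetime.now(timezone.utc).isoformat(),
        custom_attributes=attrs,
    )

# Example: register a hypothetical invoice-processing agent owned by a finance-ops team
identity = provision_agent("automation", "invoice processing", "finance-ops", trust_level="medium")
```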

Just-in-time, intent-aware access controls, with:

  • Attribute-based access controls (ABAC) that enable granular access policies based on attributes of the agent, the resource, and the context, which can be adapted for intent.
  • A policy engine to help organizations define and enforce access policies for AI agents to ensure that they have only the necessary privileges.
  • Integrations with workflow and ticketing systems to automate access requests and approvals for AI agents, enabling just-in-time access, and to support management of AI agent lifecycles and role changes.
  • Intent-aware access controls that facilitate the definition and enforcement of context-aware access policies that can be tailored to AI agent use cases.
  • Segregation of duties (SoD) controls for AI agents to prevent them from performing conflicting tasks that could lead to fraud or errors.
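
To make the ABAC idea concrete, here is a minimal sketch that evaluates a single request against attributes of the agent, the resource, and the context, including a time-boxed just-in-time grant. The policy shape, attribute names, and is_allowed function are illustrative assumptions rather than any vendor's policy engine.

```python
from datetime import datetime, timezone

# Hypothetical ABAC policy: an agent may read finance data only if its purpose
# matches, its trust level is high enough, and the request falls inside a
# time-boxed, just-in-time grant.
POLICY = {
    "resource": "finance-db",
    "action": "read",
    "required_purpose": "invoice processing",
    "min_trust_level": 2,          # 1 = low, 2 = medium, 3 = high
    "grant_expires_at": "2025-10-01T00:00:00+00:00",
}

def is_allowed(agent_attrs: dict, resource: str, action: str, policy: dict = POLICY) -> bool:
    """Evaluate one access request against agent, resource, and context attributes."""
    now = datetime.now(timezone.utc)
    expiry = datetime.fromisoformat(policy["grant_expires_at"])
    return (
        resource == policy["resource"]
        and action == policy["action"]
        and agent_attrs.get("purpose") == policy["required_purpose"]
        and agent_attrs.get("trust_level", 0) >= policy["min_trust_level"]
        and now < expiry                      # just-in-time grant has not lapsed
    )

agent = {"purpose": "invoice processing", "trust_level": 2}
print(is_allowed(agent, "finance-db", "read"))   # True while the grant is active
print(is_allowed(agent, "hr-db", "read"))        # False: outside the agent's scope
```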

Enforcement of dynamic credentialing and revocation, with:

  • Integration with secrets management solutions to securely store and manage AI agent credentials.
  • Automated AI agent credential rotation to reduce the risk of credential theft and misuse.
  • Automated revocation of AI agent access rights in case of a security incident or policy violation.
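
A minimal sketch of the pattern follows, using short-lived tokens and immediate revocation. The in-memory credential store and function names are placeholders for a real secrets management integration, not an actual one.

```python
import secrets
import time

# In-memory stand-in for a secrets manager; in practice credentials would live
# in a dedicated secrets management solution.
_active_credentials = {}

def issue_credential(agent_id: str, ttl_seconds: int = 900) -> str:
    """Issue a short-lived credential so that rotation happens by default."""
    token = secrets.token_urlsafe(32)
    _active_credentials[token] = {"agent_id": agent_id, "expires_at": time.time() + ttl_seconds}
    return token

def is_credential_valid(token: str) -> bool:
    """A credential is valid only if it exists and has not expired or been revoked."""
    entry = _active_credentials.get(token)
    return entry is not None and time.time() < entry["expires_at"]

def revoke_agent(agent_id: str) -> None:
    """Immediately revoke every credential held by an agent (e.g., after an incident)."""
    for token in [t for t, e in _active_credentials.items() if e["agent_id"] == agent_id]:
        del _active_credentials[token]

token = issue_credential("agent-1234")
assert is_credential_valid(token)
revoke_agent("agent-1234")         # policy violation detected: pull access at once
assert not is_credential_valid(token)
```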

Real-time AI agent behavior monitoring, with:

  • Continuous monitoring of AI agent activity, including data access, API calls, and system resource consumption.
  • Integrations that support behavioral analytics to establish a baseline of normal AI agent behavior and detect anomalies.
  • User activity monitoring (UAM) that tracks and audits AI agents' interactions with applications and systems.
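
As a simplified illustration of baselining, the sketch below flags an agent whose API call rate deviates sharply from its historical norm. The sample numbers and z-score threshold are assumptions; production behavioral analytics would draw on far richer signals.

```python
from statistics import mean, stdev

# Hypothetical baseline: API calls per minute observed for one agent during
# normal operation. A real deployment would learn baselines per agent and per
# signal (data access, resource consumption, etc.) from streaming telemetry.
baseline_calls_per_minute = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]

def is_anomalous(observed: float, history: list, threshold: float = 3.0) -> bool:
    """Flag activity that deviates from the learned baseline by more than
    `threshold` standard deviations (a simple z-score check)."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

print(is_anomalous(44, baseline_calls_per_minute))    # False: within normal range
print(is_anomalous(400, baseline_calls_per_minute))   # True: possible runaway or compromised agent
```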

Risky or anomalous AI agent behavior detection and response, with:

  • Ability to identify unusual or risky AI agent behavior.
  • Automated incident response workflows based on alerts from Security Information and Event Management (SIEM) or other security systems.
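
A minimal sketch of such a workflow follows: a high-severity alert triggers containment at machine speed, while lower-severity alerts are queued for human review. The alert shape and helper functions are hypothetical stand-ins for SIEM and identity platform integrations.

```python
def revoke_agent(agent_id: str) -> None:
    """Stand-in for immediate credential revocation (see the credentialing sketch above)."""
    print(f"[revoke] all credentials for {agent_id} invalidated")

def notify(recipient: str, message: str) -> None:
    """Stand-in for paging, ticketing, or chat-ops notification."""
    print(f"[notify {recipient}] {message}")

def handle_alert(alert: dict) -> None:
    """Route a risky-agent alert: contain high-severity incidents automatically,
    queue the rest for human review."""
    if alert.get("severity") in {"high", "critical"}:
        revoke_agent(alert["agent_id"])
        notify(alert.get("owner", "security-oncall"),
               f"Agent {alert['agent_id']} quarantined: {alert['reason']}")
    else:
        notify("security-oncall", f"Review needed for agent {alert['agent_id']}: {alert['reason']}")

handle_alert({"agent_id": "agent-1234", "severity": "high",
              "owner": "finance-ops", "reason": "anomalous data-export volume"})
```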

AI agents require a new governance framework for a new class of identity

The rise of agentic AI has introduced a new category of autonomous digital identity that necessitates a new approach to identity security and governance. AI agents require specialized solutions that allow organizations to secure and govern them in the same way humans and machines are governed—with assigned ownership, scoped access, real-time oversight, and accountability.

Delaying or trying to get by with existing tools is a recipe for disaster. It is imperative that organizations secure AI agents before they outpace governance.

DISCLAIMER: THE INFORMATION CONTAINED IN THIS ARTICLE IS FOR INFORMATIONAL PURPOSES ONLY, AND NOTHING CONVEYED IN THIS ARTICLE IS INTENDED TO CONSTITUTE ANY FORM OF LEGAL ADVICE. SAILPOINT CANNOT GIVE SUCH ADVICE AND RECOMMENDS THAT YOU CONTACT LEGAL COUNSEL REGARDING APPLICABLE LEGAL ISSUES.

Agentic AI frequently asked questions (FAQ)

What are AI agents and agentic AI?

The terms AI agent and agentic AI broadly encompass autonomous systems that perceive, make decisions, and take action to achieve specific goals within an environment. These agents often require several different machine identities to access needed data, applications, and services, and they introduce additional complexities like self-modification and the potential to generate sub-agents.

What are the three main categories of AI agent identities?

1. Functional identity (what the agent does):

  • Task-oriented agents—focus on performing specific tasks.
  • Information retrieval agents—specialize in finding and filtering information from various sources.
  • Recommender agents—provide suggestions based on user preferences.
  • Dialogue agents / chatbots—engage in conversations with users to provide support or complete simple tasks.
  • Autonomous agents—operate independently and make decisions without human intervention.

2. Persona identity (how the agent presents itself):

  • Helpful assistants—friendly, supportive, and focused on assisting the user.
  • Expert advisors—present a knowledgeable and authoritative persona.
  • Neutral information providers—deliver factual information without expressing personal opinions.

3. Architectural identity (how the agent is built):

  • Simple reflex agents—react directly to percepts.
  • Model-based reflex agents—maintain internal state.
  • Goal-based agents—strive to achieve defined goals.
  • Utility-based agents—maximize a utility function.
  • Learning agents—adapt and improve over time.

What are the primary attributes of AI agent identities?

  • Name / ID—a unique identifier for the AI agent.
  • Type—the type of AI agent (e.g., chatbot, data analysis agent, automation agent).
  • Purpose—the intended function of the AI agent.
  • Permissions—the resources and actions that the AI agent is authorized to access.
  • Owner / custodian—the individual or team responsible for the AI agent.
  • Trust level—the level of trust assigned to the AI agent based on its behavior and risk profile.

What are examples of risks that agentic AI introduces to identity security and governance?

  • Agent compromise—Malicious actors could compromise AI agents to gain unauthorized access to systems and data.
  • Policy drift—AI agents might deviate from intended policies over time due to biased data or algorithm changes, leading to unintended consequences.
  • Lack of explainability—Difficulty understanding how AI agents make decisions due to their opacity, which can make it challenging to audit and verify their actions.
  • Runaway automation—AI agents could perform unintended actions at scale, causing widespread disruption or damage.
  • Data poisoning—Attackers could inject malicious data into training datasets to manipulate AI agent behavior.

How can identity security and governance mitigate agentic AI risks?

  • Treat AI agents as privileged identities requiring strict access controls.
  • Develop specific policies governing the design, deployment, and operation of AI agents.
  • Implement mechanisms to track and audit the actions of AI agents to provide insights into their decision-making processes.
  • Leverage AI-powered analytics to detect anomalies in AI agent behavior and identify potential security threats.
  • Maintain human oversight and control over AI agents and ensure that humans can intervene and override AI decisions when necessary.
  • Implement rigorous data validation and cleansing processes to prevent data poisoning attacks.
  • Continuously monitor the performance of AI models and retrain them as needed to prevent policy drift and ensure accuracy.
  • Thoroughly test AI agents in sandboxed environments before deploying them to production systems.
  • Enforce least privilege access for AI agents, granting only the minimum necessary privileges to perform their tasks to limit the potential damage from compromise.

What are examples of how AI agent identities can be compromised?

AI agent identities, like any other digital identity, can be compromised through a variety of methods, such as:

  • Adversarial attacks—Attackers craft specific inputs designed to trick an AI agent into making incorrect decisions, bypass security controls, or manipulate an agent's behavior.
  • Code injection—Attackers can inject malicious code into an AI agent's software through vulnerabilities in input validation or other security mechanisms to steal credentials, manipulate the agent's behavior, or gain remote control.
  • Compromised service accounts—If service accounts that AI agents run under have weak passwords, are not properly managed, or are subject to credential stuffing attacks, attackers can gain control.
  • Configuration errors—Misconfigured AI agent settings can expose sensitive information or create vulnerabilities (e.g., overly permissive access controls, disabled security features, or default credentials) that attackers can exploit.
  • Data poisoning—Attackers can inject malicious data into an AI agent's training dataset, causing the agent to learn biased or incorrect patterns and leading to the agent making incorrect decisions or performing malicious actions.
  • Dependency vulnerabilities—AI agents that rely on third-party libraries and dependencies can be exposed to attacks if these dependencies have known vulnerabilities, which attackers can exploit to compromise the agent.
  • Insider threats—Malicious insiders with access to AI agent systems can steal credentials, manipulate configurations, or inject malicious code.
  • Malicious AI agent components—Cybercriminals can distribute malicious AI agent components through online repositories or marketplaces, leading developers to unknowingly include malware in their AI agents.
  • Privilege escalation—Attackers can exploit vulnerabilities in an AI agent's software or configuration to gain elevated privileges, allowing them to access resources beyond the agent's intended scope.
  • Stolen API keys/secrets—AI agents that rely on API keys or secrets to access resources are at risk if these keys are stored insecurely, because if attackers steal them, they can impersonate the agent.