Navigating the AI review board: Answering SailPoint Identity Security Cloud questions before they’re asked
Artificial intelligence is transforming how organizations approach identity security, making processes faster, smarter, and more accurate. Identity management tasks (such as creating new complex workflows or onboarding applications) have historically been time-intensive endeavors that can give identity admins headaches. With AI assistance, many of these tedious tasks can now be completed in minutes. The time-saving, value-generating impact of AI is undeniable. However, according to KPMG, only 46% of people globally are willing to trust AI, and only 39% report having some form of AI training at their workplace. AI is a valuable but complex advancement, and it rightfully raises questions organizations must answer to ensure the AI they implement is safe, explainable, and trustworthy.
To address the many questions that arise when considering an AI offering, organizations increasingly convene an AI review board to evaluate potential implementations. These boards play a vital role in ensuring safety by asking questions such as what type of data a model is trained on, whether the AI can access sensitive customer or company data, and whether there is potential for the AI to perform an unapproved action. The members of AI review boards bring diverse perspectives—security, compliance, privacy, ethics, and more—to the table, helping organizations deploy AI with confidence. Conducting a well-rounded investigation into new AI tools is a necessary exercise to ensure both their safety and efficacy.
SailPoint’s Identity Security Cloud is an identity security solution built with AI at its core, so your organization can benefit from efficient automation, deep insights, intelligent recommendations, and proactive measures that ensure proper access to sensitive resources and data. While customers often understand the business value and time savings that the AI within Identity Security Cloud enables, we frequently hear important questions from customers and their AI review boards who want to better understand how the AI works and why it is safe.
What AI capabilities are embedded in the platform, and what are they used for?
SailPoint Identity Security Cloud uses AI to strengthen identity security with advanced recommendations, risk modeling, detection of anomalous access, and automation. Key capabilities include Access Modeling, Role Discovery, Access Request Recommendations, Identity Outliers, Harbor Pilot, and more. These features analyze complex access patterns across your organization to recommend roles, detect anomalies, streamline workflows, and guide certification decisions. These AI-driven features are designed to improve efficiency and accuracy while reducing manual workload. They’re specifically tailored for identity security, not general-purpose AI.
Who controls the final decision, the AI or a human?
SailPoint’s AI works with our customers, not around them. Our AI is used to augment human decisions, not replace them. Examples of AI functionality include Harbor Pilot, an agent capable of drafting potential workflows based on an organization’s unique processes and policies; Access Modeling, which recommends common access for similar roles; and generative AI that suggests descriptions for entitlements at the click of a button. In these examples and all other instances of AI-based automation or recommendations, the AI assists and guides the human who is ultimately responsible for an action or decision rather than replacing the human decision maker altogether. Administrators remain fully in control, with the ability to review, approve, or override any AI-generated recommendation. Human-in-the-loop design is a foundational principle across all AI-enabled capabilities.
What data does the AI use, and how is it governed? Is customer data used for any model training purposes?
SailPoint uses two different types of AI models: shared models and customer-specific models. For most of our AI features, SailPoint deploys a customer-specific version of the model to the customer’s tenant or environment. The customer-specific version is optimized and periodically updated with the customer’s data to provide insights and recommendations that evolve dynamically with the customer’s organization. Models deployed in this fashion are isolated to the customer, and the customer’s data is not accessible to, nor used to train a model for, any other customer.
Shared AI models are trained on metadata and behavioral signals from identity systems—such as entitlement usage, peer group analysis, and access history—not on customer-specific data.
As AI technology continues to evolve, SailPoint may determine there are business cases best solved by models that leverage cross-tenant training on customer data. If SailPoint releases any features that use such models, customers who do not wish to have their data used for training may opt out of using those features.
Strict security and privacy policies govern all data usage, including residency and regional processing where required, and customers retain full ownership and control over how their data is used. SailPoint does not use personally identifiable information (PII) for the training of AI capabilities.
Check out our whitepaper to learn how data is governed, how bias is monitored, how human oversight is maintained, and to get answers to a variety of other questions about how SailPoint uses AI in Identity Security Cloud. We’re excited to share how SailPoint’s AI capabilities are designed to work with you, not around you, to provide the best possible identity security experience.