Artificial Intelligence as a Force Multiplier

Security professionals, identity managers, and IT operations teams are under constant pressure to make rapid decisions based on an incessant flow of alerts, reports, and demands to support the business. Consider a recent survey from security firm FireEye, in which 37 percent of the enterprise security executives questioned said they deal with more than 10,000 alerts every month. Not only that: more than half of those alerts are false positives, and 64 percent are duplicates.

That’s certainly not a tenable situation.

So, what’s the answer? Increasingly, the answer is looking a lot like AI and machine learning. The idea is not to replace expertise, but to use these algorithms as a force multiplier for security analysts, identity management professionals, and incident responders, all of whom need to sort through an increasing amount of information to do their jobs.

To get a better sense of how AI and machine learning will help improve visibility and identity governance, and provide insight into the specific risks associated with user access, I spoke with SailPoint chief strategy officer Kevin Cunningham.

Where do you see AI and machine learning when it comes to helping enterprises improve identity management? How’s that going to impact how people will or should manage their identities?

Kevin: It’s going to have a significant impact on how we view managing identities. The reality is that in today’s world there’s just an enormous amount of identity data generated. We’re talking about all kinds of different users, different systems, and automated robotic processes. We are talking about an enormous amount of activity that generates a tremendous amount of data. When it comes to identity, at the end of the day, enterprises want to do two things: they want to manage risk, and they want to drive efficiency. Identifying the areas that might represent risk and identifying the areas that could be automated go hand in hand with analytics. That’s what analytics helps people figure out: how to better reduce risk and how to be more efficient.

How are customers using machine learning to improve their identity-related risk management efforts?

When it comes to managing risk, our customers have an enormous amount of data and activity to deal with. Much of the data reflects quite normal activity and doesn’t constitute any abnormal risk to their environment. But there’s so much data that finding the anomalies is a bit like finding the proverbial needle in the haystack. Through things like peer group analysis and machine learning, you can begin to identify anything out of the ordinary, whether from a permission perspective or in terms of user activity. Typically, the people in the same peer group have roughly the same kind of job function, and therefore they all have very similar access levels. Commonly, enterprises manually filter through permission settings and access logs to try to identify what could be out of the norm. It’s very time-consuming and very inaccurate.

Through machine learning, however, enterprises can watch what people are accessing on a regular basis as part of their job. Then, if they see something unusual, it can be flagged. It’s a bit like when people travel overseas and use a credit card for the first time. They’re likely to get a text message asking if the transaction is legitimate because it is a deviation from normal behavior. The credit card companies can detect, very accurately, when a customer strays from their normal patterns of activity, and they want to verify what’s going on.

It’s the same way when it comes to enterprise access. Machine learning helps to identify activity outside of the norms and flag it for review. It’s essentially a risk management mechanism that helps to verify something is legitimate, whether that be a permission setting or how people are using permissions.
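The peer group analysis Cunningham describes can be sketched in a few lines of code. This is a simplified illustration, not SailPoint's actual implementation; the entitlement names and the `flag_unusual_access` function are hypothetical, and the threshold is an arbitrary assumption:

```python
from collections import Counter

def flag_unusual_access(user_entitlements, peer_entitlements, threshold=0.5):
    """Flag entitlements a user holds that are rare within their peer group.

    user_entitlements: set of entitlement names for one user
    peer_entitlements: list of entitlement sets, one per peer
    threshold: minimum fraction of peers that must share an
               entitlement for it to count as "normal"
    """
    counts = Counter(e for peer in peer_entitlements for e in peer)
    n_peers = len(peer_entitlements)
    return {e for e in user_entitlements
            if counts[e] / n_peers < threshold}

# Three accountants share accounting access; one user also holds admin rights.
peers = [{"accounts_payable", "general_ledger"},
         {"accounts_payable", "general_ledger"},
         {"accounts_payable", "general_ledger", "payroll"}]
user = {"accounts_payable", "general_ledger", "domain_admin"}
print(flag_unusual_access(user, peers))  # → {'domain_admin'}
```

A production system would use activity logs and statistical baselines rather than simple set membership, but the core idea is the same: access held by one person and almost none of their peers is the outlier worth reviewing.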

IdentityAI will provide customers with the visibility they need to better understand the specific risks associated with user access, so they can focus their governance controls to reduce risk more effectively. Many could certainly use it: according to Verizon’s Data Breach Investigations Report, the average enterprise doesn’t recognize that an attack is underway on its systems for 200 days.

You mentioned that it’s not just about risk reduction, and that machine learning can also help identify manual efforts that can be efficiently automated?

Just as we can identify things that might represent risk, we can actually, over time, discern things that represent very little risk. And this helps to inform what processes can be safely automated. For instance, you know that when someone joins the accounting department, they always get access to a certain set of applications, and that access to this set of applications always gets approved. You know this because you’ve been watching over time what the behavior is. That is a way to identify an area that’s ripe for automation. Why do I need to involve somebody’s manual effort in interceding when I already know what the answer is going to be? I know they’re going to say yes to the approval request. IdentityAI can better identify these circumstances with predictive analysis.

You’re saying machine learning can help better identify these situations and identify additional key areas ready for such automation, which will help to drive efficiencies and productivity?

That’s right. Because we already know what the outcome is likely to be, the access review process, which is largely a manual verification and certification activity, can now be automated. No one likes doing access reviews. They’re an onerous part of the job, but they have to be done. But if you can accurately predict the outcome of the reviews, because over the course of time you see that this access is always approved for a particular type of actor or position in the organization, why wouldn’t we just automate that? We should. Enterprises need to be as efficient and effective as they can be, and this is another way to do exactly that.
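The predictive logic described above can be sketched as a simple policy built from historical review decisions. This is a hypothetical illustration, not IdentityAI's method; the function name, role and entitlement labels, and the minimum-sample and approval-rate thresholds are all assumptions:

```python
from collections import defaultdict

def build_auto_approval_policy(history, min_requests=20, min_approval_rate=0.98):
    """Find (role, entitlement) pairs whose review outcome is so
    predictable that the approval can be safely automated.

    history: iterable of (role, entitlement, approved) tuples from
             past access reviews
    """
    stats = defaultdict(lambda: [0, 0])  # (role, ent) -> [approved, total]
    for role, ent, approved in history:
        stats[(role, ent)][1] += 1
        if approved:
            stats[(role, ent)][0] += 1
    return {key for key, (ok, total) in stats.items()
            if total >= min_requests and ok / total >= min_approval_rate}

# Accounting hires always get the ledger app approved; admin requests do not
# qualify (too few samples, and they are routinely denied).
history = ([("accountant", "ledger_app", True)] * 50
           + [("accountant", "domain_admin", False)] * 5)
policy = build_auto_approval_policy(history)
print(("accountant", "ledger_app") in policy)    # True
print(("accountant", "domain_admin") in policy)  # False
```

The minimum-request count matters as much as the approval rate: a pair that was approved twice is not yet predictable, while one approved fifty times in a row is a strong candidate for automation.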