2019 Market Pulse Survey: Good Bot, Bad Bot

There are two sides to every coin, and that is especially true of the ever-present software bots in organizations today. On one side of the coin, we can all agree that bots are time-savers: in one case, bots helped a company automate more than 100 tasks, giving its staff countless hours back. On the other side, bots can turn to the dark side; in severe instances, ‘bad’ bots have been responsible for plundering data from well-known enterprises. Yet the only thing separating a ‘good’ bot from a ‘bad’ bot is how properly its access is governed. The key is that by integrating bots within identity, organizations make them a valuable asset for years to come. With this in mind, our 2019 SailPoint Market Pulse Survey dives into the current state of bots as a new type of ‘user’ and what that means for cybersecurity as we know it.

State of bots

We found that IT teams have taken to the idea of bots: 2 out of 3 respondents use software bots in their organization. This comes as no surprise. Bots save the workforce time by automatically completing jobs and handling a variety of tasks, from fielding front-line customer service queries to administrative work like entering information into a database. For IT teams who need more hours in the day, bots can be an excellent answer.

Bot accountability

Although it is raining bots, we found that organizations still lack proper oversight of their day-to-day activities. Only 5% of respondents had 100% of their bots, and those bots’ access, accounted for in their identity processes. This lack of bot accountability leaves huge identity-shaped holes in organizations, holes that could prove costly if a bot’s user credentials fell into the wrong hands. Unknown bots, or bots whose identities are treated differently from those of human users, can have a devastating impact on an organization’s overall cyber hygiene. This is especially timely as we enter the holiday season: as the use of customer service bots skyrockets, organizations need visibility into their activities, because those bots handle customer data in ever higher quantities.

Shadow bots

Beyond the alarming visibility issue organizations have with their bots, there is another security concern to consider. We found that 2 out of 3 respondents have discovered employees using bots within the organization without IT’s knowledge. These ‘shadow bots’ have made a name for themselves as the next round of security concerns for the enterprise. Much like shadow IT before them, shadow bots are being leveraged by teams to automate repetitive business processes across the organization. While that may sound productive in theory, it creates a whole new world of security headaches for IT teams to manage, the majority of which (fortunately) fall in the realm of identity management.

What bots mean for cybersecurity

We found that bots are widely used but not always correctly managed or properly governed when it comes to their identity. Without supervision, organizations run the risk of letting these bots run rampant across the network. If any of these bots were compromised by a lurking hacker and turned into a ‘bad bot,’ a whole slew of new security incidents could follow. These issues can be addressed with a firm identity governance program that treats bots like their human counterparts. After all, they are doing the work of a human, so let’s govern them as one. It is high time we accept bots as they are: the good, the bad, and the ugly.

This is the second installment of our 2019 Market Pulse Survey. You can read the first blog here.

For the 2019 Market Pulse Survey, SailPoint commissioned independent research firm Vanson Bourne to interview 550 employees at organizations with 1,000 to 5,000 employees across Australia, France, Germany, the United Kingdom, and the United States.
