Five Potential Malicious Uses For Chatbots

Just Like Any Other Tool, Chatbots Will Be Exploited By Bad People For Bad Purposes

Len Epp
4 min read · May 11, 2016

A lot has changed since I first wrote about what it would be like to work with chatbots, way back in 2014. A recent post by Tomasz Tunguz, “Five Questions About The Future Of Chatbots”, offers a good interrogation of the growing complexity of chatbots, the ways companies and customers are going to use them, and how much is still unknown.

Chatbots have even entered the public imagination, thanks to the controversy involving Microsoft’s Tay on Twitter, which hinted at the potential for negative outcomes from combining chatbots with ever-evolving AI.

For me, what made the Tay situation so interesting is that Tay learned to be bad by observing human behavior. The key lesson there is that Tay’s negativity was an effect of human negativity.

When sophisticated chatbot AI becomes a more popular and perhaps openly available tool, it will of course be used by organizations and individuals to increase efficiency and improve services in all kinds of ways, perhaps even becoming a new form of complex, hands-free UI.

But we also need to consider how it will be purposefully exploited to achieve harmful goals. Some of those goals will be motivated by greed, either to steal or to con people out of their money and, importantly, their time. Others will be motivated by a desire to intimidate and harass people. Some will be motivated by pure evil. Terrorists, too, will find uses for it, and in that sense it will also represent a real security challenge.

What’s different about the combination of universally available, accessible AI and chatbot technology is that it creates a new kind of problem for humanity: the inability of humans to distinguish between human and chatbot interlocutors. Given how much communication these days is not done face to face, there will be some pretty dangerous consequences when technology finally passes a version of the Turing Test.

The main problem lies in the potential corruption of services that are provided to human beings in text form and, eventually, probably in voice or VR form, too.

If a chatbot can effectively come across as a person in an environment where a service provider has an obligation to respond to a person in distress or with a need that requires fulfilment, then there are many opportunities for exploitation.

Here are five examples of ways chatbots might be used for harm. I did not have fun thinking them up, but it’s an important exercise to go through with any new technology.

The uses to which chatbots might be put will be as legion as human imagination and machine learning can make them; these are just a few obvious ones.

Disruption of emergency and support services

There was an amazing story in the news recently of how a text sent by a boy in danger led to lives being saved. Fake versions of texts like this could easily be generated by malicious actors to confuse or overwhelm emergency services for a variety of reasons. In my mind’s eye I can already see the movie about how a hacker spoofs 911 operators with AI chatbots that lure the police away from a bank that the hacker’s collaborators are about to rob.

Customer support disruption and corporate sabotage

This might sound fanciful, but corporate sabotage is a real thing, and disrupting a company’s customer support with malicious chatbots is something that could probably already be done to some effect today. As a (non-AI, non-chatbot-related) startup cofounder I read customer emails, and many of them take a pretty simple and repeatable form. In fact, when someone is asking for instructions and receiving them in return, simplicity is key, and adding the human touch is the service provider’s responsibility, all of which makes the malicious faux-customer chatbot’s job that much easier.

Of course, something automated could be set up to analyze text-based questions, and only surface customer interactions to people within the company when it appears that the chatting customer is a real person in need of human assistance. Still, if this kind of malicious use starts happening, it could be a real problem for startups, especially ones that place a high value on positive interactions with customers. How would you deal with an AI chatbot on Twitter that was complaining about your company?
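As a rough illustration of the kind of automated triage described above, here is a minimal sketch in Python that only escalates messages that appear to come from a real person. The `SupportMessage` type, the `looks_automated` heuristic, and the thresholds are all hypothetical assumptions for illustration, not a real bot-detection method.

```python
# Hypothetical sketch: an automated triage layer that analyzes incoming
# support messages and only surfaces likely-human ones to the team.
# The signals below (message rate, templated text) are illustrative
# assumptions, not a production bot-detection method.

import re
from dataclasses import dataclass


@dataclass
class SupportMessage:
    sender_id: str
    text: str
    messages_last_hour: int  # how many messages this sender has sent recently


def looks_automated(msg: SupportMessage) -> bool:
    """Crude signals that the 'customer' may be a bot flooding the queue."""
    too_fast = msg.messages_last_hour > 20  # implausible rate for a human
    templated = bool(re.fullmatch(r"(help|refund|broken)[.!]*",
                                  msg.text.strip().lower()))
    return too_fast or templated


def triage(msg: SupportMessage, escalate_to_human) -> str:
    """Hold back suspected bot traffic; escalate everything else to people."""
    if looks_automated(msg):
        return "held-for-review"  # batched, rate-limited, or reviewed later
    escalate_to_human(msg)
    return "escalated"


if __name__ == "__main__":
    inbox = [
        SupportMessage("u1", "My export keeps failing on page 3, can you help?", 2),
        SupportMessage("u2", "refund", 45),
    ]
    for m in inbox:
        print(m.sender_id, triage(m, escalate_to_human=lambda msg: None))
```

In practice such a filter would need much richer signals (account age, conversation history, timing patterns) and a way to avoid false positives that leave real customers unanswered, which is exactly the tension a malicious chatbot operator would try to exploit.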

Bullying and harassment

Unfortunately, cruel people — like evil-Tay’s human precursors — will have a new weapon in their Twitter and social media arsenals when they get their hands on malicious chatbots. It’s possible to imagine someone sending out an army of them to harass people, measuring the success of the harassment, and then using machine learning to hone new strategies and develop countermeasures against detection. For example, a malicious chatbot could become gradually crueller to a target over a long period of time, apologizing when it steps over a line, and then starting in on the target again later.

Recruiting bad people to do bad things

Wouldn’t it be great if bad people who are setting out to find recruits to their causes could automate the initial stages of the process? Obviously it would not be great for the rest of us, but it’s not hard to imagine it being done. A recruitbot could start by sending messages on social media, chatting and posting according to a personality profile and maybe even a personal history. Then, when a recruit crosses a threshold of interest, a human recruiter could be brought into the process. The same tactic could also be used to throw investigators off the trail of the actual humans.

Spreading misinformation

When a text-sending presence can come across as a human rather than a chatbot, its message will gain a lot of credibility. Just imagine if a network of human-seeming chatbots with crafted online identities started getting worked up on Twitter about, say, a false but catastrophic-if-true claim about one of two candidates in an important election. With enough chatbots behind it, such a network might be able to achieve something like artificial social proof.
