A chatbot app being used on a mobile device.

Chatbots are some of the best AI-infused applications a business can use. Depending on the size of your operation, you could save tens of thousands of dollars per year in customer service expenses by leveraging this emerging technology. However, you should understand the security risks that come with it before connecting chatbots to your digital infrastructure.

What are chatbots, anyway?

Chatbots are software programs that receive and interpret customer questions via a chat interface on your website. They access databases to locate the requested information and then give that information to the end user. In this regard, you can almost look at chatbots as mini search engines targeted to your business or product.

For example, let’s say a customer wants to know when his subscription period ends. He goes to his service provider’s website and clicks on the chatbot widget. He might then enter a simple question, such as “When does my subscription end?” The chatbot checks a database to verify the user’s account information for security purposes and retrieves the information. It then “replies” to the customer in the chat interface.
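To make that flow concrete, here's a minimal sketch of what such a handler might look like. Everything in it (the verifySession helper, the db store and the subscriptionEndDate field) is invented for illustration; real chatbot platforms wrap this plumbing for you.

```typescript
// Placeholders standing in for your real session check and database.
declare function verifySession(token: string): Promise<string | null>;
declare const db: {
  accounts: { findById(id: string): Promise<{ subscriptionEndDate: string }> };
};

// Toy message handler: verify the user, look up the account, reply in chat.
async function handleMessage(sessionToken: string, question: string): Promise<string> {
  const userId = await verifySession(sessionToken); // security check first
  if (!userId) return "Please log in so I can look that up for you.";

  if (/when does my subscription end/i.test(question)) {
    const account = await db.accounts.findById(userId);
    return `Your subscription ends on ${account.subscriptionEndDate}.`;
  }
  return "I'm not sure about that one. Let me connect you with a live agent.";
}
```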

Chatbots are immensely useful to businesses because they reduce the burden on customer service agents. When software answers basic questions from customers, those agents are free for more complex inquiries. This in turn increases employee efficiency and productivity and reduces costs. At least one study puts the combined cost savings from chatbot deployments in retail, banking and healthcare at a stunning $11 billion per year by 2023.

Those figures are hard to ignore, making it worthwhile for any business to consider the technology for customer service operations. However, the nature of how chatbots are used, where they’re hosted and the information they can access means they can also be a security vulnerability. As the example above shows, most available chatbot services connect to business databases holding client account information: sensitive data that hackers can exploit and sell.

Chatbot Security Vulnerabilities

Even the best software engineers can’t design programs that are totally risk-free. Almost every business application has vulnerabilities: 96% of them, in fact, according to a 2020 report from Contrast Security.

A vulnerability is a gap or a weakness in an application that makes it susceptible to misuse. Some are code-level oversights that software engineers may have missed. Others are unavoidable, inherent in the design or purpose of the program. An easy way to picture this is a laptop computer versus a desktop computer. Both could have the same software security features, but a laptop is more vulnerable to theft because of its primary purpose (mobility). It’s far easier for someone to snatch your 2.5-lb ultra-thin laptop than cart off a bulky desktop tower.

For chatbots, common vulnerabilities can include (but are not limited to):

  • Server misconfiguration
  • Unencrypted message channels
  • Employee misuse
  • Weak hosting platform security
  • Critical coding errors
  • Third-party module weaknesses

These (and other potential vulnerabilities) don’t necessarily lead to security breaches. But all it takes is an enterprising malcontent with some know-how to recognize, research and exploit one of them.

Chatbot Security Threats

Whereas a vulnerability is a weakness, a security threat is a specific action someone could take to exploit that vulnerability. Chatbot security threats are fairly common for any web-based or cloud-based software on a company’s website that accesses private information held in databases. Some examples include:

  • Man-in-the-middle attacks
  • Retraining or repurposing by hackers (tampering)
  • Malware injections
  • Webhook exploits
  • Phishing (both from hacked bots against users and by hackers against bots)
  • Impersonation or identity spoofing

In a man-in-the-middle attack, for instance, a hacker intercepts messages between the chatbot and the end user by spying on unencrypted traffic. A hacker sharing the same Wi-Fi network as the end user could do this whenever the connection between the user’s computer and the chatbot runs over an unencrypted channel (such as a website using HTTP instead of HTTPS).
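One low-tech line of defense is for the chat widget itself to refuse to run on an insecure page. Here's a rough sketch of that idea; the initChatWidget function and its endpoint parameter are hypothetical, not part of any particular chatbot product.

```typescript
// Refuse to start the chat unless both the page and the message channel
// are encrypted; otherwise a same-network attacker could read the traffic.
function initChatWidget(endpoint: string): WebSocket | null {
  if (window.location.protocol !== "https:") {
    console.warn("Chat disabled: page is not served over HTTPS.");
    return null;
  }
  if (!endpoint.startsWith("wss://")) {
    console.warn("Chat disabled: message channel is not encrypted.");
    return null;
  }
  return new WebSocket(endpoint); // chat traffic now travels inside TLS
}
```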

Some threats are largely unique to chatbots. These include retraining the chatbot to spit out malicious information and using spoofed or hacked chatbots to phish for information. The risk is inherent in machine learning, which helps chatbots and other AI better understand language by exposing them to humans using that language naturally. Because chatbots are only effective if they understand what human end users tell them, machine learning is key to their effectiveness.

However, it can also backfire. Microsoft learned that the hard way when it tried to train an AI chatbot on Twitter, only to have the Twitterati teach the bot to become racist and xenophobic. A similar thing occurred with a chatbot designed by Scatter Lab in Korea.

Thankfully, most business-based chatbots don’t learn language from human trolls on social media. However, higher-quality chatbots are designed to improve over time through their interactions with end users. Hackers use this to their advantage, swarming malicious bots into the chat system to retrain a business’s chatbot to do any number of things, such as give up private user data without proper verification or send real end users malicious links in response to certain queries.

Chatbot Spoofing

Many chatbots are designed to help users solve account issues without the aid of a live agent. Users will often give chatbots private information, which is exactly what hackers want to get their hands on. Attackers can scoop up this information by injecting malicious programs into websites or by setting up phishing sites that include fake chatbots. Users who submit private account details or financial information to these chatbots then become victims.

How To Mitigate Chatbot Security Risks

None of this is intended to scare you away from chatbots. Thousands of companies worldwide now use them to some degree and to great effect. However, make sure you’re mitigating some of the common security risks before fully implementing any chatbot infrastructure into your website.

Use end-to-end encryption. Many websites, especially e-commerce businesses, only employ TLS/SSL encryption for the data-sensitive parts of the site. However, a chatbot may sit in areas of your website not protected by this encryption. While the chatbot itself may have a secure connection to access data from your servers, the messages sent between it and the user might not be fully secure—making them vulnerable to data sniffing.

End-to-end encryption on your website helps prevent man-in-the-middle attacks and shores up some of the glaring vulnerabilities inherent with chatbots. This means protecting your entire website, landing pages included, with HTTPS.
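If your site happens to run on Node with Express, the "HTTPS everywhere" idea can be sketched in a few lines; adapt the pattern to whatever web stack you actually use.

```typescript
import express from "express";

const app = express();
// Behind a load balancer you may also need: app.set("trust proxy", 1);

// Redirect any plain-HTTP request to HTTPS, and send an HSTS header so
// browsers keep using HTTPS for the whole site, landing pages included.
app.use((req, res, next) => {
  if (req.secure) {
    res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
    return next();
  }
  res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
});
```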

Note, however, that this will only work if the chatbot is hosted on your own site. If it’s hosted elsewhere, hackers could find that chatbot’s unique URL and attack the system that way. Mitigate this by ensuring your chatbot’s interface has authentication in place.
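What that authentication looks like varies by vendor. One common pattern is to require a shared-secret HMAC signature on every request to the chatbot endpoint; the scheme below is a generic sketch, not any particular vendor's API.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Accept a request only if its signature matches an HMAC of the body,
// computed with a secret shared between your site and the chatbot host.
function isAuthentic(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```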

Limit the private user information your chatbot can deliver or request. Chatbots potentially have access to a wealth of private user information, and an individual who holds some of it can trick the chatbot into giving up the rest. To avoid this, limit the information your chatbot can access or the type of private user data it will provide. If a user needs secure account information, your chatbot can instead direct them to call a live agent or log in to their account using their secure password.
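In code, that limit can be as blunt as an allow-list of low-risk fields, with everything else deflected to a human. This is only a sketch, and the field names are hypothetical.

```typescript
// Fields the bot is allowed to read; anything else goes to a live agent.
const BOT_READABLE_FIELDS = new Set(["subscriptionEndDate", "planName", "openTicketCount"]);

function answerAccountQuestion(field: string, account: Record<string, string>): string {
  if (!BOT_READABLE_FIELDS.has(field)) {
    return "For that detail, please log in to your account or call a live agent.";
  }
  return account[field] ?? "I don't have that on file.";
}
```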

That also applies on the other end. Limit how much personal information a chatbot will ask a user to give. For example, don’t have your chatbot ask users for their passwords to verify identity. Instead, consider using two-factor authentication via text message codes or authenticators.
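A text-message code flow might be sketched as follows. The sendSms transport is a stub, and the five-minute expiry is an arbitrary example rather than a recommendation.

```typescript
import { randomInt } from "node:crypto";

const pendingCodes = new Map<string, { code: string; expiresAt: number }>();

function sendSms(phone: string, message: string): void {
  console.log(`SMS to ${phone}: ${message}`); // stub: wire up your SMS provider here
}

// Generate a short-lived six-digit code and send it out of band.
function startVerification(userId: string, phone: string): void {
  const code = String(randomInt(100000, 1000000));
  pendingCodes.set(userId, { code, expiresAt: Date.now() + 5 * 60_000 });
  sendSms(phone, `Your verification code is ${code}`);
}

// The chatbot reveals account details only after the code checks out.
function verifyCode(userId: string, submitted: string): boolean {
  const entry = pendingCodes.get(userId);
  if (!entry || Date.now() > entry.expiresAt) return false;
  pendingCodes.delete(userId); // each code works once
  return submitted === entry.code;
}
```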

Add authentication time-outs and failed-authentication lockouts. Although some users find these irritating, consider them a “better safe than sorry” form of security. By adding authentication time-outs and failed-authentication lockouts, you help prevent hackers from repeatedly trying to access user accounts.

Authentication time-outs are fairly simple: If the user fails to authenticate within a certain amount of time, the system automatically logs that user out of the chat. Lockouts are similar, but lock a user out of the account after several failed attempts to authenticate.
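Both controls amount to a little state per chat session. Here's a minimal in-memory sketch, with thresholds chosen purely for illustration.

```typescript
const MAX_FAILURES = 5;            // lock the account after five failed attempts
const AUTH_WINDOW_MS = 2 * 60_000; // two minutes to finish authenticating

interface AuthState { failures: number; startedAt: number; locked: boolean; }
const sessions = new Map<string, AuthState>();

function recordAuthAttempt(sessionId: string, succeeded: boolean): "ok" | "timed-out" | "locked" | "retry" {
  const s = sessions.get(sessionId) ?? { failures: 0, startedAt: Date.now(), locked: false };
  sessions.set(sessionId, s);

  if (s.locked) return "locked";
  if (Date.now() - s.startedAt > AUTH_WINDOW_MS) return "timed-out"; // end the chat session
  if (succeeded) return "ok";

  s.failures += 1;
  if (s.failures >= MAX_FAILURES) {
    s.locked = true; // the user must now recover access through another channel
    return "locked";
  }
  return "retry";
}
```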

Make sure locked-out users have an easy way to regain access. This is typically done through another verification method, such as an emailed password reset link.

Train staff on best practices. Chatbots aren’t good enough (yet) to be completely self-sufficient, so you’ll need staff in charge of monitoring them. These individuals should be trained on best practices that mitigate the vulnerabilities and threats of operating a business chatbot, such as device theft or loss and hackers attempting to send phishing messages in the chat system.

Your in-house IT staff should be involved in managing your chatbot system. If you don’t have one, consider hiring a managed IT service provider with expertise in chatbot security.

Chatbots are Here to Stay

There’s no denying it: The cost savings associated with chatbots make them highly attractive to businesses. And while some consumers find them irritating, most complaints stem from the difficulty chatbots can have with more complicated questions.

Security threats are not the main reason consumers are hesitant about chatbots, but they should certainly be top of mind for your business. Plan your chatbot security before you implement such a system so you can be confident you’re not putting your users at risk.
