If your company has invested heavily in new technologies like artificial intelligence (AI) or machine learning, and is now looking at chatbots and digital assistants to improve digital experiences and workplaces, you need to take a look at WatchGuard’s cybersecurity predictions for 2019. In fact, any company that is heavily invested in technology should check it out, but be warned, the report is not for the faint-hearted.
Rogue Chatbots Emerge
Under the catchy title "Will the internet be held hostage?" WatchGuard makes eight predictions for 2019. Taken together, the predictions suggest that as these technologies mature, they will pose a threat to the web itself, and chatbots lead the list of threats.
Under a subtitle "AI-driven chatbots go rogue," the report predicts cybercriminals and black hat hackers will create malicious chatbots that try to socially engineer victims into clicking links, downloading files or sharing private information. The point is that while enterprises should go ahead with chatbot strategies, they need to take the same security measures that they would with other technologies.
The predictions point out that as AI and machine learning technologies have improved over the past few years, automated chat robots have become increasingly common. Chatbots are now a useful first layer of customer support and engagement that allow actual human support representatives to address more complex issues. “But life-like AI chatbots also offer new attack vectors for hackers,” the report reads. “A hijacked chatbot could misdirect victims to nefarious links rather than legitimate ones. Attackers could also leverage web application flaws in legitimate websites to insert a malicious chatbot into a site that doesn’t have one.”
As an example, an attacker could force a fake chatbot to pop up while a victim is viewing a banking website, asking if they need help finding something. The chatbot might then recommend that the victim click on malicious links to fake bank resources rather than real ones. Those links could allow the attacker to do anything from installing malware to hijacking the bank’s site connection. “In short, next year attackers will start to experiment with malicious chatbots to socially engineer victims. They will start with basic text-based bots, but in the future, they could use human speech bots like Google Duplex to socially engineer victims over the phone or other voice connections,” according to the report.
Related Article: Top 26 ChatBot Builders for 2019
Can Enterprises Trust Chatbots?
So are digital assistants and chatbots secure enough to be allowed to interface with enterprise data? Joe Huang, product manager for Oracle Digital Assistant, said that in general they are, but it all depends on which digital assistant and chatbot platform you are using and how well it meets enterprise data security requirements. Ideally, the platform should be tightly integrated with enterprise security systems and seamlessly support authentication, authorization, auditing and data security mechanisms, regardless of which channel (e.g. Slack) the end user is using to interact with the digital assistant.
It's also key, he said, that the platform hosting the chatbot be one where operational and data security are tightly controlled and compliance with security and privacy standards such as GDPR is maintained. Conversation data should only be accessible to enterprise customers' designated admins, all logins must be audited and reported, and any personally identifiable information, such as a social security number, should be automatically redacted in the logs.
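The automatic-redaction requirement Huang describes can be sketched in a few lines. This is a minimal illustration, not Oracle's implementation: it assumes a hypothetical "chatbot" logger and masks only U.S. Social Security numbers, whereas a real platform would cover many more PII patterns.

```python
import logging
import re

# Pattern for U.S. Social Security numbers (XXX-XX-XXXX). Real platforms
# redact many more PII patterns than this single example.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class RedactingFilter(logging.Filter):
    """Logging filter that masks SSNs before a record is emitted."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SSN_PATTERN.sub("[REDACTED]", str(record.msg))
        return True  # keep the record, just with the PII masked

logger = logging.getLogger("chatbot")  # hypothetical logger name
logger.addFilter(RedactingFilter())
logger.warning("User provided SSN 123-45-6789 in chat")  # SSN is masked
```

The same filter-at-the-logger approach extends naturally to credit card numbers, email addresses or any other pattern an enterprise's privacy policy flags.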
There are two key elements for a secure digital assistant platform:
- Ability to control access.
- Ability to prevent common hacking mechanisms.
On the access control front, platforms must seamlessly integrate with enterprise security infrastructure and support authentication, authorization, auditing and data security mechanisms. As for hacking prevention, compliant platforms will adhere to strict security review guidelines that eliminate hacking techniques such as SQL injection.
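As a concrete illustration of the SQL injection point, the standard defense is to use parameterized queries so user-supplied chat text is always treated as data, never as SQL. This is a generic sketch with a hypothetical `accounts` table, not a description of any vendor's platform.

```python
import sqlite3

def find_account(conn: sqlite3.Connection, user_input: str):
    # Parameterized query: the driver binds user_input strictly as a value,
    # so input like "x' OR '1'='1" cannot change the query's structure.
    cur = conn.execute(
        "SELECT id, name FROM accounts WHERE name = ?", (user_input,)
    )
    return cur.fetchall()

# Hypothetical in-memory table standing in for a chatbot's backend store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO accounts (name) VALUES ('alice')")

find_account(conn, "alice")         # matches the real row
find_account(conn, "x' OR '1'='1")  # classic injection string finds nothing
```

A platform that only exposes query-building interfaces like this one, and never lets developers concatenate chat text into SQL strings, eliminates the injection class by construction.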
As an example, Huang cites how Oracle Digital Assistant achieves these objectives by exposing a developer user interface that inherently blocks developers from unsafe development practices that might be subject to these hacking techniques. Furthermore, Oracle Digital Assistant pre-integrates and delivers all the critical components of a chatbot platform, such as the AI engine and the conversational flow engine, all of which adhere to the same strict security guidelines.
Related Article: What To Consider Before Bringing Chatbots Into Your Organization
Chatbots Need To Be Secured
Like nearly every other technology, digital assistants are not entirely secure and there is no guarantee that they cannot be hacked. In fact, Alan Majer, CEO of Good Robot Monitoring, said current indications suggest they may be somewhat less secure than a typical technology, because conversations with them can be more easily recorded and a smart speaker itself may not reliably distinguish one person from another. “Yet, as we know, value and ease of use tends to outweigh all those factors. Smartphones were once banned in many enterprises (largely because of their ability to take photographs, which was judged as a security risk), but the convenience and benefit of a modern smartphone now make it essential in most workplaces,” he said.
The key, then, is to mitigate the risks while retaining the benefits. The simplest precaution is to ensure you're not giving digital assistants unlimited access to sensitive data, and to carefully monitor and filter what's allowed to pass between these separate systems.
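That filtering precaution can be as simple as an allowlist that defines exactly which fields from an internal record an assistant may ever see. The field names below are hypothetical, chosen only to illustrate the idea.

```python
# Only explicitly allowlisted fields from an internal record ever reach
# the digital assistant; everything else is stripped by default.
ASSISTANT_ALLOWLIST = {"order_id", "status", "eta"}

def filter_for_assistant(record: dict) -> dict:
    """Drop every field the assistant has no need to see."""
    return {k: v for k, v in record.items() if k in ASSISTANT_ALLOWLIST}

# Hypothetical CRM record containing data that must stay internal.
crm_record = {
    "order_id": "A-1001",
    "status": "shipped",
    "eta": "2019-03-04",
    "card_number": "4111111111111111",  # must never leave the CRM
    "ssn": "123-45-6789",
}
filter_for_assistant(crm_record)  # only order_id, status and eta survive
```

A deny-by-default allowlist is preferable to a blocklist here: a new sensitive field added to the CRM stays hidden unless someone deliberately exposes it.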
If you have weaknesses in your ability to manage information and access among different systems, then this is already a problem for you, and things like digital assistants will indeed make it worse.
That said, security risks are always relative to any benefits. Majer argued that the true risk to enterprise information is that it goes to waste — it never manages to make it into the hands of people who actually need it. Digital assistants offer a powerful new way to make information accessible where and when it's needed — reducing costs, increasing accuracy, lowering response times and improving innovation.
What Enterprises Should Do
There is also a certain amount of security built into the bots themselves, according to Keri Lindenmuth, marketing manager at the Kyle David Group. Lindenmuth pointed out that many chatbots are served over HTTPS, like any other website or app, meaning information shared with the chatbot is encrypted in transit. Some chatbots are even secured by two-factor authentication, just as an email account might be, making it harder for an outside party to gain access.
However, the key to keeping chatbots secure, as it is with any other enterprise technology, is for leadership to train employees on what should and should not be shared via chatbots. For example, passwords and credentials should never be shared via a chatbot, just as they should never be shared via email. For extra security, choose an encrypted service.
With chatbots set to play a major role in the enterprise in the coming years, teaching employees good security practices in the development and use of chatbots is a good starting point, but it's not enough. In the coming years as hackers develop new ways of compromising bots, it is important that organizations treat the security of these tools as part of enterprise security overall. What is needed, then, is a holistic security strategy that spans multiple platforms and interfaces and keeps all enterprise digital properties secure.