Recent research from IBM has shown that even though investment in AI has been stable over the past year, investments are likely to increase now that a return to the physical workplace seems increasingly probable.
IBM’s Global AI Adoption Index found that enterprise deployment of AI was flat compared with 2020 but that businesses plan “significant investments” in AI throughout the coming year. Adoption is being driven by both pressures and opportunities, from the pandemic to technological advances that make AI more accessible.
Microsoft Launches Counterfit Out of Need
There is, however, a problem, and one common to all emerging technologies: security. Microsoft has jumped into the fray here with the release of Counterfit, a tool that enables organizations to conduct AI security risk assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.
By way of explanation, a blog from Microsoft points out that AI systems have become so widespread now that they are increasingly used in critical areas such as healthcare, finance, and defense. Consumers must have confidence that the AI systems powering these important domains are secure from adversarial manipulation.
“This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities with the goal of proactively securing AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative,” the blog reads.
Microsoft is not the only one concerned about AI security. Gartner’s “Top 5 Priorities for Managing AI Risk Within Gartner’s MOST Framework,” published in January 2021, argues that organizations should adopt specific AI security measures against attacks to ensure resistance and resilience, noting that “by 2024, organizations that implement dedicated AI risk management controls will successfully avoid negative AI outcomes twice as often as those that do not.”
Matthew Gribben is a UK-based cyber-security consultant and AI/ML expert who has worked for the UK’s GCHQ/CESG. Cyber-security, he said, isn’t necessarily the first thing that springs to mind when talking about machine learning or AI; organization leaders tend to focus on how these new technologies can help in our day-to-day lives or in business. “However, out in the ether a new breed of hackers is quietly looking to take advantage of how these systems work, with deeply worrying consequences,” he said. “This new breed of cyber threat is called Adversarial Machine Learning.”
Attacks Targeting Algorithms
He points out that these attacks differ from traditional cyber-threats. A traditional threat targets a specific piece of software, a specific version of that software, or a particular implementation of a particular system; these new ML attacks are aimed at flaws in the underlying nature of the machine learning algorithms themselves.
Those algorithms, with some slight variance, are common across different ML-powered systems, meaning an attack that exploits one type of algorithm can threaten any system that implements it. The most common attack vectors for adversarial ML, he explains, are "evasion" attacks and "poisoning" attacks. In an evasion attack, the attacker supplies the ML system with actions or data designed to fool it into ignoring their true intent. This could range from credit card fraud or simple spam filter avoidance to much more nefarious ends, such as evading an ML-based security system.
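The evasion idea can be sketched in a few lines. The toy spam filter below, its weights, and the message features are all hypothetical, and the attack is a simplified gradient-sign (FGSM-style) step against a linear model, not a real exploit:

```python
import numpy as np

# Toy linear "spam" classifier: flag if w . x + b > 0.
# Weights and input are made up for illustration.
w = np.array([2.0, -1.0, 0.5])
b = -0.2

def is_flagged(x):
    return float(np.dot(w, x) + b) > 0

# A message the filter currently flags.
x = np.array([1.0, 0.2, 0.4])

# Evasion: nudge the input against the gradient of the score
# (for a linear model the gradient is simply w) until the
# decision flips, keeping each step small.
eps = 0.05
x_adv = x.copy()
for _ in range(100):
    if not is_flagged(x_adv):
        break
    x_adv = x_adv - eps * np.sign(w)  # FGSM-style step

print(is_flagged(x), is_flagged(x_adv))  # True False
```

The perturbed message differs from the original by only a small amount per feature, yet the classifier now waves it through; the same dynamic plays out against far larger models.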
In poisoning attacks, the goal is to contaminate the data set the ML algorithm learns from. If well targeted, that contamination can trick the system into performing specific actions desirable to the attacker, such as granting access to a secure system, degrading an intrusion detection system’s abilities, or exposing confidential data that the system would not normally expose.
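A minimal sketch of that poisoning dynamic, using a hypothetical anomaly detector and made-up login data: by slipping a few crafted records into the training set, the attacker shifts what the model considers "normal" so that a malicious record no longer stands out.

```python
import numpy as np

# Toy anomaly detector: flag values far from the training mean.
normal_logins = np.array([1.0, 1.1, 0.9, 1.05, 0.95])

def is_anomalous(x, train):
    mu, sigma = train.mean(), train.std()
    return abs(x - mu) > 3 * max(sigma, 1e-9)

attacker_login = 5.0
print(is_anomalous(attacker_login, normal_logins))   # True: flagged

# Poisoning: inject a few crafted records into the training set,
# inflating the mean and variance of "normal" behaviour.
poison = np.array([4.0, 4.5, 5.5])
poisoned_train = np.concatenate([normal_logins, poison])

print(is_anomalous(attacker_login, poisoned_train))  # False: slips through
```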
The key defense against ML/AI attacks is, as with many other types of cyberattack, good penetration testing. In this respect, he says, Microsoft’s Counterfit tool is extremely helpful, as it allows you to target your own ML implementation with different kinds of known attack types. Whilst the tool is useful for initial testing, what is key is continuous evaluation of your ML/AI models, monitoring for changes in behavior or for new weaknesses appearing.
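Counterfit itself is a command-line tool; the sketch below illustrates only the continuous-evaluation habit Gribben describes, with hypothetical names throughout: replay a fixed suite of probe inputs against the deployed model on every cycle and raise an alert if its behavior regresses past a tolerance.

```python
import numpy as np

def evaluate(model, probes, expected):
    # Fraction of probe inputs the model still handles correctly.
    preds = [model(x) for x in probes]
    return float(np.mean(np.array(preds) == np.array(expected)))

def check_for_regression(model, probes, expected, baseline, tol=0.05):
    # Returns (current score, alert flag); alert fires when the
    # score drops more than `tol` below the recorded baseline.
    score = evaluate(model, probes, expected)
    return score, score < baseline - tol

# Example: a toy threshold model and a tiny probe suite.
model = lambda x: int(x > 0.5)
probes = [0.1, 0.4, 0.6, 0.9]
expected = [0, 0, 1, 1]
baseline = 1.0

score, alert = check_for_regression(model, probes, expected, baseline)
print(score, alert)  # 1.0 False
```

In practice the probe suite would include the adversarial examples found during initial testing, so a retrained or drifted model that becomes vulnerable again is caught automatically.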
If you have a computer attached to a network, you should assume it will be hacked and plan accordingly. The same goes for AI. No AI is perfectly secure, said Chris Nicholson, CEO of Pathmind, a company applying AI to industrial operations. AI systems can be fooled, just as your eyes can be fooled by an optical illusion.
Testing an AI model is the best way to find out how secure it is, and Microsoft's new tool is one way to do that. Adversarial testing should be standard practice for AI models performing important functions that people may want to hack: fraud detection, spam detection, computer vision on security cameras.
Making an AI more robust with adversarial testing won't make you 100% secure, but it will make you harder to fool. Adversarial models are already widely used in the industry, not for security, but just to increase the accuracy of predictions. Actor-critic models and GANs are based on that adversarial dynamic. Think of it as a coach who spars with a boxer as part of training. It's a way to raise your skill level before you enter the ring.
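The sparring-partner idea can be illustrated with a toy example. The fraud-score data, threshold "model," and perturbation budget below are all invented for illustration; the point is only that training on attacker-style perturbed examples shifts the decision boundary and makes evasion harder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fraud scores: legitimate ~ N(0,1), fraud ~ N(4,1).
legit = rng.normal(0, 1, 300)
fraud = rng.normal(4, 1, 300)

def fit_threshold(legit, fraud):
    # Simplest possible "model": midpoint of the class means.
    return (legit.mean() + fraud.mean()) / 2

def evasion_rate(t, fraud, eps):
    # Fraction of fraud cases that slip under the threshold after
    # the attacker lowers each score by up to eps.
    return float(np.mean((fraud - eps) <= t))

t_plain = fit_threshold(legit, fraud)

# Adversarial augmentation: also train on fraud examples already
# perturbed the way an attacker would perturb them.
fraud_aug = np.concatenate([fraud, fraud - 1.5])
t_robust = fit_threshold(legit, fraud_aug)

print(t_robust < t_plain)  # True: the hardened threshold is stricter
print(evasion_rate(t_robust, fraud, 1.5) < evasion_rate(t_plain, fraud, 1.5))
```

The hardened model is not unbeatable; a bigger perturbation budget still gets through. It has simply raised the cost of the attack, which is the realistic goal.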
Understand Your AI
Jose Morey is the former associate chief health officer for IBM Watson and an AI advisor to NASA. He says that the major problem with AI and security is the same as with any emerging technology: safety is often an afterthought. “We have seen this play out repeatedly with every new piece of technology, whether hardware or software. It is only after there has been a major breach that safety then becomes first amongst equals,” he said. “Think back to the early days of the internet, Windows, to when cars were becoming more digital, home devices, mobile applications etc. Stories of how all these things were hacked at some point or another are ubiquitous. AI applications are no different,” he said.
The biggest problem most companies have is that they don't truly understand how their more advanced AI applications work, due to their black-box nature. The other issue is that most AI applications still run in the cloud. Edge AI is becoming more available thanks to hardware advancements, but the vast majority remain cloud-based.
Other issues include lack of knowledge of the training sets. Most companies don't truly understand how an application was trained, so malicious errors could be placed into the training, testing and validation sets, causing major AI errors downstream. For an AI application to be most beneficial to an end user, it also needs access to more internal data to learn better and produce more robust outcomes, which can inadvertently lead to increased exposure. This latter risk is being mitigated by newer federated learning models, but older AI algorithms still carry it.
Which brings us back to IBM's Global AI Adoption Index 2021. It points out that 80 percent of companies are already using automation software or plan to use it in the next 12 months. For more than one in three organizations, the pandemic influenced the decision to use automation to bolster employee productivity, while others found new applications of the technology to make themselves more resilient, such as helping to automate the resolution of IT incidents.
It adds that trustworthy and explainable AI is critical to business: 91 percent of businesses using AI say their ability to explain how it arrived at a decision is critical. While global businesses are now acutely aware of the importance of trustworthy AI, more than half cite significant barriers to getting there, including lack of skills, inflexible governance tools, biased data and more.