The rate of digital transformation in recent months has gone beyond anything seen in the past, but it has also opened many enterprises to attack in ways that were never possible before. Each time an organization adds new technology to the digital workplace, it exposes itself to new risks. However, there are also new ways to protect digital assets, just as there are new ways to ensure productivity.
At the end of last year, Capgemini released research into how organizations are turning to artificial intelligence (AI) to protect their digital properties. Titled Reinventing Cybersecurity with Artificial Intelligence, it showed that 42% of the companies studied had seen a rise in security incidents through time-sensitive applications. It pointed out, for example, that Cisco alone reported that, in 2018, it had blocked seven trillion threats on behalf of its customers.
Cyber analysts are finding it increasingly difficult to effectively monitor current levels of data volume, velocity, and variety across firewalls. More to the point, the type of cyberattack that requires immediate intervention, or that cannot be remediated quickly enough by cyber analysts, has increased dramatically. The research also showed that most organizations are turning to AI-driven cybersecurity to combat the threat. Here is a look at spending predictions made before the pandemic and their potential impact on businesses' balance sheets:
- 48% said that budgets for AI in cybersecurity will increase by an average of 29% in FY2020.
- The average increase in FY2020 budgets for nearly one in ten organizations was predicted to be 40% over FY2019 budgets.
Related Article: The Role of AI in Ensuring Data Privacy
Investing in Cybersecurity
Whether those predictions have held up considering the current crisis remains to be seen, but many organizations are still investing heavily in cybersecurity. Aaron Applbaum is a partner at San Francisco-based MizMaa Ventures, an investment firm that specializes in technology companies in Israel.
He explained that artificial intelligence is essentially very advanced statistics: taking historical data, finding patterns in that data, and learning to recognize those patterns in new data as it enters a given system. It is a methodology used in many verticals where there are many inputs to train on. Cybersecurity is no different: a system can analyze historical data of both good and bad behavior, and protect itself by recognizing the bad patterns before damage is done.
The place where AI is really changing the game is in analyzing entitlements, access, misconfigurations and compliance within a network and cloud architecture. If one looks at the recent cloud-native breaches (Twilio, Trustware, DoD), they are not like the typical malware-based attacks of the past, instead capitalizing on misconfigured native features of the cloud. AI can be used to solve these sorts of problems by grouping accounts according to their patterns of cloud resource usage and finding outliers accordingly. “By looking at historic data to benchmark, and discovering anomalies and outliers, an organization can keep track of accounts. It can show who is connecting with whom, with which rights, for how long and how frequently,” he said.
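The approach Applbaum describes — benchmark on historic usage, then flag the outliers — can be sketched in a few lines. Everything below is a hypothetical illustration (the account names, the usage features, and the deviation threshold are all made up); a production system would use far richer features and a proper statistical model:

```python
from statistics import mean, pstdev

# Hypothetical per-account cloud usage features:
# (api_calls_per_day, distinct_resources_touched, off_hours_logins)
usage = {
    "svc-build":  (120, 4, 0),
    "svc-deploy": (110, 5, 1),
    "alice":      (95, 3, 0),
    "bob":        (105, 4, 1),
    "svc-backup": (900, 40, 25),  # behaves very differently from its peers
}

def outliers(usage, threshold=1.5):
    """Flag accounts whose usage deviates from the group baseline.

    An account is an outlier if any feature lies more than `threshold`
    standard deviations from that feature's mean across all accounts.
    """
    columns = list(zip(*usage.values()))                 # column-wise view
    stats = [(mean(col), pstdev(col)) for col in columns]
    flagged = []
    for account, vec in usage.items():
        for value, (mu, sigma) in zip(vec, stats):
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(account)
                break
    return flagged

outliers(usage)  # ["svc-backup"]
```

A z-score is the crudest possible benchmark; the point is only that grouping accounts by usage patterns makes the misbehaving one stand out without any malware signature being involved.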
Data Drives Cyber Responses
However, an AI system is only as good as the data being fed into it. Just as a child taught bad behaviors will carry them into adulthood, an AI system fed intentionally malicious or inaccurate information will learn to behave the way the attacker wanted it to, not the way the system was designed to behave, said Steve Tcherchian, chief information security officer at XYPRO and a regular contributor to and presenter at the EC-Council, a cybersecurity technical certification body.
In less important cases, this can be a mild annoyance. He cites the example of smart homes. These smart devices, he said, learn our habits and adjust themselves based on those inputs. “My Roomba has mapped my house based on the house and furniture layout,” he said. “If my daughter were to place random objects in its path and do this on a routine basis, the Roomba would eventually learn to avoid the area where it encountered an obstacle. That means that area would not be swept.”
In more extreme circumstances, manipulating AI input can be dangerous. Planes have been using autopilot for years. Autopilot is getting increasingly smarter as AI technology advances, but flaws still exist because it's based on input. One faulty input or sensor can have irrecoverable effects. If an attacker could get his hands on the inputs the AI systems rely on to make decisions, the effects could be incomprehensible, especially considering AI is being intertwined into our lives more and more without us even knowing. On a large scale, this could be very damaging.
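The poisoning problem Tcherchian describes can be made concrete with a deliberately tiny example: a toy nearest-neighbor classifier trained on labeled login events. All of the feature values and labels below are invented for illustration; the mechanics, not the numbers, are the point:

```python
def knn(samples, vec, k=3):
    """Classify `vec` by majority vote among its k nearest training samples.

    samples: list of (feature_vector, label) pairs.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(samples, key=lambda s: dist(s[0], vec))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Clean training data: login events as (failed_logins_per_min, countries_per_hour)
clean = [((0, 1), "benign"), ((1, 1), "benign"),
         ((30, 5), "attack"), ((40, 8), "attack")]

# An attacker poisons the feed, labeling obvious attack traffic as benign.
poisoned = clean + [((35, 6), "benign"), ((38, 7), "benign"), ((36, 6), "benign")]

suspicious = (33, 6)
knn(clean, suspicious)     # "attack"
knn(poisoned, suspicious)  # "benign" — the model learned the attacker's lesson
```

The same event is classified oppositely by the two models; nothing in the algorithm changed, only the training data. This is why the integrity of the data pipeline matters as much as the model itself.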
Learning From Feedback Loops
Cybersecurity is constantly evolving, with a threat landscape and attack surface that can change daily. Even with the latest generation of tools, security analysts can have a hard time keeping pace, Saryu Nayyar, CEO of Los Angeles based Gurucul, which provides actionable risk intelligence to prevent internal and external threats, told us.
Modern security information and event management (SIEM) systems help them visualize what's happening, but the sheer volume of data can be daunting and it's easy to lose subtle indications of attack in the flood of information. Add in novel attacks that the system doesn't know how to flag, and their jobs become even harder.
Artificial intelligence can give security operations an edge by identifying subtle patterns in the flood of data and recognizing new attacks by their behaviors rather than by a known signature. AI excels at parsing through huge volumes of data to see patterns that aren't obvious to a human analyst. It can alert the human operators to a potentially hostile event, or respond automatically to stop an attack in progress, informing security personnel of the event so they can investigate and remediate. “The flood of data from a myriad of sources that can overwhelm a human only serves to increase an AI's capability,” she said. “The more data AI systems have to correlate and analyze, the more obvious the patterns become. The machine learning aspect of AI lets the system learn from a feedback loop, with the humans providing guidance as the system evolves.”
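The feedback loop Nayyar describes — humans guiding the system as it evolves — can be illustrated with a deliberately simplified anomaly scorer. The class name, thresholds, and event values below are illustrative assumptions, not any vendor's API:

```python
class FeedbackScorer:
    """Flags events whose metric exceeds a per-source threshold; analyst
    feedback on false positives nudges that source's threshold upward,
    so the same benign behavior stops generating alerts."""

    def __init__(self, default_threshold=100.0, step=1.25):
        self.thresholds = {}              # learned per-source baselines
        self.default = default_threshold  # baseline for unseen sources
        self.step = step                  # how far past a benign value to move

    def is_anomalous(self, source, value):
        return value > self.thresholds.get(source, self.default)

    def mark_false_positive(self, source, value):
        # Analyst says this alert was benign: raise the bar just past it.
        current = self.thresholds.get(source, self.default)
        self.thresholds[source] = max(current, value * self.step)

scorer = FeedbackScorer()
scorer.is_anomalous("web01", 150)         # True: over the default baseline
scorer.mark_false_positive("web01", 150)  # analyst: nightly backup job, benign
scorer.is_anomalous("web01", 150)         # False: baseline learned upward
scorer.is_anomalous("db01", 150)          # True: other sources unaffected
```

Real systems adjust model weights rather than a single threshold, but the shape of the loop — alert, human verdict, model update — is the same.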
Cybersecurity in the Future
Todd Blaschka is COO at Redwood City, Calif.-based TigerGraph. He pointed out that the current pandemic is only going to make things worse. “The modern, post-COVID-19 workplace removes even more traditional network boundaries, which is expanding the attack areas to new devices, users, applications, and platforms,” he said.
This, combined with the new cybersecurity model of identity-centric security, creates an ideal use case for AI in cybersecurity. The result is an intelligent security network (or graph) that can break the silos of security data points spread across most organizations, combining them into a cohesive security fabric. This fabric connects security data points into patterns which security professionals can analyze for behaviors that might indicate cyberattacks. “The system is continuously learning based on new data, new patterns to identify potential vulnerabilities,” he said.
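The security graph Blaschka describes can be sketched as follows. The access log, account names, and the `min_peers` heuristic are hypothetical; a real deployment would pull events from VPN, cloud, and application logs into a graph database and apply behavioral models rather than a raw co-access count:

```python
from collections import defaultdict

# Hypothetical access log: (account, resource) pairs merged from
# previously siloed sources into one graph.
edges = [
    ("alice", "payroll-db"), ("alice", "hr-portal"),
    ("bob", "payroll-db"), ("bob", "hr-portal"),
    ("carol", "payroll-db"), ("carol", "hr-portal"), ("carol", "prod-secrets"),
]

def build_graph(edges):
    """Map each account to the set of resources it has touched."""
    graph = defaultdict(set)
    for account, resource in edges:
        graph[account].add(resource)
    return graph

def unusual_access(graph, min_peers=2):
    """Flag (account, resource) edges shared by fewer than `min_peers`
    accounts — accesses that break the group's behavioral pattern."""
    counts = defaultdict(int)
    for resources in graph.values():
        for resource in resources:
            counts[resource] += 1
    return [(account, resource)
            for account, resources in graph.items()
            for resource in resources
            if counts[resource] < min_peers]

unusual_access(build_graph(edges))  # [("carol", "prod-secrets")]
```

Because the graph unifies data that would otherwise sit in separate silos, the lone edge to `prod-secrets` stands out immediately — exactly the kind of pattern a single log source would never reveal.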