
The Role of AI in Ensuring Data Privacy

By David Roe
With AI now used in many apps that drive the digital workplace, more questions around the privacy implications are surfacing in the enterprise.

With artificial intelligence (AI) now used in many of the apps that drive the digital workplace, many enterprise managers are beginning to question the implications this will have for privacy. This is not just speculation: a good deal of research indicates how deeply AI has worked its way into the technology that protects the privacy of customers and app users and, by extension, the enterprise itself. The 2019 Gartner Security and Risk Survey, conducted in March and April 2019, predicted that more than 40% of privacy compliance technology will rely on AI by 2023, up from 5% in 2019.

The result is that privacy leaders are under pressure to ensure that all personal data processed is brought in scope and under control, which is difficult and expensive to manage without the aid of technology.

In fact, it is these very considerations that are driving enterprise leaders to act and adopt AI. Speed, scale and automation are the key reasons AI has become attractive to businesses and customers, said Ben Hartwig, chief security officer at InfoTracer. AI can process quantities of data far beyond what human analysts are capable of, and it is the only way to process big data in a reasonable time frame. “One of the reasons why privacy is a big concern here is the fact that people are not familiar with the measures they can use to protect it even if there are some principles that can help with protecting ourselves,” he said. These include ensuring that:

  • AI systems are transparent and easily understood.
  • Consumers are able to opt out of the system.
  • Data can be deleted upon consumer request.

"Our loss of privacy is another example of how digital technologies such as AI can work to our detriment,” he added.

Related Article: 7 Ways Artificial Intelligence is Reinventing Human Resources

AI in Privacy

So, what role will AI play in privacy? According to Geoff Webb, VP of strategy at PROS, there are three areas where we will see AI taking a central role in privacy and data governance, now and in the future.

Privacy Concierge: First, AI bots can provide a “privacy concierge” function in which they can recognize, route and service privacy data requests faster and more cheaply than humans, in much the same way that other AI bots handle increasingly complex requests today.

Data Classification: AI has already shown itself to be highly effective at identifying and classifying data that could take a human operator significant time and effort to review. This means that much of the existing data businesses hold that could fall within privacy regulations (and therefore need to be available to consumers on request) can be identified and aggregated by AIs doing continual sweeps through disparate data stores. “We already see AI’s performing the role of central manager, consumer and analyzer of siloed data stores in other parts of the business, so the AI 'data bridge' is a natural fit for privacy and compliance tasks,” he said.
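As a rough illustration of the kind of continual sweep Webb describes, a rule-based classifier could flag records containing personal data across disparate stores. The patterns and store layout below are illustrative assumptions, not any vendor's API; a production system would add trained models and far broader rule sets:

```python
import re

# A few common personal-data patterns (illustrative, not exhaustive).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify_record(text):
    """Return the set of personal-data types detected in one text record."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}

def sweep(stores):
    """Sweep data stores and aggregate the records that fall in scope.

    `stores` maps a store name to a list of text records; the result maps
    each store to the indices of records that contain personal data.
    """
    report = {}
    for name, records in stores.items():
        hits = {i: classify_record(r) for i, r in enumerate(records)}
        report[name] = {i: kinds for i, kinds in hits.items() if kinds}
    return report

stores = {
    "crm": ["Contact: jane@example.com", "Meeting notes, no PII"],
    "hr": ["SSN 123-45-6789 on file"],
}
print(sweep(stores))
```

Running the same sweep on a schedule is what turns a one-off audit into the continual "data bridge" role described above.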

Managing Sensitive Data: AI can also provide a role in handling sensitive data itself. Specifically, tasks in which sensitive data might be exposed to a human operator unnecessarily. For example, routing requests for healthcare records between providers in which there is a need to aggregate data but a desire to provide an additional layer of privacy. AIs are extremely effective at consuming and analyzing data yet are essentially impervious to the implications of the information they see. It’s simply not possible to bribe an AI into leaking a celebrity’s healthcare records, as an obvious example. “This means that AIs could, in the near future, be used to handle much larger amounts of sensitive data in ways that remove humans from the chain, and thus simplify the process of keeping that data secure,” he added.
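A minimal sketch of that idea, with hypothetical field names, could redact direct identifiers whenever the consumer of a routed record is a human rather than a machine:

```python
# Assumed set of direct-identifier fields; a real deployment would derive
# this from its data classification, not a hard-coded list.
SENSITIVE_FIELDS = {"name", "dob", "address"}

def redact(record):
    """Return a copy of the record with direct identifiers masked."""
    return {k: ("<redacted>" if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}

def route_request(records, for_human=False):
    """Aggregate records; only machine consumers see the full payload."""
    return [redact(r) if for_human else r for r in records]

records = [{"name": "A. Celeb", "dob": "1980-01-01", "diagnosis": "flu"}]
print(route_request(records, for_human=True))
```

The clinical content survives for aggregation while the identifiers never reach a human operator, which is the privacy layer Webb describes.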

Augmenting Tools, Processes

Ilia Sotnikov, VP of product management at Netwrix, a vendor of information security and governance software, points out that AI is not a silver bullet and that it would be a mistake to expect it to "automagically" solve all privacy-related issues. However, it can augment existing tools and processes in several areas that help reduce privacy-related risks.

Related Article: Why Artificial Intelligence Will Create More Jobs Than it Destroys


Identifying Personal Data, Systems and Processes

Before you do anything, you need to have some sort of a “data processing map.” This is drawn by responding to several questions such as:

  • What privacy related data do you have?
  • Where is it in the system?
  • What processes rely on it?
  • Who are the owners of systems and processes where personal data resides?

Automated data discovery and classification solutions augmented with AI can significantly reduce the amount of time required to build and maintain such a map.
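To make the idea concrete, a toy version of such a map could fold discovery-tool findings into the four answers above. The field names here are assumptions for illustration, not a real discovery tool's schema:

```python
from collections import defaultdict

def build_map(discoveries):
    """Fold discovery findings into a data processing map keyed by data type.

    Each finding is a dict with keys data_type, system, process and owner,
    answering the four questions: what data, where, which processes, who owns.
    """
    data_map = defaultdict(lambda: {"systems": set(), "processes": set(),
                                    "owners": set()})
    for d in discoveries:
        entry = data_map[d["data_type"]]
        entry["systems"].add(d["system"])
        entry["processes"].add(d["process"])
        entry["owners"].add(d["owner"])
    return dict(data_map)

findings = [
    {"data_type": "email", "system": "crm", "process": "marketing",
     "owner": "sales-ops"},
    {"data_type": "email", "system": "helpdesk", "process": "support",
     "owner": "it"},
]
print(build_map(findings)["email"]["systems"])
```

The AI contribution is in producing the findings themselves; maintaining the map then reduces to re-running the fold whenever the discovery sweep updates.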

Facilitating Timely Retention

One of the important aspects of personal data management is properly archiving or discarding data in accordance with the policies. Content management technologies can help identify such data, and when paired with classification, you can automate the workflows and reduce your company’s liability for storing personal data longer than business needs and your policies can justify.
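A sketch of such a retention workflow, with an assumed policy table mapping each data class to a maximum age, might look like this:

```python
from datetime import date

# Assumed retention policy: days each data class may be kept before disposal.
RETENTION_DAYS = {"customer_pii": 365, "logs": 90}

def retention_action(record, today):
    """Decide whether a classified record is kept or flagged for disposal."""
    limit = RETENTION_DAYS.get(record["classification"])
    if limit is None:
        return "review"  # unclassified data needs a human decision
    age_days = (today - record["created"]).days
    return "dispose" if age_days > limit else "keep"

today = date(2020, 1, 1)
old = {"classification": "logs", "created": date(2019, 1, 1)}
new = {"classification": "customer_pii", "created": date(2019, 12, 1)}
print(retention_action(old, today), retention_action(new, today))
```

The classification step, which is where AI actually helps, supplies the `classification` label; the workflow itself is then simple, auditable policy enforcement.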

Protecting Systems and Data

The cybersecurity space is booming with hundreds of solutions, many of which rely on AI for intrusion detection, user behavior analytics, phishing and malware detection, and much more.
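As a toy example of the statistical core behind user behavior analytics, a per-user baseline check can flag activity that deviates strongly from that user's own history. The threshold and login counts below are illustrative assumptions:

```python
import statistics

def is_anomalous(history, today_count, threshold=3.0):
    """Flag activity far outside a user's own baseline (z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero spread
    return abs(today_count - mean) / stdev > threshold

logins = [4, 5, 6, 5, 4, 6, 5]     # typical daily login counts for one user
print(is_anomalous(logins, 5))      # an ordinary day
print(is_anomalous(logins, 60))     # a burst worth investigating
```

Commercial tools layer trained models over many such signals, but the principle is the same: model normal behavior, then alert on deviation.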

The Role of Data Quality

For many AI initiatives — whether they are related to compliance assurance or fraud detection, for example — data quality often becomes one of the major points of failure. In the report Driving Value with Machine Learning in Banking, co-authored by Darya Shmat, business development manager at the Itransition Group, data preparation is highlighted as the second critical step in the machine learning implementation roadmap. This means that the data used to train a machine learning model needs to be secure, transparent and bias-free for the model to comply with data privacy regulations.

To meet these rigorous data quality requirements, businesses may be put in a position where they will have to invest millions, if not billions, to address the data privacy imperative (consider PNC Financial Services, which spent $1.2 billion modernizing its data infrastructure).

There are, however, five feasible steps that put data quality assurance within every company's reach:

  • Cleaning data at the point of capture.
  • Properly labeling data when used for supervised machine learning.
  • Implementing a numbering system for cross-referencing between databases.
  • Maintaining a cleaned "golden copy" of data aggregated from external sources.
  • Updating data to prevent its decay over time.
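As a small sketch of the first step, cleaning data at the point of capture, a validation routine can normalize values and reject bad records before they ever reach a training set. The fields and rules below are assumptions for illustration:

```python
import re

def clean_at_capture(raw):
    """Normalize and validate a record as it enters the system, so bad
    values never propagate into downstream databases or training data."""
    email = raw.get("email", "").strip().lower()
    if not re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", email):
        raise ValueError("invalid email rejected at capture")
    return {
        "email": email,
        # collapse extra whitespace and normalize casing in the name
        "name": " ".join(raw.get("name", "").split()).title(),
    }

print(clean_at_capture({"email": "  Jane@Example.COM ", "name": "jane  doe"}))
```

Catching errors at capture is far cheaper than repairing a decayed "golden copy" later, which is why it heads the list above.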

“I see more use cases for AI in compliance emerging right now,” Shmat said. “Natural language processing seems by far most helpful in terms of taking some administrative burden off compliance managers, but intelligent face recognition technologies have made a leap forward too in identity management.”