Last week, Microsoft gathered experts from academia, civil society, policymaking and more to discuss one of the most important topics in tech at the moment: responsible AI (RAI).

Microsoft’s Data Science and Law Forum in Brussels was the setting for the discussion, which focused on rules for effective governance of AI. 

Whilst AI governance and regulation may not be everyone’s cup of tea, the event covered an array of subjects where these have become red-hot issues, such as the militarization of AI, liability rules for AI systems, facial recognition technology and the future of quantum computing. The event also gave Microsoft an opportunity to showcase its strategy in this important area.

A few highlights are worth sharing, so let’s dig a bit deeper into what Microsoft is doing in RAI, why it’s important and what it means for the market moving forward.

Responsible AI Is Now a Priority

Responsible AI is a combination of principles, practices and tools that enable businesses to deploy AI technologies in their organizations in an ethical, transparent, secure and accountable manner. 

The subject has been getting a lot of attention for several reasons.

First, we are seeing more high-profile examples of biased algorithms, autonomous vehicle accidents and privacy-violating facial recognition systems, all of which are raising public awareness of the dangerous unintended consequences of AI.

Second, enterprises are now beginning to shift early AI projects out of the labs and are considering the real-world risks and responsibilities they will have when deploying AI in their operational processes.

And third, as decision makers consider introducing data and AI solutions in critical areas such as finance, security, transportation and healthcare, concerns are mounting over their ethical use, the potential for bias in data and a lack of interpretability in the technology, as well as the prospect of malicious activity such as adversarial attacks.

For these reasons, the governance of machine learning (ML) models has become a top investment priority for enterprises. In my firm CCS Insight’s 2019 survey of senior IT decision-makers, for example, transparency into how systems work and are trained, and the ability of AI systems to ensure data security and privacy, were the two most important requirements when investing in AI and machine learning technology, each cited by almost 50% of respondents.

Related Article: The Next Frontier for IT: AI Ethics

'Tools and Rules' for Trustworthy AI

It was against this backdrop that Microsoft’s Corporate, External and Legal Affairs (CELA) division, an underrated asset in Microsoft’s cloud competition with Google and Amazon, hosted “Tools and Rules for Responsible AI.”

The event was held a stone’s throw from the European Parliament, which was symbolic following the release of the European Commission’s (EC) white paper on AI just two weeks earlier. The white paper represents a critical moment for the AI industry as the first major attempt at outlining potential policy options for regulating the technology within the EU. Amongst a host of EC proposals in the paper, the prime recommendation was a potential regulatory framework that focuses (initially) on sensitive uses of AI, such as applications that may put human safety or rights at risk, and on high-risk sectors such as healthcare, transportation and energy.

Naturally, the EC’s white paper dominated much of the discussion at the event, and the consensus was that while a lot more work on the details remains to be done in the coming years, particularly on implementation, the EU is headed in the right direction (the proposals are in a public consultation period until May). Similar to its position on data privacy, which led to GDPR, the EU’s goal here “is to become the leader in trustworthy AI,” as stated by Didier Reynders, the EC’s Commissioner for Justice, in the event’s opening keynote.

Related Article: How Enterprise Adoption of Artificial Intelligence Must Shift in 2020

How Microsoft Approaches Responsible AI

Beyond leading and facilitating the discussion, Microsoft also used the event as an opportunity to articulate its approach to RAI.

The firm has been highly vocal on the topic going as far back as 2016, when CEO Satya Nadella laid out six goals AI research must pursue in order to keep society safe. A year later, Microsoft set up its AI and Ethics in Engineering and Research (AETHER) Committee, a cross-company set of internal working groups tasked with deliberating over hard questions about the use of AI and advising Microsoft’s leadership on its development.

The AETHER advisory committee set the blueprint for the publication in 2018 of Microsoft’s six principles for AI in the book “The Future Computed.” The principles, which comprise fairness, inclusiveness, reliability and safety, transparency, privacy and security, and accountability, now guide its end-to-end approach to AI, from development to deployment.

Interestingly, there are now over 30 sets of similar AI principles in the tech industry, many of which are too high-level and abstract to offer customers any practical, operational guidance. What’s unique about Microsoft’s approach is its equal focus on practices and implementation.


Related Article: Microsoft's AI Moves at Ignite Deserve a Closer Look

From Principles to Practices: Learning From Tay Bot and Facial Recognition

In January, Microsoft unveiled its new Office of Responsible AI, an internal group within its CELA organization dedicated to putting a set of AI-related ethics and governance principles into practice across the company.

The unit has four main responsibilities. First, it sets company-wide policies and practices for the responsible implementation of AI. Second, it ensures teams across Microsoft are ready to adopt those practices and supports customers and partners in doing the same. Third, it has a case management role, reviewing and triaging sensitive use cases for AI to help ensure Microsoft’s principles are upheld. And finally, it has a public policy role, helping shape and advocate for RAI policies externally through its work with the Partnership on AI, events such as the Data Science and Law Forum and its consultations with the EU and other regulatory bodies worldwide.

Microsoft’s Chief Responsible AI Officer Natasha Crampton discusses Microsoft’s approach to responsible AI on a panel hosted by Justin Nagarede of the Foundation for European Progressive Studies, with Lisa Dyer, director of policy at the Partnership on AI, and Werner Stengg, a member of cabinet at the European Commission.

Perhaps most importantly, the Office of Responsible AI is a powerful vehicle for turning lessons from some of Microsoft’s past AI mistakes into practice. Natasha Crampton, who heads the division as Microsoft’s Chief Responsible AI Officer, said that the company’s experiences with its Tay chatbot in 2016, as well as its facial recognition technology failing to recognize certain skin tones in 2018, have been “instrumental” in shaping governance policies under its new charter. As a result, Microsoft has now published responsible bot guidelines as well as transparency notices that communicate the limitations of its facial recognition technology.

The company is also sharing its RAI lessons through its AI Business School, which is becoming a differentiating asset in its AI strategy. RAI is now one of the most popular subjects in the program: last month the school announced it had expanded the RAI module to provide business leaders with insights on the topic from AETHER, the Office of Responsible AI and customers such as State Farm.

Related Article: Microsoft Launches Free AI Business School for Execs

My Take: Microsoft Is Currently Leading the RAI Discussion

Altogether, these efforts reinforce the view that Microsoft is playing a leading role in shaping the discussion around trustworthy AI. Customers I speak to care little for the conceit of algorithmic perfection from an AI vendor. Rather, they want to know, above all, that they are on solid foundations with a responsible provider as they advance their AI strategies. Microsoft’s approach, spanning principles, practices and efforts to shape policy, is helping differentiate it as a trusted advisor, much as its earlier work in security, privacy and accessibility did, and that in turn is enhancing trust in its cloud business overall.

While this sets Microsoft apart from the competition, the road ahead is far from straightforward.

The firm will need to look more closely at offering certification and training programs in RAI. Additionally, an area that understandably received less attention in Brussels, given the focus on policy, was its RAI tooling for data scientists and developers, which is expanding rapidly in Azure Machine Learning. This includes MLOps capabilities for lifecycle management, the InterpretML and fairlearn toolkits for explainability and bias detection, and data drift monitoring, among others. A cynic could argue that Microsoft’s effort is all about driving business to Azure, where it has a lot to gain from this area. I will explore the competitiveness of these tools as a key aspect of the strategy in an upcoming post.
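To give a flavor of what this tooling does, here is a minimal sketch using the open-source fairlearn package to compare a model’s accuracy and selection rate across two demographic groups. It is an illustration only, not Microsoft’s implementation: the dataset, the group labels and the model are synthetic and purely hypothetical, and it assumes fairlearn (0.6 or later) and scikit-learn are installed.

```python
# A minimal, illustrative disparity check with the open-source fairlearn
# package. The data and the sensitive attribute below are synthetic and
# hypothetical; real workflows would use a model's actual evaluation data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))             # synthetic feature matrix
group = rng.choice(["A", "B"], size=1000)  # hypothetical sensitive attribute
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Compute each metric overall and per group, then report the largest
# between-group gap for each metric.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=pred,
    sensitive_features=group,
)
print(frame.by_group)      # per-group accuracy and selection rate
print(frame.difference())  # maximum between-group gap per metric
```

The per-group and difference views shown here are exactly the kind of routine disparity reporting these toolkits aim to bring into everyday ML development.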

Although less directly related to AI, Microsoft’s HoloLens product continues to be used by the US military, raising ethical questions in other areas of its business. More importantly, the company faces some very tricky privacy and security issues in areas like social media provenance and deepfakes, as well as in new products like Workplace Analytics and its custom speech API, which is in gated preview.

These areas will test both AETHER and the Office of Responsible AI’s case management function in the future. In the latter’s case, the Wall Street Journal reported last August that in 2018 a voice deepfake of a major UK company’s CEO tricked a senior employee into wiring over $240,000 to a criminal bank account. Believed to be the world’s first case of AI-enabled voice fraud, the incident did not involve Microsoft’s technology, but it is an astonishing reminder of the unintended effects of custom speech technology, which will require deeper operational and security guidelines down the road.

It’s early days for responsible AI, but it’s a crucial area to help companies avoid problems and improve the performance and quality of the AI applications they deploy. It’s going to be fascinating to track how customers, Microsoft and the competition respond to these trends over the next 12 months.
