"What's the next big thing in IT?" Analysts get this question a lot. Companies are always on the lookout for technology they should be paying attention to. I usually shy away from answering this question. While there are a number of interesting technologies, determining which ones will be the next big thing — meaning a combination of useful for IT and profitable for vendors — is impossible to determine without lengthy analysis.
Yet this time I had an immediate answer: ethics, especially the ethics of AI. The next big issue, and the next big technology, will involve the ethics of artificial intelligence and machine learning.
Competing Interests
As an industry, IT has had a charmed existence. Unlike many other technologies such as biotech, rocketry or nuclear energy, the IT industry has always been secure in the knowledge that it is doing good, or is at least neutral. Yes, some issues have arisen related to displacing workers from their jobs — there are far fewer secretarial jobs than there used to be because of IT — but not to the degree seen with factory automation and similar technologies. Other industries have caused massive numbers of workers to lose their jobs or developed terrible weapons of mass destruction. For IT, it’s always been blue skies.
That is no longer the case. With the advent of artificial intelligence, new ethical issues, including the weaponization of IT, have emerged. IT privacy issues have been a problem for some time, but those problems have now metastasized into the use of AI for surveillance of a country’s own citizens and for censorship. IT is being used to enable dictatorships and harm democracy.
What makes the ethics of AI so difficult is the complexity of the issues involved. Take the JEDI project, a government effort that is driving dissension in the IT community. JEDI, which stands for Joint Enterprise Defense Infrastructure, is a $10 billion project to build a modern cloud-based infrastructure for the US military. Some vendors have withdrawn from competition for the project: Google pulled out citing concerns about how its software might be used, and was especially concerned about the use of its AI technology by the military. Microsoft has had fewer worries. Both CEO Satya Nadella and president and chief legal officer Brad Smith have defended their bid on the project as important to the safety of the country. In the case of JEDI, there is both the sense that defense of the country is important and the potential for misuse of technology.
No Clear Right or Wrong
The problem is not just duality, but the lack of guidelines as to what is acceptable and what is not. It's easy to say it is ethical to create an autonomous vehicle but not an autonomous weapons platform that kills without human intervention. But is that true? Is it wrong to provide your country with a military edge that keeps it safe? On the other hand, what if autonomous weapons platforms kill the wrong people? How long will it take the AI to recognize its mistake and learn from it? Does it make sense to have weapons that don’t care who they kill, or is it better to have military veterans with PTSD from the choices they had to make?
This is only one of the problems AI raises. Equally complex questions emerge around the use of AI for surveillance or censorship. We are still developing the set of rules that will help us understand what is acceptable and what is not.
Start With Transparency and Critical Thinking
One thing is clear — transparency is a major tool in ensuring ethical IT. The recent worldwide walkout by Google employees was, in part, due to the lack of transparency about what their work was being used for. Transparency enables discussion and dialogue, whereas secrecy hides vital information that can provide context. Discussion is critical to creating a shared vision of what is and isn’t acceptable use of IT, especially AI. In addition, new controls will need to be built into AI technology to ensure compliance with acceptable use rules once they are agreed upon.
The other defense against unethical behavior is critical thinking. People should think about the work they are doing and its effects. Critical thinking about the ethics of AI should be a core function within development projects, and that thinking can be driven by the shared values of the organization.
The IT industry has lagged in examining these issues. As the people who understand the technology and its potential for use and misuse, IT practitioners need to engage in resolving these issues, just as doctors are central to questions of medical ethics.
We can’t wait any longer. If we wait, we will find ourselves staring at the dystopian future we have always imagined in science fiction.