Artificial intelligence (AI) tools might be good at predictions, but if they're not used properly they are not worth the investment. In a paper published last year, Prithwiraj (Raj) Choudhury, an assistant professor in the Technology and Operations Management unit at Harvard Business School (HBS), pointed out that the advent of AI in the form of machine learning (ML) technologies ushers in new questions about the pace at which it may substitute for both older technology vintages and human capital.
While academic debates about the use and misuse of technology are common enough, in the paper, entitled "Different Strokes for Different Folks," Choudhury looked at the skills needed to optimize AI, an increasingly practical consideration for C-Suite executives looking to use AI for competitive advantage.
While there are many takeaways from Choudhury's paper, written with Evan Starr and Rajshree Agarwal of the University of Maryland, the central one is that firms must think carefully about the skills they will need to hire or train for if they are going to get the most value from their new AI.
Choudhury has spent his career researching human capital, looking inside companies such as Microsoft, Infosys and McKinsey to analyze what makes knowledge workers most productive. A few years ago, he began studying the United States Patent and Trademark Office (USPTO) because of its innovative practices for employees working remotely.
Related Article: AI vs. Algorithms: What's the Difference?
Why Use AI
Choudhury's research identifies a clear interface between academia and enterprise, and points to the skill set needed to optimize AI in the digital workplace. Before deploying AI, though, enterprise leaders need to be clear about what they are using it for, and whether they should be using it at all. Pavel Cherkashin, managing partner at GVA Capital, pointed to two situations in which AI should not be left to make decisions on its own:
- When decision-making involves human morality and responsibility for human lives in an uncertain situation. For example, a military drone should not be able to decide to shoot; it should fire only after receiving a command from a human, because AI is not yet able to weigh all the factors such decisions require, and likely will not be able to for at least a couple of decades.
- When the volume of available data is not big enough to support decision-making. In these cases humans should be involved. For example, a vehicle management system should not be making decisions if it suddenly starts snowing in Palo Alto. Car systems are not “trained” to handle such an extraordinary situation; with no data to rely on when making a decision of this sort, they simply won't know what to do. One common engineering response is a human-in-the-loop gate, sketched after this list.
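Cherkashin's second scenario maps naturally onto a human-in-the-loop gate: when an input looks unlike anything the model was trained on, or the model's confidence is low, the system defers to a person instead of acting. The following is a minimal sketch of that pattern; the thresholds, the `Prediction` structure and the `escalate_to_human` hook are illustrative assumptions, not taken from any particular framework.

```python
from dataclasses import dataclass

# Illustrative thresholds -- real values would come from validation data.
MIN_CONFIDENCE = 0.90   # below this, the model should not act alone
MAX_OOD_SCORE = 0.20    # above this, the input looks out-of-distribution

@dataclass
class Prediction:
    action: str        # what the model wants to do
    confidence: float  # model's own confidence, in [0, 1]
    ood_score: float   # out-of-distribution score from a hypothetical detector

def escalate_to_human(pred: Prediction, reason: str) -> str:
    # Placeholder: in practice this would queue the case for human review.
    print(f"Deferring to human ({reason}); proposed action was {pred.action!r}")
    return "await_human_decision"

def decide(pred: Prediction) -> str:
    """Act autonomously only when the model is confident AND the input
    resembles the training data; otherwise defer to a human operator."""
    if pred.ood_score > MAX_OOD_SCORE:
        return escalate_to_human(pred, reason="input unlike training data")
    if pred.confidence < MIN_CONFIDENCE:
        return escalate_to_human(pred, reason="low model confidence")
    return pred.action

# The snow-in-Palo-Alto case: an input the system was never trained on.
print(decide(Prediction(action="continue_driving", confidence=0.55, ood_score=0.8)))
```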
Using AI Ethically
This combination of data management and ethical considerations will be the key to successful AI deployments, because it will ultimately dictate what skills are needed in an AI-driven digital workplace. Periscope Data CEO Harry Glaser argues that data scientists and analysts are the key to ensuring AI/ML is used ethically and responsibly. Without those data analysis skills and that background, there is a high potential for biased outcomes from AI, leading to discrimination and other harms. In this respect, he often describes the head of the data team as serving as a chief conscience officer for the company when dealing with AI.
“In many cases, using AI and ML data for prediction and classification runs a strong risk of delivering immoral outcomes if unchallenged,” he said. Beyond just governing AI and ML processes to ensure they are accurate, it should be the job of data professionals to use their technical expertise to be the moral compass of the organization. There are two points to note here:
- The traditional C-Suite executive team cannot and will not fill this role because they don't have the skills and expertise to question the AI/ML systems. Data scientists and analysts need to be empowered to lead in this regard, and executives need to ensure their data teams understand the powers and responsibilities granted to them.
- In the years to come, look for data team leaders to step into the role of "human governor" or chief conscience of AI systems, taking on the challenging job of understanding the impact of unchecked, AI-driven outcomes. It is becoming the job of the data team that builds and maintains the algorithms to understand the potential for harm in the data and the implications that flow from it. One concrete starting point for such a team, a basic disparate impact check, is sketched below.
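One routine check a data team in this governance role might run is a disparate impact calculation: comparing a model's positive-outcome rates across demographic groups. The sketch below is a minimal illustration; the 0.8 cutoff follows the common "four-fifths rule" heuristic, and the toy data and group labels are invented.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs, selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 (the 'four-fifths rule') are a common red flag."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Toy resume-screening outcomes: (group, did the model recommend an interview?)
toy = [("A", True)] * 40 + [("A", False)] * 60 + \
      [("B", True)] * 20 + [("B", False)] * 80

ratio, rates = disparate_impact_ratio(toy)
print(rates)                                   # {'A': 0.4, 'B': 0.2}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 -> worth challenging
```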
Without addressing the potential for bias in AI models, even hiring people with the perfect skills cannot keep an entire AI deployment from crumbling, according to Annalisa Nash Fernandez, a specialist in world cultures who focuses on cultural elements in technology and business strategy. “The perfect AI skill set can implode on issues of bias in the AI models. It already has for Amazon in its hiring tools and Facebook in its advertising algorithms,” she said. “Even if your AI dream team learns from these very public failures, which were part of the industry learning process, and trains their models on diverse data sets, which was a significant early adopter misstep, if your data scientists are not a diverse team of individuals themselves, you're leaving a door open for bias creeping into the models.”
Tech's biggest open secret is its own diversity problem, and bias in AI is not only rooted in society's existing biases populating the AI training data, she added. It is also a function of data decisions that carry societal implications being made by a small group of data scientists rather than by a more diverse group of social scientists and stakeholders, and that is no secret in the tech community.
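Fernandez's point about training data can be checked mechanically, at least in part: before training, a team can compare group representation in its training set against a reference population. A minimal sketch, with all numbers invented for illustration:

```python
# A quick representation audit a team might run before training:
# compare group shares in the training data against a reference
# population. All figures here are hypothetical.

train_counts = {"group_a": 8200, "group_b": 1300, "group_c": 500}
reference_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(train_counts.values())
for group, count in train_counts.items():
    share = count / total
    gap = share - reference_share[group]
    flag = "  <-- under-represented" if gap < -0.05 else ""
    print(f"{group}: {share:.1%} of training data "
          f"(reference {reference_share[group]:.0%}){flag}")
```

A check like this catches only representation gaps, not the subtler biases Fernandez describes, which is precisely why she argues the team running it needs to be diverse itself.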
Related Article: 7 Ways Artificial Intelligence is Reinventing Human Resources
Three Necessary Human Skills
In order for both employees and employers to get the most value from AI, Sean Chou, CEO and co-founder of Catalytic, an automation cloud company, said employees need to know the following.
- Know how to use AI - Learn where and how to leverage AI. That means employees have to rethink how work gets done. Think about a city planner designing a conventional city versus a city planner designing a city for flying cars and autonomous vehicles: they will be designing completely different things. Future employees need to understand how AI can be used to make their jobs more efficient.
- Know the value of humans - Understand the strengths of humans. Many people still view AI as competing with humans, but it is not a competition at all. Just as robotics replaced certain categories of physical labor, AI will replace certain categories of knowledge work. But also, just as robotics combined with online markets such as Etsy and Kickstarter have paved the way for the return of artisans, AI can pave the way for much more creative, innovative and rewarding knowledge work.
- Know/learn how to train and fix AI - If there's one thing we've learned about AI, it's that it isn't magical: it isn't self-aware, it isn't learning and discovering things in the sense that humans can, and it isn't fixing itself. Just as the internet revolution redefined the job landscape, AI will redefine it again with brand new, not-yet-invented jobs. Who would have imagined that marketing would become one of the heaviest quant jobs, or that social media influencers would even be a thing? A sketch of what that ongoing training-and-fixing work can look like follows this list.
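On Chou's third point, the unglamorous reality is that someone has to notice when a model degrades and retrain it. Below is a minimal sketch of that maintenance loop, assuming a hypothetical feed of ground-truth outcomes; the window size, accuracy floor and retraining hook are all illustrative choices, not a prescribed method.

```python
import random
from collections import deque

ACCURACY_FLOOR = 0.85   # illustrative threshold
WINDOW = 500            # number of recent predictions to evaluate over

recent = deque(maxlen=WINDOW)  # 1 if prediction matched ground truth, else 0

def trigger_retraining(accuracy: float) -> None:
    # Placeholder: in practice this would kick off a training pipeline
    # and page the data team to review the failing examples.
    print(f"Rolling accuracy {accuracy:.1%} is below the floor; retraining needed.")
    recent.clear()  # reset the window so we don't re-alert on every prediction

def record_outcome(correct: bool) -> None:
    """Log whether a live prediction turned out to be correct, and
    raise the alarm once the rolling accuracy drops below the floor."""
    recent.append(int(correct))
    if len(recent) == WINDOW:
        accuracy = sum(recent) / WINDOW
        if accuracy < ACCURACY_FLOOR:
            trigger_retraining(accuracy)

# Demo: simulate a model whose accuracy degrades partway through.
random.seed(0)
for i in range(2000):
    p_correct = 0.95 if i < 1000 else 0.70   # drift after 1,000 predictions
    record_outcome(random.random() < p_correct)
```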