During a high-level session on the implications and repercussions of artificial intelligence at the World Economic Forum in Davos, Switzerland this week, the discussion took what some observers (those who aren’t economists) might consider an unexpected turn.

A Carnegie Mellon professor suggested white-collar tasks should be automated to the same degree that robotics automates blue-collar tasks such as manufacturing and inventory control.

At one point, Andrew Moore, dean of CMU’s School of Computer Science, gave attendees two specific examples of what he explicitly described as white-collar jobs that could, and probably should, be automated for the benefit of humankind: lawyers and doctors.

“The big thing that’s going on behind the scenes, but you don’t see it in front of the cameras so much, is the gradual work to remove the boring parts of white-collar work,” Moore said.

“For example, in the legal world, there’s many startups now taking away the boring parts of understanding millions of legal documents to prepare for a case, by actually having computers read and understand what’s in the documents. And over and over again, you see that in the business plan and in the academic world: how we’re going to get rid of the boring parts of white-collar work.”

Boring Things, Like the Law

Moore did not provide names, but examples are not at all hard to find. One is Legal Robot, a contract analytics system that deconstructs contract documents in search of context. According to the company, its system uses what it learns from examining existing documents to discern the context of new ones.

“Legal Robot replaces traditional contract reviews and drafting with an automated intelligent assistant,” reads that company’s Web site marketing page. “Using the legal language model, the intelligent assistant flags issues and suggests improvements by considering best practices, risk factors, and jurisdictional differences.”

Later in the discussion, Moore went on to suggest that the same type of business model that has just launched Legal Robot could be applied to launch a kind of “medical robot.”

“Many sessions which we thought were ‘smart’ — and I’m going to actually go out on a limb here and say the lawyer profession or the doctor/physician profession — there’s a lot that can be automated there,” he told Connyoung Moon, an anchor for Korea’s Arirang TV, “and those careers might diminish.

“There are some other areas where we’re going to be using AI to help humans, who will remain in charge, such as teaching small kids, or nursing, or things which involve care and really deep, social interactions with other folks.”

Moore was careful to point out that things that act like humans should not be confused with things that can, or ever will, think like humans. However, he did assert that for machines that may or may not include AI to accomplish the goal of assisting in teaching children or nursing patients, they would need two abilities: discerning the emotions of the people they serve, and presenting the appearance of emotion in return.

“One by one, we’re going to see things that required our own personal ingenuity turning out to be things which can be automated.”

Obstacles to Progress, Such as Pedestrians

Also participating in Monday’s discussion was Stuart J. Russell, the director of the Center for Intelligent Systems at the University of California, Berkeley.

When Moon suggested that a system suddenly capable of substituting for people in driving cars could soon substitute for them in other things, Russell drew a clear distinction between the types of systems that solve specific problems and those that attempt artificial intelligence in the general case.

In so doing, Russell (probably intentionally, but gently nonetheless) suggested that many special-purpose decision-making systems were not truly artificially intelligent after all.

Although self-driving car algorithms are getting better at identifying traffic hazards on highways, Russell noted, these systems are not currently being extended to metropolitan traffic (especially not by Tesla), because the obstacle-identification algorithms currently in use do not, and probably cannot, scale up to the level required to identify every potential obstacle downtown.

Professor Stuart J. Russell, UC Berkeley, at World Economic Forum 2016

“Although the perception is quite capable — it’s able to detect persons, other vehicles, obstacles, policemen giving signals, road signs, traffic lights, and so on — the decisions about what to make are currently made by what we would call in AI a good, old-fashioned rule-based system,” said Russell.

In such a system, specific rules evaluate whether particular conditions are true and, like logic gates, generate “true” signals only when all of their conditions pass.
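To picture how such a layer behaves, here is a minimal sketch in Python. The rules and scene attributes are invented purely for illustration (they are not drawn from any vehicle’s actual code); an action fires only when every condition of some rule passes, and control falls back to the human driver when nothing matches, which is exactly the kind of gap Russell describes next.

```python
# Hypothetical illustration of a rule-based decision layer; the rules and
# scene attributes are made up for this sketch. Each rule is a list of
# conditions that must ALL be true (like AND-ed logic gates) for its
# action to fire.

RULES = [
    ("stop_for_red_light",
     [lambda s: s["traffic_light"] == "red",
      lambda s: s["distance_to_light_m"] < 40],
     "BRAKE"),
    ("yield_to_pedestrian",
     [lambda s: s["pedestrian_in_crosswalk"]],
     "BRAKE"),
    ("follow_lane",
     [lambda s: s["lane_clear"],
      lambda s: s["traffic_light"] in ("green", "none")],
     "CRUISE"),
]

def decide(scene: dict) -> str:
    """Return the first action whose rule's conditions all pass;
    if no rule covers the scene, hand control back to the human."""
    for name, conditions, action in RULES:
        if all(cond(scene) for cond in conditions):   # every "gate" outputs true
            return action
    return "ALERT_HUMAN_DRIVER"

# A scene the rules anticipate...
print(decide({"traffic_light": "red", "distance_to_light_m": 25,
              "pedestrian_in_crosswalk": False, "lane_clear": True}))    # BRAKE

# ...and one they do not, which falls through to the human driver.
print(decide({"traffic_light": "none", "distance_to_light_m": 999,
              "pedestrian_in_crosswalk": False, "lane_clear": False}))   # ALERT_HUMAN_DRIVER
```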

“Every so often, you find a situation where the rules don’t apply,” the UC Berkeley professor continued.

Suppose a bicyclist were riding in your lane, but on the opposite side of it, as many cyclists are instructed to do. Russell said he was personally told by the head of Google X (which changed its name this week to just X) that this specific exception would confuse the current set of car-driving algorithms.


In such a situation, the car would alert the human driver to take over the wheel. “Which is fine if you’re there with your hands poised to take over and you’re paying attention,” he explained.

“But if you’re checking e-mail on your phone, or playing cards with your friend, or whatever, it could be catastrophic.”

Simple Truths, e.g., Atari

Russell’s explanation made clear that it may indeed be impossible for specialist decision-making algorithms, such as the type that drive cars or win at Jeopardy, to ever evolve into general-purpose intelligence systems that ordinary people would perceive as ‘smart.’

The reason seems simple enough once you get to the point: Rule-based systems can only evolve to the extent that they add more rules that are contingent upon specific use cases. The more rules you apply to a car-driving algorithm set, for example, the less likely it is that it will one day, at random, compose Hamlet.

That’s not to say that general-purpose algorithms are not being tried.

Russell mentioned the DeepMind project (another recent Google acquisition), which recently demonstrated a so-called deep reinforcement learning algorithm [PDF] that, it believes, “learns” to play any randomly selected Atari 2600 game in just a few hours’ time, without any preconceived or trained context as to what movement, space, or time are in any situation.
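DeepMind’s demonstration couples reinforcement learning with a deep neural network that reads raw screen pixels. The toy sketch below uses the far simpler tabular form of the same idea, on a made-up one-dimensional catch-the-falling-object game; the environment, parameters, and scoring are all invented here just to show the principle: the agent is given no notion of what the game is about, only states, actions, and a reward, and its play improves from that signal alone.

```python
import random

# Toy stand-in for an Atari game: an object falls down a 5-column screen and a
# paddle must be underneath it when it lands. The agent is told nothing about
# "falling" or "catching"; it only sees a state, picks an action
# (move left, stay, move right), and receives a reward at the end.

WIDTH, HEIGHT = 5, 6
ACTIONS = (-1, 0, 1)

def play_episode(q, epsilon=0.2, alpha=0.5, gamma=0.95):
    """Play one game, updating the Q-table q by trial and error."""
    paddle, obj_col = WIDTH // 2, random.randrange(WIDTH)
    trajectory = []
    for row in range(HEIGHT):
        state = (paddle, obj_col, row)
        if random.random() < epsilon:                       # explore
            a = random.randrange(len(ACTIONS))
        else:                                               # exploit the table
            a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        trajectory.append((state, a))
        paddle = min(WIDTH - 1, max(0, paddle + ACTIONS[a]))
    reward = 1.0 if paddle == obj_col else -1.0             # the only signal the agent gets
    # Backward pass: propagate the terminal reward through the Q-table.
    next_best, r = 0.0, reward
    for state, a in reversed(trajectory):
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + alpha * (r + gamma * next_best - old)
        next_best = max(q.get((state, i), 0.0) for i in range(len(ACTIONS)))
        r = 0.0                                             # intermediate steps carry no reward
    return reward

q = {}
before = sum(play_episode({}, epsilon=1.0) > 0 for _ in range(500))   # random play
for _ in range(5000):
    play_episode(q)                                                   # learn by trial and error
after = sum(play_episode(q, epsilon=0.0) > 0 for _ in range(500))     # greedy play from the table
print(f"caught {before}/500 before learning, {after}/500 after")
```

The difference, of course, is that DeepMind’s network generalizes across raw pixels and many different games, while this lookup table works only for this one tiny game; the reward-driven improvement, though, is the same mechanism.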

Russell called this a good demonstration of the potential for general-purpose learning.

But as both an expert on Atari games of that period (as “D. F. Scott,” I was a contributing editor for an Atari magazine for much of the 1980s) and a lecturer on the subject of artificial intelligence in microcomputers during that same time, I can personally provide evidence to the contrary.

While Atari 2600 games seem, from the perspective of a Sears catalog from the early 1980s, to be multifarious in nature, their implementation tends to be much the same. Objects on a grid are given eight directions of motion, and a “dumb” class of moving objects (which Atari gave the technical term “missiles”) is periodically tested to determine whether it collides with the “player” class of objects.

Given this simple system, as I demonstrated to middle-school students learning programming 30 years ago, an intelligent “player” could be programmed using the same logic to make moves equal to what a human player would call “decisions,” using only eight kilobytes of code.
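To illustrate what that logic looks like, here is a reconstruction in modern Python; it is not the original eight-kilobyte demonstration, and the grid size, missile count, and tick count are arbitrary choices for this sketch. Dumb “missiles” move in fixed directions, the “player” is periodically tested against them for collisions, and the “intelligent” player simply picks whichever of its possible moves avoids a collision on the next tick.

```python
import random

WIDTH, HEIGHT = 20, 12

# Eight directions of motion, plus holding still, as (dx, dy) pairs.
DIRECTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

def step_missiles(missiles):
    """Missiles are 'dumb': each keeps moving in its fixed direction, wrapping at edges."""
    return [((x + dx) % WIDTH, (y + dy) % HEIGHT, dx, dy)
            for x, y, dx, dy in missiles]

def collides(player, missiles):
    """The periodic player/missile collision test."""
    return any((x, y) == player for x, y, _, _ in missiles)

def choose_move(player, missiles):
    """The 'intelligent' player: try every direction and keep any move that
    avoids a collision with where the missiles will be on the next tick."""
    px, py = player
    upcoming = step_missiles(missiles)
    safe = [(dx, dy) for dx, dy in DIRECTIONS
            if not collides(((px + dx) % WIDTH, (py + dy) % HEIGHT), upcoming)]
    return random.choice(safe) if safe else (0, 0)

def run(ticks=200):
    player = (WIDTH // 2, HEIGHT // 2)
    missiles = [(random.randrange(WIDTH), random.randrange(HEIGHT),
                 random.choice((-1, 1)), random.choice((-1, 0, 1)))
                for _ in range(8)]
    survived = 0
    for _ in range(ticks):
        dx, dy = choose_move(player, missiles)
        player = ((player[0] + dx) % WIDTH, (player[1] + dy) % HEIGHT)
        missiles = step_missiles(missiles)
        if collides(player, missiles):
            break
        survived += 1
    return survived

print("player survived", run(), "of 200 ticks")
```

The point, in keeping with Moore’s remark below, is how little machinery those “decisions” actually require.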

While CMU’s Moore refrained from refuting Russell directly, he did gently imply that the computer-based systems we presume to be complex are actually devastatingly simple. It’s the complex human-based systems that pose the real problems, and the tasks we consider simple are precisely these complex systems.

The example Moore offered: the problem, faced by both roboticists and AI engineers, of programming a mechanical arm to lift any open container of liquid the way a person would. Once that problem is solved, conceivably, tens of millions of physically impaired people could benefit from one solution to a special-purpose AI problem.

“I don’t think we need to worry about what’s defined as ‘artificial intelligence,’” said Moore at one point. “It just turns out, over and over again, the things which we really thought were fancy and clever — like playing Atari video games — turn out to be quite easy to implement. And other things which we thought should be pretty easy — we all think it’s easier to pick up a glass than to drive a vehicle. It turns out to be the other way.”

For More Information: