Brilliance is a difficult product to sell to a mass market or audience. The use case for brilliance as a virtue in business, astonishingly, has yet to be proven, especially in light of the realization that “disruption” — that most cherished virtue of business today — does not require it.

Prof. Marvin L. Minsky of MIT was, by any measure, a brilliant man. His passing this last Sunday removes from the discussion of the role of computers in society a singular source of wisdom — one that human beings, I sincerely believe, will never acquire the skill, or even the instruction manual, to reproduce.

“Intelligence” implies the capability to render a decision, regardless of its eventual efficacy. A person declared smart is usually one whose decisions bear fruit, but an intelligent person at least tries.

“Artificial intelligence” is a concept that Minsky pioneered. I fear that, in recent days, AI algorithms were employed to attach seemingly appropriate metaphors to his obituaries.

Minsky “revolutionized artificial intelligence,” reported the Christian Science Monitor, a publication which should have enough archival material of the man on hand to have known he was an educator and scientist, not an overthrower of centralized power.

To this day, AI hasn’t really overthrown anything. In fact, its central tenet, as brilliantly posited by Minsky, has yet to make all that much of a dent.

The Lines Between Machine and Man

Minsky believed that, if a machine displayed enough capability for decision-making that human beings would declare it competent, then it would indeed be virtually intelligent. So if it made the right decisions, it would be virtually smart.

And if it appeared to people as though machines felt gladness over their achievements or sympathy for the failures of others, be they human or mechanical, then they would be virtually sensitive, equivalently emotional. In time, all the factors that Alan Turing (another brilliant, desperately missed thinker) qualified as indicators of “thinking” would spring forth from machines, and to the extent we humans could not tell the difference, there would not be one.

“Suppose that we wanted to copy a machine, such as a brain, that contained a trillion components,” wrote Minsky in 1994 for Scientific American. “Today we could not do such a thing (even were we equipped with the necessary knowledge) if we had to build each component separately.”

“However, if we had a million construction machines that could each build a thousand parts per second, our task would take only minutes. In the decades to come, new fabrication machines will make this possible.”
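
Minsky’s arithmetic is easy to check. Below is a minimal sketch of it in Python, using only the figures from his own essay rather than measurements of any real fabrication system:

# Minsky's back-of-envelope figures from the 1994 Scientific American essay
components = 10**12        # "a trillion components"
machines = 10**6           # "a million construction machines"
parts_per_second = 10**3   # "a thousand parts per second," per machine

seconds = components / (machines * parts_per_second)
print(seconds, "seconds, or about", round(seconds / 60, 1), "minutes")
# prints: 1000.0 seconds, or about 16.7 minutes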


Minsky’s supposition relies upon the inexorable tide of progress. If we humans want to do something badly enough, then at some point, this line of reasoning projects, the evolution of automation will make our desires practical.

This week, in revisiting the Minsky dream in the context of Google’s announcement that its experimental algorithms defeated the European champion at the game of Go, the PBS News Hour cited that long-misinterpreted scale of progress, Moore’s Law. Since computing power doubles every two years, the program’s guest suggested, Minsky’s dream of a “thinking machine” may certainly be realized within the lifetimes of today’s college students.
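
The guest’s optimism rests on nothing more than compound doubling. Here is a minimal sketch of that naive projection in Python, assuming (as the guest did, and as the next few paragraphs dispute) that “computing power” really does double every two years:

# Naive Moore's Law projection: a doubling of "computing power" every two years
doubling_period_years = 2
for years in (10, 20, 40):
    factor = 2 ** (years / doubling_period_years)
    print(f"{years} years from now: {factor:,.0f} times today's computing power")
# prints 32x at 10 years, 1,024x at 20 years and 1,048,576x at 40 years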

Computers are getting smarter, automation is getting faster and components are getting cheaper. At this rate, computers could soon replace anchorpeople and reporters, and the News Hour could become affordable for PBS to produce again.

(I’d better stop there before I endanger myself.)

It’s that word “Suppose” that throws us off — that first word in that particular segment of Minsky’s essay. It compels us to enter a kind of fantasy realm, a conceit wherein only select factors of economics and society are at play, and where the obvious obstacles are blurred out.

Although Intel continually manages, somehow, to “cram” (Gordon Moore’s own word for it) twice as many transistors onto a die as it did 18 months prior, we know for a fact that computing “power” does not double in that period, or in two years, or even in 10.

In fact, the very act of “cramming” can physically work against the power of a processor — a death sentence which Intel, either through brilliance or the grace of some unseen force, manages to escape with (nearly) every “tick” and “tock” of its product cycle. Intel’s processor improvements have often come despite its simultaneous calling to fulfill the prediction of its co-founder.

That fact is not so obvious, admittedly. What follows is in front of our faces.

Purpose-Built vs. General Purpose

As Stuart J. Russell, professor of computer science and Smith-Zadeh Professor in Engineering at the University of California, Berkeley, stated rather plainly at the World Economic Forum in Davos last week, the gains AI programs have made at playing trivia games, winning at Go or driving cars may actually have come at the expense of their ability to learn about the world in general, and to make general decisions about it.

Specialization comes from the improvement of a decision-making process through the accumulation of rules pursuant to that process alone, Russell pointed out. So the algorithm that drives Google’s cars on highways cannot possibly evolve to the point where it could be perceived to defeat Republican presidential candidates in a debate, although it could possibly drive faster or more safely between any two casinos.
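
Russell’s distinction can be put in concrete, if deliberately oversimplified, terms. The sketch below is my own illustration, with hypothetical state encodings; the point is only that a policy accumulated as rules keyed to one task’s states has nothing at all to say about another task’s states:

# A specialist "policy" maps the states of one task, and only that task, to actions.
# (Both state encodings here are hypothetical, for illustration only.)
go_policy = {
    ("black_to_play", "corner_position_17"): "defend_the_3-3_point",
    ("black_to_play", "center_fight_02"): "extend",
}

driving_state = ("lane_2", "65_mph", "car_40_meters_ahead")

# The driving state is not merely something the Go specialist hasn't learned yet;
# it isn't even in the specialist's vocabulary, so no amount of Go training helps.
print(go_policy.get(driving_state, "no rule applies"))   # prints: no rule applies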

The virtue of Turing’s machine is not that it broke the Germans’ Enigma code, but that it could have been programmed to do other things. The brilliance of Turing is that he knew this.

More to the point, a purpose-built machine for breaking Enigma was less practical than a general-purpose machine that happened to break Enigma. Turing knew this too.

If the future were always a magnification of the present, all the world’s wars would have been resolved long ago with a turn of the crank. Turing probably tried this once or twice.

Today, the experimental AI that gleans general-purpose decision-making paths from the world it observes is an effort at accumulating the specialist rules necessary to perform discrete categories of jobs — arguably, at programming itself to drive a car or play Go without first knowing the difference between one and the other.

At some point between Turing’s experiment and Minsky’s, it ceased to be more practical to apply a general-purpose approach to intelligence than a specialist approach.

The Numbers Don’t Add Up

We tend to think of artificial intelligence as a mission to replace people — or, as Minsky put it, to automate common-sense thinking. And we frame progress as an inexorable chain of events toward the goal that Shakespeare proposed and that Andrew Moore, dean of the School of Computer Science at Carnegie Mellon, echoed last week in Davos: getting rid of all the lawyers.

Yet the moment we embark upon that conceit, we blur our vision to the obvious fact that, on the economic scale, in terms of jobs that require thinking, computers stopped replacing people at anything a long time ago. Robots replaced laborers, but that was for jobs like manufacturing, shipping and signing thank-you letters to political constituents.

For what Moore blatantly called “white-collar work,” the more logic we need to support the jobs we do, the more people we need to support the logic. The more we build new data centers, expand the scope of the cloud, and scale formerly monolithic applications into millions of microservices, the more human beings we need to maintain these systems.

When Salesforce CEO Marc Benioff says all companies are becoming digital companies, or investor Marc Andreessen says they’re all becoming software companies, neither man means they’re becoming artificially intelligent. They mean companies are converting themselves into life-support systems for digital architectures.

At the scale Minsky projected for copying the circuitry of the human brain, the number of people needed to maintain the resilience of such a system — let alone the folks who’d be needed to staff the call centers and support the customer experience — would far exceed the number of people whose thought processes the system would replace (a number potentially as low as one).

It does not take brilliance, or in many organizations even a shred of intelligence, for a person to make a cost/benefit analysis of replacing a mind with a microservices architecture. Somewhere, probably, such an analysis has already been done.

What We Are Really Working Towards

Besides all that, the everyday folks who follow the technology industry no longer consider reducing the workforce to be a general benefit.

What’s the subject of the big tech stories that all the AI algorithms at Google automatically promote to the top of the front page? Startups, led by “serial entrepreneurs” who can’t resist the impulse to start yet another company, for distributing yet another app, for performing yet another mundane task, like enabling folks to do the laundry for other folks they don’t even know.

The mark of their success, day after day, is how many people these startups employ. When they succeed, we call it disruption. When automation or efficiency or even dwindling popularity forces these startups to lay off hundreds or thousands, we call that failure.

When we stop “supposing,” put down the View-Master of our imagination, and examine the society in which we live, we realize that the basic goal of Marvin Minsky’s supposition is something we don’t actually want. Moreover, perhaps without our realizing it, we as a society are building computing systems in such a way as to actively avoid it ever coming to fruition.

And when not enough people want something, and no means of artificially rendering it popular presents itself, it doesn’t get made.

That may sound like a terrible conclusion to bring up, just days after the passing of such a brilliant man. Yet brilliance has never had to be right to be great.

Title image of Prof. Marvin Minsky from an MIT video on “Layered Knowledge Representations,” viewable on YouTube.

For More Information:

World Forum: AI Should Automate ‘White-Collar’ Work

Where Moore’s Law Dead-Ends

Is AI the Missing Piece of Your Marketing Automation?