It’s 1985 and I’m at UCLA, attending my very first International Joint Conference on Artificial Intelligence.

As a young researcher tasked with pioneering AI in my corporate research lab, this is an exciting opportunity. We are at the very peak of the AI hype curve.

This is no ordinary academic conference. The Japanese had announced their Fifth Generation computing project, which promised to make logical inference a fundamental computing operation.

The USA responded with a 10-year research project, Cyc, to give computers common-sense reasoning power. And HAL of 2001: A Space Odyssey still had 16 years to come to fruition.

And we were just a hop, step and jump from Silicon Valley. The exhibition area would have made CeBIT proud.

The Turing Test

The talk is all about the LISP machine, custom built for AI applications. For a price equivalent to a Manhattan apartment, you got the pleasure of writing AI applications in a bespoke AI programming language (LISP – Lots of Insane Stupid Parentheses) in a windowed environment (before MS Windows existed).

Alan Turing’s Turing Test articulated the early ambitions for AI.

He proposed that a system could be deemed "intelligent" if a human could not distinguish its interactions from those of a real human. In fact, this proved not too challenging.

The infamous ELIZA system, arguably the world’s first chatbot, was able to mimic the conversation of a human psychotherapist and fool many of its early users simply by manipulating and feeding back the text strings its users typed.

Artificial intelligence was achieved, but the value was also artificial.

Optimizing AI

Without getting overly technical, AI is all about searching for useful solutions from a network graph of potential options.

The size of this solution space and how it is effectively analyzed and searched is the key. The larger the solution space, the greater the risk of failure.

Poor analysis and search strategies can also lead to failure. Games like chess have an admittedly large solution space, but defined rules of play.
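To make the idea concrete, here is a minimal sketch of the kind of systematic search this view of AI rests on: exploring a graph of options until a useful solution is found. The graph below is an invented toy example, not any particular AI problem.

```python
from collections import deque

# A toy "solution space": each state maps to the options reachable from it.
# Real problems have vastly larger graphs; this one is invented for illustration.
GRAPH = {
    "start": ["a", "b"],
    "a": ["c", "goal"],
    "b": ["c"],
    "c": ["goal"],
    "goal": [],
}

def search(graph, start, goal):
    """Breadth-first search: the simplest systematic way to explore a space."""
    frontier = deque([[start]])   # paths still to extend
    visited = {start}             # avoid revisiting states
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path           # first path found is a shortest one
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(search(GRAPH, "start", "goal"))  # ['start', 'a', 'goal']
```

Even this tiny example hints at the scaling problem: with a branching factor of b and a search depth of d, an uninformed search like this can visit on the order of b^d states, which is why the size of the solution space, and the strategy used to prune it, matters so much.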

IBM, the company that appears to be betting its future on cognitive computing, long ago demonstrated mastery of the game through its Deep Blue initiative, using some hefty computing machinery to help speed the search. As complex as chess may be, real-world situations are even more complex and hence more challenging.

Making AI Useful in the Workplace

We didn’t purchase a LISP machine.

We did however spend the next decade doing applied AI R&D, for my former employer’s steel mills, petroleum fields, mines and transport businesses. We had a few wins with deployed applications and many failures, as we navigated what it took to do artificial intelligence for ‘real’ rather than ‘artificial’ business value.

The following graphic succinctly encapsulates our learning from a decade of applied AI.

AI value

Our successes with ‘real’ value were achieved mostly at the bottom two levels.

The base level is exemplified by utilities that help you fill out standard name and address forms.

At the next level, perhaps a CAD system that checks that dimensions are consistent and meet building standards.

The third level is where expert systems are targeted, providing advice that can be overridden.

Up to here the system is augmenting human performance. Full automation is complex even for what might be seen as a simple task, e.g. building a school timetable.

We built a few of the top-tier applications. They had increased complexity and therefore tended to cost a lot, and they achieved some impressive demonstrations, but in the end the humans preferred to do the task themselves rather than trust a system.

Constraining the AI Search Space

Twenty years on I suspect this value chart still largely holds true.

We have recently seen what can go wrong when the search space is too large, with Microsoft’s recent Twitter bot embarrassment.

If you look at current chatbot deployments — take Microsoft's Cortana and Apple’s Siri as examples — they work best if you constrain the search space by having a complete calendar, contact directory and a rich profile to guide the search.

fully automated systems

This is not to say that fully automated systems should not be pursued. Robotics has always held a fascination for us.

Being Australian myself, I enjoyed the story a colleague of mine from the Carnegie Mellon Robotics Institute once recounted: his Crocodile Dundee moment at a major gathering of robotics institutes from around the world.

At the time the gold standard was robotic mice that could navigate a maze. But they were blown away when this big Aussie guy came in with a video of a robot that could shear sheep! A real "you call that a robot…?" moment!

The Dawn of Autonomous Robots

Fixed function robots have been working well for decades in auto manufacturing, for example.

But it is the autonomous robots that provide the real challenge. Their vision systems are required to search and navigate a nearly infinite physical space.

Robot soccer has replaced the robotic mice. Yet despite the major advances in computing power, it still remains a non-trivial exercise, as a comical short clip I took at a recent CeBIT exhibition shows.

Of course it’s in autonomous self-driving cars that the breakthroughs have been made. It’s not surprising that a company that has built its reputation on searching large linked artifacts across the internet is now leading the way.

Google appears to be successfully exploiting that capability in areas beyond the internet.

Extending potential AI applications beyond the personal level to the ‘social network’ level, we are faced with another graph searching opportunity: the social graph.

social graph

This is the graph of your connections and your connections’ connections. The social graph underpins all social networking platforms like Facebook, LinkedIn, Twitter, Yammer and the like.

There is a reason why these companies keep their social graphs hidden. The power of the social graph is that it is explicitly defined, albeit at different levels of clarity.

The ability to effectively search the social graph offers unending opportunity for value creation. The ubiquitous “here are some people you might want to connect with” used by Facebook and LinkedIn is derived directly from searching your social graph.

But to extract that value, the analytics need to be relationship-centered, i.e. focused on the links between people and not just on their activity profiles. The following chart shows an example of how activity metrics fail to identify social cohesion within groups.

activity measures are a poor indicator of social and collaboration performance

The data was taken from a Swoop Dashboard analysis of 19 active Yammer groups within a single enterprise. Social Cohesion, which is used as a collaboration performance indicator, is measured by identifying the proportion of reciprocated relationships that exist within a group.

Many highly active groups showed no social cohesion, while several lower activity groups showed very high social cohesion.
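Swoop’s exact formula isn’t reproduced here, but the underlying idea, the proportion of directed ties that are reciprocated, can be sketched as follows. The interaction data is invented for illustration:

```python
def social_cohesion(edges):
    """Proportion of directed ties that are reciprocated.

    `edges` is a set of (sender, receiver) pairs, e.g. who replied to
    or mentioned whom in a group.
    """
    if not edges:
        return 0.0
    reciprocated = sum(1 for (a, b) in edges if (b, a) in edges)
    return reciprocated / len(edges)

# Hypothetical group interactions: ann and ben reply to each other,
# but ann's message to cat and dan's message to ann go unanswered.
group = {("ann", "ben"), ("ben", "ann"), ("ann", "cat"), ("dan", "ann")}
print(social_cohesion(group))  # 0.5: two of the four ties are reciprocated
```

Note that a group could generate a huge volume of posts (high activity) while scoring near zero here, which is exactly the mismatch the chart above illustrates.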

AI: From Disillusionment to Enlightenment?

Clearly the "intelligence" provided by traditional top-down activity measures delivers only "artificial value" when trying to predict collaboration performance.

This mismatch, applying hierarchical search methods to search spaces that are largely relationship-focused, is precisely why Google was able to surpass Yahoo, the previous incumbent leader in internet search.

(This is something AI researchers have known for a long time. The LISP language was designed to rapidly search linked lists.)

So in the language of Gartner, has AI now emerged from the AI winter’s "trough of disillusionment," where artificial intelligence meant artificial value? Are we indeed climbing up the slope of enlightenment toward true business value?

It certainly seems that current established AI leaders like Google and IBM are betting on it.

What it will take, however, is careful management of the solution search space, matched with appropriate relationship-centered analytics and search, if real value is now to be achieved from AI.