Facebook's ParlAI has high ambitions for the future of AI and NLP, but it's a future that's far off. PHOTO: Bill Benzon

One of Facebook's more forward-looking artificial intelligence initiatives is ParlAI (pronounced par-lay), a recently released open source platform developed by the Facebook AI Research team. Its raison d'être is to create and test dialog models, and to train those models across many datasets at once through multi-tasking.

Translation: ParlAI aspires to be an AI system capable of having a real conversation with a human — including carrying on a back-and-forth about a particular goal as well as fielding the odd and unexpected question.

For the research community, ParlAI represents a significant leap forward in natural language processing (NLP) development. Instead of using one or two datasets, researchers have 20 at their disposal, with more expected to follow. Perhaps more significantly, ParlAI integrates with Amazon Mechanical Turk, allowing researchers to train and evaluate their dialog models live against its on-demand workforce.

For the business and software communities, ParlAI brings the ideal of truly conversational chatbots and similar applications that much closer.

Webpages Are Old School

Such a development would be the holy grail for the business community, at least the segment that is starting to realize the potential chatbots have to offer and the limitations of webpages and apps.

Webpages are dying, says Peter Friedman, founder and CEO of LiveWorld, whose platform integrates chatbots with enterprise applications.

"It's just that we have gotten used to the idea that online content only comes in the form of a webpage or an app. But if you think about it, that type of interaction does not reflect the main way in which humans learn and process information," Friedman says.

"The real action is talking with each other, chatting."

That is why WeChat is so popular in China, he adds.

"When someone wants to find information, they don’t go to search but instead say what it is they are looking for into WeChat."

Our current generation of chatbots is sadly far from this ideal, a reality acknowledged in a recent research paper about ParlAI [PDF].

"Existing chatbots can sometimes complete specific independent tasks but have trouble understanding more than a single sentence or chaining subtasks together to complete a bigger task," the authors say.

'Expect Some Adventures and Rough Edges'

To hear Facebook tell it, a chatbot capable of carrying on a human-like interaction is still far off.

ParlAI is an early-release beta and, as Facebook AI states on the ParlAI webpage, users should “expect some adventures and rough edges.”

But even the early beta makes clear that this is an extraordinary tool.

Facebook has gathered about 20 datasets from various sources.

There is, for example, a dataset called SimpleQuestions, a set of 100,000 natural-language questions originally used to benchmark large-scale simple question answering with Memory Networks.

Another example is the Stanford Question Answering Dataset (SQuAD), which consists of questions posed by crowd workers on a set of Wikipedia articles, where the answer to every question is a segment of text from the corresponding reading passage. The dataset contains more than 100,000 question-answer pairs on over 500 articles.
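To give a sense of how these datasets are exposed inside the framework, the sketch below, adapted from the display-data example distributed with ParlAI, steps through a few SQuAD question-answer pairs using the built-in RepeatLabelAgent. It is a minimal illustration, not a definitive recipe; module paths and option names reflect the initial release and may have changed since.

    # Minimal sketch based on ParlAI's display-data example; paths and
    # defaults reflect the initial release and may differ in later versions.
    from parlai.core.params import ParlaiParser
    from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent
    from parlai.core.worlds import create_task

    # Standard ParlAI options; the dataset is selected with -t/--task.
    parser = ParlaiParser()
    opt = parser.parse_args(['--task', 'squad'])

    # RepeatLabelAgent is a trivial baseline that echoes the gold label,
    # which makes it handy for simply inspecting a task's dialogs.
    agent = RepeatLabelAgent(opt)
    world = create_task(opt, agent)

    # Step through a handful of question-answer exchanges and print them.
    for _ in range(10):
        world.parley()
        print(world.display())
        if world.epoch_done():
            break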

Having so many datasets to work from — and multi-task across — is almost unprecedented for researchers, who are used to having one or two at best. That, the research paper noted, is not an ideal environment.

"Working on individual datasets can lead to siloed research, where the overfitting to specific qualities of a dataset do not generalize to solving other tasks." 

The datasets are divided into categories that range from the simplest form of dialog (question answering) to practical applications (goal-oriented dialog, such as a customer and a travel agent discussing a flight, or a friend recommending a movie to watch) to the highly complex. An example of the latter is dialog that refers to physical objects, requiring visual dialog tasks with images as well as text. "In the future we could also add other sensory information, such as audio," the paper states.
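Multi-tasking across these datasets and categories reuses the same mechanism as loading a single one: the --task option accepts a comma-separated list, and create_task then builds a world that interleaves episodes from each dataset. Continuing the sketch above (the task names are illustrative, and the exact syntax depends on the ParlAI version):

    # Continuing the earlier sketch: the only change is the task string.
    # A comma-separated list tells ParlAI to multi-task over several datasets.
    opt = parser.parse_args(['--task', 'squad,wikiqa'])

    agent = RepeatLabelAgent(opt)
    world = create_task(opt, agent)  # builds a multi-task world

    # Successive exchanges now alternate between the underlying datasets.
    for _ in range(10):
        world.parley()
        print(world.display())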

The Icing on the ParlAI Cake: Humans

Perhaps the most distinctive aspect of the ParlAI framework is its relationship with Amazon Mechanical Turk. The people who sign up to work for the service give researchers a large, live audience against which to train and evaluate their models. Not only that, but other researchers are drawing on the same pool of workers for their tests.

"This enables comparison of Turk experiments across different research groups, which has been historically difficult," the paper noted.