It could happen to you: You’re seated around a conference table with your fellow employees, talking with each other on Yammer or Slack because it’s that much easier than speech itself.
But it’s dull. The endless streams of unrestrained emoji have brought everyone down. And if not for the absence of an online connection with yourself, you’d be saying to yourself, “Gee, I wish some form of artificial intelligence could inject some much-needed humor into our daily discourse.”
Fear not, for like the White Tornado that came to housewives’ rescue in 1960s detergent commercials, IBM Watson has come forth with the solution: artificial humor.
“What do you call a murderer with moral fiber? A cereal killer!” announced Vinith Misra, an IBM Research staffer and consultant on the HBO series Silicon Valley, offering one key example of his team’s efforts to build an effective algorithm capable of improving human behavior through the interjection of puns.
“Computers today are getting smarter, but they’re also developing a sense of humor,” explained Misra to attendees at the IBM Connect conference in Orlando, Florida, Tuesday morning.
“And as these algorithms of humor continue to evolve, they have the potential to change how we relate to our machines, and also how we relate to each other. And to be clear, I don’t think this is just a curiosity. As computers increasingly surround us in our lives, it’s becoming a necessity.”
Through an exercise in self-examination one day, said Misra, he realized how many times a day he wished he could smash his laptop against a rock. People, he noted, are at least as frustrating.
However, it is humor that acts as the “safety valve” preventing him from smashing people against a rock, he told the audience.
“Humor has a way of instantly connecting us with each other, even complete strangers,” he said, before presenting statistics attesting that the average person is 30 times more likely to laugh in the company of other people than alone.
“More broadly, you can think of humor as being the WD-40 of human interaction,” he added. “And in a world where we’re increasingly surrounded by these machines, we’re desperately going to need this lubrication, or we’re going to overload from the frustration.”
If good humor can be efficiently manufactured, Misra showed, it could have many potential use cases beyond mere interjection into business conversations. At one point, he showed how good humor could amplify the appearance of anyone’s popularity on social networks.
This is being made possible, he pointed out, through the sheer magnitude of exchanged viewpoints on Twitter. When people tweet that they like something or someone, evidently their choices of words and hashtags form recognizable patterns.
One such pattern is the “Belieber tweet,” which characterizes fans’ responses to major events, he said, such as a Justin Bieber concert. Citing the work of Stanford University student and Google DeepMind intern Andrej Karpathy, IBM’s Misra told how an algorithm called char-rnn is capable of ascertaining and repeating patterns of social network chats that appear, to an ordinary observer, to be tweets from genuine fans.
Misra said he fed char-rnn a database of Belieber tweets, thus training it to generate similar-looking tweets. One artificial Bieber tweet it generated read, “I love the way he is so beautiful baby.”
“#MTVhottest I love you so much,” with a crying emoji, is another prime cut.
Theoretically, an algorithm could detect the lagging popularity of any potential celebrity client online, and compensate with artificial affection.
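Char-rnn itself is Karpathy’s Torch/Lua implementation of a multi-layer LSTM, far beyond what fits in a few lines. But the underlying framing — learn which character tends to follow which context, then sample characters one at a time — can be illustrated with a toy character-level Markov model. The corpus and function names below are illustrative, not from IBM’s or Karpathy’s code:

```python
import random
from collections import defaultdict

# Toy stand-in for char-rnn: a character-level Markov model that
# learns next-character statistics from a tiny training corpus.
# (char-rnn proper is a multi-layer LSTM; this sketch only captures
# the "predict the next character" framing.)

def train(corpus, order=3):
    """Map each length-`order` context to the characters seen following it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample characters one at a time, conditioned on the last `order` chars."""
    rng = rng or random.Random(0)
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # context never seen in training; stop generating
        out += rng.choice(choices)
    return out

# A (hypothetical) scrap of fan-tweet training data:
tweets = "i love you so much #mtvhottest i love the way he sings "
model = train(tweets)
print(generate(model, "i l"))
```

With a real corpus of millions of fan tweets, even this crude scheme starts to echo the fans’ phrasing; char-rnn’s neural approach does the same job with far longer memory and far more fluency.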
Mocking the Mind
Char-rnn is one example of a multi-layer recurrent neural network (thus the “rnn” in its name); the “char” signals that it models text one character at a time.
In one of the more conventionally applied models of neural networking, called the “feed-forward” model, data passes through the system in one direction, layer by layer. Like a silkscreen filter, each layer bleeds through a transformed portion of the data onto an emergent pattern, which tends to resemble some portion — perhaps imperceptibly, to humans — of the original data.
The feed-forward model has been used to help algorithms identify written characters in any font, or from a multitude of handwriting examples.
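A minimal sketch of the feed-forward idea: input values flow through each layer exactly once, with no loops. The weights below are hard-coded toy numbers, not trained values, purely to show the one-way data flow:

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer with a tanh nonlinearity."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def feed_forward(x):
    # Layer 1: 2 inputs -> 3 hidden units. Data moves strictly forward;
    # nothing computed here is ever fed back in.
    h = dense(x, [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1])
    # Layer 2: 3 hidden units -> 1 output
    return dense(h, [[0.7, -0.5, 0.2]], [0.05])

print(feed_forward([1.0, 0.0]))
```

In a character-recognition setting, the inputs would be pixel intensities and the outputs one score per candidate character; training would then adjust the weights rather than hard-coding them.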
But the recurrent model is different from feed-forward in one key respect: The network’s output at each step is fed back in as an input to the next step, giving the system a kind of short-term memory of the sequence it is processing.
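The recurrent difference can be shown in a few lines: the state produced at one step re-enters the computation at the next, so the same input can yield different outputs depending on what came before. Again, the weights are illustrative toy values:

```python
import math

def rnn_step(x, h, w_x=0.6, w_h=0.9, b=0.0):
    """One recurrent step: the new state depends on the input AND the previous state."""
    return math.tanh(w_x * x + w_h * h + b)

def run(sequence):
    h = 0.0  # initial hidden state
    states = []
    for x in sequence:
        h = rnn_step(x, h)  # previous state fed back in: the recurrent loop
        states.append(h)
    return states

# Three identical inputs produce three different states,
# because each step also sees the accumulated history.
print(run([1.0, 1.0, 1.0]))
```

It is this carried-over state that lets a character-level model like char-rnn remember, mid-tweet, what it has already written.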
RNN models have been applied in academia, for example, to study the effects of futures trading on crude oil prices. But Google and IBM researchers are foreseeing the obvious benefits of RNN for manufacturing jokes and for supplementing popularity.
Equally as lucrative, said Misra, is the potential for generating artificial political speeches.
Already at work dissecting human speech patterns on social networks, he said, are irony detectors; and already responding to those patterns online are insult generators.
“In some sense, we’ve used data to paint these machines’ personality,” said the IBM Research fellow, “and I call that a step in the right direction.
“We’re really just at the beginning of these things, because there is so much in the world of humor that these algorithms have yet to exploit or understand. The time to take things forward could not be better.”