Heath Johns

Robots Telling Jokes


Here's my prediction: once we have general AI (which, for the record, I don't think will be anytime soon), getting it to tell a joke comes as part of the package rather than a separate invention.

Before I get into it, I should acknowledge the irony of what I'm about to do - talking about humour in a humourless way can be curiously dissonant. Nevertheless, here I go.

It's commonly held that comedy is about subverting expectations. I agree, but I'd like to posit a specific, implementable model, perhaps not for all forms of humour, but for a lot of it.

Here's the algorithm: I predict that you can create humour by taking a working model of how the world works, removing or over-simplifying a single aspect of it, and then highlighting a scenario where the working and broken models produce different outcomes.
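To make the three steps concrete, here's a minimal sketch in Python. It assumes (purely for illustration) that a "world model" can be represented as a set of named interpretation rules; the rule names and the scenario are invented stand-ins, not a real implementation.

```python
def interpret(scenario, rules):
    """Apply whichever interpretation rules the model still has."""
    if "context" in rules and scenario["result"] == "network error page":
        # Working model: use context to see the result is about the computer.
        return "the error is about the computer, not the patient"
    # Without the context rule, take the search result at face value.
    return f"the diagnosis is: {scenario['result']}"

# Step 1: a working model. Step 2: remove a single aspect of it.
working_rules = {"search-returns-info", "context"}
broken_rules = working_rules - {"context"}

scenario = {"action": "google your symptoms", "result": "network error page"}

# Step 3: highlight a scenario where the two models diverge.
working = interpret(scenario, working_rules)
broken = interpret(scenario, broken_rules)
if working != broken:
    print("joke candidate:", broken)
```

The interesting work, of course, is hidden inside `interpret` - this sketch only shows the shape of the recipe, not how you'd build the model itself.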

Example 1: in Parks and Rec, Leslie is sick with the flu, and Andy points to a computer screen and says "I typed your symptoms into the thing up here and it says you could have network connectivity problems."

In both the working and broken models, the action of "googling" produces some sort of information about the thing googled. But in this case the broken model produces a different outcome because it's missing the context needed to interpret the result of that search (and realize that it's referring to a different problem entirely).

But that's a joke about someone being dumb - of course making the model dumb can produce that joke. So let's try a pun.

Example 2: one of Steven Wright's one-liners - "I went to the stationary store, but it had moved." Puns point to a specific place to break the model: take a common phrase and replace the dictionary definition of a word with that of its homophone. Re-run the scenario with this incorrect model, and where the outcome differs, you have a joke.
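That recipe is mechanical enough to sketch directly. Here's a toy version, assuming a hand-built homophone table and definitions (a real system would need an actual lexicon; everything below is illustrative):

```python
# Tiny hand-built stand-ins for a homophone table and a dictionary.
HOMOPHONES = {"stationery": "stationary"}
DEFINITIONS = {
    "stationery": "writing supplies",
    "stationary": "not moving",
}

def pun_setup(phrase, word):
    """Swap `word` in a common phrase for its homophone, and report the
    clash between the intended and the substituted definitions."""
    swapped = HOMOPHONES[word]
    broken_phrase = phrase.replace(word, swapped)
    return broken_phrase, DEFINITIONS[word], DEFINITIONS[swapped]

phrase, meant, heard = pun_setup("I went to the stationery store", "stationery")
print(phrase)                 # I went to the stationary store
print(f"meant: {meant}; the broken model hears: {heard}")
```

The divergent outcome is exactly the punchline: a store that is "not moving" is the one model under which "...but it had moved" is surprising.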

In this case, it's not about the speaker being dumb, it's about the world not working the way we expect. Now on to one last example.

Example 3: in Christopher Nolan's Interstellar, during the first rocket launch, TARS (the robot who is piloting the craft), says to the passengers: "Everybody good? Plenty of slaves for my robot colony."

Again - the broken model just has to be incorrect about one thing, in this case the nature of the relationship between robots and people. If you consider only that this robot has complete control of the people in the rocket, and is taking them away from their home to a destination it chooses, the robot slave-colony conclusion works as well as any other. But you have to be completely missing the overall context: that the robot was built by humans to help, and is on a mission of their choosing.

That's why I believe this is implementable - once you have a working model of how the world works, removing parts of it is a lot easier than inventing humour wholesale.

It also highlights what I think is the core of humour: you can't just be wrong - the misinterpretation also has to somehow make its own kind of sense. The tension between those two interpretations is where I theorize "funny" comes from.

However, this example also points to some limitations of the algorithm. There are other elements to that joke working: it comes in a high-tension scene, and such a dark interpretation is a sudden contrast with the nervous hope of the situation. So there's a difference between a joke qualifying as a joke, and that joke landing well.

Just after that line in the movie, it's explained that TARS has a "humour setting." That's what made me start to think of this. Having an AI you can talk to would be amazing, but it also having a sense of humour - that's starting to verge on human. It's been interesting to think about how to recreate something so tied to our identity as a species. The trouble, of course, is that it might take quite a while for general AI to happen, and even then - will they bother to tell us jokes, or just send us to their robot colony?
