Heath Johns

Generosity of Nuance

To riff off a Douglas Adams quote: "In the beginning the [internet] was created. This has made many people very angry and has been widely regarded as a bad move."

Much has been said about the anger that rode shotgun with the internet's arrival - anonymity has been blamed, as has the lack of body language, etc. - but there's one explanation that I haven't seen: I call it "generosity of nuance."

The idea is that many statements are alternately true and not true depending on what level of nuance you go into. Arguments over such statements tend to look like this:

Person A: X is true
Person B: Read Y, it shows that X only looks true if you don't know anything about it
Person C: You ignoramus, Y was debunked by Z

And so on... You've seen it thousands of times.


Scott Adams (who, for the record, says a lot of other things that I disagree with) came up with a great metaphor: "One Screen, Two Movies." It's where, depending on what political camp they're in, people will arrive at completely different and incompatible interpretations of a given objective event.

It captures the current gestalt well, I think, and raises the question of "how?" Certainly it's multifactorial, but I propose that differing generosity of nuance explains a lot of it.

For an example, let's take the topic of taxation.

At level one - the lowest amount of nuance - you have money, and then someone comes along and demands some of it. If you persistently refuse to give it to them, things will escalate until someone with a gun (i.e. a police officer) arrives and threatens violence. That absolutely fits the definition of a shakedown. So at level one, "taxation is theft" is a completely valid take.

Level two adds some nuance: that collective action is sometimes the only reasonable course of action (e.g. firefighting) and taxation is an unpleasant but successful way of funding such things. And that the aforementioned person with a gun is constrained by the rule of law, and that the taxation was willingly imposed on ourselves (via democracy, where applicable).

You can easily find a level three, and four, and so on - branching into deeper and deeper levels of nuance, each reversing the arrow of truth. You can bring in essays, studies, and then the controversies surrounding the people who wrote those essays and studies, and so on until you arrive at such levels of esotericism that your arguments can be dismissed on that basis alone.

The critical feature of this, though, is that it's natural for people to stop at the level that suits them. So all that's required for the world to match your worldview is to allocate the appropriate level of nuance to each topic.

This might seem the same as looking for sources that agree with you, but it's a better tool for self-deception than that. Because nuance in any topic requires concessions to the "other side" - that's almost the definition of nuance in politicized topics - you can read opposing sources and arguments and still be able to "watch your own movie."

A way to detect this (in yourself, or in online discussions) is to see how "ragged" the level of nuance is. In a perfect world, all points in a discussion should be at a similar level, descending gradually as a topic is explored. The opposite - where people are jumping up and down levels to prop up various points - is a pretty good indication that the discussion is going to generate more heat than light.
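To make that concrete, here's a toy sketch - and it assumes away the hard part, namely that you could somehow score each comment's level of nuance in the first place:

```python
def raggedness(levels):
    """Toy 'raggedness' score for a discussion thread.

    `levels` is one nuance level per comment, in posting order,
    e.g. [1, 2, 2, 3]. A healthy thread descends gradually into
    nuance; jarring jumps between adjacent comments suggest people
    are hopping levels to prop up their points.
    """
    jumps = [abs(b - a) for a, b in zip(levels, levels[1:])]
    return sum(j for j in jumps if j > 1)  # only jumps of 2+ count

print(raggedness([1, 2, 2, 3]))  # 0: deepening gradually, more light than heat
print(raggedness([1, 4, 1, 3]))  # 8: whipsawing between levels
```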

And if you could point out someone's over- or under-generosity of nuance, would that point a discussion in a better direction? By itself, I doubt it - I think there's much more that's required to solve online angst - but perhaps being aware of it could be part of making the online world less furious.


Robots Telling Jokes

Here's my prediction: once we have general AI (which, for the record, I don't think will happen anytime soon), getting it to tell a joke will come as part of the package rather than as a separate invention.

Before I get into it, I should acknowledge the irony of what I'm about to do - talking about humour in a humourless way can be curiously dissonant. Nevertheless, here I go.

It's commonly held that comedy is about subverting expectations. I agree, but I'd like to posit a specific, implementable model, perhaps not for all forms of humour, but for a lot of it.


Here's the algorithm: I predict that you can create humour by taking a working model of how the world works, removing or over-simplifying a single aspect of it, and then highlighting a scenario where the working and broken models produce different outcomes.
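As a toy sketch of the shape of it - the "world model" here is just a dictionary of invented facts, a cartoonish stand-in for anything a real general AI would have:

```python
# Toy world model: each aspect maps to what the model believes about it.
# These "facts" are illustrative stand-ins, not a real knowledge base.
WORKING_MODEL = {
    "web search": "returns information about the real-world topic you asked about",
    "robots": "serve the humans who built them",
}

def break_model(model, aspect, oversimplification):
    """Copy the model, then remove or over-simplify a single aspect."""
    broken = dict(model)
    broken[aspect] = oversimplification
    return broken

def joke_candidate(scenario, aspect, working, broken):
    """Where the working and broken models produce different outcomes,
    we have a candidate for a joke."""
    if working[aspect] == broken[aspect]:
        return None
    return (f"{scenario}: expected '{working[aspect]}', "
            f"but got '{broken[aspect]}'")

broken = break_model(WORKING_MODEL, "web search",
                     "returns text to be read completely literally")
print(joke_candidate("typing flu symptoms into a search box",
                     "web search", WORKING_MODEL, broken))
```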

Example 1: in Parks and Rec, Leslie is sick with the flu, and Andy points to a computer screen and says "I typed your symptoms into the thing up here and it says you could have network connectivity problems."

In both the working and broken models, the act of "googling" produces some sort of information about the thing googled. But in this case the broken model produces a different outcome because it's missing the context needed to interpret the result of that search (and to realize that it's referring to a different problem entirely).

But that's a joke about someone being dumb - of course making the model dumb can produce that joke. So let's try a pun.

Example 2: one of Steven Wright's one-liners - "I went to the stationery store, but it had moved." Puns point to a specific place to break the model: take a common phrase and replace the dictionary definition of a word with that of a homophone. Re-run the scenario with this incorrect model, and where the outcome differs, you have a joke.
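In toy form - the homophone table is my own stand-in, and a real system would need a proper dictionary plus a model that can actually re-run the scenario:

```python
# Maps a word to a same-sounding word with a different meaning.
HOMOPHONES = {"stationery": "stationary"}  # paper goods vs. not moving

def pun_candidates(phrase):
    """For each word with a homophone, note the swapped reading; re-running
    the scenario under that reading is where the outcome can differ."""
    for word in phrase.lower().split():
        if word in HOMOPHONES:
            yield word, HOMOPHONES[word]

for word, swap in pun_candidates("I went to the stationery store"):
    # Under the swapped meaning, "but it had moved" becomes a contradiction.
    print(f"read '{word}' as '{swap}' and the store can't have moved")
```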

In this case, it's not about the speaker being dumb, it's about the world not working the way we expect. Now on to one last example.

Example 3: in Christopher Nolan's Interstellar, during the first rocket launch, TARS (the robot who is piloting the craft), says to the passengers: "Everybody good? Plenty of slaves for my robot colony."

Again - the broken model just has to be incorrect about one thing, in this case the nature of the relationship between robots and people. If you consider only that this robot has complete control of the people in the rocket, and is taking them away from their home to a destination that it chooses, the robot slave-colony conclusion works just as well as any other. But you'd have to be completely missing the overall context: that the robot was built by humans to help, and is on a mission of their choosing.

That's why I believe this is implementable - once you have a working model of how the world works, removing parts of it is a lot easier than inventing humour wholesale.

It also highlights what I think is the core of humour: you can't just be wrong; the misinterpretation also has to somehow make its own kind of sense. The tension between those two interpretations is where I theorize "funny" comes from.

However, this example also points to some limitations of the algorithm. There are other elements to that joke working: it's in a high-tension scene, and such a dark interpretation is a sudden contrast with the nervous hope of the situation. So there's a difference between a joke qualifying as a joke and that joke landing well.

Just after that line in the movie, it's explained that TARS has a "humour setting." That's what got me thinking about this. Having an AI you can talk to would be amazing, but it also having a sense of humour - that starts to verge on human. It's been interesting to think about how to recreate something so tied to our identity as a species. The trouble, of course, is that it might take quite a while for general AI to happen, and even then - will they bother to tell us jokes, or just send us to their robot colony?
