I recently attended a computing group in which the following question was asked:
Can Software Achieve Human Level Intelligence?
We covered this question over the course of 3 meetings (7-9 hours total). Those meetings didn’t go well. We spent hours talking past each other, objecting to arguments, and accusing each other of missing the point. In the end we gave up and agreed to talk about something else.
How can people spend hours talking about something with nothing to show for it? Because we weren’t talking about the same thing, and we never settled on meanings first, so we never discovered that.
After reflecting on those meetings, I thought the following process would have been more productive:
- Have each person precisely translate the sentence as they understand it.
- Replace any contentious terms to avoid arguing over semantics.
- Decide whether there’s any basis for arguing.
So let’s continue by assuming two arguers: John and Sally.
Translating the Sentence
The discussion question can be interpreted in many ways. For instance, John may interpret it as:
Will software transcend its programming to achieve free will?
On the other hand, Sally, who doesn’t believe in free will, may consider John’s interpretation incoherent and refuse to discuss it. She might instead interpret the discussion question as:
Can software pass the Turing Test?
Other possible interpretations include:
- Can software surpass human achievement on certain tasks?
- Will software achieve consciousness?
- Will software achieve creativity?
And so on.
This process may be iterative. That is, once the claim has been translated, Sally may point out vague terms in John’s formulation and ask him to be more precise, and so on. This could take quite some time and result in a very wordy formulation of the position, but that’s what precision requires.
Replacing Contentious Terms
Since Sally interpreted the claim in terms of the Turing Test, she must use that wording for her argument. She cannot use the original phrasing of the question.
Let me repeat: DO NOT USE THE ORIGINAL DISCUSSION QUESTION.
The problem here is that John disagrees that passing the Turing Test is equivalent to achieving intelligence. So if Sally starts using “intelligence” and “passing the Turing Test” interchangeably, he’d object, and they’d go down a rabbit hole of quibbling over semantics. The same holds for any other contentious interpretation of intelligence, like “consciousness”, “creativity”, and so on.
Whether or not those terms are equivalent to intelligence is an entirely different argument.
Deciding if there’s a Basis for Arguing
Once the terms are clarified, John and Sally may find themselves in one of the following situations regarding the other’s claim:
1. They have no opinion regarding it.
2. They think it’s incoherent and therefore not arguable.
3. They think the truth is indeterminate and therefore not arguable.
4. They agree with it.
5. They disagree with it.
6. They think it’s arguable as a hypothetical.
Situations 1-4 mean the discussion stops there, because no progress can be made on that front by arguing. Only in situations 5 and 6 can they argue productively.
Situation 6 is especially interesting. John’s interpretation assumes something Sally disagrees with, but Sally may decide to argue in the hypothetical. That is, while Sally doesn’t believe in free will, she thinks it’s worth assuming its truth for the sake of the argument and continuing from there. That’s fine, but she needs to let John know she’s arguing against him in the hypothetical. Otherwise, if John wins that argument and Sally still rejects his claim, it could seem that she was arguing in bad faith.
Language is a remarkable thing, but it’s also a deeply ambiguous one. We often talk past each other, and it’s not always clear whether we mean the same thing. Often our communication is broad enough that we never discover this, and for most purposes it doesn’t matter. But sometimes we run head-first into it, as with certain technical or otherwise precise arguments.
To me this was a sobering reminder that we truly communicate far less often than we think…