AI: An Exercise in Analytical Philosophy

The Question

I recently attended a computing group in which the following question was asked:

Can Software Achieve Human Level Intelligence?

We covered this question over the course of 3 meetings (7-9 hours total).  Those meetings didn’t go well.  We spent hours talking past each other, objecting to arguments, and accusing each other of missing the point.  In the end we gave up and agreed to talk about something else.

How can people spend hours talking about something with nothing to show for it? Because we weren’t talking about the same thing, and we never settled on meanings first, so we couldn’t discover that.

After reflecting on those meetings, I thought the following process would have been more productive:

  1. Each person precisely translates the sentence as they understand it.
  2. Replace any contentious terms to avoid arguing over semantics.
  3. Decide if there’s any basis for arguing.

So let’s continue by assuming two arguers: John and Sally.

Translating the Sentence

The discussion question can be interpreted in many ways. For instance, John may interpret it as:

Will software transcend its programming to achieve free will?

On the other hand, Sally, who doesn’t believe in free will, may consider John’s interpretation incoherent and refuse to discuss it.  She might instead interpret the discussion question as:

Can software pass the Turing Test?

Other possible interpretations include, but are not limited to…

  • Can software surpass human achievement on certain tasks?
  • Will software achieve consciousness?
  • Will software achieve creativity?

And so on.

And this process may be iterative.  That is, once the claim has been translated, Sally may point out vague terms in John’s formulation and require him to become more precise, and so on. This could take quite some time, and result in a very wordy formulation of the position, but that’s the price of precision.


Replacing Contentious Terms

Since Sally interpreted the claim in terms of the Turing Test, she must use that wording for her argument. She cannot use the original phrasing of the question.

Let me repeat: DO NOT USE THE ORIGINAL DISCUSSION QUESTION.

The problem here is that John disagrees that passing the Turing Test is equivalent to achieving intelligence.  So if Sally starts talking about intelligence and the Turing Test interchangeably, he’d object, and they’d go down a rabbit hole of quibbling over semantics.  The same holds for any other contentious interpretation of intelligence, like “consciousness”, “creativity”, and so on.

Whether or not those terms are equivalent to AI is an entirely different argument.


Deciding if there’s a Basis for Arguing

Once the terms are clarified, John and Sally may find themselves in one of the following situations regarding the other’s claim:

  1. They have no opinion regarding it.
  2. They think it’s incoherent and therefore not arguable.
  3. They think the truth is indeterminate and therefore not arguable.
  4. They agree with it.
  5. They disagree with it.
  6. They think it can be arguable as a hypothetical.

In cases 1-4, the discussion stops there, because no progress can be made on that front by arguing. Only in cases 5-6 can they argue productively.

6 is especially interesting.  John’s interpretation assumes something Sally disagrees with, but Sally may decide to argue in the hypothetical.  That is, while Sally doesn’t believe in free will, she believes it’s worth assuming its truth for the sake of the argument and continuing from there. That’s fine, but she needs to let John know she’s arguing against him in the hypothetical.  Otherwise, if John wins that argument and Sally still rejects his claim, it could seem that she was arguing in bad faith.


Concluding Thoughts

Language is a remarkable thing, but it’s also a very ambiguous thing.  We often talk past each other, and it’s not always clear if we mean the same thing.  Very often, our communications are broad enough that we never discover this, and it often doesn’t matter for most purposes.  But at times we do run head-first into this, such as with certain technical or specific arguments.

To me this was a sobering reminder that we truly communicate far less often than we think…


12 comments

  1. Reblogged this on SelfAwarePatterns and commented:
    An excellent analysis of the issue! It seems like this is a problem for any interesting philosophical question. I’m always struck by how often philosophical disagreements are really just definitional disputes in disguise. It’s particularly troublesome for any discussion about the mind, about us at the most fundamental level, because people have intense emotions about the conclusions.

    • Thanks for reblogging, and I agree! The most fundamental questions seem to be most vulnerable to these kinds of rabbit holes, and at times I wonder how much progress we can make on those questions. I think in these cases, achieving clarity on what others mean by their questions may be the best we can hope for.

      • My experience is that when we break the most intractable problems down, and clarify exactly what the issues are, a lot of the intractability diminishes, if not outright disappears. (Although not always; see quantum mechanics.) The issue is that many people like their mysteries.

  2. Points of view depend on the starting point: the middle tier, the lower tier, the side view. From each direction there is a definitive picture, so there is more than one correct answer at all times. In a nutshell, it puts human experience in the driver’s seat when it comes to learning from experiences. Amen.

  3. I think your diagnosis is right but I’m not sure if the prescription will work in practice. It’s probably too iterative a process to get anywhere much, and too much time is likely to be spent on clarifying and arguing about terms to allow the argument itself to ever get started. Digressions begetting digressions. But in that scenario the argument was bound to be unproductive in any case, so perhaps not much is lost.

    On that note, I would say there is something to argue about on your points 2 and 3. They can argue about whether the claim actually is incoherent or the truth indeterminate. But that’s likely not to be productive as it’s digressive. An argument that was supposed to be about AI becomes an argument on free will.

    • You are right; I think few people would follow this process. And it’s not the sort of thing likely to produce the kind of result people expect, but rather to show them whether their argument will be fruitful at all.

      There are jumping off points where the argument can transform, and if all parties agree that they’re fine heading down that rabbit hole, that’s cool. My concern is that often, people go down that rabbit hole, and an hour later realize they haven’t been talking about what they were supposed to be talking about 🙂

      Great comments!

      • If people can’t agree on a definition, they might agree to continue the discussion with a stipulative definition. Underscoring that not everyone agrees on what x is could make it possible to move things along, but I imagine it would only work if the stipulative definition is not too far off the mark from what most people would accept.

      • True. I can even see the discussion continuing if the stipulative definition is far off the mark, as long as the folks agree that this is “for the sake of the argument” only and not a commitment to that definition in general. Sometimes these sorts of hypothetical arguments can be interesting and worth pursuing on their own.

  4. Hello BIAR,

    I consider the definition issue to be an enormous problem in academia. How might we effectively discuss our ideas about how things work, when the terms that we use to convey them are commonly interpreted in ways that we don’t intend? That should tend to kill potential progress.

    The “ordinary language” philosophers (such as Ludwig Wittgenstein) have tried to help by restricting our terms to how they’re most commonly used. I’d say that their cure has failed for a number of reasons, certainly given that people will dispute how a given term is most commonly used (and thus, for example, what consciousness “truly means”).

    I believe that I have a better potential solution that may also be simple enough to address DisagreeableMe’s concerns about lengthy procedures. Consider how things would be if it were formally understood that there are no true or false definitions for a given term, but rather just more and less useful definitions from a given context. When hearing someone’s ideas, if it were known going in that they must be grasped and accepted in order to provide a potentially effective evaluation, this might help us understand each other and so make better progress figuring things out. I call it my first principle of epistemology. (I also have a second, along with one for metaphysics and one for axiology.)

    In any case I believe that we’ll need a respected community of professionals who’ve developed various generally accepted principles in the domain of philosophy in order to better found the institution of science.

    • Very well put. I would be a bit careful about using evaluations like “more or less useful” because it may lure people into a semantic debate. But overall, I like your ideas and I think they’re much better than the way we normally argue.

      • Yes, the “useful” claim can get contentious. If someone’s telling you about an idea they have, and to understand it you need to accept “consciousness” as a term defined in a way you don’t generally consider useful (I presume you’re not a panpsychist, for example), then that’s something which could be brought out once (or if) you grasp what they’re saying. The goal, however, would be to understand. If you dispute the “truth” of the person’s definition in favor of your own, then under my (hopefully someday) universally accepted EP1, you’d be understood to be at fault. Conversely, someone who accepts the definition for that argument would be following the rules, even if they end up rejecting the general usefulness of the argument in the end for various reasons (such as the speculative usefulness of referring to causal dynamics as “consciousness”!).
