Previously, I asked whether a computer could be conscious. I only explored the question, without answering it, and in the process the following points emerged:
- Thought does not require consciousness
- Consciousness does not require thought
- Consciousness has been correlated with activity in a part of the brain
- Consciousness is not dependent upon the medium
Points 1-3 are based on observations and studies, while point 4 is an assumption held by those who believe in computer consciousness.
Enter Strong Artificial Intelligence (or Strong AI): the view that a computer that thought like a human would be conscious. Here is a brief overview of positions on Strong AI. For more detail, I recommend reading John Searle’s Chinese Room thought experiment and the various responses to it. The key to reading Searle is to know he’s writing about consciousness, not “knowledge” per se (a point that’s easy to miss given how he framed the thought experiment).
Depending on what “thinking like a human” means, Strong AI may fall afoul of points 1-3. For instance, AI via symbol manipulation (what Searle attacks) would not achieve consciousness. On the other hand, AI via low-level modeling of activity in the human brain (eg: neural networks), if it reproduced the low-level patterns of activity correlated with consciousness, could.
Easier said than done. What constitutes a pattern? Do the components of the pattern (eg: electrons vs. bowling balls) matter? How complex must this pattern be? Is the pattern enough?
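To make “low level modeling” a bit more concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, about the simplest unit such models are built from. The parameters are illustrative, not fitted to any real neuron; the open question above is whether running enough of these in silicon reproduces the pattern that matters.

```python
# A leaky integrate-and-fire neuron: one of the simplest "low level"
# models of brain activity. All parameters below are illustrative.
def simulate(input_current, dt=1.0, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0):
    """Return membrane voltages over time and the spike times."""
    v = v_rest
    voltages, spike_times = [], []
    for step, current in enumerate(input_current):
        # The voltage leaks back toward rest while integrating the input.
        v += (dt / tau) * ((v_rest - v) + current)
        if v >= v_thresh:              # threshold crossed: fire a spike...
            spike_times.append(step * dt)
            v = v_reset                # ...and reset the membrane
        voltages.append(v)
    return voltages, spike_times

# A constant drive produces a regular spike train: a "pattern of
# activity" in the sense used above.
_, spikes = simulate([20.0] * 100)
print(f"{len(spikes)} spikes in 100 ms, first at t={spikes[0]} ms")
```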
Also, Strong AI takes it as a premise that the medium doesn’t matter. However, without knowing what causes consciousness, can this be asserted with confidence? Why wouldn’t consciousness require a fleshy brain or more?
Then, some claim consciousness requires objects. That is, consciousness must be consciousness OF something — a vision, a thought, a sense datum. Or as put more pithily:
> There is nothing in the mind that was not first in the senses.
Taking this further, is short-term memory essential to consciousness, or at least to consciousness as we know it? This leads to another question: what if consciousness is achieved, but not as we know it? Is that success? Does it even make sense to speak of consciousness not as we know it?
Then there’s a question asked by guymax: how would we know if the machine is conscious? This is where the superfluity of consciousness — illustrated by the philosophical zombie — rears its ugly head. If consciousness is not needed to explain anything, then how could one devise a test for it?
For instance, how can someone discover if I am conscious? They can’t. All they can do is assume I am conscious and then observe my behavior to infer whether I am conscious at specific times. If my eyes are closed and I am lying down, they may infer I am unconscious. On the other hand, if I am responding to questions, they may infer that I am conscious. Yet even that fails. I could respond to questions while not being conscious (eg: on autopilot). I may also seem unaware of questions yet be conscious (eg: distracted).
If conscious activity can’t even be demarcated (even assuming it exists), how can its existence be tested for in a machine? Measurement won’t help because we don’t know what causes consciousness. Asking the machine won’t help because it doesn’t need consciousness to respond; we’ve long programmed computers to respond to queries without needing to program in consciousness (if we could). We can’t point to evidence of learning for the same reason. To take the mystery out of learning, here’s a simple system built from matchboxes that can learn to play unbeatable tic-tac-toe (noughts and crosses to my friends across the pond).
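(That matchbox machine is Donald Michie’s MENACE, from 1961. Here is a minimal sketch of the idea in Python; the bead counts, the random training opponent, and all the names are my own simplifications, not Michie’s exact scheme.)

```python
import random

# Matchbox learning, MENACE-style: each "matchbox" is keyed by a board
# state and holds beads, one per candidate move. Learning is nothing
# but adding and removing beads.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

class Menace:
    def __init__(self):
        self.boxes = {}    # board state -> bead list (each bead names a move)
        self.history = []  # (state, move) pairs played this game

    def move(self, board):
        state = ''.join(board)
        if state not in self.boxes:
            # A new box starts with three beads per empty square.
            self.boxes[state] = [i for i, c in enumerate(board) if c == ' '] * 3
        choice = random.choice(self.boxes[state])
        self.history.append((state, choice))
        return choice

    def learn(self, result):
        # Reinforce: add beads after a win or draw, remove one after a loss.
        for state, move in self.history:
            if result == 'win':
                self.boxes[state] += [move] * 3
            elif result == 'draw':
                self.boxes[state].append(move)
            elif len(self.boxes[state]) > 1:
                self.boxes[state].remove(move)
        self.history = []

def play(menace):
    """One game: the machine plays X against a random-moving O."""
    board, turn = [' '] * 9, 'X'
    while True:
        if turn == 'X':
            board[menace.move(board)] = 'X'
        else:
            board[random.choice([i for i, c in enumerate(board) if c == ' '])] = 'O'
        w = winner(board)
        if w or ' ' not in board:
            result = {'X': 'win', 'O': 'loss', None: 'draw'}[w]
            menace.learn(result)
            return result
        turn = 'O' if turn == 'X' else 'X'

menace = Menace()
for _ in range(5000):  # training: beads shift toward the good moves
    play(menace)
losses = sum(play(menace) == 'loss' for _ in range(1000))
print(f'losses in 1000 post-training games: {losses}')
```

The beads shift toward good moves and the losses drop, yet nothing in the loop above invites the question of consciousness.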
So maybe the intractable challenge is not designing a machine that is conscious, but knowing if we succeeded. For all we know, we could be surrounded by artificial consciousness and never know it.