Can a Computer be Conscious? Part 2

Previously, I asked if a computer could be conscious. I only explored that question rather than answered it, and in the process the following points emerged:

  1. Thought does not require consciousness.
  2. Consciousness does not require thought.
  3. Consciousness has been correlated with activity in a part of the brain.
  4. Consciousness is not dependent upon the medium.

Points 1-3 are based on observations and studies, while point 4 is an assumption made by those who believe in computer consciousness.

Enter Strong Artificial Intelligence (or Strong AI) — the view that a computer that thought like a human would be conscious. Here is a brief overview of positions on Strong AI. For more detail, I recommend reading about John Searle’s Chinese Room thought experiment and the various responses to it. The key to reading Searle is to know he’s writing about consciousness, and not “knowledge” per se (it’s easy to miss this point with the way he phrased his thought experiment).

Depending on what “thinking like a human” means, Strong AI may fall afoul of 1-3.  For instance, AI via symbol manipulation (what Searle attacks) would not achieve consciousness.  On the other hand, AI via low level modeling of the activity in the human brain (eg: neural networks) that reproduces the low level patterns of activity consistent with consciousness, could.

Easier said than done.  What constitutes a pattern? Do the components of the pattern (eg: electrons vs. bowling balls) matter? How complex must this pattern be? Is the pattern enough?

Also, Strong AI takes it as a premise that the medium doesn’t matter. However, without knowing what causes consciousness, can this be asserted with confidence? Why wouldn’t consciousness require a fleshy brain, or something more?

Then, some claim consciousness requires objects. That is, consciousness must be consciousness OF something — a vision, a thought, a sense datum. Or as put more pithily:

There is nothing in the mind that was not first in the senses. 

Taking this further, is short-term memory essential to consciousness, or at least consciousness as we know it? This leads to another question: what if consciousness is achieved, but not as we know it? Is that success? Does it even make sense to speak of consciousness not as we know it?

Then there’s a question asked by guymax: how would we know if the machine is conscious? This is where the superfluity of consciousness — illustrated by the philosophical zombie — rears its ugly head. If consciousness is not needed to explain anything, then how could one devise a test for it?

For instance, how can someone discover if I am conscious? They can’t. All they can do is assume I am conscious and then observe my behavior to infer whether I am conscious at specific times. If my eyes are closed and I am in a prone position, they may infer I am unconscious. On the other hand, if I am responding to questions, they may infer that I am conscious. Yet even that fails. I could respond to questions while not being conscious (eg: on autopilot). I could also seem unaware of questions yet be conscious (eg: distracted).

If conscious activity can’t even be demarcated (assuming it exists), how can its existence be tested for in a machine? Measurement won’t help because we don’t know what causes consciousness. Asking the machine won’t help because it doesn’t need consciousness to respond — we’ve long programmed computers to respond to queries without needing to program in consciousness (even if we could). We can’t point to evidence of learning for the same reason. To take the mystery out of learning, here’s a simple system built from matchboxes that can learn to play unbeatable tic-tac-toe (noughts and crosses to my friends across the pond).
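
To make that concrete, here is a minimal sketch of how such a matchbox learner can work, assuming it follows the familiar bead-per-move scheme (as in Donald Michie’s MENACE): each board position gets a box of beads for its legal moves, a move is chosen by drawing a bead at random, and beads are added after wins and removed after losses. The names and bead counts below are illustrative, not taken from any particular implementation.

```python
import random
from collections import defaultdict

# One "matchbox" of beads per board state; the more beads a move has,
# the more likely it is to be drawn the next time that state comes up.
boxes = defaultdict(dict)  # board string -> {move index: bead count}

def legal_moves(board):
    """Indices of the empty squares on a 9-character board string."""
    return [i for i, square in enumerate(board) if square == " "]

def choose_move(board):
    """Pick a move by drawing a bead at random from this state's box."""
    box = boxes[board]
    for move in legal_moves(board):
        box.setdefault(move, 3)  # seed each legal move with a few beads
    moves, beads = zip(*box.items())
    return random.choices(moves, weights=beads)[0]

def reinforce(history, won):
    """Reward or punish every (board, move) pair the learner made in a game."""
    for board, move in history:
        if won:
            boxes[board][move] += 3  # add beads: reinforce the move
        else:
            boxes[board][move] = max(1, boxes[board][move] - 1)  # remove a bead
```

Nothing in that loop looks like awareness: play enough games, call reinforce after each one, and the beads simply pile up behind the stronger moves.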

So maybe the intractable challenge is not designing a machine that is conscious, but knowing if we succeeded. For all we know, we could be surrounded by artificial consciousness and never know it.

30 comments

  1. “On the other hand, AI via low level modeling of the activity in the human brain (eg: neural networks) that reproduces the low level patterns of activity consistent with consciousness, could.” But aren’t neural networks ultimately just symbol manipulation? They live as software in a computer which basically manipulates 1s and 0s using logic gates.
    Here’s another question: can you conceive of any low-level activity in the universe that cannot be described as mere symbol manipulation?

    • Excellent points :).

      I think the question here is: what are the “meaningful units” that would contribute to consciousness? Symbol manipulation would have some physical counterpart, but it would be the physical counterpart and not the symbol manipulation per se that would give rise to consciousness.

      Let’s say we have two machines built from scratch. One is a traditional architecture, the other a neural network. Now assume they both run symbol manipulation algorithms to translate a language. The relevant activity would be dramatically different.

      Now if consciousness is due to a particular pattern of electrical activity, then the second machine could give rise to consciousness while the first would not, despite the fact that both use symbol manipulation.

      So the key here would be what the relevant level is. Symbols don’t seem to be it, and the physical representations of symbols could vary across architectures, so symbols seem irrelevant.

      I think this is what Searle meant when he criticized symbols for not having causal effect.

      • But my whole point is that a neural network is nothing but a mathematical object! You could pretty much just do it on paper. You talk about “traditional architecture” vs. “neural network” but actually neural networks are run on traditional architectures. In fact, I’ve programmed quite a few using Octave on my regular Windows laptop. I could also have done the same with a pencil and a piece of paper or a calculator. What is the difference between a neural network run on a computer and a neural network run on cheese or people or pencil and paper? If the difference is in the electric signals, for example, what is it about the electric signals that makes a difference?
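
        As a concrete illustration of this point, here is a minimal sketch in Python (rather than the Octave mentioned above) of a tiny network’s forward pass. The weights are arbitrary and made up for the example; the computation is nothing but multiplication, addition, and a squashing function, all of which could be carried out with pencil and paper.

        ```python
        import math

        # A two-input, two-hidden-unit, one-output network written out as
        # plain arithmetic: multiply, add, squash.  Weights are arbitrary.

        def sigmoid(x):
            return 1 / (1 + math.exp(-x))

        def forward(inputs, hidden_weights, output_weights):
            hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
                      for row in hidden_weights]
            return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

        # The same few multiplications and additions could be done by hand.
        print(forward([1.0, 0.5], [[0.4, -0.6], [0.3, 0.8]], [0.7, -0.2]))
        ```

        Any larger network differs only in the number of such operations, not in their kind.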

  2. One of the main ideas I see you pointing towards is the notion that ‘thoughts’, considered as a blanket term for outputs of a function that match our pre-theoretic intuitions on said mental concept, are neither necessary nor sufficient for consciousness. If I understand you correctly, it seems you are noting the need to establish what class or type of ‘thought’ is sufficient, or at least necessary, for us thinking we perceive consciousness, and this before we can attend to the harder problems of consciousness. If so, then I agree; how can we expect to uncover what physical networks are sufficient for the realizability of consciousness, if we cannot find a working epistemological process for being justified in thinking something is conscious?

    Enter the Turing test. The test is often dismissed as too simple, but taken as an empirical basis for justifying our judgment on the presence of consciousness in an entity, I think it is ingenious, and for precisely the reason that it specifies what sorts of outputs (thoughts) are sufficient for consciousness. What we are looking for is evidence that we are justified in thinking that something is conscious, as it doesn’t seem we will ever (or at least not for a long time) be able to directly perceive consciousness.

    Consider: to pass the Turing test a computer must lie. Not only would the human interlocutor be well advised to ask the computer whether it is a computer, they will ask the computer “personal” questions. Now, there is no doubt that a computer program can be programmed to tell a certain story, such that the computer isn’t exactly lying, per se. I think this is about the level that the computers currently put up to the Turing test have reached. But I think it is conceivable that soon a program will both know that it is a computer program without human features such as hair color, weight, etc., and be able to lie about the human features it has. So, if a computer passes the Turing test, then it will be because it knows how to lie effectively, and it will know how to lie because it is aware that it is actually a computer program. This, I submit, would be a self-awareness sufficient for consciousness, so that if something passes the Turing test we are epistemologically justified in positing that it is conscious.

    Sorry for the long comment, but I wanted to address an important insight you expressed in your post. Cheers!

    • Very well put! Also, always feel free to post long comments (and never apologize for them!). I learn a lot from comments, so I definitely don’t want you or anyone else hesitating out of concerns for length. In fact, often the best part of these articles is the comments and the discussions that ensue.

      Your first paragraph is mostly on the mark about what I was arriving at. I would, however, say that the kinds of activity that give rise to consciousness may not even be thoughts to begin with. That is, consciousness may arise from patterns that correspond to how thoughts are implemented on a particular medium (like the human brain), but that would be a byproduct, rather than anything essential to thinking.

      Your thoughts on the Turing test are fascinating. I never thought of it as a consciousness test, just an AI one. The perspective on lying and consciousness is interesting.

      Although I think a computer can still be conscious and concoct elaborate lies, as far as any test goes (and all the ones I can think of are inadequate), that Turing test might be the best one.

      • You’re right, Turing certainly meant his proposed test to be an assessment to answer the question ‘can machines think?’ In that way, it was meant explicitly to address the ‘intelligence’ of machines, and thus, the possibility of AI, as you noted.

        I suppose my extrapolation from ‘thinking’ to ‘consciousness’ is rooted in the connection between the Turing test and its ancestral roots in Descartes. Descartes, as a substance dualist, thought that mind and body were distinct substances, such that, though man’s body is a machine (his words not mine), we are more than machines because we have souls, and the evidence for this difference is the fact that machines, “could never use speech or other signs as we do when placing our thoughts on record for the benefit of others. […] And the second difference is, that although machines can perform certain things as well as or perhaps better than any of us do, they infallibly fall short in others” (Discourse on the Method of Rightly Conducting the Reason). Given Descartes’ dualist convictions, I read him as positing an early version of the Turing test to test for the soul, which I understand as consciousness. (Full disclosure: I’m not a substance dualist, though I am open to arguments for property dualism and functionalism).

        Thanks for your response and kind welcome.

      • Connecting Turing to Descartes. Awesome!

        The connection between thinking and consciousness is a very common one, and a natural one to make. Who knows, maybe I’m being too dismissive of that connection?

        Let’s say you walk home and do so completely on autopilot, to the point that you don’t even remember walking home. Would you argue that you were conscious while walking home?

        One of the premises behind my dismissal of thinking as essential to consciousness is the claim that such events are not conscious events. But is this justified? What if they were conscious events and were forgotten?

        This opens up a can of worms, but it does bring up some confounding points between consciousness and memory that can undermine the premises I assented to in part 1 and re-iterated in this article.

        Thank you for commenting!

    • The Turing Test is not a test for consciousness. There is no test. This is known as the ‘other minds’ problem. There is no test and there never will be. It is simply a fact that it would be impossible to demonstrate machine consciousness. It would be too paradoxical if the researcher were able to demonstrate the consciousness of his machine but not of himself.

      According to Popper’s rules and any rules that I’ve come across, the theory that machines can be conscious is not scientific. Strictly speaking it is less scientific than panpsychism. As BiaR says in his essay somewhere towards the end, it is difficult to be sure that panpsychism is not true.

      I don’t think it is difficult to conclude that the phrase ‘machine consciousness’ is an oxymoron by almost any definition of the terms, but we must concede that we can never demonstrate that it would be impossible, and for exactly the same reasons that we cannot demonstrate that it would be possible.

      As always, I feel that it is a mistake to ignore Kant. He saw that more than computation was required for mental phenomena and categorical thought. Ordinary consciousness would depend directly on his fundamental phenomenon and would be impossible in its absence. I am therefore I think.

      • Right, passing the Turing test is meant to provide epistemological justification for our judgment that computers can be intelligent. The natural sciences use a coherentist model of justification, but as any foundationalist will tell you, coherentism does not guarantee that those beliefs that are justified are true, and the coherentist must and does acknowledge this. It is with this in mind that naturalistic scientific methodology allows that we can be justified in a belief without knowing with absolute certainty that that belief is true, such as, for example, if that belief is not empirically verifiable given our current technology. I do not deny that the thesis ‘consciousness is a necessary but not sufficient condition for intelligence’ is controversial, though I have endeavored in this section to motivate that claim. My point is that if the Turing test can provide epistemic justification for holding that computers can be intelligent, then the Turing test can provide epistemic justification for the belief that computers can be conscious, in some appropriate sense. The problem of other minds, and solipsism more generally, continues to be relevant, but philosophy should and can provide epistemic principles that justify our beliefs about other minds, even if they do not guarantee the truth.

  3. I’ve always had the theory that consciousness was a side effect of mental evolution. The brain got smarter, and as it was able to connect more things together it started to wonder and ask questions. This awareness then became our consciousness, splitting our minds into two, the subconscious and the conscious, one instinct and the other aware. As people became more aware of themselves and others, a social evolution started and developed into reasonings and justifications for the ultimate unsolvable puzzle: why are we here?

    As fast as computers are, they do not compare to the speed and data handling of the brain. I think it will be quite a while before they can handle enough data to be able to have the shared illusion of consciousness that we all share.

    I enjoy reading your posts. Keep them coming please, thanks.

    Blindmuggy

  4. I see your point about doing things on auto-pilot, and certainly will concede that we do things without being conscious of doing them. What I think this suggests is a difference between consciousness and conscious thought, where attention is focused on certain things. When I consciously decide to get up and get a glass of water it can be said that I am conscious that I want water. This is because I am reflecting on myself. There is the ‘I’ that exists throughout all my actions and makes those actions possible, and there is the reflective ‘I’, that is made possible by, and thinks about, the subsisting ‘I’. When you walk home on auto-pilot you still manage to get home, because your consciousness doesn’t stop existing when you are not consciously thinking about it. I am suggesting that the Turing test, or Descartes’ version of it, can give us reason to think an entity possesses the reflective ‘I’. As the reflective ‘I’ is made possible by a subsisting ‘I’, this gives us reason to think a computer possesses consciousness. (Note: I am borrowing slightly from the work of Edmund Husserl, the pioneer of phenomenology, here.)

    • Thanks!

      In order to ensure I understand, I’d like to confirm we mean the same thing when we write “consciousness”.

      I use it as a synonym for awareness. Unless I am aware of something, I’m not conscious of it, and if I’m completely unaware, then I’m completely unconscious.

      Is that how you are using “consciousness”?

      • I initially meant consciousness in the subsisting and pre-reflective way, but now I see that in order for my initial argument to go through I should understand it as you do.

  5. My view on consciousness is that it emerges from the complex neurophysiological interactions in and of neural networks. When these interactions become sufficiently complex, with enough feedback loops, consciousness emerges. This consciousness is defined by its ability to be conscious of spatial-temporal entities, such as trees, predators etc. Understood in this way, gazelles have consciousness because consciousness is not co-extensive with intelligence. When the complexity of neural networks reaches a point where the being can become conscious of non-spatial-temporal entities, such as their own feelings, beliefs and desires, then they are intelligent.

    Along these lines, when you are on auto-pilot, you possess consciousness; it is just that in those moments you are non-intelligent. On auto-pilot you still respond to spatial-temporal entities, which suggests that some bare level of your being is conscious of them. It is just that you are not conscious of being conscious of them.

    Thanks for your question; I hope I addressed it clearly.

    • Yes, you clarified, thank you.

      So when I was using consciousness, I was referring to this intelligence aspect of consciousness, the non-autopilot awareness. The non-intelligent consciousness (autopilot), I simply treated as non-consciousness.

  6. An excellent and thought provoking article. You’ll probably guess this from my post the other day, but I think consciousness of other entities is a subjective judgment of when we perceive an entity to have a private experience somewhat like ours. In other words, consciousness outside of our own minds requires empathy.

    Empathy requires that we perceive some common experiences and motivations. So, a navigation computer’s programmed motivation to find the best path isn’t particularly empathy inducing, but a computer programmed to care about its own survival might very well induce it in us.

    This raises the ethical consideration that if we can see the line(s) of code that makes a machine feel “fear” (priority perceptions of danger to its functioning or wholeness) or “pain” (perception of damage it is undergoing), does that lessen our obligation not to inflict these things on it?

    • Thank you!

      I would say that our obligation would not lessen by seeing the lines of code; would your ethical obligations to others lessen if we completely figured out what makes us tick? I take as the starting point of ethics the ability of the recipient of our act to feel pain and joy. That wouldn’t change if we understood what caused the pain and joy.

  7. You certainly know how to get a good discussion started.

    I think it’s possible that one of your four opening propositions here is true, given a certain definition of terms, but I struggle to see how you could think they all are. I think they are all very dangerous and create a lot of complicated problems.

    Take the proposition ‘Thought does not require consciousness’. This is an axiom. We can take it or leave it. It has no usefulness except to commit us to a fixed position before we know whether it’s the right one. If it is used as an axiom for an argument for or against machine consciousness then the argument will be too weak to work.

    The proposition that I think might be true states that consciousness does not require thought. It would be false if we are referring to intentional consciousness, but true if we include Kant’s foundation for mind and thought. So even though it may be true it may lead us into some confusion.

    I think you’re right when you say, or I think this is what you said, that these debates are hopeless unless we know what we mean by ‘consciousness’, and if we did then we’d know the answer already.

    • Well, 1-3 were suggested by studies and 4 is an assumption for those who wish to reproduce consciousness in another medium, although it may be possible to derive evidence for that position.

      #1 uses a specific definition of thought, namely being able to process what the brain can process. I think there is evidence to suggest this, so I wouldn’t call it an article of faith.

      #2 is the weakest premise, and depending on what one considers thought, one could easily object to the examples (those who suffered brain damage) called in to support it in the previous article.

      On top of this, I’m using consciousness to mean awareness and not an entity’s general tendency (or potential) to exhibit awareness. As such, unless one is aware at any moment, consciousness is not active.

      So yes, highly qualified and plenty to attack 🙂

      My position is close to what you stated. The debate is hopeless for the reason you pointed out earlier: without a reliable way to check for consciousness, how can we know what conditions give rise to it, much less if we achieved it?

      • Well, I feel that you’re making a mountain out of a molehill. It is impossible to test for consciousness. Why make it more complicated?

        There is no test for intelligence either, or not unless you define intelligence as what a Turing machine can do, i.e. a process not requiring consciousness. Once defined as such there is nothing to debate.

        I hope this comment ends up in the right place. I’m a little confused by all the posts.

      • The consciousness test I can agree to, although I think ausomeawstin’s repurposed Turing test is intriguing, and may be the best test.

        Intelligence is another matter. One would need to precisely define intelligence (which is possible) and then one could test for it. I don’t think the Turing test is a good one for intelligence, relying as it does on not only subjective criteria, but things that don’t require intelligence. After all, the Turing test is about emulating a human, and that may require acting less intelligent.
