Penrose's theory is interesting because, if there are two things science has so far been unable to adequately explain, they are quantum mechanics and consciousness. The idea that the two are linked is intriguing, to say the least. Quantum physics demands an observer: it is the act of observation that causes the probabilistic nature of particles at the smallest scales to collapse into the physical certainty of the larger world we see around us. Physicists debate whether the observer needs to be conscious, but from a philosophical perspective it seems nonsensical to talk about observation without a conscious observer.
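To make the idea of collapse concrete, here is a toy simulation in Python. This is purely my own illustration, not Penrose's model and not real quantum mechanics software: a state assigns probability amplitudes to outcomes, and "observing" it forces one definite result.

```python
import random

# A toy qubit in an equal superposition of |0> and |1>.
# Amplitudes are complex; outcome probabilities are their squared magnitudes.
amplitudes = {"0": complex(2**-0.5), "1": complex(2**-0.5)}

def measure(state):
    """Collapse the superposition: pick one outcome with probability
    |amplitude|^2, then replace the state with that definite outcome."""
    outcomes = list(state)
    weights = [abs(state[o]) ** 2 for o in outcomes]
    result = random.choices(outcomes, weights=weights)[0]
    # After observation the state is no longer probabilistic.
    state.clear()
    state[result] = complex(1.0)
    return result

print(measure(amplitudes))  # "0" or "1", each with probability 0.5
print(amplitudes)           # now a definite state, e.g. {"1": (1+0j)}
```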
Many AI experts believe consciousness emerges from complex computing processes through a mechanism called recursion. If you think of a conventional personal computer, there are about four layers between what you see on screen and the underlying circuitry: a layer of firmware (which, as the name suggests, is a blending of hardware and software), the BIOS (basic input/output system), the operating system such as Windows or macOS, and the end-user application such as a web browser running on top of all of that. Recursion is the ability of a program to invoke itself. All computers have some degree of recursion, whereby the software monitors what is going on and corrects for errors and the like. If instead of four layers you had fifty or one hundred, with many of those layers observing and monitoring what is going on in other layers, you can imagine how the higher layers of processing could become so abstracted from the underlying computation that the system would at least have the appearance of consciousness. The enormous processing power promised by quantum computers could support a vast number of such layers, and perhaps there is a point at which consciousness bootstraps out of this.
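As a minimal sketch of what recursion plus layered monitoring could look like (my own illustration, not any real system's architecture), each layer here recursively invokes the layer beneath it, watches the result, and corrects errors before passing it upward:

```python
def run_layer(depth, compute):
    """Each layer recursively invokes the layer below it, monitors
    what came back, and corrects for errors before passing it up."""
    if depth == 0:
        return compute()  # the bottom layer: the raw computation itself
    try:
        result = run_layer(depth - 1, compute)
    except Exception:
        result = None  # a lower layer failed; substitute a safe default
    # A real system might log, retry, or adjust here; the point is that
    # the top layer sits many removes from the underlying computation.
    return result

# One hundred layers of supervision between the caller and the circuit.
print(run_layer(100, lambda: 2 + 2))  # -> 4
```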
Religious people have a metaphysical view of consciousness. They believe it exists separately from the observable electrical and chemical processes in the brain and that it may survive the death of the body - in other words, they believe we have a soul. We can observe the physical activity of the brain with functional MRI scanners, and we understand quite well which parts of the brain account for various mental processes, but we can't see consciousness and really have no idea what it is. So a religious explanation of consciousness is plausible, if hardly unassailable; the problem with religious explanations for phenomena science can't yet explain is that theirs is an ever-diminishing domain.
How would we know whether consciousness exists in a machine or not? Are other animals conscious? Certainly my dog appears to be, but is its consciousness of a kind with human consciousness? We tend to define higher-level consciousness as self-awareness, but other animals are probably self-aware, if that is the test. It is easy to envisage robots that are convincing human companions, like the operating system in the film Her, although Siri on my iPhone has a long way to go. If a robot had all the attributes of consciousness and claimed to be conscious, how could we deny it? Certainly many scientists, such as Ray Kurzweil, a director of engineering at Google, believe it is only a matter of time.
Most people find the idea of machine consciousness very scary. I find the prospect exciting, although I acknowledge there are risks in the quest to make machines autonomous of human control. Isaac Asimov addressed the risks by inventing the 'Three Laws of Robotics' (a rough sketch of how their priority ordering might be encoded follows the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
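One way to see how the hierarchy operates is to encode the laws as an ordered rule check. The sketch below is purely my own illustration (Asimov specified no such mechanism): candidate actions are scored by which laws they violate, highest-priority law first, so a lower law can never override a higher one.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    injures_human: bool = False   # breaks the First Law directly
    allows_harm: bool = False     # breaks the First Law through inaction
    disobeys_order: bool = False  # breaks the Second Law
    endangers_self: bool = False  # breaks the Third Law

def law_violations(action: Action) -> tuple:
    """Score an action as a tuple of law violations, highest priority
    first. Python compares tuples left to right, so sorting by this
    score means a lower law can never outrank a higher one."""
    first_law = action.injures_human or action.allows_harm
    return (first_law, action.disobeys_order, action.endangers_self)

def choose(actions: list) -> Action:
    """Pick the candidate that violates the fewest laws, in priority order."""
    return min(actions, key=law_violations)

# A robot ordered to harm someone should refuse: disobeying an order
# (Second Law) is preferable to injuring a human (First Law).
options = [
    Action("obey the harmful order", injures_human=True),
    Action("refuse the order", disobeys_order=True),
    Action("sacrifice itself to shield the human",
           disobeys_order=True, endangers_self=True),
]
print(choose(options).name)  # -> "refuse the order"
```

Of course, a fixed lookup like this is precisely not a moral choice, which is where the next paragraph picks up.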
I believe that in order to be safe, intelligent robots will need to be able to make moral choices. In other words, they must have free will, which is, in my view, the essence of consciousness. It is at this point that I part ways philosophically with many atheists, who are materialists and determinists - in other words, they believe our actions are dictated solely by the external, physical world and that free will is an illusion. I also part ways with the theologians in that I don't believe God is necessary for free will. However, I do believe that free will is a necessary part of consciousness irrespective of whether it is physically derived or divine. If we don't have free will, why are we conscious? It would seem superfluous, to say the least.
I think that machines are likely to reach a point where they are indistinguishable from humans in terms of consciousness. Whether they are truly conscious or not won't really matter any more than it matters whether your pet is truly conscious. Human beings will have sophisticated relationships with robots and the boundaries between what is human and what is machine will become blurred. Perhaps this will be a threat to our humanity, or even to human existence, but I see it more as evolution. I will expand on this in a future post.