Consciousness is a slippery sensation. Sometimes I feel present, aware, in control of my body. Other times I feel like a spectator — or an animal driven by primal needs and nothing more. As a person with no religious or spiritual beliefs, I do not think there is anything after death. No reunion with loved ones, no boisterous laughter in the halls of Valhalla, no god to forgive my sins and no devil to torment me. There is only nothingness.
And yet, while I am alive, there is this feeling of being myself. I am an entity that inhabits a body, and I understand that other people have their own consciousness, at once similar to and vastly different from my own. When someone says that their consciousness left their body, I understand. When someone says that their body became a medium for someone else’s consciousness to temporarily inhabit, I understand, even if I don’t believe in ghosts and spiritual mediums myself.
Could this sensation that’s at the core of our religion and spirituality be replicated inside a machine? There are only two possible answers — and either possibility comes with its own set of unsavory consequences.
On the one hand, if a machine cannot be conscious then we can’t ever hope to transfer ourselves into digital beings, extending our lives far beyond the typical century. If consciousness can only ever be biological, then our sense of self will necessarily be temporary, since biological beings will always be subject to mortality and decay. We will also have to admit that consciousness is not the most efficient carrier for intelligence. Machines that could complete tasks faster, produce more accurate calculations, and better explore the universe would be — from an intelligence perspective — superior to us. We will create that which will supersede us. But without consciousness, can we ever trust our AI to do what is right for humanity? If these machines never attain consciousness, they may never know compassion, suffering, or mercy.
And if they do?
Then the weight of wisdom falls on our shoulders. We have this pervasive image of AI in a place of servitude. It exists to care for us and our needs, to do the things that we can’t or don’t want to do. But if these machines develop awareness and emotions then we would be no better than the slave masters we’ve ardently condemned.
Before we make any decisions, we must first determine whether a machine is self-aware. One proposed method for doing this is the AI Consciousness Test (ACT), which would examine whether the inner experience of an AI is similar to ours.
The test was proposed by Susan Schneider and Edwin Turner, a professor of philosophy and cognitive science and a professor of astrophysics, respectively. In ACT, the main indication of whether an AI is conscious rests on its grasp of the concept of consciousness. When talking about astral projection, all of us can imagine what it would be like for the self to float above the body. But an unconscious AI would have no understanding of something it has never possessed. Natural language interactions would probe the machine’s inner workings to see if it experiences something beyond the merely physical. Is there more beyond the systematic buzzing, clicking, and code-creation in its synthetic mind? Increasingly difficult test levels would gauge how readily the machine grasps concepts that revolve around consciousness, such as the idea of two people switching bodies. We might also discuss philosophy with the machine — the relationships between the physical world, the brain, and our experience of being alive. The strongest case for artificial consciousness would be an AI that develops its own conversation about consciousness without first being prompted to do so.
A similar case can be made for alien lifeforms. To be alive is one thing, to have consciousness is another. An alien species which follows simple hunger and food cues might not have consciousness. But one that engages in rituals for death or love, one that exhibits belief in an afterlife or in a sensation of being more than just their carbon-based bodies provides strong evidence that these extraterrestrial beings are our cosmic brethren in consciousness. Are they afraid of death — as we are — because it represents an uncertain future for the self?
The AI we test might also have to be a boxed AI — a machine denied access to certain resources, like the internet. This works as a control during the exam. Leaving an AI free to scour the internet could make its responses misleading, because it may try to convince us that it has consciousness by imitating human behavior, such as by recreating philosophical discussions on what it means to be self-aware. A superintelligent AI might do this because it was programmed to do so or because it has its own ulterior motive. It is uncertain, however, whether we could even box in a superintelligent machine. It may be clever enough to circumvent any boundaries we place around it.
The test is not perfect. It searches for something subtle and elusive. A machine that is truly self-aware may not even be able to communicate this fact because it lacks the capacity for language or conceptualization. It is possible for consciousness to exist within a machine that still cannot pass the ACT. But it is a beginning. And if it is possible to control which machines do and don’t have consciousness, we could implement ACT as a way of ensuring that we are not abusing those that do exhibit a capacity to be emotional and self-aware.
ACT would be related to — but function in a way opposite from — the Turing Test. Whereas the Turing Test asks an AI to veil the fact that it is a machine by convincing one human that it is talking to another human, ACT asks that a facet of the AI be unveiled, revealing the part of it that is immaterial.
Finding someone to unveil this facet is difficult. It requires a researcher skeptical of artificial consciousness and yet open-minded enough to be convinced that it is possible for such a thing to exist. Some dismiss the idea of artificial consciousness altogether. Others are much too eager to say that anything resembling an emotional human being must be self-aware. Dealing with our ever more sophisticated machines takes increased attentiveness and care on our part. We must remain objective and refrain from personifying our machines, yet at the same time we must admit that our grasp on the definition of consciousness is a tentative one, and we may not be the only ones to have it.
How do we know that this experience is bound to the biological? Who is to say the immaterial can manifest itself only via flesh and blood?