Episode Summary: Machine consciousness could very well exist, though in what form and to what degree, and how we recognize another entity’s experience, remains a problem to be solved. In this episode, Dr. Peter Boltuc helps define machine consciousness and expands on what may be necessary, in both the logical and moral arenas, in order to create such an entity.
Guest: Dr. Peter Boltuc
Expertise: Artificial Intelligence (Moral and Political Philosophy)
Recognition in Brief: Fulbright Fellowship Batory Visiting Scholar; UNESCO Visiting Fellow (Paris); author of numerous published academic papers, including with the International Journal of Machine Consciousness
Current Affiliations: University of Illinois Springfield (Philosophy); Committee on Philosophy and Computers of the American Philosophical Association
Are Conscious Machines Possible?
Could a machine do all of the same things as a human being? The “sky is the limit,” says Dr. Boltuc, who does not think there is anything a “conscious” machine could not potentially do as well as a human being (he prefers the term “conscious” over “intelligent”). But before we can comment on the potential of machine consciousness, we should draw a distinction between two terms: functional consciousness and phenomenological consciousness.
Functional consciousness encompasses whatever a conscious thing can do. Dr. Boltuc notes that the differences between soft and strong artificial intelligence (AI) are not particularly relevant when distinguishing between the two types of consciousness in this case; he believes the tests we currently have for discerning levels of artificial intelligence are not as good as they one day might be. In any case, the functional consciousness of an intelligence covers any action that could in principle be mapped out mathematically, something that, in theory, we will be able to achieve at some point in the future.
Phenomenological consciousness is akin to first-person experience, i.e., owning the conscious experience and being able to label it as my experience. Think of a nurse walking into a patient’s room and asking whether the patient is conscious. He or she is not concerned with the person’s capabilities, name, or personality; the primary concern is whether there is “some light in that brain”, which is something greater than the ability to think or to exhibit sensory perception, both of which would (according to Dr. Boltuc) fall under the functional consciousness category.
The interesting question that remains is, could machines ever have that kind of first-person conscious experience?
The Ethics and Consciousness Crossroads
If we think it might be possible to build machines capable of phenomenological consciousness, could we also assume it is possible to build a machine that could gain, in some respects, more consciousness than humans? Could such a machine in turn hold greater moral weight? Dr. Boltuc thinks this is conceivable; however, he warns that even though this is a logical conclusion, it might be counter-intuitive and dangerous within the sphere of universal human-based ethics as we conceive of it today.
Peter advocates for what he terms a non-homogeneous moral space, essentially the perspective that morals hold different weights and values depending on individual experiences and relative “positioning”. The syllogism “a machine can make a better moral judgment at some point, therefore it would be a better moral entity” is an example of a homogeneous moral viewpoint. We could make this claim by classifying characteristics of morality that we value and deem important based on our own first-person conscious experience, and then projecting those criteria onto the analysis of another entity.
But as Dr. Boltuc points out, “We don’t want to take into account just (moral) operations of first-person being.” In other words, perhaps the best solution for moving forward toward building conscious machines is to cultivate respect for the many forms of consciousness that do, and could, exist in the world. Humans form special relationships with their pets, for example, and recognize the conscious experience of other organisms such as dolphins.
There often exists a moral problem in dealing with other conscious organisms, but Peter believes that solving these problems does not have to be overwhelming. Ethics might be universal, and there is logic to considering the value of different levels of consciousness, but we still maintain a human-based ethical system that is somewhat flexible by nature. In Dr. Boltuc’s view, “ethics are (likely) parochial for a reason”, and we should stay with this ethical base for a very long time to come.