Episode Summary Do you need a body to think? This is a worthwhile (and also perplexing) question, and an ongoing debate among roboticists. Cognitive roboticist Dr. Mark Bickhard belongs to a school of thought holding that cognition and intelligence – and maybe consciousness itself – require embodiment and direct interaction with the world. In this interview, he discusses the concept of normative function and self-maintenance in entities, and why this matters when it comes to thinking.

Guest: Mark Bickhard

Expertise: Philosophy, Cognitive Science, Psychology, Biology, Counseling, Computer Science

Recognition in Brief: Dr. Mark Bickhard holds the Henry R. Luce Professorship in Cognitive Robotics and the Philosophy of Knowledge, and is the recipient of the Three Year Extension Award from the Henry R. Luce Foundation. Dr. Bickhard has authored five books, the latest being The Psychology of Personhood: Philosophical, Historical, Social-Developmental, and Narrative Perspectives, in addition to publishing numerous book chapters and articles in various scientific publications. He is also a renowned global speaker and lecturer.

Current Affiliations: Henry R. Luce Professor of Cognitive Robotics and the Philosophy of Knowledge at Lehigh University; Director, Institute for Interactivist Studies; Editor, New Ideas in Psychology

What’s in a Thought?

There is more to thinking than just using our brains. This is something that is all too clear to Dr. Mark Bickhard, currently Henry R. Luce Professor of Cognitive Robotics and the Philosophy of Knowledge at Lehigh University. When I asked Dr. Bickhard to define cognitive robotics, he explained it as “an orientation to cognition that basically holds that cognition arises in systems that interact with the world, not just process inputs from the world.” Robots could be a prime example, but animals are an even more obvious one.

Interaction with the world seems to require some level of sensory interaction, whether at the level of a bacterium or at levels not yet perceived by humans. The ability to think, built over millions of years, is based on this sensory information, and as far as we know requires some form of body to receive that information. This leads us to the issue of robots that can truly think, and the debate over the need for embodiment.

An Argument on the Side of Embodiment

When it comes to the debate over whether embodiment is required for real thought, Mark would cross the line in the sand and stand on the “yes” side. He grounds his argument in the concept of future-oriented cognition, which is essentially the prediction of a range of situations based on contextual variables. The concept was already recognized in the 19th century, during the early decades of psychological research.

Bickhard emphasizes the importance of an entity being able to anticipate its options in a given situation, where a particular interaction will yield a result that renders the interaction either correct or incorrect. If an organism anticipates and engages with the external world, and the outcome does not match the initial anticipation, then the underlying idea or thought is rendered false. This is a simplified conception, but it seems to hold weight in how we create and interact with reality, and vice versa.
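This anticipation-based picture of representation can be sketched in a few lines of code. The sketch below is purely illustrative (the toy world, the function names, and the outcomes are my own assumptions, not Bickhard’s formalism): a “representation” is an anticipated outcome of an interaction, and it counts as true or false depending on whether the world bears it out.

```python
# Illustrative sketch (not Bickhard's formalism): a representation is an
# anticipated outcome of an interaction, and it is true or false depending
# on whether the interaction actually yields that outcome.

def interact(world, action):
    """The world maps an action to the outcome it actually produces."""
    return world[action]

def anticipation_holds(world, action, anticipated_outcome):
    """True iff the anticipated outcome of the interaction is borne out."""
    return interact(world, action) == anticipated_outcome

# A toy world: pushing the door opens it; pulling does nothing.
world = {"push": "door_open", "pull": "no_change"}

print(anticipation_holds(world, "push", "door_open"))  # True: anticipation confirmed
print(anticipation_holds(world, "pull", "door_open"))  # False: anticipation falsified
```

The key point the sketch captures is that truth and falsity only arise for a system that acts: without the interaction, there is nothing for the anticipation to succeed or fail at.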

“Truth value is the most fundamental characteristic of representation,” remarks Bickhard. Truths emerge in interactive representations, but such conclusions can only exist in a system that can interact with the world in the first place – which brings us back to the necessity of embodiment.

What about Watson?

So if a machine really needs some form of body to interact, and hence to think, then what about Watson, or any other high-powered computer? Is it thinking, or could it think, in its box-like form, even if we replicated and implanted a human-like mind inside it?

Mark would argue no, for two reasons. First, the computer does not interact with the world – not in a normative sense, anyhow. Interactions have to be normative in order to succeed or fail, for anticipations to be correct or incorrect. Computers are not thinking because they are not normative, argues Bickhard.

In its most basic sense, normativity rests on an asymmetry among distinctions in the world: some actions (or their absence) are better than others, based on the notion of function. For example, kidneys filter blood, and it makes sense to say they can be dysfunctional or do the job poorly; this illustrates normative function at the biological level. This success-or-failure criterion underlies all other normative functions, all the way up to abstract notions such as representation and ethics.

The second reason that machines (as we know them) cannot think (according to the theory of embodiment) has to do with the physical model itself. “What constitutes emergent normativity, I don’t think computers could ever have it,” says Mark. “Normativity seems to arise in a thermodynamic sense, in systems that are far from thermodynamic equilibrium.”

A candle flame clearly is not thinking, but it does differ from a rock: the flame is an organization of processes that must maintain itself, while the rock is simply stable matter. An isolated rock will sit around for an eternity (hypothetically), content and undisturbed, unless enough energy breaks it apart – and even then its parts remain.

When a candle flame or a person is pushed to thermodynamic equilibrium, however, it ceases to exist altogether. An entity of this kind must therefore maintain its far-from-equilibrium condition, and that maintenance is functional relative to its existence. “A bacterium that swims up a sugar gradient is going after food and contributing to the maintenance of its own equilibrium conditions, recursively so,” explains Bickhard. “If it finds itself going down the gradient, it will tumble and then swim again, until it finds itself going up the sugar gradient once again…swimming is functional for the bacterium, but dysfunctional if going down the gradient.” Unlike a candle flame, a bacterium can sense the difference between function and dysfunction and can switch between actions accordingly, a capacity known as recursive self-maintenance.
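The run-and-tumble behavior Bickhard describes can be simulated in a few lines. The toy model below rests on my own assumptions (a one-dimensional world, a sugar peak at x = 0, unit-length swims, a fixed random seed), not on any biological detail: the bacterium keeps swimming while the sugar concentration rises, and tumbles to a new random heading when it senses the concentration falling.

```python
import random

# Toy run-and-tumble sketch (illustrative assumptions, not a biological model).
# Swimming up the gradient is "functional" and continues; swimming down is
# "dysfunctional" and triggers a tumble to a new random heading.

def sugar(x):
    """Sugar concentration: highest at x = 0, falling off on both sides."""
    return -abs(x)

def chemotaxis(start, steps=200, seed=0):
    rng = random.Random(seed)
    x = start
    heading = rng.choice([-1, 1])
    for _ in range(steps):
        nxt = x + heading
        if sugar(nxt) < sugar(x):
            heading = rng.choice([-1, 1])  # going down the gradient: tumble
        else:
            x = nxt                        # going up (or flat): keep swimming
    return x

print(chemotaxis(start=50))  # the bacterium drifts toward the sugar peak at x = 0
```

Note that the bacterium never needs a map of the world: sensing whether its current action is functional or dysfunctional, and switching actions accordingly, is enough to carry it up the gradient.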

This capacity to take advantage of functional possibilities, and eventually to think about the choices at stake, emerges naturally in evolution as a characteristic of complex agents. Evolution creates systems that have this kind of self-maintenance. A system like a computer has no stake in its own existence: it will do whatever it is going to do, absent any thought, because nothing about its persistence is at stake.

Robots Will Have to Learn to Think

So what does that mean for robots? Dr. Bickhard would argue that robots, as we currently conceive of them, are not far enough from equilibrium to really be “thinking machines.” There are, of course, robots that simulate many of these interactive possibilities, and some robots are a bit farther from equilibrium because they have a battery that needs to be charged. As the battery runs down, some can even detect this and plug themselves in – but almost every part of such a robot is like the rock, at thermodynamic equilibrium.

One of the things the field of cognitive robotics has converged on in recent years is the realization that humans cannot design fully formed robots from scratch – the task is simply too complicated. This awareness gave birth to the fields of epigenetic and developmental robotics, which model robots on the evolutionary genius that surrounds us.

But science can never completely rule out the possibility that such a machine could exist. Many advanced machine memories are far from equilibrium, and it is possible that, with time, such systems could come to have a stake in the world through increased sensory interaction and continued deep learning. Only time will tell.