A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception. As someone highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory fascinating, above all the counter-intuitive connection Hoffman proposes between evolution and perception. It is certainly reasonable, and intuitive, to assume that natural selection would favor perceptions that are closer to “the truth”, that is, closer to the objective reality that exists independently of our minds, on the grounds that more accurate perceptions are more likely to promote survival. For example, if I perceived lions as inert objects like trees, I would be far more likely to be eaten (and thus selected against) than someone who perceives a lion as a mobile predator that could kill them.
While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those whose perceptions are closer to reality will never be selected for nearly as strongly as those whose perceptions are tuned to fitness instead. More than that, truth in this case will be driven to extinction when it is pitted against perceptual models tuned to fitness. That is to say, evolution will select for organisms that perceive the world less accurately (in terms of the underlying reality) as long as their perception is tuned for survival benefits. The bottom line is that, at any given level of complexity, processing more information costs more time and resources, so if a “heuristic” method of perception can evolve instead, one that “hides” the complex information underlying reality and provides a species-specific guide to adaptive behavior, it will always be the preferred option.
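The extinction claim can be illustrated with a minimal replicator-dynamics sketch. The payoff numbers below are invented for illustration (they are not taken from Hoffman’s simulations); the only assumption that matters is that the fitness-tuned “interface” strategy nets a slightly higher payoff because it pays a lower information-processing cost:

```python
def replicator(x_truth=0.5, f_truth=1.0, f_interface=1.4, steps=200):
    """Track the population share of a 'truth' strategy competing against
    a fitness-tuned 'interface' strategy under standard discrete
    replicator dynamics. Payoff values are hypothetical."""
    for _ in range(steps):
        avg = x_truth * f_truth + (1.0 - x_truth) * f_interface
        # A strategy's share grows or shrinks with its fitness
        # relative to the population average.
        x_truth = x_truth * f_truth / avg
    return x_truth

# Even a modest fitness edge drives the truth strategy to
# (effective) extinction within a couple hundred generations.
print(replicator())
```

The exact numbers are unimportant; any persistent fitness gap, however small, shrinks the disadvantaged strategy’s share geometrically, which is the sense in which “truth goes extinct” in these models.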
To see this point more clearly, let’s consider an example. Imagine an animal that regularly eats some kind of insect, such as a beetle, but only beetles in a particular size range are safe to eat; otherwise it runs a relatively high risk of eating the wrong kind of beetle (and we can assume the “wrong” kind would be deadly). Now imagine two possible types of evolved perception. The animal could have very accurate perceptions of beetle size, distinguishing many different sizes from one another and then choosing the proper range to eat. Or it could evolve far less accurate perceptions, in which all beetles that are too small or too large look indistinguishable from one another (say, as identical red-colored blobs), while all beetles in the ideal size range look like equally indistinguishable green-colored blobs. In this latter case, the only discrimination the animal makes is between red and green blobs.
Both types of perception would solve the problem of which beetles to eat, but the latter (even if much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process much less information about the environment, ignoring relatively useless detail like exact beetle size. With beetle size as the only variable relevant to survival, evolution would select for the organism that knows less total information about beetle size, as long as it can distinguish the edible beetles from the poisonous ones. Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we should generically expect to see in the world. At best the two may overlap, and since nothing requires that they do, truth will go extinct.
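A toy foraging simulation makes the cost argument concrete. Everything here (the size range, the payoffs, the per-category perception cost) is made up for illustration; the point is only that when both strategies eat exactly the same beetles, the one maintaining fewer perceptual categories comes out ahead:

```python
import random

EDIBLE = range(4, 7)  # hypothetical: only sizes 4-6 are safe to eat

def forage(n_categories, encounters=10_000, seed=0):
    """Average payoff per encounter for a perceiver that distinguishes
    n_categories of beetle. Both perceivers eat exactly the edible
    beetles; they differ only in how much their perception costs."""
    rng = random.Random(seed)
    payoff = 0.0
    for _ in range(encounters):
        size = rng.randint(1, 10)
        if size in EDIBLE:           # both strategies make the right call
            payoff += 1.0
        payoff -= 0.05 * n_categories  # cost of maintaining perceptual categories
    return payoff / encounters

truth = forage(10)     # distinguishes all ten sizes
interface = forage(2)  # just "red blob" vs. "green blob"
print(truth, interface)  # the interface perceiver nets more per encounter
```

Because the two perceivers behave identically toward every beetle, the entire fitness gap comes from the information-processing cost, which is exactly the “heuristic beats truth” point above.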
Perception is Analogous to a Desktop Computer Interface
Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer. When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.
The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks, because trying to do so in the most “truthful” way, by manually manipulating every diode, resistor, and transistor, would be far more cumbersome and less effective than using the interface. Therefore the interface, by hiding this truth from us, allows us to “navigate” that computational world with more fitness. In this case, having more fitness simply means being able to accomplish information processing goals more easily, with fewer resources, and so on.
Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, we should obviously still take it seriously, because moving that folder to the trash bin can have direct implications for our lives, potentially destroying months’ worth of valuable work on a manuscript contained in that folder. Likewise, we should take our perceptions seriously even if we don’t take them literally. We know that stepping in front of a moving train will likely end our conscious experience, even if it does so for causal reasons we have no epistemic access to through our perception, given the species-specific “desktop interface” that evolution has endowed us with.
Relevance to the Mind-body Problem
The crucial point of this analogy is that if our knowledge were confined to the desktop interface of the computer, we could never ascertain the underlying reality of the “computer”, because all the information about that underlying reality that we don’t need to know is hidden from us. The same would apply to our perception: it would be epistemically isolated from the objective reality that underlies it. I want to add that even though it appears we have found the underlying guts of our consciousness, i.e., the findings of neuroscience, it would be mistaken to think this approach will conclusively answer the mind-body problem, because the interface we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.
So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given the support this theory has received. This may turn out to be the reason many philosophers claim there is a “hard problem” of consciousness, one that can’t be solved. It could be that we are simply stuck in the desktop interface, with no way to find out about the underlying reality that gives rise to it. All we can do is maximize our knowledge of the interface itself, and that would be our epistemic boundary.
Predictions of the Theory
Now if this were just a fancy idea put forward by Hoffman, it would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic-algorithm simulations shows that the theory is more than plausible. Better still, it is an actual scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well. It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.” The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible. Thus, evidence of interfaces should be widespread throughout nature.
In his paper, he mentions the jewel beetle as a case in point. This beetle has a perceptual category, desirable females, which works well in its niche: the male uses it to choose larger females, because they are the best mates. According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t. Instead, it relies on a category based on “bigger is better”, and although this bestows highly fit behavior on the male beetle in its evolutionary niche, when it encounters a “stubbie” beer bottle it falls into an effectively infinite loop, drawn to this supernormal stimulus because it is smooth, brown, and extremely large. We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead takes an “informational shortcut”. The supernormal stimuli that have been documented in many species further support the theory and count as evidence against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.
More on Conscious Realism (Consciousness is all there is?)
This last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory as proved by Chetan Prakash. It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception) and it also provides answers to common objections put forward by other scientists and philosophers on this theory.
This all depends on a theistic conception of truth. And it presupposes that “closer to reality” actually has a meaning.
I suppose Hoffman’s argument does work as a reductio ad absurdum of those assumptions, though I don’t think Hoffman intended it to be seen that way.
I’m not exactly sure what you mean by a theistic conception of truth. Do you mean like a “God’s eye” view of the universe? I suppose one could describe it that way, but the conception of truth that Hoffman seems to argue is the “objective reality” of conscious agents (the entire set of agents, which when in contact — possibly always — become a single conscious agent), so I’m not sure what you mean by your comment on Hoffman not intending it to be seen that way. Seen what way? In a nutshell though, “closer to reality” (per Hoffman) simply means the underlying reality of our perceptual interface.
You are right though, that it presupposes that “closer to reality” has a meaning. This is an assumption of Hoffman’s — that our perception is but a fitness-tuned interface to the actual reality (i.e. “truth”).
I questioned that idea in a recent post on my blog. And I don’t actually think that perception has anything to do with truth. Rather, I see truth as having to do with language.
Ah yes. Well then your view appears to be compatible with the interface theory of perception, at least in the sense that Hoffman thinks truth in perception is driven to extinction, with fitness being key. I understand that your view of truth focuses on its relationship to language: some posit that truth must be describable in language, while you claim that a view-independent description of any metaphysical objective reality can’t be represented in any language. That’s compatible with Hoffman’s realism, since there is no need for truth or any underlying reality to be describable by language, and even if it were, it wouldn’t have to describe anything meaningful to us. Under the ITOP, one wouldn’t expect it to be meaningful, nor necessarily describable by propositions or the like.
I like to think of Hoffman’s theory of conscious agents as analogous to neurons in a brain. If we as individuals are like neurons in a brain, then our communicating with other individuals (other conscious agents) is analogous to neurons communicating with one another. The interaction of the neurons constitutes a new conscious agent, and at a high enough level we get the single conscious agent that we see as ourselves. On an empirical note, as Hoffman mentions, if we split the corpus callosum of an individual’s brain, we find evidence of two consciousnesses in that split-brain patient. So before the split, was there only one conscious agent? It appears so, but once that conscious agent is split into two parts, the two halves behave as if they are separate conscious agents. We recognize this because we know what it is like to interact with one conscious agent. In any case, individuals interacting with one another could create a new conscious agent, and all the interactions we have with our environment could be interactions with other conscious agents, whether we notice it or not. It’s all very interesting…