In the previous post, part 4 in this series on Predictive Processing (PP), I explored some aspects of reasoning and how different forms of reasoning can be built from a foundational bedrock of Bayesian inference (click here for parts 1, 2, or 3). This has a lot to do with language, but I also claimed that it depends on how the brain generates new models, which I think is likely to involve some kind of natural selection operating on neural networks. The hierarchical structure of the generative models behind these predictions, as described within a PP framework, also seems to fit well with the hierarchical structure that we find in the brain’s neural networks. In this post, I’m going to talk about the relation between memory, imagination, and unconscious and conscious forms of reasoning.
Memory, Imagination, and Reasoning
Memory is of course crucial to the PP framework, whether for constructing real-time predictions of incoming sensory information (for perception) or for long-term predictions involving high-level, increasingly abstract generative models that allow us to accomplish complex future goals (like planning to go grocery shopping, or planning for retirement). Either case requires the brain to have stored some kind of information pertaining to predicted causal relations. Rather than memories being some kind of exact copy of past experiences (where they’d be stored like data on a computer), research has shown that memory functions more like a reconstruction of those past experiences, one that is modified by current knowledge and context and produced by some of the same faculties used in imagination.
This accounts for any false or erroneous aspects of our memories, where the recalled memory can differ substantially from how the original event was experienced. It also accounts for why our memories become increasingly altered as more time passes. Over time, we learn new things, continuing to change many of our predictive models about the world, and so the reconstructive process becomes more involved the older the memory is. And the context we find ourselves in when trying to recall certain memories further affects this reconstruction process, adapting our memories in some sense to better match what we find most salient and relevant in the present moment.
Conscious vs. Unconscious Processing & Intuitive Reasoning (Intuition)
Another attribute of memory is that it is primarily unconscious: we seem to have this pool of information that is kept out of consciousness until parts of it are needed (during memory recall or other conscious thought processes). In fact, within the PP framework we can think of most of our generative models (predictions), especially those operating in the lower levels of the hierarchy, as being out of our conscious awareness. However, since our memories are composed of (or reconstructed with) many higher-level predictions, and since only a limited number of them can enter our conscious awareness at any moment, this implies that most of the higher-level predictions are also being maintained or processed unconsciously.
It’s worth noting however that when we were first forming these memories, a lot of the information was in our consciousness (the higher-level, more abstract predictions in particular). Within PP, consciousness plays a special role since our attention modifies what is called the precision weight (or synaptic gain) on any prediction error that flows upward through the predictive hierarchy. This means that the prediction errors produced from the incoming sensory information or at even higher levels of processing are able to have a greater impact on modifying and updating the predictive models. This makes sense from an evolutionary perspective, where we can ration our cognitive resources in a more adaptable way, by allowing things that catch our attention (which may be more important to our survival prospects) to have the greatest effect on how we understand the world around us and how we need to act at any given moment.
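To make the idea of precision weighting more concrete, here is a toy sketch of my own (not an implementation from the PP literature): treat the precision weight as a simple gain on the prediction error in a one-step update rule. All names and numbers here are hypothetical illustrations.

```python
def update_prediction(prediction, observation, precision_weight):
    """One step of a toy precision-weighted update: the higher the
    precision weight (attention / synaptic gain), the more the
    prediction error is allowed to move the model."""
    prediction_error = observation - prediction
    return prediction + precision_weight * prediction_error

# An attended signal (high precision weight) updates the model quickly...
attended = update_prediction(prediction=0.0, observation=10.0, precision_weight=0.8)     # 8.0
# ...while an unattended signal (low precision weight) barely changes it.
unattended = update_prediction(prediction=0.0, observation=10.0, precision_weight=0.05)  # 0.5
```

The same prediction error moves the attended model sixteen times as far here, which is one way to picture how attention rations cognitive resources toward whatever seems most relevant at the moment.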
The more often we consciously encounter certain predicted causal relations, the more likely those predictions are to become automated or unconsciously processed. And if this has happened with certain rules of inference that govern how we manipulate and process many of our predictive models, it seems reasonable to suspect that this would contribute to what we call our intuitive reasoning (or intuition). After all, intuition seems to give people the sense of knowing something without knowing how it was acquired and without any present conscious process of reasoning.
This is similar to muscle memory or procedural memory (like learning how to ride a bike) which is consciously processed at first (thus involving many parts of the cerebral cortex), but after enough repetition it becomes a faster and more automated process that is accomplished more economically and efficiently by the basal ganglia and cerebellum, parts of the brain that are believed to handle a great deal of unconscious processing like that needed for procedural memory. This would mean that the predictions associated with these kinds of causal relations begin to function out of our consciousness, even if the same predictive strategy is still in place.
As mentioned above, one difference between this unconscious intuition and other forms of reasoning that operate within the purview of consciousness is that our intuitions are less likely to be updated or changed based on new experiential evidence, since our conscious attention isn’t involved in the updating process. This means that upward-flowing prediction errors that encounter downward-flowing predictions operating unconsciously will carry little precision weight, and so will have little impact on updating those predictions. Furthermore, the fact that the most automated predictions are often those that we’ve been using for most of our lives means that they are also likely to have extremely high Bayesian priors, further isolating them from modification.
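The point about high Bayesian priors resisting modification can also be sketched with a toy example (again, my own illustration rather than a model from the literature): in a simple running-mean update, the more past observations stand behind a belief, the less a single new data point can move it.

```python
def updated_mean(prior_mean, n_prior_obs, new_observation):
    """Toy Bayesian-style update of a running mean: a belief backed by
    many past observations (a strong prior) shifts far less in response
    to one new, conflicting observation."""
    return (prior_mean * n_prior_obs + new_observation) / (n_prior_obs + 1)

# A lightly supported belief shifts substantially...
young_belief = updated_mean(prior_mean=0.0, n_prior_obs=1, new_observation=10.0)   # 5.0
# ...while a lifetime-reinforced belief barely moves at all.
old_belief = updated_mean(prior_mean=0.0, n_prior_obs=9999, new_observation=10.0)  # 0.001
```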
Some of these priors may become what are called hyperpriors or priors over priors (many of these believed to be established early in life) where there may be nothing that can overcome them, because they describe an extremely abstract feature of the world. An example of a possible hyperprior could be one that demands that the brain settle on one generative model even when it’s comparable to several others under consideration. One could call this a “tie breaker” hyperprior: if the brain didn’t have this kind of predictive mechanism in place, it may never be able to settle on a model, causing it to see the world (or some aspect of it) as a superposition of equiprobable states rather than simply one determinate state. We can see the potential problem for an organism’s survival prospects if it didn’t have this kind of hyperprior in place. Whether a hyperprior like this is a form of innate specificity or something acquired through early learning is debatable.
An obvious trade-off with intuition (or any kind of innate biases) is that it provides us with fast, automated predictions that are robust and likely to be reliable much of the time, but at the expense of not being able to adequately handle more novel or complex situations, thereby leading to fallacious inferences. Our cognitive biases are also likely related to this kind of unconscious reasoning whereby evolution has naturally selected cognitive strategies that work well for the kind of environment we evolved in (African savanna, jungle, etc.) even at the expense of our not being able to adapt as well culturally or in very artificial situations.
Imagination vs. Perception
One large benefit of storing so much perceptual information in our memories (predictive models with different spatio-temporal scales) is our ability to re-create it offline (so to speak). This is where imagination comes in, where we are able to effectively simulate perceptions without requiring a stream of incoming sensory data that matches it. Notice however that this is still a form of perception, because we can still see, hear, feel, taste and smell predicted causal relations that have been inferred from past sensory experiences.
The crucial difference, within a PP framework, is the role of precision weighting on the prediction error, just as we saw above in terms of trying to update intuitions. If precision weighting is set or adjusted to be relatively low with respect to a particular set of predictive models, then prediction error will have little if any impact on the model. During imagination, we effectively decouple the bottom-up prediction error from the top-down predictions associated with our sensory cortex (by reducing the precision weighting of the prediction error), thus allowing us to intentionally perceive things that aren’t actually in the external world. We need not decouple the error from the predictions entirely, as we may want our imagination to somehow correlate with what we’re actually perceiving in the external world. For example, maybe I want to watch a car driving down the street and simply imagine that it is a different color, while still seeing the rest of the scene as I normally would. In general though, it is this decoupling “knob” that we can turn (precision weighting) that underlies our ability to produce and discriminate between normal perception and our imagination.
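This decoupling “knob” can be sketched in the same toy fashion (a hypothetical model of my own, for illustration only): with the precision weight turned down to zero, the estimate stays wherever the top-down model puts it, no matter what the sensory stream says.

```python
def perceive(prior, sensory_stream, precision_weight):
    """Toy simulation of reconciling a top-down prediction (prior) with a
    stream of bottom-up sensory samples. A near-zero precision weight
    decouples the estimate from the senses, as in imagination."""
    estimate = prior
    for sample in sensory_stream:
        estimate += precision_weight * (sample - estimate)
    return estimate

stream = [5.0] * 20  # the external world keeps reporting "5"

coupled = perceive(prior=0.0, sensory_stream=stream, precision_weight=0.5)   # converges toward 5.0
imagined = perceive(prior=0.0, sensory_stream=stream, precision_weight=0.0)  # stays at the prior, 0.0
```

An intermediate weight would give the partial decoupling in the car example: some features keep tracking the senses while others are left free to be imagined.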
So what happens when we lose the ability to control our perception in a normal way (whether consciously or not)? Well, this usually results in our having some kind of hallucination. Since perception is often referred to as a form of controlled hallucination (within PP), we could better describe a pathological hallucination (such as that arising from certain psychedelic drugs or a condition like schizophrenia) as a form of uncontrolled hallucination. In some cases, even with a perfectly normal, healthy brain, when the prediction error simply can’t be minimized enough, or when the brain continuously switches between models based on what we’re looking at, we experience perceptual illusions.
Whether it’s illusions, hallucinations, or any other kind of perceptual pathology (like not being able to recognize faces), PP offers a good explanation for why these kinds of experiences can happen to us. It’s either because the models themselves are poor (in their causal structure or priors) or because something isn’t being controlled properly, like the delicate balance between precision weighting and prediction error; either failure could result from an imbalance in neurotransmitters or some kind of brain damage.
Imagination & Conscious Reasoning
While most people would tend to define imagination as that which pertains to visual imagery, I prefer to classify all conscious experiences that do not directly result from online perception as imagination. In other words, any part of our conscious experience that isn’t stemming from an immediate inference of incoming sensory information is what I consider to be imagination. This is because any kind of conscious thinking is going to involve an experience that could in theory be re-created by an artificial stream of incoming sensory information (along with our top-down generative models that put that information into a particular context of understanding). As long as the incoming sensory information were a particular way (any way that we can imagine!), even if it could never actually be that way in the external world we live in, it seems to me that it should be able to reproduce any conscious process, given the right top-down predictive model. Another way of saying this is that imagination is simply another word for any kind of offline conscious mental simulation.
This also means that I’d classify any and all kinds of conscious reasoning processes as yet another form of imagination. Just as with more standard conceptions of imagination (within PP at least), we are taking particular predictive models and manipulating them in certain ways in order to simulate some result, with this process decoupled (at least in part) from actual incoming sensory information. We may, for example, apply a rule of inference that we’ve picked up on and manipulate several predictive models of causal relations using that rule. As mentioned in the previous post and in part 2 of this series, language is also likely to play a special role here, where we’ll likely be using it to help guide this conceptual manipulation process by organizing and further representing the causal relations in a linguistic form, and then determining the resulting inference (which will more than likely be in a linguistic form as well). In doing so, we are able to take highly abstract properties of causal relations and apply rules to them to extract new information.
If I imagine a purple elephant trumpeting and flying in the air over my house, even though I’ve never experienced such a thing, it seems clear that I’m manipulating several different types of predicted causal relations at varying levels of abstraction and experiencing the result of that manipulation. This involves inferred causal relations like those pertaining to visual aspects of elephants, the color purple, flying objects, motion in general, houses, the air, and inferred causal relations pertaining to auditory aspects like trumpeting sounds and so forth.
Specific instances of these kinds of experienced causal relations have led to my inferring them as an abstract probabilistically-defined property (e.g. elephantness, purpleness, flyingness, etc.) that can be reused and modified to some degree to produce an infinite number of possible recreated perceptual scenes. These may not be physically possible perceptual scenes (since elephants don’t have wings to fly, for example) but regardless I’m able to add or subtract, mix and match, and ultimately manipulate properties in countless ways, only limited really by what is logically possible (so I can’t possibly imagine what a square circle would look like).
What if I’m performing a mathematical calculation, like “adding 9 + 9”, or some other similar problem? This appears (upon first glance at least) to be very qualitatively different than simply imagining things that we tend to perceive in the world like elephants, books, music, and other things, even if they are imagined in some phantasmagorical way. As crazy as those imagined things may be, they still contain things like shapes, colors, sounds, etc., and a mathematical calculation seems to lack this. I think the key thing to realize here is the fundamental process of imagination as being able to add or subtract and manipulate abstract properties in any way that is logically possible (given our current set of predictive models). This means that we can imagine properties or abstractions that lack all the richness of a typical visual/auditory perceptual scene.
In the case of a mathematical calculation, I would be manipulating previously acquired predicted causal relations that pertain to quantity and changes in quantity. Once I was old enough to infer that separate objects existed in the world, then I could infer an abstraction of how many objects there were in some space at some particular time. Eventually, I could abstract the property of how many objects without applying it to any particular object at all. Using language to associate a linguistic symbol for each and every specific quantity would lay the groundwork for a system of “numbers” (where numbers are just quantities pertaining to no particular object at all). Once this was done, then my brain could use the abstraction of quantity and manipulate it by following certain inferred rules of how quantities can change by adding to or subtracting from them. After some practice and experience I would now be in a reasonable position to consciously think about “adding 9 + 9”, and either do it by following a manual iterative rule of addition that I’ve learned to do with real or imagined visual objects (like adding up some number of apples or dots/points in a row or grid), or I can simply use a memorized addition table and search/recall the sum I’m interested in (9 + 9 = 18).
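The two strategies at the end of that paragraph, counting up by an iterative rule versus recalling a memorized addition table, can be sketched directly (a toy illustration of my own, not a claim about neural implementation):

```python
def add_by_counting(a, b):
    """Manual iterative rule of addition: start at a and take b successor
    steps, the way one might count up apples, dots, or fingers."""
    total = a
    for _ in range(b):
        total += 1
    return total

# A memorized single-digit addition table, used as a direct lookup.
ADDITION_TABLE = {(i, j): i + j for i in range(10) for j in range(10)}

counted = add_by_counting(9, 9)    # 18, by iterating the rule step by step
recalled = ADDITION_TABLE[(9, 9)]  # 18, by recalling the memorized entry
```

Both routes reach the same answer; the lookup is just the automated, practiced version of the slower rule-following procedure.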
Whether we consider imagining a purple elephant, mentally adding up numbers, thinking about what I’m going to say to my wife when I see her next, or trying to explicitly apply logical rules to some set of concepts, all of these forms of conscious thought or reasoning are simply different sets of predictive models that I’m manipulating in mental simulations until I arrive at a perception that’s understood in the desired context and that has minimal prediction error.
Putting it all together
In summary, I think we can gain a lot of insight by looking at all the different aspects of brain function through a PP framework. Imagination, perception, memory, intuition, and conscious reasoning fit together very well when viewed as different aspects of hierarchical predictive models that are manipulated and altered in ways that give us a much more firm grip on the world we live in and its inferred causal structure. Not only that, but this kind of cognitive architecture also provides us with an enormous potential for creativity and intelligence. In the next post in this series, I’m going to talk about consciousness, specifically theories of consciousness and how they may be viewed through a PP framework.