The Experientiality of Matter

If there’s one thing that Nietzsche advocated for, it was to eliminate dogmatism and absolutism from one’s thinking.  And although I generally agree with him here, I do think that there is one exception to this rule.  One thing that we can be absolutely certain of is our own conscious experience.  Nietzsche actually had something to say about this (in Beyond Good and Evil, which I explored in a previous post), where he responds to Descartes’ famous adage “cogito ergo sum” (the very adage associated with my blog!), and he basically says that Descartes’ conclusion (“I think therefore I am”) shows a lack of reflection concerning the meaning of “I think”.  He wonders how he (and by extension, Descartes) could possibly know for sure that he was doing the thinking rather than the thought doing the thinking, and he even considers the possibility that what is generally described as thinking may actually be something more like willing or feeling or something else entirely.

Despite Nietzsche’s criticism against Descartes in terms of what thought is exactly or who or what actually does the thinking, we still can’t deny that there is thought.  Perhaps if we replace “I think therefore I am” with something more like “I am conscious, therefore (my) conscious experience exists”, then we can retain some core of Descartes’ insight while throwing out the ambiguities or uncertainties associated with how exactly one interprets that conscious experience.

So I’m in agreement with Nietzsche in the sense that we can’t be certain of any particular interpretation of our conscious experience, including whether or not there is an ego or a self (which Nietzsche actually describes as a childish superstition similar to the idea of a soul), nor can we make any certain inferences about the world’s causal relations stemming from that conscious experience.  Regardless of these limitations, we can still be sure that conscious experience exists, even if it can’t be ascribed to an “I” or a “self” or any particular identity (let alone a persistent identity).

Once we’re cognizant of this certainty, and if we’re able to crawl out of the well of solipsism and eventually build up a theory about reality (for pragmatic reasons at the very least), then we must remain aware of the priority of consciousness in any resultant theory we construct about reality, with regard to its structure or any of its other properties.  Personally, I believe that some form of naturalistic physicalism (a realistic physicalism) is the best candidate for an ontological theory that is the most parsimonious and explanatory for all that we experience in our reality.  However, most people who adopt some brand of physicalism seem to throw the baby out with the bathwater (so to speak), eliminating consciousness by assuming it’s an illusion or that it’s non-physical (leaving no room for it in a physicalist theory, aside from its neurophysiological attributes).

Although I used to feel differently about consciousness (and its relationship to physicalism), where I thought it was plausible for it to be some kind of an illusion, upon further reflection I’ve come to realize that this was a rather ridiculous position to hold.  Consciousness can’t be an illusion in the proper sense of the word, because the experience of consciousness is real.  Even if I’m hallucinating, such that my perceptions don’t directly correspond with the actual incoming sensory information transduced through my body’s sensory receptors, we can only say that the perceptions are illusory insofar as they don’t directly map onto that incoming sensory information.  But I still can’t say that having these experiences is itself an illusion.  And this is because consciousness is self-evident and experiential in that it constitutes whatever is experienced, no matter what that experience consists of.

As for my thoughts on physicalism, I came to realize that positing consciousness as an intrinsic property of at least some kinds of physical material (analogous to a property like mass) allows us to avoid having to call consciousness non-physical.  If it is simply an experiential property of matter, that doesn’t negate its being a physical property of that matter.  It may be that we can’t access this property in such a way as to evaluate it with external instrumentation, like we can for all the other properties of matter that we know of such as mass, charge, spin, or what-have-you, but that doesn’t mean an experiential property should be off limits for any physicalist theory.  It’s just that most physicalists assume that everything can or has to be reducible to the externally accessible properties that our instrumentation can measure.  And this suggests that they’ve simply assumed that the physical can only include externally accessible properties of matter, rather than both internally and externally accessible properties of matter.

Now it’s easy to see why science might push philosophy in this direction because its methodology is largely grounded in third-party verification and a form of objectivity involving the ability to accurately quantify everything about a phenomenon with little or no regard for introspection or subjectivity.  And I think that this has caused many philosophers to paint themselves into a corner by assuming that any ontological theory underlying the totality of our reality must be constrained in the same way that the physical sciences are.  To see why this is an unwarranted assumption, let’s consider a “black box” that can only be evaluated by a certain method externally.  It would be fallacious to conclude that just because we are unable to access the inside of the box, the box must therefore be empty inside, or that there can’t be anything substantially different inside the box compared to what is outside the box.

We can analogize this limitation of studying consciousness with our ability to study black holes within the field of astrophysics, where we’ve come to realize that accessing any information about their interior (aside from how much mass there is) is impossible to do from the outside.  And if we managed to access this information (if there is any) from the inside by leaping past its outer event horizon, it would be impossible for us to escape and share any of that information.  The best we can do is to learn what we can from the outside behavior of the black hole in terms of its interaction with surrounding matter and light and infer something about the inside, like how much matter it contains (e.g. we can infer the mass of a black hole from its outer surface area).  And we can learn a little bit more by considering what is needed to create or destroy a black hole, thus creating or destroying any interior qualities that may or may not exist.
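The mass-from-area inference mentioned above can be made exact for the simplest case of a non-rotating (Schwarzschild) black hole, since the horizon area scales with the square of the mass; measuring the outer surface area therefore fixes the mass:

```latex
r_s = \frac{2GM}{c^2}, \qquad
A = 4\pi r_s^2 = \frac{16\pi G^2 M^2}{c^4}
\quad\Longrightarrow\quad
M = \frac{c^2}{4G}\sqrt{\frac{A}{\pi}}
```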

A black hole can only form from certain configurations of matter, particularly aggregates that are above a certain mass and density.  And it can only be destroyed by starving it, depriving it of any new matter until it slowly evaporates entirely into Hawking radiation, destroying anything that was on the inside in the process.  So we can infer that any internal qualities it does have, however inaccessible they may be, can be brought into and out of existence through certain physical processes.

Similarly, we can infer some things about consciousness by observing one’s external behavior, including inferring some conditions that can create, modify, or destroy that type of consciousness, but we are unable to know what it’s like to be on the inside of that system once it exists.  We’re only able to know about the inside of our own conscious system, where we are in some sense inside our own black hole with nobody else able to access this perspective.  And I think it is easy enough to imagine that certain configurations of matter simply have an intrinsic, externally inaccessible experiential property, just as certain configurations of matter lead to the creation of a black hole with its own externally inaccessible and qualitatively unknown internal properties.  Despite the fact that we can’t access the black hole’s interior with a strictly external method to determine its internal properties, this doesn’t mean we should assume that whatever properties may exist inside it are therefore fundamentally non-physical, just as we wouldn’t consider alternate dimensions (such as those predicted in M-theory/string theory) that we can’t physically access to be non-physical.  Perhaps one or more of these inaccessible dimensions (if they exist) is what accounts for an intrinsic experiential property within matter (though this is entirely speculative and need not be true for the previous points to hold, it’s an interesting thought nevertheless).

Here’s a relevant quote from the philosopher Galen Strawson, where he outlines what physicalism actually entails:

Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace micropsychism. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an ‘inference to the best explanation’… Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. ‘And were the inmost essence of things laid open to us’ I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism… So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of ‘substance dualism’… Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.

— Galen Strawson, Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?
While I don’t believe that all matter has a mind per se, because a mind is generally conceived as being a complex information processing structure, I think it is likely that all matter has an experiential quality of some kind, even if most instantiations of it are entirely unrecognizable as what we’d generally consider to be “consciousness” or “mentality”.  I believe that the intuitive gap here is born from the fact that the only minds we are confident exist are those instantiated by a brain, which has the ability to make an incredibly large number of experiential discriminations between various causal relations, thus giving it a capacity that is incredibly complex and not observed anywhere else in nature.  On the other hand, a particle of matter on its own would be hypothesized to have the capacity to make only one or two of these kinds of discriminations, making it unintelligent and thus incapable of brain-like activity.  Once a person accepts that an experiential quality can come in varying degrees from say one experiential “bit” to billions or trillions of “bits” (or more), then we can plausibly see how matter could give rise to systems that have a vast range in their causal power, from rocks that don’t appear to do much at all, to living organisms that have the ability to store countless representations of causal relations (memory) allowing them to behave in increasingly complex ways.  And perhaps best of all, this approach solves the mind-body problem by eliminating the mystery of how fundamentally non-experiential stuff could possibly give rise to experientiality and consciousness.

It’s Time For Some Philosophical Investigations

Ludwig Wittgenstein’s Philosophical Investigations is a nice piece of work where he attempts to explain his views on language and the consequences of this view on various subjects like logic, semantics, cognition, and psychology.  I’ve mentioned some of his views very briefly in a couple of earlier posts, but I wanted to delve into his work in a little more depth here and comment on what strikes me as most interesting.  Lately, I’ve been looking back at some of the books I’ve read from various philosophers and have been wanting to revisit them so I can explore them in more detail and share how they connect to some of my own thoughts.  Alright…let’s begin.

Language, Meaning, & Their Probabilistic Attributes

He opens his Philosophical Investigations with a quote from St. Augustine’s Confessions that describes how a person learns a language.  St. Augustine believed that this process involved simply learning the names of objects (for example, by someone else pointing to the objects that are so named) and then stringing them together into sentences, and Wittgenstein points out that this is true to some trivial degree but it overlooks a much more fundamental relationship between language and the world.  For Wittgenstein, the meaning of words cannot simply be attached to an object like a name can.  The meaning of a word or concept has much more of a fuzzy boundary as it depends on a breadth of context or associations with other concepts.  He analogizes this plurality in the meanings of words with the relationship between members of a family.  While there may be some resemblance between different uses of a word or concept, we aren’t able to formulate a strict definition to fully describe this resemblance.

One problem then, especially within philosophy, is that many people assume that the meaning of a word or concept is fixed with sharp boundaries (just like the fixed structure of words themselves).  Wittgenstein wants to disabuse people of this false notion (much as Nietzsche tried to do before him) so that they can stop misusing language, as he believed that this misuse was the cause of many (if not all) of the major problems that had cropped up in philosophy over the centuries, particularly in metaphysics.  Since meaning is actually somewhat fluid and can’t be accounted for by any fixed structure, Wittgenstein thinks that any meaning that we can attach to these words is ultimately going to be determined by how those words are used.  Since he ties meaning with use, and since this use is something occurring in our social forms of life, it has an inextricably external character.  Thus, the only way to determine if someone else has a particular understanding of a word or concept is through their behavior, in response to or in association with the use of the word(s) in question.  This is especially important in the case of ambiguous sentences, which Wittgenstein explores to some degree.

Probabilistic Shared Understanding

Some of what Wittgenstein is trying to point out here are what I like to refer to as the inherently probabilistic attributes of language.  And it seems to me to be probabilistic for a few different reasons, beyond what Wittgenstein seems to address.  First, there is no guarantee that what one person means by a word or concept exactly matches the meaning from another person’s point of view, but at the very least there is almost always going to be some amount of semantic overlap (and possibly 100% in some cases) between the two individuals’ intended meanings, and so there is going to be some probability that the speaker and the listener do in fact share a complete understanding.  It seems reasonable to argue that simpler concepts will have a higher probability of complete semantic overlap whereas more complex concepts are more likely to leave a gap in that shared understanding.  And I think this is true even if we can’t actually calculate what any of these probabilities are.
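To make this first point concrete, here’s a minimal toy simulation (the feature-based model of a concept and the `p_match` parameter are purely my own illustrative assumptions, not anything probability theory or Wittgenstein dictates): if we model a concept as a bundle of independent features, each shared between two speakers with some fixed probability, then the chance of a complete shared understanding falls off quickly as concepts become more complex.

```python
import random

def shared_understanding_prob(n_features, p_match=0.95, trials=10_000):
    """Estimate the probability that two speakers' concepts overlap
    completely, assuming each feature is independently shared with
    probability p_match (an illustrative, made-up parameter)."""
    complete = sum(
        all(random.random() < p_match for _ in range(n_features))
        for _ in range(trials)
    )
    return complete / trials

random.seed(0)
# Simple concepts (few features) are far more likely to be shared
# completely than complex ones like "love" or "consciousness".
for n in (1, 5, 20):
    print(n, shared_understanding_prob(n))
```

Even without knowing the real probabilities, the qualitative point survives any reasonable choice of parameters: complete overlap becomes exponentially less likely as the number of features grows.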

Now my use of the word meaning here differs from Wittgenstein’s because I am referring to something that is not exclusively shared by all parties involved and I am pointing to something that is internal (a subjective understanding of a word) rather than external (the shared use of a word).  But I think this move is necessary if we are to capture all of the attributes that people explicitly or implicitly refer to with a concept like meaning.  It seems better to compromise with Wittgenstein’s thinking and refer to the meaning of a word as a form of understanding that is intimately connected with its use, but which involves elements that are not exclusively external.

We can justify this version of meaning through an example.  If I help teach you how to ride a bike and explain that this activity is called biking or to bike, then we can use Wittgenstein’s conception of meaning and it will likely account for our shared understanding, and so I have no qualms about that, and I’d agree with Wittgenstein that this is perhaps the most important function of language.  But it may be the case that you have an understanding that biking is an activity that can only happen on Tuesdays, because that happened to be the day that I helped teach you how to ride a bike.  Though I never intended for you to understand biking in this way, there was no immediate way for me to infer that you had this misunderstanding on the day I was teaching you this.  I could only learn of this fact if you explicitly explained to me your understanding of the term with enough detail, or if I had asked you additional questions like whether or not you’d like to bike on a Wednesday (for example), with you answering “I don’t know how to do that as that doesn’t make any sense to me.”  Wittgenstein doesn’t account for this gap in understanding in his conception of meaning and I think this makes for a far less useful conception.

Now I think that Wittgenstein is still right in the sense that the only way to determine someone else’s understanding (or lack thereof) of a word is through their behavior, but due to chance as well as where our attention is directed at any given moment, we may never see the right kinds of behavior to rule out any number of possible misunderstandings, and so we’re apt to just assume that these misunderstandings don’t exist because language hides them to varying degrees.  But they can and in some cases do exist, and this is why I prefer a conception of meaning that takes these misunderstandings into account.  So I think it’s useful to see language as probabilistic in the sense that there is some probability of a complete shared understanding underlying the use of a word, and thus there is a converse probability of some degree of misunderstanding.

Language & Meaning as Probabilistic Associations Between Causal Relations

A second way that language is probabilistic is that the unique meanings associated with any particular use of a word or concept, as understood by an individual, are derived from probabilistic associations between various inferred causal relations.  And I believe that this is the underlying cause of most of the problems that Wittgenstein was trying to address in this book.  He may not have been thinking about the problem in this way, but it can account for the fuzzy boundary problem associated with the task of trying to define the meaning of words, since this probabilistic structure underlies our experiences, our understanding of the world, and our use of language such that it can’t be represented by static, sharp definitions.  When a person is learning a concept, like redness, they experience a number of observations and infer what is common to all of those experiences, and then they extract a particular subset of what is common and associate it with a word like red or redness (as opposed to another commonality like objectness, or roundness, or what-have-you).  But in most cases, separate experiences of redness are going to be different instantiations of redness with different hues, textures, shapes, etc., which means that redness gets associated with a large range of different qualia.

If you come across a new quale that seems to more closely match previous experiences associated with redness rather than orangeness (for example), I would argue that this is because the brain has assigned a higher probability to that quale being an instance of redness as opposed to, say, orangeness.  And the brain may very well test a hypothesis of the quale matching the concept of redness versus the concept of orangeness, and depending on your previous experiences of both, and the present context of the experience, your brain may assign a higher probability of orangeness instead.  Perhaps if a red-orange colored object is mixed in with a bunch of unambiguously orange-colored objects, it will be perceived as a shade of orange (to match it with the rest of the set), but if the case were reversed and it were mixed in with a bunch of unambiguously red-colored objects, it would be perceived as a shade of red instead.
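The red-orange example can be sketched as a toy Bayesian comparison (every number here, from the made-up hue scale to the Gaussian likelihoods and priors, is an illustrative assumption of mine rather than a model of actual color vision): the very same ambiguous sensory input gets categorized differently purely because the surrounding objects shift the prior.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    """Gaussian density, used as a crude likelihood for each color category."""
    return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

def posterior_red(hue, prior_red):
    """P(red | hue) on a made-up hue scale: 0 = clearly red, 1 = clearly orange."""
    like_red = normal_pdf(hue, mean=0.2, sd=0.2)      # likelihood under "red"
    like_orange = normal_pdf(hue, mean=0.8, sd=0.2)   # likelihood under "orange"
    evidence = like_red * prior_red + like_orange * (1 - prior_red)
    return like_red * prior_red / evidence

ambiguous_hue = 0.5  # fits "red" and "orange" equally well

# Mixed in with mostly red objects, the prior favors red...
print(posterior_red(ambiguous_hue, prior_red=0.9))
# ...mixed in with mostly orange objects, the same hue reads as orange.
print(posterior_red(ambiguous_hue, prior_red=0.1))
```

Since the ambiguous hue is equally likely under both categories, the posterior is driven entirely by the prior, which is exactly the context effect described above.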

Since our perception of the world depends on context, the meanings we assign to words or concepts also depend on context, not only in the sense of choosing a different use of a word (as Wittgenstein argues) in some language game or other, but also by perceiving the same incoming sensory information as conceptually different qualia (as in the aforementioned case of a red-orange colored object).  In that case, we weren’t intentionally using red or orange in a different way but rather were assigning one word or the other to the exact same sensory information (with respect to the red-orange object), which depended on what else was happening in the scene that surrounded that subset of sensory information.  To me, this highlights how meaning can be fluid in multiple ways, some of that fluidity stemming from our conscious intentions and some of it from unintentional forces at play involving our prior expectations within some context, which directly modify our perceived experience.

This can also be seen through Wittgenstein’s example of what he calls a duckrabbit, an ambiguous image that can be perceived as a duck or a rabbit.  I’ve taken the liberty of inserting this image here along with a copy of it which has been rotated in order to more strongly invoke the perception of a rabbit.  The first image no doubt looks more like a duck and the second image, more like a rabbit.

Now Wittgenstein says that when one is looking at the duckrabbit and sees a rabbit, they aren’t interpreting the picture as a rabbit but are simply reporting what they see.  But in the case where a person sees a duck first and then later sees a rabbit, Wittgenstein isn’t sure what to make of this.  However, he claims to be sure that whatever it is, it can’t be the case that the external world stays the same while an internal cognitive change takes place.  Wittgenstein was incorrect on this point because the external world doesn’t change (in any relevant sense) despite our seeing the duck or seeing the rabbit.  Furthermore, he never demonstrates why two different perceptions would require a change in the external world.  The fact of the matter is, you can stare at this static picture and ask yourself to see a duck or to see a rabbit and it will affect your perception accordingly.  This is partially accomplished by you mentally rotating the image in your imagination and seeing if that changes how well it matches one conception or the other, and since it matches both conceptions to a high degree, you can easily perceive it one way or the other.  Your brain is simply processing competing hypotheses to account for the incoming sensory information, and the top-down predictions of rabbitness or duckness (which you’ve acquired over past experiences) actually change the way you perceive it with no change required in the external world (despite Wittgenstein’s assertion to the contrary).

To give yet another illustration of the probabilistic nature of language, just imagine the head of a bald man and ask yourself, if you were to add one hair at a time to this bald man’s head, at what point does he lose the property of baldness?  If hairs were slowly added at random, and you could simply say “Stop!  Now he’s no longer bald!” at some particular time, there’s no doubt in my mind that if this procedure were repeated (even if the hairs were added non-randomly), you would say “Stop!  Now he’s no longer bald!” at a different point in this transition.  Similarly, if you were looking at a light that was changing color from red to orange, and were asked to say when the color had changed to orange, you would pick a point in the transition that is within some margin of error, but it wouldn’t be an exact, repeatable point in the transition.  We could do this thought experiment with all sorts of concepts that are attached to words, like cat and dog and, for example, use a computer graphics program to seamlessly morph a picture of a cat into a picture of a dog and ask at what point the cat “turned into” a dog.  It’s going to be based on a probability of coincident features that you detect, which can vary over time.  Here’s a series of pictures showing a chimpanzee morphing into Bill Clinton to better illustrate this point:

At what point do we stop seeing a picture of a chimpanzee and start seeing a picture of something else?  When do we first see Bill Clinton?  What if I expanded this series of 15 images into a series of 1000 images so that this transition happened even more gradually?  You would be highly unlikely to pick the exact same point in the transition twice in a row if the images weren’t numbered or arranged in a grid.  We can analogize this phenomenon with an ongoing problem in science, known as the species problem.  This problem can be described as the inherent difficulty of defining exactly what a species is, which is necessary if one wants to determine if and when one species evolves into another.  This problem occurs because the evolutionary processes giving rise to new species are relatively slow and continuous whereas sorting those organisms into sharply defined categories involves the elimination of that generational continuity and replacing it with discrete steps.
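The bald-man and morphing experiments above can be sketched as a quick simulation (the “true” fuzzy boundary and the noise level are purely illustrative assumptions): the reported cutoff is best modeled as a fuzzy boundary plus trial-to-trial noise, which is why repeated runs pick different stopping points.

```python
import random

def stop_point(true_boundary=50, noise_sd=5.0):
    """One run of the 'Stop! Now he's no longer bald!' judgment: the
    reported cutoff (e.g. hair count, or frame number in a morph) is a
    fuzzy boundary plus trial-to-trial noise (both values made up)."""
    return round(true_boundary + random.gauss(0, noise_sd))

random.seed(1)
judgments = [stop_point() for _ in range(5)]
print(judgments)  # a different cutoff on (almost) every repetition
```

The judgments cluster around the same region without ever being an exact, repeatable point, which is all the thought experiment requires.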

And we can see this effect in the series of images above, where each picture could represent some large number of generations in an evolutionary timeline, where each picture/organism looks roughly like the “parent” or “child” of the picture/organism that is adjacent to it.  Despite this continuity, if we look at the first picture and the last one, they look like pictures of distinct species.  So if we want to categorize the first and last picture as distinct species, then we create a problem when trying to account for every picture/organism that lies in between that transition.  Similarly words take on an appearance of strict categorization (of meaning) when in actuality, any underlying meaning attached is probabilistic and dynamic.  And as Wittgenstein pointed out, this makes it more appropriate to consider meaning as use so that the probabilistic and dynamic attributes of meaning aren’t lost.

Now you may think you can get around this problem of fluidity or fuzzy boundaries with concepts that are simpler and more abstract, like the concept of a particular quantity (say, a quantity of four objects) or other concepts in mathematics.  But in order to learn these concepts in the first place, like quantity, and then associate particular instances of it with a word, like four, one had to be presented with a number of experiences and infer what was common to all of those experiences (as was the case with redness mentioned earlier).  And this inference (I would argue) involves a probabilistic process as well, it’s just that the resulting probability of our inferring particular experiences as an instance of four objects is incredibly high and therefore repeatable and relatively unambiguous.  Therefore that kind of inference is likely to be sustained no matter what the context, and it is likely to be shared by two individuals with 100% semantic overlap (i.e. it’s almost certain that what I mean by four is exactly what you mean by four even though this is almost certainly not the case for a concept like love or consciousness).  This makes mathematical concepts qualitatively different from other concepts (especially those that are more complex or that more closely map on to reality), but it doesn’t negate their having a probabilistic attribute or foundation.

Looking at the Big Picture

Though this discussion of language and meaning is not an exhaustive analysis of Wittgenstein’s Philosophical Investigations, it represents an analysis of the main theme present throughout.  His main point was to shed light on the disparity between how we often think of language and how we actually use it.  When we stray away from the way it is actually used in our everyday lives, in one form of social life or other, and instead misuse it such as in philosophy, this creates all sorts of problems and unwarranted conclusions.  He also wants his readers to realize that the ultimate goal of philosophy should not be to try and make metaphysical theories and deep explanations underlying everyday phenomena, since these are often born out of unwarranted generalizations and other assumptions stemming from how our grammar is structured.  Instead, we ought to subdue these temptations to generalize or be dogmatic, and use philosophy as a kind of therapeutic tool to keep our abstract thinking in check and to better understand ourselves and the world we live in.

Although I disagree with some of Wittgenstein’s claims about cognition (in terms of how intimately it is connected to the external world) and take some issue with his arguably less useful conception of meaning, he makes a lot of sense overall.  Wittgenstein was clearly hitting upon a real difference between the way actual causal relations in our experience are structured and how those relations are represented in language.  Personally, I think that work within philosophy is moving in the right direction if the contributions made therein lead us to make more successful predictions about the causal structure of the world.  And I believe this to be so even if this progress includes generalizations that may not be exactly right.  As long as we can structure our thinking to make more successful predictions, then we’re moving forward as far as I’m concerned.  In any case, I liked the book overall and thought that the interlocutory style gave the reader a nice break from the typical form seen in philosophical argumentation.  I highly recommend it!

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)

I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery and so I’ve been unable to write and respond to comments and so forth for a little while.  I’m in the second week following my surgery now and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in this topic for several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise as an overarching schema that can account for everything the brain seems to do.  While its technical application is to account for perception and active inference in particular, I think it can be used more broadly to account for other aspects of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships through a PP lens because I think PP is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post series by looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can best be described as the set of particular predictions that the brain employs, encompassing perception, desires, action, emotion, etc., which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence, a process that approximates Bayesian inference.  Perception, which is constituted by a subset of all our beliefs (many of them implicit or unconscious), is then more or less a form of controlled hallucination, in the sense that what we consciously perceive is not the sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of the causes of that incoming sensory evidence.

Desires can best be described as another subset of one’s beliefs, one with the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states, which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically modeling some kind of causal relationship (or its negation) which is able to manifest itself in a number of highly weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions, one that directly corresponds to the degree of belief or disbelief (so new sensory evidence will more easily sway such a belief).  And just as our perceptions are mediated by a number of low- and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, secreting hormones in your body, etc.).
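The core arithmetic behind this kind of weighting can be sketched in a few lines of Python.  This is my own illustrative toy (a single Kalman-gain-style update with made-up numbers), not a model drawn from the PP literature, but it shows how a precision weight determines how far a prediction error moves a percept:

```python
def perceive(prediction, sensory_input, sensory_precision, prior_precision):
    """One precision-weighted update: the percept is pulled from the prior
    prediction toward the sensory evidence in proportion to how reliable
    (precise) that evidence is relative to the prior prediction."""
    prediction_error = sensory_input - prediction
    gain = sensory_precision / (sensory_precision + prior_precision)
    return prediction + gain * prediction_error

# Reliable evidence (high sensory precision): the percept moves mostly
# toward the input and the prior is largely overridden.
assert perceive(0.0, 8.0, sensory_precision=3.0, prior_precision=1.0) == 6.0
# Unreliable evidence against a strongly weighted prior: the same
# prediction error barely moves the percept at all.
assert perceive(0.0, 8.0, sensory_precision=1.0, prior_precision=3.0) == 2.0
```

Real PP models stack many such updates into a hierarchy, but the trade-off between prior weight and evidence weight works the same way at every level.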

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, with these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly weighted predictions) and therefore need much more contradictory sensory evidence to be overcome or modified than weak beliefs, which have relatively low priors (low-weighted predictions).
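This asymmetry between strong and weak beliefs follows directly from Bayes’ rule, which a short sketch can make concrete.  The likelihood values below are arbitrary and purely illustrative, as is the 0.5 threshold for “abandoning” a belief:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One step of Bayes' rule for a binary hypothesis."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

def observations_to_flip(prior, threshold=0.5):
    """Count how many contradictory observations it takes to push the
    belief below the threshold.  Each observation is twice as likely if
    the hypothesis is false (likelihoods 0.3 vs 0.6, chosen arbitrarily)."""
    belief, count = prior, 0
    while belief >= threshold:
        belief = update(belief, 0.3, 0.6)
        count += 1
    return count

strong = observations_to_flip(0.99)  # a strongly held belief (high prior)
weak = observations_to_flip(0.60)    # a weakly held belief (low prior)
assert strong > weak
```

With these numbers the strong belief survives several contradictory observations while the weak one flips almost immediately, which is exactly the behavior the paragraph above describes.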

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of highly weighted predictions, such as the lower level prediction that eating a piece of what we call an “apple” will most likely result in the qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Yet another set of predictions is that these expectations will apply not just to one apple but to a number of apples (different instances of one type of apple, or different types of apples altogether), and so on.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple, which under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, reach for it, grab it, bite off a piece, chew it, and swallow it.  If I merely imagine doing such things, the resulting predictions will carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally, at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus gives us a viable way to have an imagination while distinguishing it both from perceptions corresponding to incoming sensory data and from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.
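The idea that the same predictive content counts as imagination, perception, or action-driving desire depending on its weight can be caricatured with a toy threshold model.  The gates and numbers below are entirely my own invention, used only to make the distinction vivid:

```python
# Illustrative thresholds (invented, not from any PP model): a prediction
# must clear one gate to shape what we perceive and a higher gate to
# actually drive the motor system.
PERCEPTUAL_GATE = 0.6
MOTOR_GATE = 0.8

def classify(weight):
    """Classify a prediction's functional role by its weight alone."""
    if weight >= MOTOR_GATE:
        return "drives action"
    if weight >= PERCEPTUAL_GATE:
        return "shapes perception"
    return "imagination"

assert classify(0.9) == "drives action"      # an actual desire
assert classify(0.7) == "shapes perception"  # a percept backed by sensory data
assert classify(0.2) == "imagination"        # a merely imagined perception
```

In a real brain the “gates” would not be fixed scalars, but the ordering (imagination below perception below action) mirrors the weighting story told above.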

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine, directly influencing and being influenced by the world its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian model evidence).  In this particular example, the highly weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place to reduce that error.  The only two ways of reducing this prediction error are to change or eliminate the desire so that it no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but that I do have bananas, my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically; thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, my predictions may motivate me to avoid eating the banana and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money needed to purchase the apple (if I don’t already have it).  To do this, I would be employing a number of predictions having to do with performing actions that earn money, using money to purchase goods, and so on.

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post.  That post can be found here.

An Evolved Consciousness Creating Conscious Evolution

Two Evolutionary Leaps That Changed It All

As I’ve mentioned in a previous post, human biological evolution has led to the emergence of not only consciousness but also a co-existing yet semi-independent cultural evolution (through the unique evolution of the human brain).  This evolutionary leap has allowed us to produce increasingly powerful technologies which in turn have provided a means for circumventing many natural selection pressures that our physical bodies would otherwise be unable to handle.

One of these technologies has been the selective breeding of plants and animals, a process often referred to as “artificial” selection (as opposed to “natural” selection) since human beings, rather than the environment at large, have served as the selection pressure.  By choosing which plants and animals to cultivate and raise, we essentially catalyzed the selection process, applying a selection pressure based on the traits we desired most.  As a result, rather than taking thousands or even millions of years to produce the domesticated plants and animals we have today, the process took a minute fraction of that time, since it was mediated by a consciously guided or teleological process, unlike natural selection, which operates on randomly differentiating traits leading to differential reproductive success (and thus new genomes and species) over time.

This second evolutionary leap (artificial selection, that is) ultimately paved the way for civilization: it expanded the range of our diet and thus our available options for food, and the resulting agriculture allowed us to increase our population density such that human collaboration, a complex division of labor, and ultimately the means for creating new and increasingly complex technologies became possible.  It is largely because of this evolutionary leap that we’ve been able to reach the current pinnacle of human evolution, the newest and perhaps our last evolutionary leap, or what I’ve previously referred to as “engineered selection”.

With artificial selection, we’ve been able to create new species of plants and animals with very unique and unprecedented traits; however, we’ve been limited by the rate of mutations or other genomic differentiation mechanisms that must arise in order to create any new and desirable traits.  With engineered selection, we can simply select or engineer the genomic sequences required to produce the desired traits, effectively circumventing any limits set by the rate of genomic differentiation and giving us instant access to every genomic possibility.
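The speed difference between mutation-limited selection and directly writing a genome can be illustrated with a toy hill-climbing simulation.  All parameters here are arbitrary and real genetics is nothing like this simple, but the contrast in time scales is the point:

```python
import random

random.seed(0)  # deterministic toy run
TARGET = [1] * 20  # a hypothetical trait-coding sequence we want

def generations_to_target(mutation_rate=0.05):
    """Evolve toward TARGET by random per-bit mutation plus selection:
    a mutated copy is kept only if it is at least as fit (as many 1s)."""
    genome = [0] * len(TARGET)
    generations = 0
    while genome != TARGET:
        candidate = [1 - gene if random.random() < mutation_rate else gene
                     for gene in genome]
        if sum(candidate) >= sum(genome):
            genome = candidate
        generations += 1
    return generations

mutation_driven = generations_to_target()  # waits on lucky mutations
engineered = 1                             # write TARGET directly, one step
assert mutation_driven > engineered
```

Even in this tiny 20-bit toy, selection must wait many generations for the right mutations to appear, while “engineered selection” amounts to a single deliberate write; the gap only widens as the genome grows.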

Genetic Engineering Progress & Applications

After a few decades of genetic engineering research, we’ve gained a number of capabilities, including but not limited to: producing recombinant DNA, producing transgenic organisms, utilizing in vivo trans-species protein production, and even creating the world’s first synthetic life form (by adding a completely synthetic, human-constructed bacterial genome to a cell containing no DNA).  The plethora of potential applications for genetic engineering (as well as those already in use) continues to grow as scientists and other creative thinkers discover the power and scope of areas such as mimetics, micro-organism domestication, nano-biomaterials, and many other inter-related niches.

Domestication of Genetically Engineered Micro and Macro-organisms

People have been genetically modifying plants and animals for the same reason they’ve been artificially selecting them: to produce species with more desirable traits. Plants and animals have been genetically engineered to withstand harsher climates, resist harmful herbicides or pesticides (or produce their own pesticides), produce more food or calories per acre (or more nutritious food when all else is equal), etc.  Plants and animals have also been genetically modified for the purposes of “pharming”, where substances that aren’t normally produced by the plant or animal (e.g. pharmacological substances, vaccines, etc.) are expressed, extracted, and then purified.

One of the most compelling applications of genetic engineering within agriculture involves solving the “omnivore’s dilemma”: the prospect of growing unconscious livestock by genetically inhibiting the development of certain parts of the brain so that the animal doesn’t experience any pain or suffering.  There have also been advancements in in vitro meat, that is, producing cultured meat cells so that no actual animal is needed other than some starting cells taken painlessly from live animals (which are then placed into a culture medium to grow into larger quantities of meat).  It should be noted that this latter technique doesn’t actually require any genetic modification, although genetic modification may have merit in improving it.  The most important point here is that these methods should decrease the financial and environmental costs of eating meat, and will likely help to solve the ethical issues regarding the inhumane treatment of animals within agriculture.

We’ve now entered a new niche regarding the domestication of species.  As of a few decades ago, we began domesticating micro-organisms. Micro-organisms have been modified and utilized to produce insulin for diabetics as well as other forms of medicine such as vaccines, human growth hormone, etc.  Certain forms of bacteria have also been genetically modified to turn cellulose and other plant material directly into hydrocarbon fuels.  This year (2014), E. coli bacteria were genetically modified to turn glucose into pinene (a high-energy hydrocarbon used as a rocket fuel).  In 2013, researchers at the University of California, Davis genetically engineered cyanobacteria (a.k.a. blue-green algae) by adding DNA sequences coding for specific enzymes to their genome, such that they can use sunlight and photosynthesis to turn CO2 into 2,3-butanediol (a chemical that can be used as a fuel, or to make paint, solvents, and plastics), thus providing another means of turning our overabundant carbon emissions back into fuel.

On a related note, there are also efforts underway to improve the efficiency of certain hydrocarbon-eating bacteria such as A. borkumensis in order to clean up oil spills even more effectively.  Imagine one day having the ability to use genetically engineered bacteria to directly convert carbon emissions back into mass-produced fuel, and, if the fuel spills during transport, having the counterpart capability of cleaning it up most efficiently with another form of genetically engineered bacteria.  These capabilities are being further developed and are only the tip of the iceberg.

In theory, we should also be able to genetically engineer bacteria to decompose many other materials or waste products that ordinarily decompose extremely slowly. If any of these waste products are hazardous, bacteria could be genetically engineered to break down or transform them into safe and stable compounds.  With these types of solutions we can make many new materials and have a method in place for their proper disposal (if needed).  Additionally, by utilizing some techniques mentioned in the next section, we can also start making more novel materials that decompose via non-genetically-engineered mechanisms.

It is likely that genetically modified bacteria will continue to provide us with many new types of mass-produced chemicals and products. For those processes that do not work effectively (if at all) in bacterial (i.e. prokaryotic) cells, eukaryotic cells such as yeast, insect cells, and mammalian cells can often be used as a viable option. All of these genetically engineered, domesticated micro-organisms will likely be an invaluable complement to the increasing number of genetically modified plants and animals already being produced.

Mimetics

In the case of mimetics, scientists are discovering new ways of creating novel materials using a bottom-up approach at the nano-scale, utilizing some of the self-assembly techniques that natural selection has near-perfected over millions of years.  For example, mollusks form seashells with incredibly strong structural and mechanical properties: their DNA codes for the synthesis of specific proteins, and those proteins bond the raw calcium and carbonate into alternating layers until a fully formed shell is produced.  The pearls made by clams are formed by similar means. We could potentially use the same DNA sequence in combination with a scaffold of our choosing, such that a similar product is formed with unique geometries; or, through genetic engineering techniques, we could modify the DNA sequence so that it performs the same self-assembly with completely different materials (e.g. silicon, platinum, titanium, polymers, etc.).

By combining the capabilities of scaffolding with the production of unique genomic sequences, one can further increase the number of possible nanomaterials or nanostructures, although I’m confident that most if not all scaffolding needs could eventually be accomplished by the DNA sequence alone (much like the production of bone, exoskeleton, and other types of structural tissue in animals).  The same principles can be applied by looking at how silk is produced by spiders, how cochlear hair cells are produced in mammals, etc.  Many of these materials are stronger, lighter, and more defect-free than some of the best products humans have ever engineered.  By mimicking and modifying these DNA-induced self-assembly techniques, we can produce entirely new materials with unprecedented properties.

If we realize that even the largest plants and animals use these same nano-scale assembly processes to build themselves, it isn’t hard to imagine using these genetic engineering techniques to effectively grow complete macro-scale consumer products.  This may sound incredibly unrealistic with our current capabilities, but imagine one day being able to grow finished products such as clothing, hardware, tools, or even a house.  There are already people working on these capabilities to some degree (for example using 3D printed scaffolding or other scaffolding means and having plant or animal tissue grow around it to form an environmentally integrated bio-structure).  If this is indeed realizable, then perhaps we could find a genetic sequence to produce almost anything we want, even a functional computer or other device.  If nature can use DNA and natural selection to produce macro-scale organisms with brains capable of pattern recognition, consciousness, and computation (and eventually the learned capability of genetic engineering in the case of the human brain), then it seems entirely reasonable that we could eventually engineer DNA sequences to produce things with at least that much complexity, if not far higher complexity, and using a much larger selection of materials.

Other advantages of such an approach include the enormous energy savings gained by adopting the naturally selected, economically efficient process of self-assembly (including fewer conversions between forms of energy, and thus less loss) and a reduction in product-specific manufacturing infrastructure. That is, whereas we’ve typically made industrial-scale machines individually tailored to produce specific components which are later assembled into a final product, by using DNA (and the proteins it codes for) to do the work for us, we will no longer require nearly as much manufacturing capital, for the genetic engineering capital needed to produce any genetic sequence is far more versatile.

Transcending the Human Species

Perhaps the most important application of genetic engineering will be the modification of our own species.  Many of the world’s problems are caused by sudden environmental changes (many of them anthropogenic), and if we can biologically change ourselves and/or other species in order to adapt to these unexpected and sudden changes (or to help prevent them altogether), then the severity of those problems can be reduced or eliminated.  In a sense, we would be selecting our own as well as other species by providing the proper genes to begin with, rather than relying on extremely slow genomic differentiation mechanisms and the greater suffering and loss of life that natural selection normally entails.

Genetic Enhancement of Existing Features

With power over the genome, we may one day be able to genetically increase our life expectancy: for example, by modifying the DNA polymerase-γ enzyme in our mitochondria such that it makes fewer errors (i.e. mutations) during DNA replication, by genetically altering the telomeres in our nuclear DNA such that they maintain their length and can handle more mitotic divisions, or by finding other ways to preserve nuclear DNA. If we also determine which genes lead to certain diseases (as well as any genes that help to prevent them), genetic engineering may be the key to extending the length of our lives, perhaps indefinitely.  It may also be the key to improving the quality of that extended life by replacing the techniques we currently use for health and wellness management (including pharmaceuticals) with perhaps the most efficacious form of preventative medicine imaginable.

If we can optimize our brain’s ability to perform neuronal regeneration, reconnection, rewiring, and/or re-weighting based on the genetic instructions that at least partially mediate these processes, this optimization should drastically improve our ability to learn, both by improving the synaptic encoding and consolidation processes involved in memory and by improving the combinatorial operations leading to higher conceptual complexity.  Along the same lines, by increasing the number of pattern recognition modules that develop in the neocortex, or by optimizing their configuration (perhaps by increasing the number of hierarchies), our general intelligence would increase as well, an excellent complement to an optimized memory.  It seems reasonable to assume that these types of cognitive changes will have dramatic effects on how we think and thus will likely affect our philosophical beliefs as well.  Religious beliefs are also likely to change, as the psychological comforts provided by certain beliefs may no longer be as effective (if those comforts continue to exist at all), especially as our species continues to phase out non-naturalistic explanations and beliefs as a result of seeing the world from a more objective perspective.

If we are able to manipulate our genetic code in order to improve the mechanisms that underlie learning, then we should also be able to alter our innate abilities through genetic engineering. For example, what if infants could walk immediately after birth (much like a newborn calf)? What if infants had adequate motor skills to produce (at least some) spoken language much more quickly? Infants normally have language acquisition mechanisms that allow them to eventually learn language comprehension and production, but this typically takes a lot of practice and requires their motor skills to catch up before they can utter a single word that they do in fact understand. Circumventing the learning requirement and the motor-skill developmental lag (at least to some degree) would be a phenomenal evolutionary advancement, and this type of innate enhancement could apply to a large number of different physical skills and abilities.

Since DNA ultimately controls the types of sensory receptors we have, we should eventually be able to optimize these as well.  For example, photoreceptors could be modified such that we would be able to see new frequencies of electromagnetic radiation (perhaps a more optimized range of frequencies, if not a larger range altogether).  Mechanoreceptors of all types could be modified, for example, to hear a different if not larger range of sound frequencies or to increase tactile sensitivity (i.e. touch).  Olfactory or gustatory receptors could also be modified to allow us to smell and taste previously undetectable chemicals.  Basically, all of our sensations could be genetically modified and, when combined with the aforementioned genetic modifications to the brain itself, this would give our subjective experience greater and more optimized dimensions of perception.

Genetic Enhancement of Novel Features

So far I’ve been discussing how we may be able to use genetic engineering to enhance features we already possess, but there’s no reason we can’t consider using the same techniques to add entirely new features to the human repertoire. For example, we could combine certain genes from other animals such that we can re-grow damaged limbs or organs, have gills to breathe underwater, have wings in order to fly, etc.  For that matter, we may even be able to combine certain genes from plants such that we can produce (at least some of) our own chemical energy from the sun, that is, create at least partially photosynthetic human beings.  It is certainly science fiction at the moment, but I wouldn’t discount the possibility of accomplishing this one day after considering all of the other hybrid and transgenic species we’ve created already, and after considering the possible precedent mentioned in the endosymbiotic theory (where an ancient organism may have “absorbed” another to produce energy for it, e.g. mitochondria and chloroplasts in eukaryotic cells).

Above and beyond these possibilities, we could also potentially create advanced cybernetic organisms.  What if we were able to integrate silicon-based electronic devices (or something more biologically compatible if needed) into our bodies such that the body grows or repairs some of these technologies using biological processes?  Perhaps, if the body is given the proper diet (i.e. whatever materials are needed for the new technological “organ”) and the proper genetic code for assimilating those materials, it could create entirely new “organs” with advanced technological features (e.g. wireless communication or wireless access to an internet database activated by particular thoughts or another physiological command cue), and we may eventually be able to get rid of external interface hardware and peripherals altogether.  Electronic devices will likely first become integrated into our bodies through surgical implantation in order to work with our body’s current hardware (including the brain), but having the body actually grow and/or repair these devices via DNA instruction would be the next logical step of innovation, if it eventually proves feasible.

Malleable Human Nature

When people discuss complex issues such as social engineering, sustainability, crime reduction, etc., it is often said that there is a fundamental barrier between our current societal state and where we want or need to be, and this barrier is none other than human nature itself.  Many people in power have tried to change human behavior by brute force while operating under the false assumption that human beings are a kind of blank slate that can simply learn or be conditioned to behave in any way without limits. This denial of human nature (whether implicit or explicit) has led to a lot of needless suffering and has also led to the de-synchronization of biological and cultural evolution.

Humans often think that we can adapt to any cultural change, but we often lose sight of the detrimental power that technology and other cultural inventions can have over our physiological and psychological well-being. In a nutshell, the speed of cultural evolution can often make us feel like fish out of water, perhaps better suited to an environment closer to that of our early human ancestors.  Whatever the case, we must embrace human nature and realize that our efforts to improve society (or ourselves) will only have long-term efficacy if we work with human nature rather than against it.  So what can we do if our biological evolution is out of sync with our cultural evolution?  And what can we do if we have no choice but to accept human nature, that is, our (often selfish) biologically driven motivations, tendencies, etc.?  Once again, genetic engineering may provide a solution to many of these previously insoluble problems.  To put it simply, if we can change our genome as desired, then we may be able not only to synchronize our biological and cultural evolution, but also to change human nature itself in the process.  This change could not only make us feel better adjusted to the modern cultural environment we’re living in, but could also incline us to instinctively behave in ways that are more beneficial to each other and to the world as a whole.

It’s often said that we have selfish genes in some sense; that is, many if not all of our selfish behaviors (as well as instinctual behaviors in general) reflect the strategy that genes implement in their vehicles (i.e. our bodies) in order for those genes to maintain themselves and reproduce.  That genes possess this kind of strategy does not require us to assume that they are conscious in any way or have actual goals per se, but rather that natural selection simply selects genes that code for mechanisms which best maintain and spread those very genes.  Natural selection tends toward effective self-replicators, and that’s why “selfish” genes (in large part) cause many of our behaviors.  Improving reproductive fitness has been the primary result of this strategy, and many of the behaviors and motivations that were most advantageous for accomplishing it are no longer compatible with modern culture, including the long-term goals and greater good that humans often strive for.

Humans no longer live exclusively under the law of the jungle or “survival of the fittest”, because our humanistic drives and their cultural reinforcements have expanded our horizons beyond simple self-preservation or a Machiavellian mentality.  Many humans have tried to propagate principles such as honesty, democracy, egalitarianism, immaterialism, sustainability, and altruism around the world, but these efforts are often hijacked by our often short-sighted sexual and survival-based instinctual motivations to gain mates, power, property, a higher social status, etc.  Changing particular genes should also allow us to change these (now) disadvantageous aspects of human nature, and as a result this would completely change how we look at every problem we face. No longer would we have to say “that solution won’t work because it goes against human nature”, or “the unfortunate events in human history tend to recur in one way or another because humans will always…”; rather, we could ask ourselves how we want or need to be and actually make it so by changing our human nature. Indeed, if genetic engineering is used to accomplish this, history would no longer have to repeat itself in the ways that we abhor. It goes without saying that a lot of our behavior can be changed for the better by an appropriate form of environmental conditioning, but for those behaviors that can’t be changed through conditioning, genetic engineering may be the key to success.

To Be or Not To Be?

It seems that we have been given a unique opportunity to combine our ever-increasing wealth of experiential data and knowledge with genetic engineering techniques to engineer a social organism that is by far the best adapted to its environment.  Additionally, we may one day find ourselves living in a true global utopia, if the barriers of human nature and the de-synchronization of biological and cultural evolution are overcome, and genetic engineering may be the only way of achieving such a goal.  One extremely important issue that I haven’t mentioned until now is the set of ethical concerns regarding the continued use and development of genetic engineering technology.  There are obviously concerns over whether or not we should even be experimenting with this technology.  There are many reasonable arguments both for and against using it, but I think that as a species, we have been driven to manipulate our environment in any way that we are capable of, and this curiosity is a part of human nature itself.  Without genetic engineering, we can’t change any of the negative aspects of human nature; we can only let natural selection run its course to modify our species slowly over time (for better or for worse).

If we do accept this technology, there are other concerns, such as the fact that some corporations and interested parties want to use genetic engineering primarily if not exclusively for profit (often at the expense of actual universal benefits for our species), implementing questionable practices like patenting plant and animal food sources in a potentially monopolized future agricultural market.  Perhaps an even graver concern is the potential to patent genes that become a part of the human genome, and the (at least short-term) inequality that would ensue from the wealthier members of society being the primary recipients of genetic human enhancement.  Some people may also use genetic engineering to create new bio-warfare weaponry and find other violent or malicious applications.  Some of these practices could threaten certain democratic or other moral principles, so we need to be extremely cautious with how we as a society choose to implement and regulate this technology.  There are also numerous issues regarding how these technologies will affect the environment and various ecosystems, whether caused by people with admirable intentions or not.  So it is definitely prudent that we proceed with caution and get the public heavily involved with this cultural change so that our society can move forward as responsibly as possible.

As for the feasibility of the theoretical applications mentioned earlier, it will likely be computer simulation and computing power that catalyze the knowledge base and capability needed to realize many of these goals (by decoding the incredibly complex interactions between genes and the environment), and thus computing power will likely be the primary limiting factor.  Genetic engineering may also involve expanding the DNA components we have to work with, for example, by expanding our base-four system (i.e. four nucleotides to choose from) to a higher-base system through the use of other naturally occurring nucleotides or even UBPs (i.e. “Unnatural Base Pairs”).  If this can be done while maintaining low rates of base-pair mismatching and adequate genetic information processing rates, we may be able to utilize previously inaccessible capabilities by increasing the genetic information density of DNA.  If we can overcome some of the chemical natural selection barriers that were present during abiogenesis and the evolution of DNA (and RNA), and/or if we can change the very structure of DNA itself (as well as the proteins and enzymes that are required for its implementation), we may be able to produce an entirely new type of genetic information storage and processing system, potentially circumventing many of the limitations of DNA in general and thus creating a vast array of new species (genetically coded by a different nucleic acid or other substance).  This type of “nucleic acid engineering”, if viable, may complement the genetic engineering we’re currently performing on DNA and help us to further accomplish some of the aforementioned goals and applications.
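The information-density gain from a larger genetic alphabet can be sketched with a bit of arithmetic.  This is only a back-of-the-envelope illustration (the six-letter alphabet is a hypothetical example, and real information density would also depend on error rates and codon usage):

```python
import math

def bits_per_base(alphabet_size: int) -> float:
    """Information capacity of one position in a genetic alphabet."""
    return math.log2(alphabet_size)

# Standard DNA: four nucleotides (A, T, G, C) -> exactly 2 bits per base.
standard = bits_per_base(4)

# Hypothetical six-letter alphabet (the four natural bases plus one
# unnatural base pair contributing two new letters) -> ~2.585 bits per base.
expanded = bits_per_base(6)

print(f"4-letter alphabet: {standard:.3f} bits/base")
print(f"6-letter alphabet: {expanded:.3f} bits/base")
print(f"Relative density gain: {expanded / standard - 1:.1%}")
```

So even one additional (hypothetical) base pair would raise the per-nucleotide information capacity by roughly 29%, which is the sense in which a higher-base system could pack more genetic information into the same length of polymer.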

Lastly, while some of the theoretical applications of genetic engineering that I’ve presented in this post may not sound plausible at all to some, I think it’s extremely important and entirely reasonable (based on historical precedent) to avoid underestimating the capabilities of our species.  We may one day be able to transform ourselves into whatever species we desire, effectively taking us from trans-humanism to some perpetual form of conscious evolution and speciation.  What I find most beautiful here is that the evolution of consciousness has actually led to a form of conscious evolution. Hopefully our species will guide this evolution in ways that are most advantageous to our species, and to the entire diversity of life on this planet.

Neuroscience Arms Race & Our Changing World View

Since at least the time of Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, psychology, neuroscience, and several inter-related fields have emerged.  There have been numerous advancements made within the field of neuroscience during the last decade or so, and in that same time frame there has also been an increased interest in the social, religious, philosophical, and moral implications that have precipitated from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts, as we’ve learned quite a bit more about how the brain works and how it is structured, and ongoing neuropathology research has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I’d like to focus on in this post — the paradigm shift relating to how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how to achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever-growing body of evidence which suggests that these mind states are in fact caused by these brain states.  It should come as no surprise then that all of our thoughts and behaviors are also caused by these physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in terms of our world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes have been shown to play a very large role in directing the initial brain wiring schema of an individual during embryological development and through gestation.  During this time, the brain is developing very basic instinctual behavioral “programs” which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are also connected to particular groups of these neural networks.  These complex neural networks, which have presumably been naturally selected for in order to benefit the survival of the individual, continue being constructed after gestation and throughout the entire ontogenic evolution of the individual (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment within the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, constrains gene expression and works alongside the genetic instructions to help direct neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing inputs which may cause the constituent neurons within these neural networks to change their signal strength, change their action potential threshold, and/or modify their connections with particular neurons (among other possible changes).
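The idea that repeated sensory input can strengthen a connection between neurons can be caricatured with a simple Hebbian-style update rule.  This is a toy sketch of the principle, not a biologically accurate model; the weight values and learning rate are invented for illustration:

```python
# Toy illustration of activity-dependent synaptic change: repeated
# correlated activity between a "pre" and "post" neuron strengthens
# the connection weight, crudely mimicking how environmental input
# might alter "signal strength" within a neural network.

def hebbian_update(weight: float, pre: float, post: float,
                   learning_rate: float = 0.1) -> float:
    """Strengthen the connection in proportion to correlated activity."""
    return weight + learning_rate * pre * post

w = 0.2  # initial connection strength (arbitrary units)
for _ in range(10):  # ten co-activations of the pre/post neurons
    w = hebbian_update(w, pre=1.0, post=1.0)

print(f"weight after repeated co-activation: {w:.2f}")  # 0.2 + 10 * 0.1 = 1.2
```

Real synaptic plasticity involves far more (timing dependence, normalization, inhibition, and structural change), but the sketch captures the basic point: the network’s “wiring” is a running record of its input history.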

Causal Constraints

This combination of genetic instructions and environmental interaction and input produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to these developments over the last few decades, the brain was largely seen as a sort of “black box”, with its internal milieu and functional properties remaining mysterious and inaccessible.  From that time and prior to it, for millennia, many people have assumed that our thoughts and behaviors were self-caused or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, or “soul”, etc.) caused their own thoughts and behavior as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown this prior assumption to be false.  The brain is ultimately controlled by the laws of physics, since no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free from these physically guiding constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (although not necessarily completely predictable) physical laws and constraints as well as some possible random causal factors.

As a result of these physical causal constraints, the conventional perspective of an individual having classical free will has been shattered.  Our traditional views of human attributes including morality, choices, ideology, and even individualism are continuing to change markedly.  Not surprisingly, there are many people uncomfortable with these scientific discoveries, including members of various religious and ideological groups that are largely based upon and thus depend on the very presupposition of precepts such as classical free will and moral responsibility.  The evidence accumulating from the neurosciences is in fact showing that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense that they could have chosen to think or behave any differently than they did, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain is operating via these neural network dynamics, then conscious thought appears to be produced by a particular subset of these neural network configurations and processes.  So as we continue to learn how to more directly control or alter these neural network arrangements and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as we’ve done in the past), we should be able to control thought generation from a more “bottom-up” approach.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of and control over such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) which results in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, and we have been achieving this “learning” process through the input of environmental stimuli via our sensory organs and systems.  In the future, it may be possible to once again, as with the aforementioned bottom-up thought generation, physically modify our neural networks to directly implant memories and causal pattern recognition information in order to “learn” without any actual sensory input, and/or we may be able to eventually “upload” information in a way that bypasses the typical sensory pathways thus potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts as well as learned information, then there is no reason that we won’t be able to modify our personalities, our decision-making abilities and “algorithms”, etc.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we come to learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the necessary neural modifications required to achieve these goals.  The bottom line here is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may be able to achieve previously unattainable abilities and perhaps in a relatively miniscule amount of time.  It goes without saying that these methods will also significantly affect our evolutionary course as a species, allowing us to enter new landscapes through our substantially enhanced ability to adapt.  This may be realized through these methods by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential for drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to using the less direct and more external means (e.g. social engineering methods) that we’ve been using thus far, may lead to much less expensive solutions as well as solutions that may be realized much, much more quickly.

As with any scientific discovery or subsequent technology produced from it, neuroscience has the power to bestow on us both benefits and disadvantages.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release enormous amounts of energy from nuclear fusion and fission reactions.  It wasn’t long after these breakthrough discoveries were made that they were used to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience has the power to help society improve by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical reasons.

For example, despite the large number of free market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that take advantage of many humans’ irrational tendencies (whether it is “buy one get one free” offers, “sales” on items that have artificially raised prices, etc.).  Politicians and other leaders have been using similar tactics by taking advantage of voters’ emotional vulnerabilities on certain controversial issues that serve as nothing more than an ideological distraction in order to reduce or eliminate any awareness or rational analysis of the more pressing issues.

There are already research and development efforts being made by these various entities in order to take advantage of these findings within neuroscience such that they can have greater influence over people’s decisions (whether it relates to consumers’ purchases, votes, etc.).  To give an example of some of these R&D efforts, it is believed that MRI (Magnetic Resonance Imaging) or fMRI (functional Magnetic Resonance Imaging) brain scans may eventually be able to show useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how they are advertised. We have already seen how purchases made on the internet, if tracked, begin to personalize the advertisements that we see during our online experience (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing related goods and services).  If possible, the information found using these types of “brain probing” methods could be applied to other areas, including that of political decision making.

While these methods derived from the neurosciences may be beneficial in some cases, for instance by allowing the consumer more automated access to products that they may need or want (which will likely be a selling point used by these corporations for obtaining consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to continue to reduce (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to (and is already starting to) completely change the way we look at the world.  Further advancements in these fields will likely redefine many of our goals as well as how to achieve them.  It may also allow us to solve many problems that we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is:  Who will win the neuroscience arms race?  Will it be those humanitarians, scientists, and medical professionals that are striving to accumulate knowledge in order to help solve the problems of individuals and societies as well as to increase their quality of life?  Or will it be the entities that are trying to accumulate similar knowledge in order to take advantage of human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?

The Co-Evolution of Language and Complex Thought

Language appears to be the most profound feature that has arisen during the evolution of the human mind.  This feature of humanity has led to incredible thought complexity, while also providing the foundation for the simplest thoughts imaginable.  Many of us may wonder how exactly language is related to thought, and how the evolution of language has affected the evolution of thought complexity.  In this post, I plan to discuss what I believe to be some evolutionary aspects of psycholinguistics.

Mental Languages

It is clear that humans think in some form of language, whether it is accomplished as an interior monologue using our native spoken language and/or some form of what many call “mentalese” (i.e. a means of thinking about concepts and propositions without the use of words).  Our thoughts are likely accomplished by a combination of these two “types” of language.  That young infants and aphasics (for example) are able to think at all clearly implies that not all thoughts are accomplished through a spoken language.  It is also likely that the aforementioned “mentalese” is some innate form of mental symbolic representation that is primary in some sense, supported by the fact that it appears to be necessary in order for spoken language to develop or exist at all.  The fact that words and sentences (at least non-iconic forms) do not have any intrinsic semantic content or value illustrates that this “mentalese” is in fact a prerequisite for understanding or assigning the meaning of words and sentences.  Complex words can always be defined by a number of less complex words, but at some point a limit is reached whereby the simplest units of definition are composed of seemingly irreducible concepts and propositions.  Furthermore, those irreducible concepts and propositions do not require any words to have meaning (for if they did, we would have an infinite regress of words being defined by words being defined by words, ad infinitum).  The only time we theoretically require symbolic representation of semantic content using words is when the concepts are to be easily (if at all) communicated to others.

While some form of mentalese is likely the foundation or even ultimate form of thought, it is my contention that communicable language has likely had a considerable impact on the evolution of the human mind — not only in the most trivial or obvious way whereby communicated words affect our thoughts (e.g. through inspiration, imagination, and/or reflection of new knowledge or perspectives), but also by serving as a secondary multidimensional medium for symbolic representation. That is, spoken language (as well as its subsequent allotrope, written language) has provided a form of combinatorial leverage somewhat independent of (although working in harmony with) the mental or cognitive faculties that innately exist for thought.

To be sure, spoken language has likely co-evolved with our mentalese, as they seem to affect one another in various ways.  As new types or combinations of propositions and concepts are discovered, the spoken language has to adapt in order to make those new propositions and concepts communicable to others.  What interests me more however, is how communicable language (spoken or written) has affected the evolution of thought complexity itself.

Communicable Language and Thought Complexity

Words and sentences, which primarily developed in order to communicate instances of our mental language to others, have also undoubtedly provided a secondary multidimensional medium for symbolic representation.  For example, when we use words, we are able to compress a large amount of information (i.e. many concepts and propositions) into small tokens with varying densities.  This type of compression has provided a way to maximize our use of short-term and long-term memory in order for more complex thoughts and mental capabilities to develop (whether that increase in complexity is defined as longer strings of concepts or propositions, or otherwise).
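The “compression” point above is essentially the psychological notion of chunking: one word token can stand in for many underlying propositions, so working memory has to track far fewer items.  Here is a toy sketch of that idea; the token and its expanded propositions are invented purely for illustration:

```python
# Toy illustration of lexical "chunking": a single word token compresses
# several underlying propositions into one working-memory item.
# The lexicon below is a made-up example, not a claim about semantics.

lexicon = {
    "democracy": [
        "citizens hold political power",
        "leaders are chosen by vote",
        "votes are counted equally",
        "power transfers peacefully",
    ],
}

def items_to_hold(tokens, expand=False):
    """Count working-memory items needed with or without chunking."""
    if expand:
        return sum(len(lexicon[t]) for t in tokens)  # raw propositions
    return len(tokens)                               # one chunk per word

thought = ["democracy"]
print(items_to_hold(thought))               # 1 chunk
print(items_to_hold(thought, expand=True))  # 4 underlying propositions
```

The ratio between the two counts is one crude way to picture the “varying densities” of word tokens: the denser the token, the more conceptual content rides along in a single memory slot.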

When we think of a sentence to ourselves, we end up utilizing a phonological/auditory loop, whereby we can better handle and organize information at any single moment by internally “hearing” it.  We can also visualize the words in multiple ways, including what the mouth movements of people speaking those words would look like (and we can use our tactile and/or motor memory to mentally simulate how our mouth feels when these words are spoken), and if a written form of the communicable language exists, we can actually visualize the words as they would appear in their written form (as well as use the aforementioned tactile/motor memory to mentally simulate how it feels to write those words).  On top of this, we can often visualize each glyph in multiple formats (i.e. different sizes, shapes, fonts, etc.).  This has provided a multidimensional memory tool, because it serves to represent the semantic information in a way that our brain can perceive and associate with multiple senses (in this case through our auditory, visual, and somatosensory cortices).  In some cases, when a particular written language uses iconic glyphs (as opposed to arbitrary symbols), the user can also visualize the concept represented by the symbol in an idealized fashion.  Associating information with multiple cognitive faculties or perceptual systems means that more neural network patterns of the brain will be involved with the attainment, retention, and recall of that information.  Those of us who have successfully used various mnemonic devices and other memory-enhancing “tricks” can clearly see the efficacy and importance of communicable language and its relationship to how we think about and combine various concepts and propositions.

By enhancing our memory, communicable language has served as an epistemic catalyst, allowing us to build upon our previous knowledge in ways that would have likely been impossible without said language.  Once written language was developed, we were no longer limited by our own short-term and long-term memory, for we had a way of recording as much information as possible, and this also allowed us to better formulate new ideas and consider thoughts that would have otherwise been too complex to mentally visualize or keep track of.  Mathematics, for example, exponentially increased in complexity once we were able to represent the relationships between variables in a written form.  While previously we would have been limited by our short-term and long-term memory, written language allowed us to eventually formulate incredibly long (sometimes painfully long) mathematical expressions.  Once written language was converted further into an electronic language (i.e. through the use of computers), our “writing” mediums, information acquisition mechanisms, and pattern recognition capabilities were further enhanced exponentially, thus providing yet another platform for increased epistemic or cognitive “breathing space”.  If our brains underwent particular mutations after communicable language evolved, this may have provided a way to ratchet our way into entirely new cognitive niches or capabilities.  That is, by providing us with new strings of concepts and propositions, communicable language may have created an unprecedented natural selection pressure/opportunity (if an advantageous brain mutation accompanied this new cognitive input) for our brain to obtain an entirely new and possibly more complex fundamental concept or way of thinking.

Summary

It seems evident to me that communicable language, once it had developed, served as an extremely important epistemic catalyst and multidimensional cognitive tool that likely had a great influence on the evolution of the human brain.  While some form of mentalese was likely a prerequisite and precursor to any subsequent forms of communicable language, the cognitive “breathing space” that communicable language provided seems to have had a marked impact on the evolution of human thought complexity, and on the amount of knowledge that we’ve been able to obtain from the world around us.  I have no doubt that the current external linguistic tools we use (i.e. written and electronic forms of handling information) will continue to significantly alter the ongoing evolution of the human mind.  Our biological forms of memory will likely adapt in order to be economically optimized and to better work with those external media.  Likewise, our increasing access to new types of information may have provided (and may continue to provide) a natural selective pressure or opportunity for our brains to evolve in order to think about entirely new and potentially more complex concepts, thereby periodically increasing the lexicon or conceptual database of our “mentalese” (assuming that those new concepts provide a survival/reproductive advantage).