The Open Mind

Cogito Ergo Sum


Predictive Processing: Unlocking the Mysteries of Mind & Body (Part III)


In the first post in this series I re-introduced the basic concepts underlying the Predictive Processing (PP) theory of perception, in particular how the brain uses a form of active Bayesian inference to form predictive models that effectively account for perception as well as action.  I also mentioned how the folk psychological concepts of beliefs, desires, and emotions can fit within the PP framework.  In the second post in this series I expanded the domain of PP a bit to show how it relates to language and ontology: how we perceive the world to be structured as discrete objects, objects with “fuzzy boundaries”, and other more complex concepts, all stemming from particular predicted causal relations.

It’s important to note that the PP framework differs from classical computational frameworks for brain function in a very big way: the processing and learning steps are no longer considered separate stages, but rather occur at the same time (learning is effectively happening all the time).  Furthermore, classical computational frameworks treat the brain more or less like a computer, which I think is very misguided, whereas the PP framework offers a much better alternative that is far more creative, economical, efficient, parsimonious, and pragmatic.  The PP framework holds the brain to be a probabilistic system rather than the deterministic computational system of classical computationalist views.  It also puts a far greater emphasis on the brain’s use of feedback loops, whereas traditional computational approaches tend to treat the brain as primarily a feed-forward information processing system.

Rather than passively waiting for new incoming data from the outside world like a generic information processing system, the brain stays one step ahead of the game by forming predictions about the incoming sensory data through a very active and creative learning process, built up from past experiences via a form of active Bayesian inference.  And rather than relying on some kind of serial processing scheme, this involves primarily parallel neuronal processing, resulting in predictions that have a deeply embedded hierarchical structure and relationship with one another.  In this post, I’d like to explore how traditional and scientific notions of knowledge can fit within a PP framework as well.

Knowledge as a Subset of Predicted Causal Relations

It has already been mentioned that, within a PP lens, our ontology can be seen as basically composed of different sets of highly differentiated, probabilistic causal relations.  This allows us to discriminate one object or concept from another, as they are each “composed of” or understood by the brain as causes that can be “explained away” by different sets of predictions.  Beliefs are just another word for predictions, with higher-level beliefs (including those that pertain to very specific contexts) consisting of a conjunction of a number of different lower level predictions.  When we come to “know” something then, it really is just a matter of inferring some causal relation or set of causal relations about a particular aspect of our experience.

Knowledge is often described by philosophers using some form of the Platonic definition: Knowledge = Justified True Belief.  I’ve talked about this to some degree in previous posts (here, here), and I came up with what I think is a much better working definition for knowledge:

Knowledge consists of recognized patterns of causality that are stored into memory for later recall and use, that positively and consistently correlate with reality, and for which that correlation has been validated by empirical evidence (i.e. successful predictions made and/or goals accomplished through the use of said recalled patterns).

We should see right off the bat how this particular view of knowledge fits right into the PP framework, where knowledge consists of predictions of causal relations, and the brain is in the very business of making predictions of causal relations.  Notice my little caveat however, where these causal relations should positively and consistently correlate with “reality” and be supported by empirical evidence.  This is because I wanted to distinguish all predicted causal relations (including those that stem from hallucinations or unreliable inferences) from the subset of predicted causal relations that we have a relatively high degree of certainty in.

In other words, if we are in the game of trying to establish a more reliable epistemology, we want to distinguish between all beliefs and the subset of beliefs that have a very high likelihood of being true.  This distinction however is only useful for organizing our thoughts and claims based on our level of confidence in their truth status.  And for all beliefs, regardless of the level of certainty, the “empirical evidence” requirement in my definition given above is still going to be met in some sense because the incoming sensory data is the empirical evidence (the causes) that support the brain’s predictions (of those causes).

Objective or Scientific Knowledge

Within a domain like science however, where we want to increase the reliability or objectivity of our predictions pertaining to any number of inferred causal relations in the world, we need to extend this same internalized strategy of modifying the confidence levels of our predictions: comparing them to the predictions of others (third-party verification) and testing them with externally accessible instrumentation according to some set of conventions or standards (including the scientific method).

Knowledge then, is largely dependent on or related to our confidence levels in our various beliefs.  And our confidence level in the truth status of a belief is just another way of saying how probable such a belief is to explain away a certain set of causal relations, which is equivalent to our brain’s Bayesian prior probabilities (our “priors”) that characterize any particular set of predictions.  This means that I would need a lot of strong sensory evidence to overcome a belief with high Bayesian priors, not least because it is likely to be associated with a large number of other beliefs (see the sketch below).  This association between different predictions seems to me to be a crucial component not only for knowledge generally (or for ontology, as mentioned in the last post), but also for our reasoning processes.
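To make that point about priors a bit more concrete, here is a minimal sketch in Python.  The particular numbers are invented for illustration and aren’t meant to model any real belief; the only point is that a strongly held belief (a high prior) takes much more contrary evidence to overturn than a weakly held one:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update of our confidence in a belief, given a piece of evidence."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1.0 - prior))

# The same stream of contrary evidence (each observation four times more likely
# if the belief is false than if it is true) hitting a strong and a weak belief.
for prior in (0.99, 0.60):
    confidence, observations = prior, 0
    while confidence > 0.5:
        confidence = bayes_update(confidence, likelihood_if_true=0.2, likelihood_if_false=0.8)
        observations += 1
    print(f"prior {prior}: overturned after {observations} contrary observations")
# prior 0.99: overturned after 4 contrary observations
# prior 0.6: overturned after 1 contrary observations
```

In the next post of this series, I’m going to expand a bit on reasoning and how I view it through a PP framework.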


Predictive Processing: Unlocking the Mysteries of Mind & Body (Part II)


In the first post of this series I introduced some of the basic concepts involved in the Predictive Processing (PP) theory of perception and action.  I briefly tied together the notions of belief, desire, emotion, and action from within a PP lens.  In this post, I’d like to discuss the relationship between language and ontology through the same framework.  I’ll also start talking about PP in an evolutionary context as well, though I’ll have more to say about that in future posts in this series.

Active (Bayesian) Inference as a Source for Ontology

One of the main themes within PP is the idea of active (Bayesian) inference whereby we physically interact with the world, sampling it and modifying it in order to reduce our level of uncertainty in our predictions about the causes of the brain’s inputs.  Within an evolutionary context, we can see why this form of embodied cognition is an ideal schema for an information processing system to employ in order to maximize chances of survival in our highly interactive world.

In order to reduce the amount of sensory information that has to be processed at any given time, it is far more economical for the brain to only worry about the prediction error that flows upward through the neural system, rather than processing all incoming sensory data from scratch.  If the brain is employing a set of predictions that can “explain away” most of the incoming sensory data, then the downward flow of predictions can encounter an upward flow of sensory information (effectively cancelling each other out) and the only thing that remains to propagate upward through the system and do any “cognitive work” (i.e. the only thing that needs to be processed) on the predictive models flowing downward is the remaining prediction error (prediction error = predictions of sensory input minus the actual sensory input).  This is similar to data compression strategies for video files (for example) that only worry about the information that changes over time (pixels that change brightness/color) and then simply compress the information that remains constant (pixels that do not change from frame-to-frame).
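As a rough illustration of this point (and not an implementation of any actual neural scheme), here’s a tiny Python sketch, with invented numbers, where a downward-flowing prediction meets the incoming signal and only the residual prediction error is left to propagate upward:

```python
import numpy as np

# Hypothetical sensory sample and the generative model's expectation of it.
actual_input = np.array([0.9, 0.5, 0.5, 0.1])
prediction   = np.array([0.9, 0.5, 0.4, 0.1])

# Prediction error = predictions of sensory input minus the actual sensory input.
prediction_error = prediction - actual_input

# As with video compression, components that were predicted correctly carry no
# new information; only the residual still needs to do any "cognitive work".
needs_processing = np.abs(prediction_error) > 1e-3
print(prediction_error)   # [ 0.   0.  -0.1  0. ]
print(needs_processing)   # [False False  True False]
```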

The ultimate goal for this strategy within an evolutionary context is to allow the organism to understand its environment in the most salient ways for the pragmatic purposes of accomplishing goals relating to survival.  But once humans began to develop culture and evolve culturally, the predictive strategy gained a new kind of evolutionary breathing space, being able to predict increasingly complex causal relations and developing technology along the way.  All of these inferred causal relations appear to me to be the very source of our ontology, as each hierarchically structured prediction and its ability to become associated with others provides an ideal platform for differentiating between any number of spatio-temporal conceptions and their categorical or logical organization.

An active Bayesian inference system is also ideal to explain our intellectual thirst, human curiosity, and interest in novel experiences (to some degree), because we learn more about the world (and ourselves) by interacting with it in new ways.  In doing so, we are provided with a constant means of fueling and altering our ontology.

Language & Ontology

Language is an important component as well, and it fits well within a PP framework: it serves to further link perception and action together, allowing us to make new kinds of predictions about the world that wouldn’t have been possible without it.  A tool like language also makes a lot of sense from an evolutionary perspective, since better predictions about the world result in a higher chance of survival.

When we use language by speaking or writing it, we are performing an action which is instantiated by the desire to do so (see the previous post about “desire” within a PP framework).  When we interpret language by listening to it or by reading it, we are performing a perceptual task which is again simply another set of predictions (in this case, pertaining to the specific causes leading to our sensory inputs).  If we were simply sending and receiving non-lingual nonsense, then the same basic predictive principles underlying perception and action would still apply, but something new emerges when we send and receive actual language (which contains information).  With language, we begin to associate certain sounds and visual information with some kind of meaning or meaningful information.  Once we can do this, we can effectively share our thoughts with one another, or at least many aspects of them.  This provides an enormous evolutionary advantage, as now we can communicate almost anything we want to one another, store it in external forms of memory (books, computers, etc.), and further analyze or manipulate the information for various purposes (accounting, inventory, science, mathematics, etc.).

By being able to predict certain causal outcomes through the use of language, we are effectively using the lower level predictions associated with perceiving and emitting language to satisfy higher level predictions related to more complex goals, including those that extend far into the future.  Since the information that is sent and received amounts to testing or modifying our predictions of the world, we are effectively using language to share and modulate one brain’s set of predictions with that of another brain.  One important aspect of this process is that this information is inherently probabilistic, which is why language often trips people up with ambiguities, nuances, multiple meanings behind words, and other attributes of language that often lead to misunderstanding.  Wittgenstein is one of the more prominent philosophers who caught onto this property of language and its consequences for philosophical problems and for how we see the world structured.  I think a lot of the problems Wittgenstein elaborated on with respect to language can be better accounted for by looking at language as dealing with probabilistic ontological/causal relations that serve some pragmatic purpose, with the meaning of any word or phrase best described by its use rather than by some clear-cut definition.

This probabilistic attribute of language, with the meanings of words having fuzzy boundaries, also tracks very well with the ontology that a brain currently has access to.  Our ontology, or what kinds of things we think exist in the world, is often categorized in various ways, with some of the more concrete entities given names such as: “animals”, “plants”, “rocks”, “cats”, “cups”, “cars”, “cities”, etc.  But if I morph a wooden chair (say, by chipping away at parts of it with a chisel), eventually it will no longer be recognizable as a chair, and it may begin to look more like a table than a chair, or like nothing other than an oddly shaped chunk of wood.  During this process, it may be difficult to point to the exact moment that it stopped being a chair and instead became a table or something else, and this would make sense if what we know to be a chair or table or what-have-you is nothing more than a probabilistic high-level prediction about certain causal relations.  If my brain perceives an object that produces too high of a prediction error based on the predictive model of what a “chair” is, then it will try another model (such as the predictive model pertaining to a “table”), potentially leading to models that are less and less specific until it is satisfied with recognizing the object as merely a “chunk of wood”.
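Here’s a toy sketch of that fallback process in Python.  The feature values, tolerances, and error metric are all invented for illustration; the only point is that a model is kept only if it “explains away” the input with acceptably low prediction error, and that vaguer models tolerate a looser fit:

```python
# Candidate predictive models, from most specific to least specific.
candidate_models = [
    ("chair",         {"flat_surface": 0.3, "backrest": 0.9, "legs": 4}),
    ("table",         {"flat_surface": 0.9, "backrest": 0.0, "legs": 4}),
    ("chunk of wood", {"flat_surface": 0.5, "backrest": 0.5, "legs": 0}),
]

# How much prediction error each model is allowed before it gets rejected
# (a less specific model is satisfied by a looser fit).
tolerance = {"chair": 1.0, "table": 1.0, "chunk of wood": 3.0}

def prediction_error(model_features, observed):
    return sum(abs(model_features[k] - observed[k]) for k in model_features)

observed = {"flat_surface": 0.6, "backrest": 0.2, "legs": 1}  # a half-chiselled chair

for name, features in candidate_models:
    error = prediction_error(features, observed)
    if error <= tolerance[name]:
        print(f"recognized as: {name} (error = {error:.2f})")
        break
# recognized as: chunk of wood (error = 1.40)
```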

From a PP lens, we can consider lower level predictions pertaining to more basic causes of sensory input (bright/dark regions, lines, colors, curves, edges, etc.) to form some basic ontological building blocks and when they are assembled into higher level predictions, the amount of integrated information increases.  This information integration process leads to condensed probabilities about increasingly complex causal relations, and this ends up reducing the dimensionality of the cause-effect space of the predicted phenomenon (where a set of separate cause-effect repertoires are combined into a smaller number of them).

You can see the advantage here by considering what the brain might do if it’s looking at a black cat sitting on a brown chair.  What if the brain were to look at this scene as merely a set of pixels on the retina that change over time, where there are no expectations for any subset of pixels to change in ways that differ from any other subset?  This wouldn’t be very useful in predicting how the visual scene will change over time.  What if instead, the brain differentiates one subset of pixels (those corresponding to what we call a cat) from all the rest of the pixels, and it does this in part by predicting proximity relations between neighboring pixels in the subset (so if some black pixels move from the right to the left visual field, then some number of neighboring black pixels are predicted to move with them)?

This latter method treats the subset of black-colored pixels as a separate object (as opposed to treating the entire visual scene as a single object), and doing this kind of differentiation in more and more complex ways leads to a well-defined object or concept, or to a large number of them.  Associating sounds like “meow” with this subset of black-colored pixels is just one example of yet another set of properties or predictions that further defines this perceived object as distinct from the rest of the perceptual scene.  Associating this object with a visual or auditory label such as “cat” finally links this ontological object with language.  As long as we agree on what is generally meant by the word “cat” (which we determine through its use), then we can share and modify the predictive models associated with such an object or concept, as we can do with any other successful instance of linguistic communication.
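To see the difference in miniature, here’s a small Python sketch (with an invented one-dimensional “scene”) comparing a model that treats the black pixels as one object moving together against a model with no objects at all, one that simply predicts no change:

```python
import numpy as np

# Two hypothetical 1-D "frames": a black blob (the cat) shifts one pixel to the
# left against an unchanging background (0 = black pixel, 1 = background).
frame_t0 = np.array([1, 1, 0, 0, 0, 1, 1, 1])
frame_t1 = np.array([1, 0, 0, 0, 1, 1, 1, 1])

# Object model: predict that the blob's neighboring black pixels move together.
predicted_t1 = np.roll(frame_t0, -1)
error_object_model = np.abs(frame_t1 - predicted_t1).sum()

# No-object model: no subset of pixels is expected to change differently from
# any other, so just predict the previous frame again.
error_static_model = np.abs(frame_t1 - frame_t0).sum()

print("object model error:", error_object_model)   # 0
print("static model error:", error_static_model)   # 2
```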

Language, Context, and Linguistic Relativism

However, it should be noted that as we get to more complex causal relations (more complex concepts/objects), we can no longer give these concepts a simple one-word label and expect to communicate information about them nearly as easily as we could for the concept of a “cat”.  Think about concepts like “love” or “patriotism” or “transcendence”: there are many different ways that we use those terms, they can mean all sorts of different things, and so much of our meaning behind those words is conveyed to others by the context in which they are used.  Context, in a PP framework, could be described as simply the (expected) conjunction of multiple predictive models (multiple sets of causal relations).  The conjunction of the predictive models pertaining to food, pizza, and a desirable taste implies one particular use of the word “love”, as in the phrase “I love pizza”.  That use is different from one involving the conjunction of predictive models pertaining to intimacy, sex, infatuation, and care, implied in a phrase like “I love my wife”.  In any case, conjunctions of predictive models can get complicated, and this carries over to our stretching our language to its very limits.

Since we are immersed in language and since it is integral to our day-to-day lives, we also end up being conditioned to think linguistically in a number of ways.  For example, we often think with an interior monologue (e.g. “I am hungry and I want pizza for lunch”) even when we don’t plan on communicating this information to anyone else, so it’s not as if we’re simply rehearsing what we need to say before we say it.  I tend to think, however, that this linguistic thinking (thinking in our native language) is more or less a result of the fact that the causal relations we think about have become so strongly associated with certain linguistic labels and propositions that we automatically think of the causal relations alongside the labels that we “hear” in our head.  This seems to be true even if the causal relations could, in principle at least, be thought of without any linguistic labels.  We’ve simply learned to associate them so strongly with one another that in most cases separating the two is just not possible.

On the flip side, this tendency of language to associate itself with our thoughts also puts certain barriers or restrictions on our thoughts.  If we are always preparing to share our thoughts through language, then we’re going to become somewhat entrained to think in ways that can be most easily expressed in a linguistic form.  So although language may simply be along for the ride with many non-linguistic aspects of thought, our tendency to use it may also structure our thinking and reasoning in significant ways.  This would account for why people raised in different cultures with different languages see the world in different ways based on the structure of their language.  While linguistic determinism (the strong version of the Sapir-Whorf hypothesis) seems to have been ruled out, there is still strong evidence to support linguistic relativism (the weak version of the Sapir-Whorf hypothesis), whereby one’s language affects one’s ontology and view of how the world is structured.

If language is used so heavily day-to-day, then this phenomenon makes sense through a PP lens: we end up putting a high weight on the predictions that link ontology with language because those predictions have proven useful to us most of the time.  Minimal prediction error means that our Bayesian evidence is further supported, and the higher the weight carried by these predictions, the more they will constrain our overall thinking, including how our ontology is structured.

Moving on…

I think that these are but a few of the interesting relationships between language and ontology and how a PP framework helps to put it all together nicely, and I just haven’t seen this kind of explanatory power and parsimony in any other kind of conceptual framework about how the brain functions.  This bodes well for the framework and it’s becoming less and less surprising to see it being further supported over time with studies in neuroscience, cognition, psychology, and also those pertaining to pathologies of the brain, perceptual illusions, etc.  In the next post in this series, I’m going to talk about knowledge and how it can be seen through the lens of PP.

Predictive Processing: Unlocking the Mysteries of Mind & Body (Part I)


I’ve been away from writing for a while because I’ve had some health problems relating to my neck.  A few weeks ago I had double-cervical-disc replacement surgery, and so I’ve been unable to write and respond to comments and so forth for a little while.  I’m now in the second week following my surgery and have finally been able to get back to writing, which feels very good given that I’m unable to lift or resume martial arts for the time being.  Anyway, I want to resume my course of writing beginning with a post-series that pertains to Predictive Processing (PP) and the Bayesian brain.  I wrote one post on this topic a little over a year ago (which can be found here), as I’ve been extremely interested in this topic for the last several years now.

The Predictive Processing (PP) theory of perception shows a lot of promise in terms of finding an overarching schema that can account for everything that the brain seems to do.  While its technical application is to account for the acts of perception and active inference in particular, I think it can be used more broadly to account for other descriptions of our mental life such as beliefs (and knowledge), desires, emotions, language, reasoning, cognitive biases, and even consciousness itself.  I want to explore some of these relationships as viewed through a PP lens more because I think it is the key framework needed to reconcile all of these aspects into one coherent picture, especially within the evolutionary context of an organism driven to survive.  Let’s begin this post-series by first looking at how PP relates to perception (including imagination), beliefs, emotions, and desires (and by extension, the actions resulting from particular desires).

Within a PP framework, beliefs can be best described as simply the set of particular predictions that the brain employs, which encompass perception, desires, action, emotion, etc., and which are ultimately mediated and updated in order to reduce prediction errors based on incoming sensory evidence (a process that approximates a Bayesian form of inference).  Perception then, which is constituted by a subset of all our beliefs (with many of them being implicit or unconscious beliefs), is more or less a form of controlled hallucination, in the sense that what we consciously perceive is not the actual sensory evidence itself (not even after processing it), but rather our brain’s “best guess” of what the causes for the incoming sensory evidence are.

Desires can be best described as another subset of one’s beliefs, and a set of beliefs which has the special characteristic of being able to drive action or physical behavior in some way (whether driving internal bodily states, or external ones that move the body in various ways).  Finally, emotions can be thought of as predictions pertaining to the causes of internal bodily states and which may be driven or changed by changes in other beliefs (including changes in desires or perceptions).

When we believe something to be true or false, we are basically just modeling some kind of causal relationship (or its negation) which is able to manifest itself into a number of highly-weighted predicted perceptions and actions.  When we believe something to be likely true or likely false, the same principle applies but with a lower weight or precision on the predictions that directly corresponds to the degree of belief or disbelief (and so new sensory evidence will more easily sway such a belief).  And just like our perceptions, which are mediated by a number of low and high-level predictions pertaining to incoming sensory data, any prediction error that the brain encounters results in either updating the perceptual predictions to new ones that better reduce the prediction error and/or performing some physical action that reduces the prediction error (e.g. rotating your head, moving your eyes, reaching for an object, excreting hormones in your body, etc.).

In all these cases, we can describe the brain as having some set of Bayesian prior probabilities pertaining to the causes of incoming sensory data, and these priors changing over time in response to prediction errors arising from new incoming sensory evidence that fails to be “explained away” by the predictive models currently employed.  Strong beliefs are associated with high prior probabilities (highly-weighted predictions) and therefore need much more counterfactual sensory evidence to be overcome or modified than for weak beliefs which have relatively low priors (low-weighted predictions).
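Here’s a minimal sketch of that idea in Python, using a single Gaussian belief with a precision-weighted update (a cartoon of the idea, not a full PP model; the numbers are invented).  The same surprising piece of sensory evidence barely moves a highly weighted (precise) prior, but substantially moves a weakly weighted one:

```python
def precision_weighted_update(prior_mean, prior_precision, sample, sensory_precision):
    """Update a Gaussian belief about a hidden cause given one sensory sample."""
    posterior_precision = prior_precision + sensory_precision
    gain = sensory_precision / posterior_precision   # how much the prediction error counts
    prediction_error = sample - prior_mean
    return prior_mean + gain * prediction_error, posterior_precision

surprising_sample = 2.0   # sensory evidence far from the expected value of 0.0

# A strong belief (high prior precision) barely budges...
print(precision_weighted_update(0.0, prior_precision=10.0, sample=surprising_sample, sensory_precision=1.0))
# (0.18..., 11.0)

# ...while a weak belief (low prior precision) gets pulled most of the way there.
print(precision_weighted_update(0.0, prior_precision=0.5, sample=surprising_sample, sensory_precision=1.0))
# (1.33..., 1.5)
```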

To illustrate some of these concepts, let’s consider a belief like “apples are a tasty food”.  This belief can be broken down into a number of lower level, highly-weighted predictions such as the prediction that eating a piece of what we call an “apple” will most likely result in qualia that accompany the perception of a particular satisfying taste, the lower level prediction that doing so will also cause my perception of hunger to change, and the higher level prediction that it will “give me energy” (with these latter two predictions stemming from the more basic category of “food” contained in the belief).  Another prediction or set of predictions is that these expectations will apply to not just one apple but a number of apples (different instances of one type of apple, or different types of apples altogether), and a host of other predictions.

These predictions may even result (in combination with other perceptions or beliefs) in an actual desire to eat an apple which, under a PP lens could be described as the highly weighted prediction of what it would feel like to find an apple, to reach for an apple, to grab it, to bite off a piece of it, to chew it, and to swallow it.  If I merely imagine doing such things, then the resulting predictions will necessarily carry such a small weight that they won’t be able to influence any actual motor actions (even if these imagined perceptions are able to influence other predictions that may eventually lead to some plan of action).  Imagined perceptions will also not carry enough weight (when my brain is functioning normally at least) to trick me into thinking that they are actual perceptions (by “actual”, I simply mean perceptions that correspond to incoming sensory data).  This low-weighting attribute of imagined perceptual predictions thus provides a viable way for us to have an imagination and to distinguish it from perceptions corresponding to incoming sensory data, and to distinguish it from predictions that directly cause bodily action.  On the other hand, predictions that are weighted highly enough (among other factors) will be uniquely capable of affecting our perception of the real world and/or instantiating action.

This latter case of desire and action shows how the PP model takes the organism to be an embodied prediction machine that is directly influencing and being influenced by the world that its body interacts with, with the ultimate goal of reducing any prediction error encountered (which can be thought of as maximizing Bayesian evidence).  In this particular example, the highly-weighted prediction of eating an apple is simply another way of describing a desire to eat an apple, which produces some degree of prediction error until the proper actions have taken place in order to reduce said error.  The only two ways of reducing this prediction error are to change the desire (or eliminate it) to one that no longer involves eating an apple, and/or to perform bodily actions that result in actually eating an apple.

Perhaps if I realize that I don’t have any apples in my house, but I realize that I do have bananas, then my desire will change to one that predicts my eating a banana instead.  Another way of saying this is that my higher-weighted prediction of satisfying hunger supersedes my prediction of eating an apple specifically, thus one desire is able to supersede another.  However, if the prediction weight associated with my desire to eat an apple is high enough, it may mean that my predictions will motivate me enough to avoid eating the banana, and instead to predict what it is like to walk out of my house, go to the store, and actually get an apple (and therefore, to actually do so).  Furthermore, it may motivate me to predict actions that lead me to earn the money such that I can purchase the apple (if I don’t already have the money to do so).  To do this, I would be employing a number of predictions having to do with performing actions that lead to me obtaining money, using money to purchase goods, etc.
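Here’s a toy Python sketch of that dynamic (the states, plans, and weights are all invented for illustration): a desire is represented as a highly weighted predicted state, and “choosing an action” just amounts to selecting whichever available plan most reduces the weighted prediction error:

```python
# Desired (predicted) state versus the predicted outcomes of the available plans.
desired = {"eating_apple": 1.0, "hunger": 0.0, "effort": 0.0}

plans = {
    "eat the banana in the kitchen":  {"eating_apple": 0.0, "hunger": 0.0, "effort": 0.1},
    "walk to the store for an apple": {"eating_apple": 1.0, "hunger": 0.0, "effort": 1.0},
    "do nothing":                     {"eating_apple": 0.0, "hunger": 1.0, "effort": 0.0},
}

# Weights encode how strongly each outcome is predicted (i.e. how strong each desire is).
weights = {"eating_apple": 0.5, "hunger": 2.0, "effort": 1.0}

def weighted_error(outcome):
    return sum(weights[k] * abs(desired[k] - outcome[k]) for k in desired)

best_plan = min(plans, key=lambda name: weighted_error(plans[name]))
print(best_plan)
# With a low weight on the apple-specific prediction, the banana wins; raise
# weights["eating_apple"] to, say, 3.0 and the trip to the store wins instead.
```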

This is but a taste of what PP has to offer, and of how we can look at basic concepts within folk psychology, cognitive science, and theories of mind in a new light.  Associated with all of these beliefs, desires, emotions, and actions (which again, are simply different kinds of predictions under this framework) are a number of elements pertaining to ontology (i.e. what kinds of things we think exist in the world) and to language as well, and I’d like to explore this relationship in my next post.  That post can be found here.

“The Brothers Karamazov” – A Moral & Philosophical Critique (Part III)


In the first two posts that I wrote in this series (part I and part II) concerning some concepts and themes mentioned in Dostoyevsky’s The Brothers Karamazov, I talked about moral realism and how it pertains to theism and atheism (and the character Ivan’s own views), and I also talked about moral responsibility and free will to some degree (and how these related to the interplay between Ivan and Smerdyakov).  In this post, I’m going to look at the concepts of moral conscience and intuition, and how they apply to Ivan’s perspective and his experiencing an ongoing hallucination of a demonic apparition.  This demonic apparition only begins to haunt Ivan after he hears that his influence on his brother Smerdyakov led him to murder their father Fyodor.  The demon continues to torment Ivan until just before his other brother Alyosha informs him that Smerdyakov has committed suicide.  Then I’ll conclude with some discussion of the concept of moral desert (justice).

It seems pretty clear that the demonic apparition that appears to Ivan is a psychosomatic hallucination brought about as a manifestation of Ivan’s overwhelming guilt over what his brother has done, since he feels that he bears at least some of the responsibility for his brother’s actions.  We learn earlier in the story that Zosima, a wise elder living at a monastery who acts as a mentor and teacher to Alyosha, had explained to Ivan that everyone bears at least some responsibility for the actions of everyone around them, because human causality is so heavily intertwined, with one person’s actions having a number of complicated effects on the actions of everyone else.  Despite Ivan’s strong initial reservations against this line of reasoning, he seems to have finally accepted that Zosima was right, hence his suffering a nervous breakdown upon realizing this.

Obviously Ivan’s moral conscience seems to be driving this turn of events and this is the case whether or not Ivan explicitly believes that morality is real.  And so we can see that despite Ivan’s moral skepticism, his moral intuitions and/or his newly accepted moral dispositions as per Zosima, have led him to his current state of despair.  Similarly, Ivan’s views on the problem of evil — whereby the vast amount of suffering in the world either refutes the existence of God, or shows that this God (if he does exist) must be a moral monster — betray even more of Ivan’s moral views with respect to how he wants the world to be.  His wanting the world to have less suffering in it, along with his wishing that his brother had not committed murder (let alone as a result of his influence on his brother), illustrates a number of moral “oughts” that Ivan subscribes to.  And whether they’re simply based on his moral intuitions or also rational moral reflection, they illustrate the deeply rooted psychological aspects of morality that are an inescapable facet of the human condition.

This situation also helps to explain some of the underlying motivations behind my own reversion back toward some form of moral realism after becoming an atheist myself, a reversion initially catalyzed by my own moral intuitions and then later solidified and justified by rational moral reflection on objective facts pertaining to human psychology and other factors.  Now it should be said that moral intuitions on their own are only a generally useful heuristic, as they often mislead us (and are often incorrect), which is why it is imperative that they are checked by a rational assessment of the facts at hand.  But, nevertheless, they help to illustrate how good and evil can be said to be real (in at least some sense), even to someone like Ivan who doesn’t think they have an objective foundation.  They may not be the conceptions of good and evil described in many religions, with supernatural baggage attached, but they are real nonetheless.

Another interesting point worth noting is in regard to Zosima’s discussion about mutual moral responsibility.  While I already discussed moral responsibility in the last post along with its relation to free will, there’s something rather paradoxical about Dostoyevsky’s reasoning as expressed through Zosima that I found quite interesting.  Zosima talks about how love and forgiveness are necessary because everyone’s actions are intertwined with everyone else’s and therefore everyone bears some responsibility for the sins of others.  This idea of shared responsibility is abhorrent to those in the story that doubt God and the Christian religion (such as Ivan), who only want to be responsible for their own actions, but the complex intertwined causal chain that Zosima speaks of is the same causal chain that many determinists invoke to explain our lack of libertarian free will and how we can’t be held responsible in a causa sui manner for our actions.

Thus, if someone dies and there is in fact an afterlife, by Zosima’s own reasoning that person should not be judged as an individual solely responsible for their actions either.  That person should instead receive unconditional love and forgiveness and be redeemed rather than punished.  But this idea is anathema to standard Christian theology, where one is supposed to be judged and given eternal paradise or eternal torment (with vastly disproportionate consequences given the finite degree of one’s actions).  It’s no surprise that Zosima isn’t looked upon as a model clergyman by some of his fellow monks in the monastery, because his emphatic preaching about love and forgiveness undermines the typical heavy-handed judgemental aspects of God within Christianity.  But in any case, if God exists and understood that people were products of their genes and their environment, which is causally interconnected with everyone else’s (i.e. libertarian free will is logically impossible), then a loving God would grant everyone forgiveness after death and grant them eternal paradise based on that understanding.  And oddly enough, this also undermines Ivan’s own reasoning that good and evil can only exist if there is an afterlife in which we are judged, because forgiveness and eternal paradise should be granted to everyone in the afterlife (by a truly loving God) if Zosima’s reasoning were taken to its logical conclusions.  So not only does Zosima’s reasoning seem to undermine the justification for unequal treatment of souls in the afterlife, but it also undermines the Christian conception of free will to boot (which is logically impossible regardless of Zosima’s reasoning).

And this brings me to the concept of moral desert.  In some ways I agree with Zosima, at least in the sense that love (or more specifically compassion) and forgiveness are extremely important in proper moral reasoning.  And once one realizes the logical impossibility of libertarian free will, this should only encourage one’s use of love and forgiveness, in the sense that people should never be trying to punish a wrongdoer (or hope for their punishment) for the sake of retributive justice or vengeance.  Rather, people should only punish (or hope that one is punished) as much as is necessary to compensate the victim as best as the circumstances allow and (more importantly) to rehabilitate the wrongdoer by reprogramming them through behavioral conditioning.  Anything above and beyond this is excessive, malicious, and immoral.  Similarly, a loving God (if one existed) would never punish anyone in the afterlife beyond what is needed to rehabilitate them (and it would seem that no punishment at all should really be needed if this God had the power to accomplish these feats on immaterial souls using magic).  And if this God had no magic to accomplish this, then at the very least there should never be any eternal punishments, since punishing someone forever (let alone torturing them forever) not only illustrates that there is no goal to rehabilitate the wrongdoer, but also that this God is beyond psychopathic and malevolent.  Again, think of Zosima’s reasoning as it applies here.

Looking back at the story with Smerdyakov, why does the demonic apparition leave Ivan right around the time that he learns that Smerdyakov killed himself?  It could be because Ivan thinks that Smerdyakov has gotten what he deserved, and that he’s no longer roaming free (so to speak) after his heinous act of murder.  And it could also be because Ivan seemed sure at that point that he would confess to the murder (or at least to having motivated Smerdyakov to do it).  But if either of these notions is true, then once again Ivan has betrayed yet another moral disposition of his: that murder is morally wrong.  It may also imply that Ivan, deep down, believes in an afterlife after all, and that Smerdyakov will now be judged for his actions.

It no doubt feels good to a lot of people when they see someone who has wronged another getting punished for their bad deeds.  The feeling of justice and even vengeance can be emotionally powerful, especially if the wrongdoer took the life of someone that you or someone else loved very much.  It’s a common feeling to want that criminal to suffer, perhaps to rot in jail until they die, perhaps to be tortured, or what-have-you.  And these intuitions illustrate why so many religious beliefs surrounding judgment in the afterlife share many of these common elements.  People invented these religious beliefs (whether consciously or not) because it makes them feel better about wrongdoers that may otherwise die without having been judged for their actions.  After all, when is justice going to be served?  It is also a motivating factor for a lot of people to keep their behaviors in check (as per Ivan’s rationale regarding an afterlife being required in order for good and evil to be meaningful to people).  Even though I don’t think that this particular motivation is necessary (and therefore Ivan’s argument is incorrect), due to other motivating forces such as the level of fulfillment and personal self-worth gained through living a life of moral virtue (or the lack thereof in those who fail to live virtuously), it is still a motivation that exists in many people and strongly intersects with the concept of moral desert.  Due to its pervasiveness in our intuitions, its influence on how we perceive other human beings, and its importance in moral theory in general, people should spend a lot more time critically reflecting on this concept.

In the next part of this post series, I’m going to talk about the conflict between faith and doubt, perhaps the most ubiquitous theme found in The Brothers Karamazov, and how it ties all of these other concepts together.

“The Brothers Karamazov” – A Moral & Philosophical Critique (Part II)


In my last post in this series concerning Dostoyevsky’s The Brothers Karamazov, I talked about the concept of good and evil and the character Ivan’s personal atheistic perception that they are contingent on God existing (or at least on an afterlife of eternal reward or punishment).  While there may even be a decent percentage of atheists who share this view (objective morality being contingent on God’s existence), I briefly explained my own views (being an atheist myself), which differ from Ivan’s in that I am a moral realist and believe that morality is objective, independent of any gods or immortal souls existing.  I believe that moral facts exist (and that science is the best way to find them), that morality is ultimately grounded on objective facts pertaining to human psychology, biology, sociology, neurology, and other facts about human beings, and thus that good and evil do exist in at least some sense.

In this post, I’m going to talk about Ivan’s influence on his half-brother, Smerdyakov, who ends up confessing to Ivan that he murdered their father Fyodor Pavlovich as a result of Ivan’s philosophical influence on him.  In particular, Smerdyakov implicates Ivan as at least partially responsible for his own murderous behavior, since Ivan successfully convinced him that evil wasn’t possible in a world without a God.  Ivan ends up becoming consumed with guilt, basically suffers a nervous breakdown, and then is incessantly taunted by a demonic apparition.  The hallucinations continue up until the moment Ivan comes to find out, from his very religious brother Alyosha, that Smerdyakov has hanged himself.  This scene highlights a number of important topics beyond the moral realism I discussed in the first post, such as moral responsibility, free will, and even moral desert (I’ll discuss this last topic in my next post).

As I mentioned before, beyond the fact that we do not need a god to ground moral values, we also don’t need a god or an afterlife to motivate us to behave morally either.  By cultivating moral virtues such as compassion, honesty, and reasonableness, and analyzing a situation using a rational assessment of as many facts as are currently accessible, we can maximize our personal satisfaction and thus our chances of living a fulfilling life.  Behavioral causal factors that support this goal are “good” and those that detract from it are “evil”.  Aside from these labels though, we actually experience a more or less pleasing life depending on our behaviors and therefore we do have real-time motivations for behaving morally (such as acting in ways that conform to various cultivated virtues).  Aristotle claimed this more than 2000 years ago and moral psychology has been confirming it time and time again.

Since we have evolved as a particular social species with a particular psychology, not only do we have particular behaviors that best accomplish a fulfilling life, but there are also various behavioral conditioning algorithms and punishment/reward systems that are best at modifying our behavior.  And this brings me to Smerdyakov.  By listening to Ivan and ultimately becoming convinced by Ivan’s philosophical arguments, he seems to have been conditioned out of his previous views on moral responsibility.  In particular, he ended up adopting the belief that if God does not exist, then anything is permissible.  Since he also rejected a belief in God, he therefore thought he could do whatever he wanted.

One thing this turn of events highlights is that there are a number of different factors that influence people’s behaviors and that can lead to their being reasoned into doing (or not doing) all sorts of things.  As Voltaire once said, “Those who can make you believe absurdities can also make you commit atrocities.”  And I think what Ivan told Smerdyakov was in fact absurd — although it was a belief that I once held as well, not long after becoming an atheist.  For it is quite obviously absurd that anything is permissible without a God existing, for at least two types of reasons: pragmatic considerations and moral considerations (with the former overlapping with the latter).  Pragmatic reasons include things like not wanting to be fined, incarcerated, or even executed by a criminal justice system that operates to minimize illegal behaviors.  They include not wanting to be ostracized from your circle of friends, your social groups, or your community (and risking the loss of beneficial reciprocity, safety nets, etc.).  Moral reasons include everything that detracts from your overall psychological well-being, the very thing that is needed to live a maximally fulfilling life given one’s circumstances.  Behaving in ways that degrade your sense of inner worth, your integrity, and your self-esteem, and that diminish a good conscience, is going to make you feel miserable compared to behaving in ways that positively impact these fundamental psychological goals.

Furthermore, this part of the story illustrates that we have a moral responsibility not only for ourselves and our own behavior, but also in terms of how we influence the behavior of those around us, based on what we say, how we treat them, and more.  This also reinforces the importance of social contract theory and how it pertains to moral behavior.  If we follow simple behavioral heuristics like the Golden Rule and mutual reciprocity, then we can work together to obtain and secure common social goods such as various rights, equality, environmental sustainability, democratic legislation (ideally based on open moral deliberation), and various social safety nets.  We can also punish those who violate the social contract, as we already do with the criminal justice system and various kinds of social ostracization.  While our system of checks is far from perfect, having some system that serves such a purpose is necessary, because not everybody behaves in ways that are ultimately beneficial to themselves or to everyone else around them.  People need to be conditioned to behave in ways that are more conducive to their own well-being and that of others, and if all reasonable efforts to achieve that fail, they may simply need to be quarantined through incarceration (for example, psychopaths or other violent criminals that society needs to be protected from and that aren’t responding to rehabilitation efforts).

In any case, we do have a responsibility to others and that means we need to be careful what we say, such as the case with Ivan and his brother.  And this includes how we talk about concepts like free will, moral responsibility, and moral desert (justice).  If we tell people that all of their behaviors are determined and therefore don’t matter, that’s not a good way to get people to behave in ways that are good for them or for others.  Nor is telling them that because a God doesn’t exist, that their actions don’t matter.  In the case of deterministic nihilism, it’s a way to get people to lose much if not all of their motivation to put forward effort in achieving useful goals.  And both deterministic and atheistic moral nihilism are dangerous ideas that can get some people to commit heinous crimes such as mass shootings (or murdering their own father as Smerdyakov did), because they simply cause people to think that all behaviors are on equal footing in any way that matters.  And quite frankly, those nihilistic ideas are not only dangerous but also absurd.

While I’ve written a bit on free will in the past, my views have become more refined over the years, and my overall attitude towards the issue has been co-evolving alongside my views on morality.  The main crux of the free will issue is that libertarian free will is logically impossible because our actions are never free from both determinism and indeterminism (randomness) since one or the other must underlie how our universe operates (depending on which interpretation of Quantum Mechanics is correct).  Neither option from this logical dichotomy gives us “the freedom to have chosen to behave differently given the same initial conditions in a non-random way”.  Therefore free will in this sense is logically impossible.  However, this does not mean that our behavior isn’t operating under some sets of rules and patterns that we can discover and modify.  That is to say, we can effectively reprogram many of our behavioral tendencies using various forms of conditioning through punishment/reward systems.  These are the same systems we use to teach children how to behave and to rehabilitate criminals.

The key thing to note here is that we need to acknowledge that even if we don’t have libertarian free will, we still have a form of “free will” that matters (as philosophers like Daniel Dennett have said numerous times) whereby we have the ability to be programmed and reprogrammed in certain ways, thus allowing us to take responsibility for our actions and design ways to modify future actions as needed.  We have more degrees of freedom than a person who is insane for example, or a child, or a dog, and these degrees of freedom or autonomy — the flexibility we have in our decision-making algorithms — can be used as a rough guideline for determining how “morally responsible” a person is for their actions.  That is to say, the more easily a person can be conditioned out of a particular behavior, and the more rational decision making processes are involved in governing that behavior, the more “free will” this person has in a sense that applies to a criminal justice system and that applies to most of our everyday lives.

In the end, it doesn’t matter whether someone thinks that their behavior doesn’t matter because there’s no God, or because they have no libertarian free will.  What needs to be pointed out is the fact that we are able to behave in ways (or be conditioned to behave in ways) that lead to more happiness, more satisfaction, and more fulfilling lives.  And we are able to behave in ways that detract from this goal.  So which behaviors should we aim for?  I think the answer is obvious.  We also need to realize that, as a part of our behavioral patterns, ideas have consequences for others and their subsequent behaviors.  So we need to be careful about which ideas we choose to spread and to make sure that they are put into a fuller context.  If a person hasn’t critically reflected on the consequences that may ensue from spreading their ideas to others, especially to others that may misunderstand them, then they need to keep those ideas to themselves until they’ve reflected on them more.  And this is something that I’ve discovered and applied for myself as well, as I was once far less careful about this than I am now.  In the next post, I’m going to talk about the concept of moral desert and how it pertains to free will.  This will be relevant to the scene described above regarding Ivan’s demonic apparition that haunts him as a result of his guilt over Smerdyakov’s murder of their father, as well as to why the demonic apparition disappeared once Ivan heard that Smerdyakov had taken his own life.

CGI, Movies and Truth…


After watching Rogue One: A Star Wars Story (which I liked, though not nearly as much as the original trilogy, Episodes IV, V, and VI), I got to thinking more about something I hadn’t thought about since the most recent presidential election.  As I watched Grand Moff Tarkin and Princess Leia, both characters made possible in large part thanks to CGI (as well as the help of actors Ingvild Deila and Guy Henry), I realized that, although this is still a long way off, it is inevitable (barring a nuclear world war or some other catastrophe that kills us all or sets us back to a pre-industrialized age) that the pace of this technology will eventually lead to CGI products that are completely indistinguishable from reality.

This means that eventually, the “fake news” issue that many have been making a lot of noise about as of late, will one day take a new and ugly turn for the worse.  Not only is video and graphic technology accelerating at a fairly rapid pace to exacerbate this problem, but similar concerns are also arising as a result of voice editing software.  By simply gathering several seconds of sample audio from a person of interest, various forms of software are getting better and better at synthesizing their speech in order to mimic them — putting whatever words into “their” mouths that one so desires.

The irony here is that, despite the fact that we are going to continue becoming more and more globally interconnected and technologically advanced, and to gain more global knowledge, it seems that we will eventually reach a point where each individual becomes less and less able to know what is true and what isn’t across all the places they are electronically connected to.  One reason for this is that, as per the opening reference to Rogue One, it will become increasingly difficult to judge the veracity of videos that go viral on the internet and/or through news outlets.  We can imagine a video (or a series of videos) released on the news and throughout the internet containing shocking events involving real world leaders or other famous people, places, and so forth (events that could possibly start a civil or world war, alter one’s vote, or otherwise), but with the caveat that these events are entirely manufactured by some Machiavellian warmonger or power-seeking elite.

Pragmatically speaking, we must still live our lives trusting what we see in proportion to the evidence we have, believing ordinary claims with a higher degree of confidence than extraordinary ones.  We will still need to hold to the general rule that extraordinary claims require extraordinary evidence in order to meet their burden of proof.  But once we are able to successfully pass some kind of graphics Turing test, it will become more difficult to trust certain forms of evidence (including in a court of law), so we’ll have to take that into consideration so that actions with more catastrophic consequences (should your assumptions or information turn out to be based on false evidence) require a higher burden of proof.

This is by no means an endorsement of conspiracy theories generally, nor of any other anti-intellectual or dogmatic nonsense.  We don’t want people to start doubting everything they see, nor to start doubting everything they don’t WANT to see (which would be a proverbial buffet for our cognitive biases and for the conspiracy theorists that make use of these epistemological flaws regularly), but we still need to take this dynamic technological factor into account in order to maintain a world view based on proper Bayesian reasoning.

On the brighter side of things, we are going to get to enjoy much of what the new CGI capabilities will bring us, because movies and visual entertainment generally are going to be revolutionized in many ways that will be worth celebrating, including our use of virtual reality (in the many forms that we do, and will continue to, consciously and rationally desire).  We just need to pay attention and exercise some careful moral deliberation as we develop these technologies.  Our brains simply didn’t evolve to easily avoid being duped by artificial realities like the ones we’re developing (we already get duped far too often within our actual reality), so we need to engineer our path forward in a way that will better safeguard us from our own cognitive biases, so we can maximize our well-being once this genie is out of the bottle.

Co-evolution of Humans & Artificial Intelligence


In my last post, I wrote a little bit about the concept of personal identity, in terms of what some philosophers have emphasized and what my own take on it is.  I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher’s Take.  James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how advancements in machine learning and related fields are leading to the prospect of effective companion AI, or what he referred to as programmable friends.  The main point he raised in that post was that programmable friends would likely have a very different relationship dynamic with us compared to our traditional (human) friends.  James also spoke about companion AI as laborers (in addition to being friends), but for the purposes of this post I won’t discuss those laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place).  I’ll be limiting my comments here to the friendship, or social dynamic, aspects only.  So what aspects of programmable AI should we start thinking about?

Well for one, we don’t currently have the ability to simply reprogram a friend to be exactly as we want them to be, in order to avoid conflicts entirely, to share every interest we have, and so on; rather, there is a give-and-take relationship dynamic that we’re used to dealing with.  We learn new ways of behaving, of looking at the world, and even of looking at ourselves when we have friendships with people who differ from us in certain ways.  Much of the expansion and beneficial evolution of our perspectives is the result of conflicts that arise between ourselves and our friends, where different viewpoints clash against one another, often forcing a person to reevaluate their own position based on the value they place on the viewpoints of their friends.  If we could simply reprogram our friends, as in the case of some future AI companions, what would this do to our moral, psychological, and intellectual growth?  There would be some positive effects, I’m sure (less conflict in some cases and thus an increase in short-term happiness), but we’d definitely be missing out on a host of interpersonal benefits that we gain from the types of friendships we’re used to having (and thus we’d likely have less overall happiness as a result).

We can see where evolution ties into all of this: we have evolved as a social species to interact with others who are more or less like us, so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable, largely because of the incompatibility with our evolved social dynamic.  To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI.  In general, this could be done by making the AI more human-like in the first place, and this could be advertised as a kind of beneficial AI “social software package,” so that people looking to get a companion AI would be inclined to choose this version even if they had the option of an entirely reprogrammable one.

Some features of a “social software package” could include a limit on the ways the AI can be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but not all conflicts.  The AI could also be designed to weight certain opinions, just as we do, and to be more assertive with regard to certain propositions.  Once the AI has learned its human counterpart’s personality, values, opinions, and so forth, it could also be programmed to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time).  If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.
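As a way of making this speculative idea a bit more concrete, here is a minimal, entirely hypothetical sketch of what such a “social software package” configuration might look like.  Every name and parameter below is invented purely for illustration; no real companion-AI system or API is being described:

```python
from dataclasses import dataclass

@dataclass
class SocialSoftwarePackage:
    """Hypothetical configuration for a companion AI's social behavior."""

    # Only conflicts above this severity (0-1) can be removed by reprogramming,
    # so everyday disagreements remain part of the relationship.
    reprogrammable_conflict_threshold: float = 0.8

    # How strongly the AI holds and asserts its own weighted opinions (0-1).
    assertiveness: float = 0.6

    # How often the AI deliberately challenges its human counterpart
    # with opposing viewpoints or Socratic questioning (0-1).
    socratic_challenge_rate: float = 0.2

    def can_reprogram(self, conflict_severity: float) -> bool:
        """Allow reprogramming away only very serious conflicts."""
        return conflict_severity >= self.reprogrammable_conflict_threshold
```

The point of the sketch is simply that the design space has tunable trade-offs: how much friction to preserve, how much pushback to allow, and how often to challenge the user, rather than defaulting to a fully compliant, fully reprogrammable friend.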

Another factor that I think will likely play a role in mitigating the possible clash between the social dynamics of (programmable) companion AI and humans is the fact that humans themselves are likely to become more and more integrated with AI technology.  That is, as we continue to make advancements in AI technology, we are also likely to integrate much of that technology into ourselves, making humans more or less into cyborgs, a.k.a. cybernetic organisms.  Given the path we’re already on, with smartphones, apps, and other gadgets and computer systems becoming extensions of ourselves, we can see that the next obvious step (which I’ve mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need to interface with external hardware.  If we can access “the cloud” with our minds (say, via Bluetooth or the like), then the apps and all their fancy features can become a part of our minds, adding to the ways we are able to think, providing an internet’s worth of knowledge at our cognitive disposal, and so on.  I could see this technology eventually allowing us to change our senses and perceptions, including the ability to add virtual objects amalgamated with the rest of the external reality we perceive (such as virtual friends that we see and interact with who aren’t physically present outside our minds, even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves while we are also creating companion AI, then we may end up with the ability to reprogram ourselves alongside those programmable companions.  In effect, our own qualitatively human social dynamic may start to change markedly and become more and more compatible with that of future AI.  The way I think this will most likely play out is that we will try to make AI more like us while we try to make ourselves more like AI, co-evolving with one another, sharing advantages with one another, and eventually becoming indistinguishable from one another.  Along this journey, however, we will also become very different from the way we are now, and after enough time passes we’ll likely change so much that we’d be unrecognizable to people living today.  My hope is that as we use AI to improve our intelligence and increase our knowledge of the world generally, we will also continue to improve our knowledge of what makes us happiest (as social creatures or otherwise) and thus how to live the best and most morally fruitful lives that we can.  This will include improving our knowledge of social dynamics and the ways we can maximize all the interpersonal benefits therein.  Artificial intelligence may help us to accomplish this, however paradoxical or counter-intuitive that may seem to us now.