The Open Mind

Cogito Ergo Sum


Virtual Reality & Its Moral Implications


There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in the users of those games (most especially younger users with more impressionable minds).  These claims have received significant support from the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, citing accumulated research and meta-analyses showing a strong link between violent video gaming and increased aggression, anti-social behavior, and sharp decreases in moral engagement and empathy.

Thus, the increasingly realistic nature of VR, and the steadily growing capacities one has at one’s disposal within such a virtual space, are bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia among other morally reprehensible actions and social taboos, will they too become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world?  Even if they don’t become more susceptible to performing those behaviors, what does such a virtual escapade do to that person’s moral character?  Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some kind of moral relevance or cognitive overlap with them?

On the flip side, what if it were possible to use VR as a therapeutic tool to help cure pedophilia or other behavioral problems?  What if one were able to simulate rape, pedophilia, or other such acts in order to reduce their chances of performing those acts in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as real instances of such crimes would be: horrific crimes committed against real human beings.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent if not morally obligatory to do what we can to prevent such harms from taking place.  So even though this technology could lead the otherwise healthy user to engage in bad behaviors, it could also be used as an outlet of expression for those already afflicted with such impulses.  Just as VR has already been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them overcome their ills, so too may VR provide a treatment for other types of mental illness and aggressive predispositions, such as those related to murder, sexual assault, etc.

Whether VR is used as an outlet for certain behaviors to prevent them from actually happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is to eventually no longer need any VR treatment at all), both paths could show promising results in decreasing crime.  But, beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition to limit the types of things people could do in these virtual spaces, that may be of some use, but beyond the fact that this kind of prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justified to implement.  If one chose to prohibit simulated rape and pedophilia (for example), but not prohibit murder or other forms of assault and violence, then what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad if not worse anyway).  It seems that instead we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning to see if at least some simulated activities should be prohibited while allowing others to persist unregulated.  On the other hand, if prohibiting this kind of activity is not practical or if it can only be implemented by infringing on certain liberties that we have good reasons to protect, then we need to think about some counter-strategies to either better inform people about these kinds of dangers and/or to make other VR products that help to encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than to help us cultivate moral virtues and other behaviors that are conducive to our well-being and to our living a more fulfilled life.  The possibilities range from reducing our prejudices and biases through exposure to various simulated “out-groups” (for example), to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and already have, to some degree) to work out various moral dilemmas and our psychological responses to them without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, thus making it incumbent upon us to talk more about these technologies as they continue to develop.

On Moral Desert: Intuition vs Rationality


So what exactly is moral desert?  Well, in a nutshell, it is what someone deserves as a result of their actions, as defined within the framework of some kind of moral theory.  Generally when people talk about moral desert, it’s couched in terms of punishment and reward, and our intuitions (whether innate or culturally inherited) often produce strong feelings of knowing exactly what kinds of consequences people deserve in response to their actions.  But if we think about why we implement any reasonable moral theory in the first place, we should quickly recognize that our ultimate moral goal is simply to have ourselves and everybody else abide by that theory, in order to guide our behavior and the behavior of those around us in ways that are conducive to our well-being.  Ultimately, we want ourselves and others to act in ways that maximize our personal satisfaction — not in a hedonistic sense, but rather in the sense of maximizing our contentment and living a fulfilled life.

If we think about scenarios that seem to merit punishment or reward, it is useful to keep this ultimate moral goal in mind.  I mention this because our feelings of resentment toward those who have wronged us can often lead us to advocate for an excessive amount of punishment for the wrongdoer.  Among other factors, vengeance and retribution often become incorporated into our intuitive sense of justice.  Many have argued that retribution itself (justifying “proportionate” punishment by appealing to concepts like moral desert and justice) isn’t a bad thing, even if vengeance — which lacks inherent limits on punishment, involves personal emotions from the victim, and has other distinguishing features — is in fact a bad thing.  While thinking about such a claim, I think it’s imperative that we analyze our reasons for punishing a wrongdoer in the first place and then analyze the concept of moral desert more closely.

Free Will & Its Implications for Moral Desert

Another relevant topic I’ve written about in several previous posts is the concept of free will.  This is an extremely important concept to parse out here, because moral desert is most often intimately tied to the positive claim that we have free will.  That is to say, most concepts of moral desert, whereby people are believed to deserve punishment and reward for actions that warrant it, fundamentally rely on the premise that people could have chosen to do otherwise but instead chose the path they did out of free choice.  While philosophers have proposed various versions of free will, they all tend to revolve around some concept of autonomous agency.  The folk-psychological conception of free will that most people subscribe to involves some form of deliberation that is self-caused in some way: it rules out randomness or indeterminism as the “cause”, since randomness can’t be authored by the autonomous agent, and it also rules out non-randomness or determinism, since an unbroken chain of antecedent causes can’t be authored by the autonomous agent either.

So as to avoid a long digression, I’m not going to expound upon all the details of free will and the various versions that others have proposed; I will only mention that the version most relevant to moral desert is generally some form of the ability to have chosen to do otherwise (ignoring randomness).  Notice that because indeterminism and determinism form a logical dichotomy, these are the only two options that can possibly describe the ontological underpinnings of our universe (in terms of causal laws that describe how the state of the universe changes over time).  Quantum mechanics is compatible with either option, since its various interpretations (deterministic and indeterministic alike) are empirically identical with one another; but there is no third option available, so quantum mechanics doesn’t buy us any room for this kind of free will either.  Since neither option can produce any form of self-caused or causa sui free will (sometimes referred to as libertarian free will), the intuitive concept of moral desert that relies on such free will is also rendered impossible, if not altogether meaningless.  Therefore moral desert can only exist as a coherent concept if it no longer contains within it any assumption that the moral agent had the ability to have chosen to do otherwise (again, ignoring randomness).  So what does this realization imply for our preconceptions of justified punishment, or even justice itself?

At the very least, the concept of moral desert that is involved in these other concepts needs to be reformulated or restricted, given the impossibility and thus the non-existence of libertarian free will.  So if we are to say that people “deserve” anything at all, morally speaking (such as a particular punishment), it can only be justified, let alone meaningful, in some other sense, such as a consequentialist goal that the implementation of the “desert” (in this case, the punishment) effectively accomplishes.  Punishing the wrongdoer can no longer be a means of their getting their due, so to speak, but rather needs to be justified by some other purpose such as rehabilitation, future crime deterrence, and/or restitution for the victim (to compensate for physical damages, property loss, etc.).  With respect to this latter factor, restitution, there is plenty of wiggle room for some to argue for punishment on the grounds of it simply making the victim feel better (which I suppose we could call a form of psychological restitution).  One needs to carefully consider what justifications are sufficient (if any) for punishing another simply to make the victim feel better, and vengeance should be avoided at all costs.

Regarding psychological restitution, it’s useful to bring up the aforementioned concepts of retribution and vengeance, and to appreciate that vengeance can easily result in cases where no disinterested party performs the punishment or decides its severity; instead, the victim (or another interested party) is involved in those decisions and processes.  Given the fact that we lack libertarian free will, we can also see that vengeance is not rationally justifiable, and therefore why it is important that we take this into account, not only in terms of society’s methods of criminal behavioral correction but also in terms of how we behave toward others whom we think have committed some wrongdoing.

Deterrence & Fairness of Punishment

As for criminal deterrence, I was thinking about this concept the other day and noticed a possible conundrum concerning its justification (certain forms of deterrence, anyway).  If a particular punishment is agreed upon within some legal system on the grounds that it will be sufficient to rehabilitate the criminal (and compensate the victim sufficiently), and an additional amount of punishment is tacked on merely to serve as a more effective deterrent, then that addition seems to lack justification with respect to treating the criminal fairly.

To illustrate this, consider the following: if the criminal commits the crime, they are in one of two possible epistemic states — either they knew about the punishment that would follow from committing the crime beforehand, or they didn’t.  If they didn’t know, then the deterrence addition of the punishment never had the opportunity to perform its intended function on the potential criminal, in which case the criminal would be given a harsher sentence than is necessary to rehabilitate them (and to compensate the victim), which should be the sole purpose of punishing them in the first place (to “right” a “wrong” and to minimize behavioral recurrences).  How could this be justified as fair and just treatment of the criminal?

And then, on the other hand, if they did know the degree of punishment that would follow from committing such a crime, but they committed the crime anyway, then the deterrence addition of the punishment failed to perform its intended function even though it had the opportunity to do so.  This would mean that the criminal is once again given a punishment that is harsher than what is needed to rehabilitate them (and to compensate the victim).

Now one could argue in the latter case that there are other types of justification to ground the harsher deterrence addition of the punishment.  For example, one could argue that the criminal knew beforehand what the consequences would be, so they can’t plead ignorance as in the first example.  But even in the first example, it was the fact that the deterrence addition was never able to perform its function that turned out to be most relevant even if this directly resulted from the criminal lacking some amount of knowledge.  Likewise, in the second case, even with the knowledge at their disposal, the knowledge was useless in actualizing a functional deterrent.  Thus, in both cases the deterrent failed to perform its intended function, and once we acknowledge that, then we can see that the only purposes of punishment that remain are rehabilitation and compensation for the victim.  One could still try and argue that the criminal had a chance to be deterred, but freely chose to commit the crime anyway so they are in some way more deserving of the additional punishment.  But then again, we need to understand that the criminal doesn’t have libertarian free will so it’s not as if they could have done otherwise given those same conditions, barring any random fluctuations.  That doesn’t mean we don’t hold them responsible for their actions — for they are still being justifiably punished for their crime — but it is the degree of punishment that needs to be adjusted given our knowledge that they lack libertarian free will.

Now one could further object and say that the deterrence addition of the punishment isn’t intended solely for the criminal under our consideration but also for other possible future criminals who may be successfully deterred from the crime by such a deterrence addition (even if this criminal was not).  Regardless of this pragmatic justification, that argument still doesn’t justify punishing the criminal in such a way if we are to treat the criminal fairly based on their actions alone.  If we bring other possible future criminals into the justification, then the criminal is being punished not only for their wrongdoing but in excess of it, for hypothetical reasons concerning other hypothetical offenders — which is not at all fair.  So we can grant that some may justify these practices on pragmatic consequentialist grounds, but they aren’t consistent with a Rawlsian conception of justice as fairness, which means they also aren’t consistent with many anti-consequentialist views (such as Kantian deontology, for example) that promote strong conceptions of justice and moral desert in their ethical frameworks.

Conclusion

In summary, I wanted to reiterate that even if our intuitive conceptions of moral desert and justice sometimes align with our rational moral goals, they often lack rational justification and thus often serve to inhibit the implementation of any kind of rational moral theory.  They often produce behaviors that are vengeful, malicious, sadistic, and most often counter-productive to our actual moral goals.  We need to incorporate into our moral framework the fact that libertarian free will does not (and logically cannot) exist, so that we can better strive to treat others fairly even if we still hold people responsible, in some sense, for their actions.

We can still hold people responsible for their actions (and ought to) by replacing the concept of libertarian free will with a conception that is consistent with physics, psychology, and neurology, by proposing, for example, that people’s degree of “free will” with respect to some action is inversely proportional to the degree of conditioning needed to modify that behavior.  That is to say, the level of free will that we have with respect to some kind of behavior is related to our capacity to be programmed and reprogrammed such that the behavior can (at least in principle) be changed.

Our punishment-reward systems then (whether in legal, social, or familial domains), should treat others as responsible agents only insofar as to protect the members of that society (or group) from harm and also to induce behaviors that are conducive to our physical and psychological well being — which is the very purpose of our having any reasonable moral theory (that is sufficiently motivating to follow) in the first place.  Anything that goes above and beyond what is needed to accomplish this is excessive and therefore not morally justified.  Following this logic, we should see that many types of punishment including, for example, the death penalty, are entirely unjustified in terms of our moral goals and the strategies of punishment that we should implement to accomplish those goals.  As the saying goes, an eye for an eye makes the whole world blind, and thus barbaric practices such as inflicting pain or suffering (or death sentences) simply to satisfy some intuitions need to be abolished and replaced with an enlightened system that relies on rational justifications rather than intuition.  Only then can we put the primitive, inhumane moral systems of the past to rest once and for all.

We need to work with our psychology (not only the common trends among most human beings but also our individual idiosyncrasies) and thus work under the premise of our varying degrees of autonomy and behavioral plasticity.  Only then can we maximize our chances and optimize our efforts in attaining fulfilling lives for as many people as possible living in a society.  It is our intuitions (products of evolution and culture) that we must be careful of, as they can lead us astray (and often have throughout history) to commit various moral atrocities.  All we can do is try to overcome these moral handicaps as best we can through reason and rationality, but we have to acknowledge that these problems exist before we can face them head on and subsequently engineer the right kinds of societal changes to successfully reach our moral goals.

The Brain as a Prediction Machine


Over the last year, I’ve been reading a lot about Karl Friston and Andy Clark’s work on the concept of perception and action being mediated by a neurological schema centered on “predictive coding”, what Friston calls “active inference”, the “free energy principle”, and Bayesian inference in general as it applies to neuroscientific models of perception, attention, and action.  Here are a few links (Friston here; Clark here, here, and here) to some of their work; it is worth reading for those interested in neural modeling, information theory, and learning more about meta-theories pertaining to how the brain integrates and processes information.
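To make the Bayesian backbone of this framework concrete, here is a minimal sketch of my own (not code from Friston or Clark; the function name `fuse` is purely illustrative) of the textbook step at the heart of predictive coding: combining a Gaussian prior belief with a Gaussian sensory observation by weighting each by its precision (inverse variance).

```python
# Precision-weighted fusion of a Gaussian prior with a Gaussian observation.
# A generic textbook Bayesian step; `fuse` is an illustrative name, not an API.

def fuse(mu_prior, var_prior, mu_obs, var_obs):
    """Return the posterior mean and variance after one observation."""
    precision = 1.0 / var_prior + 1.0 / var_obs            # precisions add
    mean = (mu_prior / var_prior + mu_obs / var_obs) / precision
    return mean, 1.0 / precision

# With equal precisions, the posterior mean lands halfway between the two
# estimates and the posterior variance is halved:
posterior_mean, posterior_var = fuse(0.0, 1.0, 2.0, 1.0)   # → (1.0, 0.5)
```

The more precise (less noisy) source dominates the posterior, which is roughly the sense in which attention is sometimes modeled as precision-weighting in this literature.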

I find it fascinating how this newer research and these concepts relate to and help to bring together some content from several of my previous blog posts, in particular, those that mention the concept of hierarchical neurological hardware and those that mention my own definition of knowledge “as recognized causal patterns that allow us to make successful predictions.”  For those that may be interested, here’s a list of posts I’ve made over the last few years that I think contain some relevant content (in chronological order).

The ideas formulated by Friston and expanded on by Clark center around the brain being (in large part) a prediction-generating machine.  This fits with my own conclusions about what the brain seems to be doing when it’s acquiring knowledge over time (however limited my reading is on the subject).  Here’s an image of the basic predictive processing schema:


The basic Predictive Processing schema (adapted from Lupyan and Clark (2014))

One key element in Friston and Clark’s work (among the work of some others) is the amalgamation of perception and action.  In this framework, perception itself is simply the result of the brain’s highest-level predictions of incoming sensory data.  But also important in this framework is that prediction-error minimization is accomplished through embodiment itself.  That is to say, their models posit that the brain not only tries to reduce prediction errors by updating its predictive models based on the actual incoming sensory information (with only the error feeding forward to update the models, similar to data-compression schemes), but that active inference also involves the minimization of prediction error through the use of motor outputs.  This could be taken to mean that motor outputs themselves are, in a sense, caused by the brain trying to reduce prediction errors pertaining to predicted sensory input — specifically sensory input that we would say stems from our desires and goals (e.g. the desire to satisfy hunger, commuting to work, opening the car door, etc.).
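The two routes to error minimization described above can be caricatured in a few lines of code.  This is a deliberately toy sketch of my own (not Friston’s free-energy formalism, and the function names are mine): `perceive` cancels prediction error by revising the internal model, while `act` holds the prediction fixed and changes the world until the input matches it.

```python
# Two toy routes to minimizing the same prediction error (illustrative only).

def perceive(sensations, lr=0.1):
    """Perceptual inference: revise an internal prediction to fit the input."""
    belief = 0.0
    for s in sensations:
        error = s - belief            # prediction error (the signal fed forward)
        belief += lr * error          # update the model to cancel the error
    return belief

def act(prediction, world=0.0, gain=0.1, steps=200):
    """Active inference: keep the prediction fixed and act on the world."""
    for _ in range(steps):
        error = prediction - world    # the same error term as above
        world += gain * error         # motor output changes the input instead
    return world
```

Both loops drive the same error toward zero; which route the brain “chooses” is, on this view, largely a matter of which is cheaper in the moment.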

To give a simple example of this model in action, let’s consider an apple resting on a table in front of me.  If I see the apple in front of me and I have a desire to grab it, my brain would not only predict what that apple looks like and how it is perceived over time (and how my arm looks while reaching for it), but it would also predict what it should feel like to reach for the apple.  So if I reach for it based on the somatosensory prediction and there is some error in that prediction, corroborated by my visual cortex observing my arm moving in some wrong direction, the brain would respond by updating the models that predict what reaching should feel like, so that my arm starts moving in the proper direction.  This prediction-error minimization is then fine-tuned as I get closer to the apple and can finally grab it.

This embodiment ingrained in the predictive processing models of Friston and Clark is well exemplified by the so-called “Outfielder’s Problem”, in which an outfielder is trying to catch a fly ball.  Outfielders do this remarkably effectively; yet if we ask an outfielder to merely stand still, watch a batted ball, and predict where it will land, their accuracy is generally pretty bad.  So when we think about what strategy the brain takes to accomplish this while moving the body quickly, we begin to see the relevance of active inference and embodiment in the brain’s prediction schema.  The outfielder’s brain employs a brilliant strategy called “optical acceleration cancellation” (OAC): the well-trained outfielder watches the fly ball and moves his or her body so as to cancel out any optical acceleration observed during the ball’s flight.  If they do this, they will end up exactly where the ball is going to land, and are then able to catch it successfully.
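Why OAC works can be shown with a small, drag-free 1-D sketch (my own construction, not from the post): for a projectile, any fielder trajectory that keeps the tangent of the gaze elevation angle growing at a constant rate k ends up exactly at the landing point when the ball comes down, whatever the value of k.

```python
def ball(t, vx, vy, g=9.81):
    """Drag-free projectile: horizontal and vertical position at time t."""
    return vx * t, vy * t - 0.5 * g * t * t

def oac_position(t, vx, vy, k, g=9.81):
    """Fielder position p(t) that keeps tan(elevation) = k*t, i.e. zero
    optical acceleration; solving tan(theta) = y / (p - x) = k*t for p."""
    x, y = ball(t, vx, vy, g)
    return x + (vy - 0.5 * g * t) / k

vx, vy, g = 12.0, 20.0, 9.81
T = 2 * vy / g              # time of flight
landing = vx * T            # where the ball actually lands
for k in (0.5, 1.0, 2.0):   # any constant optical rate will do
    assert abs(oac_position(T, vx, vy, k, g) - landing) < 1e-6
```

Note that the strategy never computes the landing point at all; the fielder’s movement is simply enlisted to cancel an error term, which is exactly the embodied flavor of prediction-error minimization at issue here.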

We can imagine fine-grained examples of this active inference during everyday tasks.  I may simply be looking at a picture on my living room wall, and while my brain is predicting how it will look over the span of a few seconds, my head may slightly change its tilt or direction, or my eyes may slowly move a fraction of a degree this way or that, however imperceptibly to me.  My brain in this case is predicting what the picture on the wall will look like over time, and this prediction (according to my understanding of Clark) is identical to what we actually perceive.  One key thing to note here is that the predictive models are not simply updated based on the prediction error that is fed forward through the brain’s neurological hierarchies; the brain also gets some “help” from various motor movements that correct for the errors through action, rather than simply freezing all my muscles and updating the model itself (which may in fact be far less economical for the brain to do).

Another area of research that pertains to this framework, including ways of testing its validity, is evolutionary psychology and biology.  If these models are correct, one would surmise that evolution provided our brains with certain hard-wired predictive models, underwriting innate reflexes (such as infant suckling, to give a simple example) that allow us to survive long enough for our learning processes to use those models as starting points and update them with newly acquired information.  There are many different facets to this framework and I look forward to reading more about Friston and Clark’s work over the next few years.  I have a feeling that they have hit on something big, something that will help to answer a lot of questions about embodied cognition, perception, and even consciousness itself.

I encourage you to check out the links I provided pertaining to Friston and Clark’s work, to get a taste of the brilliant ideas they’ve been working on.

The Illusion of Persistent Identity & the Role of Information in Identity


After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  And even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since they are likely going to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient such that a person at one point in time and a person at another point in time can be considered the same person — with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is supposed to be some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place and even if it did exist, it also assumes that the criterion is something that would be determinable in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity and a determinable criterion for it.  At best, I think if one finds any kind of numerical identity for persons that persist over time, it is not going to be compatible with our intuitions nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit’s views in the sense that, regardless of what one finds the criteria for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke’s view — that psychological continuity (via memory) is the criterion for personal identity — was shown to be based on circular and illogical arguments (per Butler, Reid and others), I nevertheless applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate pertaining to any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it only exists for a brief moment in one specific spatio-temporal slice; then, as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person possessing roughly the same configuration of matter in body and brain as the previous person.  Since neighboring identities have an overlap in accessible memory (autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior), we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories sufficiently similar to those accessible to the previous or successive identity.  And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person and some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatio-temporal identities” that I think exists, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it).  There is a continuity of memory and behavior (even though both change over time, in terms of the number of memories as well as their accuracy), and this continuity allows for a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, by referencing those different identities under one label, physically attached to or instantiated by something localized, we allow that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I think fits this concept well.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between them all.  The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations transpiring eventually leads to what we call a new species.  It’s difficult to define exactly when that speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I could think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it occurred.

Biologists have tried to solve their species problem by coming up with various criteria, for taxonomic purposes at the very least, but what they’ve wound up with at this point is several different criteria for defining a species, each effective for different purposes (e.g. the biological-species concept, the morpho-species concept, the phylogenetic-species concept, etc.), and without any single “correct” answer since they are all situationally more or less useful.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think that philosophers need to focus more on the importance of the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we take into account some of the concerns that James DiGiovanna has raised about the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework since they will be based more on the information content of the person’s identity without putting as much importance on numerical identity nor as much importance on our intuitions of persisting people (since they will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans that may be angry that they couldn’t themselves avoid punishment in this way, due to having a much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still included information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are now gone via the reprogramming then I’d say that the AI is not guilty of the crime because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime and so “it” would not have committed the crime since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime — though not numerically guilty, but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts embedded in the concept of guilt).  If twelve duplicates of this information were instantiated in new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means that the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.

We can analogize this with another thought experiment involving human beings.  If we imagine a human who has had their memories changed so that they believe they are Charles Manson and have all of Charles Manson’s memories and intentions, then that person should be treated as if they were Charles Manson and thus incarcerated/punished accordingly, to rehabilitate them or protect the other members of society.  This is assuming, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what matters most — not what the person’s actual previous actions were — because the “previous person” was someone else, due to that gross change in information.

Conscious Realism & The Interface Theory of Perception

A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth”, or closer to the objective reality that exists independent of our minds, simply because perceptions that are more accurate seem more likely to lead to survival than perceptions that are not.  As an example, if I were to perceive lions as inert objects like trees, I would be more likely to be selected against (by being eaten by a lion) than someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those with perceptions closer to reality are never selected for nearly as much as those with perceptions tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is up against perceptual models tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that, given some specific level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method for perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Let’s imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat a particular sized beetle or else it has a relatively high probability of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now let’s imagine two possible types of evolved perception: it could have really accurate perceptions about the various sizes of beetles that it encounters so it can distinguish many different sizes from one another (and then choose the proper size range to eat), or it could evolve less accurate perceptions such that all beetles that are either too small or too large appear as indistinguishable from one another (maybe all the wrong-sized beetles whether too large or too small look like indistinguishable red-colored blobs) and perhaps all the beetles that are in the ideal size range for eating appear as green-colored blobs (that are again, indistinguishable from one another).  So the only discrimination in this latter case of perception is between red and green colored blobs.

Both types of perception would solve the problem of which beetles to eat or avoid, but the latter type (even if much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process much less information about the environment and to ignore relatively useless information (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what matters most for distinguishing the edible beetles from the poisonous ones.  Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we expect to see generically in the world.  At best we may see some overlap between the two, but since there need not be any, truth will tend to go extinct.
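
The beetle example can be sketched as a toy Monte Carlo simulation (the size range, payoffs, and two-category perceptual budget below are invented purely for illustration; Hoffman’s actual results come from evolutionary game theory, not this simplified setup):

```python
import random

random.seed(0)

EDIBLE = (0.4, 0.6)            # hypothetical "right-sized" beetle range
REWARD, POISON = 1.0, -1.0     # payoff for eating an edible vs. poisonous beetle

def payoff(size):
    return REWARD if EDIBLE[0] <= size <= EDIBLE[1] else POISON

def mean_payoff(lo, hi, n=20_000):
    # average payoff of eating uniformly from the size interval [lo, hi)
    return sum(payoff(random.uniform(lo, hi)) for _ in range(n)) / n

# Truth-tuned perception: two categories ordered by SIZE (small vs. large),
# eating from a category only if its average payoff is positive.
truth_cats = [(0.0, 0.5), (0.5, 1.0)]
truth_eats = {cat: mean_payoff(*cat) > 0 for cat in truth_cats}

def truth_agent(size):
    for cat in truth_cats:
        if cat[0] <= size < cat[1]:
            return truth_eats[cat]
    return False

# Fitness-tuned perception: two categories ordered by PAYOFF
# ("green" = edible range, "red" = everything else), discarding
# all other information about size.
def fitness_agent(size):
    return EDIBLE[0] <= size <= EDIBLE[1]

def lifetime_payoff(agent, n=50_000):
    total = 0.0
    for _ in range(n):
        size = random.random()          # beetle sizes uniform on [0, 1)
        if agent(size):
            total += payoff(size)
    return total

print(lifetime_payoff(truth_agent))     # 0.0: neither size category is safe to eat from
print(lifetime_payoff(fitness_agent))   # ≈ 10,000: only edible beetles are eaten
```

With the same two-category budget, the size-ordered (“truth”) agent finds that both of its categories mix edible and poisonous beetles and so eats nothing, while the payoff-ordered (“fitness”) agent eats every edible beetle; a selection process over these strategies would therefore favor the fitness-tuned perception.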

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks because trying to do so in the most “truthful” way — by literally manually manipulating every diode, resistor, and transistor to accomplish the same task — would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with less resources, etc.
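
The analogy can be made concrete with a toy sketch (the class, method names, and disk layout here are all invented for illustration): the interface reduces “drag the folder to the trash” to one simple action, while the raw storage manipulation stays hidden underneath.

```python
# A toy "desktop" whose interface hides the raw storage beneath it.
class Desktop:
    BLOCK = 16  # bytes per file slot in the hidden storage

    def __init__(self):
        self._disk = bytearray(160)   # stands in for the hidden "diodes and transistors"
        self._files = {}              # icon name -> block index

    def save(self, name, data):
        # the user just "saves a file"; the byte-level bookkeeping is hidden
        idx = len(self._files) % (len(self._disk) // self.BLOCK)
        start = idx * self.BLOCK
        self._disk[start:start + self.BLOCK] = data[:self.BLOCK].ljust(self.BLOCK, b"\x00")
        self._files[name] = idx

    def drag_to_trash(self, name):
        # at the interface level the icon simply disappears;
        # underneath, the file's blocks are overwritten with zeros
        idx = self._files.pop(name)
        start = idx * self.BLOCK
        self._disk[start:start + self.BLOCK] = bytes(self.BLOCK)

d = Desktop()
d.save("manuscript", b"months of work")
d.drag_to_trash("manuscript")   # one easy action at the interface level...
# ...with real consequences underneath: the bytes really are gone
assert b"months of work" not in bytes(d._disk)
```

The user of `Desktop` never touches `_disk` directly, which is precisely what makes the interface efficient; yet, as the assertion shows, the hidden level is where the consequential change actually happens.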

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, obviously we should still take it seriously, because moving that folder to the trash bin can have direct implications on our lives, by potentially destroying months worth of valuable work on a manuscript that is contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience even if it is for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point with this analogy is the fact that if our knowledge was confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer”, because all that information that we don’t need to know about that underlying reality is hidden from us.  The same would apply to our perception, where it would be epistemically isolated from the underlying objective reality that exists.  I want to add to this point that even though it appears that we have found the underlying guts of our consciousness, i.e., the findings in neuroscience, it would be mistaken to think that this approach will conclusively answer the mind-body problem because the interface that we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given the plausibility of this theory and the support it has received.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness and one that can’t be solved.  It could be that we simply are stuck in the desktop interface and there’s no way to find out about the underlying reality that gives rise to that interface.  All we can do is maximize our knowledge of the interface itself, and that would be our epistemic boundary.

Predictions of the Theory

Now if this was just a fancy idea put forward by Hoffman, that would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, he mentions the jewel beetle as a case in point.  This beetle has a perceptual category, desirable females, which works well in its niche, and it uses it to choose larger females because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t do this.  Instead, it has a category based on “bigger is better”, and although this bestows high-fitness behavior on the male beetle in its evolutionary niche, if it comes into contact with a “stubbie” beer bottle it falls into an infinite loop, drawn to this supernormal stimulus because it is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead takes an “informational shortcut”.  The evidence of supernormal stimuli found in many species further supports the theory and counts against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

This last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory as proved by Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception) and it also provides answers to common objections put forward by other scientists and philosophers on this theory.

Transcendental Argument For God’s Existence: A Critique

Theist apologists and theologians have presented many arguments for the existence of God throughout history, including the Ontological Argument, the Cosmological Argument, the Fine-Tuning Argument, the Argument from Morality, and many others — all of which have been refuted with various counter-arguments.  I’ve written about a few of these arguments in the past (1, 2, 3), but one that I haven’t yet touched on is the Transcendental Argument for God (or simply TAG).  Not long ago I heard the Christian apologist Matt Slick debating the well-known atheist Matt Dillahunty on this topic, and I then decided to look deeper into the argument as Slick presents it on his website.  I found a number of problems with his argument, which I’ll iterate in this post.

Slick’s basic argument goes as follows:

  1. The Laws of Logic exist.
    1. Law of Identity: Something (A) is what it is and is not what it is not (i.e. A is A and A is not not-A).
    2. Law of Non-contradiction: A cannot be both A and not-A, or in other words, something cannot be both true and false at the same time.
    3. Law of the Excluded Middle: Something must either be A or not-A without a middle ground, or in other words, something must be either true or false without a middle ground.
  2. The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.
  3. They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.
  4. The Laws of Logic are not the product of human minds because human minds are different — not absolute.
  5. But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.
  6. Furthermore, if there are only two options to account for something, i.e., God and no God, and one of them is negated, then by default the other position is validated.
  7. Therefore, part of the argument is that the atheist position cannot account for the existence of The Laws of Logic from its worldview.
  8. Therefore God exists.
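
Setting aside the question of their origin for a moment, the three laws in premise 1 can at least be stated and mechanically verified as tautologies of classical two-valued propositional logic; a minimal sketch:

```python
# The three laws from premise 1, checked as tautologies of classical
# two-valued logic (an illustration only, not a claim about their
# metaphysical status).

def identity(a):
    return a == a                   # A is A

def non_contradiction(a):
    return not (a and (not a))      # not (A and not-A)

def excluded_middle(a):
    return a or (not a)             # A or not-A

laws = [identity, non_contradiction, excluded_middle]

# every law holds under every possible truth assignment
assert all(law(a) for law in laws for a in (True, False))
print("all three laws hold for every classical truth assignment")
```

Note that the check itself presupposes the two-valued semantics it is verifying (the law of the excluded middle, for instance, is not a theorem of intuitionistic logic), which foreshadows a point made later in this post: establishing the truth value of the LOL already requires presupposing them.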

Concepts are Dependent on and the Product of Physical Brains

Let’s begin with number 2, 3, and 4 from above:

The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.  They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.  The Laws of Logic are not the product of human minds because human minds are different — not absolute.

Now I’d like to first mention that Matt Dillahunty actually rejected the first part of Slick’s premise here, as Dillahunty explained that while logic (the concept, our application of it, etc.) may in fact be conceptual in nature, the logical absolutes themselves (i.e. the laws of logic) which logic is based on are in fact neither conceptual nor physical.  My understanding of what Dillahunty was getting at here is that he was basically saying that just as the concept of an apple points to or refers to something real (i.e. a real apple) which is not equivalent to the concept of an apple, so also does the concept of the logical absolutes refer to something that is not the same as the concept itself.  However, what it points to, Dillahunty asserted, is something that isn’t physical either.  Therefore, the logical absolutes themselves are neither physical nor conceptual (as a result, Dillahunty later labeled “the essence” of the LOL as transcendent).  When Dillahunty was pressed by Slick to answer the question, “then what caused the LOL to exist?”, Dillahunty responded by saying that nothing caused them (or we have no reason to believe so) because they are transcendent and are thus not a product of anything physical nor conceptual.

If this is truly the case, then Dillahunty’s point here does undermine the validity of the logical structure of Slick’s argument, because Slick would then be beginning his argument by referencing the content and the truth of the logical absolutes themselves, and then later on switching to the concept of the LOL (i.e. their being conceptual in their nature, etc.).  For the purposes of this post, I’m going to simply accept Slick’s dichotomy that the logical absolutes (i.e. the laws of logic) are in fact either physical or conceptual by nature and then I will attempt to refute the argument anyway.  This way, if “conceptual or physical” is actually a true dichotomy (i.e. if there are no other options), despite the fact that Slick hasn’t proven this to be the case, his argument will be undermined anyway.  If Dillahunty is correct and “conceptual or physical” isn’t a true dichotomy, then even if my refutation here fails, Slick’s argument will still be logically invalid based on the points Dillahunty raised.

I will say, however, that I don’t think I agree with Dillahunty’s point that the LOL are neither physical nor conceptual, for a few reasons (not least of all because I am a physicalist).  My reasons for this will become clearer throughout the rest of this post, but in a nutshell, I hold that concepts are ultimately based on, and thus are a subset of, the physical, and the LOL would be no exception.  Beyond the issue of concepts, I believe that the LOL are physical in their nature for a number of other reasons as well, which I’ll get to in a moment.

So why does Slick think that the LOL can’t be dependent on space?  Slick mentions in the expanded form of his argument that:

They do not stop being true dependent on location. If we travel a million light years in a direction, The Laws of Logic are still true.

Sure, the LOL don’t depend on a specific location in space, but that doesn’t mean that they aren’t dependent on space in general.  I would actually argue that concepts are abstractions that are dependent on the brains that create them, based on those brains having recognized properties of space, time, and matter/energy.  That is to say that any concept such as the number 3 or the concept of redness is in fact dependent on a brain having recognized, for example, a quantity of discrete objects (which when generalized leads to the concept of numbers) or having recognized the color red in various objects (which when generalized leads to the concept of red or redness).  Since a quantity of discrete objects or a color must be located in some kind of space, even if only as three points on a one-dimensional line (in the case of the number 3) or a two-dimensional red-colored plane (in the case of redness), we can see that these concepts are ultimately dependent on space and matter/energy (of some kind).  Even if we say that concepts such as the color red or the number 3 do not literally exist in actual space nor are made of actual matter, they still have to exist in a mental space as mental objects; just as our conception of an apple floating in empty space doesn’t actually lie in space and isn’t made of matter, it nevertheless exists as a mental/perceptual representation of real space and real matter/energy that has been experienced through interaction with the physical universe.

Slick also mentions in the expanded form of his argument that the LOL can’t be dependent on time because:

They do not stop being true dependent on time. If we travel a billion years in the future or past, The Laws of Logic are still true.

Once again, sure, the LOL do not depend on a specific time, but rather they are dependent on time in general, because minds depend on time in order to have any experience of said concepts at all.  So not only are concepts only able to be formed by a brain that has created abstractions from physically interacting with space, matter, and energy within time (so far as we know), but the mind/concept-generating brain itself is also made of actual matter/energy, lying in real space, and operating/functioning within time.  So concepts are in fact not only dependent on space, time, and matter/energy (so far as we know), but are in fact also the product of space, time, and matter/energy, since it is only certain configurations of such that in fact produce a brain and the mind that results from said brain.  Thus, if the LOL are conceptual, then they are ultimately the product of and dependent on the physical.

Can Truth Exist Without Brains and a Universe?  Can Identities Exist Without a Universe?  I Don’t Think So…

Since Slick himself claims that The Laws of Logic (LOL) are conceptual by nature, that would mean they are in fact also dependent on and the product of the physical universe, and more specifically dependent on and the product of the human mind (or natural minds in general, which are produced by a physical brain).  Slick goes on to say that the LOL can’t be dependent on the physical universe (which contains the brains needed to think or produce those concepts) because “…if the physical universe were to disappear, The Laws of Logic would still be true.”  It seems to me that without a physical universe there wouldn’t be any “somethings” with any identities at all, and so the Law of Identity, which is the root of the other LOL, wouldn’t apply to anything, because there wouldn’t be anything and thus no existing identities.  Therefore, to say that the LOL are true sans a physical universe would be meaningless, because identities themselves wouldn’t exist without a physical universe.  One might argue that abstract identities would still exist (like numbers or properties), but abstractions are products of a mind and thus need a brain to exist (so far as we know).  If one argued that supernatural identities would still exist without a physical universe, this would be nothing more than an ad hoc metaphysical assertion about the existence of the supernatural, one that carries a large burden of proof which can’t be (or at least hasn’t been) met.  Beyond that, if at least one of the supernatural identities were claimed to be God, this would also be begging the question.  This leads me to believe that the LOL are in fact a property of the physical universe (and appear to be a necessary one at that).

And if truth is itself just another concept, it too is dependent on minds, and by extension on the physical brains that produce those minds (as mentioned earlier).  In fact, the LOL seem to be required for any rational thought at all (hence why they are often referred to as the Laws of Thought), including the determination of any truth value at all.  So our ability to establish the truth value of the LOL (or the truth of anything, for that matter) is contingent on our presupposing the LOL in the first place.  So if there were no minds to presuppose the very LOL that are needed to establish their truth value, could one say that they would be true anyway?  Wouldn’t this be analogous to saying that 1 + 1 = 2 would still be true even if numbers and addition (constructs of the mind) didn’t exist?  I’m just not sure that truth can exist in the absence of any minds and any physical universe.  I think that just as physical laws are descriptions of how the universe changes over time, these Laws of Thought are descriptions that underlie what our rational thought is based on, and thus how we arrive at the concept of truth at all.  If rational thought ceases to exist in the absence of a physical universe (since there are no longer any brains/minds), then the descriptions that underlie that rational thought (as well as their truth value) also cease to exist.

Can Two Different Things Have Something Fundamental in Common?

Slick then erroneously claims that the LOL can’t be the product of human minds because human minds are different and thus aren’t absolute.  He apparently doesn’t realize that even though human minds differ from one another in many ways, they also have a lot fundamentally in common, such as how they process information and how they generally form concepts about the reality they interact with.  Even though our minds differ from one another in a number of ways, we nevertheless only have evidence to support the claim that human brains produce concepts and process information in the same general way at the most fundamental neurological level.  For example, the evidence suggests that the concept of the color red is based on the neurological processing of a certain range of wavelengths of electromagnetic radiation that have been absorbed by the eye’s retinal cells at some point in the past.  However, in Slick’s defense, I’ll admit that it could be the case that what I experience as the color red may be what you would call the color blue, and this would in fact suggest that concepts we think we mutually understand are actually understood or experienced differently in a way that we can’t currently verify (since I can’t get into your mind and compare it to my own experience, and vice versa).

Nevertheless, just because our minds may experience color differently from one another or just because we may differ slightly in terms of what range of shades/tints of color we’d like to label as red, this does not mean that our brains/minds (or natural minds in general) are not responsible for producing the concept of red, nor does it mean that we don’t all produce that concept in the same general way.  The number 3 is perhaps a better example of a concept that is actually shared by humans in an absolute sense, because it is a concept that isn’t dependent on specific qualia (like the color red is).  The concept of the number 3 has a universal meaning in the human mind since it is derived from the generalization of a quantity of three discrete objects (which is independent of how any three specific objects are experienced in terms of their respective qualia).

Human Brains Have an Absolute Fundamental Neurology Which Encompasses the LOL

So I see no reason to believe that human minds differ at all in their conception of the LOL, especially if this is the foundation for rational thought (and thus any coherent concept formed by our brains).  In fact, I also believe that the evidence within the neurosciences suggests that the way the brain recognizes different patterns and thus forms different/unique concepts and such is dependent on the fact that the brain uses a hardware configuration schema that encompasses the logical absolutes.  In a previous post, my contention was that:

Additionally, if the brain’s wiring has evolved in order to see dimensions of difference in the world (unique sensory/perceptual patterns that is, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware or unique use of said hardware to perceive such a pattern and distinguish it from other patterns.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists and all other patterns we perceive have or are given their own unique hardware-based identity (i.e. “not-A” a.k.a. B, C, D, etc.), then the brain would effectively be wired such that pattern “A” = pattern “A” (law of identity), any other pattern which we can call “not-A” does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern even if brand new, which we can also call “not-A” (law of the excluded middle).  So by the brain giving a pattern a physical identity (i.e. a specific type of hardware configuration in our brain that when activated, represents a detection of one specific pattern), our brains effectively produce the logical absolutes by nature of the brain’s innate wiring strategy which it uses to distinguish one pattern from another.  So although it may be true that there can’t be any patterns stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity to distinguish it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.

So I believe that our brain produces and distinguishes these different “object” identities by having a neurological scheme that represents each perceived identity (each object) with a unique set of neurons that function in a unique way and thus have their own unique identity.  Therefore, it would seem that the absolute nature of the LOL can easily be explained by how the brain naturally encompasses them through its fundamental hardware schema.  In other words, my contention is that our brain uses this wiring schema because it is the only way it can be wired to make any discriminations at all and validly distinguish one identity from another in perception and thought, and this ability to discriminate various aspects of reality would be naturally selected for based on the brain accurately modeling properties of the universe (in this case, the existence of different identities/objects/causal-interactions) as it interacts with that environment via our sensory organs.  This would imply that the existence of discrete identities is a property of the physical universe, and that the LOL are simply a description of what identities are.  This would explain why we see the LOL as absolute and fundamental and why we presuppose them.  Our brains simply encompass them in the most fundamental aspect of our neurology, as it is a fundamental physical property of the universe that our brains model.
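The wiring strategy described above can be illustrated with a toy sketch (a deliberately simplified analogy, not a model of actual neurology; the `PatternStore` class and all names here are hypothetical): if each learned pattern is assigned its own unique, never-reused identity, then the three logical absolutes fall out of the bookkeeping automatically.

```python
# Toy analogy only: each learned pattern gets a unique, dedicated identity,
# standing in for a dedicated neural hardware configuration per pattern.
class PatternStore:
    def __init__(self):
        self._ids = {}  # pattern -> unique identity

    def identify(self, pattern):
        """Assign (or retrieve) a unique, never-reused identity for a pattern."""
        if pattern not in self._ids:
            self._ids[pattern] = len(self._ids)  # fresh ID for a new pattern
        return self._ids[pattern]

store = PatternStore()
a = store.identify("red circle")
b = store.identify("blue square")

# Law of identity: the same pattern always maps to the same identity.
assert store.identify("red circle") == a
# Law of non-contradiction: distinct patterns never share an identity.
assert a != b
# Law of the excluded middle: any pattern's identity either is a's or is not,
# including a brand-new pattern never seen before.
for p in ["red circle", "blue square", "green triangle"]:
    assert store.identify(p) == a or store.identify(p) != a
```

The point of the sketch is that none of the three laws needs to be separately stipulated: they are implicit consequences of the storage scheme itself, which is the sense in which the logical absolutes could be an innate property of how patterns are given distinct identities.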

I believe this is one of the reasons that Dillahunty and others hold that the LOL are transcendent (neither physical nor conceptual): natural brains/minds are neurologically incapable of imagining a world existing without them.  The problem only occurs because Dillahunty is abstracting a hypothetical non-physical world or mode of existence, yet doesn’t realize that he is unable to remove every physical property from any abstracted world or imagined mode of existence.  In this case, the physical property that he is unable to remove from his hypothetical non-physical world is his own neurological foundation, the very foundation that underlies all concepts (including that of existential identities) and all rational thought.  I may be incorrect about Dillahunty’s position here, but this is what I’ve inferred based on what I’ve heard him say while conversing with Slick about this topic.

Human (Natural) Minds Can’t Account for the LOL, But Disembodied Minds Can?

Slick even goes on to say in point 5 that:

But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.

We can see here that he concedes that the LOL are in fact the product of a mind, only he rejects the possibility that it could be a human mind (and by implication any kind of natural mind).  Rather, he insists that it must be a transcendent mind of some kind, which he calls God.  The problem with this conclusion is that we have no evidence or argument demonstrating that minds can exist without physical brains existing in physical space within time.  To assume so is simply to beg the question.  Thus, he is throwing in an ontological/metaphysical assumption of substance dualism as well as that of disembodied minds, not only claiming that there must exist some kind of supernatural substance, but that this mysterious “non-physical” substance also has the ability to constitute a mind, and somehow do so without any dependence on time (even though mental function and thinking are themselves temporal processes).  He assumes all of this, of course, without providing any explanation of how this mind could work even in principle without being made of any kind of stuff, without being located in any kind of space, and without existing in any kind of time.  As I’ve mentioned elsewhere concerning the ad hoc concept of disembodied minds:

…the only concept of a mind that makes any sense at all is that which involves the properties of causality, time, change, space, and material, because minds result from particular physical processes involving a very complex configuration of physical materials.  That is, minds appear to be necessarily complex in terms of their physical structure (i.e. brains), and so trying to conceive of a mind that doesn’t have any physical parts at all, let alone a complex arrangement of said parts, is simply absurd (let alone a mind that can function without time, change, space, etc.).  At best, we are left with an ad hoc, unintelligible combination of properties without any underlying machinery or mechanism.

In summary, I believe Slick has made several errors in his reasoning.  The most egregious is his unfounded assumption that natural minds aren’t capable of producing an absolute concept such as the LOL simply because natural minds have differences between one another (not realizing that all minds have fundamental commonalities).  Another is his argument’s reliance on the assumption that an ad hoc disembodied mind not only exists (whatever that could possibly mean) but that this mind can somehow account for the LOL in a way that natural minds cannot, which is nothing more than an argument from ignorance, a logical fallacy.  He also insists that the Laws of Logic would be true without any physical universe, not realizing that the truth value of the Laws of Logic can only be determined by presupposing the Laws of Logic in the first place, which is circular, and thus that their truth value can’t be used to prove that they are metaphysically transcendent in any way (even if they actually happen to be).  Lastly, without a physical universe of any kind, I don’t see how identities themselves can exist, and identities seem to be required in order for the LOL to be meaningful at all.

Substance Dualism, Interactionism, & Occam’s Razor

with 13 comments

Recently I got into a discussion with Gordon Hawkes on A Philosopher’s Take about the arguments and objections for substance dualism, that is, the position that there are actually two ontological substances that exist in the world: physical substances and mental (or non-physical) substances.  Here are the links to part 1 and part 2 of that author’s post series (I recommend taking a look at this group blog to see what he and others have written over there on various interesting topics).  Many dualists would liken this supposed second substance to a soul or the like, but I’m not going to delve into that particular detail within this post.  I’d prefer to focus on the basics of the arguments presented rather than those kinds of details.  Before I dive into the topic, I want to mention that Hawkes was by no means making an argument for substance dualism; rather, he was merely pointing out some flaws in common arguments against substance dualism.  With that said, I’m going to get to the heart of the topic, though I will also be providing evidence and arguments against substance dualism.  The primary point raised in the first part of that series was that just because neuroscience is continuously finding correlations between particular physical brain states and particular mental states, this doesn’t mean that these empirical findings show that dualism is necessarily false, since some forms of dualism seem to be entirely compatible with these empirical findings (e.g. interactionist dualism).  So the question ultimately boils down to whether or not mental processes are identical with, or metaphysically supervenient upon, the physical processes of the brain (if so, then substance dualism is necessarily false).

Hawkes talks about how the argument from neuroscience (as it is sometimes referred to) is fallacious because it is based on the mistaken belief that correlation (between mental states and brain states) is equivalent to identity or supervenience of mental and physical states.  Since this isn’t the case, one can’t rationally use the neurological correlation to disprove (all forms of) substance dualism.  While I agree with this, that is, that the argument from neuroscience can’t be used to disprove (all forms of) substance dualism, it is nevertheless strong evidence that a physical foundation exists for the mind, and it also provides evidence against all forms of substance dualism that posit that the mind can exist independently of the physical brain.  At the very least, it shows that the prior probability of minds existing without brains is very low.  This would seem to suggest that any supposed mental substance is necessarily dependent on a physical substance (so disembodied minds would be out of the question for the minimal substance dualist position).  Even more damning for substance dualists, though, is this: if the argument from neuroscience suggests that minds can’t exist without physical brains, then prior to brains evolving in any living organisms within our universe, at some point in the past there weren’t any minds at all.  This in turn would suggest that the second substance posited by dualists isn’t conserved the way the physical substances we know about are conserved (as per the Law of Conservation of Mass and Energy).  Rather, this second substance would presumably have had to come into existence ex nihilo once some subset of the universe’s physical substances took on a particular configuration (i.e. living organisms that eventually evolved a brain complex enough to afford mental experiences/consciousness/properties).  And once all the brains in the universe disappear (a fate guaranteed after the heat death of the universe), this second substance would once again disappear from our universe.

The only way around this (as far as I can tell) is to posit that the supposed mental substance has always existed and/or will always continue to exist, but in an entirely undetectable way, somehow detached from any physical substance (a position that seems hardly defensible given the correlation argument from neuroscience).  Since our prior probabilities for any hypothesis are based on all of our background knowledge, and since the only substance we can be sure exists (the physical) has been shown to consistently abide by conservation laws (within the constraints of general relativity and quantum mechanics), it is more plausible that any other ontological substance would likewise be conserved rather than not.  If we had evidence to the contrary, that would change the overall consequent probability, but without such evidence, we only have data points from one ontological substance, and it appears to follow conservation laws.  For this reason alone, a second substance is less likely to exist at all if it isn’t itself conserved as the physical is.

Beyond that, the argument from neuroscience also provides at least some evidence against interactionism (the idea that the mind and brain can causally interact with each other in both causal directions), and interactionism is something that substance dualists would likely need in order to have any reasonable defense of their position at all.  To see why this is true, one need only recognize that the correlates of consciousness found within neuroscience consist of instances of physical brain activity that are observed prior to the person’s conscious awareness of any experience, intentions, or willed actions produced by said brain activity.  For example, studies have shown that when a person makes a conscious decision to do something (say, to press one of two possible buttons placed in front of them), there are neurological patterns that can be detected prior to their awareness of having made a decision, and these patterns can be used to correctly predict which choice the person will make even before they are aware of having chosen!  I would say that this is definitely evidence against interactionism, because we have yet to find any cases of mental experiences occurring prior to the brain activity correlated with them.  We’ve only found evidence of brain activity preceding mental experiences, never the other way around.  If the mind were made from a different substance, existing independently of the physical brain (even if correlated with it), and able to causally interact with the physical brain, then it seems reasonable to expect that we should be able to detect and confirm instances of mental processes/experiences occurring prior to correlated changes in physical brain states.  Since this hasn’t been found in the plethora of brain studies performed thus far, the prior probability of interactionism being true is exceedingly low.
Additionally, the conservation of mass and energy that we observe in our universe (as well as the laws of physics in general) also challenges the possibility of any means of causal energy transfer between a mind and a brain, in either direction.  The only means of causal interaction we’ve observed thus far in our universe is energy/momentum transfer from one physical particle/system to another.  If a mind is non-physical, then by what means could it interact with a brain at all, or vice versa?

The second part of the post series from Hawkes talked about Occam’s razor and how it’s been applied in arguments against dualism.  Hawkes argues that even though one ontological substance is less complex and otherwise preferred over two substances (when all else is equal), Occam’s razor apparently isn’t applicable in this case because physicalism has been unable to adequately address what we call a mind, mental properties, etc.  My rebuttal to this point is that dualism doesn’t adequately address what we call a mind, mental properties, etc., either.  In fact, it offers no more explanatory power than physicalism does, because nobody has proposed how it could do so.  That is, nobody has yet demonstrated (as far as I know) what any possible mechanisms would be for this new substance to instantiate a mind, how this non-physical substance could possibly interact with the physical brain, and so on.  Rather, it seems to have been posited out of an argument from ignorance and incredulity, which is a logical fallacy.  Since physicalism hasn’t yet provided a satisfactory explanation for what some call the mind and mental properties, dualists suppose that a second ontological substance must exist that does explain or account for it adequately.

Unfortunately, because of the lack of any proposed mechanism for the second substance to adequately account for the phenomena, one could simply replace the term “second substance” with “magic” and be on the same epistemic footing.  It is therefore an argument from ignorance to presume that a second substance exists for the sole reason that nobody has yet demonstrated how the first substance can fully explain what we call mind and mental phenomena.  Just as we’ve seen throughout history, where an unknown phenomenon is attributed to magic or the supernatural and later found to be accounted for by a physical explanation, the prior probability that this will also be the case for the phenomena of the mind and mental properties is extraordinarily high.  As a result, I think that Occam’s razor is applicable in this case, because I believe it is always applicable when an argument from ignorance is compared to an (even incomplete) argument from evidence.  Since physicalism accounts for many aspects of mental phenomena (such as the neuroscientific correlations), dualism needs to be supported by at least some proposed (and falsifiable) mechanism in order to nullify the application of Occam’s razor.

Those are my thoughts on the topic for now.  I did think that Hawkes made some valid points in his post series, such as the fact that correlation doesn’t equal identity or supervenience, and also the fact that Occam’s razor is only applicable under particular circumstances (such as when both hypotheses explain the phenomena equally well or equally poorly, with one containing a more superfluous ontology).  However, I think that overall the arguments and evidence against substance dualism are strong enough to eliminate any reasonable justification for supposing that dualism is true (not that Hawkes was defending dualism, as he made clear at the beginning of his posts), and I also believe that both physicalism and dualism explain the phenomena equally well, such that Occam’s razor is applicable (since dualism doesn’t seem to add any explanatory power to that already provided by physicalism).  So even though correlation doesn’t equal identity or supervenience, the arguments and evidence from neuroscience and physics challenge the possibility of any interactionism between the physical and the supposed non-physical substance, and they challenge the existence of the second substance in general (due to its apparent lack of conservation over time, among other reasons).