The Open Mind

Cogito Ergo Sum


Atheism, Morality, and Various Thoughts of the Day…


I’m sick of anti-intellectuals and others assuming that all atheists are moral nihilists, moral relativists, (post)modernists, proponents of scientism, etc.  That just ain’t the case.  Some of us respect philosophy and understand full well that even science requires an epistemological, metaphysical, and ethical foundation in order to work at all and to ground all of its methodologies.  Some atheists are even keen on some form of panpsychism (like Chalmers’ or Strawson’s views).

Some of us even subscribe to a naturalistic worldview that holds onto meaning, despite the logical impossibility of libertarian free will (hint: it has to do with living a moral life, which means living a fulfilling life and maximizing one’s satisfaction through a rational assessment of all the available information — which entails Bayesian reasoning — including the information pertaining to one’s own subjective experience of fulfillment and sustainable happiness). Some of us atheists/philosophical naturalists/what-have-you are moral realists as well and therefore reject relativism, believing that objective moral facts DO in fact exist (and that science can therefore find them), even if many of those facts are entailed within a situational ethical framework. Some of us believe that at least some moral facts are universal, but this shouldn’t be confused with moral absolutism, since universalism and absolutism are merely independent subsets of realism. I find absolutism to be intellectually and morally repugnant and epistemologically unjustifiable.

Also, a note for any theists out there: when comparing arguments for and against the existence of a God or gods (and the “Divine Command Theory” that accompanies said belief), keep in mind that an atheist need only hold a minimalist position on the issue (soft atheism), and therefore the entire burden of proof lies on the theist to support their extraordinary claim(s) with an extraordinary amount of evidentiary weight. While I’m willing to justify a personal belief in hard atheism (the claim that “God does not exist”), the soft atheist need only point out that they lack a belief in God because no known proponent of theism has yet met the burden of proof for supporting the extraordinary claim that “God does exist”. As such, any justified moral theory of what one ought to do (above all else), including but certainly not limited to whom one votes for, how we treat one another, and what fundamental rights we should have, must be grounded on claims of fact that have met their burden of proof. Theism has not done this, and the theist can’t simply say “Prove God doesn’t exist”, since this would amount to demanding proof of a null hypothesis, which is not possible (even though a null hypothesis can be proven false). So rather than trying to unjustifiably shift the burden of proof onto the atheist, the theist must satisfy the burden of proof for their positive claim on the existence of a god(s).

A more general goal needed to save our a$$es from self-destruction is for more people to dabble in philosophy. I argue that it should even become a core part of educational curricula (especially education on minimizing logical fallacies/cognitive biases and education on moral psychology) to give us the best chance of living a life that is at least partially examined through internal rational reflection and through discourse with those who are willing to engage with us, and the best chance of surviving the existential crisis that humanity (along with many other species sharing this planet with us) is in. We need more people to be encouraged to justify what they think they ought to do above all else.

Virtual Reality & Its Moral Implications


There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in the users of those games (especially younger users with more impressionable minds).  These claims have been significantly validated by the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, as a result of the accumulated research and meta-analyses showing a strong link between violent video gaming and increased aggression, anti-social behavior, and sharp decreases in moral engagement and empathy.

Thus, the increasingly realistic nature of VR, and the steady increase in the capacities one has at one’s disposal within such a virtual space, are bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world?  Even if their susceptibility to performing those behaviors doesn’t increase, what does such a virtual escapade do to that person’s moral character?  Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some kind of moral relevance or cognitive overlap with them?

On the flip side, what if it were possible to use VR as a therapeutic tool to help treat pedophilia or other behavioral problems?  What if one were able to simulate rape, pedophilia, or otherwise in order to reduce one’s chances of performing those acts in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as a real instance of such a crime, committed against a real human being.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent if not morally obligatory to do what we can to prevent such harms from taking place.  So even though this technology could lead the otherwise healthy user to begin engaging in bad behaviors, it could also be used as an outlet of expression for those already afflicted with such impulses.  Just as VR has been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them to overcome their ills, so too may VR provide a treatment for other types of mental illnesses and aggressive predispositions, such as those related to murder, sexual assault, etc.

Whether VR is used as an outlet for certain behaviors to prevent them from actually happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is to eventually no longer need any VR treatment at all), there are a few paths that could show promising results in decreasing crime.  But, beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition to limit the types of things people could do in these virtual spaces, that may be of some use, but beyond the fact that this kind of prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justified to implement.  If one chose to prohibit simulated rape and pedophilia (for example), but not prohibit murder or other forms of assault and violence, then what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad if not worse anyway).  It seems that instead we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning to see if at least some simulated activities should be prohibited while allowing others to persist unregulated.  On the other hand, if prohibiting this kind of activity is not practical or if it can only be implemented by infringing on certain liberties that we have good reasons to protect, then we need to think about some counter-strategies to either better inform people about these kinds of dangers and/or to make other VR products that help to encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than to help us cultivate moral virtues and other behaviors that are conducive to our well-being and to living a more fulfilled life.  This could mean anything from reducing our prejudices and biases through exposure to various simulated “out-groups”, to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and already have to some degree) to work out various moral dilemmas and our psychological responses to them without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, thus making it incumbent upon us to talk more about these technologies as they continue to develop.

On Moral Desert: Intuition vs Rationality


So what exactly is moral desert?  Well, in a nutshell, it is what someone deserves as a result of their actions, as defined within the framework of some kind of moral theory.  Generally, when people talk about moral desert, it’s couched in terms of punishment and reward, and our intuitions (whether innate or culturally inherited) often produce strong feelings of knowing exactly what kinds of consequences people deserve in response to their actions.  But if we think about our ultimate goals in implementing any reasonable moral theory, we should quickly recognize that our ultimate moral goal is to have ourselves and everybody else simply abide by that moral theory, in order to guide our behavior and the behavior of those around us in ways that are conducive to our well-being.  Ultimately, we want ourselves and others to act in ways that maximize our personal satisfaction — not in a hedonistic sense, but rather in the sense of maximizing our contentment and living a fulfilled life.

If we think about scenarios that seem to merit punishment or reward, it is useful to keep our ultimate moral goal in mind.  I mention this because our feelings of resentment toward those who have wronged us can often lead one to advocate an excessive amount of punishment for the wrongdoer.  Among other factors, vengeance and retribution often become incorporated into our intuitive sense of justice.  Many have argued that retribution itself (justifying “proportionate” punishment by appealing to concepts like moral desert and justice) isn’t a bad thing, even if vengeance — which lacks inherent limits on punishment, involves personal emotions from the victim, and so on — is in fact a bad thing.  While thinking about such a claim, I think it’s imperative that we analyze our reasons for punishing a wrongdoer in the first place and then analyze the concept of moral desert more closely.

Free Will & Its Implications for Moral Desert

Another relevant topic I’ve written about in several previous posts is the concept of free will.  This is an extremely important concept to parse out here, because moral desert is most often intimately tied to the positive claim of our having free will.  That is to say, most concepts of moral desert, whereby it is believed that people deserve punishment and reward for actions that warrant it, fundamentally rely on the premise that people could have chosen to do otherwise but instead chose the path they did out of free choice.  While there are various versions of free will that philosophers have proposed, they all tend to revolve around some concept of autonomous agency.  The folk psychological conception of free will that most people subscribe to is some form of deliberation that is self-caused in some way, thus ruling out randomness or indeterminism as the “cause”, since randomness can’t be authored by the autonomous agent, and also ruling out non-randomness or determinism, since an unbroken chain of antecedent causes can’t be authored by the autonomous agent either.

So as to avoid a long digression, I’m not going to expound upon all the details of free will and the various versions that others have proposed, but will only mention that the version most relevant to moral desert is generally some form of having the ability to have chosen to do otherwise (ignoring randomness).  Notice that because indeterminism and determinism form a logical dichotomy, these are the only two options that can possibly describe the ontological underpinnings of our universe (in terms of causal laws describing how the state of the universe changes over time).  Quantum mechanics is compatible with either option, since its various interpretations — all empirically identical to one another — span both possibilities, but there is no third option available, so quantum mechanics doesn’t buy us any room for this kind of free will either.  Since neither option can produce any form of self-caused or causa sui free will (sometimes referred to as libertarian free will), the intuitive concept of moral desert that relies on said free will is also rendered impossible if not altogether meaningless.  Therefore moral desert can only exist as a coherent concept if it no longer contains within it any assumption that the moral agent had the ability to have chosen to do otherwise (again, ignoring randomness).  So what does this realization imply for our preconceptions of justified punishment, or even justice itself?

At the very least, the concept of moral desert that is involved in these other concepts needs to be reformulated or restricted, given the impossibility and thus the non-existence of libertarian free will.  So if we are to say that people “deserve” anything at all morally speaking (such as a particular punishment), it can only be justified, let alone meaningful, in some other sense, such as a consequentialist goal that the implementation of the “desert” (in this case, the punishment) effectively accomplishes.  Punishing the wrongdoer can no longer be a means of their getting their due, so to speak, but rather needs to be justified by some other purpose such as rehabilitation, future crime deterrence, and/or restitution for the victim (to compensate for physical damages, property loss, etc.).  With respect to this latter factor, restitution, there is plenty of wiggle room for some to argue for punishment on the grounds of it simply making the victim feel better (which I suppose we could call a form of psychological restitution).  People may try to justify some level of punishment on that basis, but vengeance should be avoided at all costs, and one needs to carefully consider what justifications are sufficient (if any) for punishing another with the intention of simply making the victim feel better.

Regarding psychological restitution, it’s useful to bring up the aforementioned concepts of retribution and vengeance, and appreciate the fact that vengeance can easily result in cases where no disinterested party performs the punishment or decides its severity, and instead the victim (or another interested party) is involved with these decisions and processes.  Given the fact that we lack libertarian free will, we can also see how vengeance is not rationally justifiable and therefore why it is important that we take this into account not only in terms of society’s methods of criminal behavioral correction but also in terms of how we behave toward others that we think have committed some wrongdoing.

Deterrence & Fairness of Punishment

As for criminal deterrence, I was thinking about this concept the other day and noticed a possible conundrum concerning its justification (certain forms of deterrence, anyway).  If a particular punishment is agreed upon within some legal system on the grounds that it will be sufficient to rehabilitate the criminal (and compensate the victim sufficiently), and an additional amount of punishment is tacked on merely to serve as a more effective deterrent, it seems that the addition would lack justification with respect to treating the criminal fairly.

To illustrate this, consider the following: if the criminal commits the crime, they are in one of two possible epistemic states — either they knew about the punishment that would follow from committing the crime beforehand, or they didn’t.  If they didn’t know this, then the deterrence addition of the punishment wouldn’t have had the opportunity to perform its intended function on the potential criminal, in which case the criminal would be given a harsher sentence than is necessary to rehabilitate them (and to compensate the victim) which should be the sole purpose of punishing them in the first place (to “right” a “wrong” and to minimize behavioral recurrences).  How could this be justified in terms of what is a fair and just treatment of the criminal?

And then, on the other hand, if they did know the degree of punishment that would follow committing such a crime, but they committed the crime anyway, then the deterrence addition of the punishment failed to perform its intended function even though it had the opportunity to do so.  This would mean that the criminal is once again given a punishment that is harsher than what is needed to rehabilitate them (and to compensate the victim).
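To make the dilemma explicit, here is a minimal sketch in Python (purely illustrative; “functioned” here just means “actually prevented this particular crime”):

```python
def deterrence_addition_functioned(knew_of_penalty: bool) -> bool:
    """Did the extra 'deterrence' portion of the sentence actually deter
    this offender?  We only ask this of someone who committed the crime."""
    if not knew_of_penalty:
        return False  # no opportunity to deter: the offender was unaware of it
    return False      # the opportunity existed, but the crime happened anyway

# For every actual offender, both epistemic states yield the same answer:
for knew in (True, False):
    print(knew, deterrence_addition_functioned(knew))  # both print False
```

Either branch ends the same way, which is the point of the dilemma: for the person actually being sentenced, the deterrence addition never did its job.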

Now one could argue in the latter case that there are other types of justification to ground the harsher deterrence addition of the punishment.  For example, one could argue that the criminal knew beforehand what the consequences would be, so they can’t plead ignorance as in the first example.  But even in the first example, it was the fact that the deterrence addition was never able to perform its function that turned out to be most relevant, even if this directly resulted from the criminal lacking some amount of knowledge.  Likewise, in the second case, even with the knowledge at their disposal, the knowledge was useless in actualizing a functional deterrent.  Thus, in both cases the deterrent failed to perform its intended function, and once we acknowledge that, we can see that the only purposes of punishment that remain are rehabilitation and compensation for the victim.  One could still try to argue that the criminal had a chance to be deterred, but freely chose to commit the crime anyway, and so is in some way more deserving of the additional punishment.  But then again, we need to understand that the criminal doesn’t have libertarian free will, so it’s not as if they could have done otherwise given those same conditions, barring any random fluctuations.  That doesn’t mean we don’t hold them responsible for their actions — for they are still being justifiably punished for their crime — but it is the degree of punishment that needs to be adjusted given our knowledge that they lack libertarian free will.

Now one could further object and say that the deterrence addition of the punishment isn’t intended solely for the criminal under our consideration but also for other possible future criminals that may be successfully deterred from the crime given such a deterrence addition (even if this criminal was not).  Regardless of this pragmatic justification, that argument still doesn’t justify punishing the criminal in such a way, if we are to treat the criminal fairly based on their actions alone.  If we bring other possible future criminals into the justification, then the criminal is being punished not only for their wrongdoing but in excess of it, for hypothetical reasons concerning other hypothetical offenders — which is not at all fair.  So we can grant that some may justify these practices on pragmatic consequentialist grounds, but they aren’t consistent with a Rawlsian conception of justice as fairness, which means they also aren’t consistent with many anti-consequentialist views (such as Kantian deontology) that often promote strong conceptions of justice and moral desert in their ethical frameworks.

Conclusion

In summary, I wanted to reiterate that even if our intuitive conceptions of moral desert and justice sometimes align with our rational moral goals, they often lack rational justification and thus often serve to inhibit the implementation of any kind of rational moral theory.  They often produce behaviors that are vengeful, malicious, sadistic, and most often counter-productive to our actual moral goals.  We need to incorporate the fact that libertarian free will does not (and logically cannot) exist into our moral framework, so that we can better strive to treat others fairly even if we still hold people responsible in some sense for their actions.

We can still hold people responsible for their actions (and ought to) by replacing the concept of libertarian free will with a free will conception that is consistent with the laws of physics, with psychology, and neurology, by proposing for example that people’s degree of “free will” with respect to some action is inversely proportional to the degree of conditioning needed to modify such behavior.  That is to say, the level of free will that we have with respect to some kind of behavior is related to our ability to be programmed and reprogrammed such that the behavior can (at least in principle) be changed.

Our punishment-reward systems, then (whether in legal, social, or familial domains), should treat others as responsible agents only insofar as is needed to protect the members of that society (or group) from harm and to induce behaviors that are conducive to our physical and psychological well-being — which is the very purpose of having any reasonable moral theory (that is sufficiently motivating to follow) in the first place.  Anything that goes above and beyond what is needed to accomplish this is excessive and therefore not morally justified.  Following this logic, we should see that many types of punishment, including, for example, the death penalty, are entirely unjustified in terms of our moral goals and the strategies of punishment that we should implement to accomplish those goals.  As the saying goes, an eye for an eye makes the whole world blind, and thus barbaric practices such as inflicting pain or suffering (or death sentences) simply to satisfy some intuitions need to be abolished and replaced with an enlightened system that relies on rational justifications rather than intuition.  Only then can we put the primitive, inhumane moral systems of the past to rest once and for all.

We need to work with our psychology (not only the common trends among most human beings but also our individual idiosyncrasies) and thus work under the recognition of our varying degrees of autonomy and behavioral plasticity.  Only then can we maximize our chances and optimize our efforts in attaining fulfilling lives for as many people as possible living in a society.  It is our intuitions (products of evolution and culture) that we must be careful of, as they can lead us astray (and often have throughout history) to commit various moral atrocities.  All we can do is try to overcome these moral handicaps as best we can through reason and rationality, but we have to acknowledge that these problems exist before we can face them head on and subsequently engineer the right kinds of societal changes to successfully reach our moral goals.

On Parfit’s Repugnant Conclusion


Back in 1984, Derek Parfit’s work on population ethics led him to formulate a possible consequence of total utilitarianism, namely what he dubbed the Repugnant Conclusion (RC).  For those unfamiliar with the term, Parfit described it as follows:

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

To better explain this conclusion, let’s consider a few different populations, A, A+, and B, where the width of the bar represents the relative population and the height represents the average “well-being rating” of the people in that sub-population:

[Figure 2: populations A, A+, and B shown as bars, with width representing relative population size and height representing average well-being.  Image from http://plato.stanford.edu/entries/repugnant-conclusion/fig2.png]

In population A, let’s say we have 10 billion people with a “well-being rating” of +10 (on a scale of -10 to +10, with negative values indicating a life not worth living).  Now in population A+, let’s say we have all the people in population A with the mere addition of 10 billion people with a “well-being rating” of +8.  According to the argument as Parfit presents it, it seems reasonable to hold that population A+ is better than population A, or at the very least not worse, because the only difference between the two populations is the mere addition of more people with lives worth living (even if their well-being isn’t as good as that of the people in population A).  That is, it is believed that adding additional lives worth living cannot make an outcome worse when all else is equal.

Next, consider population B, which has the same number of people as population A+, but where every person has a “well-being rating” slightly higher than the average rating in population A+ and slightly lower than that of population A.  Now if one accepts that population A+ is better than A (or at least not worse), and if one accepts that population B is better than population A+ (since it has a higher average well-being), then one has to accept that population B is better than population A (by transitivity: A ≤ A+ < B, therefore A < B).  If this is true, then we can take this further and show that a sufficiently large population would still be better than population A even if the “well-being rating” of each person were only +1.  This is the RC as presented by Parfit, and he, along with most philosophers, found it unacceptable.  So he worked diligently on trying to solve it, but never succeeded in the way he had hoped.  This has since become one of the biggest problems in ethics, particularly in the branch of population ethics.
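The arithmetic driving the RC under total utilitarianism is easy to make explicit.  Here is a minimal sketch in Python, using the illustrative numbers above (the +9.2 rating for B and the size of the +1 population Z are just example values chosen to make the comparisons visible):

```python
# Each population is a list of (group_size, well_being_rating) pairs.
A      = [(10_000_000_000, 10)]                        # 10 billion at +10
A_plus = [(10_000_000_000, 10), (10_000_000_000, 8)]   # A plus 10 billion at +8
B      = [(20_000_000_000, 9.2)]                       # slightly above A+'s average of +9
Z      = [(250_000_000_000, 1)]                        # enormous, lives barely worth living

def total(pop):
    """Total well-being: the quantity total utilitarianism maximizes."""
    return sum(n * w for n, w in pop)

def average(pop):
    """Average well-being per person."""
    return total(pop) / sum(n for n, _ in pop)

for name, pop in [("A", A), ("A+", A_plus), ("B", B), ("Z", Z)]:
    print(f"{name:3} total: {total(pop):.3e}  average: {average(pop):.1f}")

# By total well-being: Z (2.5e11) > B (1.84e11) > A+ (1.8e11) > A (1.0e11),
# even though Z's members have lives that are barely worth living.
```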

Some of the strategies that have been put forward to resolve the RC include adopting an average principle, a variable value principle, or some kind of critical level principle.  However, all of these supposed resolutions are either fraught with their own problems (if accepted) or are highly unsatisfactory, unconvincing, or very counter-intuitive.  A brief overview of the argument, the supposed solutions, and their associated problems can be found in the Stanford Encyclopedia of Philosophy entry on the Repugnant Conclusion (http://plato.stanford.edu/entries/repugnant-conclusion/).
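For concreteness, here are highly simplified sketches of those three families of principles, reusing the (group_size, rating) representation from above.  The exact formulations in the literature vary; the saturation constant and critical level below are purely illustrative choices:

```python
def total_u(pop):
    return sum(n * w for n, w in pop)

def average_u(pop):
    # Average principle: only per-person well-being matters, not size.
    return total_u(pop) / sum(n for n, _ in pop)

def variable_value_u(pop, k=1_000_000_000):
    # Variable value principle (one simple form): average well-being times
    # a size factor that saturates, so extra people add ever less value.
    size = sum(n for n, _ in pop)
    return average_u(pop) * size / (size + k)

def critical_level_u(pop, c=5):
    # Critical level principle: only well-being above the level c counts
    # positively; lives between 0 and c make the outcome worse, not better.
    return sum(n * (w - c) for n, w in pop)
```

Each rule blocks the RC at a cost: the average principle implies that a tiny elite population beats a vast thriving one, and the critical-level rule implies that adding lives worth living (at, say, +4 when c = 5) makes things worse — two flavors of the counter-intuitive results mentioned above.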

I’d like to respond to the RC argument as well because I think that there are at least a few problems with the premises right off the bat.  The foundation for my rebuttal relies on an egoistic moral realist ethics, based on a goal theory of morality (a subset of desire utilitarianism), which can be summarized as follows:

If one wants X above all else, then one ought to Y above all else.  Since it can be shown that ultimately what one wants above all else is satisfaction and fulfillment with one’s life (or what Aristotle referred to as eudaimonia) then one ought to do above all else all possible actions that will best achieve that goal.  The actions required to best accomplish this satisfaction can be determined empirically (based on psychology, neuroscience, sociology, etc.), and therefore we theoretically have epistemic access to a number of moral facts.  These moral facts are what we ought to do above all else in any given situation given all the facts available to us and via a rational assessment of said facts.

So if one is trying to choose whether one population is better or worse than another, I think that assessment should be based on the same egoistic moral framework, which accounts for all known consequences resulting from particular actions and which implements “altruistic” behaviors precipitated by cultivating virtues that benefit everyone including ourselves (such as compassion, integrity, and reasonableness).  So in the case of evaluating the comparison between population A and that of A+ as presented by Parfit, which is better?  Well, if one applies something like Rawls’ veil of ignorance, itself an heir to the social contract theories of philosophers such as Kant, Hobbes, Locke, and Rousseau, whereby we would hypothetically choose between worlds without knowing which subpopulation we would end up in, which world ought we to prefer?  It would stand to reason that population A is certainly better than A+ (and better than population B), because one has the highest probability of having a higher level of well-being in that population/society (for any person chosen at random).  This reasoning would then render the RC false, as it only followed from fallacious reasoning (i.e., it is fallacious to assume that adding more people with lives worth living is all that matters in the assessment).
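This veil-of-ignorance reasoning can be restated as a simple lottery over persons.  A quick sketch (same illustrative numbers as before):

```python
def p_at_least(pop, threshold):
    # Probability that a person chosen at random from the population
    # has a well-being rating of at least `threshold`.
    size = sum(n for n, _ in pop)
    return sum(n for n, w in pop if w >= threshold) / size

A      = [(10_000_000_000, 10)]
A_plus = [(10_000_000_000, 10), (10_000_000_000, 8)]
B      = [(20_000_000_000, 9.2)]

print(p_at_least(A, 10))       # 1.0 -- everyone in A is at +10
print(p_at_least(A_plus, 10))  # 0.5 -- only half of A+ is at +10
print(p_at_least(B, 10))       # 0.0 -- no one in B reaches +10
```

Not knowing who you would turn out to be, A is the safest world to be born into, which is the basis for preferring A over A+ here.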

Another fundamental flaw that I see in the premises is the assumption that population A+ contains the same population of high well-being individuals as in A with the mere addition of people with a somewhat lower level of well-being.  If the higher well-being subpopulation of A+ has knowledge of the existence of other people in that society with a lower well-being, wouldn’t that likely lead to a decrease in their well-being (compared to those in the A population that had no such concern)?  It would seem that the only way around this is if the higher well-being people were ignorant of those other members of society or if there were other factors that were not equal between the two high-well-being subpopulations in A and A+ to somehow compensate for that concern, in which case the mere addition assumption is false since the hypothetical scenario would involve a number of differences between the two higher well-being populations.  If the higher well-being subpopulation in A+ is indeed ignorant of the existence of the lower well-being subpopulation in A+, then they are not able to properly assess the state of the world which would certainly factor into their overall level of well-being.

In order to properly assess this and to behave morally at all, one needs to use as many facts as are practical to obtain and operate according to those facts as rationally as possible.  It would seem plausible that the better-off subpopulation of A+ would have at least some knowledge of the fact that there exist people with less well-being than themselves and this ought to decrease their happiness and overall well-being when all else is truly equal when compared to A.  But even if the subpopulation can’t know this for some reason (i.e. if the subpopulations are completely isolated from one another), we do have this knowledge and thus must take account of it in our assessment of which population is better than the other.  So it seems that the comparison of population A to A+ as stated in the argument is an erroneous one based on fallacious assumptions that don’t account for these factors pertaining to the well-being of informed people.

Now I should say that if we had knowledge pertaining to the future of both societies we could wind up reversing our preference if, for example, it turned out that population A had a future that was likely going to turn out worse than the future of population A+ (where the most probable “well-being rating” decreased comparatively).  If this was the case, then being armed with that probabilistic knowledge of the future (based on a Bayesian analysis of likely future outcomes) could force us to switch preferences.  Ultimately, the way to determine which world we ought to prefer is to obtain the relevant facts about which outcome would make us most satisfied overall (in the eudaimonia sense), even if this requires further scientific investigation regarding human psychology to determine the optimized trade-off between present and future well-being.
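As a sketch of what that future-facing, probability-weighted assessment might look like (the probabilities, future ratings, and the 50/50 weighting of present versus future are entirely made-up illustration, with A+ summarized by its average rating of +9):

```python
# Each world: (current_rating, [(probability, future_rating), ...])
worlds = {
    "A":  (10, [(0.6, 2), (0.4, 10)]),  # high well-being now, likely to decline
    "A+": (9,  [(0.9, 9), (0.1, 8)]),   # slightly lower now, but stable
}

def expected_value(world, future_weight=0.5):
    now, futures = world
    expected_future = sum(p * r for p, r in futures)
    return (1 - future_weight) * now + future_weight * expected_future

for name, world in worlds.items():
    print(name, expected_value(world))
# A : 0.5*10 + 0.5*(0.6*2 + 0.4*10) = 7.6
# A+: 0.5*9  + 0.5*(0.9*9 + 0.1*8)  = 8.95  -> the preference flips to A+
```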

As for comparing two populations that have the same probability for high well-being, yet with different populations (say “A” and “double A”), I would argue once again that one should assess those populations based on what the most likely future is for each population based on the facts available to us.  If the larger population is more likely to be unsustainable, for example, then it stands to reason that the smaller population is what one ought to strive for (and thus prefer) over the larger one.  However, if sustainability is not likely to be an issue based on the contingent facts of the societies being evaluated, then I think one ought to choose the society that has the best chances of bettering the planet as a whole through maximized stewardship over time.  That is to say, if more people can more easily accomplish goals of making the world a better place, then the larger population would be what one ought to strive for since it would secure more well-being in the future for any and all conscious creatures (including ourselves).  One would have to evaluate the societies they are comparing to one another for these types of factors and then make the decision accordingly.  In the end, it would maximize the eudaimonia for any individual chosen at random both in that present society and in the future.

But what if we are instead comparing two populations that both have “well-being ratings” that are negative?  For example, what if we compare a population S containing only one person with a well-being rating of -10 (the worst possible suffering imaginable) versus another population T containing one million people with well-being ratings of -9 (almost the worst possible suffering)?  It sure seems that applying the probabilistic principle I applied to the positive well-being populations would lead to preferring a world with millions of people suffering horribly over a world with just one person suffering a bit more.  However, this would only follow if one applied the probabilistic principle while ignoring the egoistically-based “altruistic” virtues, such as compassion and reasonableness, as they pertain to that moral decision.  In order to determine which world one ought to prefer over another, just as in any other moral decision, one must determine what behaviors and choices make us most satisfied as individuals (to best obtain eudaimonia).  If people are generally more satisfied (perhaps universally, once fully informed of the facts and reasoning rationally) in preferring to have one person suffer at a -10 level over one million people suffering at a -9 level (even if it were you or me chosen as that single person), then that is the world one ought to prefer over the other.

Once again, our preference could be reversed if we were informed that the most likely futures of these populations had their levels of suffering reversed or changed markedly.  And if the scenario changed to, say, 1 million people at a -10 level versus 2 million people at a -9 level, our preferred outcomes may change as well, even if we don’t yet know what that preference ought to be (i.e. if we’re ignorant of some facts pertaining to our psychology at the present, we may think we know, even though we are incorrect due to irrational thinking or some missing facts).  As always, the decision of which population or world is better depends on how much knowledge we have pertaining to those worlds (to make the most informed decision we can given our present epistemological limitations) and thus our assessment of their present and most likely future states.  So even if we don’t yet know which world we ought to prefer right now (in some subset of the thought experiments we conjure up), science can find these answers (or at least give us the best shot at answering them).

The Illusion of Persistent Identity & the Role of Information in Identity


After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  And even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since they are likely going to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to basically whatever conditions or properties that are both necessary and sufficient such that a person at one point in time and a person at another point in time can be considered the same person — with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is supposed to be some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place, and that, even if it did exist, the criterion would be determinable in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, along with a determinable criterion for it.  At best, I think that if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit’s views in the sense that, regardless of what one finds the criteria for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke’s view — that psychological continuity (via memory) is the criterion for personal identity — was shown to be based on circular and illogical arguments (per Butler, Reid, and others), I nevertheless applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate pertaining to any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it only exists for a short moment of time in one specific spatio-temporal slice, and then, as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person that possesses roughly the same configuration of matter in its body and brain as the previous person.  Since the neighboring identities have an overlap in accessible memory, including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior, we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories that is sufficiently similar to the set accessible to the previous or successive identity.  And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person and some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatio-temporal identities” that I think exists, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it).  There is a continuity of memory and behaviors (even though both change over time, in terms of both the number of memories and their accuracy), and this continuity allows for a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, having those different identities referenced under one label and physically attached to or instantiated by something localized allows that pragmatic pseudo-identity to persist over time in order for various goals (whether personal or interpersonal/societal) to be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I thought was very fitting to this concept.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between them all.  The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations transpiring eventually leads to what we call a new species.  It’s difficult to define exactly when that speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I could think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it was.

Biologists have tried to solve their species problem by coming up with various criteria to help for taxonomical purposes at the very least, but what they’ve wound up with at this point is several different criteria for defining a species that are each effective for different purposes (e.g. biological-species concept, morpho-species concept, phylogenetic-species concept, etc.), and without any single “correct” answer since they are all situationally more or less useful.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think that philosophers need to focus more on the importance of the concept of information and how it can be used to try and arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we take some of the concerns that James DiGiovanna brought up concerning the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework since they will be based more on the information content of the person’s identity without putting as much importance on numerical identity nor as much importance on our intuitions of persisting people (since they will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans that may be angry that they couldn’t themselves avoid punishment in this way, due to having a much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still included information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are now gone via the reprogramming then I’d say that the AI is not guilty of the crime because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime and so “it” would not have committed the crime since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime — though it would not be numerically guilty of the crime but rather qualitatively guilty of it (to differentiate between the numerical and qualitative personal identity concepts that are embedded in the concept of guilt).  If twelve duplicates of this information were instantiated into new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means that the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
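One way to model the numerical/qualitative distinction is to compare agents by their information content rather than by which physical instantiation they are.  A toy sketch in Python (the `memories` field is a hypothetical stand-in for an agent’s full information content):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    memories: frozenset  # stand-in for the agent's full information content

criminal = Agent(memories=frozenset({"committed the crime", "other memories"}))

# Twelve duplicates instantiated from the same stored information:
duplicates = [Agent(memories=criminal.memories) for _ in range(12)]

# Qualitative identity: same information content (dataclass equality).
print(all(d == criminal for d in duplicates))   # True
# Numerical identity: each duplicate is a distinct instantiation.
print(any(d is criminal for d in duplicates))   # False

# A reprogrammed agent whose "criminal memories" were erased no longer
# matches the criminal's information content at all:
reformed = Agent(memories=frozenset({"other memories"}))
print(reformed == criminal)                     # False
```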

We can analogize this with another thought experiment involving human beings.  If we imagine a human that has had their memories changed so that they believe they are Charles Manson and have all of Charles Manson’s memories and intentions, then that person should be treated as if they are Charles Manson and thus incarcerated/punished accordingly, to rehabilitate them or protect the other members of society.  This is assuming, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what is most important — not what the actual previous actions of the person were — because the “previous person” was someone else, due to that gross change in information.

Some Thoughts on the Orlando Massacre


My sincerest condolences go out to all the victims and the friends and families of those victims in the Orlando (“Pulse”) night-club shooting.  While it is still uncertain and under investigation whether or not there were any ties between the shooter (whom I won’t bother naming) and some Islamic extremist organization, there was in fact a proclaimed allegiance to such an organization voiced by the shooter himself to the police prior to the incident.  Even if no direct ties are found between the shooter and this or any other radical Islamic extremist organization, the possibility will remain that this was a “self-radicalized” or “self-actualized” Jihadist Muslim.  The man very likely knew that he was going to die one way or another that night (by police or otherwise), and so a belief in martyrdom and in an eternal paradise after death would have been perhaps the most powerful reason not to care about the consequences.  And a person believing that they are carrying out the wishes of an invisible magic man in the sky, and that they are doing so in order to achieve eternal paradise, has more than enough motive to commit this kind of heinous act.

Obviously we don’t know what the man was thinking and can’t confirm his alleged motives, but if we take any of his own words seriously, then this is yet another incident that demands that the difficult religious conversation that many people want to avoid be opened further.  That conversation involves reforming Islam alongside the secular moderates who claim membership in that religion, and it involves recognizing that when those religious texts are plainly read in their entirety, they clearly advocate for violence and oppression against non-believers.  There may be some good messages in those texts, as there are in just about any book — but to deny the heinous contents that also exist in those very same texts, and to deny the real religious motivations of these murderers who are inspired by those texts, is nothing but intellectual dishonesty and delusion.

Regressive liberals aren’t making things any easier as they throw out accusations of racism and bigotry even in cases where it is only the religious ideas themselves that are being criticized, with no mention of any race or ethnicity.  That has to stop too.  It’s true that many of the conservatives pointing out the dangerous ideas in Islam are also racists and bigots with racist motives behind their anti-Islamic agenda.  But as an intellectually honest liberal myself, I can both recognize and abhor those racist motives common to many conservative social and political circles, yet also agree with some of those conservatives’ claims pertaining to the dangers of certain Islamic religious ideas.

I suspect that one of the reasons for the origin of the regressive liberal movement and its commitment to eliminating any and all criticism of Islam is that it has conflated the racism and bigotry directed at Muslims that often comes from conservatives (including political clowns like Donald Trump) with criticism of Islamic ideas that makes no mention of race.  I suspect that because many of these anti-Islamic claims are also coming from the same conservative sphere, regressive liberals have unfortunately lumped all anti-Islamic claims into the same category (some form of racism and bigotry against Islam), when those two kinds of claims should be in entirely different categories.  There are criticisms of Islamic ideas that have nothing to do with race, and there are criticisms of Muslims that are clearly racist — and the latter is what liberals and everyone else should continue to fight against.  But the former type of criticism is simply part of a reasoned discussion on the topic, and one that needs to take place in the public sphere.  I have propagated the former type of criticism (based on reason and evidence, not prejudice or racism), and I have seen many other free-thinker and humanist advocates do so as well.

Bottom line: we have to talk more about the dangers of believing in and relying on faith, dogma, revelation, and any other belief system not grounded in reason and evidence.  These epistemological "methods" are not only demonstrably unreliable and fallacious, but they are also being hijacked by various terrorist organizations with their own aims.  Even if the leaders of a dangerous extremist organization don't actually believe the religious ideas they proclaim as their motivation, they know that if others do, and if others are already willing to die for their faith for reasons as compelling as eternal paradise, then those leaders can get people to commit heinous acts.  As Voltaire once said, "Those who can make you believe absurdities can also make you commit atrocities."  These words of wisdom still apply even if the leaders who (initially) spread those absurdities don't believe them.  I think many of the leaders who spread these ideas do believe them, but I'm betting that many do not.  If people begin to see the dangers of faith and dogma and are instilled with an ultimate appreciation for, and prioritization of, reason and evidence, then these radical recruitments will be far less effective, if not rendered entirely ineffective.

Religious ideas don't get a free pass from criticism just because people hold them to be sacred.  Sacred or not, the reality is that this kind of muddled thinking can end, and has ended, many innocent people's lives throughout human history.  Bad ideas have bad consequences, and it doesn't matter where those bad ideas come from.  Despite the fact that we are living in a post-Enlightenment age, not everyone has accepted that paradigm shift yet.  We must keep trying to spread the fruits of the Enlightenment for the good of humanity.  It is our moral obligation to do so.

Sustainability, Happiness, and a Science of Morality: Part II


In the first part of this post, I briefly went over some of the larger problems our global society is currently facing, including overpopulation and the overall lack of environmental and economic sustainability.  I also mentioned some of the systemic and ideological (including religious and political) barriers that will need to be overcome before we can make any considerable progress toward a sustainable future.

Although it may seem hopeless at times, I believe that we human beings, despite our cognitive biases and vulnerability to irrational and dogmatic behaviors, share an innate moral core driven by the incentive to increase our overall satisfaction and fulfillment in life. When people feel they are living more fulfilling lives, they want to continue, if not amplify, the behavior that is producing that satisfaction. And if people are shown ways of living that lead to greater satisfaction, and they experience even a slight but noticeable improvement as a result, I believe that even irrational and dogmatic people begin to explore outside of their ideological box.

More importantly, however, if everyone is shown that their level of satisfaction and fulfillment in life ultimately results from doing what they feel they ought to do above all else (which is morality in a nutshell), then they can begin to recognize the importance and efficacy of basing those oughts on well-informed facts about the world. In other words, people can begin to universally derive every moral ought from a well-informed is, formulating their morality from facts and empirical data and grounding it in reason, as opposed to basing it on dogma and other unreliable beliefs in the supernatural. It's easy for people to disagree on morals based on dogma and the supernatural, because those supernatural beliefs and sources of dogma vary so much from one culture and religion to another; but morals become common, if not universal (in at least some cases), when they are based on facts about the world, including the objective physical and psychological consequences not only for the person performing the moral action but also for anyone on its receiving end.

Moral Imperatives & Happiness

Science has slowly but surely been uncovering (or at least better approximating) which kinds of behaviors lead to the greatest levels of happiness and overall satisfaction in the collective lives of everyone in society. All morals arguably reduce to a special type of hypothetical imperative (i.e., if your fundamental goal is X, then you ought to do Y above all else), and all goals ultimately reduce to the fundamental goal of increasing one's life satisfaction and fulfillment. It follows that there exist objective moral facts which, if known, would inform a person of which behaviors they ought to perform above all else in order to increase their happiness and fulfillment in life. Science may never determine exactly what these objective moral facts are, but it is certainly logical to assume that they exist: some ideal set of behaviors for people (at least those who are sane and non-psychopathic) which, if only we knew what those behaviors were, would necessarily lead to maximized satisfaction in every person's life (a concept proposed by many philosophers, and one very well defended in Richard Carrier's Goal Theory of Ethics).
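
To make the logical structure of that argument explicit, here is one way it could be formalized (the notation is my own sketch, not a standard formulation): let $A$ be the set of available behaviors and $\mathbb{E}[S \mid a]$ the expected life satisfaction given behavior $a$. The claim is then roughly:

$$\text{Ought}(y) \iff y = \underset{a \in A}{\arg\max}\; \mathbb{E}[S \mid a]$$

That is, the behavior one ought to perform above all else is whichever behavior maximizes expected satisfaction, and the objective moral facts are the facts about which behaviors actually occupy that maximum.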

What science can do, however, and arguably what it has already been doing, is continue to better approximate what these objective moral facts are as we accumulate more knowledge and evidence in psychology, neuroscience, sociology, and even other fields such as economics. What science appears to have found thus far is (among other things) a confirmation of what Aristotle asserted over two thousand years ago: the importance of cultivating what have often been called moral virtues (such as compassion, honesty, and reasonableness) in order to achieve what the Greeks called eudaimonia, an ultimate happiness with one's life. This makes perfect sense, because cultivating these virtues leads to a person feeling good while exercising behaviors that also benefit everyone else, so benefiting others rarely if ever feels like a chore (an unfortunate side effect of exclusively employing the moral-duty mentality of Kant's famous deontological ethical framework). Combine this virtue cultivation with the wealth of knowledge about the consequences of our actions that the sciences have been accumulating, thus integrating John Stuart Mill's utilitarian (teleological/consequentialist) ethical framework, and we have an ethical framework that should work very effectively in leading us toward a future where more and more people are happy, fulfilled, and doing what is best for sustaining that happiness in one another, including sustaining the environment that their happiness depends on.

A Science of Morality

To give a fairly basic but good example of where science is leading us in terms of morality, consider that when people try to achieve ever-increasing levels of wealth at the expense of others, they do so because they believe wealth will bring them the most satisfaction in life, and thus that maximizing wealth will maximize happiness. This belief is incorrect for a number of reasons. For one, studies in psychology have shown diminishing returns of happiness as income and wealth increase, with the returns diminishing sharply once a person exceeds an income of about $70K per year (in U.S. dollars / purchasing power). So the idea that increasing one's income or wealth will indefinitely increase one's happiness isn't supported by the evidence. At best, it has a limited effect on happiness that only works up to a point.
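
As a rough illustration of what "diminishing returns" means here, consider the toy model below. The saturating functional form and the $70K "knee" are assumptions chosen only to mimic the qualitative shape reported in those studies, not the studies' actual model:

```python
# Toy sketch only: a saturating happiness-vs-income curve. The functional
# form and the 70K "knee" are illustrative assumptions, not the model
# used in the psychology studies themselves.
def illustrative_happiness(income_usd: float, knee: float = 70_000) -> float:
    """Rises steeply at low incomes, then flattens past the knee."""
    return income_usd / (income_usd + knee)  # bounded above by 1.0

for income in (20_000, 70_000, 150_000, 500_000):
    print(f"${income:>7,}: {illustrative_happiness(income):.2f}")
```

On this curve, going from $20K to $70K adds about 0.28 units of "happiness," while going from $150K to $500K adds only about 0.20, even though the latter is a far larger jump in income. That shrinking payoff is the qualitative point.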

Beyond this, psychology has also shown that there are much more effective ways of increasing happiness, such as cultivating the aforementioned virtues (e.g., compassion, integrity, honesty, reasonableness) and exercising them while helping others. This produces internal psychological benefits (which neuroscience can quantify, and to some degree already has) as well as external sociological benefits, such as the formation of meaningful relationships, which in turn provide even more happiness over time. If we also take into account the time and effort often required to earn more income and wealth (with the intention of producing happiness), it can be shown that the time and effort would have been better spent trying to form meaningful relationships and cultivate various virtues. Furthermore, if those accumulating wealth could see firsthand the negative side effects their accumulation has on many others (such as increased poverty), then doing so would no longer make them as happy. So indeed it can be shown that their belief about what maximizes their satisfaction is false, that there are in fact ways to increase their happiness and life satisfaction more than they ever thought possible, and, perhaps most importantly, that the ways to make them happiest also serve to make everyone else happier too.

A Clear Path to Maximizing (Sustainable) Happiness

Perhaps if we begin to invest more in the development and propagation of a science of morality, we'll start to see many societal problems dissolve away, simply because more and more people will realize that the reason we all think certain actions are moral (i.e., that we ought to do them above all else) is that we feel doing those actions brings us the most happy and fulfilling lives. If people are then shown far more effective ways to increase their happiness and fulfillment, including by maximizing their ability to help others achieve the same ends, they're extremely likely to follow those prescribed ways of living, since it could be shown that not doing so would deny them the very maximal happiness and fulfillment they are ultimately striving for. The only reason people wouldn't heed such advice is irrationality, which means we simultaneously need to educate everyone about our cognitive biases, how to spot logical fallacies, how to avoid making them, and so on.  Solving society's problems, such as overpopulation, socioeconomic inequality, and unsustainability, then boils down to every individual, as well as the collective whole, accumulating as many facts as possible about what can maximize our life satisfaction (both now and in the future), and then heeding those facts to determine what we ought to do above all else to achieve those ends.  This is ultimately an empirical question, and a science of morality can help us discover what these facts are.
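
A minimal sketch of the kind of empirical bookkeeping this implies, with entirely made-up actions and numbers standing in for what real psychological and sociological evidence would supply: estimate each candidate behavior's expected effect on life satisfaction, then act on the best-supported estimate.

```python
# Purely illustrative: hypothetical actions and effect estimates, standing in
# for what actual empirical evidence about well-being would provide.
estimated_satisfaction_gain = {
    "pursue_more_wealth_past_threshold": 0.05,
    "cultivate_virtues_and_relationships": 0.60,
    "help_others_achieve_the_same": 0.55,
}

# The "ought above all else" on this view: the action whose evidence-based
# expected satisfaction gain is highest.
best_action = max(estimated_satisfaction_gain, key=estimated_satisfaction_gain.get)
print(best_action)  # -> cultivate_virtues_and_relationships
```

The numbers would of course have to come from accumulated evidence rather than intuition, which is exactly why the question is an empirical one.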