The Open Mind

Cogito Ergo Sum

Posts Tagged ‘ethics’

Atheism, Morality, and Various Thoughts of the Day…


I’m sick of anti-intellectuals and the like assuming that all atheists are moral nihilists, moral relativists, postmodernists, proponents of scientism, etc. ‘Dat ain’t the case. Some of us respect philosophy and understand full well that even science requires an epistemological, metaphysical, and ethical foundation in order to work at all and to ground its methodologies. Some atheists are even keen on some form of panpsychism (like Chalmers’ or Strawson’s views).

Some of us even subscribe to a naturalistic worldview that holds onto meaning, despite the logical impossibility of libertarian free will. (Hint: it has to do with living a moral life, which means living a fulfilling life and maximizing one’s satisfaction through a rational assessment of all the available information, which entails Bayesian reasoning, including a rational assessment of the information pertaining to one’s own subjective experience of fulfillment and sustainable happiness.) Some of us atheists/philosophical naturalists/what-have-you are moral realists as well and therefore reject relativism, believing that objective moral facts DO in fact exist (and therefore that science can find them), even if many of those facts are entailed within a situational ethical framework. Some of us believe that at least some moral facts are universal, but this shouldn’t be confused with moral absolutism, since the two are merely independent subsets of realism. I find absolutism to be intellectually and morally repugnant and epistemologically unjustifiable.

Also, a note for any theists out there: when comparing arguments for and against the existence of a God or gods (and the “Divine Command Theory” that accompanies said belief), keep in mind that an atheist need only hold a minimalist position on the issue (soft atheism), and therefore the entire burden of proof lies on the theist to support their extraordinary claim(s) with an extraordinary amount of evidentiary weight. While I’m willing to justify a personal belief in hard atheism (the claim that “God does not exist”), the soft atheist need only point out that they lack a belief in God because no known proponent of theism has yet met the burden of proof for the extraordinary claim that “God does exist.” As such, any justified moral theory of what one ought to do (above all else), including but certainly not limited to whom one votes for, how we treat one another, what fundamental rights we should have, etc., must be grounded on claims of fact that have met their burden of proof. Theism has not done this, and the theist can’t simply say “Prove God doesn’t exist,” since that would amount to demanding proof of a null hypothesis, which is not possible, even though a null hypothesis can be proven false. So rather than trying to unjustifiably shift the burden of proof onto the atheist, the theist must satisfy the burden of proof for their positive claim that a god (or gods) exists.

A more general goal needed to save our a$$es from self-destruction is for more people to dabble in philosophy. I argue that it should even become a core part of educational curricula (especially education on minimizing logical fallacies/cognitive biases and education on moral psychology), to give us the best chance of living a life that is at least partially examined through internal rational reflection and through discourse with those who are willing to engage with us, and to give us the best chance of surviving the existential crisis that humanity (along with many other species that share this planet with us) is in. We need more people to be encouraged to justify what they think they ought to do above all else.

Virtual Reality & Its Moral Implications


There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in the users of those games (especially younger users with more impressionable minds).  These claims have been substantially supported by the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, as a result of accumulated research and meta-analyses showing a link between violent video gaming and increased aggression, anti-social behavior, and decreased moral engagement and empathy.

Thus, the increasingly realistic nature of VR, and the ever-growing capacities one has at one’s disposal within such a virtual space, are bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world?  Even if their susceptibility to performing those behaviors doesn’t increase, what does such a virtual escapade do to that person’s moral character?  Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some kind of moral relevance or cognitive overlap with them?

On the flip side, what if it were possible to use VR as a therapeutic tool to help treat pedophilia or other behavioral problems?  What if one were able to simulate rape, pedophilia, or the like in order to reduce one’s chances of performing those acts in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as real instances of such crimes, which are horrific acts committed against real human beings.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent, if not morally obligatory, to do what we can to prevent such harms from taking place.  So even though this technology could lead an otherwise healthy user to begin engaging in bad behaviors, it could also be used as an outlet of expression for those already afflicted with such impulses.  Just as VR has been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them overcome their ills, so too may VR provide a treatment for other types of mental illnesses and aggressive predispositions, such as those related to murder, sexual assault, etc.

Whether VR is used as an outlet for certain behaviors to prevent them from actually happening in the real world, or as a means of curing a person from those immoral inclinations (where the long term goal is to eventually no longer need any VR treatment at all), there are a few paths that could show some promising results to decrease crime and so forth.  But, beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition to limit the types of things people could do in these virtual spaces, that may be of some use, but beyond the fact that this kind of prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justified to implement.  If one chose to prohibit simulated rape and pedophilia (for example), but not prohibit murder or other forms of assault and violence, then what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad if not worse anyway).  It seems that instead we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning to see if at least some simulated activities should be prohibited while allowing others to persist unregulated.  On the other hand, if prohibiting this kind of activity is not practical or if it can only be implemented by infringing on certain liberties that we have good reasons to protect, then we need to think about some counter-strategies to either better inform people about these kinds of dangers and/or to make other VR products that help to encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than to help us cultivate moral virtues and other behaviors that are conducive to our well-being and to our living a more fulfilled life.  Anything from reducing our prejudices and biases through exposure to various simulated “out-groups” (for example), to modifying our moral character in more profound ways through artificial realities that can encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and have already to some degree) to work out various moral dilemmas and our psychological response to them without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, thus making it incumbent upon us to talk more about these technologies as they continue to develop.

On Parfit’s Repugnant Conclusion


Back in 1984, Derek Parfit’s work on population ethics led him to formulate a possible consequence of total utilitarianism, namely, what came to be known as the Repugnant Conclusion (RC).  For those unfamiliar with the term, Parfit described it as follows:

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

To better explain this conclusion, let’s consider a few different populations, A, A+, and B, where the width of each bar represents the relative population size and the height represents the average “well-being rating” of the people in that sub-population:

Figure 2 (image from http://plato.stanford.edu/entries/repugnant-conclusion/fig2.png)

In population A, let’s say we have 10 billion people with a “well-being rating” of +10 (on a scale of -10 to +10, with negative values indicating a life not worth living).  Now in population A+, let’s say we have all the people in population A with the mere addition of 10 billion people with a “well-being rating” of +8.  According to the argument as Parfit presents it, it seems reasonable to hold that population A+ is better than population A, or at the very least not worse, because the only difference between the two populations is the mere addition of more people with lives worth living (even if their well-being isn’t as good as that of the people in the “A” population); so it is believed that adding additional lives worth living cannot make an outcome worse when all else is equal.

Next, consider population B, which has the same number of people as population A+, but where every person has a “well-being rating” slightly higher than the average rating in population A+ and slightly lower than that of population A.  Now if one accepts that population A+ is better than A (or at least not worse), and if one accepts that population B is better than population A+ (since its average well-being is higher), then one has to accept that population B is better than population A (by transitivity: A ≤ A+ < B, therefore A < B).  If this is true, then we can take this further and show that a sufficiently large population would still be better than population A, even if the “well-being rating” of each person was only +1.  This is the RC as presented by Parfit, and he, along with most philosophers, found it unacceptable.  He worked diligently on trying to solve it, but never succeeded in the way he had hoped.  It has since become one of the biggest problems in ethics, particularly in the branch of population ethics.
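The arithmetic behind the RC can be made concrete with a minimal sketch (my own illustration, not from Parfit) that scores each population by total utility, using the numbers from the example above; population Z is a hypothetical stand-in for the final “huge population at +1” step:

```python
# Ranking populations by TOTAL utility (total utilitarianism).
# Each population is a list of (head count, well-being rating) sub-populations.

def total_utility(population):
    """Sum of (head count * well-being rating) over all sub-populations."""
    return sum(count * rating for count, rating in population)

A      = [(10_000_000_000, 10)]                       # 10 billion people at +10
A_plus = [(10_000_000_000, 10), (10_000_000_000, 8)]  # A plus 10 billion at +8
B      = [(20_000_000_000, 9.1)]                      # 20 billion at +9.1 (between A's 10 and A+'s average of 9)
Z      = [(1_000_000_000_000, 1)]                     # 1 trillion lives barely worth living

print(total_utility(A))       # 100 billion units of well-being
print(total_utility(A_plus))  # 180 billion
print(total_utility(B))       # ~182 billion
print(total_utility(Z))       # 1 trillion: "better" than A -- the Repugnant Conclusion
```

On the total view each step strictly improves things, which is exactly why Z, a vast population of lives barely worth living, comes out “better” than A.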

Some of the strategies that have been put forward to resolve the RC include adopting an average principle, a variable value principle, or some kind of critical level principle.  However, all of these proposed resolutions are either fraught with their own problems (if accepted) or are highly unsatisfactory, unconvincing, or very counter-intuitive.  A brief overview of the argument, the proposed solutions, and their associated problems can be found here.

I’d like to respond to the RC argument as well because I think that there are at least a few problems with the premises right off the bat.  The foundation for my rebuttal relies on an egoistic moral realist ethics, based on a goal theory of morality (a subset of desire utilitarianism), which can be summarized as follows:

If one wants X above all else, then one ought to do Y above all else.  Since it can be shown that ultimately what one wants above all else is satisfaction and fulfillment with one’s life (or what Aristotle referred to as eudaimonia), one ought, above all else, to take whatever actions will best achieve that goal.  The actions required to best accomplish this satisfaction can be determined empirically (based on psychology, neuroscience, sociology, etc.), and therefore we theoretically have epistemic access to a number of moral facts.  These moral facts are what we ought to do above all else in any given situation, given all the facts available to us and a rational assessment of those facts.

So if one is trying to choose whether one population is better or worse than another, I think that assessment should be based on the same egoistic moral framework, which accounts for all known consequences resulting from particular actions and which implements “altruistic” behaviors precipitated by cultivating virtues that benefit everyone, including ourselves (such as compassion, integrity, and reasonableness).  So in the case of evaluating the comparison between population A and population A+ as presented by Parfit, which is better?  Well, if one applies the veil of ignorance (famously formulated by John Rawls, drawing on the social contract tradition of philosophers such as Kant, Hobbes, Locke, and Rousseau), whereby we hypothetically choose between worlds without knowing which subpopulation we would end up in, which world ought we to prefer?  It would stand to reason that population A is certainly better than A+ (and better than population B), because any person chosen at random has the highest probability of a high level of well-being in that population/society.  This reasoning would then render the RC false, as it only followed from fallacious reasoning (i.e., it is fallacious to assume that adding more people with lives worth living is all that matters in the assessment).
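This random-person assessment can be sketched the same way (again my own illustration, reusing the numbers from the example above): instead of totals, score each population by the expected well-being of a member drawn uniformly at random, in the spirit of the veil of ignorance:

```python
# Ranking populations by the expected well-being of a randomly chosen person
# (a veil-of-ignorance style assessment rather than a total-utility one).

def expected_wellbeing(population):
    """Average well-being of a person drawn uniformly at random."""
    total_people = sum(count for count, _ in population)
    total_wellbeing = sum(count * rating for count, rating in population)
    return total_wellbeing / total_people

A      = [(10_000_000_000, 10)]
A_plus = [(10_000_000_000, 10), (10_000_000_000, 8)]
B      = [(20_000_000_000, 9.1)]

print(expected_wellbeing(A))       # 10.0
print(expected_wellbeing(A_plus))  # 9.0
print(expected_wellbeing(B))       # ~9.1
# Ranking: A > B > A+, so the "A+ is at least as good as A" step of the
# transitivity chain never gets started.
```

Under this criterion the very first premise of the RC argument (that A+ is not worse than A) fails, which is the point being made above.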

Another fundamental flaw that I see in the premises is the assumption that population A+ contains the same population of high-well-being individuals as A, with the mere addition of people with a somewhat lower level of well-being.  If the higher-well-being subpopulation of A+ has knowledge of the existence of other people in that society with lower well-being, wouldn’t that likely decrease their well-being (compared to those in the A population, who have no such concern)?  It would seem that the only way around this is if the higher-well-being people were ignorant of those other members of society, or if other factors that were not equal between the two high-well-being subpopulations in A and A+ somehow compensated for that concern, in which case the mere-addition assumption is false, since the hypothetical scenario would involve a number of differences between the two higher-well-being populations.  And if the higher-well-being subpopulation in A+ is indeed ignorant of the existence of the lower-well-being subpopulation, then its members are unable to properly assess the state of their world, which would itself factor into their overall level of well-being.

In order to properly assess this, and to behave morally at all, one needs to use as many facts as are practical to obtain and to operate on those facts as rationally as possible.  It seems plausible that the better-off subpopulation of A+ would have at least some knowledge that there exist people with less well-being than themselves, and this ought to decrease their happiness and overall well-being relative to A when all else is truly equal.  But even if that subpopulation can’t know this for some reason (e.g., if the subpopulations are completely isolated from one another), we do have this knowledge and thus must take account of it in our assessment of which population is better.  So the comparison of population A to A+ as stated in the argument seems to be an erroneous one, based on fallacious assumptions that don’t account for these factors pertaining to the well-being of informed people.

Now I should say that if we had knowledge pertaining to the future of both societies we could wind up reversing our preference if, for example, it turned out that population A had a future that was likely going to turn out worse than the future of population A+ (where the most probable “well-being rating” decreased comparatively).  If this was the case, then being armed with that probabilistic knowledge of the future (based on a Bayesian analysis of likely future outcomes) could force us to switch preferences.  Ultimately, the way to determine which world we ought to prefer is to obtain the relevant facts about which outcome would make us most satisfied overall (in the eudaimonia sense), even if this requires further scientific investigation regarding human psychology to determine the optimized trade-off between present and future well-being.

As for comparing two populations that have the same probability of high well-being but different sizes (say “A” and “double A”), I would argue once again that one should assess those populations based on the most likely future for each, given the facts available to us.  If the larger population is more likely to be unsustainable, for example, then it stands to reason that the smaller population is what one ought to strive for (and thus prefer) over the larger one.  However, if sustainability is not likely to be an issue given the contingent facts of the societies being evaluated, then I think one ought to choose the society that has the best chance of bettering the planet as a whole through maximized stewardship over time.  That is to say, if more people can more easily accomplish the goal of making the world a better place, then the larger population is what one ought to strive for, since it would secure more well-being in the future for any and all conscious creatures (including ourselves).  One would have to evaluate the societies being compared for these types of factors and then make the decision accordingly.  In the end, this would maximize the eudaimonia of any individual chosen at random, both in the present society and in the future.

But what if we are instead comparing two populations that both have negative “well-being ratings”?  For example, what if we compare a population S containing only one person with a well-being rating of -10 (the worst possible suffering imaginable) against a population T containing one million people with well-being ratings of -9 (almost the worst possible suffering)?  It sure seems that applying the probabilistic principle I applied to the positive-well-being populations would lead to preferring a world with a million people suffering horribly over a world with just one person suffering somewhat more.  However, this would only follow if one applied the probabilistic principle while ignoring the egoistically-based “altruistic” virtues, such as compassion and reasonableness, as they pertain to that moral decision.  In order to determine which world one ought to prefer, just as in any other moral decision, one must determine what behaviors and choices make us most satisfied as individuals (to best obtain eudaimonia).  If people are generally more satisfied (perhaps universally so, once fully informed of the facts and reasoning rationally) in preferring one person suffering at a -10 level over one million people suffering at a -9 level (even if you or I were chosen as that single person), then that is the world one ought to prefer.
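The tension in this negative case can be made concrete with one more minimal sketch (again my own illustration): for S and T, the random-person principle and a totals-based assessment pull in opposite directions:

```python
# Comparing the negative-well-being populations S and T under two principles.

def total_utility(population):
    """Sum of (head count * well-being rating); here a measure of total suffering."""
    return sum(count * rating for count, rating in population)

def expected_wellbeing(population):
    """Well-being of a person drawn uniformly at random."""
    total_people = sum(count for count, _ in population)
    return sum(count * rating for count, rating in population) / total_people

S = [(1, -10)]          # one person at the worst imaginable suffering
T = [(1_000_000, -9)]   # one million people at nearly the worst suffering

print(expected_wellbeing(S))  # -10.0
print(expected_wellbeing(T))  # -9.0  -> the bare probabilistic principle prefers T
print(total_utility(S))       # -10
print(total_utility(T))       # -9000000 -> totals overwhelmingly prefer S
```

Neither metric alone settles the question; as argued above, the choice between S and T has to be informed by the virtues (compassion, reasonableness) that the bare probabilistic principle leaves out.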

Once again, our preference could be reversed if we were informed that the most likely futures of these populations had their levels of suffering reversed or changed markedly.  And if the scenario changed to, say, 1 million people at a -10 level versus 2 million people at a -9 level, our preferred outcomes may change as well, even if we don’t yet know what that preference ought to be (i.e. if we’re ignorant of some facts pertaining to our psychology at the present, we may think we know, even though we are incorrect due to irrational thinking or some missing facts).  As always, the decision of which population or world is better depends on how much knowledge we have pertaining to those worlds (to make the most informed decision we can given our present epistemological limitations) and thus our assessment of their present and most likely future states.  So even if we don’t yet know which world we ought to prefer right now (in some subset of the thought experiments we conjure up), science can find these answers (or at least give us the best shot at answering them).

The Illusion of Persistent Identity & the Role of Information in Identity


After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  Even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since the two are likely to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person, with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place, and that, even if it did exist, its criterion would be determinable in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all the persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, along with a determinable criterion for it.  At best, I think that if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit’s view in the sense that, regardless of what one finds the criteria for numerical personal identity to be (if it exists at all), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke’s view, that psychological continuity (via memory) is the criterion for personal identity, was shown to rest on circular and illogical arguments (per Butler, Reid, and others), I nevertheless applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and the long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate over any possible criteria for defining it, and we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it exists only for a brief moment in one specific spatio-temporal slice; as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person possessing roughly the same configuration of matter in its body and brain as the previous person.  Since neighboring identities overlap in accessible memory (autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior), we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories sufficiently similar to the set accessible to the previous or successive identity.  And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person, a singular identity over time: there is a causal dependency between each member of this “chain of spatio-temporal identities”, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it).  There is a continuity of memory and behavior (even though both change over time, in terms of both the number of memories and their accuracy), and this continuity allows a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we were single persons moving through time and space (surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, having those different identities referenced under one label and physically attached to, or instantiated by, something localized allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I think fits this concept well.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between them all.  The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations eventually lead to what we call a new species.  It’s difficult to define exactly when a speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I can think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it occurred.

Biologists have tried to solve their species problem by devising various criteria, at least for taxonomical purposes, but what they’ve wound up with at this point is several different species concepts that are each effective for different purposes (e.g. the biological-species concept, the morpho-species concept, the phylogenetic-species concept, etc.), with no single “correct” answer, since they are all situationally more or less useful.  Similarly, some philosophers have a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think that philosophers need to focus more on the importance of the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we take into account some of the concerns that James DiGiovanna has raised about the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework, since they will be based more on the information content of the person’s identity, without putting as much importance on numerical identity or on our intuitions of persisting people (since those intuitions will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI, other than to appease humans who may be angry that they couldn’t avoid punishment in this way themselves, given their much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still includes information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are now gone via the reprogramming, then I’d say that the AI is not guilty of the crime, because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime, and so “it” would not have committed the crime, since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime, though not numerically guilty but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts embedded in the concept of guilt).  If twelve duplicates of this information were instantiated into new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think each such AI should be punished, or more specifically, the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it committed the crime), for the good of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
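The numerical/qualitative distinction here can be sketched with a short Python example (the dictionary standing in for an AI’s “identity-information” is purely hypothetical): duplicates made from the same programming are equal in information content, yet none of them is the original token.

```python
# Toy model of qualitative vs. numerical identity for duplicated AIs.

criminal_ai = {"memories": ["committed crime X"], "dispositions": ["reoffend"]}

# Twelve duplicates re-instantiated from the destroyed AI's programming.
duplicates = [dict(criminal_ai) for _ in range(12)]

# Equality (==) compares information content; identity (is) compares instances.
qualitatively_guilty = [d == criminal_ai for d in duplicates]
numerically_identical = [d is criminal_ai for d in duplicates]

print(all(qualitatively_guilty))   # True: each carries the criminal's identity-information
print(any(numerically_identical))  # False: none is the original token
```

Python’s distinction between `==` (same content) and `is` (same object) happens to mirror the qualitative/numerical distinction exactly, which is why the sketch is so compact.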

We can analogize this with another thought experiment involving human beings.  If we imagine a human whose memories have been changed so that they believe they are Charles Manson and have all of Charles Manson’s memories and intentions, then that person should be treated as if they are Charles Manson, and thus incarcerated/punished accordingly, to rehabilitate them or protect the other members of society.  This assumes, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what matters most, not what the person’s actual previous actions were, because the “previous person” was someone else, due to that gross change in information.

Sustainability, Happiness, and a Science of Morality: Part II


In the first part of this post, I briefly went over some of the larger problems that our global society is currently facing, including the problem of overpopulation and the overall lack of environmental and economic sustainability.  I also mentioned some of the systematic and ideological (including religious and political) barriers that will need to be overcome before we can make any considerable progress in obtaining a sustainable future.

Although it may seem hopeless at times, I believe that we human beings – despite our cognitive biases and vulnerability to irrational and dogmatic behaviors – have an innate moral core in common that is driven by the incentive to increase our level of overall satisfaction and fulfillment in life. When people feel like they are living more fulfilling lives, they want to continue, if not amplify, the behavior that’s leading to that satisfaction. If a person is shown ways of living that lead to greater satisfaction, and they are able to experience even a slight though noticeable improvement as a result of those prescriptions, I believe that even irrational and dogmatic people will begin to explore outside of their ideological box.

More importantly, however, if everyone is shown that their level of satisfaction and fulfillment in life is ultimately a result of their doing what they feel they ought to do above all else (which is morality in a nutshell), then they can begin to recognize the importance and efficacy of basing those oughts on well-informed facts about the world. In other words, people can begin to universally derive every moral ought from a well-informed is, thus formulating their morality based on facts and empirical data and grounding it in reason – as opposed to basing their morality on dogmatic and other unreliable beliefs in the supernatural. It’s easy for people to disagree on morals that are based on dogma and the supernatural, because those supernatural beliefs and sources of dogma vary so much from one culture and religion to another; but morals become common, if not universal (in at least some cases), when they are based on facts about the world (including the objective physical and psychological consequences not only for the person performing the moral action, but also for anyone on the receiving end of that action).

Moral Imperatives & Happiness

Science has slowly but surely been uncovering (or at least better approximating) what kinds of behaviors lead to the greatest levels of happiness and overall satisfaction in the collective lives of everyone in society. Since all morals arguably reduce to a special type of hypothetical imperative (i.e. if your fundamental goal is X, then you ought to do Y above all else), and since all goals ultimately reduce to the fundamental goal of increasing one’s life satisfaction and fulfillment, there exist objective moral facts which, if known, would inform a person of which behaviors they ought to perform above all else in order to increase their happiness and fulfillment in life. Science may never be able to determine exactly what these objective moral facts are, but it is certainly logical to assume that they exist: namely, some ideal set of behaviors for people (at least, those who are sane and non-psychopathic) which, if we only knew what those behaviors were, would necessarily lead to maximized satisfaction within every person’s life (a concept that has been proposed by many philosophers, and one which has been very well defended in Richard Carrier’s Goal Theory of Ethics).

What science can do, however, and arguably what it has already been doing, is continue to better approximate what these objective moral facts are as we accumulate more knowledge and evidence in psychology, neuroscience, sociology, and even other fields such as economics. What science appears to have found thus far is (among other things) a confirmation of what Aristotle asserted over two thousand years ago, namely the importance of cultivating what have often been called moral virtues (such as compassion, honesty, and reasonableness) in order to achieve what the Greeks called eudaimonia, or an ultimate happiness with one’s life. This makes perfect sense, because cultivating these virtues leads to a person feeling good while exercising behaviors that are also beneficial to everyone else, so benefiting others is rarely if ever going to feel like a chore (an unfortunate side-effect of exclusively employing the moral-duty mentality of Kant’s famous deontological ethical framework). Combine this virtue cultivation with the plethora of knowledge about the consequences of our actions that the sciences have been accumulating, thus integrating John Stuart Mill’s utilitarian or teleological/consequentialist ethical framework, and we have an ethical framework that should work very effectively in leading us toward a future where more and more people are happy, fulfilled, and doing what is best for sustaining that happiness in one another, including sustaining the environment that their happiness depends on.

A Science of Morality

To give a fairly basic but good example of where science is leading us in terms of morality, consider the fact that science has shown that when people try to achieve ever-increasing levels of wealth at the expense of others, they do so because they believe that wealth will bring them the most satisfaction in life, and thus that maximizing wealth will bring maximal happiness. However, this belief is incorrect for a number of reasons. For one, studies in psychology have shown that the happiness gained from additional income diminishes as income and wealth rise, sharply so once a person exceeds an income of about $70K per year (in U.S. dollars / purchasing power). So the idea that increasing one’s income or wealth will indefinitely increase their happiness isn’t supported by the evidence. At best, it has a limited effect on happiness that only works up to a point.
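The diminishing-return point can be illustrated with a quick back-of-the-envelope sketch (the logarithmic utility curve is a common modeling assumption, not the exact model used in those studies, and the dollar figures are arbitrary):

```python
import math

# Under log utility, well-being depends on the ratio of incomes, not the
# absolute dollar difference, so equal raises buy ever less happiness.

def happiness_gain(income_from, income_to):
    """Marginal well-being from an income increase, assuming log utility."""
    return math.log(income_to) - math.log(income_from)

first_raise = happiness_gain(35_000, 70_000)    # doubling a modest income
second_raise = happiness_gain(70_000, 105_000)  # the same $35K on top of $70K

print(first_raise > second_raise)  # True: the same dollar raise yields less happiness
```

The same $35K raise delivers much less under log utility the second time around, which is the shape the empirical findings describe, even if the real curve isn’t exactly logarithmic.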

Beyond this, psychology has also shown that there are much more effective ways of increasing happiness, such as cultivating the aforementioned virtues (e.g. compassion, integrity, honesty, reasonableness, etc.) and exercising them while helping others, which leads to internal psychological benefits (which neuroscience can and has quantified to some degree) and also external sociological benefits, such as the formation of meaningful relationships, which in turn provide even more happiness over time. If we also take into account the amount of time and effort often required to earn more income and wealth (with the intention of producing happiness), it can be shown that the time and effort would have been better spent on forming meaningful relationships and cultivating various virtues. Furthermore, if those people gaining wealth could see first-hand the negative side-effects that their accumulation of wealth has on many others (such as increased poverty), then doing so would no longer make them as happy. So indeed it can be shown that their belief about what maximizes their satisfaction is false, and also that there are in fact better ways to increase their happiness and life satisfaction than they ever thought possible. Perhaps most importantly, it can be shown that the ways to make them happiest also serve to make everyone else happier too.

A Clear Path to Maximizing (Sustainable) Happiness

Perhaps if we begin to invest more in the development and propagation of a science of morality, we’ll start to see many societal problems dissolve away, simply because more and more people will begin to realize that the reason we all think certain actions are moral actions (i.e. that we ought to do them above all else) is because we feel that doing those actions brings us the happiest and most fulfilling lives. If people are then shown much more effective ways to increase their happiness and fulfillment, including by maximizing their ability to help others achieve the same ends, then they’re extremely likely to follow those prescribed ways of living, for it could be shown that not doing so would prevent them from gaining the very maximal happiness and fulfillment that they are ultimately striving for. The only reason people wouldn’t heed such advice is that they are being irrational, which means we need to simultaneously work on educating everyone about our cognitive biases, how to spot logical fallacies and avoid making them, and so on.  So solving society’s problems, such as overpopulation, socioeconomic inequality, and unsustainability, boils down to every individual, as well as the collective whole, accumulating as many facts as possible about what can maximize our life satisfaction (both now and in the future), and then heeding those facts to determine what we ought to do above all else to achieve those ends.  This is ultimately an empirical question, and a science of morality can help us discover what these facts are.

Sustainability, Happiness, and a Science of Morality: Part I


Human beings seem to share the fundamental goal of wanting to live a satisfying and fulfilling life. We all want to be happy, and the humanist movement is an excellent demonstration of the kinds of strategies that have been most effective at achieving this admirable goal – such as the push for democracy, equality, basic human rights, and the elimination of poverty. Clearly we have a long way to go before human happiness is anywhere near universal, let alone maximized – if these are in fact possible futures within our grasp. We’re certainly not going to get there very easily (if at all) unless we address a number of serious societal problems.

One of the most pressing issues facing us today, because of its negative impact on just about every other societal problem, is the problem of overpopulation. The reasons for this are obvious and include the decreasing number of available resources per capita, thus forcing people to stretch their resources thinner and thinner over an ever-growing population, and/or inclining some societies to go to war with others in order to obtain more resources. Then there’s also the problematic increase in environmental degradation and waste production as the population grows. Beyond the typical resources we’re depleting, such as energy/power, food, clean air and water, and raw materials for making various products, there are also other limited resources that are often overlooked, such as the amount of available (let alone habitable) space where people can live, grow food, store waste, etc. There is also a relatively small percentage of people employed in professions that not only require very special training but also form the backbone of our society (such as teachers, doctors, and scientists). As these latter resources get stretched thinner and thinner (i.e. education, healthcare, and scientific expertise and research), we’re effectively diluting the backbone of our society, which can eventually cascade into societal collapse.

To be sure, there are several ways to combat many of the problems that are caused or exacerbated by overpopulation: for example, by shifting from a goods-based economy to a service-flow economy that recycles product materials that would otherwise be wasted (in part by leasing many of the products that are currently bought and later thrown into a landfill), by increasing the percentage of less-pollutive or non-pollutive renewable energy sources, and by finding other ways of decreasing the demand for, and increasing the efficiency and distribution of, all the resources we rely on. The problem with these approaches, however, is that although these technologies and admirable efforts are slowly improving, the population is also increasing at the same time. So even if we are in fact increasing efficiency and decreasing consumption and waste per capita, we are simultaneously increasing that very capita, and so it is difficult to tell whether technological progress has been (or will eventually be) fast enough to produce a true increase in overall sustainability. It would be fallacious and unjustified to simply assume that to be the case – that technology will always be able to fix every problem. If anything, to err on the side of caution, we should assume that this isn’t the case until we have enough data and knowledge to prove otherwise.
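This efficiency-versus-growth race can be made concrete with a back-of-the-envelope calculation (all rates here are hypothetical, chosen only to illustrate the arithmetic): total impact is population times per-capita consumption, so per-capita efficiency gains can still lose to population growth.

```python
# Hypothetical rates: per-capita consumption improves 1% per year,
# while population grows 1.5% per year.
efficiency_gain = 0.01
population_growth = 0.015

population, per_capita = 1.0, 1.0  # normalized starting values
for year in range(50):
    population *= 1 + population_growth
    per_capita *= 1 - efficiency_gain

total_impact = population * per_capita

print(per_capita < 1.0)   # True: each person consumes less...
print(total_impact > 1.0) # True: ...yet total consumption still grew
```

With these illustrative numbers, after fifty years each person consumes roughly 40% less, yet total consumption is still about a quarter higher than it started, because the population more than doubled.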

Population Reduction is the Name of the Game

An obvious solution to this problem is to decrease the population growth rate such that our technological capabilities are more than sufficient to deliver a sustainable future for us. This goal may even require a negative growth rate, and at some point we’re going to have to start talking about what kinds of societal changes are necessary in order to achieve it. We may need some new incentives and/or some other kind of population control measures and policies. However, I’m hopeful that solving this problem is pragmatically achievable if we can manage to seriously educate the populace about how their reproductive choices affect the lives of everyone else in the world and how those choices are likely to impact future generations (though I don’t think this will be an easy task by any means). If people knew that certain reproductive choices would likely lead to themselves, their children, or their children’s children living in a future society filled with unprecedented amounts of poverty and war, environmental and economic collapse, and numerous other sources of suffering, any rational person would heed that knowledge and try their best to combat that possible future.

So a large part of the solution is simply educating everybody about the facts and probabilities of these undesirable outcomes. There are already many individuals and groups working on these types of endeavors: pushing for renewable energy, advocating for pro-environmental and other sustainable living practices and policies, spreading education about family planning, and trying to increase access to and adoption of birth control methods. Unfortunately, these practices haven’t yet been adopted by anywhere near a national or global majority – far from it. However, if the movement becomes more globalized and builds up to a critical mass and momentum, eventually we’re likely to see the average person’s physical and psychological well-being improve, which will further reinforce the incentives to improve and perpetuate the movement, because people will start to realize the tangible benefits they are gaining as a result.

Systematic & Ideological Barriers to Sustainability & Happiness

Unfortunately there are some serious systematic and ideological barriers that are preventing the sustainability movement from gaining traction and they’re ultimately inhibiting what would otherwise be fairly reasonable rates of progress. I think that the primary systematic barrier against achieving sustainability has been corporate-capitalism and the free-market economic models currently in place. While it may be true that there are certain forms of capitalism along with certain regulated market models that could work in principle if not also in practice, unfortunately these aren’t the brands of capitalism and market models that are currently employed by most industrialized nations (though some nations have more sustainable models than others).

What we currently have are globalized economic systems and models that are fundamentally based on maximizing profit and consolidating privately owned means of production, not only by exploiting and depleting our natural resources and environment but also by exploiting unethical sources of human labor. Furthermore, these models have in turn led to unprecedented levels of socioeconomic inequality and environmental degradation. Then again, what else should we expect to happen when we employ corporate-capitalist free-market models that inherently lack adequate and universal economic, labor, and environmental regulations? Despite the fact that the wealthy corporate elite, and the many politicians and citizens who have bought into their propaganda, have been touting this model as “the best in the world” or “the best model possible”, we can see that this isn’t true at all, both from the fallacious fundamental principles the models are based on and from the actual results they’ve been delivering thus far. If we’re going to have a sustainable future, let alone one that provides us more satisfaction and happiness throughout our lives, we’re going to have to jump off of this sinking ship and adopt an entirely new societal model.

We also need to consider the ideological barriers that have been hindering the sustainability movement as well as the humanist movement in general. For example, there are many prominent religions, such as Christianity and Islam (which are highly influential, as their adherents make up over half the population of the world), that hold that one of the primary goals for human beings (according to their “divinely inspired” scripture) is to “be fruitful and multiply” while also claiming a general dominion over all the plants and animals of the earth. While the latter “dominion” over the earth has been interpreted by some as “responsible stewardship” (which is compatible with sustainability), it has often been interpreted as “ownership” of the environment and as justification to exploit it strictly for the benefit of human beings (not realizing our intimate dependence on all other ecosystems). Worse yet, the former “be fruitful and multiply” adage can only be reasonably interpreted one way, and unfortunately this “advice” is the antithesis of a sustainable model for society (though it has been an incredibly effective meme for the expansion of these religions and their cultural influence and power). Indeed, it is the exact opposite of what we should be doing at this point in human history, and perhaps the greatest irony here is that the current overpopulation problem was largely a result of this adage and the subsequent viral spread of these Abrahamic religions, especially over the past fifteen hundred years.

Two other religious beliefs worth mentioning here, both highly popularized by the Abrahamic religions (notably Christianity), are the beliefs that “the end is near” and that “no matter what happens, everything is in God’s hands”, as these beliefs and the overall mentality they reinforce do nothing to support the long-term responsible planning that is fundamental to a sustainable societal model. The latter belief plays on an unfortunate human cognitive bias known as risk compensation, whereby we tend to behave less responsibly when we feel that we are adequately protected from harm. In the case of a fanatical belief in divine protection, the level of risk compensation is biased toward its theoretical maximum, making such believers the most likely to behave the most irresponsibly. The former belief (“the end is near”) unavoidably shifts the believer’s priorities to the short term (in proportion to the strength of the belief), with the specific intention of preparing for this “end that is to come”, rather than basing their beliefs on reality and evidence and responsibly preparing for a brighter future for all of humanity and the rest of the planet that we depend on.

Certainly, these religious beliefs aren’t the only ideological barriers to sustainability, as there are a number of other irrational political ideologies, largely though not exclusively based on the rejection of scientific evidence and consensus, that have served to heavily reinforce the fossil-fuel and other natural-resource-driven corporate-capitalist model. This unsustainable model has been reinforced by denying facts about climate change and many other facts pertaining to human impacts on the environment in general. In some cases, I find it difficult to tell whether the people who make these absurd claims actually believe them to be true (e.g. that 99+% of scientists are somehow conspiring or lying to everybody else in the world), or whether they are just implicitly pleading ignorance and rationalizing so that they can maintain their profit-driven models out of outright insatiable greed. I find it most plausible that politicians are collaborating with certain corporations to deny scientific facts because they want to continue to make billions off of this resource exploitation (at least for as long as they can get away with it), and are doing so in large part by brainwashing the constituent base that elected them into office with mounds of corporate-funded misinformation, fear mongering, and empty political rhetoric.

It should also come as no surprise that the people who believe and/or perpetuate these political ideological barriers to sustainability are most often the very same people who believe and/or perpetuate the aforementioned religious ideological barriers, and it seems quite evident that politicians have taken advantage of this fact. Many of them surely know quite well that if they can persuade religious voters to vote for them by convincing those voters that they share common ground on some moral issue, then those voters become distracted from critically thinking about the primary political agendas that those politicians are really pushing for behind the curtain: the very agendas that are in fact hindering a sustainable future from ever coming to fruition.

We’ve all seen it: certain politicians claiming that they oppose stem cell research or abortion, or advocating for abolishing the separation between church and state (though generally not admittedly), and using this tactic to draw in these (often) single-issue religious voters, while ironically promoting a number of policies that often violate the morals of those very same voters (unbeknownst to the voters). They enact policies that perpetuate war, capital punishment, poverty, and the military-industrial complex. They enact policies that worsen socioeconomic inequality and the accumulation of wealth and power in the hands of a few at the expense of the many. They enact policies that are destroying the finite supply of natural capital we have left on this planet. They enact policies that ultimately hinder democracy, equality, and universal human rights.

So in the end, most religious voters (and some non-religious voters who are similarly misled), while admirably trying to do what they believe is the most moral thing to do, end up vastly increasing the amount of immoral behavior and suffering in the world, due in large part to the politicians who manipulated them into doing so. This is why it is crucial that people make their decisions based on reason and evidence, and critically think about the consequences of their decisions and actions, as these are sometimes more complicated than we are led to believe. We need to think more critically about all the policies and legislation that we are choosing based on whom we vote for, and we also need to be wary of policies that may initially seem to align with our morals and desires, yet will actually result in more suffering or other unforeseen problems in the long run.

In the next part of this post, I will elaborate more on the broader human goals we all seem to share, and how a science of morality can help us use those broader goals to alleviate these societal problems and simultaneously help us to achieve a future where we are all collectively happier than we ever thought we could be, with far more fulfilling lives.  Here’s the link to part two.