The Open Mind

Cogito Ergo Sum

Archive for the ‘Morality’ Category

“The Brothers Karamazov” – A Moral & Philosophical Critique (Part III)


In the first two posts that I wrote in this series (part I and part II) concerning some concepts and themes mentioned in Dostoyevsky's The Brothers Karamazov, I talked about moral realism and how it pertains to theism and atheism (and the character Ivan's own views), and I also talked about moral responsibility and free will to some degree (and how this related to the interplay between Ivan and Smerdyakov).  In this post, I'm going to look at the concepts of moral conscience and intuition, how they apply to Ivan's perspective, and his experiencing an ongoing hallucination of a demonic apparition.  This demonic apparition only begins to haunt Ivan after he hears that his influence on his brother Smerdyakov led Smerdyakov to murder their father Fyodor.  The demon continues to torment Ivan until just before his other brother Alyosha informs him that Smerdyakov has committed suicide.  Then I'll conclude with some discussion of the concept of moral desert (justice).

It seems pretty clear that the demonic apparition that appears to Ivan is a psychosomatic hallucination brought about as a manifestation of Ivan's overwhelming guilt for what his brother has done, since he feels that he bears at least some of the responsibility for his brother's actions.  We learn earlier in the story that Zosima, a wise elder living at a monastery who acts as a mentor and teacher to Alyosha, had explained to Ivan that everyone bears at least some responsibility for the actions of everyone around them, because human causality is so heavily intertwined, with one person's actions having a number of complicated effects on the actions of everyone else.  Despite Ivan's strong initial reservations against this line of reasoning, he seems to have finally accepted that Zosima was right — hence his suffering a nervous breakdown as a result of realizing this.

Obviously, Ivan's moral conscience seems to be driving this turn of events, and this is the case whether or not Ivan explicitly believes that morality is real.  And so we can see that despite Ivan's moral skepticism, his moral intuitions and/or his newly accepted moral dispositions (as per Zosima) have led him to his current state of despair.  Similarly, Ivan's views on the problem of evil — whereby the vast amount of suffering in the world either refutes the existence of God, or shows that this God (if he does exist) must be a moral monster — betray even more of Ivan's moral views with respect to how he wants the world to be.  His wanting the world to have less suffering in it, along with his wishing that his brother had not committed murder (let alone as a result of his influence on his brother), illustrates a number of moral "oughts" that Ivan subscribes to.  And whether they're based simply on his moral intuitions or also on rational moral reflection, they illustrate the deeply rooted psychological aspects of morality that are an inescapable facet of the human condition.

This situation also helps to explain some of the underlying motivations behind my own reversion back toward some form of moral realism after becoming an atheist myself, a reversion initially catalyzed by my own moral intuitions and later solidified and justified by rational moral reflection on objective facts pertaining to human psychology and other factors.  Now it should be said that moral intuitions on their own are only a generally useful heuristic, as they are often misleading (and incorrect), which is why it is imperative that they be checked by a rational assessment of the facts at hand.  But, nevertheless, they help to illustrate how good and evil can be said to be real (in at least some sense), even to someone like Ivan who doesn't think they have an objective foundation.  They may not be the conceptions of good and evil described in many religions, with supernatural baggage attached, but they are real nonetheless.

Another interesting point worth noting is in regard to Zosima's discussion about mutual moral responsibility.  While I already discussed moral responsibility in the last post along with its relation to free will, there's something rather paradoxical about Dostoyevsky's reasoning as expressed through Zosima that I found quite interesting.  Zosima talks about how love and forgiveness are necessary because everyone's actions are intertwined with everyone else's, and therefore everyone bears some responsibility for the sins of others.  This idea of shared responsibility is abhorrent to those in the story who doubt God and the Christian religion (such as Ivan), who only want to be responsible for their own actions.  Yet the complex intertwined causal chain that Zosima speaks of is the same causal chain that many determinists invoke to explain our lack of libertarian free will and why we can't be held responsible in a causa sui manner for our actions.

Thus, if someone dies and there is in fact an afterlife, by Zosima's own reasoning that person should not be judged as an individual solely responsible for their actions either.  That person should instead receive unconditional love and forgiveness and be redeemed rather than punished.  But this idea is anathema to standard Christian theology, where one is supposed to be judged and given either eternal paradise or eternal torment (vastly disproportionate consequences given the finite degree of one's actions).  It's no surprise that Zosima isn't looked upon as a model clergyman by some of his fellow monks in the monastery, because his emphatic preaching about love and forgiveness undermines the typical heavy-handed judgmental aspects of God within Christianity.  But in any case, if God exists and understood that people were products of their genes and their environment, which is causally interconnected with everyone else's (i.e. libertarian free will is logically impossible), then a loving God would grant everyone forgiveness after death and grant them eternal paradise based on that understanding.  And oddly enough, this also undermines Ivan's own reasoning that good and evil can only exist alongside an afterlife involving judgment, because forgiveness and eternal paradise should be granted to everyone in the afterlife (by a truly loving God) if Zosima's reasoning were taken to its logical conclusions.  So not only does Zosima's reasoning seem to undermine the justification for unequal treatment of souls in the afterlife, but it also undermines the Christian conception of free will to boot (which is logically impossible regardless of Zosima's reasoning).

And this brings me to the concept of moral desert.  In some ways I agree with Zosima, at least in the sense that love (or more specifically compassion) and forgiveness are extremely important in proper moral reasoning.  And once one realizes the logical impossibility of libertarian free will, this should only encourage one's use of love and forgiveness, in the sense that people should never be trying to punish a wrongdoer (or hope for their punishment) for the sake of retributive justice or vengeance.  Rather, people should only punish (or hope that one is punished) as much as is necessary to compensate the victim as best as the circumstances allow and (more importantly) to rehabilitate the wrongdoer by reprogramming them through behavioral conditioning.  Anything above and beyond this is excessive, malicious, and immoral.  Similarly, a loving God (if one existed) would never punish anyone in the afterlife beyond what is needed to rehabilitate them (and it would seem that no punishment at all should really be needed if this God had the power to accomplish these feats on immaterial souls using magic).  And if this God had no magic to accomplish this, then at the very least there should never be any eternal punishments, since punishing someone forever (let alone torturing them forever) not only illustrates that there is no goal to rehabilitate the wrongdoer, but also that this God is beyond psychopathic and malevolent.  Again, think of Zosima's reasoning as it applies here.

Looking back at the story with Smerdyakov, why does the demonic apparition disappear from Ivan right around the time that he learns that Smerdyakov killed himself?  It could be because Ivan thinks that Smerdyakov has gotten what he deserved, and that he's no longer roaming free (so to speak) after his heinous act of murder.  And it could also be because Ivan seemed sure at that point that he himself would confess to the murder (or at least to motivating Smerdyakov to commit it).  But if either of these notions is true, then once again Ivan has betrayed yet another moral disposition of his: that murder is morally wrong.  It may also imply that Ivan, deep down, believes in an afterlife, and that Smerdyakov will now be judged for his actions.

It no doubt feels good to a lot of people when they see someone who has wronged another getting punished for their bad deeds.  The feeling of justice and even vengeance can be emotionally powerful, especially if the wrongdoer took the life of someone that you or someone else loved very much.  It's a common feeling to want that criminal to suffer, perhaps to rot in jail until they die, perhaps to be tortured, or what-have-you.  And these intuitions illustrate why so many religious beliefs surrounding judgment in the afterlife share many of these common elements.  People invented these religious beliefs (whether unconsciously or not) because they make them feel better about wrongdoers that may otherwise die without having been judged for their actions.  After all, when is justice going to be served?  It is also a motivating factor for a lot of people to keep their behaviors in check (as per Ivan's rationale that an afterlife is required in order for good and evil to be meaningful to people).  Even though I don't think that this particular motivation is necessary (and therefore Ivan's argument is incorrect) — due to other motivating forces such as the level of fulfillment and personal self-worth in one's life, gained through living a life of moral virtue, or the lack thereof by those that fail to live virtuously — it is still a motivation that exists in many people and strongly intersects with the concept of moral desert.  Due to its pervasiveness in our intuitions, its influence on how we perceive other human beings, and its importance in moral theory in general, people should spend a lot more time critically reflecting on this concept.

In the next part of this post series, I’m going to talk about the conflict between faith and doubt, perhaps the most ubiquitous theme found in The Brothers Karamazov, and how it ties all of these other concepts together.


“The Brothers Karamazov” – A Moral & Philosophical Critique (Part II)


In my last post in this series, concerning Dostoyevsky's The Brothers Karamazov, I talked about the concept of good and evil and the character Ivan's personal atheistic perception that they are contingent on God existing (or at least on an afterlife of eternal reward or punishment).  While there may even be a decent percentage of atheists that share this view (objective morality being contingent on God's existence), I briefly explained my own views (being an atheist myself), which differ from Ivan's in that I am a moral realist and believe that morality is objective, independent of any gods or immortal souls existing.  I believe that moral facts exist (and that science is the best way to find them), that morality is ultimately grounded on objective facts pertaining to human psychology, biology, sociology, neurology, and other facts about human beings, and thus that good and evil do exist in at least some sense.

In this post, I'm going to talk about Ivan's influence on his half-brother, Smerdyakov, who ends up confessing to Ivan that he murdered their father Fyodor Pavlovich as a result of Ivan's philosophical influence on him.  In particular, Smerdyakov implicates Ivan as at least partially responsible for his own murderous behavior, since Ivan successfully convinced him that evil wasn't possible in a world without a God.  Ivan ends up becoming consumed with guilt, basically suffers a nervous breakdown, and then is incessantly taunted by a demonic apparition.  The hallucinations continue up until the moment Ivan comes to find out, from his very religious brother Alyosha, that Smerdyakov has hanged himself.  This scene highlights a number of important topics beyond the moral realism I discussed in the first post, such as moral responsibility, free will, and even moral desert (I'll discuss this last topic in my next post).

As I mentioned before, beyond the fact that we do not need a god to ground moral values, we also don’t need a god or an afterlife to motivate us to behave morally either.  By cultivating moral virtues such as compassion, honesty, and reasonableness, and analyzing a situation using a rational assessment of as many facts as are currently accessible, we can maximize our personal satisfaction and thus our chances of living a fulfilling life.  Behavioral causal factors that support this goal are “good” and those that detract from it are “evil”.  Aside from these labels though, we actually experience a more or less pleasing life depending on our behaviors and therefore we do have real-time motivations for behaving morally (such as acting in ways that conform to various cultivated virtues).  Aristotle claimed this more than 2000 years ago and moral psychology has been confirming it time and time again.

Since we have evolved as a particular social species with a particular psychology, not only do we have particular behaviors that best accomplish a fulfilling life, but there are also various behavioral conditioning algorithms and punishment/reward systems that are best at modifying our behavior.  And this brings me to Smerdyakov.  By listening to Ivan and ultimately becoming convinced by Ivan’s philosophical arguments, he seems to have been conditioned out of his previous views on moral responsibility.  In particular, he ended up adopting the belief that if God does not exist, then anything is permissible.  Since he also rejected a belief in God, he therefore thought he could do whatever he wanted.

One thing this turn of events highlights is that there are a number of different factors that influence people's behaviors and that lead to their being reasoned into doing (or not doing) all sorts of things.  As Voltaire once said, "Those who can make you believe absurdities can also make you commit atrocities."  And I think what Ivan told Smerdyakov was in fact absurd — although it was a belief that I once held as well, not long after becoming an atheist.  The claim that anything is permissible if no God exists is quite obviously absurd for at least two types of reasons: pragmatic considerations and moral considerations (with the former overlapping with the latter).  Pragmatic reasons include things like not wanting to be fined, incarcerated, or even executed by a criminal justice system that operates to minimize illegal behaviors.  They include not wanting to be ostracized from your circle of friends, your social groups, or your community (and risking the loss of beneficial reciprocity, safety nets, etc.).  Moral reasons include everything that detracts from your overall psychological well-being, the very thing that is needed to live a maximally fulfilling life given one's circumstances.  Behaving in ways that degrade your sense of inner worth, your integrity, and your self-esteem, and that diminish a good conscience, is going to make you feel miserable compared to behaving in ways that positively impact these fundamental psychological goals.

Furthermore, this part of the story illustrates that we have a moral responsibility not only for ourselves and our own behavior, but also for how we influence the behavior of those around us, based on what we say, how we treat them, and more.  This also reinforces the importance of social contract theory and how it pertains to moral behavior.  If we follow simple behavioral heuristics like the Golden Rule and mutual reciprocity, then we can work together to obtain and secure common social goods such as various rights, equality, environmental sustainability, democratic legislation (ideally based on open moral deliberation), and various social safety nets.  We can also punish those that violate the social contract, as we already do with the criminal justice system and various kinds of social ostracization.  While our system of checks is far from perfect, having some system that serves such a purpose is necessary because not everybody behaves in ways that are ultimately beneficial to themselves or to everyone else around them.  People need to be conditioned to behave in ways that are more conducive to their own well-being and that of others, and if all reasonable efforts to achieve that fail, they may simply need to be quarantined through incarceration (for example, psychopaths or other violent criminals that society needs to be protected from, and that aren't responding to rehabilitation efforts).

In any case, we do have a responsibility to others, and that means we need to be careful what we say, as in the case of Ivan and his brother.  And this includes how we talk about concepts like free will, moral responsibility, and moral desert (justice).  If we tell people that all of their behaviors are determined and therefore don't matter, that's not a good way to get people to behave in ways that are good for them or for others.  Nor is telling them that, because a God doesn't exist, their actions don't matter.  In the case of deterministic nihilism, it's a way to get people to lose much if not all of their motivation to put forward effort in achieving useful goals.  And both deterministic and atheistic moral nihilism are dangerous ideas that can lead some people to commit heinous crimes such as mass shootings (or murdering their own father, as Smerdyakov did), because they cause people to think that all behaviors are on equal footing in any way that matters.  And quite frankly, those nihilistic ideas are not only dangerous but also absurd.

While I've written a bit on free will in the past, my views have become more refined over the years, and my overall attitude toward the issue has been co-evolving alongside my views on morality.  The crux of the free will issue is that libertarian free will is logically impossible, because our actions are never free from both determinism and indeterminism (randomness), since one or the other must underlie how our universe operates (depending on which interpretation of Quantum Mechanics is correct).  Neither option from this logical dichotomy gives us "the freedom to have chosen to behave differently given the same initial conditions in a non-random way".  Therefore free will in this sense is logically impossible.  However, this does not mean that our behavior isn't operating under some sets of rules and patterns that we can discover and modify.  That is to say, we can effectively reprogram many of our behavioral tendencies using various forms of conditioning through punishment/reward systems.  These are the same systems we use to teach children how to behave and to rehabilitate criminals.

The key thing to note here is that we need to acknowledge that even if we don’t have libertarian free will, we still have a form of “free will” that matters (as philosophers like Daniel Dennett have said numerous times) whereby we have the ability to be programmed and reprogrammed in certain ways, thus allowing us to take responsibility for our actions and design ways to modify future actions as needed.  We have more degrees of freedom than a person who is insane for example, or a child, or a dog, and these degrees of freedom or autonomy — the flexibility we have in our decision-making algorithms — can be used as a rough guideline for determining how “morally responsible” a person is for their actions.  That is to say, the more easily a person can be conditioned out of a particular behavior, and the more rational decision making processes are involved in governing that behavior, the more “free will” this person has in a sense that applies to a criminal justice system and that applies to most of our everyday lives.

In the end, it doesn't matter whether someone thinks that their behavior doesn't matter because there's no God, or because they have no libertarian free will.  What needs to be pointed out is the fact that we are able to behave in ways (or be conditioned to behave in ways) that lead to more happiness, more satisfaction, and more fulfilling lives.  And we are able to behave in ways that detract from this goal.  So which behaviors should we aim for?  I think the answer is obvious.  We also need to realize that ideas have consequences on others and their subsequent behaviors.  So we need to be careful about which ideas we choose to spread and to make sure that they are put into a fuller context.  If a person hasn't critically reflected on the consequences that may ensue from spreading their ideas to others, especially to others that may misunderstand them, then they need to keep those ideas to themselves until they've reflected on them more.  And this is something that I've discovered and applied for myself as well, as I was once far less careful about this than I am now.  In the next post, I'm going to talk about the concept of moral desert and how it pertains to free will.  This will be relevant to the scene described above regarding Ivan's demonic apparition that haunts him as a result of his guilt over Smerdyakov's murder of their father, as well as why the demonic apparition disappeared once Ivan heard that Smerdyakov had taken his own life.

“The Brothers Karamazov” – A Moral & Philosophical Critique (Part I)


I wanted to write some thoughts on Dostoyevsky's The Brothers Karamazov, and I may end up writing this in several parts.  I'm interested in some of the themes that Dostoyevsky develops, in particular those pertaining to morality, moral responsibility, free will, moral desert, and their connection to theism and atheism.  Since I'm not going to go over the novel in great detail, for those not already familiar with this story, please at least read the plot overview (here's a good link for that) before pressing on.

One of the main characters in this story, Ivan, is an atheist, as I am (though our philosophies differ markedly, as you'll come to find out as you read on).  In his interactions and conversations with his brother Alyosha, a very religious man, various moral concepts are brought up, including the dichotomy of good and evil, arguments against the existence of God (at least, against the existence of a loving God) such as the well-known Problem of Evil, and other ethical and religious quandaries.  I wanted to first talk about Ivan's insistence that good and evil cannot exist without God; since Ivan's character doesn't believe that God exists, he comes to the conclusion that good and evil do not exist either.  Although I'm an atheist, I disagree with Ivan's views here and will expand on why in a moment.

I've written a bit on my blog about various arguments against the existence of God that I've come across over the years, some of which I've formulated on my own after much reflection – and that were at least partially influenced by my former religious views as a born-again Protestant Christian.  Perhaps ironically, it wasn't until after I became an atheist that I began to delve much deeper into moral theory, and also into philosophy generally (though the latter is less surprising).  My views on morality have evolved in extremely significant ways since my early adult years.  For example, back when I was a Christian I was a moral objectivist/realist, believing that morals were indeed objective but only in the sense that they depended on what God believed to be right and wrong (even if these divine rules changed over time or seemed to contradict my own moral intuitions and analyses, such as the command to stone homosexuals to death).  Thus, I subscribed to some form of Divine Command Theory (or DCT).  After becoming an atheist, much like Ivan, I became a moral relativist (but only temporarily – keep reading), believing as the character Ivan did that good and evil couldn't exist due to their resting on a fictitious or at least non-demonstrable supernatural theological foundation, and/or (perhaps unlike Ivan believed) that good and evil may exist but only as cultural norms that were all equally valid.

Since then, I’ve become a moral realist once again (as I was when I was a Christian), after putting the philosophy of Ivan to the test (so to speak).  I realized that I could no longer justify the belief that any cultural moral norm had as equal of a claim to being true as any other cultural norm.  There were simply too many examples of moral prescriptions in various cultures and religions that couldn’t be justified.  Then I realized that many of the world’s cultural moral norms, though certainly not all of them, were largely universal (such as prohibitions against, at least certain forms of, stealing, killing, and others) which suggested a common human psychological component underlying many of them.

I also realized that as an atheist, much as Nietzsche realized, I now had to ground my own moral views on something that didn’t rely on Divine Command Theory, gods, Christian traditions, or any other foundation that I found to be invalid, illogical, unreasonable, unjustified, or not sufficiently demonstrated to be true.  And I had to do this if I was to find a way out of moral relativism, which simply didn’t sit well with me as it didn’t seem to be coherent with the bulk of human psychology and the more or less universal goals that humans strive to achieve in their lives.  It was ultimately the objective facts pertaining to human psychology that allowed me to resubscribe to an objectivist/realist morality — and now my views of morality were no longer contingent on merely the whim or dictates of some authoritarian god (thus bypassing the Euthyphro dilemma), but rather were contingent on objective facts about human beings, what makes us happy and fulfilled and what doesn’t (where these facts often disagree with moral prescriptions stemming from various religions and Divine-Command-Theory).

After dabbling with the teachings of various philosophers such as Aristotle, Kant, Mill, Rawls, Foot, and others, I came to accept a view of morality that was indeed coherent, sensible, sufficiently motivating to follow (which is a must), and which subsumed all the major moral theories into one framework (and which therefore had the best claim to being true since it was compatible with all of them –  virtue ethics, deontology, and consequentialism).   Now I’ve come to accept what can be described as a Goal Theory of Ethics, whereby morality is defined as “that which one ought to do above all else – when rational and maximally informed based on reason and evidence – in order to increase one’s personal life fulfillment and overall level of preference satisfaction”.  One could classify this as a subset of desire utilitarianism, but readers must be warned that this is NOT to be confused with traditional formulations of utilitarianism – such as those explicitly stated by J.S. Mill, Peter Singer, etc., as they are rife with problems resulting from not taking ALL consequences into account (such as consequences pertaining to one’s own character and how they see themselves as a person, as per the wisdom of Aristotle and Kant).

So how can good and evil exist without some God(s) existing?  That is to say, if a God doesn’t exist, how can it not be the case that “anything is permissible”?  Well, the short answer is – because of human psychology (and also social contract theory).

When people talk about behaving morally, what they really mean (when we peel back all the layers of cultural and religious rhetoric, mythology, narrative, etc.) is behaving in a way that maximizes our personal satisfaction – specifically our sense of life fulfillment.  Ask a Christian, or a Muslim, or a Humanist why they ought to behave in some particular way, and it can all be shown to break down to some form of human happiness or preference satisfaction for life fulfillment (not some hedonistic form of happiness).  They may say to behave morally "because then you can get into heaven, or avoid hell", or "because it pleases God", or what-have-you.  When you ask why THOSE reasons are important, it ultimately leads to "because it maximizes your chance of living a fulfilled life" (whether in this life or in the next, for those that believe in an afterlife).  I don't believe in any afterlife because there's no good evidence or reason to have such a belief, so for me the life that is most important is the one life we are given here on earth – which therefore must be cherished and not given secondary priority to a hypothetical life that may or may not be granted after death.

But regardless, whether you believe in an afterlife (as Alyosha does) or not (as in Ivan’s case), it is still about maximizing a specific form of happiness and fulfillment.  However, another place where Ivan seems to go wrong in his thinking is his conclusion that people only behave morally based on what they believe will happen to them in an afterlife.  And therefore, if there is no afterlife (immortal souls), then there is no reason to be moral.  The fact of the matter is though, in general, much of what we tend to call moral behavior actually produces positive effects on the quality of our lives now, as we live them.  People that behave immorally are generally not going to live “the good life” or achieve what Aristotle called eudaimonia.  On the other hand, if people actually cultivate virtues of compassion, honesty, and reasonableness, they will simply live more fulfilling lives.  And people that don’t do this or simply follow their immediate epicurean or selfish impulses will most certainly not live a fulfilling life.  So there is actually a naturalistic motivating force to behave morally, regardless of any afterlife.  Ivan simply overlooked this (and by extension, possibly Dostoyevsky as well), likely because most people brought up in Christianized cultures often focus on the afterlife as being the bearer of ultimate justice and therefore the ultimate motivator for behaving as they do.

In any case, the next obvious question to ask is: what ways of living best accomplish this goal of life fulfillment?  This is an empirical question, which means that science can in principle discover the answer and is the only reliable (or at least the most reliable) way of arriving at such answers.  While there is as of yet no explicit "science of morality", various branches of science such as psychology, sociology, anthropology, and neuroscience are discovering moral facts (or at least reasonable approximations of these facts, given what data we have obtained thus far).  Unless we as a society choose to formulate a science of morality — a laborious research project indeed — we will have to live with the best approximations to moral facts that are at our disposal as per the findings in psychology, sociology, neuroscience, etc.

So even if we don’t yet know with certainty what one ought to do in any and all particular circumstances (no situational ethical certainties), many scientific findings have increased our confidence in having discovered at least some of those moral facts or approximations of those facts (such as that slavery is morally wrong, because it doesn’t maximize the overall life satisfaction of the slaveholder, especially if he/she were to analyze the situation rationally with as many facts as are pragmatically at their disposal).  And to make use of some major philosophical fruits cultivated from the works of Hobbes, Locke, Rousseau, Hume, and Rawls (among many others), we have the benefits of Social Contract Theory to take into consideration.  In short, societies maximize the happiness and flourishing of the citizens contained therein by making use of a social contract – a system of rules and mutual expectations that ought to be enforced in order to accomplish that societal goal (and which ought to be designed in a fair manner, behind a Rawlsian veil of ignorance, or what he deemed the “original position”).  And therefore, to maximize one’s own chance of living a fulfilling life, one will most likely need to endorse some form of social contract theory that grants people rights, equality, protection, and so forth.

In summary, good and evil do exist despite there being no God, because human psychology is particular to our species and our biology, has a finite range of inputs and outputs, and therefore there are some sets of behaviors that will work better than others to maximize our happiness and overall life satisfaction given the situational circumstances that we find our lives embedded in.  What we call "good" and "evil" are simply the behaviors and causal events that "add to" or "detract from" our goal of living a fulfilling life.  The biggest source of disagreement among the various moral systems in the world (whether religiously motivated or not) is the different sets of "facts" that people subscribe to (some beliefs being based on sound reason and evidence whereas others are based on irrational faith, dogma, or emotions) and whether or not people are analyzing the actual facts in a rational manner.  A person may think they know what will maximize their chances of living a fulfilling life when in fact (much like the heroin addict that can't wait to get their next fix) they are wrong about the facts, and if they only knew so and acted rationally, they would do what they actually ought to do instead.

In my next post in this series, I'll examine Ivan's views on free will and moral responsibility, and how they relate to the unintended consequence of his influence: the actions of his half-brother Smerdyakov, who murders their father, Fyodor Pavlovich, as a result of Ivan's influence on his moral views.

Atheism, Morality, and Various Thoughts of the Day…


I'm sick of anti-intellectuals and the rest assuming that all atheists are moral nihilists, moral relativists, postmodernists, proponents of scientism, etc. 'Dat ain't the case. Some of us respect philosophy and understand full well that even science requires an epistemological, metaphysical, and ethical foundation in order to work at all and to ground all of its methodologies.  Some atheists are even keen on some form of panpsychism (like Chalmers' or Strawson's views).

Some of us even subscribe to a naturalistic worldview that holds onto meaning, despite the logical impossibility of libertarian free will (hint: it has to do with living a moral life, which means living a fulfilling life and maximizing one's satisfaction through a rational assessment of all the available information — which entails BAYESIAN reasoning — including a rational assessment of the information pertaining to one's own subjective experience of fulfillment and sustainable happiness). Some of us atheists/philosophical naturalists/what-have-you are moral realists as well and therefore reject relativism, believing that objective moral facts DO in fact exist (and therefore science can find them), even if many of those facts are entailed within a situational ethical framework. Some of us believe that at least some number of moral facts are universal, but this shouldn't be confused with moral absolutism, since both are merely independent subsets of realism. I find absolutism to be intellectually and morally repugnant and epistemologically unjustifiable.
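For clarity's sake, the Bayesian reasoning I have in mind here is nothing more exotic than the standard updating rule from probability theory (written below in LaTeX notation; the symbols H and E, for "hypothesis" and "evidence", are just my own illustrative shorthand, not anything specific to moral theory):

P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

In other words, how strongly one believes a claim about what will actually lead to fulfillment should track how well the available evidence fits that claim, weighted by its prior plausibility, rather than tracking gut intuition alone.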

Also, a note for any theists out there: when comparing arguments for and against the existence of a God or gods (and the "Divine Command Theory" that accompanies said belief), keep in mind that an atheist need only hold a minimalist position on the issue (soft atheism), and therefore the entire burden of proof lies on the theist to support their extraordinary claim(s) with an extraordinary amount of evidentiary weight. While I'm willing to justify a personal belief in hard atheism (the claim that "God does not exist"), the soft atheist need only point out that they lack a belief in God because no known proponent of theism has yet met the burden of proof for supporting their extraordinary claim that "God does exist". As such, any justified moral theory of what one ought to do (above all else), including but certainly not limited to who one votes for, how we treat one another, what fundamental rights we should have, etc., must be grounded on claims of fact that have met their burden of proof. Theism has not done this, and the theist can't simply say "Prove God doesn't exist", since this amounts to demanding proof of a null hypothesis, which is not possible (even though a null hypothesis can in principle be proven false). So rather than trying to unjustifiably shift the burden of proof onto the atheist, the theist must satisfy the burden of proof for their positive claim about the existence of a god(s).

A more general goal needed to save our a$$es from self-destruction is for more people to dabble in philosophy. I argue that it should even become a core part of educational curricula (especially education on minimizing logical fallacies/cognitive biases and education on moral psychology), to give us the best chance of living a life that is at least partially examined through internal rational reflection and discourse with those that are willing to engage with us, and to give us the best chance of surviving the existential crisis that humanity (and the many other species that share this planet with us) is in. We need more people to be encouraged to justify what they think they ought to do above all else.

Virtual Reality & Its Moral Implications


There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in the users of those games (most especially younger users with more impressionable minds).  These claims have been significantly validated by the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, as a result of accumulated research and meta-studies showing a strong link between violent video gaming and increased aggression, anti-social behavior, and sharp decreases in moral engagement and empathy.

Thus, the increasingly realistic nature of VR, and the steady increase in the capacities one has at their disposal within such a virtual space, are bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they too become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world?  Even if they don't increase their susceptibility to perform those behaviors, what does such a virtual escapade do to that person's moral character?  Are they more likely to condone those behaviors (even if they don't participate in them directly), or to condone other behaviors that have some kind of moral relevance or cognitive overlap with them?

On the flip side, what if it were possible to use VR as a therapeutic tool to help cure pedophilia or other behavioral problems?  What if one were able to simulate rape, pedophilia, or otherwise in order to reduce their chances of performing those acts in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as real instances of such crimes, which are horrific acts committed against real human beings.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent if not morally obligatory to do what we can to prevent such harms from taking place.  So even though this technology could lead the otherwise healthy user to engage in bad behaviors, it could also be used as an outlet of expression for those already afflicted with similar impulses.  Just as VR has been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them to overcome their ills, so too may VR possibly provide a cure for other types of mental illnesses and aggressive predispositions, such as those related to murder, sexual assault, etc.

Whether VR is used as an outlet for certain behaviors to prevent them from actually happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is to eventually no longer need any VR treatment at all), there are a few paths that could show promising results in decreasing crime and so forth.  But, beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition to limit the types of things people could do in these virtual spaces, that may be of some use, but beyond the fact that this kind of prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justified to implement.  If one chose to prohibit simulated rape and pedophilia (for example), but not prohibit murder or other forms of assault and violence, then what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad if not worse anyway).  It seems that instead we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning to see if at least some simulated activities should be prohibited while allowing others to persist unregulated.  On the other hand, if prohibiting this kind of activity is not practical or if it can only be implemented by infringing on certain liberties that we have good reasons to protect, then we need to think about some counter-strategies to either better inform people about these kinds of dangers and/or to make other VR products that help to encourage the right kinds of behaviors.

I can't think of a more valuable use for VR than to help us cultivate moral virtues and other behaviors that are conducive to our well-being and to our living a more fulfilled life.  This could include anything from reducing our prejudices and biases through exposure to various simulated "out-groups" (for example), to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and already have, to some degree) to work out various moral dilemmas and our psychological responses to them without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, thus making it incumbent upon us to talk more about these technologies as they continue to develop.

On Moral Desert: Intuition vs Rationality


So what exactly is moral desert?  Well, in a nutshell, it is what someone deserves as a result of their actions as defined within the framework of some kind of moral theory.  Generally when people talk about moral desert, it’s often couched in terms of punishment and reward, and our intuitions (whether innate or culturally inherited) often produce strong feelings of knowing exactly what kinds of consequences people deserve in response to their actions.  But if we think about our ultimate goals in implementing any reasonable moral theory, we should quickly recognize the fact that our ultimate moral goal is to have ourselves and everybody else simply abide by that moral theory.  And we want that in order to guide our behavior and the behavior of those around us in ways that are conducive to our well being.  Ultimately, we want ourselves and others to act in ways that maximize our personal satisfaction — and not in a hedonistic sense — but rather to maximize our sense of contentment and living a fulfilled life.

If we think about scenarios that seem to merit punishment or reward, it would be useful to keep our ultimate moral goal in mind.  The reason I mention this is because, in particular, our feelings of resentment toward those that have wronged us can often lead one to advocate for an excessive amount of punishment to the wrongdoer.  Among other factors, vengeance and retribution often become incorporated into our intuitive sense of justice.  Many have argued that retribution itself (justifying “proportionate” punishment by appealing to concepts like moral desert and justice) isn’t a bad thing, even if vengeance — which lacks inherent limits on punishment, involves personal emotions from the victim, and other distinguishing factors — is in fact a bad thing.  While thinking about such a claim, I think it’s imperative that we analyze our reasons for punishing a wrongdoer in the first place and then analyze the concept of moral desert more closely.

Free Will & Its Implications for Moral Desert

Another relevant topic I've written about in several previous posts is the concept of free will.  This is an extremely important concept to parse out here, because moral desert is most often intimately tied to the positive claim that we have free will.  That is to say, most concepts of moral desert, whereby it is believed that people deserve punishment and reward for actions that warrant it, fundamentally rely on the premise that people could have chosen to do otherwise but instead chose the path they did out of free choice.  While there are various versions of free will that philosophers have proposed, they all tend to revolve around some concept of autonomous agency.  The folk psychological conception of free will that most people subscribe to is some form of deliberation that is self-caused in some way, thus ruling out randomness or indeterminism as the "cause", since randomness can't be authored by the autonomous agent, and also ruling out non-randomness or determinism, since an unbroken chain of antecedent causes can't be authored by the autonomous agent either.

So as to avoid a long digression, I'm not going to expound upon all the details of free will and the various versions that others have proposed, but will only mention that the version most relevant to moral desert is generally some form of having the ability to have chosen to do otherwise (ignoring randomness).  Notice that because indeterminism and determinism form a logical dichotomy, these are the only two options that can possibly exist to describe the ontological underpinnings of our universe (in terms of causal laws that describe how the state of the universe changes over time).  Quantum mechanics is compatible with either of these two options, given the various interpretations therein that are all empirically identical with one another, but there is no third option available, so quantum mechanics doesn't buy us any room for this kind of free will either.  Since neither option can produce any form of self-caused or causa sui free will (sometimes referred to as libertarian free will), the intuitive concept of moral desert that relies on said free will is also rendered impossible if not altogether meaningless.  Therefore moral desert can only exist as a coherent concept if it no longer contains within it any assumption that the moral agent had the ability to have chosen to do otherwise (again, ignoring randomness).  So what does this realization imply for our preconceptions of justified punishment or even justice itself?

At the very least, the concept of moral desert that is involved in these other concepts needs to be reformulated or restricted given the impossibility, and thus the non-existence, of libertarian free will.  So if we are to say that people "deserve" anything at all morally speaking (such as a particular punishment), it can only be justified, let alone meaningful, in some other sense, such as a consequentialist goal that the implementation of the "desert" (in this case, the punishment) effectively accomplishes.  Punishing the wrongdoer can no longer be a means of their getting their due, so to speak, but rather needs to be justified by some other purpose such as rehabilitation, future crime deterrence, and/or restitution for the victim (to compensate for physical damages, property loss, etc.).  With respect to this latter factor, restitution, there is plenty of wiggle room here for some to argue for punishment on the grounds of it simply making the victim feel better (which I suppose we could call a form of psychological restitution).  People may try to justify some level of punishment based on making the victim feel better, but vengeance should be avoided at all costs, and one needs to carefully consider what justifications are sufficient (if any) for punishing another with the intention of simply making the victim feel better.

Regarding psychological restitution, it’s useful to bring up the aforementioned concepts of retribution and vengeance, and appreciate the fact that vengeance can easily result in cases where no disinterested party performs the punishment or decides its severity, and instead the victim (or another interested party) is involved with these decisions and processes.  Given the fact that we lack libertarian free will, we can also see how vengeance is not rationally justifiable and therefore why it is important that we take this into account not only in terms of society’s methods of criminal behavioral correction but also in terms of how we behave toward others that we think have committed some wrongdoing.

Deterrence & Fairness of Punishment

As for criminal deterrence, I was thinking about this concept the other day and thought of a possible conundrum concerning its justification (certain forms of deterrence, anyway).  If a particular punishment is agreed upon within some legal system on the grounds that it will be sufficient to rehabilitate the criminal (and compensate the victim sufficiently), and an additional amount of punishment is tacked on to it merely to serve as a more effective deterrent, then it seems that the addition would lack justification with respect to treating the criminal in a fair manner.

To illustrate this, consider the following: if the criminal commits the crime, they are in one of two possible epistemic states — either they knew beforehand about the punishment that would follow from committing the crime, or they didn't.  If they didn't know this, then the deterrence addition of the punishment wouldn't have had the opportunity to perform its intended function on the potential criminal, in which case the criminal would be given a harsher sentence than is necessary to rehabilitate them (and to compensate the victim), which should be the sole purpose of punishing them in the first place (to "right" a "wrong" and to minimize behavioral recurrences).  How could this be justified in terms of what is a fair and just treatment of the criminal?

And then, on the other hand, if they did know the degree of punishment that would follow committing such a crime, but they committed the crime anyway, then the deterrence addition of the punishment failed to perform its intended function even if it had the opportunity to do so.  This would mean that the criminal is, once again, given a punishment that is harsher than what is needed to rehabilitate them (and to compensate the victim).

Now one could argue in the latter case that there are other types of justification to ground the harsher deterrence addition of the punishment.  For example, one could argue that the criminal knew beforehand what the consequences would be, so they can’t plead ignorance as in the first example.  But even in the first example, it was the fact that the deterrence addition was never able to perform its function that turned out to be most relevant even if this directly resulted from the criminal lacking some amount of knowledge.  Likewise, in the second case, even with the knowledge at their disposal, the knowledge was useless in actualizing a functional deterrent.  Thus, in both cases the deterrent failed to perform its intended function, and once we acknowledge that, then we can see that the only purposes of punishment that remain are rehabilitation and compensation for the victim.  One could still try and argue that the criminal had a chance to be deterred, but freely chose to commit the crime anyway so they are in some way more deserving of the additional punishment.  But then again, we need to understand that the criminal doesn’t have libertarian free will so it’s not as if they could have done otherwise given those same conditions, barring any random fluctuations.  That doesn’t mean we don’t hold them responsible for their actions — for they are still being justifiably punished for their crime — but it is the degree of punishment that needs to be adjusted given our knowledge that they lack libertarian free will.

Now one could further object and say that the deterrence addition of the punishment isn't intended solely for the criminal under our consideration but also for other possible future criminals that may be successfully deterred from the crime by such a deterrence addition (even if this criminal was not).  Regardless of this pragmatic justification, that argument still doesn't justify punishing the criminal in such a way if we are to treat the criminal fairly based on their actions alone.  If we bring other possible future criminals into the justification, then the criminal is being punished not only for their wrongdoing but in excess of it for hypothetical reasons concerning other hypothetical offenders — which is not at all fair.  So we can grant the fact that some may justify these practices on pragmatic consequentialist grounds, but they aren't consistent with a Rawlsian conception of justice as fairness.  This means they also aren't consistent with many anti-consequentialist views (such as Kantian deontology, for example) that often promote strong conceptions of justice and moral desert in their ethical frameworks.

Conclusion

In summary, I wanted to reiterate the fact that even if our intuitive conceptions of moral desert and justice sometimes align with our rational moral goals, they often lack rational justification and thus often serve to inhibit the implementation of any kind of rational moral theory.  They often produce behaviors that are vengeful, malicious, sadistic, and most often counter-productive to our actual moral goals.  We need to incorporate into our moral framework the fact that libertarian free will does not (and logically cannot) exist, so that we can better strive to treat others fairly even if we still hold people responsible in some sense for their actions.

We can still hold people responsible for their actions (and ought to) by replacing the concept of libertarian free will with a conception of free will that is consistent with the laws of physics, with psychology, and with neurology; for example, by proposing that people’s degree of “free will” with respect to some action is inversely proportional to the degree of conditioning needed to modify that behavior.  That is to say, the level of free will that we have with respect to some kind of behavior is related to our ability to be programmed and reprogrammed such that the behavior can (at least in principle) be changed.
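One way to state that relationship more explicitly (purely illustrative notation, not a formalism the argument depends on): if C(a) is the amount of conditioning or reconditioning required to modify a given behavior a, and F(a) is the degree of “free will” attributed to an agent with respect to that behavior, then roughly

```latex
F(a) \;\propto\; \frac{1}{C(a)}
```

so that highly plastic behaviors (small C) count as relatively “free,” while deeply entrenched behaviors that resist conditioning (large C) count as barely free at all.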

Our punishment-reward systems, then (whether in legal, social, or familial domains), should treat others as responsible agents only insofar as doing so protects the members of that society (or group) from harm and induces behaviors that are conducive to our physical and psychological well-being — which is the very purpose of having any reasonable moral theory (one that is sufficiently motivating to follow) in the first place.  Anything that goes above and beyond what is needed to accomplish this is excessive and therefore not morally justified.  Following this logic, we should see that many types of punishment, including, for example, the death penalty, are entirely unjustified in terms of our moral goals and the strategies of punishment that we should implement to accomplish those goals.  As the saying goes, an eye for an eye makes the whole world blind, and thus barbaric practices such as inflicting pain or suffering (or death sentences) simply to satisfy some intuitions need to be abolished and replaced with an enlightened system that relies on rational justification rather than intuition.  Only then can we put the primitive, inhumane moral systems of the past to rest once and for all.

We need to work with our psychology (not only the common trends among most human beings but also our individual idiosyncrasies) and thus work under the premise of our varying degrees of autonomy and behavioral plasticity.  Only then can we maximize our chances and optimize our efforts in attaining fulfilling lives for as many people as possible living in a society.  It is our intuitions (products of evolution and culture) that we must be careful of, as they can lead us astray (and often have throughout history) toward committing various moral atrocities.  All we can do is try to overcome these moral handicaps as best we can through reason and rationality, but we have to acknowledge that these problems exist before we can face them head on and subsequently engineer the right kinds of societal changes to successfully reach our moral goals.

On Parfit’s Repugnant Conclusion

with 54 comments

Back in 1984, Derek Parfit’s work on population ethics led him to formulate a possible consequence of total utilitarianism, namely what he called the Repugnant Conclusion (RC).  For those unfamiliar with the term, Parfit described it as follows:

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

To better explain this conclusion, let’s consider a few different populations, A, A+, and B, where the width of the bar represents the relative population and the height represents the average “well-being rating” of the people in that sub-population:

[Figure 2: image taken from http://plato.stanford.edu/entries/repugnant-conclusion/fig2.png]

In population A, let’s say we have 10 billion people with a “well-being rating” of +10 (on a scale of -10 to +10, with negative values indicating a life not worth living).  Now in population A+, let’s say we have all the people in population A with the mere addition of 10 billion people with a “well-being rating” of +8.  According to the argument as Parfit presents it, it seems reasonable to hold that population A+ is better than population A, or at the very least not worse than population A.  This is believed to be the case because the only difference between the two populations is the mere addition of more people with lives worth living (even if their well-being isn’t as good as that of the people in the “A” population), since it is believed that adding additional lives worth living cannot make an outcome worse when all else is equal.

Next, consider population B, which has the same number of people as population A+, but where every person has a “well-being rating” that is slightly higher than the average “well-being rating” in population A+, and slightly lower than that of population A.  Now if one accepts that population A+ is better than A (or at least not worse), and if one accepts that population B is better than population A+ (since it has a higher average well-being), then one has to accept the conclusion that population B is better than population A (by transitivity: A <= A+ < B, therefore A < B).  If this is true, then we can take this further and show that a sufficiently large population would still be better than population A, even if the “well-being rating” of each person was only +1.  This is the RC as presented by Parfit, and he, along with most philosophers, found it to be unacceptable.  He worked diligently on trying to solve it, but never succeeded in the way he had hoped.  This has since become one of the biggest problems in ethics, particularly within population ethics.
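To make the arithmetic behind that chain of reasoning concrete, here is a minimal sketch (using hypothetical numbers consistent with the example above, plus a made-up population Z of lives barely worth living) of how a purely total view of well-being generates the RC:

```python
# A minimal sketch (hypothetical numbers consistent with the example above, plus a
# made-up population Z of lives "barely worth living") showing how a purely total
# view of well-being generates the Repugnant Conclusion.

BILLION = 10**9

def total_wellbeing(groups):
    """Sum over sub-groups of (number of people * average well-being rating)."""
    return sum(size * rating for size, rating in groups)

A      = [(10 * BILLION, 10)]                      # 10 billion people at +10
A_plus = [(10 * BILLION, 10), (10 * BILLION, 8)]   # A plus 10 billion people at +8
B      = [(20 * BILLION, 9.5)]                     # same size as A+, everyone at +9.5
Z      = [(250 * BILLION, 1)]                      # a vast population, lives barely worth living

for name, population in [("A", A), ("A+", A_plus), ("B", B), ("Z", Z)]:
    print(name, total_wellbeing(population) / BILLION)
# Totals (in billions of "well-being units"): A = 100, A+ = 180, B = 190, Z = 250,
# so on the total view Z beats B, which beats A+, which beats A.
```

On a total view Z comes out ahead of A despite every one of its lives being barely worth living, which is exactly the conclusion Parfit found repugnant.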

Some of the strategies that have been put forward to resolve the RC include adopting an average principle, a variable value principle, or some kind of critical level principle.  However, all of these supposed resolutions are either fraught with their own problems (if accepted) or they are highly unsatisfactory, unconvincing, or very counter-intuitive.  A brief overview of the argument and the supposed solutions and their associated problems can be found here.

I’d like to respond to the RC argument as well because I think that there are at least a few problems with the premises right off the bat.  The foundation for my rebuttal relies on an egoistic moral realist ethics, based on a goal theory of morality (a subset of desire utilitarianism), which can be summarized as follows:

If one wants X above all else, then one ought to do Y above all else.  Since it can be shown that what one ultimately wants above all else is satisfaction and fulfillment with one’s life (or what Aristotle referred to as eudaimonia), one ought, above all else, to take whatever actions will best achieve that goal.  The actions required to best accomplish this satisfaction can be determined empirically (based on psychology, neuroscience, sociology, etc.), and therefore we theoretically have epistemic access to a number of moral facts.  These moral facts are what we ought to do above all else in any given situation, given all the facts available to us and a rational assessment of those facts.

So if one is trying to decide whether one population is better or worse than another, I think that assessment should be based on the same egoistic moral framework, one which accounts for all known consequences resulting from particular actions and which implements “altruistic” behaviors precipitated by cultivating virtues that benefit everyone including ourselves (such as compassion, integrity, and reasonableness).  So in the case of evaluating the comparison between population A and population A+ as presented by Parfit, which is better?  Well, if one applies something like Rawls’s veil of ignorance (which builds on the social contract tradition of philosophers such as Hobbes, Locke, Rousseau, and Kant), whereby we would hypothetically choose between worlds not knowing which subpopulation we would end up in, which world ought we to prefer?  It would stand to reason that population A is certainly better than A+ (and better than population B), because any person chosen at random has the highest probability of enjoying a high level of well-being in that population/society.  This reasoning would then render the RC false, as it only followed from fallacious reasoning (i.e. it is fallacious to assume that adding more people with lives worth living is all that matters in the assessment).
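As a rough illustration of that “random person” comparison (hypothetical numbers again, and my own framing rather than anything spelled out by Rawls), the expected well-being of a randomly chosen individual can be computed for each population:

```python
# A rough illustration (hypothetical numbers, my own framing) of the "random person"
# comparison: behind a veil of ignorance, prefer the population with the highest
# expected well-being for a randomly chosen individual.

BILLION = 10**9

def expected_wellbeing(groups):
    """Mean well-being of a person drawn uniformly at random from the population."""
    people = sum(size for size, _ in groups)
    return sum(size * rating for size, rating in groups) / people

A      = [(10 * BILLION, 10)]
A_plus = [(10 * BILLION, 10), (10 * BILLION, 8)]
B      = [(20 * BILLION, 9.5)]

print(expected_wellbeing(A))       # 10.0
print(expected_wellbeing(A_plus))  #  9.0
print(expected_wellbeing(B))       #  9.5
# Ranked this way, A > B > A+, so the first step of the chain that generates
# the Repugnant Conclusion (A+ being at least as good as A) never gets going.
```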

Another fundamental flaw that I see in the premises is the assumption that population A+ contains the same population of high-well-being individuals as in A, with the mere addition of people with a somewhat lower level of well-being.  If the higher-well-being subpopulation of A+ has knowledge of the existence of other people in that society with lower well-being, wouldn’t that likely lead to a decrease in their own well-being (compared to those in the A population, who have no such concern)?  It would seem that the only way around this is if the higher-well-being people were ignorant of those other members of society, or if there were other factors that were not equal between the two high-well-being subpopulations in A and A+ that somehow compensated for that concern, in which case the mere addition assumption is false, since the hypothetical scenario would involve a number of differences between the two higher-well-being populations.  And if the higher-well-being subpopulation in A+ is indeed ignorant of the existence of the lower-well-being subpopulation in A+, then they are not able to properly assess the state of the world, which would certainly factor into their overall level of well-being.

In order to properly assess this and to behave morally at all, one needs to use as many facts as are practical to obtain and operate according to those facts as rationally as possible.  It would seem plausible that the better-off subpopulation of A+ would have at least some knowledge of the fact that there exist people with less well-being than themselves, and this ought to decrease their happiness and overall well-being when all else is truly equal compared to A.  But even if the subpopulation can’t know this for some reason (i.e. if the subpopulations are completely isolated from one another), we do have this knowledge and thus must take account of it in our assessment of which population is better.  So it seems that the comparison of population A to A+ as stated in the argument is an erroneous one, based on fallacious assumptions that don’t account for these factors pertaining to the well-being of informed people.

Now I should say that if we had knowledge pertaining to the future of both societies, we could wind up reversing our preference if, for example, it turned out that population A’s future was likely to be worse than that of population A+ (i.e. where the most probable “well-being rating” decreased by comparison).  If this were the case, then being armed with that probabilistic knowledge of the future (based on a Bayesian analysis of likely future outcomes) could force us to switch preferences.  Ultimately, the way to determine which world we ought to prefer is to obtain the relevant facts about which outcome would make us most satisfied overall (in the eudaimonia sense), even if this requires further scientific investigation regarding human psychology to determine the optimal trade-off between present and future well-being.
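One rough way to formalize that kind of comparison (my own notation, purely illustrative): weight each population’s plausible futures by their probabilities given the available evidence and compare the expectations,

```latex
\mathbb{E}[W_{\mathrm{pop}}] \;=\; \sum_{f} P(f \mid \text{evidence}) \, W_{\mathrm{pop}}(f)
```

where f ranges over a population’s plausible futures and W_pop(f) is the well-being rating it would have in future f; the population with the higher expected value is the one we ought to prefer, and new evidence about those futures can flip the ranking.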

As for comparing two populations that have the same probability of high well-being, yet different sizes (say “A” and “double A”), I would argue once again that one should assess those populations based on what the most likely future is for each, given the facts available to us.  If the larger population is more likely to be unsustainable, for example, then it stands to reason that the smaller population is what one ought to strive for (and thus prefer) over the larger one.  However, if sustainability is not likely to be an issue based on the contingent facts of the societies being evaluated, then I think one ought to choose the society that has the best chance of bettering the planet as a whole through maximized stewardship over time.  That is to say, if more people can more easily accomplish the goal of making the world a better place, then the larger population would be what one ought to strive for, since it would secure more well-being in the future for any and all conscious creatures (including ourselves).  One would have to evaluate the societies being compared for these types of factors and then make the decision accordingly.  In the end, this would maximize eudaimonia for any individual chosen at random, both in the present society and in the future.

But what if we are instead comparing two populations that both have “well-being ratings” that are negative?  For example, what if we compare a population S containing only one person with a well-being rating of -10 (the worst possible suffering imaginable) against another population T containing one million people with well-being ratings of -9 (almost the worst possible suffering)?  It sure seems that if we apply the probabilistic principle I applied to the positive well-being populations, that would lead to preferring a world with a million people suffering horribly over a world with just one person suffering a bit more.  However, this would only necessarily follow if one applied the probabilistic principle while ignoring the egoistically based “altruistic” virtues, such as compassion and reasonableness, as they pertain to that moral decision.  In order to determine which world one ought to prefer over another, just as in any other moral decision, one must determine what behaviors and choices make us most satisfied as individuals (to best obtain eudaimonia).  If people are generally more satisfied (perhaps universally, once fully informed of the facts and reasoning rationally) in preferring to have one person suffer at a -10 level over one million people suffering at a -9 level (even if it were you or me chosen as that single person), then that is the world one ought to prefer over the other.
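To see that tension in numbers (a quick sketch with the hypothetical populations above; the figures and helper functions are mine, purely for illustration):

```python
# A quick sketch of the tension described above, using the hypothetical
# populations S (one person at -10) and T (one million people at -9).

S = [(1, -10)]
T = [(1_000_000, -9)]

def expected(groups):
    """Mean well-being of a randomly chosen person."""
    people = sum(size for size, _ in groups)
    return sum(size * rating for size, rating in groups) / people

def total(groups):
    """Total well-being summed over everyone."""
    return sum(size * rating for size, rating in groups)

print(expected(S), expected(T))  # -10.0 -9.0     -> the "random person" rule prefers T
print(total(S), total(T))        # -10 -9000000   -> total suffering is vastly greater in T
# The bare probabilistic principle favors T, which is why (as argued above) it has to
# be checked against the egoistically grounded virtues such as compassion.
```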

Once again, our preference could be reversed if we were informed that the most likely futures of these populations had their levels of suffering reversed or changed markedly.  And if the scenario changed to, say, one million people at a -10 level versus two million people at a -9 level, our preferred outcome may change as well, even if we don’t yet know what that preference ought to be (i.e. if we’re ignorant of some facts pertaining to our psychology at present, we may think we know even though we are incorrect, due to irrational thinking or some missing facts).  As always, the decision of which population or world is better depends on how much knowledge we have pertaining to those worlds (to make the most informed decision we can given our present epistemological limitations) and thus on our assessment of their present and most likely future states.  So even if we don’t yet know which world we ought to prefer right now (in some subset of the thought experiments we conjure up), science can find these answers (or at least give us the best shot at answering them).