Back in 1984, Derek Parfit’s work on population ethics led him to formulate a possible consequence of total utilitarianism, which he dubbed the Repugnant Conclusion (RC). For those unfamiliar with the term, Parfit described it as follows:
For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.
To better explain this conclusion, let’s consider a few different populations, A, A+, and B, where the width of the bar represents the relative population and the height represents the average “well-being rating” of the people in that sub-population:
Image taken from http://plato.stanford.edu/entries/repugnant-conclusion/fig2.png
In population A, let’s say we have 10 billion people with a “well-being rating” of +10 (on a scale of -10 to +10, with negative values indicating a life not worth living). Now in population A+, let’s say we have all the people in population A with the mere addition of 10 billion people with a “well-being rating” of +8. According to the argument as Parfit presents it, it seems reasonable to hold that population A+ is better than population A, or at the very least, not worse than population A. This is believed to be the case because the only difference between the two populations is the mere addition of more people with lives worth living (even if their well-being isn’t as good as that of the people in the “A” population), so it is believed that adding additional lives worth living cannot make an outcome worse when all else is equal.
Next, consider population B, which has the same number of people as population A+, but where every person has a “well-being rating” that is slightly higher than the average rating in population A+ and slightly lower than that of population A. Now if one accepts that population A+ is better than A (or at least not worse), and if one accepts that population B is better than population A+ (since its average well-being is higher), then one has to accept the conclusion that population B is better than population A (by transitivity; A ≤ A+ < B, therefore A < B). If this is true then we can take this further and show that a sufficiently large population would still be better than population A, even if the “well-being rating” of each person was only +1. This is the RC as presented by Parfit, and he, along with most philosophers, found it unacceptable. He worked diligently on trying to solve it, but never succeeded in the way he had hoped. It has since become one of the biggest problems in ethics, particularly in the branch of population ethics.
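To make the arithmetic behind the total view explicit, here is a minimal sketch (with illustrative numbers that go beyond what Parfit specifies, aside from the 10 billion starting point) showing how a total tally of well-being ranks these populations, including a much larger hypothetical population “Z” whose members are barely above the neutral level:

```python
# Illustrative numbers only: sizes in billions of people, well-being on a -10..+10 scale.
def total_wellbeing(groups):
    """Total view: sum of (population size * well-being) across sub-populations."""
    return sum(size * wellbeing for size, wellbeing in groups)

A      = [(10, 10)]            # 10 billion people at +10
A_plus = [(10, 10), (10, 8)]   # A plus 10 billion more people at +8
B      = [(20, 9.5)]           # same size as A+, everyone at +9.5
Z      = [(120, 1)]            # a much larger population, lives barely worth living

for name, pop in [("A", A), ("A+", A_plus), ("B", B), ("Z", Z)]:
    print(name, total_wellbeing(pop))
# A 100, A+ 180, B 190.0, Z 120 -- on the total view even Z outranks A,
# which is the Repugnant Conclusion.
```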
Some of the strategies that have been put forward to resolve the RC include adopting an average principle, a variable value principle, or some kind of critical level principle. However, all of these supposed resolutions are either fraught with their own problems (if accepted) or they are highly unsatisfactory, unconvincing, or very counter-intuitive. A brief overview of the argument, the proposed solutions, and their associated problems can be found here.
I’d like to respond to the RC argument as well because I think that there are at least a few problems with the premises right off the bat. The foundation for my rebuttal relies on an egoistic moral realist ethics, based on a goal theory of morality (a subset of desire utilitarianism), which can be summarized as follows:
If one wants X above all else, then one ought to do Y above all else. Since it can be shown that ultimately what one wants above all else is satisfaction and fulfillment with one’s life (or what Aristotle referred to as eudaimonia), one ought, above all else, to take whatever actions will best achieve that goal. The actions required to best accomplish this satisfaction can be determined empirically (based on psychology, neuroscience, sociology, etc.), and therefore we theoretically have epistemic access to a number of moral facts. These moral facts are what we ought to do above all else in any given situation, given all the facts available to us and a rational assessment of said facts.
So if one is trying to choose whether one population is better or worse than another, I think that assessment should be based on the same egoistic moral framework, which accounts for all known consequences resulting from particular actions and which implements “altruistic” behaviors precipitated by cultivating virtues that benefit everyone including ourselves (such as compassion, integrity, and reasonableness). So in the case of evaluating the comparison between population A and A+ as presented by Parfit, which is better? Well, if one applies the veil of ignorance (the device John Rawls developed out of the social contract tradition of philosophers such as Kant, Hobbes, Locke, and Rousseau), whereby we would hypothetically choose between worlds not knowing which subpopulation we would end up in, which world ought we to prefer? It would stand to reason that population A is certainly better than A+ (and better than population B) because one has the highest probability of having a higher level of well-being in that population/society (for any person chosen at random). This reasoning would then render the RC false, as it only followed from fallacious reasoning (i.e. it is fallacious to assume that adding more people with lives worth living is all that matters in the assessment).
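As a rough illustration of this probabilistic reading of the veil of ignorance (again a sketch using the same illustrative numbers, not anything from Parfit’s original presentation), one can compare the expected well-being of a person chosen at random from each population:

```python
# Same illustrative numbers as above; now compare a randomly chosen person's prospects.
def expected_wellbeing(groups):
    """Population-weighted average well-being over (size, well-being) pairs."""
    total_people = sum(size for size, _ in groups)
    return sum(size * wellbeing for size, wellbeing in groups) / total_people

A      = [(10, 10)]
A_plus = [(10, 10), (10, 8)]
B      = [(20, 9.5)]

for name, pop in [("A", A), ("A+", A_plus), ("B", B)]:
    print(name, expected_wellbeing(pop))
# A 10.0, A+ 9.0, B 9.5 -- a person chosen at random fares best in A,
# so this heuristic ranks A above both A+ and B.
```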
Another fundamental flaw that I see in the premises is the assumption that population A+ contains the same population of high well-being individuals as in A, with the mere addition of people with a somewhat lower level of well-being. If the higher well-being subpopulation of A+ has knowledge of the existence of other people in that society with a lower well-being, wouldn’t that likely lead to a decrease in their well-being (compared to those in the A population that had no such concern)? It would seem that the only way around this is if the higher well-being people were ignorant of those other members of society, or if there were other factors that were not equal between the two high-well-being subpopulations in A and A+ to somehow compensate for that concern. In the latter case, the mere addition assumption is false, since the hypothetical scenario would involve a number of differences between the two higher well-being populations. If the higher well-being subpopulation in A+ is indeed ignorant of the existence of the lower well-being subpopulation in A+, then they are not able to properly assess the state of the world, which would certainly factor into their overall level of well-being.
In order to properly assess this and to behave morally at all, one needs to use as many facts as are practical to obtain and operate according to those facts as rationally as possible. It seems plausible that the better-off subpopulation of A+ would have at least some knowledge of the fact that there exist people with less well-being than themselves, and this ought to decrease their happiness and overall well-being, when all else is truly equal, compared to A. But even if that subpopulation can’t know this for some reason (i.e. if the subpopulations are completely isolated from one another), we do have this knowledge and thus must take account of it in our assessment of which population is better than the other. So it seems that the comparison of population A to A+ as stated in the argument is an erroneous one, based on fallacious assumptions that don’t account for these factors pertaining to the well-being of informed people.
Now I should say that if we had knowledge pertaining to the future of both societies we could wind up reversing our preference if, for example, it turned out that population A had a future that was likely going to turn out worse than the future of population A+ (where the most probable “well-being rating” decreased comparatively). If this was the case, then being armed with that probabilistic knowledge of the future (based on a Bayesian analysis of likely future outcomes) could force us to switch preferences. Ultimately, the way to determine which world we ought to prefer is to obtain the relevant facts about which outcome would make us most satisfied overall (in the eudaimonia sense), even if this requires further scientific investigation regarding human psychology to determine the optimized trade-off between present and future well-being.
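To picture how such probabilistic knowledge could flip the ranking, here is a toy sketch; the probabilities and outcomes are entirely made up for illustration and are not derived from the thought experiment itself:

```python
# Entirely hypothetical forecasts: each entry is (probability, average well-being
# in that possible future). The numbers are made up purely for illustration.
futures_A      = [(0.6, 4), (0.4, 10)]   # suppose A is judged likely to decline
futures_A_plus = [(0.8, 9), (0.2, 7)]    # suppose A+ is judged likely to hold steady

def expected_future_wellbeing(futures):
    """Probability-weighted average of the forecast well-being levels."""
    return sum(p * wellbeing for p, wellbeing in futures)

print(round(expected_future_wellbeing(futures_A), 2))       # 6.4
print(round(expected_future_wellbeing(futures_A_plus), 2))  # 8.6 -- the forecast can flip the preference
```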
As for comparing two populations that have the same probability for high well-being, yet with different population sizes (say “A” and “double A”), I would argue once again that one should assess those populations based on what the most likely future is for each population given the facts available to us. If the larger population is more likely to be unsustainable, for example, then it stands to reason that the smaller population is what one ought to strive for (and thus prefer) over the larger one. However, if sustainability is not likely to be an issue based on the contingent facts of the societies being evaluated, then I think one ought to choose the society that has the best chances of bettering the planet as a whole through maximized stewardship over time. That is to say, if more people can more easily accomplish the goal of making the world a better place, then the larger population would be what one ought to strive for, since it would secure more well-being in the future for any and all conscious creatures (including ourselves). One would have to evaluate the societies being compared for these types of factors and then make the decision accordingly. In the end, it would maximize the eudaimonia of any individual chosen at random, both in that present society and in the future.
But what if we are instead comparing two populations that both have “well-being ratings” that are negative? For example, what if we compare a population S containing only one person that has a well-being rating of -10 (the worst possible suffering imaginable) versus another population T containing one million people that have well-being ratings of -9 (almost the worst possible suffering)? It sure seems that if we apply the probabilistic principle I applied to the positive well-being populations, that would lead to preferring a world with a million people suffering horribly over a world with just one person suffering a bit more. However, this would only necessarily follow if one applied the probabilistic principle while ignoring the egoistically-based “altruistic” virtues, such as compassion and reasonableness, as they pertain to that moral decision. In order to determine which world one ought to prefer over another, just as in any other moral decision, one must determine what behaviors and choices make us most satisfied as individuals (to best obtain eudaimonia). If people are generally more satisfied (perhaps universally, once fully informed of the facts and reasoning rationally) in preferring to have one person suffer at a -10 level over one million people suffering at a -9 level (even if it were you or I chosen as that single person), then that is the world one ought to prefer over the other.
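For this negative case, the tension can be seen by computing both the per-random-person figure and the total amount of suffering; this is only a sketch using the numbers from the hypothetical above:

```python
# Numbers from the hypothetical: S is one person at -10, T is one million people at -9.
S = [(1, -10)]
T = [(1_000_000, -9)]

def per_random_person(groups):
    """Expected well-being of one person drawn at random."""
    n = sum(size for size, _ in groups)
    return sum(size * w for size, w in groups) / n

def total_suffering(groups):
    """Summed well-being across everyone (more negative = more total suffering)."""
    return sum(size * w for size, w in groups)

print(per_random_person(S), total_suffering(S))  # -10.0 -10
print(per_random_person(T), total_suffering(T))  # -9.0 -9000000
# The random-person heuristic alone favors T, while the sheer quantity of suffering
# in T is vastly greater -- which is why the heuristic can't be applied in isolation
# from virtues like compassion and reasonableness.
```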
Once again, our preference could be reversed if we were informed that the most likely futures of these populations had their levels of suffering reversed or changed markedly. And if the scenario changed to, say, 1 million people at a -10 level versus 2 million people at a -9 level, our preferred outcomes may change as well, even if we don’t yet know what that preference ought to be (i.e. if we’re ignorant of some facts pertaining to our psychology at the present, we may think we know, even though we are incorrect due to irrational thinking or some missing facts). As always, the decision of which population or world is better depends on how much knowledge we have pertaining to those worlds (to make the most informed decision we can given our present epistemological limitations) and thus our assessment of their present and most likely future states. So even if we don’t yet know which world we ought to prefer right now (in some subset of the thought experiments we conjure up), science can find these answers (or at least give us the best shot at answering them).
Nice post Lage. I have two questions – one directly related and one tangentially related.
1. Given your probabilistic veil of ignorance resolution, and given the practical impossibility of a world with equal well-being across the entire population, would it stand to reason that the best world would then be the one with a single person whose disposition is such that their well-being is maximized in the absence of others?
2. I am inclined toward something like the moral framework you’ve proposed, but I had been previously convinced that “realism” is traditionally only ascribed to moral theories which assert that the truth values of moral claims are mind-independent and, as I see it, “egoistic moral realist ethics” does not make such an assertion. Have I misunderstood the proper application of “realism” to moral theories, or have I misunderstood the nature of the theory you support?
Hi Travis, and thanks for commenting! I’ll see if I can answer your questions satisfactorily.
1) Keep in mind that the probabilistic veil of ignorance is simply one of the factors in making the assessment, effectively a heuristic that selects the best probability of high well-being given no other information (as in the thought experiment). But one also needs to consider human psychology in general (as the facts become available), including which virtues we should cultivate and implement to make us most satisfied (which happens to benefit others as well, since we’ve evolved as a social species), etc. That is to say, the moral decision depends on what will make us most satisfied as the thought experimenter, given our knowledge of what will make the population within the thought experiment the most satisfied. And I should stress that one may think they know what will make them most satisfied (like a drug addict looking for another hit), but if they were in fact better informed of the actual facts and assessed them rationally, they would arrive at a different conclusion. So one must try their best to ascertain as many verifiable facts as possible and act accordingly based on those facts, and this includes preferring a world where the people in that world do the same.
So you asked whether I think it would stand to reason that the best world would be one with a single person whose well-being can be maximized in the absence of others. My initial response is that I doubt that’s a realistic possibility, given human nature, including the fact that we’re a social species whose psychological health has been shown to be largely dependent on having plentiful interactions with other humans. Now I’ll grant that hypothetically, if it were possible to choose a world with one person that had the highest level of well-being despite that person being alone in that world (if there were no other people anyway, even if other non-human animals were around), this could end up being the most preferable world. But ultimately, if it is a psychologically healthy/normal human being analyzing the differences between the worlds to choose from (that is, the person performing the thought experiment), then what matters is what they would think is the best world given normal psychology, and so forth. In any case, if the thought experimenter knew that the person in the world you are hypothesizing had the qualities you mentioned, it could wind up being the best possible world. We’d also want to consider whether the odd disposition of the person in the thought experiment was due to their being in some kind of ignorant bliss, unaware of the facts of the world (as if they were delusional), which just so happened to lead to their being happy because of some set of false beliefs. This would imply that this odd individual was not the most satisfied as a result of having as many facts as are available and operating on those facts rationally, for their bliss would be based on ignorance, and they wouldn’t be behaving morally within that hypothetical world if they were operating in negation of facts or rational reasoning.
Additionally, one would have to compare the likely future outcomes of the populations being compared. If, for example, the other world had many people with a high level of well-being, though lower than that of the single person in the world you mentioned, but those people were more likely to sustain or improve their well-being more quickly over time (given the power of large numbers of people and the labor they can produce to achieve this goal), then it could stand to reason that the higher population with less well-being now is still the preferable choice once those other factors are taken into consideration. That’s why these thought experiments are highly complicated and depend on how much knowledge you have and on the prior probabilities of future outcomes given the knowledge we have about the populations being compared.
2) Moral realism only requires the following to be the case:
– There are meaningful moral statements (i.e. “you ought to…”)
– Some of them are true (if and only if you want a particular result above all else and performing the act in question will produce said result)
– They are made true due to objective features of the world (which can be verified empirically within all the sciences, including human psychology, neuroscience, cognitive science, physics, biology, sociology, etc.), and these facts must be independent of subjective opinion (although it is possible that at least some moral facts differ from person to person depending on their situation, i.e. situational ethics, and on their psychology given their personal history, etc.).
Some people mistakenly think that moral realism must be independent of brains/minds entirely, that is, not dependent on human psychology, etc., but that is preposterous as far as I’m concerned. At best, those people calling themselves moral realists that hold that position are a subset of all moral realists, with a much higher restriction on what is considered objective. I hold the position that independently verifiable facts that are not grounded on subjective opinion are objective facts and a morality based on said facts is thus not subjective either nor is it a part of an anti-realist moral framework (since it satisfies the three conditions outlined above). Does that answer your questions or have I misunderstood anything you were asking?
Lage,
Thank you for the thoroughgoing response. I feel like I should apologize in advance for not taking all of that into account in my response, but I think that my concern about this framework can perhaps boil down to a single question that is more direct and to the point than my previous comment: does this system ultimately result in the promotion of something like eugenics for the sake of “improving the odds” for the next soul that will arrive from behind the veil of ignorance?
Regarding moral realism, I was directed toward the mind-independent definition by way of Sharon Street and came to find that this is a sensible view due to the fact that it coheres with the typical use of the ‘realism’ qualifier when applied to ontology in general. That said, it’s really just a question of semantics, so there isn’t anything important to address here. I was just interested in hearing your perspective on that question.
Hi Travis.
No problem. I tend to be verbose, so I apologize in advance if my responses are a bit long. To respond to your latter comment about moral realism, I’d point out the general wiki page on moral realism at:
https://en.wikipedia.org/wiki/Moral_realism
As it says, “Moral realism is the position that ethical sentences express propositions that refer to objective features of the world (that is, features independent of subjective opinion), some of which may be true to the extent that they report those features accurately.”
So my view aligns with a very general/typical definition (although there is an ongoing debate about robust vs. minimal moral realism and who should use which label and so forth). Regarding mind independence per Sharon Street, I would say that the objective features of the world that produce various moral facts for humans include objective facts about the brain (forget the mind for a moment, since it is counter-intuitive for some to think of it as physical in nature, and instead think about the brain as an information processor with goals and perceptual predictions it wants to fulfill), physics, biology in general, and all sorts of physical facts which are themselves mind-independent. It is the fact that morality itself is grounded on objective facts about the brain/mind, independent of any mind’s subjective opinions, that would make it “independent of the mind” in the relevant sense. But as you said, this is just a semantic issue so I’ll leave it at that.
As for your former comment, and your question in particular, “does this system ultimately result in the promotion of something like eugenics for the sake of ‘improving the odds’ for the next soul that will arrive from behind the veil of ignorance?”:
My answer is that it would depend on your definition of eugenics. The probabilistic veil of ignorance I invoked was more of a heuristic to use when all else is truly equal between populations, so it alone wouldn’t argue for eugenics, as that is an action that some society or population would take; whether or not they ought to take it ultimately boils down to whether or not that behavior makes us most satisfied as human beings, once fully informed and reasoning rationally about the facts (and given their situation). I am an advocate of eugenics in a sense that many would agree with, as I am a proponent of genetic engineering of future gene pools. So I believe that we have a moral imperative not to leave future offspring’s genes in the hands of chance. If we can prevent genetic diseases that kill children (for example) and ultimately make humans more intelligent and more healthy overall by controlling the genes of future offspring, then technically that is a form of positive eugenics, one that has the primary goal of preventing suffering and increasing well-being. I also believe in a woman’s right to choose (to have an abortion or not) due to her having bodily autonomy, even if her choice to have an abortion is for genetic reasons (which would make it a form of negative eugenics). I am NOT a proponent of genocide, nor of forcing people to reproduce or preventing them from reproducing, which is what “eugenics” typically refers to. In any case, even if we arrive at conclusions that seem repugnant (no pun intended given this blog post title), if the evidence shows that it is in fact working to increase our overall life satisfaction, then it would be irrational to reject those new conclusions. For example, the evidence suggests that we ought to honor equal rights for homosexuals (say, in terms of marriage and being a couple in general) and thus we should not work to prevent their life fulfillment/satisfaction, even if it is counter-intuitive or seems repugnant to some. Inter-racial marriage and reproduction was once seen (and still is seen in some places) as repugnant and gross. We now know (with a high level of confidence given our knowledge) that it is morally wrong to prevent inter-racial marriage and wrong to encourage the feelings of disgust against it. Likewise for a number of other moral propositions. Some may seem gross or counter-intuitive (though most likely will not), but those feelings are often a result of poor cultivation of certain virtues (such as compassion and reasonableness) or the holding of beliefs that are not factual (like the belief that “races are not equal in worth, and so mixing is forbidden”).
Lage,
Though I find myself agreeing with much of what you have outlined, I have been wary of throwing my support behind any normative theory at the social scale because it seems that I can inevitably imagine a scenario in which the theory seems to infer a conclusion which runs counter to what I perceive to be a dominant moral intuition. That is, I’m not comfortable with your statement that “even if we arrive at conclusions that seem repugnant, if the evidence shows that it is in fact working to increase our overall life satisfaction, then it would be irrational to reject those new conclusions.”
If I had to explain why I wasn’t comfortable with that statement, I think I would find that my discomfort is rooted in the priority given to satisfaction. Perhaps there is a better term somewhere out there in the literature, but I am more comfortable if priority is given to “moral peace” – which I define as the condition in which one’s fully informed assessment of a moral judgement results in minimal angst or dissonance. Is this synonymous with the “satisfaction” of your theory? If so, then I would suggest that it should be impossible for a fully informed person to consider a conclusion to be both repugnant and demonstrably effective at maximizing their satisfaction / moral peace (because repugnance is mutually exclusive with that condition). If this does not accord with your view of satisfaction, then I would be interested in hearing any objections to my preference for moral peace (which I think is ultimately incompatible with normative theories at the social scale because of the inherent subjectivity and the disagreement that ensues).
“Though I find myself agreeing with much of what you have outlined, I have been wary of throwing my support behind any normative theory at the social scale because it seems that I can inevitably imagine a scenario in which the theory seems to infer a conclusion which runs counter to what I perceive to be a dominant moral intuition.”
Unfortunately, moral intuitions are no better than a heuristic (albeit a good one) which is useful in many situations, but not foolproof. There are also moral intuitions that encourage racism (due to the evolutionary advantage of recognizing in-groups and out-groups), but we can learn to overcome some innate moral intuitions through higher reasoning and by conditioning ourselves to have certain virtues, thereby overcoming racist innate prejudices, leading to greater happiness, etc. So what one perceives to be a dominant moral intuition may in fact be a morally wrong intuition, and the solution may require conditioning the person with particular virtues and also updating their belief system with as many facts as are available in order to change that disposition. But keep in mind that the moral system will ultimately be that which maximizes satisfaction/eudaimonia given the facts and a proper assessment of them (including demonstrable facts pertaining to which virtues we should be cultivating to maximize that eudaimonia, given what we’ve learned in sociology, psychology, etc.). To then say “I don’t want to do those actions because they sound counter-intuitive to me” would be irrational and immoral. This is where it gets tricky, because what a person believes to be counter-intuitive or morally wrong right now COULD change once they have become fully informed of what those new prescribed behaviors actually lead to, what factual assumptions those prescriptions are based on, and what the sciences can show regarding their impact on our happiness, what type of person we’ll become as a result of behaving in that new way, etc.
Imagine one holds cows to be sacred because of some belief not supported by facts (e.g. a belief that cows are “gods” of some kind and should be preserved/revered at all costs, even at the expense of human welfare). Now imagine that one is told that what is moral is to maximize human welfare over that of cows. The person who has a strong moral intuition to hold cows as sacred (perhaps through a childhood upbringing with conditioning that reinforced those non-factual beliefs about cows) may think, “This goes against everything that FEELS right and moral. I can’t do it.” That person holds the moral views they do because those views are not based on facts, so even if they have a strong moral intuition, it can be an incorrect one. Only when they finally become aware of the facts (i.e. when they become aware that cows are not actually “gods”, as there’s no evidence of such things being the case) will they start to change those moral intuitions. And if they’ve spent their entire life putting compassion toward cows over that of human beings, they will also find it difficult to change their behavior and become compassionate primarily toward human beings. This would involve being reconditioned such that they have cultivated the virtue of compassion toward human beings, so that the prescribed behavior eventually becomes easier to abide by and finally leads to maximized satisfaction given those facts and cultivated virtues. My main point here is to show that what we think might make us happiest is often based on false beliefs, incomplete information, or irrational thinking rooted in cognitive biases and so forth. Once we cultivate virtues to maximize satisfaction (in the eudaimonia sense) by letting them guide our behavior generally, and combine this with a common maximal body of facts to act on and to steer those virtues appropriately, then any rational person should be able to arrive at a moral conclusion that any other rational person (if given the same body of facts) would also agree is moral in some particular situation.
Now it could be that we would actually be most happy/satisfied believing some set of false beliefs rather than not. But we can see how that epistemological methodology is untenable, because then one can’t sustainably separate truth from falsity (to maintain this ignorance, unless someone ELSE is keeping the truth from them), nor can they have control over how those false beliefs will affect other behaviors, all of which may end up being worse for everyone in the long run. The only way to behave as responsibly as possible (and to separate truth from falsehood) is to aim to have as many true beliefs as possible and as few false beliefs as possible, even if some ignorant delusion made people happiest (in the short run anyway). So once we constrain our moral models to those that prioritize truth as fundamental (the strongest epistemological foundation to base all other philosophy on), then we choose the model that produces maximized satisfaction given that “true beliefs only” condition — even if it may limit our happiness (perhaps in the short run) compared to one relying on false beliefs.
Does this explanation and (sacred cow) example answer your questions and/or explain my position better?
I completely agree that our intuitions can be wrong and in some cases should change if they are informed by errant data or are resting on contradictions. This is why I specified that my preference for “moral peace” also include that the judgment be fully informed.
Ultimately, however, after you’ve eliminated all the errors and contradictions, the resulting judgement is still resting on a moral intuition. In the “maximize satisfaction/eudaimonia” theory, you’re simply giving ultimate priority to the intuition which says that satisfaction/eudaimonia is good, and in most cases that seems pretty safe. But if we follow that theory and come to a situation where it runs counter to our intuitions despite an apparent absence of errant data or contradictions in the intuition, I don’t see why we should then give preference to the theory over our intuitions – especially when the theory itself is resting on an intuition.
Not necessarily. It may never be “intuitive” even after being fully informed and after having cultivated necessary virtues. But once it is assessed rationally, we will nevertheless end up with some resulting moral judgement that we ought to follow — even if counter-intuitive.
This may be true, but it may need qualifiers to confirm I understand what you’re saying here. Our “desiring that which we desire most above all else” may be considered an intuition, but in any case, it can be shown that all tenable moral systems ultimately break down to this reasoning. No matter whose moral system you pick. The kicker is that many moral systems, like those stemming from various religions, have false beliefs as their “motivators”, such as a reward or punishment in the afterlife (or the belief that one is pleasing some God, or “fill in the blank here”). If one had no such motivators, one would have no reason to follow the moral system. It just so happens that a number of these moral systems also have at least SOME motivators that happen to be supported by objective facts about human psychology and sociology (even if these motivators are often not advertised within some religious belief system). Take for example those that follow something analogous to the Golden Rule (which is a great moral heuristic grounded on the virtue of compassion in particular) — it actually benefits us when we are compassionate and “altruistic” to others, because it makes us feel good to do so (it increases our satisfaction more than if we did not act this way) and makes society a better place to live for everybody through reciprocal good behaviors and social safety nets (as per the “veil of ignorance”). In any case, if a moral system doesn’t lead to a maximal level of satisfaction (eudaimonia) when followed, then, all else being equal, any other moral system that accomplishes this better is going to be the more tenable moral system and thus more preferable. Because one will choose the moral system that they are most motivated to follow. Whichever one provides the most life fulfillment/satisfaction will be the most motivating to adhere to, and thus if one chooses that which has been shown (based on all the facts available and a rational assessment of said facts) to deliver this satisfaction (which would exclude all moral systems based on reward or punishment in an afterlife, “sacred cows”, or any other unfounded claims not empirically verified to be factual), then there is no rational self-motivating reason not to follow it.
If there is a situation where the proper moral assessment runs counter to our intuitions, as mentioned before, then we must accept that our intuitions are wrong in that case and act according to the rational assessment. Just as we must accept that quantum phenomena are real and relativistic phenomena are real, despite their going against our intuitions (e.g. non-locality, entanglement, fundamental randomness, time dilation, etc.). We learn to accept that our intuitions are wrong when we have empirical data that demonstrates that the phenomena or theoretical model running counter to our intuitions are factual and produce successful predictions. That can be hard for some people to accept, but it’s the way knowledge accumulation and science in general works. Sometimes it takes a generation or two to pass before people finally begin to accept those counter-intuitive conclusions. In cases like moral behavior, it’s more complicated because of situational ethics, where people can’t pragmatically (if even possibly) be conditioned to make every possible moral decision be intuitive and feel right. But we have to try our best, despite our biases and gut feelings on many of these issues.
To give a powerful hypothetical example to illustrate this point, consider the famous Trolley problem (per Philippa Foot) where we have a train barrelling down the track about to run over 5 people that are stuck on the tracks. If you pull a lever, the train will be diverted onto another track with one person stuck on the track that will be killed. Most people don’t hesitate to say that they would pull the lever, to save five people at the expense of one person. Now change the scenario a bit: instead of pulling a lever to divert the train, your only hope of saving the five is to push an obese man (who happens to be walking by) off a bridge to stop the train. Assume you can’t sacrifice yourself because you’re not big enough. Many people hesitate with this latter scenario even though both scenarios result in one life lost to save five lives. The addition of the visceral detail of having to forcefully push a man to his death rather than “removing ourselves cognitively” from the emotion by simply pulling a lever or flipping a switch, shows how our intuitions and gut feelings can run counter to a rational assessment of the moral predicament we’re faced with. Now I’ll add that in the second scenario, one would have an additional consequence, because they would ALSO have to live with the emotional turmoil that followed their visceral action of pushing the obese man to his death. In any case, it may be that a person could live with that more than they could live with being responsible for the deaths of 5 people when they had an option to save them. This is where science comes in, where the best course of action depends on what knowledge we have regarding human psychology, with respect to the emotional trade-off of our knowing we could have prevented 5 deaths (or a net of 4 deaths) versus living with having performed a more “up close and personal” or “hands on” action that killed 1 person to save 5. Whichever would make us most satisfied overall is the action we ought to take above all else.
As I said before, sometimes this isn’t known even if it could be known in principle (a true Science of Morality, if ever formally developed, is our best chance of answering the most questions of this type). So what we do instead is perform the action that we ought to take above all else given what we know (thus far) about the consequences to our own happiness and life satisfaction, what type of person we become by taking certain actions, and essentially ALL the consequences we have knowledge of. We are bound to make most moral decisions with some level of ignorance, so all we can do is make the best choice given what we know at the time, and try our best to continue to increase our knowledge of these consequences, looking into the research coming out of the fields of psychology, sociology, etc., so as to continue to grow morally. It also means we need to keep trying to learn about cognitive biases and how to counter them, learn about logic and logical fallacies, and learn how to improve our ability to think rationally. All of these lifelong endeavors are needed for moral growth, because a larger body of facts and a better ability to assess them rationally is the key to the whole puzzle. My two cents anyway.
I should add, if it wasn’t clear in my last reply, that we should always prioritize intuitions that are most fundamental, with the most fundamental being our desire for life fulfillment/satisfaction. It is ultimately the desire that is tautological in that it is the ultimate reason for any higher level desire, and thus the only “intuition” that could be seen as “always right” because all behaviors are driven to fulfill it (even if we fail at that goal due to false beliefs or irrationality).
Hmm. Maybe we are actually talking about the same thing. I would say that the most fundamental desire is achievement of “moral peace”, as I defined in my earlier comment. If your “satisfaction” is equivalent to my “moral peace”, then I agree that it is tautological and the ultimate foundation for any judgement, but it appears to me that the corresponding normative ethic just boils down to doing your best to inform and scrutinize your intuitions before following them.
To be clear, I may be employing a broader definition of ‘intuitive’ than you are. I don’t mean to equate intuitive with reflexive – as in the immediate response to some stimuli. When I say that a moral judgement is intuitive, I mean that it ultimately derives from a subjective feeling, even if there are layers of reasoning on top of that (i.e., the intuition plays the same role that axioms do in other domains). I agree that reflexive heuristics are prone to error.
I’ve never heard of fundamental satisfaction/fulfillment being referred to as “moral peace”, but if that’s what you mean by it, then yes.
Yes, or at least, mostly so. I think it boils down to doing your best to inform and scrutinize your intuitions before making a moral judgement ultimately based on reason (in the moment of performing the action). If the judgement is ultimately based on intuition (in the moment of performing the action), but the proper moral judgement (assessed through reason) requires you to do something that goes against your intuitions (but which there is good reason to believe will result in more satisfaction/fulfillment when followed), then one will be impeded from behaving morally. I think in most cases the proper thing to do will appeal to our intuitions, but in many cases it will not, which is why reason needs to be the ultimate judge. We use reason to learn about which virtues to cultivate to make the proper moral actions more automatic or easier to perform (which one could refer to as a way of making them more “intuitive”). And we use reason in the overall assessment (based on our current knowledge of the relevant factors at play), and if there is good reason to believe that violating our intuitions (temporarily) will result in the best outcome of personal life fulfillment/satisfaction, then we should do so. This would result in fulfilling that ultimate tautological “intuition” that self-satisfaction (in the eudaimonia sense) is the ultimate “good”, even if along the way an “immediate” intuition was violated.
I see. Yes, eudaimonia would be a subjective feeling — composed of (perhaps) a host of different feelings leading to overall contentment. But again, the rational course of action (arrived at through reason) could violate your immediate state of contentment and your intuition of what “feels” right, with the long-term effect of increased contentment when all is said and done. As per the Trolley problem and “the obese man”, it could be that reason leads us to the conclusion that the right thing to do will not feel right while doing it, even though one ends up feeling best with themselves in the long run; an alternate course of action might have felt right in the moment but then left the person with a long-term feeling of guilt or discontent afterward. Which is why reason has to be the ultimate judge, even if we expect it to (quite often) be accompanied by intuitions aligned in the same direction. Reason carries with it the empirical evidence to support the desired outcome of the moral action (facts in psychology, sociology, etc.), whereas intuition is not guaranteed to do so. Just as we ultimately trust empirical results regarding relativity and quantum mechanics over our intuitions, which are just plain wrong on that front. That’s an extreme example illustrating how our intuitions are products of evolution that are effective heuristics in decision-making and in making sense of the world, but which are clearly not guarantors of truth or accuracy.
In any case, I think we’re at least mostly in agreement here. Maybe some semantic quibbles here and there.
I’m not surprised that you haven’t heard of “moral peace”. I hadn’t either until a couple days ago when I decided I needed a term for the concept I had in mind. To reiterate, what I mean by this is that one has maximized their “moral peace” if they can reflect on their moral judgements and find that they have done their best to accurately inform those judgements and eliminate any conflicts or concerns, and so are “at peace” with their judgements. Presumably, then, one who operates according to those judgements will minimize moral angst, which may or may not be equivalent to the fundamental satisfaction/fulfillment you promote as the ultimate goal. To help determine whether or not we’re talking about the same thing, let’s consider an extreme hypothetical:
Usain Bolt and his mother are being held hostage and the captor says that he is going to shoot one of them in the leg before releasing them to the hospital, but it is up to Usain to choose who will be shot and his mother will never know who made the decision. Usain loves his mother dearly and knows that in her ignorance she will not hold him responsible for the consequences if he took the bullet for her (because she would prefer to take the bullet for him) and he knows that the long term consequences of a bullet to one of his legs are far more severe than a bullet to his mother’s leg. Having accounted for all the information, Usain decides that the collective life satisfaction of he and his mother is maximized under the circumstances if she takes the bullet.
Under my naive understanding of what is meant by “fundamental satisfaction/fulfillment”, I anticipate that this might sometimes be at odds with morality (if morality has not already been defined as the maximization of fundamental satisfaction/fulfillment but is instead rooted in our fully informed and consistent moral sense). As I see it, Usain might choose to value fundamental satisfaction/fulfillment above moral peace and choose to live with the moral angst of the decision to allow his mother to take the bullet because the long-term result is still better from a holistic perspective – but not from a moral perspective. Conversely, he could also choose to minimize the moral angst by sacrificing life satisfaction instead and taking the bullet.
Would you suggest that Usain’s moral sense is in error if he feels more moral angst at having let his mother take the bullet, or would you suggest that the correct moral decision is for him to take the bullet because true satisfaction entails minimizing moral angst?
I think that when one’s overall satisfaction/fulfillment is maximized by rationally assessed moral behaviors, then they will generally be “at peace” with their past moral judgements, but this may not happen in real time / immediately.
I would agree that this is most likely the proper moral action for Usain to take given what his mother would want and what he wants. If he took the bullet, his mother would be crushed that her son’s athletic dream is likely over, and Usain’s knowledge of his mother’s likely response to such a waste of potential happiness with Usain’s talent, should prompt him to likely have her take the bullet instead. This would likely maximize both of their satisfaction, because the alternative would involve Usain feeling less guilty about not having his mother take the bullet, and he may in fact feel good for sacrificing his future career and talent to spare his mother of the pain. But it is likely a much worse trade-off given the horrible feelings that his mother would feel for Usain’s career ending and all the potential going down the drain. So while we only have your hypothetical’s information at our disposal, it is likely that he should choose that she be shot instead of him.
But it can be shown that morality ultimately breaks down to satisfying ourselves in this fundamental way. Even if it involves self-sacrifice to get one to such a state (such as a person sacrificing themselves by jumping on a mine, knowing it will save 5 friends nearby — or possibly Usain taking a bullet for his mother, though I suspect this is less likely to lead to maximal satisfaction for him). If satisfaction/fulfillment is actually the same as this “moral sense”, then it wouldn’t be at odds with it. I personally don’t like the term “moral sense” if this is the case because it seems to carry the concept of “intuition” with it (like some kind of sixth sense or what-have-you), rather than leaving room for a “moral assessment” that is a combination of reason and “cultivated intuitions” (automated virtues) — where it is also accepted that reason may sometimes violate immediate intuitions (as I alluded to earlier).
This is a good question to parse this out. Again, I don’t like the term “moral sense” for the reasons I just mentioned in the preceding paragraph, but I suppose I could say that I don’t think his moral sense is necessarily in error. If we’re talking about which action would lead to the most satisfaction, then that action would likely be the one that reduces moral angst. But like I said before, it may be that in the short term he would feel guilty about it (high perceived moral angst from having his mother take the bullet), but in the long run he’d end up feeling best about it. That is to say, he’d then have the lowest moral angst upon further reflection (perhaps weeks or months after the events transpired). If he looked back on it, say, one year later, when his mom has recovered from the wound and he’s still competing in the Olympics, having won gold several more times, with his mom cheering him on (perhaps with a bodyguard this time so the same ruthless man wouldn’t pull this evil “shoot you or your mom” stunt again), he would most likely be at peace with the decision. This is why I stress the point that one can feel worse doing what is morally right in the moment, so long as they end up feeling best within a reasonable time frame afterward (whether within a few days, weeks, or a few months, for example). Our moral “intuitions” (as most people would call them), on the other hand, may serve to prevent this proper moral action because of the immediate moral angst one may feel, thus requiring that one use reason to overcome the immediate angst, with the long-term benefit of overall minimized moral angst. It can be reduced to a sort of “short-term gain versus long-term gain” argument. Should the college student party and drink and take no time to study, likely having oodles of fun in the moment, but likely becoming a failure who isn’t satisfied with their life (and who would tell you they regret what they did)? Or should they sacrifice some of that short-term fun and games (even if not all of it) to point themselves in a direction that truly leads to a more content life, with feelings of accomplishment and that sought-after state of eudaimonia? The Usain hypothetical likely wouldn’t require years to realize the overall maximized contentment from his choosing to have his mother take the bullet instead of him (assuming this WAS the best course of action), so the benefits in that scenario are easier to see within a shorter time frame (unlike the years of a wasted life in the case of a grossly under-disciplined college student/drop-out).
What is the evidence for this statement?
The goal of the hypothetical was to identify a case in which this might not be true. Maybe that particular example isn’t obviously such a case, but given that our moral judgements give preference to others over self, it seems quite reasonable to me that somebody can make a decision that favors personal satisfaction over another person’s well-being, thus resulting in a higher overall satisfaction while still entailing a higher level of moral angst regarding that decision (unless you are limiting satisfaction to the moral realm, in which case it would seem to be the same thing as the concept of “moral peace”). Are you suggesting that such a situation is actually impossible?
As I mentioned in my post, consider the following:
All moral statements can be put in the form of hypothetical imperatives (a concept first articulated by Kant) such as “if you desire X above all else, then you ought to do Y above all else.”
Then consider some desire you have, whatever it may be, and ask why do I desire that? It will break down to other more fundamental desires that that desire is based on until you get to a desire that has no reason for itself and so is ultimately tautological and biologically grounded (i.e. the most fundamental desire). This most fundamental desire is personal satisfaction, because all decisions humans make are made in the hopes that it will maximize this satisfaction with one’s life given the circumstances they find themselves in. This is a widely observable fact in psychology and sociology and you can look into positive psychology and moral psychology in particular to see more specific work on this subject.
In any case, by logical syllogism:
1) Moral statements can be broken down to the following form:
If you desire X above all else, then you ought to do Y above all else.
2) All desires can be broken down to a most fundamental desire, i.e. fundamental satisfaction.
3) Therefore, all morality is based on achieving this most fundamental desire through actions that maximize it given the circumstances one finds themselves in.
Personal satisfaction may in some cases only be maximized by sacrificing some amenity for another person (even sacrificing themselves entirely to save another’s life), or it may not. It depends on the relevant facts pertaining to that person’s psychology, the reasons for a possible personal sacrifice, and the trade-offs that each alternative would result in. It is still the one resulting in the most personal satisfaction that one ought to do above all else. There’s an important distinction to be made here between self-defeating “selfishness” that only looks at immediate short-term gains to one’s self at the expense of others, and self-optimizing “self-interest” that looks at short as well as long term gains to one’s self which includes implementing virtues that are often deemed “altruistic” (even though ultimately they are based on self-interest, or else we wouldn’t do them).
This strikes me as overly idealistic. One way to poke at this is to ask whether it makes sense that evolution would have coordinated all of our desires to converge on (or originate from) an underlying foundation that equates to personal satisfaction.
I’m not sure why you think it’s overly idealistic. Perhaps because it seems too simplistic?
I think it makes perfect sense that evolution would have all higher-level desires ultimately based on satisfying a lowest-level desire that needs no reason for itself (otherwise there is no ultimate foundation for having ANY desires at all). If the biological imperative to survive is ultimately accomplished by naturally selecting innate inclinations to behave in ways that maximize satisfaction (particularly at the most basic levels, which would include eating, sleeping, sex, etc.), then the fundamental desire we have serves the ultimate evolutionary purpose of survival and flourishing. However, because of consciousness and our brain’s plasticity and higher-level reasoning, we’ve been able to develop many more complex behaviors to accomplish this satisfaction through knowledge that is by no means innate (such as culturally transmitted knowledge about the consequences that certain actions have on our happiness and satisfaction). So now the very same motivating fundamental desire that serves survival (i.e. seeking maximal satisfaction) can be used to actually commit suicide. We have the knowledge and reasoning skills to reason that ending our life is an option and one that will reduce our suffering and maximize satisfaction in SOME cases (most other animals don’t realize this, and so suffer to their deaths instead, though some are “altruistic” and self-sacrifice for their young). So what very plausibly has evolutionary origins can now be “hijacked” to accomplish more than simply replicating our genes through behaviors that preserve the basic biological imperative. Just as we’ve come to realize that sex can be an act for pleasure alone with no intention to procreate, even though evolutionarily it was selected for procreation and “selfish genes” so to speak. I think the same reasoning can apply here with no stretch of the imagination needed.
OK, I need you to break this down further.
I need you to define “desire”. Do viruses have a desire to inject their genetic material into cells? If so, is that based on (or equivalent to) the same foundational desire that underlies all of our desires? If not, at what point does a biochemical tendency become a desire?
It seems like this is circular. Isn’t satisfaction itself the result of having responded to those inclinations after they are in place, rather than an impetus for the selection of those inclinations in the first place? The innate inclinations are not aimed at satisfaction – they’re aimed at the propagation of genetic material. I suppose that if your definition of desire (above) transcends neurology then perhaps satisfaction could too, but I find that to be an odd choice of terminology.
Why is it that the hypothesis is plural (innate inclinations) while the conclusion is singular (fundamental desire)? Are the innate inclinations products of the fundamental desire?
“Desire” as I define it is exactly what you think it is. A “wanting” of some particular outcome. This is limited to conscious desires, so no it does not apply to viruses or the like. Only systems that are conscious.
I don’t know the answer to that, and I suspect no biologist or neurologist does, although I’m sure many people have some hypotheses. It will be related to the problem of “at what point does a biochemical reaction or system of reactions produce consciousness?”, since the desires that I’m talking about pertain to conscious creatures. Now an evolutionary explanation would presumably have some translation from one to the other, whereby a conscious brain conferring a survival advantage on the organism possessing it should function such that biochemical needs for homeostasis lead to cognitive operations that “map” such a goal onto higher-order information processing, which leads to what we would call conscious goals, feelings, hunger, pain, horniness, etc. So yes, while I believe one could (in principle) reduce what I’m referring to as a “fundamental desire of satisfaction” to some kind of biochemical reactions or a biochemical tendency toward homeostasis of the organism, that’s not my intention in this discussion and it is outside the scope of it.
Satisfaction is an impetus in that it is the ultimate goal driving our behavior. The innate inclinations were naturally selected in order to propagate genetic material, but that doesn’t mean that the basic schema put into place (a brain able to learn and execute behaviors for achieving this goal) is immune to learning about and favoring “supernormal stimuli” and other desirable sensations and perceptions that no longer serve the goal of propagating genetic material. A long time ago, before culture began, it would likely have been much more obvious to a modern, informed time traveler such as ourselves that our brains were used to solve problems critical to survival alone. Now that culture has entered the picture, there is so much more knowledge and also a new replicator that has effectively superseded the propagation of genes, namely, memes. Now that we have memes, our fundamental desire for satisfaction has likewise been re-purposed (to a large extent at least) to the propagation of certain memes, not our genes. These memes are those relevant to what gives us pleasure, the very pleasure that once solely served to incline us to behaviors beneficial to survival. Only now, because of our brain and the complex memes it’s absorbed, our satisfaction is no longer accomplished by merely finding food and mates. Now there are new behaviors learned that lead us to experience levels of satisfaction perhaps never before realized, even if still based on an evolutionary foundation, a biological imperative, a desire for a feeling of satisfaction. As I said before, sex is no longer used solely for reproduction, but also for pleasure. So now we can gain satisfaction from sex without the burden of children, whether we don’t want children at all or simply don’t want MORE children (as with those who are glad to have had kids but do not want any more due to the hardship and decrease in satisfaction it would produce). This isn’t an argument against evolution naturally selecting brains, for it did just that. Rather it’s an argument showing that what evolution naturally selects can be flexible enough (or an imperfect enough strategy) to become self-defeating, as when people choose NOT to reproduce, or sacrifice themselves with no benefit to their genetic propagation, based on what their naturally selected brains have learned.
I chose the word “inclinations” to account for the multiple behaviors that are needed to accomplish the one ultimate goal. For example, one must be inclined to sleep, eat, breathe oxygen, etc., to be satisfied, and these all require different strategies to obtain/maintain.
I’m having trouble grasping this “fundamental desire for satisfaction”, so I’m going to offer my current interpretation and you can correct me as you see fit.
From an evolutionary perspective, selection favors organisms which include systems that motivate behavior which maximizes the propagation of genes. As complexity increases, the biochemical systems which trigger adaptive actions morph into a neurological condition that equates to a general purpose “desire to maximize gene propagation”. This, in turn, spawns more specific peripheral desires which facilitate the general goal in various discrete domains (food, sex, cooperation, etc…). With the rise of consciousness we became aware of these desires, including the underlying fundamental general purpose desire. We do not perceive it as the “desire to maximize gene propagation” but rather as something like “the desire to fulfill our peripheral desires”, such that the fulfillment of the peripheral desires, which we experience as satisfaction, ultimately leads to the fulfillment of this fundamental desire. Satisfaction serves as a heuristic for “fulfillment of all our peripheral desires”, such that the fundamental desire is a “desire for satisfaction”.
How am I doing so far?
I agree.
I would think it more appropriate to say that the neurological condition that precipitates is one with a general purpose “desire to maximize satisfaction”, although it just so happens that early in our evolutionary history, this amounted to primarily fulfilling that which also happened to “maximize gene propagation” via the desire to find resources, mates, and skills to survive the environment. Effectively, it morphed into a neurological condition that I would call a heuristic for maximizing the propagation of genes, but which could be accurately described as a “desire for maximized satisfaction”.
Yes, but with the exception I just noted above. That is, the neurological condition “desire to maximize satisfaction” is what spawns more specific peripheral desires to find food, oxygen, mates, and the general will to survive the environment.
With the rise of consciousness, we became aware of the desires, including the underlying general purpose desire (although I think it is satisfaction, not gene propagation).
I would say that satisfaction serves as a heuristic for the “propagation of genes”, which accounts for why evolution would select this heuristic. But, like all heuristics, which sacrifice accuracy for efficiency, that heuristic is now faced with culture and a number of new experiences that interact with empathy, hormonal systems, etc., such that the original peripheral desires of obtaining food, water, sex, etc., are in many cases (if not all) no longer enough to maximize satisfaction, or are in fact out-competed by an ever-changing and expanding set of possible experiences that maximize it more than was ever before possible. It is this expanding set of possible experiences for maximizing satisfaction, or more specifically the expanding set of behaviors to accomplish this, that I’ve been labeling as “moral actions”, or “what one ought to do above all else”. Because we’ve evolved as a social species and have an oxytocin/vasopressin neurotransmitter schema in place, among other things of course (for more details on this see Pat Churchland’s work if you’re interested), the things that maximize our satisfaction tend to align quite well with things that benefit everyone in the society. This is why “self-interest” is not to be confused with “selfishness”: by simply fulfilling one’s OWN satisfaction, one produces a largely shared set of benefits that many others enjoy as well. If we weren’t a social species, then this wouldn’t likely benefit the group as it does in our species’ case.
I read this as saying that evolution is actually oriented toward satisfaction and it only accidentally coincided with gene propagation. Is that a fair reading?
Any particular reason why we should suspect this order of operations rather than the reverse, in which the specific peripheral desires arise more or less independently and the general purpose desire for satisfaction subsequently arises as a means for arbitration and reconciliation of the desires?
It’s that evolution is oriented toward gene propagation, so our brains evolved a “heuristic of maximizing satisfaction” that is meant to be an efficient way of accomplishing that gene-propagation goal. But because that heuristic sacrifices accuracy for efficiency, as culture evolved the gene-propagation goal is no longer necessarily realized when satisfaction is maximized. It still accomplishes gene propagation in many cases, but it can become secondary to other goals, as the heuristic is overtaken by newly sought-after stimuli and complex cultural interactions with memes and so forth.
I think our higher-level cognition and reasoning (including moral reasoning) are the means for arbitration and reconciliation of the desires. I think one must look at the reasons for having any of those desires, to figure out which desires are more fundamental and cause the higher-level desires to subsequently form. Whichever one is tautological is the most fundamental desire. So if I ask why I desire food, and answer that it is because it helps to fulfill my desire to maximize satisfaction (specifically to reduce feelings of hunger), and then ask why I desire that satisfaction, and answer that I just do (i.e. because it’s tautological), this makes sense and provides a solid foundation, with the desires building up from that direction with no circularity or infinite regress. On the other hand, if one starts with the desires to find food and water and mates and says that each of those three desires is tautological, and that the desire for satisfaction arises later merely to reconcile these desires (in some way), then we are starting with a larger number of fundamental desires, thus going against parsimony and evolutionary efficiency (when we don’t yet have enough information/evidence to rescue the smaller prior probability that this alternative hypothesis would have). It would make sense to have the smallest number of fundamental/tautological desires possible (mapping onto some biologically fundamental goal), ideally one fundamental desire (which generally encourages staying alive and flourishing), rather than many fundamental desires with no one primary desire underlying them.
That’s not to say that the alternative you mentioned isn’t also possible. It could be that evolution chose several tautological desires, but I suspect that a generic biological goal of homeostasis can be translated to, or neatly mapped onto, an eventually conscious neurological goal/schema of “maximizing satisfaction” (with some range of homeostasis being reinforced through rewards of pleasurable feelings and so forth, from eating, sex, etc., but also from the successful realization of higher-complexity goals such as cultivated virtues and so on), and that this is better than the alternative explanations, which require more ad hoc assumptions (like multiple fundamental/tautological desires with no primary desire superseding the others). So I think one should be inclined to adopt the former view, since it is a simpler framework with which to explain all the phenomena. But it’s a complicated topic with a lot of neurological complexity surrounding it (and with an ongoing process of evolution changing said neurology). These are my conclusions based on what I’ve learned about the brain thus far, its evolution, and subsequently how this fundamental desire for satisfaction maximization (grounded on maintaining a biological homeostatic range with perhaps no theoretical upper bound of well-being) fits in line with and provides an effective foundation for the goal theory of morality that I subscribe to.
Lage,
Thanks for participating in this discussion with me. I feel a bit like I’m playing 20 questions as I try to understand exactly what is meant by this fundamental desire to maximize satisfaction. On the one hand I can appreciate the philosophical derivation through asking “why do I desire x” until you reach something fundamental, but on the other hand, it also seems possible that we never reach that convergence, or that the philosophical process does not bring us to an accurate understanding of the nature of our desires – much as the same process doesn’t bring us to an accurate understanding of free will. It kind of seems like it might just be one of those post-hoc rationalizations that we’re so good at, especially when one considers the complexity of the neuroscience of desire, motivation, reward, etc. So I’m not sure if I have any further questions at this point, but you’ve given me something more to consider. In the past I had tried to build up a sensible moral framework and had gone down a similar line of thinking, but had abandoned the search for a unifying principle because it simply seemed more accurate to accept that the difficulties one encounters in that process actually reveal a truth about the nature of morality – namely, that it is a complex product of many factors that cannot necessarily be reduced to a simple principle. I think that’s still where I am, but that conclusion seems less secure.
It was my pleasure. I enjoyed parsing this out and having a nice civil discussion with you about a topic I rather enjoy. So thank you as well!
Yes, it may be possible that that’s the case, but when I consider the fact that our behavior isn’t (generally speaking) frozen by indecisiveness, I can’t help but see this as yet more evidence to support the hypothesis that the brain is driving the final decision of “what to do” based on some fundamental goal it has (naturally selected for via evolution, of course). It must prioritize some goals over others, and it would seem that eventually one has to get to a goal that has the highest priority (or a set of them, as you mentioned as a possibility). It seems to me that this would most likely be a goal that drives homeostasis of the organism (since that would be integral to survival on a basic biological level). It would seem to explain a lot, and “maximizing satisfaction” serves as a nice transitional heuristic for the more complex cognitive processing we do as intelligent conscious creatures, immersed in an evolving culture with novel experiences and scenarios to navigate through, yet still grounded on an evolutionary and biological imperative.
I’m curious what you mean by your comment regarding our understanding of free will? Would you be willing to unpack that a bit? I’ve written about it a bit and am curious what your thoughts are on the topic. It’s going to come down to semantics more than likely, as one must define what they mean by “free will” to assess whether or not it’s logically possible, and if so, then subsequently assessing whether or not it’s supported by evidence.
Possibly, but since it explains so much and can be put into an evolutionary context without a lot of mental gymnastics required, I’m inclined to believe that it has a lot of merit. My conclusion seems to be reinforced, the more I read about moral psychology, neurology, etc. But who knows?
Anyway, it was nice to expand on it a bit with you!
I just mean that if we assess free will based purely on subjective introspection of the experience of making choices then something like libertarian free will would be a natural conclusion, but that conclusion doesn’t seem to stand up once you start accounting for other data.
Ah yes. Well, I’m not assessing morality from a purely subjective perspective, although subjective qualia certainly play a critical role since satisfaction is going to be largely gauged by how we feel. But it requires more than that, including the use of reason and the external evidence supporting it, which is independent of introspection (and even qualia can be objectively gauged quantitatively in some cases with neurological measurements, including the release of dopamine or other neurotransmitter effects mediating satisfaction, etc.). This reason that we use to guide our feelings allows us to track probabilities of how we will feel in the long term, e.g., getting a cavity drilled and filled knowing that the short-term pain is in the best interest of our satisfaction because of the long-term pay-off of no more toothaches and the preservation of our teeth for easier eating in the future.
Anyway, regarding free will, I actually agree with Sam Harris’ take on it in that I think even introspection should lead us, upon careful examination, to realize that we don’t even introspectively know how our thoughts pop into our consciousness. They pop into it nevertheless, but they seem to come from nowhere (the unconscious). We may have an attentively directed stream of consciousness that is connected through deliberation, such as internally naming off our favorite five foods, but we don’t know where we’re getting that list, how it was stored, and generally don’t direct which thoughts pop into our head at any particular moment. Conscious authorship of each thought that pops into our heads seems untenable based on introspection alone.
Additionally, one need only realize that libertarian free will is incompatible with determinism and indeterminism/randomness. Both options in that logical dichotomy are incompatible with libertarian free will so if either one explains the ultimate ontology of the universe, we can’t have this kind of free will (fixed futures prevent the freedom for having different outcomes, and randomness prevents genuine authorship). Once one realizes this, then all that’s left are compatibilist concepts of free will that people like Dennett ascribe to which I think are more or less properly defined as “a decision making process with a lower bound of complexity”. Once the decision making process for an organism or information processing system reaches a certain level of complexity, we ascribe it with “free will”. But careful philosophers such as Dennett are quick to point out that this doesn’t mean that the organism or system could have behaved differently given the same initial conditions (which would be libertarian free will), but rather that they are capable of being conditioned to make a different choice in the future. Thus punishment/reward systems need to be in place (for human brains at least, not necessarily in future AI systems, see last section in this link for my take on that), in order to recondition a person’s brain to behave in ways that society deems appropriate. Those unable to be conditioned even in principle (such as the insane or psychopaths or someone with a brain tumor in an emotional-control region of the brain) are given a free pass and labeled as “not having free will” in the sense that we (healthy brained individuals) do. There are no doubt gradations to it (since no two people are equally conditionable/plastic, and thus we all have varying degrees of free will), but I think my free will conception of “decision making processes with a lower bound of complexity” fits in line with what most people mean (including a court of law) when they use the term “free will”.
Your thoughts?
I largely agree with your opening paragraph. As noted in the prior comment, I’m just not convinced that all the non-introspective data leads to accepting the introspective conclusion regarding a single, fundamental guiding desire (if that is even the introspective conclusion one reaches).
I can understand this, but that uncertainty doesn’t seem to be incompatible with libertarian free will – the alternative is an infinite regress, which is incompatible with libertarian free will. Despite the mysterious origins, there’s still a sense of ownership over the thought, so you end up with the introspective appearance of an “uncaused yet owned” thought, which maps pretty well to libertarian free will.
I agree with just about everything you said in the final paragraph. I primarily endorse compatibilism because it seems to be the only option on the table that gives us a viable approach for responsibly dealing with the apparent truth of biological determinism within the confines of human nature.
Honestly, I’m not sure what other conclusion is possible upon introspection. That is to say, once one asks themselves “why do I desire x” and gets to more fundamental desires, the necessary stopping point (without falling into circularity or infinite regress) is an axiomatic or tautological desire (possibly several but I doubt this as I’ve seen no evidence of it and there are good arguments against it being true). Once one gets to that point, “satisfaction” (or some comparable term having to do with happiness and well-being) seems to be the only tautological desire that drives us to do what we do. I haven’t found any evidence to the contrary yet, and only mounds of supporting evidence that support it. Furthermore, all of the fundamental moral theories that have been presented, including Aristotelian Virtue Ethics, Kant’s Deontological Ethics, and Mill’s Teleological/Utilitarian/Consequentialist Ethics, can all be shown to break down to a maximally informed consequentialism. And this consequentialism is grounded on satisfaction (in some way shape or form) in all of those theories once they are examined closely. So not only does introspective contemplation seem to lead us to this conclusion, but all the fundamental moral theories seem to point in the same direction as well.
I’d be willing to concede this point. It is compatible in some way, at least in the sense that it may not be a conclusive refutation of libertarian free will introspectively. Although I do think the introspective argument I mentioned reduces the likelihood of the libertarian position, even if it doesn’t outright refute it. Because while we may feel some sense of ownership, we don’t seem to be conscious of the authorship or ultimate cause of every thought. And self-caused thoughts, one would think, would be evidently caused by that self, and not merely owned by that self.
Regarding compatibilism, my only qualm is when philosophers aren’t careful to make the distinction between libertarian free will and the kind of free will they endorse (such as Dennett). Often they throw the term around without realizing that people are reading their position incorrectly, and thus waste lots of time on long arguments over a simple misunderstanding of terms (this has happened to me several times in the past). It might help philosophy if new terms were coined to better mark the distinction. Unfortunately “free will” and “libertarian free will” are not distinct enough, as they are often used interchangeably. Oh, the woes of philosophy…
It is that “possibly several” option which I find plausible – that somebody may introspectively fail to identify an underlying fundamental desire. Personally, if I push myself to locate a root desire then it seems like I can do it, but it’s fuzzy and feels a bit like a forced rationalization.
When I previously asked about this possibility you offered parsimony as the primary argument for preferring the single source theory, noting that this is also consistent with evolutionary efficiency. Those principles seem equally at home, however, in a situation where multiple desires are achieved through a common mechanism rather than through a lower-level desire. Perhaps that isn’t a valid distinction in your view?
Well not exactly. It’s not that there wouldn’t be any underlying fundamental desires. Rather, it’s that they’d find several underlying fundamental desires. But this doesn’t follow from the very definition of desire and so would only apply to fundamental desires from a biological perspective.
I don’t see how one has to push themselves at all. It may be laborious to go through the process and trace one’s desires lower and lower and lower, but one will have to wind up at a fundamental desire (or desires) and I’ve seen none other than satisfaction as the root of it all. For if one couldn’t be satisfied by fulfilling some desire, then by definition they’d desire whatever could. That’s dependent on the very definition of desire.
Parsimony is a supporting argument, from an evolutionary perspective, for having one fundamental desire (primitive and biologically based) instead of several, but that was simply to map the biology and evolution onto the concept of desire. It ultimately breaks down to how we define “desire”. If a “desire” is by definition something that, when fulfilled, increases our satisfaction, then satisfaction is the tautological foundation for all desires. I didn’t mention this very explicitly and perhaps I should have right off the bat. If somebody is not referring to “something that, when fulfilled, increases our satisfaction”, then they aren’t referring to the concept of desires. They must be referring to something else, which is irrelevant to the moral argument. Maybe this clears up any confusion (no doubt brought on through my lack of clarity)?
Lage,
Sorry for the slow response. Been busy.
It seems to me that this is just recasting the issue in terms of “satisfaction”. I’m not sure it’s fair to group all “satisfaction” together like that and assume there’s no conflict or competition within the spectrum of feelings that associate with desire fulfillment. Might this be a bit of essentialism that mistakes a conceptual heuristic for a unified entity?
No biggie. Good to hear back from ya’.
First, to clarify: what I was pointing out earlier when I conceded that there may be several fundamental desires was that we might narrow down the reasons for desires to several specific reasons, such that they map onto some biological foundation (like homeostasis) which may consist of several “reasons” or “purposes” like “finding food”, “finding mates”, “breathing”, etc. But when one finally breaks it down to the most supreme of desires, the reason for fulfilling those becomes the ultimate reason for fulfilling any desire whatsoever, and that comes down to the definition of “desire”, which ultimately has to do with satisfaction (in some way, shape, or form). It wouldn’t make sense to rationally desire something that, when fulfilled, would not lead to any increase in satisfaction. Now it’s true that, due to irrationality, people do fulfill desires that they don’t realize go against that fundamental goal. But that’s where being fully informed (maximally informed) and rational come into play. If we knew better that fulfilling some supposed desire wouldn’t result in any increase in satisfaction, then we would no longer desire it, but would instead desire that which does accomplish satisfaction.
While there can be conflict between a spectrum of feelings, there will always be a best course of action out of the available options (or possibly several courses of action that are equally good, though this is less likely in the real world), and this will always be best determined by being rational and maximally informed of the facts pertaining to the action and its consequences.
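To make that decision rule concrete, here is a toy sketch (purely illustrative; the option names, the numbers, and the expected_satisfaction estimate are all hypothetical stand-ins for what a rational, maximally informed assessment would actually involve). The “best” action is simply whichever available option scores highest on combined short-term and long-term satisfaction, and any ties would count as equally moral options:

```python
# Toy model: choose the action(s) that maximize expected satisfaction.
# All numbers and estimates here are hypothetical placeholders.

def expected_satisfaction(option):
    """Combine (hypothetical) short-term and long-term satisfaction estimates."""
    return option["short_term"] + option["long_term"]

def best_actions(options, tolerance=1e-9):
    """Return every option tied for the highest expected satisfaction.

    If more than one option is returned, they would count as equally moral choices.
    """
    best = max(expected_satisfaction(o) for o in options)
    return [o["name"] for o in options
            if abs(expected_satisfaction(o) - best) <= tolerance]

if __name__ == "__main__":
    options = [
        {"name": "get the cavity filled", "short_term": -2, "long_term": 8},
        {"name": "skip the dentist",      "short_term": 1,  "long_term": -6},
    ]
    print(best_actions(options))  # ['get the cavity filled']
```

The cavity example mirrors the earlier point about short-term pain being outweighed by long-term pay-off; all of the real difficulty, of course, lies in producing those satisfaction estimates in the first place.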
This comes down to the definition of “desire” and what people mean when they use that term. That’s not to say that we don’t oversimplify the world in certain ways when we categorize things with particular properties in order to come up with definitions and terms in the first place. But we have to start somewhere, define terms, and further verify their meaning through usage. Saying that maximizing satisfaction is all that we are driven to do would perhaps be an instance of mistaking a conceptual heuristic for a unified entity of some form, but only in the sense that maximizing satisfaction requires many different actions, which can be described as higher-level desires serving that ultimate end goal. Still, the definition is what we must come back to in order to confirm what a desire ultimately is and whether or not a person should adopt some particular desire in the first place. Similar to what I said before, if I ask “Why do I want to keep my job?” and answer “Because I want money to buy food, clothing, and shelter”, and then ask why I want that, at some point I must ask why I ultimately desire anything at all, and what ultimate end goal it serves to fulfill. One does have to be careful when trying to “map” our conscious conception of “satisfaction” onto the biological properties of the organism from which it may precipitate, jumping from basic biological needs to higher neurological constructs that may serve those needs heuristically. But biology and evolutionary reasons aside, we can just think of the terms “desire” and “satisfaction” and apply them in the moral framework based on how we use those terms and what we mean by them, generally speaking.
OK, but if best is defined as “maximizing satisfaction” and if we suppose that satisfaction is actually not a unity and that there is a spectrum of feelings of satisfaction, how do you arbitrate? Is there some neurological measure of satisfaction that properly takes the spectrum of feelings into account?
Yeah, I do think there are several neurological measures and this becomes a question of both psychology and neuroscience, where a person’s own feelings of happiness, contentment, etc., are mapped onto the release of certain neurotransmitters in certain brain regions associated with reward and pleasure (such as the nucleus accumbens, and other regions) and regions associated with stress, measuring cortisol levels, and the like. This kind of work has been done, but is obviously ongoing since neuroscience is relatively new compared to psychology. From a psychological perspective, it involves looking at longitudinal studies, clinical therapy results, positive psychology in general, and the like to see what psychologists have found out about what makes people feel the best based on a number of mental health metrics including what people report about how they feel and how they weigh those feelings. It’s complex indeed, but nobody ever said moral psychology and moral philosophy were easy subjects to tackle. Nevertheless, just as “health” can be difficult to define and involves a number of different metrics that we usually aim to satisfy within some particular range, such is the case with moral psychology and satisfaction. No easy short answers here but more like meta-collaboration from several sciences, and many findings over the decades in these fields like psychology, sociology, neuroscience, etc.
So are you saying that you’re content with a somewhat ambiguous foundation for the moral framework, or that you think there will ultimately be a definitive formula that could be used for the arbitration?
It’s not an ambiguous foundation as far as I’m concerned; it only seems so when one tries to quantify and determine all the complicated nuances that underlie it biologically and neurologically. It may be as ambiguous as “mental health”, but that is a term with an operational definition that, while complex, is well founded and has a solid foundation and use in the medical community. A more comprehensive moral theory is also bound to be more complex, in that it subsumes all three of the traditional options proposed (i.e. virtue ethics, deontological ethics, and teleological ethics), and in that what makes us most satisfied is a complicated issue where we’re still in need of answers to many moral questions (though further scientific investigation can discover what those answers are).
In case it wasn’t obvious from the rest of my comments, I don’t share that clarity even from the higher level view (as I was trying to demonstrate with the tension I perceived in the Usain Bolt hypothetical). It isn’t at all clear to me that we can even superficially know what it means to maximize satisfaction when dilemmas arise.
Yeah, I’m not sure what else to say. It seems pretty obvious to me what maximizing satisfaction means generally speaking. For more complex moral dilemmas, it may be that we don’t yet have a good answer as to what to do given this moral theory, but that is irrelevant to whether or not it is the best theory available out of the options that have been presented. We have to use the best we’ve got and this one best accommodates situational ethics, which can be very difficult to assess and almost all moral theories fail to accommodate it to any significant degree (aside from virtue ethics, which goal theory already subsumes/includes plus a lot more). Our best hope of ever answering difficult moral dilemmas in any case is still going to be grounded on satisfaction ultimately, otherwise it is self-defeating due to no motivating reason to follow it.
I’m not convinced that there necessarily is a right answer to moral dilemmas. If there is, then I largely agree with your assessment, though I think that my definition of ‘satisfaction’ would run closer to the concept of ‘moral peace’ that I previously offered.
Neither am I. But the only cases where there isn’t necessarily a right answer are those where BOTH options lead to comparable/equal levels of satisfaction (in which case both options are equally moral to take). In all other cases, one would lead to higher satisfaction than the other, in which case there would be a right answer as to which action to prefer.
I was saying something else – namely, that I’m not convinced that there is even a correct metric, be it satisfaction or something else, which could settle the matter in some objective sense. Maybe the foundation is itself dynamic, such that the best we can do is heed those higher level impressions.
An objectively moral action is that which you are most motivated to do when rational and maximally informed of the consequences of those actions (given objective facts about human psychology, biology, etc.). All other ways of behaving are self-defeating or relatively less effective because of the lower motivation to follow them. It is the combinations of short-term and long-term satisfaction provided by any moral system (while rational and maximally informed) that will ultimately determine the power of the motivating force present to follow that system. What may be important to note is that one may have to experience at least some of those short and long term benefits to realize/experience the motivation behind the actions that led to those benefits, but in many cases you will be able to trust others that have already lived their lives, having tried this or that behavior in this or that situation including the more objective and reliable data stemming from the medical community.
So, we must distinguish between how we determine which moral actions/values are true, from the problem of how to motivate or persuade others to adopt these behaviors and values. The latter problem may take time and experience (cultivated virtues as I mentioned earlier) in order to realize the motivations, because right now one’s beliefs (if not rational and/or if based on some false information) may motivate them toward the wrong answer/action. So seeking to improve one’s ability to be as rational as possible (learning to combat cognitive biases as best we can and to identify and avoid logical fallacies), and seeking to gain as much information as we can from the most objective and reliable sources (including from psychologists, sociologists, etc.), is the key to doing the best we can. It will lead us to the most motivating system of behaviors, and so by definition, there will be no other system that is more preferable to us.
That’s an assertion. My comment was simply that I’m not convinced that there is an assertion of this type that is correct for everybody. Perhaps the foundation of morality cannot actually be pinned to some unified concept across the population. Or maybe it’s even the case that the moral nihilists are right.
I’ve never come across any argument that’s refuted it yet. I’d recommend reading “Sense and Goodness Without God” and “The End of Christianity” (TEC), and then you can see Richard Carrier’s work on the goal theory of morality, the epistemological foundation, and so forth. He also gives a formal logical proof in TEC.
What would count as a refutation? An observation of distinct neurological systems underlying different subjective experiences of motivation and desire?
I don’t think so because an observation of distinct neurological systems underlying different subjective experiences of motivation and desire would at best show that the fundamental satisfaction that motivates our behavior is accomplished through several neurological systems (which is irrelevant to the truth value of the moral theory). It would only describe in more detail how satisfaction works neurologically, even if it relies on several sub-systems. There will always be some neurological scheme for determining which desires override others and thus which ones are supreme and fundamental to direct action.
To be honest, I’m not sure what could refute the moral theory because it’s grounded on a tautology of “desire”. One could try and demonstrate that there is in fact a better moral system to follow, by showing that someone might prefer a less satisfying life to a more satisfying one, but I’m not sure how that could be done.
If there are multiple systems and no objective standard of measurement for translating between them, how do you define the metrics so that the satisfaction yielded by each system can be compared (e.g., to inform arbitration between choices that favor one system or the other)?
As an analogy: suppose two similar-looking lines are drawn some distance apart from each other in an indestructible glass case, and the sign says that one line is 10 bigunits long and that the other line is 25 smallunits long. How do you determine which line is longer? Nobody knows how to properly convert between bigunits and smallunits. You could go with the larger number, but that is no better than a 50/50 guess, and the case prevents you from moving them next to each other for a direct comparison. It seems that the best option is to rely on your intuition, as informed by your subjective experience of the line lengths as they lie.
With respect to satisfaction, I suppose you could generate units in terms of subjective reporting. For example, if the subjective report of satisfaction from system A is 8 out of 10 with an objective measure of neural correlates of 80 units, and the subjective report of satisfaction from system B is 8 out of 10 with an objective measure of neural correlates of 40 units, then you could come to the conclusion that 1 satisfaction unit equals 10 system A units and 5 system B units. OK, but what if you run the same experiment on a different person and get inverted ratios, or the same person comes back and produces different ratios. It seems that we’ve lost any hope of an objective measure at that point.
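To put rough numbers on that worry (reusing the same hypothetical figures as above), the calibration amounts to dividing each neural-correlate reading by the subjective report to get a per-system conversion factor, and the problem being raised is that nothing guarantees those factors stay stable across people or across sessions:

```python
# Hypothetical calibration of "satisfaction units" from the example above:
# a subjective report (out of 10) paired with a measured neural correlate.

def units_per_satisfaction_point(neural_units, subjective_report):
    """Conversion factor: neural-correlate units per point of reported satisfaction."""
    return neural_units / subjective_report

# Session 1: system A reads 80 units at a report of 8/10; system B reads 40 at 8/10.
factor_a = units_per_satisfaction_point(80, 8)   # 10.0 A-units per point
factor_b = units_per_satisfaction_point(40, 8)   #  5.0 B-units per point

# Session 2 (or a different person): suppose the ratios come back inverted.
factor_a2 = units_per_satisfaction_point(40, 8)  #  5.0
factor_b2 = units_per_satisfaction_point(80, 8)  # 10.0

# If no stable, person- and time-independent conversion exists, the "objective"
# comparison between systems A and B is underdetermined.
print(factor_a, factor_b, factor_a2, factor_b2)
```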
Yes. You are relying on subjective reporting, because satisfaction is realized through the moral agent’s subjective qualia, as it were. If one found the ratios reversed in another person, that would in no way compromise the objective quality of the metric, but would merely show that the uniqueness of human brains produces neurological differences that correspond to their feelings of satisfaction.
This may complicate or show limitations in how the satisfaction can be quantified on a neurological level, but this doesn’t detract from the objective facts corresponding to maximized satisfaction nor the legitimacy of the moral theory. If it’s the best we can do, then we must accept the limitations that come along with it (much of them stemming from current technological limitations that will no doubt evaporate over time).
It appears that I misunderstood the earlier comment about what defines an “objectively moral action” – you were not suggesting that it could be identified through some objective measure of desire or satisfaction, but rather that it could be objectively arbitrated through the subjective strength of the desire or satisfaction (once fully informed). In that case, the objectively quantifiable refutations I proposed do not apply.
I think there probably isn’t much else I have to say at this point. I agree that so far as moral theories go this is a strong contender, but I remain unconvinced of the unity of the subjective measure of satisfaction. It still seems plausible to me that one could not be in error to identify an action as less moral yet overall more satisfying (or vice versa), and that this could be due to “moral satisfaction” being a subset of the larger subjective experience that we classify as satisfaction.
Well, I was suggesting that there are objective features of satisfaction (affective brain states, for example) that can be measured in the brain, but that they must be correlated with the conscious affective feelings of the person whose brain states are being measured in order to translate the neurological signals into feelings of satisfaction. So this could possibly be used to identify the moral action once the correlation to the subjective feelings is adequately mapped (even though it may be less pragmatic to do so at the moment, given our limitations in this relatively new endeavor in neuroscience). It may not be as objectively quantifiable or as simple as measuring temperature, but nevertheless there are objective features of satisfaction as it manifests in particular types of brain states, each with its own correlated set/range of neurological features (levels of neurotransmitters in certain parts of the brain, spikes of activity in certain regions, etc.). Where this would be most useful is in cases where one could quantify how much satisfaction certain behaviors produce and then use this to refute someone’s opinion about which action will make them most satisfied. In other words, one could take brain-state measurements while action A or action B was performed and use them to objectively refute a person’s own subjective opinion of the outcome. They may say, “no, action A will make me most satisfied, not B”, but there may be objective measurements to refute that opinion. If it can be shown that they were in fact more satisfied with action B, based on the brain states correlated with that action in real time, then it would show that their opinion about what makes them most satisfied was false; it was based on mis-remembering or something analogous to that. So even one’s reported level of satisfaction, while integral to determining how this maps onto objective neurological measurements, can be flawed when one tries to recall a previous level of satisfaction after time has passed and new experiences have occurred. The subjective component is needed to develop the translation scheme correlating it to objective measurements in the brain, but the scheme can then possibly be used to trump one’s opinion about what would make them most satisfied. If the person goes back and tries action A, and finds that they were indeed mistaken, that sure enough action B was what would maximize their satisfaction and the objective brain measurements had informed them of their error, it would show how objective measurements could be useful in establishing which is the moral action to take.
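As a very rough sketch of that kind of check (the numbers, and the idea of reducing the measurements for each action to a single scalar, are simplifying assumptions on my part), the person’s prediction about which action will satisfy them most is compared against the correlates measured while each action was actually taken, and a mismatch would be evidence that the prediction, rather than the measurement, is in error:

```python
# Rough sketch: compare a person's prediction about which action will satisfy
# them most against hypothetical measured correlates of satisfaction recorded
# while each action was actually taken. Reducing the measurements to a single
# scalar per action is a big simplification of real neurological data.

def check_prediction(predicted_best, measured):
    """Return the action with the highest measured correlate and whether it matches the prediction."""
    measured_best = max(measured, key=measured.get)
    return measured_best, measured_best == predicted_best

prediction = "A"                       # "action A will make me most satisfied"
measurements = {"A": 55.0, "B": 72.0}  # hypothetical satisfaction correlates

best, matched = check_prediction(prediction, measurements)
print(best, matched)  # B False -> the subjective prediction appears mistaken
```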
This distinction can be removed by simply noting that the most motivating type of satisfaction is what I’m referring to here. Whatever you want to call that is fine but I would call it overall satisfaction because it is the most motivating and therefore trumps all other possible types of satisfaction (non-fundamental, etc.).
Hi Travis,
FYI, I edited my last comment to you hoping that I was more clear about some other details that I thought were not explained well on my part in my first reply. So if you want to re-read my reply to your comments/questions, it is up to date for better clarity.
Hi Lage, I would ask a question, but you intellectuals speak a language I don’t understand. I’m glad you have a burning desire to learn, learn, learn. I’m glad to have you as a son. I know this is not related to your post.
Hehehe. Yeah, what can I say? I like to dabble…with philosophy in particular. It’s often rewarding to ponder about things that most people either take for granted or haven’t thought about in any depth. And by the way, I’m glad to have you as a father in my life, a good father to my wife and good grandpa to my daughter. Thanks for bearing with me and my philosophical rants! 🙂