Technology, Mass-Culture, and the Prospects of Human Liberation

Cultural evolution is arguably just as fascinating as biological evolution (if not more so), with new ideas and behaviors stemming from the same kinds of natural selective pressures that lead to new species along with their novel morphologies and capacities.  And just as biological evolution, in a sense, takes off on its own, unbeknownst to the new organisms it produces and independent of whatever intentions they may have (our species being the notable exception, given our awareness of evolutionary history and our ever-growing control over genetics), so too cultural evolution takes off on its own: cultural changes are made manifest through a number of causal influences that we’re largely unaware of, despite our having some conscious influence over this vastly transformative process.

Alongside these cultural changes, human civilizations have striven to find new means of manipulating nature and of better predicting the causal structure that makes up our reality.  One unfortunate consequence, as history has shown us, is that people within any particular culture’s time and place have a decidedly biased overconfidence in the perceived truth of, or justification for, the status quo and their present worldview (both individually and collectively).  Undoubtedly, the “group-think” or “herd mentality” that precipitates from our simply having social groups often reinforces this overconfidence, and it does so in spite of the fact that what actually influences a mass of people to believe certain things or to behave as they do is highly contingent, unstable, and amenable to irrational forms of persuasion, including emotive, sensationalist propaganda that preys on our cognitive biases.

While we as a society have an unprecedented amount of control over the world around us, this control is perhaps best described as a system of bureaucratic organization and automated information processing that affords us less and less individual autonomy, liberty, and basic freedom as it further expands its reach.  How much control do we as individuals really have over the information we have access to, and over the implied picture of reality that comes with that information as it’s presented to us?  How much control do we have over the number of life trajectories and occupations made available to us, or over the educational and socioeconomic resources we can access, given the particular family, culture, and geographical location we’re born and raised in?

As more layers of control have been added to our way of life and as certain criteria for organizational efficiency are continually implemented, our lives have become externally defined by increasing layers of abstraction, and our modes of existence are further separated cognitively and emotionally from an aesthetically and otherwise psychologically valuable sense of meaning and purpose.

While the Enlightenment slowly dragged our species, kicking and screaming, out of the theocratic, anti-intellectual epistemologies of the Medieval period of human history, the same forces that unearthed a long-overdue appreciation for (and development of) rationality and technological progress unknowingly engendered a vulnerability to our misusing this newfound power.  There was an overcompensation toward rationality when it was deployed to (justifiably) respond to the authoritarian dogmatism of Christianity and to the demonstrably unreliable nature of superstitious beliefs and of many of our intuitions.

This overcompensatory effect was in many ways accounted for, or anticipated within the dialectical theory of historical development as delineated by the German philosopher Georg Hegel, and within some relevant reformulations of this dialectical process as theorized by the German philosopher Karl Marx (among others).  Throughout history, we’ve had an endless clash of ideas whereby the prevailing worldviews are shown to be inadequate in some way, failing to account for some notable aspect of our perceived reality, or shown to be insufficient for meeting our basic psychological or socioeconomic needs.  With respect to any problem we’ve encountered, we search for a solution (or wait for one to present itself to us), and then we become overconfident in the efficacy of the solution.  Eventually we end up overgeneralizing its applicability, and then the pendulum swings too far the other way, thereby creating new problems in need of a solution, with this process seemingly repeating itself ad infinitum.

Despite the various woes of modernity, as explicated by the modern existentialist movement, it does seem that history, from a long-term perspective at least, has been moving in the right direction, not only with respect to our heightened capacity for improving our standard of living, but also in terms of the evolution of our social contracts and our conceptions of basic and universal human rights.  And we can plausibly reconcile this generally positive historical trend with the Hegelian view of historical development, and the conflicts that arise in human history, by noting that we often seem to take one step backward followed by two steps forward in terms of our moral and epistemological progress.

Regardless of the progress we’ve made, we seem to be at a crucial point in our history where the same freedom-limiting authoritarian reach that plagued humanity (especially during the Middle Ages) has undergone a kind of morphogenesis, having been reinstantiated albeit in a different form.  The elements of authoritarianism have become built into the very structure of mass-culture, with an anti-individualistic corporatocracy largely mediating the flow of information throughout this mass-culture, and also mediating its evolution over time as it becomes more globalized, interconnected, and cybernetically integrated into our day-to-day lives.

Coming back to the kinds of parallels in biology that I opened with, we can see human autonomy and our culture (ideas and behaviors) as having evolved in ways that are strikingly similar to the biological jump that life made long ago, when single-celled organisms eventually joined forces with one another to become multi-cellular.  This biological jump is analogous to the jump we made during the early onset of civilization, when we employed an increasingly complex distribution of labor and occupational specialization, allowing us to survive many more environmental hurdles than ever before.  Once civilization began, the spread of culture became much more effective for transmitting ideas, both laterally within a culture and longitudinally from generation to generation, with this process heavily enhanced by our having adopted various forms of written language, allowing us to store and transmit information in much more robust ways, similar to genetic information storage and transfer via DNA, RNA, and proteins.

Although the single-celled bacterium or amoeba (for example) may be thought of as having more “autonomy” than a cell that is forcefully interconnected within a multi-cellular organism, we can see how the range of capacities available to single cells was far more limited before they made the symbiotic jump, just as humans living before the onset of civilization had more “freedom” (at least of a certain type) and yet the number of possible life trajectories and experiences was minuscule when compared to those of a human living in a post-cultural world.  But once multi-cellular organisms began to form a nervous system and eventually a brain, the entire collection of cells making up an organism became ultimately subservient to a centralized form of executive power — just as humans have become subservient to the executive authority of the state or government (along with various social pressures of conformity).

And just as the fates of each cell in a multi-cellular organism became predetermined and predictable by its particular set of available resources and the specific information it received from neighboring cells, so too our own lives are becoming increasingly predetermined and predictable by the socioeconomic resources made available to us and the information we’re given, which constitutes our mass-culture.  We are slowly morphing from individual brains into something akin to individual neurons within a global brain of mass-consciousness and mass-culture, having our critical thinking skills and creative aspirations exchanged for rehearsed responses and docile expectations that maintain the status quo and continually transfer our autonomy to an oligarchic power structure.

We might wonder if this shift has been inevitable, possibly being yet another example of a “fractal pattern” recapitulated in sociological form out of the very same free-floating rationales that biological evolution has been making use of for eons.  In any case, it’s critically important that we become aware of this change, so that we can actively achieve and effectively maintain the liberties and level of individual autonomy that we so highly cherish.  We ought to be thinking about the ways we can remain cognizant of, and critical of, our culture and its products; how we can reconcile technological rationality and progress with a future world composed of truly liberated individuals; and how to transform our corporatocratic capitalist society into one based on a mixed economy with a social safety net that even the wealthiest citizens would be content to live under, so as to maximize the actual creative freedom people have once their basic existential needs have been met.

Will unchecked capitalism, social-media, mass-media, and the false needs and epistemological bubbles they’re forming lead to our undoing and destruction?  Or will we find a way to rise above this technologically-induced setback, and take advantage of the opportunities it has afforded us, to make the world and our technology truly compatible with our human psychology?  Whatever the future holds for us, it is undoubtedly going to depend on how many of us begin to critically think about how we can seriously restructure our educational system and how we disseminate information, how we can re-prioritize and better reflect on what our personal goals ought to be, and also how we ought to identify ourselves as free and unique individuals.

Irrational Man: An Analysis (Part 4, Chapter 11: The Place of the Furies)

In my last post in this series on William Barrett’s Irrational Man, I examined some of the work of the existential philosopher Jean-Paul Sartre, which concluded part 3 of Barrett’s book.  The final chapter, Ch. 11: The Place of the Furies, will be briefly examined here, concluding my eleven-part post series.  I’ve enjoyed this very much, but it’s time to move on to new areas of interest, so let’s begin.

1. The Crystal Palace Unmanned

“The fact is that a good dose of intellectualism — genuine intellectualism — would be a very helpful thing in American life.  But the essence of the existential protest is that rationalism can pervade a whole civilization, to the point where the individuals in that civilization do less and less thinking, and perhaps wind up doing none at all.  It can bring this about by dictating the fundamental ways and routines by which life itself moves.  Technology is one material incarnation of rationalism, since it derives from science; bureaucracy is another, since it aims at the rational control and ordering of social life; and the two — technology and bureaucracy — have come more and more to rule our lives.”

Regarding the importance and need for more intellectualism in our society, I think this can be better described as the need for more critical thinking skills and the need for people to be able to discern fact from fiction, to recognize their own cognitive biases, and to respect the adage that extraordinary claims require extraordinary evidence.  At the same time, in order to appreciate the existentialist’s concerns, we ought to recognize that there are aspects of human psychology including certain psychological needs that are inherently irrational, including with respect to how we structure our lives, how we express our emotions and creativity, how we maintain a balance in our psyche, etc.  But, since technology is not only a material incarnation of rationalism, but also an outlet for our creativity, there has to be a compromise here where we shouldn’t want to abandon technology, but simply to keep it in check such that it doesn’t impede our psychological health and our ability to live a fulfilling life.

“But it is not so much rationalism as abstractness that is the existentialists’ target; and the abstractness of life in this technological and bureaucratic age is now indeed something to reckon with.  The last gigantic step forward in the spread of technologism has been the development of mass art and mass media of communication: the machine no longer fabricates only material products; it also makes minds. (stereotypes, etc.).”

Sure enough, we’re living in a world where many of our occupations are but one of many layers of abstraction constituting our modern “machine of civilization”.  And the military-industrial complex that has taken over the modern world has certainly gone beyond the mass production of physical stuff to be consumed by the populace, and now includes the mass production and viral dissemination of memes as well.  Ideas can spread like viruses, and in our current globally interconnected world (which Barrett hadn’t yet seen to the same degree when writing this book), the spread of these ideas is much faster and more influential on culture than ever before.  The degree of indoctrination, and the perpetuated cycles of co-dependence between citizens and the corporatocratic, sociopolitical forces ruling our lives from above, have made our way of life and our thinking much more collective and less personal than at any other time in human history.

“Kierkegaard condemned the abstractness of his time, calling it an Age of Reflection, but what he seems chiefly to have had in mind was the abstractness of the professorial intellectual, seeing not real life but the reflection of it in his own mind.”

Aside from the increasingly abstract nature of modern living, then, there’s also the abstractness that pervades our thinking about life, which detracts from our ability to actually experience life.  Kierkegaard had a legitimate point here in pointing out that theory cannot be a complete substitute for practice; that thought cannot be a complete substitute for action.  We certainly don’t want the reverse either, since action without sufficient forethought leads to foolishness and bad consequences.

I think the main point here is that we don’t want to miss out on living life by thinking about it too much.  While living a philosophically examined life is beneficial, it remains an open question exactly what balance is best for any particular individual to live the most fulfilling life.  In the meantime we ought to simply recognize that there is the potential for an imbalance, and try our best to avoid it.

“To be rational is not the same as to be reasonable.  In my time I have heard the most hair-raising and crazy things from very rational men, advanced in a perfectly rational way; no insight or feelings had been used to check the reasoning at any point.”

If you ignore our biologically-grounded psychological traits, or ignore the fact that there’s a finite range of sociological conditions for achieving human psychological health and well-being, then you can’t develop any theory that’s supposed to apply to humans and expect it to be reasonable or tenable.  I would argue that ignoring this subjective part of ourselves when making theories that are supposed to guide our behavior in any way is irrational, at least within the greater context of aiming to have not only logically valid arguments but logically sound arguments as well.  But, if we’re going to exclude the necessity for logical soundness in our conception of rationality, then the point is well taken.  Rationality is a key asset in responsible decision making but it should be used in a way that relies on or seeks to rely on true premises, taking our psychology and sociology into account.

“The incident (making hydrogen bombs without questioning why) makes us suspect that, despite the increase in the rational ordering of life in modern times, men have not become the least bit more reasonable in the human sense of the word.  A perfect rationality might not even be incompatible with psychosis; it might, in fact, even lead to the latter.”

Again, I disagree with Barrett’s use or conception of rationality here, but semantics aside, his main point still stands.  I think that we’ve been primarily using our intelligence as a species to continue to amplify our own power and maximize our ability to manipulate the environment in any way we see fit.  But as our technological capacity advances, our cultural evolution is getting increasingly out of sync with our biological evolution, and we haven’t been anywhere close to sufficient in taking our psychological needs and limitations into account as we continue to engineer the world of tomorrow.

What we need is rationality that relies on true or at least probable premises, as this combination should actually lead a person to what is reasonable or likely to be reasonable.  I have no doubt that without making use of a virtue such as reasonableness, rationality can become dangerous and destructive, but this is the case with every tool we use whether it’s rationality or other capacities both mental and material; tools can always be harmful when misused.

“If, as the Existentialists hold, an authentic life is not handed to us on a platter but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its (unique) threats and its promises.”

And rationality, on an individual level, should be used to help reach this goal of self-finitization, so that we can balance the benefits of the collective with the freedom of each individual that makes up that collective.

“I for one am personally convinced that man will not take his next great step forward until he has drained to the lees the bitter cup of his own powerlessness.”

And when a certain kind of change is perceived as something to be avoided, the only way to go through with it is by perceiving the status quo as something that needs to be avoided even more so, so that a foray out of our comfort zone is perceived as an actual improvement to our way of life.  But as long as we see ourselves as already all powerful and masters over our domain, we won’t take any major leap in a new direction of self and collective improvement.  We need to come to terms with our current position in the modern world so that we can truly see what needs repair.  Whether or not we succumb to one or both of the two major existential threats facing our species, climate change and nuclear war, is contingent on whether or not we set our eyes on a new prize.

“Sartre recounts a conversation he had with an American while visiting in this country.  The American insisted that all international problems could be solved if men would just get together and be rational; Sartre disagreed and after a while discussion between them became impossible.  “I believe in the existence of evil,” says Sartre, “and he does not.” What the American has not yet become aware of is the shadow that surrounds all human Enlightenment.”

Once again, if rationality is accompanied by true premises that take our psychology into account, then international problems could be solved (or many of them, at least); but Sartre is also right insofar as there are bad ideas that exist, and people who have cultivated their lives around them.  It’s not enough to have people thinking logically, nor is some kind of rational idealism up to the task of dealing with human emotion, cognitive biases, psychopathy, and the other complications in human behavior that exist.

The crux of the matter is that some ideas hurt us and other ideas help us, with our perception of these effects being the main arbiter driving our conception of what is good and evil; but there’s also a disparity between what people think is good or bad for them and what is actually good or bad for them.  I think that one could very plausibly argue that if people really knew what was good or bad for them, then applying rationality (with true or plausible premises) would likely work to solve a number of issues plaguing the world at large.

“…echoing the Enlightenment’s optimistic assumption that, since man is a rational animal, the only obstacles to his fulfillment must be objective and social ones.”

And here’s an assumption that’s certainly difficult to ground, since it’s based on a false premise, namely that humans are inherently rational.  We are unique in the animal kingdom in the sense that we are the only animal (or one of only a few) with the capacity for rational thought, foresight, and the complex level of organization made possible by its use.  I also think that the obstacles to fulfillment are objective, since they can be described as facts pertaining to our psychology, sociology, and biology, even if our psychology (for example) is instantiated in a subjective way.  In other words, our subjectivity and conscious experiences are grounded on, or describable in, objective terms relating to how our particular brains function, how humans as a social species interact with one another, and so on.  But fulfillment can never be reached, let alone maximized, without taking our psychological traits and idiosyncrasies into account, for these are the ultimate constraints on what can make us happy, satisfied, fulfilled, and so on.

“Behind the problem of politics, in the present age, lies the problem of man, and this is what makes all thinking about contemporary problems so thorny and difficult…anyone who wishes to meddle in politics today had better come to some prior conclusions as to what man is and what, in the end, human life is all about…The speeches of our politicians show no recognition of this; and yet in the hands of these men, on both sides of the Atlantic, lies the catastrophic power of atomic energy.”

And as of 2018, we’ve seen the Doomsday Clock reach two minutes to midnight, having inched one minute closer to our own destruction since 2017.  The dominance hierarchy led and reinforced by the corporatocratic plutocracy is locked into a narrow form of tunnel vision, hell-bent on maintaining if not exacerbating the wealth and power disparities that plague our country and the world as a whole, despite the fact that this is not sustainable in the long run, nor best for the fulfillment of those promoting it.

We the people do share a common goal of trying to live a good, fulfilling life: to have our basic needs met, and to have no fewer rights than anybody else in society.  You’d hardly know that this common ground exists between us when looking at the state of our political sphere, which is likely as polarized now in the U.S. (if not more so) as it was even during the Civil War.  Clearly, we have a lot of work to do to reevaluate what our goals and priorities ought to be, and we need a realistic and informed view of what it means to be human before any of these goals can be realized.

“Existentialism is the counter-Enlightenment come at last to philosophic expression; and it demonstrates beyond anything else that the ideology of the Enlightenment is thin, abstract, and therefore dangerous.”

Yes, but the Enlightenment has also been one of the main driving forces leading us out of theocracy, out of scientific illiteracy, and toward an appreciation of reason and evidence (something the U.S., at least, is in short supply of these days), and thus it has been crucial in giving us the means for increasing our standard of living and solving many of our problems.  While the technological advancements derived from the Enlightenment have also been a large contributor to many of our problems, the current existential threats we face, including climate change and nuclear war, are more likely to be solved by new technologies, not by an abolition of technology nor of the Enlightenment brand of thinking that led to technological progress.  We simply need to better inform our technological goals with the actual needs and constraints of human beings, our psychology, and so on.

“The finitude of man, as established by Heidegger, is perhaps the death blow to the ideology of the Enlightenment, for to recognize this finitude is to acknowledge that man will always exist in untruth as well as truth.  Utopians who still look forward to a future when all shadows will be dispersed and mankind will dwell in a resplendent Crystal Palace will find this recognition disheartening.  But on second thought, it may not be such a bad thing to free ourselves once and for all from the worship of the idol of progress; for utopianism — whether the brand of Marx or of Nietzsche — by locating the meaning of man in the future leaves human beings here and now, as well as all mankind up to this point, without their own meaning.  If man is to be given meaning, the Existentialists have shown us, it must be here and now; and to think this insight through is to recast the whole tradition of Western thought.”

And we ought to take our cue from Heidegger, at the very least, to admit that we are finite, that our knowledge is limited, and that it always will be.  We will not be able to solve every problem, and we would do ourselves and the world a lot of good if we admitted our own limitations.  But to avoid being overly cynical and simply damning progress altogether, we need to look for new ways of solving our existential problems.  Part of the solution I see for humanity moving forward is a combination of advancements in a few different fields.

By making better use of genetic engineering, we’ll one day have the ability to change ourselves in remarkable ways in order to become better adapted to our current world.  We will be able to re-sync our biological evolution with our cultural evolution so we no longer feel uprooted, like a fish out of water.  Continuing research in neuroscience will allow us to learn more about how our brains function and how to optimize that functioning.  Finally, the strides we make in computing and artificial intelligence should allow us to vastly improve our simulation power and arm us with greater intelligence for solving all the problems that we face.

Overall, I don’t see progress as the enemy, but rather that we have an alignment problem between our current measures of progress, and what will actually lead to maximally fulfilling lives.

“The realization that all human truth must not only shine against an enveloping darkness, but that such truth is even shot through with its own darkness may be depressing, and not only to utopians…But it has the virtue of restoring to man his sense of the primal mystery surrounding all things, a sense of mystery from which the glittering world of his technology estranges him, but without which he is not truly human.”

And if we actually use technology to change who we are as human beings, by altering the course of natural selection and our ongoing evolution (which is bound to happen with or without our influence, for better or worse), then it’s difficult to say what the future really holds for us.  There are cultural and technological forces that are leading to transhumanism, and this may mean that one day “human beings” (or whatever name is chosen for the new species that replaces us) will be inherently rational, or whatever we’d like our future species to be.  We’ve stumbled upon the power to change our very nature, and so it’s far too naive, simplistic, unimaginative, and short-sighted to say that humans will “always” or “never” be one way or another.  Even if this were true, it wouldn’t negate the fact that one day modern humans will be replaced by a superseding species which has different qualities than what we have now.

2. The Furies

“…Existentialism, as we have seen, seeks to bring the whole man-the concrete individual in the whole context of his everyday life, and in his total mystery and questionableness-into philosophy.”

This is certainly an admirable goal to combat simplistic abstractions of what it means to be human.  We shouldn’t expect to be able to abstract a certain capacity of human beings (such as rationality), consider it in isolation (no consideration of context), formulate a theory around that capacity, and then expect to get a result that is applicable to human beings as they actually exist in the world.  Furthermore, all of the uncertainties and complexities in our human experiences, no matter how difficult they may be to define or describe, should be given their due consideration in any comprehensive philosophical system.

“In modern philosophy particularly (philosophy since Descartes), man has figured almost exclusively as an epistemological subject — as an intellect that registers sense-data, makes propositions, reasons, and seeks the certainty of intellectual knowledge, but not as the man underneath all this, who is born, suffers, and dies…But the whole man is not whole without such unpleasant things as death, anxiety, guilt, fear and trembling, and despair, even though journalists and the populace have shown what they think of these things by labeling any philosophy that looks at such aspects of human life as “gloomy” or “merely a mood of despair.”  We are still so rooted in the Enlightenment — or uprooted in it — that these unpleasant aspects of life are like Furies for us: hostile forces from which we would escape (the easiest way is to deny that the Furies exist).”

My take on all of this is simply that multiple descriptions of human existence are needed to account for all of our experiences, thoughts, values, and behavior.  And it is what we value given our particular subjectivity that needs to be primary in these descriptions, and primary with respect to how we choose to engineer the world we live in.  Who we are as a species is a complex mixture of good and bad, lightness and darkness, and stability and chaos; and we shouldn’t deny any of these attributes nor repress them simply because they make us uncomfortable.  Instead, we would do much better to face who we are head on, and then try to make our lives better while taking our limitations into account.

“We are the children of an enlightenment, one which we would like to preserve; but we can do so only by making a pact with the old goddesses.  The centuries-long evolution of human reason is one of man’s greatest triumphs, but it is still in process, still incomplete, still to be.  Contrary to the rationalist tradition, we now know that it is not his reason that makes man man, but rather that reason is a consequence of that which really makes him man.  For it is man’s existence as a self-transcending self that has forged and formed reason as one of its projects.”

This is well said, although I’d prefer to couch it in different terms: it is ultimately our capacity to imagine that makes us human and able to transcend our present selves by pointing toward a future self and a future world.  We do this in part by updating our models of the world, simulating new worlds, and trying to better understand the world we live in by engaging with it, re-shaping it, and finding ways of better predicting its causal structure.  Reason and rationality have been integral in updating our models of the world, but they’ve also been hijacked to some degree by a kind of supernormal stimulus, reinforced by technology and by our living in a world entirely alien to our evolutionary niche, which has largely dominated our lives in a way that overlooks our individualism, emotions, and feelings.

Abandoning reason and rationality is certainly not the answer to this existential problem; rather, we just need to ensure that they’re being applied in a way that aligns with our moral imperatives and that they’re ultimately grounded on our subjective human psychology.

Moral Deliberation, Religious Freedom & Church-State Separation

I’m glad we live in a society where we have the freedom to believe whatever we want to believe.  No matter how crazy or dangerous some of these beliefs are, no matter how unreasonable and irrational some of them may be, and no matter whether some of these beliefs may hurt others and detract from their happiness and life fulfillment, we have the freedom to believe them nevertheless.  We also live in a representative democracy (for now at least), thereby granting us the freedom to vote for political representatives and the policies they stand for and in some cases granting us the freedom to vote for some of the particular laws themselves.  Combining these two freedoms, freedom of belief and freedom to vote, we have the freedom to vote for a particular candidate or law based on whatever reason or belief we wish.  It is this latter freedom that I believe is being grossly abused by so many in this country.

I’ve written previously about the imperative of democracy for any just society, but within that post I also mentioned the (perhaps) equal importance of moral deliberation within any just democratic framework.  People should be justifying their votes and their positions on particular issues through a moral deliberative process.  We do this to some degree already but not nearly enough and not in any useful public format.

We can’t simply leave it up to a room full of politicians to decide for us (as we primarily do now), as then all the individual perspectives that constitute and drive the public’s understanding of some issue become truncated, distorted, or superseded by some kind of misleading rhetorical caricature that can take on a life of its own.

Our society needs a political system which stresses the need to justify the laws enacted through moral deliberation, not only to create more transparency in the political process but also to help resolve moral disagreements (to the best of our ability) through a process of open and inclusive critical discourse, helping to encourage citizens to form a more well-rounded perspective on public policy.  The increase in transparency is not only to help us distinguish between political aims that are self-interested from those that are actually in the public’s best interests, but also to point out the different fundamental reasons driving people’s voting preferences.  In order to point out errors in one another’s reasoning (if there are any errors), we have to talk with each other about our reasons and the thought processes that have led us to some particular point of view.  It may be that the disagreement is about a difference in what we value, but oftentimes it’s due to a rational argument opposing an irrational one.

Moral deliberation would help us to illustrate when political or legislative points of view are grounded on beliefs in the supernatural or other beliefs that are not based on evidence that the opposing side can examine and consider.  We may find points of view that are dependent on someone’s religious beliefs, which if voted to become the law of the land, could actually exclude the religious freedom of others (simply by majority rule).

Let’s consider abortion and embryonic stem cell research as examples.  If through a moral deliberative process we come to find that people are voting to ban the right to an abortion or to ban the use of life-saving medical technologies that require embryonic stem cells, because they believe that human embryos have souls or some other magical property, then we need to point out that creating a law grounded on non-demonstrable religious beliefs (such as the belief in souls) is not something that can reasonably be implemented without violating the religious rights of everyone in that society who does not share that unfalsifiable belief in souls.  Those people should consider how they would feel if a religion other than their own became endorsed by the majority and tried to push for legislation based on some other unfalsifiable religious dogma.

Ultimately, a majority rule that enacts legislation based on religious belief is analogous to eradicating the separation of church and state, but rather than having the church or churches with direct political power over our laws, instead they indirectly obtain their political power by influencing their congregations to vote for some law that is deemed acceptable by the church’s own dogma.  It’s one thing for a religious institution to point out what evidence or secular arguments exist to support their position or that of their opponents, whereby the arguments can at least move forward by examining said evidence and seeing where it leads us.  But when an argument is based on beliefs that have no evidence to support them, then it lacks the objective character needed to justifiably ground a new law of the land — a law that will come to exist and apply to all in a secularized society (as opposed to a theocracy).

If we are to avoid slipping further into a theocracy, then we need to better utilize moral deliberation to tease out the legislative justifications that are based on unfalsifiable beliefs such as beliefs in disembodied minds and magic and so forth, so we can shift the argument to exclude any unfalsifiable beliefs and reasoning.  Disagreeing on the facts themselves is a different matter that we’ll always have to deal with, but disagreeing on whether or not to use facts and evidence in our legislative decision-making process is beyond ridiculous and is an awful and disrespectful abuse of the freedoms that so many of our ancestors have fought and died to protect.

The arguments surrounding abortion rights and stem cell research, for example, once the conversation shifts from the personal to the political sphere, should likewise shift from those that can include unfalsifiable supernatural beliefs to those that eventually exclude them entirely.  By relying on falsifiable secular claims and arguments, one can better approximate a line of argumentation that is more likely to transcend any particular religious or philosophical system.  By doing so we can also better discover what it is that we actually value in our everyday lives.  Do we value an undetectable, invisible, disembodied mind that begins to inhabit fertilized eggs at some arbitrary point in time?  A magical substance that, if it exists, is inadvertently flushed out of many women’s uteri countless times (by failing to implant an egg after conception) without their giving it a second thought?  Or rather do we value persons, human persons in particular, with consciousness, the ability to think and feel, and that have a personality (a minimum attribute of any person)?

I think it’s the latter that we actually value (on both sides of the aisle, despite the apparent contradiction in their convictions).  So even if we ignore compelling arguments for bodily autonomy and only focus on arguments from personhood as they relate to abortion and embryonic stem cell research, we should see that what we actually value isn’t under threat when people have an abortion (at least, not before consciousness and a personality develop in the fetus around the 25th-30th week of gestation), nor is it under threat when we carry out embryonic stem cell research, since once again there is no person under threat but only a potential future person (just as blueprints are a potential future building, or an acorn is a potential future oak tree).  If I choose to destroy the blueprints or the acorn to achieve some other end I desire, nobody should falsely equate that with destroying a building or an oak tree.  Unfortunately, that is what many people do when they consider abortion or embryonic stem cell research: even if they limit their arguments to falsifiable claims and make no mention of souls, they falsely equate the potential future person with an actual realized person.  In doing so, they falsely attribute an intrinsic value to something that is only extrinsically valuable.  It should be said, though, that this latter argument to ban abortion or embryonic stem cell research, while still logically fallacious, is at least based on falsifiable claims that can be discussed and considered, without any mention of souls or other non-demonstrables.

It should be pointed out here that I’m not saying that people can’t decide how they ought to act based on religious beliefs or other beliefs regarding magic or the supernatural.  What I am saying is that one should be able to use those non-secular reasons to guide their own behavior with respect to whether or not they will have an abortion or have their embryo used for stem cell research.  That’s fine and dandy even though I strongly discourage anybody and everybody from making decisions that aren’t based on reason and evidence.  Nevertheless I think it’s one’s right to do so, but what they most definitely shouldn’t do is use such reasons to justify what other people can or can’t do.

If I have a religious belief that leads me to believe that it is immoral to feed my children broccoli (for some unfalsifiable reason), should I try to make it a law of the land that no other parents are allowed to feed their children broccoli?  Or should I use my religious belief to simply inform my own actions and not try to force others to comply with my religious belief?  Which seems like a more American ideal?  Which seems more fair to every independent citizen, each with their own individual liberties?  Now what if I find out that there’s a substance in the broccoli that leads to brain damage if fed to children of a certain age?  Well then we would now have a secular reason, more specifically a falsifiable reason, to ban broccoli (where we didn’t before) and so it would no longer need to remain isolated from the law of the land, but can (and should) be instantiated in a law that would protect children from harmful brain damage.  This legislation would make sense because we value conscious persons, and because reasons that appeal to evidence can and should be examined by everyone living in a society to inform them of what laws of the land should and shouldn’t be put into place.

In summary, I think it is clear that our freedom of belief and freedom to vote are being abused by those that want to use their non-demonstrable, religiously grounded moral claims to change the law of the land rather than to simply use those non-demonstrable moral claims to guide their own actions.  What we should be doing instead is limiting our freedom to vote such that the justifications we impose on our decisions are necessarily based on demonstrable moral claims and beliefs (even if our values differ person to person).  And this still allows us the freedom to continue using any number of demonstrable and non-demonstrable moral claims to guide our own behavior as we see fit.  This is the only way to maintain true religious freedom in any democratic society, and we need to push for the kind of moral deliberation that will get us there.

“The Brothers Karamazov” – A Moral & Philosophical Critique (Part I)

I wanted to write some thoughts on Dostoevsky’s The Brothers Karamazov, and I may end up writing this in several parts.  I’m interested in some of the themes that Dostoevsky develops, in particular, those pertaining to morality, moral responsibility, free will, moral desert, and their connection to theism and atheism.  Since I’m not going to go over the novel in great detail, for those not already familiar with this story, please at least read the plot overview (here’s a good link for that) before pressing on.

One of the main characters in this story, Ivan, is an atheist, as I am (though our philosophies differ markedly as you’ll come to find out as you read on).  In his interactions and conversation with his brother Alyosha, a very religious man, various moral concepts are brought up including the dichotomy of good and evil, arguments against the existence of God (at least, against the existence of a loving God) such as the well-known Problem of Evil, and other ethical and religious quandaries.  I wanted to first talk about Ivan’s insistence that good and evil cannot exist without God, and since Ivan’s character doesn’t believe that God exists, he comes to the conclusion that good and evil do not exist either.  Although I’m an atheist, I disagree with Ivan’s views here and will expand on why in a moment.

There are a number of arguments against the existence of God that I’ve come across over the years, some of which I’ve formulated on my own after much reflection, and that were at least partially influenced by my former religious views as a born-again Protestant Christian.  Perhaps ironically, it wasn’t until after I became an atheist that I began to delve much deeper into moral theory, and also into philosophy generally (though the latter is less surprising).  My views on morality have evolved in extremely significant ways since my early adult years.  For example, back when I was a Christian I was a moral objectivist/realist, believing that morals were indeed objective but only in the sense that they depended on what God believed to be right and wrong (even if these divine rules changed over time or seemed to contradict my own moral intuitions and analyses, such as stoning homosexuals to death).  Thus, I subscribed to some form of Divine Command Theory (or DCT).  After becoming an atheist, much like Ivan, I became a moral relativist (but only temporarily – keep reading), believing as the character Ivan did, that good and evil couldn’t exist due to their resting on a fictitious or at least non-demonstrable supernatural theological foundation, and/or (perhaps unlike Ivan believed) that good and evil may exist but only in the sense that they were nothing more than cultural norms that were all equally valid.

Since then, I’ve become a moral realist once again (as I was when I was a Christian), after putting the philosophy of Ivan to the test (so to speak).  I realized that I could no longer justify the belief that any cultural moral norm had as strong a claim to being true as any other.  There were simply too many examples of moral prescriptions in various cultures and religions that couldn’t be justified.  Then I realized that many of the world’s cultural moral norms, though certainly not all of them, were largely universal (such as prohibitions against, at least certain forms of, stealing, killing, and others), which suggested a common human psychological component underlying many of them.

I also realized that as an atheist, much as Nietzsche realized, I now had to ground my own moral views on something that didn’t rely on Divine Command Theory, gods, Christian traditions, or any other foundation that I found to be invalid, illogical, unreasonable, unjustified, or not sufficiently demonstrated to be true.  And I had to do this if I was to find a way out of moral relativism, which simply didn’t sit well with me as it didn’t seem to be coherent with the bulk of human psychology and the more or less universal goals that humans strive to achieve in their lives.  It was ultimately the objective facts pertaining to human psychology that allowed me to resubscribe to an objectivist/realist morality — and now my views of morality were no longer contingent on merely the whim or dictates of some authoritarian god (thus bypassing the Euthyphro dilemma), but rather were contingent on objective facts about human beings, what makes us happy and fulfilled and what doesn’t (where these facts often disagree with moral prescriptions stemming from various religions and Divine Command Theory).

After dabbling with the teachings of various philosophers such as Aristotle, Kant, Mill, Rawls, Foot, and others, I came to accept a view of morality that was indeed coherent, sensible, sufficiently motivating to follow (which is a must), and which subsumed all the major moral theories into one framework (and which therefore had the best claim to being true since it was compatible with all of them –  virtue ethics, deontology, and consequentialism).   Now I’ve come to accept what can be described as a Goal Theory of Ethics, whereby morality is defined as “that which one ought to do above all else – when rational and maximally informed based on reason and evidence – in order to increase one’s personal life fulfillment and overall level of preference satisfaction”.  One could classify this as a subset of desire utilitarianism, but readers must be warned that this is NOT to be confused with traditional formulations of utilitarianism – such as those explicitly stated by J.S. Mill, Peter Singer, etc., as they are rife with problems resulting from not taking ALL consequences into account (such as consequences pertaining to one’s own character and how they see themselves as a person, as per the wisdom of Aristotle and Kant).

So how can good and evil exist without some God(s) existing?  That is to say, if a God doesn’t exist, how can it not be the case that “anything is permissible”?  Well, the short answer is – because of human psychology (and also social contract theory).

When people talk about behaving morally, what they really mean (when we peel back all the layers of cultural and religious rhetoric, mythology, narrative, etc.) is behaving in a way that maximizes our personal satisfaction – specifically our sense of life fulfillment.  Ask a Christian, or a Muslim, or a Humanist, why ought they behave in some particular way, and it all can be shown to break down to some form of human happiness or preference satisfaction for life fulfillment (not some hedonistic form of happiness).  They may say to behave morally “because then you can get into heaven, or avoid hell”, or “because it pleases God”, or what-have-you.  When you ask why THOSE reasons are important, it ultimately leads to “because it maximizes your chance of living a fulfilled life” (whether in this life or in the next, for those that believe in an afterlife).  I don’t believe in any afterlife because there’s no good evidence or reason to have such a belief, so for me the life that is most important is the one life we are given here on earth – which therefore must be cherished and not given any secondary priority to a hypothetical life that may or may not be granted after death.

But regardless, whether you believe in an afterlife (as Alyosha does) or not (as in Ivan’s case), it is still about maximizing a specific form of happiness and fulfillment.  However, another place where Ivan seems to go wrong in his thinking is his conclusion that people only behave morally based on what they believe will happen to them in an afterlife — and therefore, if there is no afterlife (no immortal souls), then there is no reason to be moral.  The fact of the matter is, though, that much of what we tend to call moral behavior actually produces positive effects on the quality of our lives now, as we live them.  People that behave immorally are generally not going to live “the good life” or achieve what Aristotle called eudaimonia.  On the other hand, if people actually cultivate virtues of compassion, honesty, and reasonableness, they will simply live more fulfilling lives.  And people that don’t do this, or that simply follow their immediate epicurean or selfish impulses, will most certainly not live a fulfilling life.  So there is actually a naturalistic motivating force to behave morally, regardless of any afterlife.  Ivan simply overlooked this (and by extension, possibly Dostoevsky as well), likely because most people brought up in Christianized cultures often focus on the afterlife as being the bearer of ultimate justice and therefore the ultimate motivator for behaving as they do.

In any case, the next obvious question to ask is: what ways of living best accomplish this goal of life fulfillment?  This is an empirical question, which means that science can in principle discover the answer, and that it is the only reliable (or at least the most reliable) way of arriving at such answers.  While there is as of yet no explicit “science of morality”, various branches of science such as psychology, sociology, anthropology, and neuroscience are discovering moral facts (or at least reasonable approximations of these facts, given what data we have obtained thus far).  Unless we as a society choose to formulate a science of morality — a laborious research project indeed — we will have to live with the best approximations to moral facts that are at our disposal as per the findings in psychology, sociology, neuroscience, etc.

So even if we don’t yet know with certainty what one ought to do in any and all particular circumstances (no situational ethical certainties), many scientific findings have increased our confidence in having discovered at least some of those moral facts or approximations of those facts (such as that slavery is morally wrong, because it doesn’t maximize the overall life satisfaction of the slaveholder, especially if he/she were to analyze the situation rationally with as many facts as are pragmatically at their disposal).  And to make use of some major philosophical fruits cultivated from the works of Hobbes, Locke, Rousseau, Hume, and Rawls (among many others), we have the benefits of Social Contract Theory to take into consideration.  In short, societies maximize the happiness and flourishing of the citizens contained therein by making use of a social contract – a system of rules and mutual expectations that ought to be enforced in order to accomplish that societal goal (and which ought to be designed in a fair manner, behind a Rawlsian veil of ignorance, or what he deemed the “original position”).  And therefore, to maximize one’s own chance of living a fulfilling life, one will most likely need to endorse some form of social contract theory that grants people rights, equality, protection, and so forth.

In summary, good and evil do exist despite there being no God, because human psychology is particular to our species and our biology: it has a finite range of inputs and outputs, and therefore there are some sets of behaviors that will work better than others to maximize our happiness and overall life satisfaction given the situational circumstances that we find our lives embedded in.  What we call “good” and “evil” are simply the behaviors and causal events that “add to” or “detract from” our goal of living a fulfilling life.  The biggest source of disagreement among the various moral systems in the world (whether religiously motivated or not) is the different sets of “facts” that people subscribe to (some beliefs being based on sound reason and evidence whereas others are based on irrational faith, dogma, or emotions), and whether or not people are analyzing the actual facts in a rational manner.  A person may think they know what will maximize their chances of living a fulfilling life when in fact (much like with the heroin addict that can’t wait to get their next fix) they are wrong about the facts, and if they only knew so and acted rationally, they would do what they actually ought to do instead.

In my next post in this series, I’ll examine Ivan’s views on free will and moral responsibility, and how it relates to the unintended consequence of the actions of his half-brother Smerdyakov (who murders their father, Fyodor Pavlovich, as a result of Ivan’s influence on his moral views).

CGI, Movies and Truth…

After watching Rogue One: A Star Wars Story, which I liked, though not nearly as much as the original trilogy (Episodes IV, V, and VI), it got me thinking more about something I hadn’t thought about since the most recent presidential election.  As I watched Grand Moff Tarkin and Princess Leia, both characters made possible in large part thanks to CGI (as well as the help of actors Ingvild Deila and Guy Henry), I realized that although this is still a long way away, it is inevitable that (barring a nuclear world war or some other catastrophe that kills us all or sets us back to a pre-industrialized age) the pace of this technology will eventually lead to CGI products that are completely indistinguishable from reality.

This means that eventually, the “fake news” issue that many have been making a lot of noise about as of late, will one day take a new and ugly turn for the worse.  Not only is video and graphic technology accelerating at a fairly rapid pace to exacerbate this problem, but similar concerns are also arising as a result of voice editing software.  By simply gathering several seconds of sample audio from a person of interest, various forms of software are getting better and better at synthesizing their speech in order to mimic them — putting whatever words into “their” mouths that one so desires.

The irony here is that despite the fact that we are going to continue becoming more and more globally interconnected, technologically advanced, and globally knowledgeable, it seems that each individual will eventually become less and less able to know what is true and what isn’t across all the places they are electronically connected to.  One reason for this is that, as per the opening reference to Rogue One, it will become increasingly difficult to judge the veracity of videos that go viral on the internet and/or through news outlets.  We can imagine seeing a video (or a series of videos) released on the news and throughout the internet containing shocking events with real world leaders or other famous people, places, and so forth — events that could possibly start a civil or world war, alter one’s vote, or otherwise — but with the caveat that these events are entirely manufactured by some Machiavellian warmonger or power-seeking elite.

Pragmatically speaking, we must still live our lives trusting what we see in proportion to the evidence we have, thus believing ordinary claims with a higher degree of confidence than extraordinary ones.  We will still need to hold to the general rule of extraordinary claims requiring extraordinary evidence in order to meet their burden of proof.  But it will become more difficult to trust certain forms of evidence (including in a court of law), so we’ll have to take that into consideration, such that actions with more catastrophic potential consequences (should our assumptions or information turn out to be based on false evidence) require a higher burden of proof — once CGI is able to successfully pass some kind of graphical Turing Test.

This is by no means an endorsement for conspiracy theories generally, nor any other anti-intellectual or dogmatic nonsense.  While we don’t want people to start doubting everything they see, nor doubting everything they don’t WANT to see (which would be a proverbial buffet for our cognitive biases and for the conspiracy theorists that make use of these epistemological flaws regularly), we still need to take this dynamic technological factor into account to maintain a world view based on proper Bayesian reasoning.
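The point about “extraordinary claims requiring extraordinary evidence” and proper Bayesian reasoning can be made concrete with a toy calculation.  All the probabilities below are entirely made-up illustrative assumptions; the sketch only shows the structure of the argument — that as convincing fakes become cheaper to produce, the same piece of video evidence moves our beliefs far less:

```python
# Toy Bayesian update: how cheap, convincing forgeries weaken video evidence.
# Every number here is an illustrative assumption, not a measurement.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(claim | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1 - prior)
    return numerator / marginal

prior = 0.001  # an extraordinary claim starts with a low prior

# While convincing video is hard to fake, seeing it is strong evidence:
# the claim jumps from 0.1% to roughly 47% credible.
print(posterior(prior, 0.9, 0.001))

# Once CGI is indistinguishable from reality, a convincing fake is easy
# to produce even when the claim is false, so the same video barely
# moves the needle (posterior stays under 1%).
print(posterior(prior, 0.9, 0.1))
```

The upshot matches the text: we shouldn’t doubt everything, but the evidential weight we assign to a video must shrink as the probability of a convincing forgery rises, and correspondingly more corroborating evidence is needed before acting on extraordinary claims.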

On the brighter side of things, we are going to get to enjoy much of what the new CGI capabilities will bring us, because movies and all visual entertainment are going to be revolutionized in many ways that will be worth celebrating, including our use of virtual reality generally (in the many forms that we do, and will continue to, consciously and rationally desire).  We just need to pay attention and exercise some careful moral deliberation as we develop these technologies.  Our brains simply didn’t evolve to easily avoid being duped by artificial realities like the ones we’re developing (we already get duped far too often within our actual reality), so we need to engineer our path forward in a way that will better safeguard us from our own cognitive biases, so we can maximize our well-being once this genie is out of the bottle.

Atheism, Morality, and Various Thoughts of the Day…

I’m sick of anti-intellectuals and others assuming that all atheists are moral nihilists, moral relativists, post/modernists, proponents of scientism, etc.  ‘Dat ain’t the case.  Some of us respect philosophy and understand full well that even science requires an epistemological, metaphysical, and ethical foundation in order to work at all and to ground all of its methodologies.  Some atheists are even keen on some form of panpsychism (like Chalmers’ or Strawson’s views).

Some of us even subscribe to a naturalistic worldview that holds onto meaning, despite the logical impossibility of libertarian free will (hint: it has to do with living a moral life, which means living a fulfilling life and maximizing one’s satisfaction through a rational assessment of all the available information — which entails BAYESIAN reasoning — including a rational assessment of the information pertaining to one’s own subjective experience of fulfillment and sustainable happiness). Some of us atheists/philosophical naturalists/what-have-you are moral realists as well and therefore reject relativism, believing that objective moral facts DO in fact exist (and therefore science can find them), even if many of those facts are entailed within a situational ethical framework. Some of us believe that at least some number of moral facts are universal, but this shouldn’t be confused with moral absolutism since both are merely independent subsets of realism. I find absolutism to be intellectually and morally repugnant and epistemologically unjustifiable.

Also, a note for any theists out there: when comparing arguments for and against the existence of a God or gods (and the “Divine Command Theory” that accompanies said belief), keep in mind that an atheist need only hold a minimalist position on the issue (soft atheism), and therefore the entire burden of proof lies on the theist to support their extraordinary claim(s) with an extraordinary amount of evidentiary weight. While I’m willing to justify a personal belief in hard atheism (the claim that “God does not exist”), the soft atheist need only point out that they lack a belief in God because no known proponent of theism has yet met the burden of proof for supporting their extraordinary claim that “God does exist”. As such, any justified moral theory of what one ought to do (above all else), including but certainly not limited to who one votes for, how we treat one another, what fundamental rights we should have, etc., must be grounded on claims of fact that have met their burden of proof. Theism has not done this, and the theist can’t simply say “Prove God doesn’t exist”, since this would require proving a null hypothesis, which is not possible, even if it can be proven false. So rather than trying to unjustifiably shift the burden of proof onto the atheist, the theist must satisfy the burden of proof for their positive claim on the existence of a god(s).

A more general goal needed to save our a$$es from self-destruction is for more people to dabble in philosophy. I argue that it should even become a core part of educational curricula (especially education on minimizing logical fallacies/cognitive biases and education on moral psychology) to give us the best chance of living a life that is at least partially examined through internal rational reflection and discourse with those that are willing to engage with us, and the best chance of surviving the existential crisis that humanity (along with many other species that share this planet with us) is now in. We need more people to be encouraged to justify what they think they ought to do above all else.

Virtual Reality & Its Moral Implications

There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they do have at least one commonality, namely the explicit goal of attempting to convince the user that what they are experiencing is in fact real in at least some sense.  This raises a number of ethical concerns.  While we can’t deny the fact that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior much more readily because of the sheer richness of the qualia and the brain’s inability to distinguish significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs has evolved to be well adapted to our natural reality, any virtual variety that increasingly approximates it is bound to increasingly affect our behavior.  So we need to be concerned with VR in terms of how it can affect our beliefs, our biases, and our moral inclinations and other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in their users (especially younger users with more impressionable minds).  These claims have been significantly bolstered by the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, citing accumulated research and meta-analyses showing a strong link between violent video gaming and increased aggression, anti-social behavior, and sharp decreases in moral engagement and empathy.

Thus, the increasingly realistic nature of VR, and the steady expansion of the capacities one has at their disposal within such a virtual space, are bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world?  Even if they don’t, what does such a virtual escapade do to that person’s moral character?  Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some moral relevance or cognitive overlap with them?

On the flip side, what if it were possible to use VR as a therapeutic tool to help treat pedophilia or other behavioral problems?  What if one were able to simulate rape, pedophilia, or the like in order to reduce their chances of performing those acts in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as real instances of such crimes, horrific crimes committed against real human beings.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent if not morally obligatory to do what we can to prevent such harms from occurring.  So even though this technology could lead the otherwise healthy user to engage in bad behaviors, it could also serve as an outlet of expression for those already afflicted with similar impulses.  Just as VR has been used to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them overcome their ills, so too might it provide treatment for other types of mental illnesses and aggressive predispositions, such as those related to murder, sexual assault, etc.

Whether VR is used as an outlet for certain behaviors to prevent them from actually happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is to eventually no longer need any VR treatment at all), there are a few paths here that could plausibly reduce crime and related harms.  But beyond therapeutic uses, we need to be careful about how these technologies are used generally, and about how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition to limit the types of things people could do in these virtual spaces, that may be of some use.  But beyond the fact that such a prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justified.  If one chose to prohibit simulated rape and pedophilia (for example), but not simulated murder or other forms of assault and violence, then what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad if not worse anyway).  Instead, it seems we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning, to see whether at least some simulated activities should be prohibited while others are allowed to persist unregulated.  On the other hand, if prohibiting this kind of activity is impractical, or if it can only be implemented by infringing on liberties that we have good reasons to protect, then we need to think about counter-strategies: better informing people about these kinds of dangers, and/or making other VR products that help encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than helping us cultivate moral virtues and other behaviors that are conducive to our well-being and to living a more fulfilled life.  This could range from reducing our prejudices and biases through exposure to various simulated “out-groups” (for example), to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and already have, to some degree) to work out various moral dilemmas, and our psychological responses to them, without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, making it incumbent upon us to talk more about these technologies as they continue to develop.

On Moral Desert: Intuition vs Rationality

So what exactly is moral desert?  Well, in a nutshell, it is what someone deserves as a result of their actions, as defined within the framework of some kind of moral theory.  Generally, when people talk about moral desert, it’s couched in terms of punishment and reward, and our intuitions (whether innate or culturally inherited) often produce strong feelings of knowing exactly what kinds of consequences people deserve in response to their actions.  But if we think about our ultimate goals in implementing any reasonable moral theory, we should quickly recognize that our ultimate moral goal is simply to have ourselves and everybody else abide by that moral theory, in order to guide our behavior and the behavior of those around us in ways that are conducive to our well-being.  Ultimately, we want ourselves and others to act in ways that maximize our personal satisfaction, not in a hedonistic sense, but rather in the sense of maximizing our contentment and living a fulfilled life.

If we think about scenarios that seem to merit punishment or reward, it would be useful to keep this ultimate moral goal in mind.  I mention this because our feelings of resentment toward those that have wronged us can often lead us to advocate for an excessive amount of punishment of the wrongdoer.  Among other factors, vengeance and retribution often become incorporated into our intuitive sense of justice.  Many have argued that retribution itself (justifying “proportionate” punishment by appealing to concepts like moral desert and justice) isn’t a bad thing, even if vengeance (which lacks inherent limits on punishment, involves the victim’s personal emotions, and differs in other ways) is in fact a bad thing.  In assessing such a claim, I think it’s imperative that we analyze our reasons for punishing a wrongdoer in the first place, and then analyze the concept of moral desert more closely.

Free Will & Its Implications for Moral Desert

Free will is an extremely important concept to parse out here, because moral desert is most often intimately tied to the positive claim that we have free will.  That is to say, most concepts of moral desert, whereby people are believed to deserve punishment and reward for actions that warrant it, fundamentally rely on the premise that people could have chosen to do otherwise, but instead chose the path they did out of free choice.  While philosophers have proposed various versions of free will, they all tend to revolve around some concept of autonomous agency.  The folk-psychological conception of free will that most people subscribe to involves deliberation that is self-caused in some way, thus ruling out randomness or indeterminism as the “cause”, since randomness can’t be authored by the autonomous agent, and also ruling out non-randomness or determinism, since an unbroken chain of antecedent causes can’t be authored by the autonomous agent either.

To avoid a long digression, I’m not going to expound upon all the details of free will and the various versions that others have proposed; I’ll only mention that the version most relevant to moral desert is generally some form of the ability to have chosen to do otherwise (ignoring randomness).  Notice that because indeterminism and determinism form a logical dichotomy, these are the only two options that can possibly describe the ontological underpinnings of our universe (in terms of the causal laws describing how the state of the universe changes over time).  Quantum mechanics is consistent with either option, given its various empirically equivalent interpretations, but it offers no third option, so quantum mechanics doesn’t buy us any room for this kind of free will either.  Since neither option can produce any form of self-caused or causa sui free will (sometimes referred to as libertarian free will), the intuitive concept of moral desert that relies on such free will is rendered impossible, if not altogether meaningless.  Therefore moral desert can only exist as a coherent concept if it no longer contains any assumption that the moral agent had the ability to have chosen to do otherwise (again, ignoring randomness).  So what does this realization imply for our preconceptions of justified punishment, or even justice itself?

At the very least, the concept of moral desert involved in these other concepts needs to be reformulated or restricted, given the impossibility and thus the non-existence of libertarian free will.  So if we are to say that people “deserve” anything at all, morally speaking (such as a particular punishment), it can only be justified, let alone meaningful, in some other sense, such as a consequentialist goal that implementing the “desert” (in this case, the punishment) effectively accomplishes.  Punishing the wrongdoer can no longer be a means of their getting their due, so to speak, but rather needs to be justified by some other purpose, such as rehabilitation, future crime deterrence, and/or restitution for the victim (to compensate for physical damages, property loss, etc.).  With respect to this last factor, restitution, there is plenty of wiggle room for some to argue for punishment on the grounds of it simply making the victim feel better (a form of psychological restitution, I suppose we could call it).  People may try to justify some level of punishment on this basis, but vengeance should be avoided at all costs, and one needs to carefully consider what justifications are sufficient (if any) for punishing another simply to make the victim feel better.

Regarding psychological restitution, it’s useful to bring up the aforementioned distinction between retribution and vengeance, and to appreciate that vengeance easily results in cases where no disinterested party performs the punishment or decides its severity; instead, the victim (or another interested party) is involved in these decisions and processes.  Given that we lack libertarian free will, we can also see why vengeance is not rationally justifiable, and therefore why it is important that we take this into account, not only in society’s methods of criminal behavioral correction, but also in how we behave toward others that we think have committed some wrongdoing.

Deterrence & Fairness of Punishment

As for criminal deterrence, I was thinking about this concept the other day and came upon a possible conundrum concerning its justification (certain forms of deterrence, anyway).  Suppose a particular punishment is agreed upon within some legal system on the grounds that it will be sufficient to rehabilitate the criminal (and compensate the victim sufficiently), and an additional amount of punishment is tacked on merely to serve as a more effective deterrent.  It seems that this addition would lack justification with respect to treating the criminal fairly.

To illustrate this, consider the following: if the criminal commits the crime, they are in one of two possible epistemic states; either they knew about the punishment that would follow from committing the crime beforehand, or they didn’t.  If they didn’t know, then the deterrence addition of the punishment wouldn’t have had the opportunity to perform its intended function on the potential criminal, in which case the criminal would be given a harsher sentence than is necessary to rehabilitate them and compensate the victim, which should be the sole purposes of punishing them in the first place (to “right” a “wrong” and to minimize behavioral recurrences).  How could this be justified in terms of what is a fair and just treatment of the criminal?

And then, on the other hand, if they did know the degree of punishment that would follow committing such a crime, but committed the crime anyway, then the deterrence addition of the punishment failed to perform its intended function even though it had the opportunity to do so.  This would mean that the criminal is once again given a punishment that is harsher than what is needed to rehabilitate them (and to compensate the victim).

Now one could argue in the latter case that there are other types of justification to ground the harsher deterrence addition of the punishment.  For example, one could argue that the criminal knew beforehand what the consequences would be, so they can’t plead ignorance as in the first example.  But even in the first example, it was the fact that the deterrence addition was never able to perform its function that turned out to be most relevant, even if this directly resulted from the criminal lacking some amount of knowledge.  Likewise, in the second case, even with the knowledge at their disposal, the knowledge was useless in actualizing a functional deterrent.  Thus, in both cases the deterrent failed to perform its intended function, and once we acknowledge that, we can see that the only purposes of punishment that remain are rehabilitation and compensation for the victim.  One could still try to argue that the criminal had a chance to be deterred but freely chose to commit the crime anyway, and so is in some way more deserving of the additional punishment.  But then again, we need to understand that the criminal doesn’t have libertarian free will, so it’s not as if they could have done otherwise given those same conditions, barring any random fluctuations.  That doesn’t mean we don’t hold them responsible for their actions (they are still being justifiably punished for their crime), but the degree of punishment needs to be adjusted given our knowledge that they lack libertarian free will.

Now one could further object and say that the deterrence addition of the punishment isn’t intended solely for the criminal under consideration, but also for other possible future criminals who may be successfully deterred from the crime by such an addition (even if this criminal was not).  Regardless of this pragmatic justification, that argument still doesn’t justify punishing the criminal in such a way, if we are to treat the criminal fairly based on their actions alone.  If we bring other possible future criminals into the justification, then the criminal is being punished not only for their wrongdoing, but in excess for hypothetical reasons concerning other hypothetical offenders, which is not at all fair.  So we can grant that some may justify these practices on pragmatic consequentialist grounds, but they aren’t consistent with a Rawlsian conception of justice as fairness, which means they also aren’t consistent with many anti-consequentialist views (such as Kantian deontology) that promote strong conceptions of justice and moral desert in their ethical frameworks.

Conclusion

In summary, I wanted to reiterate that even if our intuitive conceptions of moral desert and justice sometimes align with our rational moral goals, they often lack rational justification and thus often serve to inhibit the implementation of any kind of rational moral theory.  They often produce behaviors that are vengeful, malicious, sadistic, and most often counter-productive to our actual moral goals.  We need to incorporate the fact that libertarian free will does not (and logically cannot) exist into our moral framework, so that we can better strive to treat others fairly, even while still holding people responsible, in some sense, for their actions.

We can still hold people responsible for their actions (and ought to) by replacing the concept of libertarian free will with a conception of free will that is consistent with the laws of physics, psychology, and neuroscience: proposing, for example, that a person’s degree of “free will” with respect to some action is inversely proportional to the degree of conditioning needed to modify that behavior.  That is to say, the level of free will we have with respect to some kind of behavior is related to our ability to be programmed and reprogrammed such that the behavior can (at least in principle) be changed.

Our punishment-reward systems, then (whether in legal, social, or familial domains), should treat others as responsible agents only insofar as is needed to protect the members of that society (or group) from harm and to induce behaviors that are conducive to our physical and psychological well-being, which is the very purpose of having any reasonable moral theory (that is sufficiently motivating to follow) in the first place.  Anything that goes above and beyond what is needed to accomplish this is excessive and therefore not morally justified.  Following this logic, we should see that many types of punishment, including, for example, the death penalty, are entirely unjustified in terms of our moral goals and the strategies of punishment we should implement to accomplish them.  As the saying goes, an eye for an eye makes the whole world blind; barbaric practices such as inflicting pain, suffering, or death simply to satisfy some intuitions need to be abolished and replaced with an enlightened system that relies on rational justification rather than intuition.  Only then can we put the primitive, inhumane moral systems of the past to rest once and for all.

We need to work with our psychology (not only the common trends among most human beings, but also our individual idiosyncrasies), and thus work under the premise that we have varying degrees of autonomy and behavioral plasticity.  Only then can we maximize our chances and optimize our efforts in attaining fulfilling lives for as many people as possible living in a society.  It is our intuitions (products of evolution and culture) that we must be careful of, as they can lead us astray (and often have, throughout history) to commit various moral atrocities.  All we can do is try to overcome these moral handicaps as best we can through reason and rationality, but we have to acknowledge that these problems exist before we can face them head on and subsequently engineer the right kinds of societal changes to successfully reach our moral goals.

The WikiLeaks Conundrum

I’ve been thinking a lot about WikiLeaks over the last year, especially given the relevant consequences that have ensued with respect to the 2016 presidential election.  In particular, I’ve been thinking about the trade-offs that underlie any type of platform that centers around publishing secret or classified information, news leaks, and the like.  I’m torn over the general concept in terms of whether these kinds of platforms provide a net good for society and so I decided to write a blog post about it to outline my concerns through a brief analysis.

Make no mistake: I appreciate the fact that there are people in the world who work hard, often taking huge risks to their own safety, in order to deliver any number of secrets to the general public, whether governmental, political, or corporate.  And this is by no means exclusive to WikiLeaks; it also applies to similar organizations and even individual whistle-blowers like Edward Snowden.  In many cases, the information that is leaked to the public is vitally important in informing us about some magnate’s personal corruption, various forms of systemic corruption, or even outright violations of our constitutional rights (such as the NSA violating our right to privacy as outlined in the Fourth Amendment).

While the public tends to highly value the increased transparency that these kinds of leaks offer, they also open us up to a number of vulnerabilities.  One prominent example that illustrates some of these vulnerabilities is the influence on the 2016 presidential election resulting from the Clinton email leaks and the leaks pertaining to the DNC.  One might ask how exactly those leaks could have been a bad thing for the public.  After all, they just increased transparency and gave the public information that most of us felt we had a right to know.  Unfortunately, it’s much more complicated than that, as it can be difficult to know where to draw the line in terms of what should or should not be public knowledge.

To illustrate this point, imagine that you are a foreign or domestic entity that is highly capable of hacking.  Now imagine that you stand to gain an immense amount of land, money, or power if a particular candidate in a foreign or domestic election is elected, because you know about their current reach of power and their behavioral tendencies, their public or private ties to other magnates, and the kinds of policies they are likely to enact based on their public pronouncements in the media and their advertised campaign platform.  If you have the ability to hack into private information from every pertinent candidate and/or political party involved in that election, then you likely have the ability not only to learn secrets about the candidate whose victory would benefit you (including their perspective of you as a foreign or domestic entity, and/or damning things about them that you could later use as leverage to bribe them once they’re elected), but also to learn damning things that could cripple the opposing candidate’s chances of being elected.

This point illustrates the following conundrum: while WikiLeaks can deliver important information to the public, it can also be used as a platform for malicious entities to influence our elections, to jeopardize our national or international security, or to cause any number of problems through “selective” sharing.  That is to say, they may have plenty of information that would be damning to both opposing political parties, but choose to deliver only half the story because of an underlying agenda to influence the election’s outcome.  This creates an obvious problem, not least because the public doesn’t consider the amount of hacked or leaked information that they didn’t get.  Instead, they think they’ve simply become better informed about a political candidate or some policy issue, when in fact their judgment has been compromised, because they’ve received a hyper-biased leak, one given to them intentionally to mislead them even though its contents may in fact be true.  When people aren’t able to put new information in the proper context or perspective, that information can actually make them less informed.  That is to say, it can become an epistemological liability, because it unknowingly distorts the facts, leading people to behave in ways they otherwise would not have if they’d only had a few more pertinent details.

So now we have to ask ourselves, what can we do about this?  Should we just scrap WikiLeaks?  I don’t think that’s necessary, nor do I think it’s feasible to do even if we wanted to since it would likely just be replaced by any number of other entities that would accomplish the same ends (or it would become delocalized and go back to a bunch of disconnected sources).  Should we assume all leaked information has been leaked to serve some malicious agenda?

Well, a good dose of healthy skepticism could be part of the solution.  We don’t want to be irrationally skeptical of any and all leaks, but it would make sense to apply more scrutiny when it’s apparent that a leak could serve a malicious purpose.  This means we need to be deeply concerned about this unless or until hacking becomes so common that the number of leaks reaches a threshold where it’s no longer pragmatically possible to share them selectively in service of these kinds of half-truth-driven political agendas.  Until that point is reached, if it’s ever reached, given the arms race between encryption and hacking, we will have to question every seemingly important leak and work hard to make the public at large understand these concerns and take them seriously.  It’s too easy for the majority to be distracted by the proverbial carrot dangling in front of them, failing to realize that it may be some form of politically motivated bait.  In the meantime, we need to open up the conversation surrounding this issue and look into possible solutions to help mitigate our concerns.  Perhaps we’ll start seeing organizations that can better vet the sources of these leaks, or that can better analyze their immediate effects on the global economy, elections, etc., before deciding whether or not the information should be released to the public.  This won’t be an easy task.

This brings me to my last point, which is that I don’t think people have a fundamental right to know every piece of information that’s out there.  If someone found a way to make a nuclear bomb using household ingredients, should that be public information?  Don’t people understand that many pieces of information are kept private or classified because that’s the only way some organizations can function, including organizations that strive to maintain or increase national and international security?  Do people want all information to be public even if it comes at the expense of creating humanitarian crises, or the further consolidation of power by select plutocrats?  There’s often debate over the trade-offs involved in giving up our personal privacy to increase our safety.  Now the time has come to ask whether giving up some forms of privacy or secrecy on larger scales (whether we like it or not) is actually detracting from our safety or putting our democracy in jeopardy.

Sustainability, Happiness, and a Science of Morality: Part II

In the first part of this post, I briefly went over some of the larger problems that our global society is currently facing, including the problem of overpopulation and the overall lack of environmental and economic sustainability.  I also mentioned some of the systemic and ideological (including religious and political) barriers that will need to be overcome before we can make any considerable progress toward a sustainable future.

Although it may seem hopeless at times, I believe that we human beings – despite our cognitive biases and vulnerability to irrational and dogmatic behaviors – have an innate moral core in common that is driven by the incentive to increase our level of overall satisfaction and fulfillment in life. When people feel like they are living more fulfilling lives, they want to continue if not amplify the behavior that’s leading to that satisfaction. If a person is shown ways that lead to greater satisfaction and they are able to experience even a slight though noticeable improvement as a result of those prescriptions, I believe that even irrational and dogmatic people do begin to explore outside of their ideological box.

More importantly, however, if everyone is shown that their level of satisfaction and fulfillment in life is ultimately a result of their doing what they feel they ought to do above all else (which is morality in a nutshell), then they can begin to recognize the importance and efficacy of basing those oughts on well-informed facts about the world. In other words, people can begin to universally derive every moral ought from a well-informed is, thus formulating their morality based on facts and empirical data and grounding it in reason, as opposed to basing it on dogmatic and other unreliable beliefs in the supernatural. It’s easy for people to disagree on morals that are based on dogma and the supernatural, because those supernatural beliefs and sources of dogma vary so much from one culture and religion to another. But morals become common, if not universal (in at least some cases), when they are based on facts about the world, including the objective physical and psychological consequences not only for the person performing the moral action, but also for anyone on the receiving end of it.

Moral Imperatives & Happiness

Science has slowly but surely been uncovering (or at least better approximating) what kinds of behaviors lead to the greatest levels of happiness and overall satisfaction in the collective lives of everyone in society. Since all morals arguably reduce to a special type of hypothetical imperative (i.e. if your fundamental goal is X, then you ought to do Y above all else), and since all goals ultimately reduce to the fundamental goal of increasing one’s life satisfaction and fulfillment, there exist objective moral facts which, if known, would inform a person of which behaviors they ought to perform above all else in order to increase their happiness and fulfillment in life. Science may never be able to determine exactly what these objective moral facts are, but it is certainly logical to assume that they exist: namely, some ideal set of behaviors for people (at least those that are sane and non-psychopathic) which, if we only knew what those behaviors were, would necessarily lead to maximized satisfaction within every person’s life (a concept that has been proposed by many philosophers, and one which has been well defended in Richard Carrier’s Goal Theory of Ethics).

What science can do, however – and arguably what it has already been doing – is continue to better approximate these objective moral facts as we accumulate more knowledge and evidence in psychology, neuroscience, sociology, and even other fields such as economics. What science appears to have found thus far is (among other things) a confirmation of what Aristotle asserted over two thousand years ago: the importance of cultivating what have often been called moral virtues (such as compassion, honesty, and reasonableness) in order to achieve what the Greeks called eudaimonia, an ultimate happiness with one’s life. This makes perfect sense, because cultivating these virtues leads to a person feeling good while exercising behaviors that also benefit everyone else, so benefiting others rarely if ever feels like a chore (an unfortunate side-effect of exclusively employing the moral-duty mentality of Kant’s famous deontological ethical framework). Combine this virtue cultivation with the wealth of knowledge about the consequences of our actions that the sciences have been accumulating – thus integrating John Stuart Mill’s utilitarian, or teleological/consequentialist, ethical framework – and we have an ethical framework that should work very effectively in leading us toward a future where more and more people are happy, fulfilled, and doing what is best for sustaining that happiness in one another, including sustaining the environment that their happiness depends on.

A Science of Morality

To give a fairly basic but illustrative example of where science is leading us in terms of morality, consider people who try to achieve ever-increasing levels of wealth at the expense of others. They do so because they believe that wealth will bring them the most satisfaction in life, and thus that maximizing wealth will maximize happiness. However, this belief is incorrect for a number of reasons. For one, studies in psychology have shown diminishing returns of happiness as income and wealth increase – returns that diminish sharply once a person exceeds an income of about $70K per year (in U.S. dollars / purchasing power). So the idea that increasing one’s income or wealth will indefinitely increase their happiness isn’t supported by the evidence; at best, wealth has a limited effect on happiness that only works up to a point.
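The diminishing-returns pattern described above can be made concrete with a toy model. This is a purely illustrative sketch – the logarithmic form and the baseline figure are my assumptions for illustration, not numbers taken from the psychological studies themselves – showing how satisfaction that grows with the logarithm of income gains less and less from each additional dollar:

```python
import math

def life_satisfaction(income, baseline=20_000):
    """Toy diminishing-returns model of the income-happiness relationship.

    Satisfaction is assumed to grow with the logarithm of income above some
    baseline, so each additional dollar contributes less than the last.
    The log form and the baseline value are illustrative assumptions only.
    """
    return math.log(max(income, baseline) / baseline)

# The marginal gain from an extra $10K shrinks as income rises:
gain_at_30k = life_satisfaction(40_000) - life_satisfaction(30_000)
gain_at_100k = life_satisfaction(110_000) - life_satisfaction(100_000)
```

Under this toy model, a $10K raise at a $30K income buys roughly three times the satisfaction of the same raise at a $100K income – the qualitative pattern the studies describe, though the real curve flattens even faster past the income threshold noted above.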

Beyond this, psychology has also shown that there are much more effective ways of increasing happiness, such as cultivating the aforementioned virtues (e.g. compassion, integrity, honesty, reasonableness) and exercising them while helping others. Doing so yields internal psychological benefits (which neuroscience can and has quantified to some degree) as well as external sociological benefits, such as the formation of meaningful relationships, which in turn provide even more happiness over time. If we also take into account the time and effort often required to earn more income and wealth (with the intention of producing happiness), it can be shown that the time and effort would have been better spent on forming meaningful relationships and cultivating various virtues. Furthermore, if those accumulating wealth could see firsthand the negative side-effects their accumulation has on many others (such as increased poverty), then doing so would no longer make them as happy. So it can indeed be shown that their belief about what maximizes their satisfaction is false, that there are in fact better ways to increase their happiness and life satisfaction than they ever thought possible, and – perhaps most importantly – that the ways to make them happiest also serve to make everyone else happier too.

A Clear Path to Maximizing (Sustainable) Happiness

Perhaps if we begin to invest more in the development and propagation of a science of morality, we’ll start to see many societal problems dissolve away, simply because more and more people will begin to realize why we all think that certain actions are moral actions (i.e. that we ought to do them above all else): we feel that doing those actions brings us the happiest and most fulfilling lives. If people are then shown much more effective ways to increase their happiness and fulfillment, including by maximizing their ability to help others achieve the same ends, then they’re extremely likely to follow those prescribed ways of living, for it could be shown that not doing so would prevent them from gaining the very maximal happiness and fulfillment they are ultimately striving for. The only reason people wouldn’t heed such advice is that they are being irrational, which means we need to simultaneously work on educating everyone about our cognitive biases, how to spot logical fallacies, how to avoid making them, and so on. Solving society’s problems – such as overpopulation, socioeconomic inequality, and unsustainability – then boils down to every individual, as well as the collective whole, accumulating as many facts as possible about what can maximize our life satisfaction (both now and in the future), and then heeding those facts to determine what we ought to do above all else to achieve those ends. This is ultimately an empirical question, and a science of morality can help us discover what these facts are.