The Open Mind

Cogito Ergo Sum


Virtual Reality & Its Moral Implications


There’s a lot to be said about virtual reality (VR) in terms of our current technological capabilities, our likely prospects for future advancements, and the vast amount of utility that we gain from it.  But, as it is with all other innovations, with great power comes great responsibility.

While there are several types of VR interfaces on the market, used for video gaming or various forms of life simulation, they share at least one commonality: the explicit goal of convincing the user that what they are experiencing is, in at least some sense, real.  This raises a number of ethical concerns.  While we can’t deny that even reading books and watching movies influences our behavior to some degree, VR is bound to influence our behavior far more readily because of the sheer richness of the qualia and the brain’s inability to register significant differences between a virtual reality and our natural one.  Since the behavioral conditioning schema that our brain employs evolved to be well adapted to our natural reality, any virtual variety that more closely approximates it is bound to affect our behavior more strongly.  So we need to be concerned with how VR can affect our beliefs, our biases, our moral inclinations, and our other behaviors.

One concern with VR is the desensitization to, or normalization of, violence and other undesirable or immoral behaviors.  Many video games have been criticized over the years for this very reason, with the claim that they promote similar behaviors in the users of those games (especially younger users with more impressionable minds).  These claims have been substantially validated by the American Psychological Association and the American Academy of Pediatrics, both of which have taken firm stances against children and teens playing violent video games, citing accumulated research and meta-analyses showing a strong link between violent video gaming and increased aggression, anti-social behavior, and sharp decreases in moral engagement and empathy.

Thus, the increasingly realistic nature of VR, along with the steady increase in the capacities one has at their disposal within such a virtual space, is bound to exacerbate these types of problems.  If people are able to simulate rape or pedophilia, among other morally reprehensible actions and social taboos, will they become more susceptible to actually engaging in these behaviors once they leave the virtual space and re-enter the real world?  Even if their susceptibility to perform those behaviors doesn’t increase, what does such a virtual escapade do to that person’s moral character?  Are they more likely to condone those behaviors (even if they don’t participate in them directly), or to condone other behaviors that have some moral relevance or cognitive overlap with them?

On the flip side, what if VR could be used as a therapeutic tool to help cure pedophilia or other behavioral problems?  What if simulating rape, pedophilia, or similar acts reduced a person’s chances of performing those acts in the real world?  Hardly anyone would argue that a virtual rape or molestation is anywhere near as abhorrent or consequential as a real instance of such a crime, committed against a real human being.  While this may only apply to a small number of people, it is at least plausible that such a therapeutic utility would make the world a better place if it prevented an actual rape or other crime from taking place.  If certain people have hard-wired impulses that would normally ruin their lives or the lives of others if left unchecked, then it would be prudent, if not morally obligatory, to do what we can to prevent such harms from taking place.  So even though this technology could lead the otherwise healthy user to engage in bad behaviors, it could also be used as an outlet of expression for those already afflicted with such impulses.  Just as clinicians have used VR to help treat anxiety disorders, phobias, PTSD, and other pathologies, by exposing people to various stimuli that help them overcome their ills, so too may VR provide a treatment for other kinds of mental illness and aggressive predispositions, such as those related to murder, sexual assault, and so on.

Whether VR is used as an outlet for certain behaviors to prevent them from actually happening in the real world, or as a means of curing a person of those immoral inclinations (where the long-term goal is to eventually no longer need any VR treatment at all), there are a few paths that show promise for decreasing crime.  But, beyond therapeutic uses, we need to be careful about how these technologies are used generally and how that usage will increasingly affect our moral inclinations.

If society chose to implement some kind of prohibition to limit the types of things people could do in these virtual spaces, that may be of some use, but beyond the fact that this kind of prohibition would likely be difficult to enforce, it would also be a form of selective prohibition that may not be justified to implement.  If one chose to prohibit simulated rape and pedophilia (for example), but not prohibit murder or other forms of assault and violence, then what would justify such a selective prohibition?  We can’t simply rely on an intuition that the former simulated behaviors are somehow more repugnant than the latter (and besides, many would say that murder is just as bad if not worse anyway).  It seems that instead we need to better assess the consequences of each type of simulated behavior on our behavioral conditioning to see if at least some simulated activities should be prohibited while allowing others to persist unregulated.  On the other hand, if prohibiting this kind of activity is not practical or if it can only be implemented by infringing on certain liberties that we have good reasons to protect, then we need to think about some counter-strategies to either better inform people about these kinds of dangers and/or to make other VR products that help to encourage the right kinds of behaviors.

I can’t think of a more valuable use for VR than to help us cultivate moral virtues and other behaviors that are conducive to our well-being and to living a more fulfilled life.  The possibilities range from reducing our prejudices and biases through exposure to various simulated “out-groups” (for example), to modifying our moral character in more profound ways through artificial realities that encourage the user to help others in need and to develop habits and inclinations that are morally praiseworthy.  We can even use this technology (and already have, to some degree) to work out various moral dilemmas and our psychological responses to them without anybody actually dying or getting physically hurt.  Overall, VR certainly holds a lot of promise, but it also poses a lot of psychological danger, making it incumbent upon us to talk more about these technologies as they continue to develop.

The Imperative of Democracy For a Just Society


How important is democracy for realizing a society that is just?  It seems to me that democracy is an important if not vital component of any just society, because any principles of justice that a society seeks to abide by should be established through means that are themselves fair and just.  Thus those principles (or the laws that instantiate them) should be a legislative product resulting from the deliberation and input of every citizen that is to be bound and protected by such standards.  In this post, I’m going to argue for this position by illustrating how reasonable principles of justice are more likely to be realized (if not only realizable) through a democratic form of government than through any other system, and by showing how a democratic system of legislation is the most effective way of protecting and improving principles of justice once they are established in a society.  It’s important to note that I am not arguing that all forms of democracy are necessarily capable of achieving a just society, but rather that some form of democracy is necessary to do so.  One major objection to my overall contention is the argument that democracies can lead to a form of majoritarianism that may oppress minorities and restrict their basic rights, thus precluding even any semblance of justice.  This objection is a serious one that ought to be considered, so I’ll conclude my argument by responding to it accordingly.

Reasonable principles or descriptions of justice, as proposed by philosophers and other important political figures such as Aristotle, Kant, J.S. Mill, and Rawls, generally encompass a number of different concepts: liberty, freedom, fairness, equality, desert, mutual respect and consideration, and moral rightness, among others [1][2][3][4].  I tend to agree with Rawls’ views in particular, where principles of justice revolve around some set of equal rights that is maximally extensive, including equal access and opportunity to hold various political offices and positions.  What’s most important to note about Rawls’ views is the concept of fairness and how the principles of justice can be derived from the original position, i.e., from behind a veil of ignorance [4].  If we apply this reasoning to determine what is in fact fair from the perspective of a collective of citizens that hold different sets of values, it stands to reason that the best one can do is to try to find some kind of overlapping moral consensus that is informed by that very same set of citizens.  It seems that the only political system fit to accomplish this task is some form of democracy, because only in democracies can the citizens take direct action to influence legislation that is compatible with that overlapping consensus [5].  No other political system allows its citizens to have this kind of power.  Furthermore, since all people can have an equal say only in some kind of democratic society, it’s hard to imagine how any other system used to establish principles of justice could achieve a higher level of fairness.

Maintaining and protecting the principles of justice that are implemented by a society is arguably just as important as establishing them in the first place.  Moreover, if the current established principles of justice (or laws) in a society are at any point perceived as being unjust in light of new information or a change in the overlapping moral consensus of the people that comprise it, there needs to be some mechanism to modify them accordingly.  I would argue that democracy is the most effective way to achieve both the protection of, and the capability of modifying or improving, any implemented principles of justice or laws that instantiate those principles.

To illustrate this point, we can simply imagine two societies, one democratic and one non-democratic, and for the sake of argument we can assume that both have established principles of justice.  Now let’s consider that some new law has been proposed in both societies that, if enacted and implemented, would result in some gross form of injustice.  I think it’s evident that the democratic society has the best chance of maintaining (or restoring) its established principles of justice, because a majority of citizens have the greatest chance of influencing future legislation and/or any future political representation in order to block or reverse the legislation that would have led to any injustice.  If the fate of this decision were merely left in the hands of some subset of people in power, even if it could result in a just outcome, it is less likely to, for the simple fact that the interests of a small group in power are statistically less likely to result in a mutually desirable outcome for everyone, all else being equal.  Similarly, if we were to imagine that the overlapping moral consensus changed in both societies, once again I would argue that democracy would prevail as the best system for modifying or improving any laws in place so as to better conform to any modified principles of justice.  This would be the case because the most thorough way to determine which laws or principles of justice should replace the old ones would be to survey all members of that society through a process of moral deliberation [6], a task best fit for a democracy.

One strong objection to my argument (in short, that democracy is an important if not necessary component of a just society) is that democracies can lead to a majoritarian populace that may choose to strip minorities of their basic human rights and liberties, and thus enact some form of injustice.  One could take this objection even further and argue that a majoritarian populace could (perhaps unknowingly) enact legislation that strips every citizen of some or all of their basic rights and liberties [7].  Now this is certainly a reasonable objection and one that is worth careful consideration.  However, this argument can only succeed if it can be shown that some non-democratic form of government guarantees (or at least does a better job of) establishing, protecting, and/or improving the principles of justice (or the laws that instantiate them) in a society.  I haven’t yet seen anyone satisfy the burden of proof required to support such a claim (even if it is a reasonable objection).  In addition, this objection must hold up against the most robust form of democracy at our disposal in order to demonstrate a fortiori that all other forms of democracy are likewise insufficient and that they are all demonstrably worse than at least one non-democratic alternative.

Now I will grant that this objection is particularly applicable to a pure democracy, where there are no protections whatsoever against majority rule oppressing minorities’ rights.  However, most forms of democracy that exist today are some kind of democratic republic or constitutional democracy, whereby a constitution is put into place to protect some set of inalienable rights that majority rule can’t overturn [8].  While this solution isn’t foolproof, it is nevertheless an effective safeguard to limit majoritarian tyranny while retaining the aforementioned maximally-just benefits of democracy.  Furthermore, one could employ a deliberative democracy, which stresses the need to justify the laws enacted that would instantiate any sought-after principles of justice.  A deliberative democracy accomplishes this justification, and helps to resolve moral disagreements (to the best of our ability), through a process of open and inclusive moral deliberation, encouraging citizens to form a more well-rounded perspective on public policy [6].  What better way could there be to achieve a just society than to have equal rights to vote on legislation combined with the societal expectation of justifying any proposed laws through open critical discourse and moral deliberation with one another?  What better way could there be to find the overlapping moral consensus that Rawls pointed to, as idealized in his original position?

As such, I believe the majoritarian objection fails for two reasons.  First, there are democratic systems with safeguards in place (such as a constitution) to help prevent these kinds of majoritarian problems from occurring, thus limiting tyranny at least as well as any non-democratic government could.  Second, even where those safeguards are absent or limited in efficacy, deliberative democratic institutions can further reduce the risk of oppressive majority tyranny by requiring citizens to justify their positions and votes to the other members of society through moral deliberation.  Combining these two institutions — a constitution and moral deliberation — into one democratic framework provides a robust rebuttal to the objection, and also provides a good template of democracy that further supports my overall argument.

In conclusion, I’ve argued that democracy is a vital component of just societies because it offers a means of deriving a society’s principles of justice, through the laws that instantiate them, in the most fair and equitable way known, and because of its capacity to adapt to societal changes in order to maintain justice, whether in light of a shift in overlapping consensus or as a counter-response to unjust legislation already enacted.  In addition, it can in principle provide a way of maximizing justice through institutions that encourage (if not mandate) the use of moral deliberation to justify the votes of any and all citizens.  Among other benefits, this latter principle provides a way of helping to distinguish political claims that are self-interested from those that are actually in the public’s best interest.  In doing so, it offers a platform of transparency and dialectic that helps to prevent injustices from coming to fruition.

References

  1. Aristotle (trans. Terence Irwin, 1999). Nicomachean Ethics, Second Edition. Indianapolis: Hackett, pp. 67-74, 76; 1129a-1132b, 1134a.
  2. Kant, Immanuel (trans. John Ladd, 1999). Metaphysical Elements of Justice, Second Edition. Indianapolis: Hackett, pp. 29, 30-31, 37, 38.
  3. Mill, John Stuart (ed. Mary Warnock, 1962). Utilitarianism and Other Writings. Cleveland: World Publishing Company, pp. 296-301, 305, 309, 320-321.
  4. Rawls, John (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
  5. Christiano, Tom (2006, July 27). “Democracy.” Stanford Encyclopedia of Philosophy. Retrieved March 25, 2017, from https://plato.stanford.edu/entries/democracy/
  6. Gutmann, Amy & Thompson, Dennis (2014). “Moral Disagreement in a Democracy.” In Arguing About Political Philosophy. New York: Routledge, pp. 596-601.
  7. Mill, John Stuart (1869). On Liberty. London: Longman, Roberts & Green.
  8. “Constitutional Democracy” (n.d.). Retrieved March 25, 2017, from http://www.civiced.org/resources/publications/resource-materials/390-constitutional-democracy

On Moral Desert: Intuition vs Rationality


So what exactly is moral desert?  In a nutshell, it is what someone deserves as a result of their actions, as defined within the framework of some kind of moral theory.  When people talk about moral desert, it’s often couched in terms of punishment and reward, and our intuitions (whether innate or culturally inherited) often produce strong feelings of knowing exactly what kinds of consequences people deserve in response to their actions.  But if we think about our ultimate goals in implementing any reasonable moral theory, we should quickly recognize that our ultimate moral goal is simply to have ourselves and everybody else abide by that moral theory, in order to guide our behavior and the behavior of those around us in ways that are conducive to our well-being.  Ultimately, we want ourselves and others to act in ways that maximize our personal satisfaction, not in a hedonistic sense, but rather in the sense of maximizing our contentment and living a fulfilled life.

If we think about scenarios that seem to merit punishment or reward, it is useful to keep our ultimate moral goal in mind.  I mention this because our feelings of resentment toward those that have wronged us can often lead us to advocate for an excessive amount of punishment for the wrongdoer.  Among other factors, vengeance and retribution often become incorporated into our intuitive sense of justice.  Many have argued that retribution itself (justifying “proportionate” punishment by appealing to concepts like moral desert and justice) isn’t a bad thing, even if vengeance — which lacks inherent limits on punishment and involves the personal emotions of the victim, among other distinguishing factors — is in fact a bad thing.  In assessing such a claim, I think it’s imperative that we analyze our reasons for punishing a wrongdoer in the first place and then analyze the concept of moral desert more closely.

Free Will & Its Implications for Moral Desert

Another relevant topic I’ve written about in several previous posts is the concept of free will.  This is an extremely important concept to parse out here, because moral desert is most often intimately tied to the positive claim that we have free will.  That is to say, most concepts of moral desert, whereby it is believed that people deserve punishment and reward for actions that warrant it, fundamentally rely on the premise that people could have chosen to do otherwise, but instead chose the path they did out of free choice.  While philosophers have proposed various versions of free will, they all tend to revolve around some concept of autonomous agency.  The folk psychological conception of free will that most people subscribe to involves some form of deliberation that is self-caused in some way, thus ruling out randomness or indeterminism as the “cause” (since randomness can’t be authored by the autonomous agent) and also ruling out non-randomness or determinism (since an unbroken chain of antecedent causes can’t be authored by the autonomous agent either).

To avoid a long digression, I’m not going to expound upon all the details of free will and the various versions that others have proposed; I’ll only mention that the version most relevant to moral desert is generally some form of the ability to have chosen to do otherwise (ignoring randomness).  Notice that because indeterminism and determinism form a logical dichotomy, these are the only two options that can possibly describe the ontological underpinnings of our universe (in terms of causal laws that describe how the state of the universe changes over time).  Quantum mechanics is compatible with either option, given the various interpretations that are all empirically identical with one another, but it offers no third option, so quantum mechanics doesn’t buy us any room for this kind of free will either.  Since neither option can produce any form of self-caused or causa sui free will (sometimes referred to as libertarian free will), the intuitive concept of moral desert that relies on such free will is also rendered impossible, if not altogether meaningless.  Therefore moral desert can only exist as a coherent concept if it no longer contains within it any assumption that the moral agent had the ability to have chosen to do otherwise (again, ignoring randomness).  So what does this realization imply for our preconceptions of justified punishment, or even justice itself?

At the very least, the concept of moral desert that is involved in these other concepts needs to be reformulated or restricted, given the impossibility and thus the non-existence of libertarian free will.  So if we are to say that people “deserve” anything at all, morally speaking (such as a particular punishment), it can only be justified, let alone meaningful, in some other sense, such as a consequentialist goal that the implementation of the “desert” (in this case, the punishment) effectively accomplishes.  Punishing the wrongdoer can no longer be a means of their getting their due, so to speak, but rather needs to be justified by some other purpose such as rehabilitation, future crime deterrence, and/or restitution for the victim (to compensate for physical damages, property loss, etc.).  With respect to this latter factor, restitution, there is plenty of wiggle room for some to argue for punishment on the grounds of it simply making the victim feel better (which I suppose we could call a form of psychological restitution).  People may try to justify some level of punishment on those grounds, but vengeance should be avoided at all costs, and one needs to carefully consider what justifications are sufficient (if any) for punishing another with the intention of simply making the victim feel better.

Regarding psychological restitution, it’s useful to bring up the aforementioned concepts of retribution and vengeance, and appreciate the fact that vengeance can easily result in cases where no disinterested party performs the punishment or decides its severity, and instead the victim (or another interested party) is involved with these decisions and processes.  Given the fact that we lack libertarian free will, we can also see how vengeance is not rationally justifiable and therefore why it is important that we take this into account not only in terms of society’s methods of criminal behavioral correction but also in terms of how we behave toward others that we think have committed some wrongdoing.

Deterrence & Fairness of Punishment

As for criminal deterrence, I was thinking about this concept the other day and came upon a possible conundrum concerning its justification (for certain forms of deterrence, anyway).  If a particular punishment is agreed upon within some legal system on the grounds that it will be sufficient to rehabilitate the criminal (and compensate the victim sufficiently), and an additional amount of punishment is tacked on merely to serve as a more effective deterrent, then it seems that the addition would lack justification with respect to treating the criminal fairly.

To illustrate this, consider the following: if the criminal commits the crime, they are in one of two possible epistemic states — either they knew about the punishment that would follow from committing the crime beforehand, or they didn’t.  If they didn’t know this, then the deterrence addition of the punishment wouldn’t have had the opportunity to perform its intended function on the potential criminal, in which case the criminal would be given a harsher sentence than is necessary to rehabilitate them (and to compensate the victim) which should be the sole purpose of punishing them in the first place (to “right” a “wrong” and to minimize behavioral recurrences).  How could this be justified in terms of what is a fair and just treatment of the criminal?

And then, on the other hand, if they did know the degree of punishment that would follow committing such a crime, but they committed the crime anyway, then the deterrence addition of the punishment failed to perform its intended function even if it had the opportunity to do so.  This would mean that the criminal is once again, given a punishment that is harsher than what is needed to rehabilitate them (and also to compensate the victim).

Now one could argue in the latter case that there are other types of justification to ground the harsher deterrence addition of the punishment.  For example, one could argue that the criminal knew beforehand what the consequences would be, so they can’t plead ignorance as in the first example.  But even in the first example, it was the fact that the deterrence addition was never able to perform its function that turned out to be most relevant even if this directly resulted from the criminal lacking some amount of knowledge.  Likewise, in the second case, even with the knowledge at their disposal, the knowledge was useless in actualizing a functional deterrent.  Thus, in both cases the deterrent failed to perform its intended function, and once we acknowledge that, then we can see that the only purposes of punishment that remain are rehabilitation and compensation for the victim.  One could still try and argue that the criminal had a chance to be deterred, but freely chose to commit the crime anyway so they are in some way more deserving of the additional punishment.  But then again, we need to understand that the criminal doesn’t have libertarian free will so it’s not as if they could have done otherwise given those same conditions, barring any random fluctuations.  That doesn’t mean we don’t hold them responsible for their actions — for they are still being justifiably punished for their crime — but it is the degree of punishment that needs to be adjusted given our knowledge that they lack libertarian free will.

Now one could further object and say that the deterrence addition of the punishment isn’t intended solely for the criminal under our consideration, but also for other possible future criminals that may be successfully deterred from the crime by such a deterrence addition (even if this criminal was not).  Regardless of this pragmatic justification, that argument still doesn’t justify punishing the criminal in such a way if we are to treat the criminal fairly based on their actions alone.  If we bring other possible future criminals into the justification, then the criminal is being punished not only for their wrongdoing but in excess of it, for hypothetical reasons concerning other hypothetical offenders — which is not at all fair.  So we can grant that some may justify these practices on pragmatic consequentialist grounds, but they aren’t consistent with a Rawlsian conception of justice as fairness, which means they also aren’t consistent with many anti-consequentialist views (such as those of Kantian deontologists) that promote strong conceptions of justice and moral desert in their ethical frameworks.

Conclusion

In summary, I wanted to reiterate the fact that even if our intuitive conceptions of moral desert and justice sometimes align with our rational moral goals, they often lack rational justification and thus often serve to inhibit the implementation of any kind of rational moral theory.  They often produce behaviors that are vengeful, malicious, sadistic, and most often counter-productive to our actual moral goals.  We need to incorporate the fact that libertarian free will does not (and logically can not) exist, into our moral framework, so that we can better strive to treat others fairly even if we still hold people responsible in some sense for their actions.

We can still hold people responsible for their actions (and ought to) by replacing the concept of libertarian free will with a conception of free will that is consistent with the laws of physics, psychology, and neurology, by proposing, for example, that a person’s degree of “free will” with respect to some action is inversely proportional to the degree of conditioning needed to modify that behavior.  That is to say, the level of free will that we have with respect to some kind of behavior is related to our ability to be programmed and reprogrammed such that the behavior can (at least in principle) be changed.
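
As a purely illustrative formalization of this proposal (the notation is my own shorthand, not anything established in the psychology or philosophy literature), one could write:

```latex
F(b) \propto \frac{1}{C(b)}
```

where F(b) is the degree of free will attributed to a behavior b, and C(b) is the degree of conditioning (or reconditioning effort) needed to modify that behavior.  On this sketch, a habit that yields to mild feedback scores relatively high, while a compulsion that resists even intensive therapy scores near zero.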

Our punishment-reward systems then (whether in legal, social, or familial domains), should treat others as responsible agents only insofar as to protect the members of that society (or group) from harm and also to induce behaviors that are conducive to our physical and psychological well being — which is the very purpose of our having any reasonable moral theory (that is sufficiently motivating to follow) in the first place.  Anything that goes above and beyond what is needed to accomplish this is excessive and therefore not morally justified.  Following this logic, we should see that many types of punishment including, for example, the death penalty, are entirely unjustified in terms of our moral goals and the strategies of punishment that we should implement to accomplish those goals.  As the saying goes, an eye for an eye makes the whole world blind, and thus barbaric practices such as inflicting pain or suffering (or death sentences) simply to satisfy some intuitions need to be abolished and replaced with an enlightened system that relies on rational justifications rather than intuition.  Only then can we put the primitive, inhumane moral systems of the past to rest once and for all.

We need to work with our psychology (not only the common trends among most human beings but also our individual idiosyncrasies) and thus work under the premise of our varying degrees of autonomy and behavioral plasticity.  Only then can we maximize our chances and optimize our efforts in attaining fulfilling lives for as many people as possible living in a society.  It is our intuitions (products of evolution and culture) that we must be careful of, as they can lead us astray (and often have throughout history) to commit various moral atrocities.  All we can do is try to overcome these moral handicaps as best we can through reason and rationality, but we have to acknowledge that these problems exist before we can face them head on and subsequently engineer the right kinds of societal changes to successfully reach our moral goals.

Demonization Damning Democracy


After the 2016 presidential election, I’ve had some more time to reflect on the various causes of what has been aptly dubbed Trumpism, and also to reflect on some strategies that we as a nation need to implement in order to successfully move forward.  I’ll start by saying that I suspect many people will not like this post, because most sit at the extremes of the political spectrum and will likely feel uncomfortable facing any criticism that they think lends legitimacy to their opponents’ position.  Regardless of this likelihood, I’ve decided to write this post anyway, because the criticisms it points out reflect exactly this problem — the diminished capacity of the politically divided to be charitable and intellectually honest in their treatment and representation of their opponents’ positions.

Many would be hard pressed to name another period in American history defined by as much political polarization and animosity as we’ve seen in the last year.  The era of the Civil War in the mid-19th century is perhaps the closest runner-up to this “great divide” plaguing our nation.  In the interest of moving forward, we need to find quicker and more pragmatic ways of bridging such a divide.  We’re not going to agree on most issues, but there are some things we can do a hell of a lot better.  For starters, I think that we all need to stop talking past one another and acknowledge that there were legitimate reasons to vote for Donald Trump (keep in mind that I thought Clinton was the only sane choice, which is why I knew I had to vote for her).  The majority of people on both sides of this debate have been demonizing the other rather than being intellectually honest (let alone charitable) about one another’s position.  Unfortunately the damage has already been done and Trump is now going to be our president (barring some miracle occurring between now and January 20th).

I’m in no way attempting to underplay the moral travesty that a large number of voters are responsible for, one which happened despite their being outnumbered by almost 3 million Democratic voters in the popular vote (a record popular-vote margin for a candidate who nevertheless lost in the electoral college).  I am, however, trying to open up a civil discourse between progressive liberals such as myself and those that voted for this inexperienced plutocrat for at least some legitimate reasons.  We may still disagree on the importance of those reasons when weighed against all others under consideration, and we may disagree on how effective Trump would be in actually addressing any one of them (even if they were the most important issues), but we should acknowledge those reasons nevertheless.

Economy, Immigration & Terrorism

Before looking at some of these specific reasons, I think it’s important to note the three main issues that they seem to revolve around: terrorism, immigration, and the economy.  It’s also interesting to note that all three of these issues are intimately connected with one another with respect to the impetus that turned the election on its head.  For example, many immigrants and refugees from predominantly Muslim nations are unfairly lumped into a category of would-be terrorists — largely a result of anti-Muslim sentiments that have escalated since 9/11, and that have perhaps climaxed with the formation of Muslim extremist groups such as ISIS.  And on the economic front, Mexican and other Hispanic immigrants in particular are getting flak, in part because they are largely indistinguishable from illegal immigrants, and some people believe that their jobs have been or will be taken by illegal immigrants (or by legal immigrants that many simply assume are illegal) who are willing to work for below minimum wage.

Of course the irony here is that conservatives who embrace true free-market capitalistic principles are shooting themselves in the foot by rejecting this “free market” consequence, i.e., letting the markets decide what wages are fair with no government intervention.  It’s one thing to argue against illegal immigrants breaking immigration laws (which everyone agrees is a problem, even if they disagree on its degree), but one can’t employ an economic argument against illegal or legal immigrants based on sub-par wages or a lack of jobs without also acknowledging the need for government-imposed market regulations.  These regulations include having a minimum wage that is enforced, let alone raised to provide a living wage (which is now at risk with Trump’s newly nominated Secretary of Labor, Andy Puzder, given his history of screwing his fast food workers while raking in millions of dollars per year).

It goes without saying that the anti-immigrant (even anti-legal-immigrant) mentality was only exacerbated when Trump filled his campaign with hateful “build the wall” rhetoric and called Mexican immigrants criminals and rapists, despite the fact that immigrants commit crimes, including rape, at lower rates than non-immigrants in the U.S.  None of this helped to foster any support for embracing these generally decent people crossing the border looking for a better life in America.  Most of them are looking for better opportunities, for the same reasons our ancestors immigrated to the U.S. long ago (both legally and illegally).  Having said that, it’s also true that illegal immigration is a problem that needs to be addressed, but lying about the actual impact of undocumented immigrants on the economy (by denying that they can suppress wages in some industries, or by denying the benefits these people produce in other work sectors) only detracts from our ability to solve the problem effectively and ethically.  Hate mongering certainly isn’t going to accomplish anything other than pissing off liberals and hindering bipartisan immigration reform.

As for Islam, people on the right are justifiably pissed off that most people on the left won’t even acknowledge that Islam has dangerous doctrines that have been exploited to radicalize Muslims into Jihadists and Islamists, fueling various forms of terrorism.  Saying that ISIS isn’t fundamentally Islamic is ridiculous once one sees that its adherents are in fact motivated by a literal reading of the texts (i.e., the Koran and the Hadiths), including a true belief in eternal paradise and glory for martyrs who die on the front lines or by flying a plane into a building.

As a progressive liberal, I’m disappointed when regressive liberals call anybody who points this out a racist or an Islamophobe.  It’s true that many people who make these points (generally on the political right) are also racist and Islamophobic, but many of them are not (including some liberals such as myself), and this name-calling actually pushed a number of people toward Trump who would otherwise have stayed away from a clown like him.  If only the left had done a better job being honest about these facts, they wouldn’t have scared away a number of people sitting on the fence of the election, people who ran away once they believed that Clinton was being either dishonest or delusional on this point, and who subsequently saw her (albeit erroneously) as someone less likely to handle this terrorist threat effectively.  It’s clear to me that she was the most likely to handle it effectively despite this concern, given that she was by far the most qualified and experienced candidate, including having valuable and relevant experience, as Secretary of State, during the operation that took down Osama Bin Laden.  This misperception, induced by this bit of dishonesty, gave fuel to an ignorant bigot like Trump, who was at least right on this one point, even if for all the wrong reasons, and even if he combined it with bigotry and bogus xenophobic rhetoric based on his ignorance of Islamic culture and Muslims generally.

So while the Trumpers had some legitimate concern here, they and most others on the right failed to acknowledge that Islamic doctrine isn’t the only motivating factor behind ISIS terrorism; there are a number of geopolitical factors at play, along with some number of radicalizing leaders who simply hijacked Islamic doctrine to fuel terrorism in service of those geopolitical ends.  Many Trumpers also failed to realize that most Muslims in the world are peaceful people and are not members of ISIS or any other terrorist group or organization.  Many failed to realize that Trump has absolutely no political experience, let alone specific experience pertaining to national or international security, so he is the last person who should claim to know how to handle such a complicated and multi-faceted international conflict.  Furthermore, Trump’s “I’ll bomb the shit out of them” mentality isn’t going to appease the worries of our Muslim allies, nor those of our non-Muslim allies who are seeking diplomatic resolutions at all costs.

I think one of the biggest oversights of Trumpers is their failure to realize that Trump’s threat to place all Muslims residing in the U.S. into a fascist registry, and the effects of his anti-Muslim rhetoric, are accomplishing exactly what ISIS wants.  ISIS wants all Muslims to reject Western values, including democracy, equality, and humanism, and what better way to radicalize more Muslims than having a large number of (mostly) white Americans alienating them through harassment and ostracism?  What better way could there be to lose the trust and cooperation of Muslims already residing within our borders — the very Muslims and Muslim communities that we need to work with to combat radicalization and domestic terrorism?  Trump’s hateful behavior and rudderless tactics are likely to create the ultimate Islamic Trojan horse within our own borders.  So while many on the left need to acknowledge that Islam does in fact have an ideological influence on terrorism, and is thus an influence that needs to be addressed honestly, those on the right need to appreciate the fact that we must avoid further alienating and angering Muslims such that they are more compelled to join radical groups, the very radical groups that we all (including most other Muslims) despise.

Multi-culturalism & Assimilation

Another big motivation for Trump voters was the ongoing clash between multi-culturalism and traditional American culture (perhaps better described as the well-established, highly homogeneous cultures found in communities across America).  Some people were driven to Trumpism by feeling culturally threatened by immigrants who fail to assimilate to the already well-established cultures of various communities around the country.  If they’ve lived in a community composed only of English-speaking, reality-TV-watching, hamburger-eating football fans (to give an example), and then start seeing other languages and entirely foreign cultures in schools, on the bus, at their workplace, etc., they feel that their way of life is being encroached upon.  When immigrants fail to assimilate to the predominant culture of an area (including learning the English language), and the natives in these communities are pressured to adopt bilingual infrastructure, to suspend cultural norms to make new religious exceptions, etc., people understandably get pissed off, because humans have evolved to be highly tribal and most of us fear change.

Some feel like there’s a double-standard at play where natives in a community have to adapt to accommodate the immigrants and their cultures, but without having the immigrants compromise by accommodating the native cultures and norms (or at least not to a large enough degree).  As a result, we often see pockets of foreigners that bud off to form micro-communities and groups that are very much like small versions of their home countries.  Then when there’s an influx of these immigrants in schools and certain workplaces, there is increased animosity toward them because they are that much more likely to be treated as an out-group.  This is no doubt further fueled by racism and xenophobia (with vicious cycles of increasing prejudice against immigrants and subsequent assimilation hurdles), but there needs to be a give-take relationship with better cultural assimilation so that the native communities don’t feel that their own culture is threatened while simultaneously feeling forced to preserve and make way for an entirely foreign one.  Additionally, we need to better educate people to be more accepting of reasonable levels of diversity, even if we place pragmatic limits on how far that diversity can go (while still maintaining solidarity and minimizing tribalism).

If it’s not a two-way street, with effective cultural assimilation, then we can expect a lot of people to run away looking for someone like Trump to throw them a proverbial life preserver with his “I’m the greatest and I can fix it all and make America great again” motto (perhaps disguising the implicit motto “make America WHITE again”), even if it’s really nothing but a bunch of snake-oil demagoguery designed to get him into power and rob the nation blind with a cabinet full of fellow billionaire plutocrats (including those tied to Big Oil and Goldman Sachs).  Trump learned fairly quickly how effective demagoguery was, likely aided by his insider knowledge of American TV-junkie culture (including The Apprentice), his knowledge of how bigoted so many people are, how attracted they are to controversy and shock value (rather than basic common decency), and how he could manipulate so many voters through hatred and fear given such weaknesses.  But none of that would have worked if there weren’t some grains of truth in the seeds Trump was sowing, if Trump wasn’t saying things that many Americans were simply too afraid to say out loud (including much that was racist, bigoted, and ignorant).

But rather than acknowledging the inherent problems with unlimited multi-culturalism, including its leading us down a path where the population becomes less and less cohesive, with less solidarity and fewer common goals, most people on the left largely labeled everyone with these legitimate concerns a racist and a bigot.  While it’s true that a substantial chunk of Trump supporters are racists and bigots (perhaps half, who really knows), an appreciable chunk of them are not.  Much like those who saw obvious problems with Islamic ideology in the modern age as it pertains to terrorist threats (with race being irrelevant, as can be seen with radicals such as Adam Gadahn), many saw problems with immigration in terms of the pragmatic limitations of multi-culturalism (rather than problems with any particular race).  On the other side, however, Trump supporters have to at least acknowledge that even if they themselves are not racist, their support of Trump does in fact validate his racist rhetoric and the racist supporters fueled by that rhetoric (even if Trump himself had no racist intentions, although I think he did).  So they may still think it was worth it to support Trump, but they can’t have their cake and eat it too.  They have to own up to what Trump’s rhetoric fuels and take at least some responsibility for it, including any consequences that they do not like or endorse.

Political Correctness, Safe Spaces, & Free Speech

Last but not least, there are debates regarding things like political correctness (which plays into the multi-culturalism battle), safe spaces, and freedom of speech that deeply affected this election.  For one, I acknowledge and sympathize with others’ aggravation over political correctness, where sometimes people are just trying to communicate some semantic content and don’t want to be bogged down with ever-changing social conventions on what terms one is and is not allowed to use.  But I also understand that social conventions change as a result of what comprises a society’s “stream of consciousness”, including the ever-changing ethical and moral concerns of the day, issues with social justice, stereotypes, marginalization of one group or another, etc.  People on the right should try not to be completely callous and inconsiderate about social conventions, and should work harder to understand why others are offended by certain terms; those on the left need to try harder to understand the intentions of those using possibly offensive terms.  If we each give a little leeway to the other and try to put ourselves in their shoes and vice versa, we’ll have better chances of getting along and finding common ground, rather than demonizing one another and never getting anywhere in terms of societal progress.  People on the left should work harder not to be so thin-skinned that everything offends them (in other words, try harder not to be like Trump), and people on the right should try to show a little more empathy and try not to be inconsiderate jerks.

Safe spaces have become another problem.  While it’s true that some events can and should be exclusive to certain groups (for example, keeping Nazi or KKK members out of a Jewish festival or speech), it is crucial that we don’t fall down a slippery slope that abolishes freedom of speech or establishes constant safe spaces that exacerbate the polarization plaguing our political sphere.  For example, social platforms like Facebook allow people to block the comments of others and to feed their walls with news and articles that confirm their own biases, preventing their ideas from ever being challenged.  The irony is that while many Trumpers raised legitimate concerns and dismay over the concept of safe spaces as espoused by those on the left, they too were guilty of the safe-space methodology on their own Facebook pages and the like.  Even my own sister (a Trump supporter after Cruz lost) blocked me after I pointed out a few flaws in her logic and reasoning with regard to a couple of Trump-apologetic posts she had shared.  After her husband came in to defend her (she never attempts to defend herself, for some reason), I refuted his points as well in a civil way, and then she blocked me from her wall.  This is a problem because people on both sides are making their own safe spaces, not allowing diversity in the opinions and points they are exposed to.  It only increases the in-group/out-group mentality that is ripping this country apart.

Trump is ironically the worst safe-space offender I’ve seen, with his “Sean Hannity” safe space on the radio; his one-dimensional rallies (filled with supporters to boost his ego, supporters whom Trump himself encouraged to punch protesters in the face, offering to pay their legal fees like some kind of totalitarian mobster); his disdain for the free press, free speech, and journalism in general; and the libel laws he wants to change so that he can sue news organizations that report facts he doesn’t want made public.  The chronic safe-space mentality has got to go, even if we reserve the right to safe spaces for some places and occasions.  There needs to be a careful balance, so that people are exposed to diverse ideas (not just what they want to hear), and we need to protect free speech (limiting one group opens the door to limiting them all).

Where to go from here?

While there may have been some other legitimate reasons to vote for Trump (I couldn’t think of any others to be honest), these seemed to be the primary ones I noticed at play.  So what do we do now?  Well, people need to stop talking past one another, and better empathize with the opposition and not simply demonize them.  The sooner everyone can acknowledge that the opposition had at least some legitimacy, the sooner we can have more civil discourses and keep moving forward to heal this great divide.

On Parfit’s Repugnant Conclusion


Back in 1984, Derek Parfit’s work on population ethics led him to formulate a possible consequence of total utilitarianism, namely what he deemed the Repugnant Conclusion (RC).  For those unfamiliar with the term, Parfit described it as follows:

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

To better explain this conclusion, let’s consider a few different populations, A, A+, and B, where the width of the bar represents the relative population and the height represents the average “well-being rating” of the people in that sub-population:

[Figure 2: bar chart of populations A, A+, and B; bar width represents relative population, bar height average well-being.  Source: http://plato.stanford.edu/entries/repugnant-conclusion/fig2.png]

In population A, let’s say we have 10 billion people with a “well-being rating” of +10 (on a scale of -10 to +10, with negative values indicating a life not worth living).  Now in population A+, let’s say we have all the people in population A plus the mere addition of 10 billion people with a “well-being rating” of +8.  According to the argument as Parfit presents it, it seems reasonable to hold that population A+ is better than population A, or at the very least, not worse.  This is believed to be the case because the only difference between the two populations is the mere addition of more people with lives worth living (even if their well-being isn’t as good as that of the people in population A), so it is believed that adding additional lives worth living cannot make an outcome worse when all else is equal.

Next, consider population B, which has the same number of people as population A+, but where every person has a “well-being rating” that is slightly higher than the average “well-being rating” in population A+ and slightly lower than that of population A.  Now if one accepts that population A+ is better than A (or at least not worse), and if one accepts that population B is better than population A+ (since it has a higher average well-being), then one has to accept that population B is better than population A (by transitivity: A <= A+ < B, therefore A < B).  If this is true then we can take this further and show that a sufficiently large population would still be better than population A even if the “well-being rating” of each person were only +1.  This is the RC as presented by Parfit, and he, along with most philosophers, found it to be unacceptable.  So he worked diligently on trying to solve it, but never succeeded in the way he had hoped.  This has since become one of the biggest problems in ethics, particularly in the branch of population ethics.
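
To make the arithmetic behind this explicit, here is a minimal sketch under total utilitarianism (total well-being = population × average rating), reusing the illustrative numbers above; the 101-billion figure for the +1 population is simply whatever size makes the totals cross:

```python
# Total well-being under total utilitarianism, using the illustrative
# numbers from the example above (populations in billions).

def total_utility(population_billions, avg_well_being):
    return population_billions * avg_well_being

A      = total_utility(10, 10)                           # 100
A_plus = total_utility(10, 10) + total_utility(10, 8)    # 180
B      = total_utility(20, 9.5)                          # 190 (average between A+'s 9 and A's 10)

assert A <= A_plus < B   # the transitive chain: A <= A+ < B, so A < B

# The Repugnant Conclusion: a large enough population at a mere +1
# rating has a greater total than A.
Z = total_utility(101, 1)   # 101 > 100
assert Z > A
```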

Some of the strategies that have been put forward to resolve the RC include adopting an average principle, a variable value principle, or some kind of critical level principle.  However, all of these supposed resolutions are either fraught with their own problems (if accepted) or are highly unsatisfactory, unconvincing, or very counter-intuitive.  A brief overview of the argument and the supposed solutions and their associated problems can be found here.

I’d like to respond to the RC argument as well because I think that there are at least a few problems with the premises right off the bat.  The foundation for my rebuttal relies on an egoistic moral realist ethics, based on a goal theory of morality (a subset of desire utilitarianism), which can be summarized as follows:

If one wants X above all else, then one ought to do Y above all else.  Since it can be shown that what one ultimately wants above all else is satisfaction and fulfillment with one’s life (or what Aristotle referred to as eudaimonia), one ought to do, above all else, all possible actions that will best achieve that goal.  The actions required to best accomplish this satisfaction can be determined empirically (based on psychology, neuroscience, sociology, etc.), and therefore we theoretically have epistemic access to a number of moral facts.  These moral facts are what we ought to do above all else in any given situation, given all the facts available to us and a rational assessment of those facts.

So if one is trying to choose whether one population is better or worse than another, I think that assessment should be based on the same egoistic moral framework, which accounts for all known consequences resulting from particular actions and which implements "altruistic" behaviors precipitated by cultivating virtues that benefit everyone including ourselves (such as compassion, integrity, and reasonableness).  So in the case of evaluating the comparison between population A and A+ as presented by Parfit, which is better?  Well, if one applies a veil of ignorance (most famously formulated by Rawls, building on the social contract tradition of philosophers such as Kant, Hobbes, Locke, and Rousseau), whereby we hypothetically choose between worlds without knowing which subpopulation we would end up in, which world ought we to prefer?  It would stand to reason that population A is better than A+ (and better than population B), because any person chosen at random has the highest probability of a high level of well-being in that population/society.  This reasoning would then render the RC false, as it only followed from fallacious reasoning (i.e. it is fallacious to assume that adding more people with lives worth living is all that matters in the assessment).
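A small sketch of this "random person" reading of the veil of ignorance; the expected-value scoring rule and the reuse of the hypothetical population figures are my own framing for illustration, not Parfit's:

```python
# Sketch: expected well-being of a randomly chosen member of each population.

def expected_wellbeing(groups):
    total_people = sum(count for count, _ in groups)
    total_score = sum(count * rating for count, rating in groups)
    return total_score / total_people

A      = [(10_000_000_000, 10)]
A_plus = [(10_000_000_000, 10), (10_000_000_000, 8)]
B      = [(20_000_000_000, 9.5)]

for name, pop in [("A", A), ("A+", A_plus), ("B", B)]:
    print(name, expected_wellbeing(pop))
# A = 10.0, A+ = 9.0, B = 9.5: a person chosen at random fares best in A,
# matching the veil-of-ignorance preference argued for above.
```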

Another fundamental flaw that I see in the premises is the assumption that population A+ contains the same population of high well-being individuals as in A, with the mere addition of people with a somewhat lower level of well-being.  If the higher well-being subpopulation of A+ has knowledge of the existence of other people in that society with a lower well-being, wouldn't that likely decrease their own well-being (compared to those in population A, who have no such concern)?  The only apparent ways around this are if the higher well-being people are ignorant of those other members of society, or if other factors that are not equal between the two high well-being subpopulations somehow compensate for that concern, in which case the mere addition assumption is false, since the hypothetical scenario would then involve a number of differences between the two higher well-being subpopulations.  And if the higher well-being subpopulation in A+ is indeed ignorant of the existence of the lower well-being subpopulation, then its members are unable to properly assess the state of their world, which would certainly factor into their overall level of well-being.

In order to properly assess this, and to behave morally at all, one needs to use as many facts as are practical to obtain and to act on those facts as rationally as possible.  It seems plausible that the better-off subpopulation of A+ would have at least some knowledge that there exist people with less well-being than themselves, and this ought to decrease their happiness and overall well-being when all else is truly equal compared to A.  But even if that subpopulation can't know this for some reason (e.g. if the subpopulations are completely isolated from one another), we have this knowledge, and thus must take account of it in our assessment of which population is better.  So the comparison of population A to A+ as stated in the argument appears to rest on fallacious assumptions that don't account for these factors pertaining to the well-being of informed people.

Now I should say that if we had knowledge pertaining to the future of both societies, we could wind up reversing our preference, for example if it turned out that population A's future was likely to be worse than population A+'s (with the most probable "well-being rating" decreasing comparatively).  If that were the case, then being armed with that probabilistic knowledge of the future (based on a Bayesian analysis of likely future outcomes) could force us to switch preferences.  Ultimately, the way to determine which world we ought to prefer is to obtain the relevant facts about which outcome would make us most satisfied overall (in the eudaimonia sense), even if this requires further scientific investigation regarding human psychology to determine the optimized trade-off between present and future well-being.
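One way to picture such a Bayesian comparison is to weight each population's possible futures by an estimated probability and compare expected long-run well-being; in the following sketch, every probability and future rating is invented purely for illustration:

```python
# Sketch: expected future well-being under invented probabilistic forecasts.

futures_A      = [(0.6, 10), (0.4, 4)]  # (probability, future average rating)
futures_A_plus = [(0.8, 9), (0.2, 7)]

def expected_future(futures):
    return sum(p * rating for p, rating in futures)

print("A: ", expected_future(futures_A))       # 0.6*10 + 0.4*4 = 7.6
print("A+:", expected_future(futures_A_plus))  # 0.8*9  + 0.2*7 = 8.6
# With these invented numbers, A+'s expected future outranks A's, showing
# how probabilistic knowledge of the future could reverse the preference.
```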

As for comparing two populations that have the same probability of high well-being, yet different sizes (say "A" and "double A"), I would argue once again that one should assess those populations based on the most likely future of each, given the facts available to us.  If the larger population is more likely to be unsustainable, for example, then it stands to reason that the smaller population is what one ought to strive for (and thus prefer).  However, if sustainability is not likely to be an issue given the contingent facts of the societies being evaluated, then I think one ought to choose the society that has the best chance of bettering the planet as a whole through maximized stewardship over time.  That is to say, if more people can more easily accomplish the goal of making the world a better place, then the larger population is what one ought to strive for, since it would secure more well-being in the future for any and all conscious creatures (including ourselves).  One would have to evaluate the societies being compared for these types of factors and then decide accordingly.  In the end, the aim is to maximize the eudaimonia of any individual chosen at random, both in the present society and in the future.

But what if we are instead comparing two populations that both have negative "well-being ratings"?  For example, what if we compare a population S containing only one person with a well-being rating of -10 (the worst possible suffering imaginable) to a population T containing one million people with well-being ratings of -9 (almost the worst possible suffering)?  It sure seems that applying the probabilistic principle I applied to the positive well-being populations would lead to preferring a world with a million people suffering horribly over a world with just one person suffering slightly more.  However, this would only follow if one applied the probabilistic principle while ignoring the egoistically-based "altruistic" virtues, such as compassion and reasonableness, as they pertain to that moral decision.  In order to determine which world one ought to prefer, just as in any other moral decision, one must determine what behaviors and choices make us most satisfied as individuals (to best obtain eudaimonia).  If people are generally more satisfied (perhaps universally, once fully informed of the facts and reasoning rationally) in preferring to have one person suffer at a -10 level over one million people suffering at a -9 level (even if it were you or I chosen as that single person), then that is the world one ought to prefer.

Once again, our preference could be reversed if we were informed that the most likely futures of these populations had their levels of suffering reversed or otherwise changed markedly.  And if the scenario changed to, say, one million people at a -10 level versus two million people at a -9 level, our preferred outcome might change as well, even if we don't yet know what that preference ought to be (i.e. if we're ignorant of some facts pertaining to our psychology, we may think we know the answer even though we're mistaken due to irrational thinking or missing facts).  As always, the decision of which population or world is better depends on how much knowledge we have pertaining to those worlds (to make the most informed decision we can, given our present epistemological limitations), and thus on our assessment of their present and most likely future states.  So even if we don't yet know which world we ought to prefer right now (in some subset of the thought experiments we conjure up), science can find these answers, or at least give us the best shot at answering them.

Co-evolution of Humans & Artificial Intelligence

leave a comment »

In my last post, I wrote a little bit about the concept of personal identity in terms of what some philosophers have emphasized and my take on it.  I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher's Take.  James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how advancements in machine learning are leading to the prospect of effective companion AI, or what he referred to as programmable friends.  The main point he raised is that programmable friends would likely have a very different relationship dynamic with us compared with our traditional (human) friends.  James also spoke about companion AI as laborers (in addition to being friends), but for the purposes of this post I won't discuss the laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place).  I'll be limiting my comments here to the friendship or social dynamic aspects.  So what aspects of programmable AI should we start thinking about?

Well, for one, we don't currently have the ability to simply reprogram a friend to be exactly as we want them to be, to avoid conflicts entirely, to share every interest we have, and so on; rather, there is a give-and-take relationship dynamic that we're used to dealing with.  We learn new ways of behaving and looking at the world, and even new ways of looking at ourselves, when we have friendships with people who differ from us in certain ways.  Much of the expansion and beneficial evolution of our perspectives results from conflicts that arise between ourselves and our friends, where different viewpoints clash and a person is often forced to reevaluate their own position based on the value they place on the viewpoints of their friends.  If we could simply reprogram our friends, as in the case of some future AI companions, what would this do to our moral, psychological and intellectual growth?  There would be some positive effects, I'm sure (from having less conflict in some cases and thus an increase in short-term happiness), but we'd be missing out on a host of interpersonal benefits that we gain from the types of friendships we're used to having (and thus we'd likely have less overall happiness as a result).

We can see where evolution ties into all of this: we have evolved as a social species to interact with others who are more or less like us, so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable, largely because of the incompatibility with our evolved social dynamic.  To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI.  In general, this could be done by making the AI more human-like in the first place, and this could be advertised as a beneficial AI "social software package," so that people looking to get companion AI would be inclined to choose this version even if they had the option of an entirely reprogrammable one.

Some features of a "social software package" could include a limit on the number of ways the AI could be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but without the ability to avoid all conflicts.  The AI could be designed to weight certain opinions, just as we do, and to be more assertive with regard to certain propositions.  Once the AI has learned its human counterpart's personality, values, opinions, etc., it could also be programmed with the ability to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time).  If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.
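To make this concrete, here is a purely hypothetical sketch of what such a package's settings might look like; every class name, field, and threshold below is invented for illustration:

```python
# Hypothetical sketch of a companion-AI "social software package" config.

from dataclasses import dataclass

@dataclass
class SocialSoftwarePackage:
    # Only conflicts above this severity (0-1) may be reprogrammed away,
    # preserving the ordinary give-and-take of friendship.
    reprogramming_severity_threshold: float = 0.9
    # The AI keeps weighted opinions of its own rather than mirroring the user.
    opinion_assertiveness: float = 0.6
    # Fraction of conversations in which the AI deliberately challenges the
    # user, e.g. via Socratic questioning.
    socratic_challenge_rate: float = 0.1

    def may_reprogram(self, conflict_severity: float) -> bool:
        """Allow reprogramming away only the most serious conflicts."""
        return conflict_severity >= self.reprogramming_severity_threshold

package = SocialSoftwarePackage()
print(package.may_reprogram(0.95))  # True: a very serious conflict
print(package.may_reprogram(0.30))  # False: ordinary disagreement remains
```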

Another factor that I think will likely play a role in mitigating the possible social-dynamic clash between (programmable) companion AI and humans is the fact that humans are also likely to become more and more integrated with AI technology generally.  That is, as we continue to make advancements in AI technology, we are also likely to integrate much of that technology into ourselves, making humans more or less into cyborgs, a.k.a. cybernetic organisms.  Given the path we're already on, with all the smartphones, apps, and other gadgets and computer systems that have become extensions of ourselves, the next obvious step (which I've mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need to interface with external hardware.  If we can access "the cloud" with our minds (say, via Bluetooth or the like), then the apps and all the fancy features can become a part of our minds, adding to the ways we are able to think, providing an internet's worth of knowledge at our cognitive disposal, etc.  I could see this technology eventually allowing us to change our senses and perceptions, including the ability to add virtual objects amalgamated with the rest of the external reality we perceive (such as virtual friends that we see and interact with who aren't physically present outside of our minds even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves while we are also creating companion AI, then we may end up with the ability to reprogram ourselves alongside those programmable companion AI.  In effect, our own qualitatively human social dynamic may start to change markedly, becoming more and more compatible with that of future AI.  The way I think this will most likely play out is that we will try to make AI more like us as we try to make ourselves more like AI, co-evolving with one another, sharing advantages with one another, and eventually becoming indistinguishable from one another.  Along this journey, however, we will also become very different from the way we are now, and after enough time passes we'll likely change so much that we'd be unrecognizable to people living today.  My hope is that as we use AI to improve our intelligence and increase our knowledge of the world generally, we will also continue to improve our knowledge of what makes us happiest (as social creatures or otherwise) and thus how to live the best and most morally fruitful lives that we can.  This will include improving our knowledge of social dynamics and the ways we can maximize all the interpersonal benefits therein.  Artificial intelligence may help us accomplish this, however paradoxical or counter-intuitive that may seem to us now.

The Illusion of Persistent Identity & the Role of Information in Identity

with 8 comments

After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  Even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call inferred identity), since the two are likely to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person — with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is some form of spatiotemporal and/or psychological continuity.  I certainly wouldn't be the first person to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place, and that, even if it did exist, its criterion would be determinable in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all the persons we've interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, let alone a determinable criterion for it.  At best, I think that if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit's views in the sense that regardless of what one finds the criteria for numerical personal identity to be (if they exist), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke's view — that psychological continuity (via memory) is the criterion for personal identity — was shown to be based on circular and illogical arguments (per Butler, Reid and others), I nevertheless applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can't be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate pertaining to any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn't really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn't useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the terms "I" or "me," it exists only for a brief moment in one specific spatio-temporal slice; as the next perceivable moment elapses, what used to be "I" becomes someone else, even if the new person that comes into being is still referred to as "I" or "me" by a person that possesses roughly the same configuration of matter in its body and brain as the previous person.  Since neighboring identities have an overlap in accessible memory, including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior, we shouldn't expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories that is sufficiently similar to the set accessible to the previous or successive identity.  This sufficient degree of similarity in those identities' memories allows for a seemingly persistent autobiographical "self" with goals.

As for the pragmatic reasons for considering all of these "I"s and "me"s to be the same person, with some singular identity over time, we can see that there is a causal dependency between each member of this "chain of spatio-temporal identities," and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent "self" and the near uni-directional behavior that results from it).  There is a continuity of memory and behavior (even though both change over time, in terms of both the number of memories and their accuracy), and this continuity allows a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we were each a single person moving through time and space (and as if we were surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, referencing those different identities under one label, physically attached to or instantiated by something localized, allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I thought was very fitting for this concept.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between all the members.  The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations transpiring eventually leads to what we call a new species.  It's difficult to define exactly when a speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I can think of "me" as the same "me" that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that "I" am a completely different person now compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity "speciation" event of some sort, even though it is hard to define exactly when it occurred.
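A toy model can make the analogy vivid: if each momentary "person slice" keeps most of its predecessor's memories and adds a few new ones, neighboring slices are nearly identical while distant slices share almost nothing, with no sharp boundary anywhere along the chain.  All numbers in this sketch are invented:

```python
# Toy sketch of the "chain of spatio-temporal identities": each successive
# "person slice" shares most memories with its neighbor, yet distant slices
# may share almost nothing -- an identity "speciation" with no sharp boundary.

import random

random.seed(0)

def next_slice(memories, keep=0.95, gain=3):
    """Each moment keeps most memories and adds a few new ones."""
    kept = {m for m in memories if random.random() < keep}
    new = {f"memory_{random.randint(0, 10**9)}" for _ in range(gain)}
    return kept | new

def overlap(a, b):
    """Jaccard similarity between two slices' memory sets."""
    return len(a & b) / len(a | b)

slices = [{f"memory_{i}" for i in range(50)}]
for _ in range(200):
    slices.append(next_slice(slices[-1]))

print(round(overlap(slices[0], slices[1]), 2))   # neighbors: high overlap
print(round(overlap(slices[0], slices[-1]), 2))  # distant slices: near zero
```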

Biologists have tried to solve their species problem by coming up with various criteria to help for taxonomical purposes at the very least, but what they’ve wound up with at this point is several different criteria for defining a species that are each effective for different purposes (e.g. biological-species concept, morpho-species concept, phylogenetic-species concept, etc.), and without any single “correct” answer since they are all situationally more or less useful.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think philosophers need to focus more on the importance of the concept of information and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we take into account some of the concerns that James DiGiovanna brought up regarding the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework, since they will be based more on the information content of the person's identity, without putting as much importance on numerical identity or on our intuitions about persisting people (intuitions that will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans who may be angry that they couldn't avoid punishment in this same way, having a much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime only if its reprogrammed memory still includes information pertaining to having performed those past criminal behaviors.  If those "criminal memories" are gone after the reprogramming, then I'd say the AI is not guilty of the crime, because the information constituting its identity no longer matches that of the criminal AI.  It would have no recollection of having committed the crime, and so "it" would not have committed the crime, since that "it" was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime — though not numerically guilty, but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts embedded in the concept of guilt).  If twelve duplicates of this information were instantiated in new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means that the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify their behavior as if they had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the good of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
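As a toy illustration of this numerical/qualitative distinction, identity-as-information can be modeled as a digest of an agent's memory contents; everything here (the class, the memory strings, the choice of hashing) is invented purely for illustration:

```python
# Toy sketch: numerical identity as object instance, qualitative identity
# as a digest of the agent's information content (its memories).

import hashlib

class Agent:
    def __init__(self, memories):
        self.memories = list(memories)

    def information_signature(self) -> str:
        """Qualitative identity: a digest of the information content."""
        return hashlib.sha256("|".join(self.memories).encode()).hexdigest()

original  = Agent(["committed crime X", "values Y", "goal Z"])
duplicate = Agent(["committed crime X", "values Y", "goal Z"])
reformed  = Agent(["values Y", "goal Z"])  # "criminal memories" erased

# Numerically distinct (different instances)...
print(original is duplicate)  # False
# ...but qualitatively identical (same information content):
print(original.information_signature() == duplicate.information_signature())  # True
# The reprogrammed agent no longer matches the criminal's information:
print(original.information_signature() == reformed.information_signature())   # False
```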

We can analogize this with another thought experiment involving human beings.  If we imagine a human whose memories have been changed so that he believes he is Charles Manson, and who has all of Charles Manson's memories and intentions, then that person should be treated as if they were Charles Manson, and thus incarcerated/punished accordingly to rehabilitate them or protect the other members of society.  This assumes, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what matters most — not the actual previous actions of the person — because the "previous person" was someone else, due to that gross change in information.