The Open Mind

Cogito Ergo Sum

On Parfit’s Repugnant Conclusion


Back in 1984, Derek Parfit’s work on population ethics led him to formulate a possible consequence of total utilitarianism, which came to be known as the Repugnant Conclusion (RC).  For those unfamiliar with the term, Parfit described it as follows:

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

To better explain this conclusion, let’s consider a few different populations, A, A+, and B, where the width of the bar represents the relative population and the height represents the average “well-being rating” of the people in that sub-population:

[Figure 2: bar diagram of populations A, A+, and B]

In population A, let’s say we have 10 billion people with a “well-being rating” of +10 (on a scale of -10 to +10, with negative values indicating a life not worth living).  Now in population A+, let’s say we have all the people in population A with the mere addition of 10 billion people with a “well-being rating” of +8.  According to the argument as Parfit presents it, it seems reasonable to hold that population A+ is better than population A, or at the very least, not worse than population A.  This is believed to be the case because the only difference between the two populations is the mere addition of more people with lives worth living (even if their well-being isn’t as good as that of the people in population A), and because, it is believed, adding additional lives worth living cannot make an outcome worse when all else is equal.

Next, consider population B, which has the same number of people as population A+, but where every person has a “well-being rating” slightly higher than the average rating in population A+ and slightly lower than that of population A.  Now if one accepts that population A+ is better than A (or at least not worse), and if one accepts that population B is better than population A+ (since it has a higher average well-being), then by transitivity one has to accept that population B is better than population A (A ≤ A+ < B, therefore A < B).  If this is true then we can take this further and show that a sufficiently large population would still be better than population A, even if the “well-being rating” of each person was only +1.  This is the RC as presented by Parfit, and he, along with most philosophers, found it unacceptable.  He worked diligently on trying to solve it, but never succeeded in the way he had hoped.  It has since become one of the biggest problems in ethics, particularly in the branch of population ethics.
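
To make the arithmetic concrete, here is a minimal sketch in Python (using the population sizes and ratings assumed above) showing how a total-utility ranking generates the RC while an average-utility ranking does not:

    # Populations modeled as lists of (number of people, well-being rating) groups.
    A      = [(10e9, 10)]                 # 10 billion people at +10
    A_plus = [(10e9, 10), (10e9, 8)]      # A plus 10 billion more people at +8
    B      = [(20e9, 9.5)]                # same size as A+, everyone at +9.5
    Z      = [(10e12, 1)]                 # a vastly larger population, all at +1

    def total(pop):
        """Total well-being: the quantity total utilitarianism maximizes."""
        return sum(n * r for n, r in pop)

    def average(pop):
        """Average well-being, i.e. the expected rating of a randomly chosen person."""
        return total(pop) / sum(n for n, _ in pop)

    for name, pop in [("A", A), ("A+", A_plus), ("B", B), ("Z", Z)]:
        print(f"{name:>2}: total = {total(pop):.2e}, average = {average(pop):.1f}")

    # Totals: A (1.0e11) < A+ (1.8e11) < B (1.9e11) < Z (1.0e13), so by total utility
    # the barely-worth-living population Z beats A -- the Repugnant Conclusion.
    # Averages reverse the ranking: A (10.0) > B (9.5) > A+ (9.0) > Z (1.0).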

Some of the strategies that have been put forward to resolve the RC include adopting an average principle, a variable value principle, or some kind of critical level principle.  However, all of these proposed resolutions are either fraught with their own problems (if accepted) or are highly unsatisfactory, unconvincing, or very counter-intuitive.  A brief overview of the argument, the proposed solutions, and their associated problems can be found here.

I’d like to respond to the RC argument as well because I think that there are at least a few problems with the premises right off the bat.  The foundation for my rebuttal relies on an egoistic moral realist ethics, based on a goal theory of morality (a subset of desire utilitarianism), which can be summarized as follows:

If one wants X above all else, then one ought to do Y above all else.  Since it can be shown that what one ultimately wants above all else is satisfaction and fulfillment with one’s life (or what Aristotle referred to as eudaimonia), one ought to do above all else whatever actions will best achieve that goal.  The actions required to best accomplish this satisfaction can be determined empirically (based on psychology, neuroscience, sociology, etc.), and therefore we theoretically have epistemic access to a number of moral facts.  These moral facts are what we ought to do above all else in any given situation, given all the facts available to us and a rational assessment of those facts.

So if one is trying to choose whether one population is better or worse than another, I think that assessment should be based on the same egoistic moral framework, one which accounts for all known consequences resulting from particular actions and which implements “altruistic” behaviors precipitated by cultivating virtues that benefit everyone including ourselves (such as compassion, integrity, and reasonableness).  So in the case of Parfit’s comparison between population A and population A+, which is better?  Well, apply the veil of ignorance made famous by John Rawls (building on the social contract tradition of philosophers such as Hobbes, Locke, Rousseau, and Kant): if we had to choose between these worlds without knowing which subpopulation we would end up in, which world ought we to prefer?  It would stand to reason that population A is certainly better than A+ (and better than population B), because any person chosen at random has the highest probability of a high level of well-being in that population/society.  This reasoning would then render the RC false, as it only followed from fallacious reasoning (i.e. it is fallacious to assume that adding more people with lives worth living is all that matters in the assessment).

Another fundamental flaw that I see in the premises is the assumption that population A+ contains the same population of high well-being individuals as in A, with the mere addition of people with a somewhat lower level of well-being.  If the higher well-being subpopulation of A+ has knowledge of the existence of other people in that society with a lower well-being, wouldn’t that likely lead to a decrease in their own well-being (compared to those in the A population that had no such concern)?  It would seem that the only way around this is if the higher well-being people were ignorant of those other members of society, or if there were other unequal factors between the two high well-being subpopulations in A and A+ that somehow compensated for that concern.  In the latter case the mere addition assumption is false, since the hypothetical scenario would involve a number of differences between the two higher well-being populations.  And if the higher well-being subpopulation in A+ is indeed ignorant of the existence of the lower well-being subpopulation in A+, then they are not able to properly assess the state of the world, which would certainly factor into their overall level of well-being.

In order to properly assess this, and to behave morally at all, one needs to use as many facts as are practical to obtain and operate according to those facts as rationally as possible.  It seems plausible that the better-off subpopulation of A+ would have at least some knowledge of the fact that there exist people with less well-being than themselves, and this ought to decrease their happiness and overall well-being when all else is truly equal when compared to A.  But even if that subpopulation can’t know this for some reason (e.g. if the subpopulations are completely isolated from one another), we do have this knowledge and thus must take it into account in our assessment of which population is better.  So the comparison of population A to A+ as stated in the argument seems to be an erroneous one, based on assumptions that don’t account for these factors pertaining to the well-being of informed people.

Now I should say that if we had knowledge pertaining to the future of both societies, we could wind up reversing our preference if, for example, it turned out that population A had a future that was likely to turn out worse than the future of population A+ (where the most probable “well-being rating” decreased comparatively).  If this were the case, then being armed with that probabilistic knowledge of the future (based on a Bayesian analysis of likely future outcomes) could force us to switch preferences.  Ultimately, the way to determine which world we ought to prefer is to obtain the relevant facts about which outcome would make us most satisfied overall (in the eudaimonia sense), even if this requires further scientific investigation regarding human psychology to determine the optimized trade-off between present and future well-being.
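
As a minimal sketch of that kind of probabilistic comparison (the scenario probabilities and future ratings below are invented purely for illustration), the preference would go to whichever population has the higher probability-weighted future well-being:

    # Hypothetical future scenarios per population: (probability, future average rating).
    futures = {
        "A":  [(0.6, 10), (0.4, 2)],   # higher well-being now, but a riskier future
        "A+": [(0.9, 8),  (0.1, 6)],   # lower well-being now, but a more stable future
    }

    def expected_future_wellbeing(scenarios):
        """Probability-weighted average of the likely future ratings."""
        return sum(p * r for p, r in scenarios)

    for name, scenarios in futures.items():
        print(name, expected_future_wellbeing(scenarios))
    # A: 0.6*10 + 0.4*2 = 6.8 versus A+: 0.9*8 + 0.1*6 = 7.8, so with these
    # (invented) numbers the preference would flip in favor of A+.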

As for comparing two populations that have the same probability of high well-being but different sizes (say “A” and “double A”), I would argue once again that one should assess those populations based on the most likely future for each, given the facts available to us.  If the larger population is more likely to be unsustainable, for example, then it stands to reason that the smaller population is what one ought to strive for (and thus prefer) over the larger one.  However, if sustainability is not likely to be an issue based on the contingent facts of the societies being evaluated, then I think one ought to choose the society that has the best chances of bettering the planet as a whole through maximized stewardship over time.  That is to say, if more people can more easily accomplish goals of making the world a better place, then the larger population would be what one ought to strive for, since it would secure more well-being in the future for any and all conscious creatures (including ourselves).  One would have to evaluate the societies one is comparing for these types of factors and then make the decision accordingly.  In the end, this would maximize the eudaimonia for any individual chosen at random, both in that present society and in the future.

But what if we are instead comparing two populations that both have negative “well-being ratings”?  For example, what if we compare a population S containing only one person with a well-being rating of -10 (the worst possible suffering imaginable) versus another population T containing one million people with well-being ratings of -9 (almost the worst possible suffering)?  It sure seems that if we apply the probabilistic principle I applied to the positive well-being populations, that would lead to preferring a world with a million people suffering horribly over a world with just one person suffering slightly more.  However, this would only follow if one applied the probabilistic principle while ignoring the egoistically-based “altruistic” virtues, such as compassion and reasonableness, as they pertain to that moral decision.  In order to determine which world one ought to prefer over another, just as in any other moral decision, one must determine what behaviors and choices make us most satisfied as individuals (to best obtain eudaimonia).  If people are generally more satisfied (perhaps universally, once fully informed of the facts and reasoning rationally) in preferring to have one person suffer at a -10 level over one million people suffering at a -9 level (even if it was you or I chosen as that single person), then that is the world one ought to prefer.
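
Putting numbers to this, the two principles pull in opposite directions:

    U_{\text{total}}(S) = 1 \times (-10) = -10, \qquad U_{\text{total}}(T) = 10^{6} \times (-9) = -9 \times 10^{6}
    \bar{U}(S) = -10, \qquad \bar{U}(T) = -9

So the total view prefers S by an enormous margin, while the per-person (probabilistic) view prefers T, the world with a million sufferers.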

Once again, our preference could be reversed if we were informed that the most likely futures of these populations had their levels of suffering reversed or changed markedly.  And if the scenario changed to, say, 1 million people at a -10 level versus 2 million people at a -9 level, our preferred outcome may change as well, even if we don’t yet know what that preference ought to be (if we’re ignorant of some facts pertaining to our psychology at present, we may think we know even when we are incorrect, due to irrational thinking or missing facts).  As always, the decision of which population or world is better depends on how much knowledge we have pertaining to those worlds (to make the most informed decision we can given our present epistemological limitations), and thus on our assessment of their present and most likely future states.  So even if we don’t yet know which world we ought to prefer right now (in some subset of the thought experiments we conjure up), science can find these answers (or at least give us the best shot at answering them).

Co-evolution of Humans & Artificial Intelligence


In my last post, I wrote a little bit about the concept of personal identity, in terms of what some philosophers have emphasized and my take on it.  I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher’s Take.  James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how the advancements made in machine learning and so forth are leading to the future prospect of effective companion AI, or what he referred to as programmable friends.  The main point he raised was that programmable friends would likely have a very different relationship dynamic with us compared with our traditional (human) friends.  James also spoke about companion AI in terms of their also being laborers (as well as friends), but for the purposes of this post I won’t discuss the laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place).  I’ll be limiting my comments here to the friendship or social dynamic aspects only.  So what aspects of programmable AI should we start thinking about?

Well for one, we don’t currently have the ability to simply reprogram a friend to be exactly as we want them to be, in order to avoid conflicts entirely, to share every interest we have, and so on; rather, there is a give-and-take relationship dynamic that we’re used to dealing with.  We learn new ways of behaving and looking at the world, and even new ways of looking at ourselves, when we have friendships with people that differ from us in certain ways.  Much of the expansion and beneficial evolution of our perspectives is the result of certain conflicts that arise between ourselves and our friends, where different viewpoints can clash against one another, often forcing a person to reevaluate their own position based on the value they place on the viewpoints of their friends.  If we could simply reprogram our friends, as in the case with some future AI companions, what would this do to our moral, psychological and intellectual growth?  There would be some positive effects I’m sure (from having less conflict in some cases and thus an increase in short-term happiness), but we’d definitely be missing out on a host of interpersonal benefits that we gain from having the types of friendships that we’re used to having (and thus we’d likely have less overall happiness as a result).

We can see where evolution ties into all this: we have evolved as a social species to interact with others that are more or less like us, so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable, largely because of the incompatibility with our evolved social dynamic.  To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI.  In general, this could be done by making the AI more like humans in the first place, and this could be advertised as some kind of beneficial AI “social software package”, so that people looking to get companion AI would be inclined to get this version even if they had the choice to go for the entirely reprogrammable version.

Some features of a “social software package” could include a limit on the number of ways the AI could be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but without the ability to avoid all conflicts.  The AI could also be made to weight certain opinions, just as we do, and to be more assertive with regard to certain propositions and so forth.  Once the AI has learned its human counterpart’s personality, values, opinions, etc., it could also be programmed with the ability to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time).  If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.
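
Purely as an illustrative sketch (every name and parameter here is hypothetical, not a description of any real product or API), such a package might amount to a small set of tunable constraints:

    from dataclasses import dataclass

    @dataclass
    class SocialSoftwarePackage:
        """Hypothetical constraints on a companion AI -- an illustrative sketch only."""
        max_reprogrammings_per_year: int = 2   # only the most serious conflicts can be edited away
        min_opinion_weight: float = 0.3        # the AI retains some assertiveness on its own views
        challenge_rate: float = 0.1            # fraction of conversations where it pushes back
        socratic_mode: bool = True             # occasionally question the human rather than agree
        protected_traits: tuple = ("integrity", "honesty")  # traits the owner cannot remove

        def can_reprogram(self, trait: str, edits_used_this_year: int) -> bool:
            """Allow an edit only within the yearly limit and outside protected traits."""
            return (trait not in self.protected_traits
                    and edits_used_this_year < self.max_reprogrammings_per_year)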

Another factor that I think will likely play a role in mitigating the possible social dynamic clash between (programmable) companion AI and humans is the fact that humans are also likely to become more and more integrated with AI technology generally.  That is, as humans continue to make advancements in AI technology, we are also likely to integrate a lot of that technology into ourselves, making humans more or less into cyborgs, a.k.a. cybernetic organisms.  If we look at the path we’re on already, with all the smartphones, apps, and other gadgets and computer systems that have started to become extensions of ourselves, we can see that the next obvious step (which I’ve mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need to interface with external hardware.  If we can access “the cloud” with our minds (say, via Bluetooth or the like), then the apps and all the fancy features can become a part of our minds, adding to the ways that we will be able to think, providing an internet’s worth of knowledge at our cognitive disposal, and so on.  I could see this technology eventually allowing us to change our senses and perceptions, including an ability to add virtual objects that are amalgamated with the rest of the external reality that we perceive (such as adding virtual friends that we see and interact with that aren’t physically present outside of our minds, even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves as we are also creating companion AI, then we may also end up with the ability to reprogram ourselves alongside those programmable companion AI.  In effect, our own qualitatively human social dynamic may start to change markedly and become more and more compatible with that of the future AI.  The way I think this will most likely play out is that we will try to make AI more like us as we try to make us more like AI, where we co-evolve with one another, trying to share advantages with one another and eventually becoming indistinguishable from one another.  Along this journey however we will also become very different from the way we are now, and after enough time passes, we’ll likely change so much that we’d be unrecognizable to people living today.  My hope is that as we use AI to also improve our intelligence and increase our knowledge of the world generally, we will also continue to improve on our knowledge of what makes us happiest (as social creatures or otherwise) and thus how to live the best and most morally fruitful lives that we can.  This will include improving our knowledge of social dynamics and the ways that we can maximize all the interpersonal benefits therein.  Artificial intelligence may help us to accomplish this however paradoxical or counter-intuitive that may seem to us now.

The Illusion of Persistent Identity & the Role of Information in Identity


After reading and commenting on a post at “A Philosopher’s Take” by James DiGiovanna titled Responsibility, Identity, and Artificial Beings: Persons, Supra-persons and Para-persons, I decided to expand on the topic of personal identity.

Personal Identity Concepts & Criteria

I think when most people talk about personal identity, they are referring to how they see themselves and how they see others in terms of personality and some assortment of (usually prominent) cognitive and behavioral traits.  Basically, they see it as what makes a person unique and in some way distinguishable from another person.  And even this rudimentary concept can be broken down into at least two parts, namely, how we see ourselves (self-ascribed identity) and how others see us (which we could call the inferred identity of someone else), since the two are likely to differ.  While most people tend to think of identity in these ways, when philosophers talk about personal identity, they are usually referring to the unique numerical identity of a person.  Roughly speaking, this amounts to whatever conditions or properties are both necessary and sufficient for a person at one point in time and a person at another point in time to be considered the same person, with a temporal continuity between those points in time.

Usually the criterion put forward for this personal identity is supposed to be some form of spatiotemporal and/or psychological continuity.  I certainly wouldn’t be the first person to point out that the question of which criterion is correct has already framed the debate with the assumption that a personal (numerical) identity exists in the first place; it also assumes that, if such an identity did exist, its criterion would be determinable in some way.  While it is not unfounded to believe that some properties exist that we could ascribe to all persons (simply because of what we find in common with all persons we’ve interacted with thus far), I think it is far too presumptuous to believe that there is a numerical identity underlying our basic conceptions of personal identity, along with a determinable criterion for it.  At best, I think if one finds any kind of numerical identity for persons that persists over time, it is not going to be compatible with our intuitions, nor is it going to be applicable in any pragmatic way.

As I mention pragmatism, I am sympathetic to Parfit’s views in the sense that regardless of what one finds the criterion for numerical personal identity to be (if it exists), the only thing that really matters to us is psychological continuity anyway.  So despite the fact that Locke’s view, that psychological continuity (via memory) was the criterion for personal identity, was shown to be based on circular and illogical arguments (per Butler, Reid and others), I nevertheless applaud his basic idea.  Locke seemed to be on the right track, in that psychological continuity (in some sense involving memory and consciousness) is really the essence of what we care about when defining persons, even if it can’t be used as a valid criterion in the way he proposed.

(Non) Persistence & Pragmatic Use of a Personal Identity Concept

I think that the search for, and long debates over, the best criterion for personal identity have illustrated that what people have been trying to label as personal identity should probably be relabeled as some sort of pragmatic pseudo-identity.  The pragmatic considerations behind the common and intuitive conceptions of personal identity have no doubt steered the debate pertaining to any possible criteria for helping to define it, and so we can still value those considerations even if a numerical personal identity doesn’t really exist (that is, even if it is nothing more than a pseudo-identity), and even if a diachronic numerical personal identity does exist but isn’t useful in any way.

If the object/subject that we refer to as “I” or “me” is constantly changing with every passing moment of time both physically and psychologically, then I tend to think that the self (that many people ascribe as the “agent” of our personal identity) is an illusion of some sort.  I tend to side more with Hume on this point (or at least James Giles’ fair interpretation of Hume) in that my views seem to be some version of a no-self or eliminativist theory of personal identity.  As Hume pointed out, even though we intuitively ascribe a self and thereby some kind of personal identity, there is no logical reason supported by our subjective experience to think it is anything but a figment of the imagination.  This illusion results from our perceptions flowing from one to the next, with a barrage of changes taking place with this “self” over time that we simply don’t notice taking place — at least not without critical reflection on our past experiences of this ever-changing “self”.  The psychological continuity that Locke described seems to be the main driving force behind this illusory self since there is an overlap in the memories of the succession of persons.

I think one could say that if there is any numerical identity associated with the term “I” or “me”, it only exists for a short moment of time in one specific spatio-temporal slice; then, as the next perceivable moment elapses, what used to be “I” becomes someone else, even if the new person that comes into being is still referred to as “I” or “me” by a person that possesses roughly the same configuration of matter in its body and brain as the previous person.  Since neighboring identities have an overlap in accessible memory, including autobiographical memories, memories of past experiences generally, and memories pertaining to the evolving desires that motivate behavior, we shouldn’t expect this succession of persons to be noticed or perceived by the illusory self, because each identity has access to a set of memories that is sufficiently similar to the set accessible to the previous or successive identity.  And this sufficient degree of similarity in those identities’ memories allows for a seemingly persistent autobiographical “self” with goals.
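
A toy model may make this clearer (a sketch with an invented similarity measure, not a claim about how memory actually works): represent each momentary “person-slice” as a set of memories, where each new slice drops one old memory and adds one new one.

    # Toy model: each "person-slice" holds a rolling window of memory IDs.
    def make_slices(n_slices=100, memory_span=50):
        return [set(range(t, t + memory_span)) for t in range(n_slices)]

    def similarity(a, b):
        """Jaccard similarity between two slices' memory sets (1.0 = identical)."""
        return len(a & b) / len(a | b)

    slices = make_slices()
    print(similarity(slices[0], slices[1]))    # ~0.96: neighbors seem like "the same person"
    print(similarity(slices[0], slices[10]))   # ~0.67: still recognizably similar
    print(similarity(slices[0], slices[99]))   # 0.0: no shared memories remain at all

Neighboring slices are nearly identical, so the succession is never noticed, while slices far apart in the chain can share nothing at all, which is just the five-year-old case discussed below.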

As for the pragmatic reasons for considering all of these “I”s and “me”s to be the same person, some singular identity over time, we can see that there is a causal dependency between each member of this “chain of spatio-temporal identities”, and so treating that chain of interconnected identities as one being is extremely intuitive and also incredibly useful for accomplishing goals (which is likely the reason why evolution would favor brains that can intuit this concept of a persistent “self” and the near uni-directional behavior that results from it).  There is a continuity of memory and behaviors (even though both change over time, both in terms of the number of memories and their accuracy), and this continuity allows for a process of conditioning to modify behavior in ways that actively rely on those chains of memories of past experiences.  We behave as if we are a single person moving through time and space (and as if we are surrounded by other temporally extended single persons behaving in similar ways), and this provides a means of assigning ethical and causal responsibility to something, or more specifically to some agent.  Quite simply, having those different identities referenced under one label and physically attached to or instantiated by something localized allows that pragmatic pseudo-identity to persist over time so that various goals (whether personal or interpersonal/societal) can be accomplished.

“The Persons Problem” and a “Speciation” Analogy

I came up with an analogy that I thought was very fitting to this concept.  One could analogize this succession of identities that get clumped into one bulk pragmatic pseudo-identity with the evolutionary concept of speciation.  For example, a sequence of identities somehow constitutes an intuitively persistent personal identity, just as a sequence of biological generations somehow constitutes a particular species, due to the high degree of similarity between them all.  The apparent difficulty lies in the fact that, at some point after enough identities have succeeded one another, even the intuitive conception of a personal identity changes markedly, to the point of being unrecognizable from its ancestral predecessor, just as enough biological generations transpiring eventually leads to what we call a new species.  It’s difficult to define exactly when a speciation event happens (hence the species problem), and I think we have a similar problem with personal identity.  Where does it begin and end?  If personal identity changes over the course of a lifetime, when does one person become another?  I can think of “me” as the same “me” that existed one year ago, but if I go far enough back in time, say to when I was five years old, it is clear that “I” am a completely different person now when compared to that five-year-old (different beliefs, goals, worldview, ontology, etc.).  There seems to have been an identity “speciation” event of some sort, even though it is hard to define exactly when it was.

Biologists have tried to solve their species problem by coming up with various criteria, at the very least to help with taxonomy, but what they’ve wound up with at this point is several different criteria for defining a species that are each effective for different purposes (e.g. the biological-species concept, morpho-species concept, phylogenetic-species concept, etc.), without any single “correct” answer, since they are all situationally more or less useful.  Similarly, some philosophers have had a persons problem that they’ve been trying to solve, and I gather that it is insoluble for similar “fuzzy boundary” reasons (indeterminate properties, situationally dependent properties, etc.).

The Role of Information in a Personal Identity Concept

Anyway, rather than attempt to solve the numerical personal identity problem, I think that philosophers need to focus more on the importance of the concept of information, and how it can be used to arrive at a more objective and pragmatic description of the personal identity of some cognitive agent (even if it is not used as a criterion for numerical identity, since information can be copied and the copies can be distinguished from one another numerically).  I think this is especially true once we take into account some of the concerns that James DiGiovanna brought up concerning the integration of future AI into our society.

If all of the beliefs, behaviors, and causal driving forces in a cognitive agent can be represented in terms of information, then I think we can implement more universal conditioning principles within our ethical and societal framework, since they will be based more on the information content of the person’s identity, without putting as much importance on numerical identity or on our intuitions about persisting people (since those intuitions will be challenged by several kinds of foreseeable future AI scenarios).

To illustrate this point, I’ll address one of James DiGiovanna’s conundrums.  James asks us:

To give some quick examples: suppose an AI commits a crime, and then, judging its actions wrong, immediately reforms itself so that it will never commit a crime again. Further, it makes restitution. Would it make sense to punish the AI? What if it had completely rewritten its memory and personality, so that, while there was still a physical continuity, it had no psychological content in common with the prior being? Or suppose an AI commits a crime, and then destroys itself. If a duplicate of its programming was started elsewhere, would it be guilty of the crime? What if twelve duplicates were made? Should they each be punished?

In the first case, if the information constituting the new identity of the AI after reprogramming is such that it no longer needs any kind of conditioning, then it would be senseless to punish the AI — other than to appease humans that may be angry that they couldn’t themselves avoid punishment in this way, due to having a much slower and less effective means of reprogramming themselves.  I would say that the reprogrammed AI is guilty of the crime, but only if its reprogrammed memory still included information pertaining to having performed those past criminal behaviors.  However, if those “criminal memories” are now gone via the reprogramming then I’d say that the AI is not guilty of the crime because the information constituting its identity doesn’t match that of the criminal AI.  It would have no recollection of having committed the crime and so “it” would not have committed the crime since that “it” was lost in the reprogramming process due to the dramatic change in information that took place.

In the latter scenario, if the information constituting the identity of the destroyed AI was re-instantiated elsewhere, then I would say that it is in fact guilty of the crime, though not numerically guilty but rather qualitatively guilty (to differentiate between the numerical and qualitative personal identity concepts that are embedded in the concept of guilt).  If twelve duplicates of this information were instantiated into new AI hardware, then likewise all twelve of those cognitive agents would be qualitatively guilty of the crime.  What actions should be taken based on qualitative guilt?  I think it means that the AI should be punished, or more specifically that the judicial system should perform the reconditioning required to modify its behavior as if it had committed the crime (especially if the AI believes/remembers that it has committed the crime), for the better of society.  If this can be accomplished through reprogramming, then that would be the most rational thing to do, without any need for traditional forms of punishment.
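
Here is a sketch of the kind of information-based test being described (the similarity threshold and the set-based model of an agent’s identity are invented purely for illustration):

    def qualitatively_guilty(agent_state: set, criminal_state: set, threshold: float = 0.9) -> bool:
        """Guilt tracks the information constituting an identity (modeled here
        as a set of memories/dispositions), not the hardware it runs on."""
        overlap = len(agent_state & criminal_state) / len(agent_state | criminal_state)
        return overlap >= threshold

    criminal  = {"memory:crime", "disposition:aggression", "belief:x", "goal:y"}
    duplicate = set(criminal)                                      # re-instantiated copy
    reformed  = {"belief:x", "goal:y", "disposition:compassion"}   # crime memories erased

    print(qualitatively_guilty(duplicate, criminal))  # True: each of the twelve copies qualifies
    print(qualitatively_guilty(reformed, criminal))   # False: that "it" was lost in reprogramming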

We can analogize this with another thought experiment involving human beings.  If we imagine a human that has had its memories changed so that it believes it is Charles Manson and has all of Charles Manson’s memories and intentions, then that person should be treated as if they are Charles Manson and thus incarcerated/punished accordingly, to rehabilitate them or protect the other members of society.  This is assuming, of course, that we had reliable access to that kind of mind-reading knowledge.  If we did, the information constituting the identity of that person would be what is most important — not what the actual previous actions of the person were — because the “previous person” was someone else, due to that gross change in information.

Some Thoughts on the Orlando Massacre


My sincerest condolences go out to all the victims and the friends and families of those victims in the Orlando (“Pulse”) night-club shooting.  While it is still uncertain and under investigation whether or not there were any ties between the shooter (whom I won’t bother naming) and some Islamic extremist organization, there was in fact a proclaimed allegiance to such an organization voiced by the shooter himself to the police prior to the incident.  Even if no direct ties are found between the shooter and this or any other radical Islamic extremist organization, the possibility will remain that this was a “self-radicalized” or “self-actualized” Jihadist Muslim.  The man very likely knew that he was going to die one way or another that night (by police or otherwise), and so a belief in martyrdom and in an eternal paradise after death would have been perhaps the most powerful reason not to care about the consequences.  And a person believing that they are carrying out the wishes of an invisible magic man in the sky, and doing so in order to achieve eternal paradise, has more than enough motive to commit this kind of heinous act.

Obviously we don’t know what the man was thinking and can’t confirm his alleged motives, but if we take any of his own words seriously, then this is yet another incident that demands that the difficult religious conversation many people want to avoid be opened further.  That conversation involves the topic of reforming Islam with the secular moderates that claim membership in that religion, and a recognition that when those religious texts are plainly read in their entirety, they clearly advocate violence and oppression against non-believers.  There may be some good messages in those texts, as there are in just about any book — but to deny the heinous contents that also exist in those very same texts, and to deny the real religious motivations of these murderers who are inspired by those texts, is nothing but intellectual dishonesty and delusion.

Regressive liberals aren’t making things any easier as they throw out accusations of racism and bigotry even in cases where it is only the religious ideas themselves that are being criticized, with no mention of any race or ethnicity.  That has to stop too.  It’s true that many of the conservatives who mention the dangerous ideas in Islam are also racists and bigots with racist motives behind their anti-Islamic agenda.  But as an intellectually honest liberal myself, I can both recognize and abhor those racist motives common to many conservative social and political circles, yet also agree with some of those conservatives’ claims pertaining to the dangers of certain Islamic religious ideas.

I suspect that one of the reasons for the origin of the regressive liberal movement and its commitment to eliminating any and all criticism of Islam is that it has conflated the racism and bigotry directed at Muslims, which is often coming from conservatives (including political clowns like Donald Trump), with the criticism of Islamic ideas that makes no mention of race.  Because many of these anti-Islamic claims are also coming from the same conservative sphere, I suspect that regressive liberals have unfortunately lumped all anti-Islamic claims into the same category (some form of racism and bigotry against Islam), when those two kinds of claims should be in entirely different categories.  There are criticisms of Islamic ideas that have nothing to do with race, and there are criticisms of Muslims that are clearly racist — and the latter is what liberals and everyone else should continue to fight against.  But the former type of criticism is simply part of a reasoned discussion on the topic, and one that needs to take place in the public sphere.  I have propagated the former type of criticism (based on reason and evidence, not prejudice or racism), and I have seen many other free-thinker and humanist advocates do so as well.

Bottom line — we have to begin to talk more about the dangers of believing in and relying on faith, dogma, revelation, and any other belief system not grounded in reason and evidence.  These epistemological “methods” are not only demonstrably unreliable and fallacious, but they are also being hijacked by various terrorist organizations that have their own aims.  Even if the leaders of a dangerous extremist organization don’t actually believe in the religious ideas that they proclaim as their motivation, they know that if others do, and if others are already willing to die for their faith and to do it for such compelling reasons as eternal paradise, then those leaders can get people to commit heinous acts.  As Voltaire once said, “Those who can make you believe absurdities can also make you commit atrocities.”  These words of wisdom still apply, even if the leaders that (initially) spread those absurdities don’t believe them.  I think that many of the leaders that spread these ideas do believe them, but I’m betting that there are also many that do not.  If people begin to see the dangers of faith and dogma and are instilled with an ultimate appreciation of and priority on reason and evidence, then these radical recruitments will be far less effective, if not rendered entirely ineffective.

Religious ideas don’t get a free pass from criticism just because people hold them to be sacred.  Because, sacred or not, the reality is that this kind of muddled thinking can and has ended many innocent people’s lives throughout human history.  Bad ideas have bad consequences and it doesn’t matter where those bad ideas come from.  Despite the fact that we are living in a post-enlightenment age, not everyone has accepted that paradigm shift yet.  We must keep trying to spread the fruits of the enlightenment for the good of humanity.  It is our moral obligation to do so.

Conscious Realism & The Interface Theory of Perception


A few months ago I was reading an interesting article in The Atlantic about Donald Hoffman’s Interface Theory of Perception.  As a person highly interested in consciousness studies, cognitive science, and the mind-body problem, I found the basic concepts of his theory quite fascinating.  What was most interesting to me was the counter-intuitive connection between evolution and perception that Hoffman has proposed.  Now it is certainly reasonable and intuitive to assume that evolutionary natural selection would favor perceptions that are closer to “the truth”, that is, closer to the objective reality that exists independent of our minds, simply because more accurate perceptions seem more likely to lead to survival than less accurate ones.  As an example, if I were to perceive lions as inert objects like trees, I would be more likely to be naturally selected against and eaten by a lion, compared to someone who perceives lions as mobile predators that could kill them.

While this is intuitive and reasonable to some degree, what Hoffman actually shows, using evolutionary game theory, is that among organisms of comparable complexity, those with perceptions that are closer to reality are never going to be selected for nearly as much as those with perceptions that are tuned to fitness instead.  What’s more, truth in this case will be driven to extinction when it is up against perceptual models that are tuned to fitness.  That is to say, evolution will select for organisms that perceive the world in a way that is less accurate (in terms of the underlying reality) as long as the perception is tuned for survival benefits.  The bottom line is that, given some specific level of complexity, it is more costly to process more information (costing more time and resources), and so if a “heuristic” method for perception can evolve instead, one that “hides” all the complex information underlying reality and instead provides us with a species-specific guide to adaptive behavior, that will always be the preferred choice.

To see this point more clearly, let’s consider an example.  Imagine there’s an animal that regularly eats some kind of insect, such as a beetle, but it needs to eat beetles of a particular size or else it runs a relatively high risk of eating the wrong kind of beetle (and we can assume that the “wrong” kind of beetle would be deadly to eat).  Now imagine two possible types of evolved perception.  The animal could have really accurate perceptions of the various sizes of beetles it encounters, so that it can distinguish many different sizes from one another (and then choose the proper size range to eat).  Or it could evolve less accurate perceptions, such that all beetles that are either too small or too large appear indistinguishable from one another (maybe all the wrong-sized beetles, whether too large or too small, look like indistinguishable red-colored blobs), while all the beetles in the ideal size range for eating appear as green-colored blobs (again, indistinguishable from one another).  So the only discrimination in this latter type of perception is between red and green blobs.

Both types of perception would solve the problem of which beetles to eat or not eat, but the latter type (even if much less accurate) would bestow a fitness advantage over the former, by allowing the animal to process much less information about the environment and not focus on relatively useless information (like specific beetle size).  In this case, with beetle size as the only variable under consideration for survival, evolution would select for the organism that knows less total information about beetle size, as long as it knows what is most important for distinguishing the edible beetles from the poisonous beetles.  Now we can imagine that in some cases the fitness function could align with the true structure of reality, but this is not what we expect to see generically in the world.  At best we may see some kind of overlap between the two, but there doesn’t have to be any, and where there isn’t, truth goes extinct.
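
A toy version of this trade-off (a sketch with invented sizes, payoffs, and costs, not Hoffman’s actual simulations): a “truth” strategy perceives exact beetle sizes and pays an information-processing cost, while a “fitness” strategy sees only the two color-blob categories.

    import random

    EDIBLE = range(4, 7)   # assume beetles sized 4-6 are safe to eat; all others are deadly

    def payoff_truth(size, info_cost=0.5):
        """Perceives the exact size (lots of information) and pays a processing cost."""
        return (1.0 if size in EDIBLE else 0.0) - info_cost

    def payoff_fitness(size):
        """Sees only 'green blob' (edible) vs 'red blob' (everything else): cheap."""
        return 1.0 if size in EDIBLE else 0.0   # the one-bit category does the same job

    random.seed(0)
    beetles = [random.randint(1, 10) for _ in range(100_000)]
    print(sum(payoff_truth(s) for s in beetles) / len(beetles))     # lower average payoff
    print(sum(payoff_fitness(s) for s in beetles) / len(beetles))   # higher: fitness wins

Both strategies make identical eating decisions here, so whenever the extra accuracy carries any processing cost at all, the fitness-tuned perception dominates.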

Perception is Analogous to a Desktop Computer Interface

Hoffman analogizes this concept of a “perception interface” with the desktop interface of a personal computer.  When we see icons of folders on the desktop and drag one of those icons to the trash bin, we shouldn’t take that interface literally, because there isn’t literally a folder being moved to a literal trash bin but rather it is simply an interface that hides most if not all of what is really going on in the background — all those various diodes, resistors and transistors that are manipulated in order to modify stored information that is represented in binary code.

The desktop interface ultimately provides us with an easy and intuitive way of accomplishing these various information processing tasks because trying to do so in the most “truthful” way — by literally manually manipulating every diode, resistor, and transistor to accomplish the same task — would be far more cumbersome and less effective than using the interface.  Therefore the interface, by hiding this truth from us, allows us to “navigate” through that computational world with more fitness.  In this case, having more fitness simply means being able to accomplish information processing goals more easily, with less resources, etc.

Hoffman goes on to say that even though we shouldn’t take the desktop interface literally, obviously we should still take it seriously, because moving that folder to the trash bin can have direct implications on our lives, by potentially destroying months worth of valuable work on a manuscript that is contained in that folder.  Likewise we should take our perceptions seriously, even if we don’t take them literally.  We know that stepping in front of a moving train will likely end our conscious experience even if it is for causal reasons that we have no epistemic access to via our perception, given the species-specific “desktop interface” that evolution has endowed us with.

Relevance to the Mind-body Problem

The crucial point of this analogy is that if our knowledge were confined to the desktop interface of the computer, we’d never be able to ascertain the underlying reality of the “computer”, because all the information about that underlying reality that we don’t need to know is hidden from us.  The same would apply to our perception: it would be epistemically isolated from the underlying objective reality that exists.  I want to add to this point that even though it appears that we have found the underlying guts of our consciousness, i.e. through the findings in neuroscience, it would be mistaken to think that this approach will conclusively answer the mind-body problem, because the interface that we’ve used to discover our brains’ underlying neurobiology is still the “desktop” interface.

So while we may think we’ve found the underlying guts of “the computer”, this is far from certain, given the possibility of and support for this theory.  This may end up being the reason why many philosophers claim there is a “hard problem” of consciousness and one that can’t be solved.  It could be that we simply are stuck in the desktop interface and there’s no way to find out about the underlying reality that gives rise to that interface.  All we can do is maximize our knowledge of the interface itself and that would be our epistemic boundary.

Predictions of the Theory

Now if this was just a fancy idea put forward by Hoffman, that would be interesting in its own right, but the fact that it is supported by evolutionary game theory and genetic algorithm simulations shows that the theory is more than plausible.  Even better, the theory is actually a scientific theory (and not just a hypothesis), because it has made falsifiable predictions as well.  It predicts that “each species has its own interface (with some similarities between phylogenetically related species), almost surely no interface performs reconstructions (read the second link for more details on this), each interface is tailored to guide adaptive behavior in the relevant niche, much of the competition between and within species exploits strengths and limitations of interfaces, and such competition can lead to arms races between interfaces that critically influence their adaptive evolution.”  The theory predicts that interfaces are essential to understanding evolution and the competition between organisms, whereas the reconstruction theory makes such understanding impossible.  Thus, evidence of interfaces should be widespread throughout nature.

In his paper, he mentions the jewel beetle as a case in point.  This beetle has a perceptual category, desirable females, which works well in its niche, and it uses it to choose larger females because they are the best mates.  According to the reconstructionist thesis, the male’s perception of desirable females should incorporate a statistical estimate of the true sizes of the most fertile females, but it doesn’t do this.  Instead, it has a category based on “bigger is better”, and although this bestows high-fitness behavior on the male beetle in its evolutionary niche, if it comes into contact with a “stubbie” beer bottle, it falls into an infinite loop, being drawn to this supernormal stimulus since it is smooth, brown, and extremely large.  We can see that the “bigger is better” perceptual category relies on less information about the true nature of reality and instead takes an “informational shortcut”.  The evidence of supernormal stimuli, which have been found in many species, further supports the theory and counts against the reconstructionist claim that perceptual categories estimate the statistical structure of the world.

More on Conscious Realism (Consciousness is all there is?)

This last link provided here shows the mathematical formalism of Hoffman’s conscious realist theory as proved by Chetan Prakash.  It contains a thorough explanation of the conscious realist theory (which goes above and beyond the interface theory of perception) and it also provides answers to common objections put forward by other scientists and philosophers on this theory.

Transcendental Argument For God’s Existence: A Critique


Theist apologists and theologians have presented many arguments for the existence of God throughout history, including the Ontological Argument, Cosmological Argument, Fine-Tuning Argument, the Argument from Morality, and many others — all of which have been refuted with various counter-arguments.  I’ve written about a few of these arguments in the past (1, 2, 3), but one that I haven’t yet touched on is the Transcendental Argument for God (or simply TAG).  Not long ago I heard the Christian apologist Matt Slick conversing/debating with the renowned atheist Matt Dillahunty on this topic, and I then decided to look deeper into the argument as Slick presents it on his website.  I have found a number of problems with his argument, so I decided to enumerate them in this post.

Slick’s basic argument goes as follows:

  1. The Laws of Logic exist.
    1. Law of Identity: Something (A) is what it is and is not what it is not (i.e. A is A and A is not not-A).
    2. Law of Non-contradiction: A cannot be both A and not-A, or in other words, something cannot be both true and false at the same time.
    3. Law of the Excluded Middle: Something must either be A or not-A without a middle ground, or in other words, something must be either true or false without a middle ground.
  2. The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.
  3. They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.
  4. The Laws of Logic are not the product of human minds because human minds are different — not absolute.
  5. But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.
  6. Furthermore, if there are only two options to account for something, i.e., God and no God, and one of them is negated, then by default the other position is validated.
  7. Therefore, part of the argument is that the atheist position cannot account for the existence of The Laws of Logic from its worldview.
  8. Therefore God exists.
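
For reference, the three laws in premise 1 have standard symbolic renderings in propositional logic:

    Identity:           A = A
    Non-contradiction:  \neg (A \land \neg A)
    Excluded middle:    A \lor \neg A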

Concepts are Dependent on and the Product of Physical Brains

Let’s begin with numbers 2, 3, and 4 from above:

The Laws of Logic are conceptual by nature — are not dependent on space, time, physical properties, or human nature.  They are not the product of the physical universe (space, time, matter) because if the physical universe were to disappear, The Laws of Logic would still be true.  The Laws of Logic are not the product of human minds because human minds are different — not absolute.

Now I’d like to first mention that Matt Dillahunty actually rejected the first part of Slick’s premise here, as Dillahunty explained that while logic (the concept, our application of it, etc.) may in fact be conceptual in nature, the logical absolutes themselves (i.e. the laws of logic) on which logic is based are neither conceptual nor physical.  My understanding of what Dillahunty was getting at is that just as the concept of an apple points to or refers to something real (i.e. a real apple) which is not equivalent to the concept of an apple, so also the concept of the logical absolutes refers to something that is not the same as the concept itself.  However, what it points to, Dillahunty asserted, is something that isn’t physical either.  Therefore, the logical absolutes themselves are neither physical nor conceptual (as a result, Dillahunty later labeled “the essence” of the LOL as transcendent).  When Dillahunty was pressed by Slick to answer the question, “then what caused the LOL to exist?”, Dillahunty responded that nothing caused them (or that we have no reason to believe anything did), because they are transcendent and thus not a product of anything physical or conceptual.

If this is truly the case, then Dillahunty’s point here does undermine the validity of the logical structure of Slick’s argument, because Slick would then be beginning his argument by referencing the content and the truth of the logical absolutes themselves, and then later on switching to the concept of the LOL (i.e. their being conceptual in their nature, etc.).  For the purposes of this post, I’m going to simply accept Slick’s dichotomy that the logical absolutes (i.e. the laws of logic) are in fact either physical or conceptual by nature and then I will attempt to refute the argument anyway.  This way, if “conceptual or physical” is actually a true dichotomy (i.e. if there are no other options), despite the fact that Slick hasn’t proven this to be the case, his argument will be undermined anyway.  If Dillahunty is correct and “conceptual or physical” isn’t a true dichotomy, then even if my refutation here fails, Slick’s argument will still be logically invalid based on the points Dillahunty raised.

I will say however that I don’t think I agree with the point that Dillahunty made that the LOL are neither physical nor conceptual, and for a few reasons (not least of all because I am a physicalist).  My reasons for this will become more clear throughout the rest of this post, but in a nutshell, I hold that concepts are ultimately based on and thus are a subset of the physical, and the LOL would be no exception to this.  Beyond the issue of concepts, I believe that the LOL are physical in their nature for a number of other reasons as well which I’ll get to in a moment.

So why does Slick think that the LOL can’t be dependent on space?  Slick mentions in the expanded form of his argument that:

They do not stop being true dependent on location. If we travel a million light years in a direction, The Laws of Logic are still true.

Sure, the LOL don’t depend on a specific location in space, but that doesn’t mean that they aren’t dependent on space in general.  I would actually argue that concepts are abstractions that are dependent on the brains that create them, based on those brains having recognized properties of space, time, and matter/energy.  That is to say, any concept such as the number 3 or the concept of redness is in fact dependent on a brain having recognized, for example, a quantity of discrete objects (which when generalized leads to the concept of numbers) or the color red in various objects (which when generalized leads to the concept of red or redness).  Since a quantity of discrete objects or a color must be located in some kind of space, even if only as three points on a one-dimensional line (in the case of the number 3) or a two-dimensional red-colored plane (in the case of redness), we can see that these concepts are ultimately dependent on space and matter/energy (of some kind).  Even if we say that concepts such as the color red or the number 3 do not literally exist in actual space and are not made of actual matter, they do have to exist in a mental space as mental objects.  Just as our conception of an apple floating in empty space doesn’t actually lie in space and isn’t made of matter, it nevertheless exists as a mental/perceptual representation of real space and real matter/energy that has been experienced through interaction with the physical universe.

Slick also mentions in the expanded form of his argument that the LOL can’t be dependent on time because:

They do not stop being true dependent on time. If we travel a billion years in the future or past, The Laws of Logic are still true.

Once again, sure, the LOL do not depend on a specific time, but they are dependent on time in general, because minds depend on time in order to have any experience of said concepts at all.  So not only can concepts only be formed by a brain that has created abstractions from physically interacting with space, matter, and energy within time (so far as we know), but the mind/concept-generating brain itself is also made of actual matter/energy, lying in real space, and operating/functioning within time.  So concepts are in fact not only dependent on space, time, and matter/energy (so far as we know), but are also the product of space, time, and matter/energy, since it is only certain configurations of such that produce a brain and the mind that results from it.  Thus, if the LOL are conceptual, then they are ultimately the product of, and dependent on, the physical.

Can Truth Exist Without Brains and a Universe?  Can Identities Exist Without a Universe?  I Don’t Think So…

Since Slick himself claims that The Laws of Logic (LOL) are conceptual by nature, that would mean that they are also dependent on and the product of the physical universe, and more specifically dependent on and the product of the human mind (or natural minds in general, which are produced by a physical brain).  Slick goes on to say that the LOL can’t be dependent on the physical universe (which contains the brains needed to think or produce those concepts) because “…if the physical universe were to disappear, The Laws of Logic would still be true.”  It seems to me that without a physical universe, there wouldn’t be any “somethings” with any identities at all, and so the Law of Identity, which is the root of the other LOL, wouldn’t apply to anything, because there wouldn’t be anything and thus no existing identities.  Therefore, to say that the LOL are true sans a physical universe would be meaningless, because identities themselves wouldn’t exist without a physical universe.  One might argue that abstract identities would still exist (like numbers or properties), but abstractions are products of a mind and thus need a brain to exist (so far as we know).  If one argued that supernatural identities would still exist without a physical universe, this would be nothing more than an ad hoc metaphysical assertion about the existence of the supernatural, which carries a large burden of proof that can’t be (or at least hasn’t been) met.  Beyond that, if at least one of the supernatural identities was claimed to be God, this would also be begging the question.  This leads me to believe that the LOL are in fact a property of the physical universe (and appear to be a necessary one at that).

And if truth is itself just another concept, it too is dependent on minds, and by extension the physical brains that produce those minds (as mentioned earlier).  In fact, the LOL seem to be required for any rational thought at all (hence why they are often referred to as the Laws of Thought), including the determination of any truth value at all.  So our ability to establish the truth value of the LOL (or the truth of anything, for that matter) is contingent on our presupposing the LOL in the first place.  If there were no minds to presuppose the very LOL that are needed to establish their truth value, could one say that they would be true anyway?  Wouldn’t this be analogous to saying that 1 + 1 = 2 would still be true even if numbers and addition (constructs of the mind) didn’t exist?  I’m just not sure that truth can exist in the absence of any minds and any physical universe.  I think that just as physical laws are descriptions of how the universe changes over time, these Laws of Thought are descriptions that underlie what our rational thought is based on, and thus how we arrive at the concept of truth at all.  If rational thought ceases to exist in the absence of a physical universe (since there are no longer any brains/minds), then the descriptions that underlie that rational thought (as well as their truth value) also cease to exist.

Can Two Different Things Have Something Fundamental in Common?

Slick then erroneously claims that the LOL can’t be the product of human minds because human minds are different from one another and thus aren’t absolute.  He apparently doesn’t realize that even though human minds differ in many ways, they also have a great deal fundamentally in common, such as how they process information and how they form concepts about the reality they interact with.  Even though our minds differ from one another in a number of ways, we only have evidence to support the claim that human brains produce concepts and process information in the same general way at the most fundamental neurological level.  For example, the evidence suggests that the concept of the color red is based on the neurological processing of a certain range of wavelengths of electromagnetic radiation that have been absorbed by the eye’s retinal cells at some point in the past.  However, in Slick’s defense, I’ll admit that it could be the case that what I experience as the color red may be what you would call the color blue, and this would suggest that concepts we think we mutually understand are actually understood or experienced differently in a way that we can’t currently verify (since I can’t get into your mind and compare your experience to my own, and vice versa).

Nevertheless, the fact that our minds may experience color differently from one another, or that we may differ slightly in terms of what range of shades/tints of color we’d label as red, does not mean that our brains/minds (or natural minds in general) are not responsible for producing the concept of red, nor does it mean that we don’t all produce that concept in the same general way.  The number 3 is perhaps a better example of a concept that is actually shared by humans in an absolute sense, because it is a concept that isn’t dependent on specific qualia (as the color red is).  The concept of the number 3 has a universal meaning in the human mind since it is derived from the generalization of a quantity of three discrete objects (which is independent of how any three specific objects are experienced in terms of their respective qualia).

Human Brains Have an Absolute Fundamental Neurology Which Encompasses the LOL

So I see no reason to believe that human minds differ at all in their conception of the LOL, especially if this is the foundation for rational thought (and thus for any coherent concept formed by our brains).  In fact, I believe that the evidence within the neurosciences suggests that the way the brain recognizes different patterns, and thus forms different/unique concepts, depends on the brain using a hardware configuration schema that encompasses the logical absolutes.  In a previous post, my contention was that:

Additionally, if the brain’s wiring has evolved in order to see dimensions of difference in the world (unique sensory/perceptual patterns that is, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware or unique use of said hardware to perceive such a pattern and distinguish it from other patterns.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists and all other patterns we perceive have or are given their own unique hardware-based identity (i.e. “not-A” a.k.a. B, C, D, etc.), then the brain would effectively be wired such that pattern “A” = pattern “A” (law of identity), any other pattern which we can call “not-A” does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern even if brand new, which we can also call “not-A” (law of the excluded middle).  So by the brain giving a pattern a physical identity (i.e. a specific type of hardware configuration in our brain that when activated, represents a detection of one specific pattern), our brains effectively produce the logical absolutes by nature of the brain’s innate wiring strategy which it uses to distinguish one pattern from another.  So although it may be true that there can’t be any patterns stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity to distinguish it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.

So I believe that our brain produces and distinguishes these different “object” identities by having a neurological scheme that represents each perceived identity (each object) with a unique set of neurons that function in a unique way and thus have their own unique identity.  The absolute nature of the LOL can then be explained by how the brain naturally encompasses them through its fundamental hardware schema.  In other words, my contention is that our brain uses this wiring schema because it is the only way it can be wired to make any discriminations at all and validly distinguish one identity from another in perception and thought.  This ability to discriminate various aspects of reality would be naturally selected for in evolution, based on the brain accurately modeling properties of the universe (in this case, the existence of different identities, objects, and causal interactions) as it interacts with its environment via our sensory organs.  This would imply that the existence of discrete identities is a property of the physical universe, and the LOL would simply be a description of what identities are.  It would also explain why we see the LOL as absolute and fundamental and why we presuppose them: our brains encompass them in the most fundamental aspect of our neurology, because they describe a fundamental physical property of the universe that our brains model.
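To illustrate the wiring idea (and only as a loose analogy, not as an actual neural model), here is a minimal Python sketch in which each newly learned pattern is assigned its own unique identifier, standing in for a unique hardware configuration.  The names used (PatternStore, identify) are purely illustrative.  Once every pattern has a unique identity, the three logical absolutes fall out of the bookkeeping automatically:

    # Toy sketch (not a neural model): giving each learned pattern a unique
    # "hardware" identity automatically satisfies the three logical absolutes.

    class PatternStore:
        """Maps each distinct perceived pattern to a unique identifier,
        loosely analogous to a unique neural configuration."""
        def __init__(self):
            self._ids = {}

        def identify(self, pattern):
            # A brand-new pattern still gets its own fresh identity ("not-A").
            if pattern not in self._ids:
                self._ids[pattern] = len(self._ids)
            return self._ids[pattern]

    store = PatternStore()
    a = store.identify("apple")
    b = store.identify("banana")

    assert store.identify("apple") == a   # law of identity: A = A
    assert a != b                         # law of non-contradiction: B is not A
    # law of the excluded middle: every pattern, even a new one, is A or not-A
    assert all(store.identify(p) == a or store.identify(p) != a
               for p in ("apple", "banana", "cherry"))

The point of the sketch is only that a system which assigns unique identities to patterns cannot help but instantiate these laws; they are built into the very act of discrimination.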

I believe this is one of the reasons that Dillahunty and others believe that the LOL are transcendent (neither physical nor conceptual): natural brains/minds are neurologically incapable of imagining a world existing without them.  The problem arises because Dillahunty is abstracting a hypothetical non-physical world or mode of existence, without realizing that he is unable to remove every physical property from any abstracted world or imagined mode of existence.  In this case, the physical property he is unable to remove from his hypothetical non-physical world is his own neurological foundation, the very foundation that underlies all concepts (including that of existential identities) and all rational thought.  I may be incorrect about Dillahunty’s position here, but this is what I’ve inferred based on what I’ve heard him say while conversing with Slick about this topic.

Human (Natural) Minds Can’t Account for the LOL, But Disembodied Minds Can?

Slick even goes on to say in point 5 that:

But, since the Laws of Logic are always true everywhere and not dependent on human minds, it must be an absolute transcendent mind that is authoring them.  This mind is called God.

We can see here that he concedes that the LOL are in fact the product of a mind; he only rejects the possibility that it could be a human mind (and, by implication, any kind of natural mind).  Rather, he insists that it must be a transcendent mind of some kind, which he calls God.  The problem with this conclusion is that we have no evidence or argument demonstrating that minds can exist without physical brains existing in physical space within time.  To assume so is simply to beg the question.  Thus, he is smuggling in an ontological/metaphysical assumption of substance dualism as well as that of disembodied minds: not only claiming that there must exist some kind of supernatural substance, but that this mysterious “non-physical” substance also has the ability to constitute a mind, and somehow do so without any dependence on time (even though mental function and thinking is itself a temporal process).  He assumes all of this, of course, without providing any explanation of how this mind could work even in principle without being made of any kind of stuff, without being located in any kind of space, and without existing in any kind of time.  As I’ve mentioned elsewhere concerning the ad hoc concept of disembodied minds:

…the only concept of a mind that makes any sense at all is that which involves the properties of causality, time, change, space, and material, because minds result from particular physical processes involving a very complex configuration of physical materials.  That is, minds appear to be necessarily complex in terms of their physical structure (i.e. brains), and so trying to conceive of a mind that doesn’t have any physical parts at all, let alone a complex arrangement of said parts, is simply absurd (let alone a mind that can function without time, change, space, etc.).  At best, we are left with an ad hoc, unintelligible combination of properties without any underlying machinery or mechanism.

In summary, I believe Slick has made several errors in his reasoning.  The most egregious is his unfounded assumption that natural minds aren’t capable of producing an absolute concept such as the LOL simply because natural minds differ from one another (not realizing that all minds have fundamental commonalities).  Another is his argument’s reliance on the assumption that an ad hoc disembodied mind not only exists (whatever that could possibly mean) but can somehow account for the LOL in a way that natural minds cannot, which is nothing more than an argument from ignorance, a logical fallacy.  He also insists that the Laws of Logic would be true without any physical universe, not realizing that the truth value of the Laws of Logic can only be determined by presupposing the Laws of Logic in the first place, which is circular; this shows that the truth value of the Laws of Logic can’t be used to prove that they are metaphysically transcendent in any way (even if they actually happen to be).  Lastly, without a physical universe of any kind, I don’t see how identities themselves could exist, and identities seem to be required in order for the LOL to be meaningful at all.

Darwin’s Big Idea May Be The Biggest Yet

with 13 comments

Back in 1859, Charles Darwin published his famous theory of evolution by natural selection, whereby inherent variations among the individual members of a population of organisms produce a differential in survival and reproductive success, leading to the natural selection of some subset of organisms within that population and, eventually, to speciation events.  As Darwin explained in his On The Origin of Species:

If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being’s own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.

While Darwin’s big idea completely transformed biology by providing, for the first time in history, an incredibly robust explanation for the origin of the diversity of life on this planet, it has since inspired other theories pertaining to perhaps the three largest mysteries that humans have ever explored: the origin of life itself (not just the diversity of life after it had begun, which was the intended scope of Darwin’s theory), the origin of the universe (most notably, why the universe is the way it is and not some other way), and the origin of consciousness.

Origin of Life

In order to solve the first mystery (the origin of life itself), geologists, biologists, and biochemists are searching for plausible models of abiogenesis.  The general scheme of these models involves chemical reactions (pertaining to geology) that would have begun to incorporate certain kinds of energetically favorable organic chemistries, such that organic, self-replicating molecules eventually resulted.  Where Darwin’s idea of natural selection comes into play with life’s origin is in the origin and evolution of these self-replicating molecules.  First of all, for any molecule at all to build up in concentration requires a set of conditions such that the reaction producing the molecule in question is more favorable than the reverse reaction, where the product transforms back into the initial starting materials.  If merely one chemical reaction (out of a countless number occurring on the early earth) led to a self-replicating product, this would increasingly favor the production of that product, and thus self-replicating molecules themselves would be naturally selected for.  Once one of them was produced, there would have been a cascade effect of exponential growth, at least up to the limit set by the availability of the starting materials and energy sources present.
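To make that cascade concrete, here is a minimal numerical sketch in Python (all parameter values are assumed purely for illustration): a single self-replicator copies itself autocatalytically, so its numbers grow exponentially until the pool of precursor molecules is exhausted:

    # Minimal sketch of autocatalytic growth; all numbers are illustrative.
    substrate = 1_000_000   # assumed pool of precursor molecules
    replicators = 1         # a single self-replicator formed by chance
    rate = 0.5              # assumed fraction that successfully copy per step

    for step in range(60):
        # Each copy consumes one unit of substrate; growth stalls when it is gone.
        copies = min(max(int(replicators * rate), 1), substrate)
        replicators += copies
        substrate -= copies
        if substrate == 0:
            print(f"substrate exhausted at step {step}: {replicators:,} replicators")
            break

Nothing here depends on the particular numbers; any reaction whose product catalyzes its own formation will display the same explosive growth followed by a plateau once the starting materials run out.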

Now if we assume that at least some subset of these self-replicating molecules (if not all of them) had an imperfect fidelity in the copying process (which is highly likely) and/or underwent even a slight change after replication by reacting with other neighboring molecules (also likely), this would provide them with a means of mutation.  Mutations would inevitably lead to some molecules becoming more effective self-replicators than others, and then evolution through natural selection would take off, eventually leading to modern RNA/DNA.  So not only does Darwin’s big idea account for the evolution of the diversity of life on this planet, but the basic underlying principle of natural selection would also account for the origin of self-replicating molecules in the first place, and subsequently the origin of RNA and DNA.
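Here is an equally toy-level sketch of how imperfect copying plus finite resources yields selection (again in Python, with every parameter assumed for illustration): each replicator has a copying rate, copies inherit the parent’s rate with a little random error, and only the fastest 100 molecules survive each generation:

    import random

    # Toy replicator selection: mutation plus limited resources.
    random.seed(1)
    population = [1.0] * 100                 # copying rates; all start identical

    for generation in range(200):
        offspring = []
        for rate in population:
            if random.random() < min(rate / 10, 1.0):       # faster copiers copy more often
                child = rate * random.uniform(0.95, 1.10)   # imperfect fidelity = mutation
                offspring.append(child)
        # Finite resources: only the 100 fastest replicators persist.
        population = sorted(population + offspring, reverse=True)[:100]

    print(f"mean copying rate after selection: {sum(population) / len(population):.2f}")

Run it and the mean copying rate climbs above the starting value of 1.0: the faster variants, produced only by copying errors, have taken over the population without any designer choosing them.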

Origin of the Universe

Another grand idea that is gaining heavy traction in cosmology is inflationary cosmology, which posits that the early universe underwent a period of rapid expansion; due to quantum mechanical fluctuations in the microscopically sized inflationary region, seed universes would have resulted, each with slightly different properties, one of which expanded into the universe that we live in.  Inflationary cosmology is currently heavily supported because it has led to a number of predictions, many of which have already been confirmed by observation (it explains many large-scale features of our universe, such as its homogeneity, isotropy, flatness, and other features).  What I find most interesting about inflationary theory is that it predicts the existence of a multiverse, whereby ours is but one of an extremely large number of universes (predicted to be on the order of 10^500, if not an infinite number), each with slightly different constants and so forth.

Once again, Darwin’s big idea, when applied to inflationary cosmology, would lead to the conclusion that our universe is the way it is because it was naturally selected to be that way.  The fact that its constants are within the very narrow range that allows matter to form at all would make perfect sense, because even if an infinite number of universes exist with different constants, we would only expect to find ourselves in one whose constants are within the range necessary for matter, let alone life, to exist.  So any universe that harbors matter, let alone life, would be naturally selected for over all the other universes that didn’t have the right properties to do so, including, for example, universes whose cosmological constant was too high or too low (such as those that would have instantly collapsed into a Big Crunch or expanded toward heat death far too quickly for any matter or life to have formed), or universes that didn’t have a strong nuclear force capable of holding atomic nuclei together, or any number of other combinations that wouldn’t work.  So any universe that contains intelligent life capable of even asking the question of its origins must necessarily have its properties within the required range (this is often referred to as the anthropic principle).

After our universe formed, the same principle would also apply to each galaxy and each solar system within those galaxies: because variations exist in each galaxy and within each constituent solar system (differential properties analogous to different genes in a gene pool), only those with an acceptable range of conditions are capable of harboring life.  With over 10^22 stars in the observable universe (an unfathomably large number), and billions of years to evolve different conditions within each solar system surrounding those stars, it isn’t surprising that the temperature and other conditions would eventually be acceptable for liquid water and organic chemistries to occur in many of those solar systems.  Even if there were only one life-permitting planet per galaxy (on average), that would add up to over 100 billion life-permitting planets in the observable universe alone (with many orders of magnitude more in the non-observable universe).  So given enough time, and given some mechanism of variation (in this case, differences in star composition and dynamics), natural selection in a sense can also account for the evolution of solar systems that do in fact have life-permitting conditions in a universe such as our own.
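The arithmetic behind that estimate is worth spelling out as a quick back-of-the-envelope check (the figure of roughly 100 billion galaxies in the observable universe is a commonly cited estimate, and the one-planet-per-galaxy average is the assumption made above):

    # Back-of-the-envelope check of the numbers in the paragraph above.
    stars_observable = 10**22        # stars in the observable universe (from the text)
    galaxies_observable = 10**11     # ~100 billion galaxies (commonly cited estimate)
    planets_per_galaxy = 1           # assumed average of life-permitting planets

    life_permitting = galaxies_observable * planets_per_galaxy
    print(f"{life_permitting:.1e} life-permitting planets")           # 1.0e+11, i.e. ~100 billion
    print(f"about 1 for every {stars_observable // life_permitting:.1e} stars")

Even under this deliberately stingy assumption, the sheer number of trials makes life-permitting conditions somewhere all but inevitable.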

Origin of Consciousness

The last significant mystery I’d like to discuss involves the origin of consciousness.  While there are many current theories pertaining to different aspects of consciousness, and while there has been much research in the neurosciences, cognitive sciences, psychology, etc., pertaining to how the brain works and how it correlates with various aspects of the mind and consciousness, the brain sciences (neuroscience in particular) are in their relative infancy, and so there are still many questions that haven’t been answered yet.  One promising theory that has already been shown to account for many aspects of consciousness is Gerald Edelman’s theory of neuronal group selection (NGS), otherwise known as neural Darwinism (ND), which is a large-scale theory of brain function.  As one might expect from the name, the mechanism of natural selection is integral to this theory.  In ND, the basic idea consists of three parts, as described on the Wiki:

  1. Anatomical connectivity in the brain occurs via selective mechanochemical events that take place epigenetically during development.  This creates a diverse primary neurological repertoire by differential reproduction.
  2. Once structural diversity is established anatomically, a second selective process occurs during postnatal behavioral experience through epigenetic modifications in the strength of synaptic connections between neuronal groups.  This creates a diverse secondary repertoire by differential amplification.
  3. Re-entrant signaling between neuronal groups allows for spatiotemporal continuity in response to real-world interactions.  Edelman argues that thalamocortical and corticocortical re-entrant signaling are critical to generating and maintaining conscious states in mammals.

In a nutshell, the basic differentiated structure of the brain that forms in early development is accomplished through cellular proliferation, migration, distribution, and branching processes, which involve selection operating on random differences in the adhesion molecules that bind one neuronal cell to another.  These crude selection processes result in a rough initial configuration that is for the most part fixed.  However, because there is a diverse set of different hierarchical arrangements of neurons in various neuronal groups, there are bound to be functionally equivalent groups of neurons that are not equivalent in structure but are all capable of responding to the same types of sensory input.  Because some of these groups should in theory be better than others at responding to a particular type of sensory stimulus, this creates a form of neuronal/synaptic competition in the brain, whereby the groups of neurons that happen to have the best synaptic efficiency for the stimuli in question are naturally selected over the others.  This in turn increases the probability that the same network will respond to similar or identical signals in the future.  Each time this occurs, synaptic strengths increase in the most efficient networks for each particular type of stimulus, which would account for a relatively quick level of neural plasticity in the brain.
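As a crude illustration of this competition (a toy sketch in Python, not Edelman’s actual model; all parameters are assumed), imagine five functionally equivalent neuronal groups with slightly different starting efficiencies responding to repeated presentations of one stimulus, with the winner’s synapses strengthened after each round:

    import random

    # Toy neuronal-group competition: the winning group is strengthened each round.
    random.seed(0)
    efficiencies = [random.uniform(0.4, 0.6) for _ in range(5)]  # five rival groups

    for presentation in range(50):                 # repeated exposure to one stimulus
        responses = [e * random.uniform(0.9, 1.1) for e in efficiencies]  # noisy responses
        winner = responses.index(max(responses))
        efficiencies[winner] *= 1.05               # strengthen the winning group's synapses

    print([round(e, 2) for e in efficiencies])     # one group now dominates

The rich-get-richer dynamic is the point: small initial differences in synaptic efficiency are amplified by use, so one group comes to “own” the stimulus without any explicit instruction, which is selection rather than programming.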

The last aspect of the theory involves what Edelman called re-entrant signaling, whereby a sampling of the stimuli from functionally different groups of neurons occurring at the same time leads to a form of self-organizing intelligence.  This would provide a means of explaining how we experience spatiotemporal consistency in our sensory experience.  Basically, we have functionally different parts of the brain, such as maps in the visual centers that pertain to color versus others that pertain to orientation or shape, and re-entrant signaling would effectively amalgamate these (previously segregated) regions so that they function in parallel and correlate with one another, producing an amalgamation of the two types of neural maps.  Once this re-entrant signaling is accomplished between higher-order or higher-complexity maps in the brain, such as those pertaining to value-dependent memory storage centers, language centers, and perhaps back to various sensory cortical regions, it would create an even richer level of synchronization, possibly leading to consciousness (according to the theory).  In all aspects of the theory, the natural selection of differentiated neuronal structures, of synaptic connections and strengths, and eventually of larger re-entrant connections would be responsible for creating the parallel and correlated processes in the brain believed to be required for consciousness.  There’s been an increasing amount of support for this theory, and more evidence continues to accumulate.  In any case, it is a brilliant idea, and one with a lot of promise in potentially explaining one of the most fundamental aspects of our existence.

Darwin’s Big Idea May Be the Biggest Yet

In my opinion, Darwin’s theory of evolution through natural selection was perhaps the most profound theory ever discovered.  I’d even say that it beats Einstein’s theory of Relativity because of its massive explanatory scope and carryover to other disciplines, such as cosmology, neuroscience, and even immunology (see Edelman’s Nobel-winning work on the immune system, where he showed that the immune system also works through a selection process, as opposed to some type of re-programming/learning).  Based on the basic idea of natural selection, we have been able to provide a number of robust explanations pertaining to why the universe is likely to be the way it is, how life likely began, how it evolved afterward, and possibly how life eventually evolved brains capable of being conscious.  It is truly one of the most fascinating principles I’ve ever learned about, and I’m honestly awestruck by its beauty, simplicity, and explanatory power.