CGI, Movies and Truth…

After watching Rogue One: A Star Wars Story, which I liked, though not nearly as much as the original trilogy (Episodes IV, V, and VI), I got to thinking about something I hadn’t thought about since the most recent presidential election.  As I watched Grand Moff Tarkin and Princess Leia, both characters made possible in large part by CGI (with the help of actors Guy Henry and Ingvild Deila), I realized that, although it is still a long way off, it is inevitable (barring a nuclear world war or some other catastrophe that kills us all or sets us back to a pre-industrial age) that the pace of this technology will eventually produce CGI that is completely indistinguishable from reality.

This means that eventually, the “fake news” issue that many have been making a lot of noise about as of late will take a new and ugly turn for the worse.  Video and graphics technology is advancing rapidly enough to exacerbate this problem, and similar concerns are arising from voice-editing software.  Given just a few seconds of sample audio from a person of interest, various programs are getting better and better at synthesizing that person’s speech in order to mimic them, putting whatever words into “their” mouth that one so desires.

The irony here is that even as we become more globally interconnected and technologically advanced, and accumulate ever more knowledge as a species, each individual will eventually become less and less able to know what is true in all the places they are electronically connected to.  One reason for this, as the opening reference to Rogue One suggests, is that it will become increasingly difficult to judge the veracity of videos that go viral on the internet and/or through news outlets.  We can imagine a video (or a series of videos) released on the news and throughout the internet containing shocking events involving real world leaders or other famous people and places, events that could start a civil or world war, alter one’s vote, or otherwise change history, when in fact those events were entirely manufactured by some Machiavellian warmonger or power-seeking elite.

Pragmatically speaking, we must still live our lives trusting what we see in proportion to the evidence we have, believing ordinary claims with a higher degree of confidence than extraordinary ones.  We will still need to hold to the general rule that extraordinary claims require extraordinary evidence in order to meet their burden of proof.  But once CGI is able to pass some kind of graphics Turing Test, it will become more difficult to trust certain forms of evidence (including in a court of law), so we’ll have to take that into consideration: actions whose consequences would be catastrophic if the underlying assumptions or information turned out to be based on false evidence should require a correspondingly higher burden of proof.

This is by no means an endorsement of conspiracy theories generally, nor of any other anti-intellectual or dogmatic nonsense.  We don’t want people to start doubting everything they see, nor to start doubting everything they don’t WANT to see (which would be a proverbial buffet for our cognitive biases and for the conspiracy theorists who regularly exploit these epistemological flaws), but we still need to take this dynamic technological factor into account in order to maintain a worldview based on proper Bayesian reasoning.
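To make the Bayesian point concrete, here is a toy calculation (all numbers are hypothetical, chosen purely for illustration): as forgery technology improves, the probability that a shocking video would exist even if its content were false goes up, and the confidence the video should earn drops accordingly.

```python
def posterior(prior, p_video_given_true, p_video_given_false):
    """Bayes' theorem: P(event | video), given how likely such a
    video is to exist whether or not the event actually happened."""
    evidence = prior * p_video_given_true + (1 - prior) * p_video_given_false
    return prior * p_video_given_true / evidence

prior = 0.01  # an extraordinary claim starts with a low prior

# Today: convincing fakes are hard to make, so a fake video is rare.
today = posterior(prior, 0.9, 0.001)

# After CGI passes a "graphics Turing Test": fakes are cheap and common.
later = posterior(prior, 0.9, 0.5)

print(round(today, 3))   # ~0.901: the video is strong evidence
print(round(later, 4))   # ~0.0179: the same video barely moves the prior
```

The same footage, under different assumptions about how easy it is to fabricate, warrants radically different confidence, which is exactly why the burden of proof must scale with the stakes.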

On the brighter side, we are going to enjoy much of what these new CGI capabilities will bring, because movies and visual entertainment generally are going to be revolutionized in many ways worth celebrating, including our use of virtual reality in the many forms that we do, and will continue to, consciously and rationally desire.  We just need to pay attention and exercise careful moral deliberation as we develop these technologies.  Our brains simply didn’t evolve to easily avoid being duped by artificial realities like the ones we’re developing (we already get duped far too often within our actual reality), so we need to engineer our path forward in a way that better safeguards us from our own cognitive biases, so that we can maximize our well-being once this genie is out of the bottle.


Co-evolution of Humans & Artificial Intelligence

In my last post, I wrote a little bit about the concept of personal identity in terms of what some philosophers have emphasized, along with my own take on it.  I wrote that post in response to an interesting blog post written by James DiGiovanna over at A Philosopher’s Take.  James has written another post related to the possible consequences of integrating artificial intelligence into our societal framework, but rather than discussing personal identity as it relates to artificial intelligence, he discussed how advancements in machine learning and related fields are leading to the future prospect of effective companion AI, or what he referred to as programmable friends.  The main point he raised in that post was that programmable friends would likely have a very different relationship dynamic with us compared with our traditional (human) friends.  James also spoke about companion AI in terms of their being laborers as well as friends, but for the purposes of this post I won’t discuss the laborer aspects of future companion AI (even if the labor aspect is what drives us to make companion AI in the first place).  I’ll be limiting my comments here to the friendship and social dynamic aspects.  So what aspects of programmable AI should we start thinking about?

Well for one, we don’t currently have the ability to simply reprogram a friend to be exactly as we want them to be, in order to avoid conflicts entirely, to share every interest we have, and so on; rather, there is a give-and-take relationship dynamic that we’re used to dealing with.  We learn new ways of behaving and looking at the world, and even new ways of looking at ourselves, when we have friendships with people that differ from us in certain ways.  Much of the expansion and beneficial evolution of our perspectives is the result of conflicts that arise between ourselves and our friends, where different viewpoints clash, often forcing a person to reevaluate their own position based on the value they place on their friends’ viewpoints.  If we could simply reprogram our friends, as may be the case with some future AI companions, what would this do to our moral, psychological, and intellectual growth?  There would be some positive effects, I’m sure (less conflict in some cases and thus an increase in short-term happiness), but we’d be missing out on a host of interpersonal benefits that we gain from the types of friendships we’re used to having (and thus we’d likely have less overall happiness as a result).

We can see where evolution ties into all this: we have evolved as a social species to interact with others that are more or less like us, so when we envision these possible future AI friendships, it should become obvious why certain problems would be inevitable, largely because of the incompatibility with our evolved social dynamic.  To be sure, some of these problems could be mitigated by accounting for them in the initial design of the companion AI.  In general, this could be done by making the AI more human-like in the first place.  Such a design could be advertised as a beneficial AI “social software package,” so that people looking for companion AI would be inclined to choose this version even if they had the option of an entirely reprogrammable one.

Some features of a “social software package” could include a limit on the ways the AI could be reprogrammed, such that only very serious conflicts could be avoided through reprogramming, but not all conflicts.  The AI could be designed to weight certain opinions, just as we do, and to be more assertive with regard to certain propositions.  Once the AI has learned its human counterpart’s personality, values, opinions, and so on, it could also be programmed to intentionally challenge that human by offering different points of view and by using the Socratic method (at least from time to time).  If people realized that they could gain wisdom, knowledge, tolerance, and various social aptitudes from their companion AI, I would think that would be a marked selling point.
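As a rough illustration of what such constraints might look like in software, here is a minimal sketch (the class, threshold, and method names are all hypothetical, not any real product): reprogramming is permitted only when a conflict crosses a seriousness threshold, so minor frictions, and the interpersonal growth they foster, are preserved.

```python
from dataclasses import dataclass, field

@dataclass
class CompanionAI:
    """Toy sketch of a 'social software package' (illustrative only):
    reprogramming is allowed, but only for conflicts that exceed a
    seriousness threshold, and the AI keeps weighted opinions."""
    conflict_threshold: float = 0.8               # only serious conflicts are erasable
    opinions: dict = field(default_factory=dict)  # topic -> (stance, weight)

    def hold_opinion(self, topic, stance, weight):
        self.opinions[topic] = (stance, weight)

    def reprogram(self, topic, seriousness):
        """Remove a point of disagreement only if it is serious enough."""
        if seriousness < self.conflict_threshold:
            return False          # minor frictions stay, preserving growth
        self.opinions.pop(topic, None)
        return True

ai = CompanionAI()
ai.hold_opinion("politics", "disagrees", weight=0.6)
print(ai.reprogram("politics", seriousness=0.3))  # False: conflict preserved
print(ai.reprogram("politics", seriousness=0.9))  # True: serious conflict removed
```

The design choice mirrors the argument above: the value of a friendship comes partly from disagreements one cannot simply delete.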

Another factor that I think will likely mitigate the possible clash between the social dynamics of programmable companion AI and of humans is the fact that humans are also likely to become more and more integrated with AI technology generally.  That is, as we continue to make advancements in AI technology, we are also likely to integrate much of that technology into ourselves, making humans more or less into cyborgs, i.e. cybernetic organisms.  Looking at the path we’re on already, with all the smartphones, apps, and other gadgets and computer systems that have become extensions of ourselves, we can see that the next obvious step (which I’ve mentioned elsewhere, here and here) is to remove the external peripherals so that they are directly accessible via our consciousness, with no need to interface with external hardware.  If we can access “the cloud” with our minds (say, via Bluetooth or the like), then the apps and all the fancy features can become part of our minds, adding to the ways we will be able to think, providing an internet’s worth of knowledge at our cognitive disposal, and so on.  I could see this technology eventually allowing us to change our senses and perceptions, including the ability to add virtual objects amalgamated with the rest of the external reality we perceive (such as virtual friends that we see and interact with, who aren’t physically present outside our minds even though they appear to be).

So if we start to integrate these kinds of technologies into ourselves as we are also creating companion AI, then we may end up with the ability to reprogram ourselves alongside those programmable companion AI.  In effect, our own qualitatively human social dynamic may start to change markedly, becoming more and more compatible with that of the future AI.  The way I think this will most likely play out is that we will try to make AI more like us as we try to make ourselves more like AI, co-evolving with one another, sharing advantages with one another, and eventually becoming indistinguishable from one another.  Along this journey, however, we will also become very different from the way we are now, and after enough time passes, we’ll likely change so much that we’d be unrecognizable to people living today.  My hope is that as we use AI to improve our intelligence and increase our knowledge of the world generally, we will also continue to improve our knowledge of what makes us happiest (as social creatures or otherwise) and thus how to live the best and most morally fruitful lives we can.  This will include improving our knowledge of social dynamics and the ways we can maximize all the interpersonal benefits therein.  Artificial intelligence may help us accomplish this, however paradoxical or counter-intuitive that may seem to us now.

Neurological Configuration & the Prospects of an Innate Ontology

After a brief discussion on another blog pertaining to whether or not humans possess some kind of an innate ontology or other forms of what I would call innate knowledge, I decided to expand on my reply to that blog post.

While I agree that at least most of our knowledge is acquired through learning, specifically through the acquisition and use of memorized patterns of perception (as this is generally how I would define knowledge), I also believe that there are at least some innate forms of knowledge, including some that likely result from certain aspects of our brain’s innate neurological configuration and implementation strategy.  This proposed form of innate knowledge would seem to provide a foundation for the bulk of our knowledge that is later acquired through learning.  This foundation would perhaps be best described as a fundamental scaffold of our ontology, an innate core that our continually developing ontology is built upon.

My basic contention is that the hierarchical configuration of neuronal connections in our brains is highly analogous to the hierarchical relationships utilized to produce our conceptualization of reality.  In order for us to make sense of the world, our brains seem to fracture reality into many discrete elements, properties, concepts, propositions, etc., which are all connected to each other through various causal relationships or what some might call semantic hierarchies.  So it seems plausible, if not likely, that the brain accomplishes this fundamental aspect of our ontology by utilizing an innate hardware schema that involves neurological branching.

As the evidence in the neurosciences suggests, our acquisition of knowledge through learning what those discrete elements, properties, concepts, propositions, etc., are involves synaptogenesis followed by pruning, modifying, and reshaping a hierarchical neurological configuration, ending with a more specific hierarchical arrangement that more accurately correlates with the reality we are interacting with and learning about through our sensory organs.  Since the specific arrangement that eventually forms couldn’t have been entirely coded for in our DNA (due to its extremely high level of complexity and information density), it ultimately had to be fine-tuned to this level of complexity after its initial pre-sensory configuration developed.  Nevertheless, the DNA sequences that were naturally selected to produce the highly capable brains of human beings (as opposed to the DNA that guides the formation of a much less intelligent animal’s brain) clearly encode more effective hardware implementation strategies than those of our evolutionary ancestors.  These naturally selected neurological strategies seem to control what particular types of causal patterns the brain is theoretically capable of recognizing (including some upper limit of complexity), and they also seem to control how the brain stores and organizes these patterns for later use.  So overall, my contention is that these naturally selected strategies are themselves a type of knowledge, because they seem to provide the very foundation for our initial ontology.

Based on my understanding, after many of the initial activity-independent mechanisms of neural development (such as cellular differentiation, cellular migration, axon guidance, and some amount of synapse formation) have occurred in a region of the developing brain, the activity-dependent mechanisms (such as neural activity caused by the sensory organs in the process of learning) begin to modify those synapses and axons into a new hierarchical arrangement.  It is especially worth noting that even though much of the synapse formation during neural development is mediated by activity-dependent mechanisms, such as the aforementioned neural activity produced by the sensory organs during perceptual development and learning, there is also spontaneous neural activity forming many of these synapses even before any sensory input is present, thus contributing to the innate neurological configuration (i.e. that which is formed before any sensation or learning has occurred).

Thus, the subsequent hierarchy formed through neural/sensory stimulation via learning appears to begin from a parent hierarchical starting point based on neural developmental processes that are coded for in our DNA, as well as synaptogenic mechanisms involving spontaneous pre-sensory neural activity.  So our brain’s innate (i.e. pre-sensory) configuration likely contributes to our making sense of the world by providing a starting point that reflects the fundamental hierarchical nature of reality, upon which all subsequent knowledge is built.  In other words, it seems that if our mature conceptualization of reality involves a very specific type of hierarchy, then an innate/pre-sensory hierarchical schema of neurons would be a plausible, if not expected, physical foundation for it (see Edelman’s Theory of Neuronal Group Selection within this link for more empirical support of these points).

Additionally, if the brain’s wiring has evolved in order to see dimensions of difference in the world (unique sensory/perceptual patterns that is, such as quantity, colors, sounds, tastes, smells, etc.), then it would make sense that the brain can give any particular pattern an identity by having a unique schema of hardware or unique use of said hardware to perceive such a pattern and distinguish it from other patterns.  After the brain does this, the patterns are then arguably organized by the logical absolutes.  For example, if the hardware scheme or process used to detect a particular pattern “A” exists and all other patterns we perceive have or are given their own unique hardware-based identity (i.e. “not-A” a.k.a. B, C, D, etc.), then the brain would effectively be wired such that pattern “A” = pattern “A” (law of identity), any other pattern which we can call “not-A” does not equal pattern “A” (law of non-contradiction), and any pattern must either be “A” or some other pattern even if brand new, which we can also call “not-A” (law of the excluded middle).  So by the brain giving a pattern a physical identity (i.e. a specific type of hardware configuration in our brain that when activated, represents a detection of one specific pattern), our brains effectively produce the logical absolutes by nature of the brain’s innate wiring strategy which it uses to distinguish one pattern from another.  So although it may be true that there can’t be any patterns stored in the brain until after learning begins (through sensory experience), the fact that the DNA-mediated brain wiring strategy inherently involves eventually giving a particular learned pattern a unique neurological hardware identity to distinguish it from other stored patterns, suggests that the logical absolutes themselves are an innate and implicit property of how the brain stores recognized patterns.
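This idea can be illustrated with a toy model (purely a sketch of the argument, using Python object identity as a stand-in for a unique neural hardware configuration): once each stored pattern has its own distinct identity, the three logical absolutes fall out automatically.

```python
class Pattern:
    """Stand-in for a unique neural 'hardware identity' assigned to one
    learned pattern; distinctness of objects models distinctness of the
    underlying wiring that detects each pattern."""
    def __init__(self, name):
        self.name = name

learned = []  # the brain's store of recognized patterns

def store(name):
    """Learning a pattern assigns it a fresh, unique identity."""
    p = Pattern(name)
    learned.append(p)
    return p

a = store("A")
b = store("B")

# Law of identity: a pattern is itself.
assert a is a
# Law of non-contradiction: no pattern is both A and not-A.
assert not (a is a and a is b)
# Law of the excluded middle: every pattern, even a brand-new one,
# is either A or not-A.
c = store("C")
assert all(p is a or p is not a for p in learned)

print("logical absolutes hold for", len(learned), "patterns")
```

The point of the sketch is that nothing extra had to be programmed in: giving each pattern a unique identity is sufficient for the logical absolutes to hold implicitly, which is the argument being made about the brain's wiring strategy.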

In short, if it is true that any and all forms of reasoning as well as the ability to accumulate knowledge simply requires logic and the recognition of causal patterns, and if the brain’s innate neurological configuration schema provides the starting foundation for both, then it would seem reasonable to conclude that the brain has at least some types of innate knowledge.

Artificial Intelligence: A New Perspective on a Perceived Threat

There is a growing fear in many people of the future capabilities of artificial intelligence (AI), especially as the intelligence of these computing systems begins to approach that of human beings.  Since it is likely that AI will eventually surpass the intelligence of humans, some wonder if these advancements will be the beginning of the end of us.  Stephen Hawking, the eminent British physicist, was recently quoted by the BBC as saying “The development of full artificial intelligence could spell the end of the human race.”  BBC technology correspondent Rory Cellan-Jones said in a recent article “Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.”  Hawking then said “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Hawking isn’t alone with this fear, and clearly this fear isn’t ill-founded.  It doesn’t take a rocket scientist to realize that human intelligence has allowed us to overcome just about any environmental barrier we’ve come across, driving us to the top of the food chain.  We’ve all seen the benefits of our high intelligence as a species, but we’ve also seen what can happen due to that intelligence being imperfect, having it operate on incomplete or fallacious information, and ultimately lacking an adequate capability of accurately determining the long-term consequences of our actions.  Because we have such a remarkable ability to manipulate our environment, that manipulation can be extremely beneficial or extremely harmful as we’ve seen with the numerous species we’ve threatened on this planet (some having gone extinct).  We’ve even threatened many fellow human beings in the process, whether intentionally or not.  Our intelligence, combined with some elements of short-sightedness, selfishness, and aggression, has led to some pretty abhorrent products throughout human history — anything from the mass enslavement of others spanning back thousands of years to modern forms of extermination weaponry (e.g. bio-warfare and nuclear bombs).  If AI reaches and eventually surpasses our level of intelligence, it is reasonable to consider the possibility that we may find ourselves on a lower rung of the food chain (so to speak), potentially becoming enslaved or exterminated by this advanced intelligence.

AI: Friend or Foe?

So what exactly prompted Stephen Hawking to make these claims?  As the BBC article mentions, “His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI…The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.  Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.”

Reading this article suggests another possibility or perspective that I don’t think many people are considering with regard to AI technology.  What if AI simply replaces us gradually, by actually becoming the new “us”?  That is, as we further progress in cyborg (i.e. cybernetic organism) technologies, using advancements similar to Stephen Hawking’s communication upgrade, we are ultimately becoming amalgams of biological and synthetic machines anyway.  Even the technology we currently operate through an external peripheral interface (like smartphones and other computers) will likely become integrated into our bodies internally.  Google Glass, voice recognition, and other technologies like those used by Hawking are certainly taking us in that direction.  It’s not difficult to imagine one day being able to think of a particular question or keyword and having an integrated Bluetooth implant in our brain recognize the mental/physiological command cue and retrieve the desired information wirelessly from an online cloud or internet database of some form.  Going further still, we will likely one day be able to take sensory information that enters the neuronal network of our brain and, once again, send it wirelessly to supercomputers stored elsewhere that can process the information with billions of pattern recognition modules.  The 300 million or so pattern recognition modules currently in our brain’s neocortex would be dwarfed by this new peripheral-free, wirelessly accessible technology.

For those who aren’t very familiar with the function of the brain’s neocortex, we use its 300 million or so pattern recognition modules to notice patterns in the environment around us (and meta-patterns of neuronal activity within the brain), allowing us to recognize and memorize sensory data, and to think.  Ultimately, we use this pattern recognition to accomplish goals, solve problems, and gain knowledge from past experience.  In short, these pattern recognition modules are the primary physiological means of our intelligence.  Thus, increasing the number of pattern recognition modules (as well as the number of hierarchies between different sets of them) will only increase our intelligence.  Whether we integrate computer chips in our brains to do some or all of this processing locally, or use a wireless connection to an external supercomputer farm, we will likely continue to integrate our biology with synthetic analogs to increase our capabilities.
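The hierarchical arrangement of pattern recognition modules described above (the 300 million figure echoes Ray Kurzweil’s estimate in How to Create a Mind) can be sketched as a toy two-level recognizer; the design and the “apple” example here are purely illustrative, not a model of actual cortex.

```python
# Toy hierarchy of pattern recognizers: low-level modules fire on
# simple patterns, and a higher-level module recognizes a composite
# pattern from those firings.

def make_recognizer(pattern):
    """A 'module' that fires (returns its pattern) when its pattern
    appears anywhere in the input, and stays silent otherwise."""
    return lambda inputs: pattern if pattern in inputs else None

# Level 1: modules that recognize individual letters.
letter_modules = [make_recognizer(ch) for ch in "apple"]

def level1(text):
    """Collect the firings of all level-1 modules on raw input."""
    return [r(text) for r in letter_modules if r(text)]

# Level 2: a higher module recognizes the word from letter firings.
def level2(firings):
    return "apple" if set("apple") <= set(firings) else None

print(level2(level1("a p p l e")))  # apple
print(level2(level1("zzz")))        # None
```

Adding more modules and more levels of hierarchy is exactly the kind of expansion the paragraph above argues would increase intelligence, whether implemented in neurons or silicon.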

When we realize that a higher intelligence allows us to better predict the consequences of our actions, we can see that our increasing integration with AI will likely have incredible survival benefits.  This integration will also catalyze further technologies that could never have been accomplished with biological brains alone, because we simply aren’t naturally intelligent enough to think beyond a certain level of complexity.  As Hawking said regarding AI that could surpass our intelligence, “It would take off on its own, and re-design itself at an ever increasing rate.  Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”  Yes, but if that AI becomes integrated into us, then really it is humans who are finding a way to circumvent slow biological evolution with a non-biological substrate that supersedes it.

At this time I think it is relevant to mention something else I’ve written about previously: the advancements being made in genetic engineering and how they are taking us into our last and grandest evolutionary leap, a “conscious evolution,” allowing us to circumvent our own slow biological evolution through intentionally engineered design.  As we gain more knowledge in the field of genetic engineering (combined with the increasing simulation and computing power afforded by AI), we will likely be able to catalyze our own biological evolution such that we evolve quickly as we increase our cyborg integrations with AI.  So we will likely see genetic engineering capabilities developing in close parallel with AI advancements, with each field substantially contributing to the other and ultimately leading to our transhumanism.

Final Thoughts

It seems clear that advancements in AI are providing us with more tools to accomplish ever more complex goals as a species.  As we continue to integrate AI into ourselves, what we now call “human” is simply going to change as we change.  This would happen regardless, as human biological evolution continues its course into another speciation event, similar to the one that led to humans in the first place.  In fact, if we wanted to maintain the way we are right now as a species, biologically speaking, it would actually require us to use genetic engineering to do so, because genetic differentiation mechanisms (e.g. imperfect DNA replication, mutations, etc.) are inherent in our biology.  Thus, those who argue against certain technologies out of a desire to maintain humanity and human attributes must realize that the very technologies they despise are in fact their only hope for doing so.  More importantly, the goal of maintaining what humans currently are goes against our natural evolution, and we should embrace change, even if we embrace it with caution.

If AI continues to become further integrated into ourselves, forming a Cyborg amalgam of some kind, as it advances to a certain point we may choose one day to entirely eliminate the biological substrate of that amalgam, if it is both possible and advantageous to do so.  Even if we maintain some of our biology, and merely hybridize with AI, then Hawking was right to point out that “The development of full artificial intelligence could spell the end of the human race.”  Although, rather than a doomsday scenario like we saw in the movie The Terminator, with humans and machines at war with one another, the end of the human race may simply mean that we will end up changing into a different species, just as we’ve done throughout our evolutionary history.  Only this time, it will be a transition from a purely biological evolution to a cybernetic hybrid variation.  Furthermore, if it happens, it will be a transition that will likely increase our intelligence (and many other capabilities) to unfathomable levels, giving us an unprecedented ability to act based on more knowledge of the consequences of our actions as we move forward.  We should be cautious indeed, but we should also embrace our ongoing evolution and eventual transhumanism.