Neuroscience Arms Race & Our Changing World View

At least since the time of Hippocrates, people have recognized that the brain is the physical correlate of consciousness and thought.  Since then, psychology, neuroscience, and several related disciplines have emerged.  Numerous advancements have been made within neuroscience during the last decade or so, and in that same time frame there has also been increased interest in the social, religious, philosophical, and moral implications that follow from such a far-reaching field.  Certainly the medical knowledge we’ve obtained from the neurosciences has been the primary benefit of such research efforts: we’ve learned quite a bit more about how the brain works and how it is structured, and ongoing research into neuropathology has led to improvements in diagnosing and treating various mental illnesses.  However, it is the other side of neuroscience that I’d like to focus on in this post: the paradigm shift in how we are starting to see the world around us (including ourselves), and how this is affecting our goals as well as how we achieve them.

Paradigm Shift of Our World View

Aside from the medical knowledge we are obtaining from the neurosciences, we are also gaining new perspectives on what exactly the “mind” is.  We’ve come a long way in demonstrating that “mental” or “mind” states are correlated with physical brain states, and there is an ever-growing body of evidence suggesting that these mind states are in fact caused by those brain states.  It should come as no surprise, then, that all of our thoughts and behaviors are likewise caused by physical brain states.  It is because of this scientific realization that society is currently undergoing an important paradigm shift in its world view.

If all of our thoughts and behaviors are mediated by our physical brain states, then many everyday concepts such as thinking, learning, personality, and decision making can take on entirely new meanings.  To illustrate this point, I’d like to briefly mention the well-known “nature vs. nurture” debate.  The current consensus among scientists is that people (i.e. their thoughts and behavior) are ultimately products of both their genes and their environment.

Genes & Environment

From a neuroscientific perspective, the genetic component is accounted for by noting that genes have been shown to play a very large role in directing the initial brain-wiring schema of an individual during embryological development and throughout gestation.  During this time, the brain is developing very basic instinctual behavioral “programs” which are physically constituted by vastly complex neural networks, and the body’s developing sensory organs and systems are connected to particular groups of these networks.  These complex neural networks, which have presumably been naturally selected to benefit the survival of the individual, continue to be constructed after gestation and throughout the individual’s ontogenetic development (albeit to lesser degrees over time).

As for the environmental component, this can be further split into two parts: the internal and the external environment.  The internal environment within the brain itself, including various chemical concentration gradients partly mediated by random Brownian motion, constrains gene expression and works alongside the genetic instructions to help guide neuronal growth, migration, and connectivity.  The external environment, consisting of various sensory stimuli, seems to modify this neural construction by providing inputs which may cause the constituent neurons within these networks to change their signal strength, change their action-potential thresholds, and/or modify their connections with particular neurons (among other possible changes).
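
To make this kind of input-driven modification a little more concrete, here is a minimal toy sketch in Python of a Hebbian-style update acting on a handful of hypothetical connection weights and a firing threshold.  The circuit size, learning rate, and threshold adjustment are all illustrative assumptions rather than a model of any actual biological network; the point is simply that repeated exposure to a stimulus can physically reshape connection strengths and excitability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "circuit": 4 input neurons feeding a single output neuron (hypothetical sizes).
weights = rng.uniform(0.2, 0.4, size=4)  # hypothetical connection strengths
threshold = 0.3                          # hypothetical firing threshold
learning_rate = 0.05

def present_stimulus(stimulus):
    """One exposure to an external stimulus (a 4-element array of 0s and 1s)."""
    global threshold
    drive = weights @ stimulus
    fired = drive > threshold
    if fired:
        # Hebbian-style strengthening: connections from co-active inputs grow.
        weights[:] += learning_rate * stimulus
        # Crude homeostatic tweak: the neuron becomes slightly harder to fire.
        threshold += 0.01
    return fired

# Repeated exposure to the same sensory pattern gradually rewires the toy circuit.
pattern = np.array([1, 1, 0, 0])
for _ in range(50):
    present_stimulus(pattern)

print("final weights:", weights)
print("final threshold:", threshold)
```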

Causal Constraints

This combination of genetic instructions and environmental interaction produces a conscious, thinking, and behaving being through a large number of ongoing and highly complex hardware changes.  It isn’t difficult to imagine why these insights from neuroscience might modify our conventional views of concepts such as thinking, learning, personality, and decision making.  Prior to these developments over the last few decades, the brain was largely seen as a sort of “black box”, its internal milieu and functional properties remaining mysterious and inaccessible.  For millennia, up to and including that time, many people assumed that our thoughts and behaviors were self-caused, or causa sui.  That is, people believed that they themselves (i.e. some causally free “consciousness”, “soul”, etc.) caused their own thoughts and behavior, as opposed to those thoughts and behaviors being ultimately caused by physical processes (e.g. neuronal activity, chemical reactions, etc.).

Neuroscience (as well as biochemistry and its underlying physics) has shed a lot of light on this long-held assumption and, as it stands, the evidence has shown it to be false.  The brain is ultimately governed by the laws of physics: no chemical reaction or neural event that physically produces our thoughts, choices, and behaviors has ever been shown to be causally free of those physical constraints.  I will mention that quantum uncertainty or quantum “randomness” (if ontologically random) does provide some possible freedom from strict physical determinism.  However, these findings from quantum physics do not provide any support for self-caused thoughts or behaviors.  Rather, they merely show that those physically constrained thoughts and behaviors may never be completely predictable by physical laws, no matter how much data is obtained.  In other words, our thoughts and behaviors are produced by highly predictable (although not necessarily completely predictable) physical laws and constraints, along with some possible random causal factors.

As a result of these physical causal constraints, the conventional perspective of an individual having classical free will has been shattered.  Our traditional views of human attributes including morality, choices, ideology, and even individualism are continuing to change markedly.  Not surprisingly, many people are uncomfortable with these scientific discoveries, including members of various religious and ideological groups whose views are largely based upon, and thus depend on, presuppositions such as classical free will and moral responsibility.  The evidence accumulating from the neurosciences is in fact showing that while people are causally responsible for their thoughts, choices, and behavior (i.e. an individual’s thoughts and subsequent behavior are constituents of a causal chain of events), they are not morally responsible in the sense of being able to choose to think or behave any differently than they do, for their thoughts and behavior are ultimately governed by physically constrained neural processes.

New World View

Now I’d like to return to what I mentioned earlier and consider how these insights from neuroscience may be drastically modifying how we look at concepts such as thinking, learning, personality, and decision making.  If our brain operates via these neural network dynamics, then conscious thought appears to be produced by a particular subset of neural network configurations and processes.  So as we continue to learn how to more directly control or alter these configurations and processes (above and beyond simply applying electrical potentials to certain neural regions in order to bring memories or other forms of imagery into consciousness, as has been done in the past), we should be able to control thought generation in a more “bottom-up” fashion.  Neuroscience is definitely heading in this direction, although there is a lot of work to be done before we have any considerable knowledge of, or control over, such processes.

Likewise, learning seems to consist of a certain type of neural network modification (involving memory), leading to changes in causal pattern recognition (among other things) which result in our ability to more easily achieve our goals over time.  We’ve typically thought of learning as the successful input, retention, and recall of new information, a process achieved through environmental stimuli arriving via our sensory organs and systems.  In the future, as with the aforementioned bottom-up thought generation, it may be possible to physically modify our neural networks to directly implant memories and causal-pattern-recognition information, allowing us to “learn” without any actual sensory input.  We may also eventually be able to “upload” information in a way that bypasses the typical sensory pathways, potentially allowing us to catalyze the learning process in unprecedented ways.

If we are one day able to more directly control the neural configurations and processes that lead to specific thoughts and learned information, then there is no reason to think we couldn’t also modify our personalities, our decision-making abilities and “algorithms”, etc.  In a nutshell, we may be able to modify any aspect of “who” we are in extraordinary ways (whether this is a “good” or “bad” thing is another issue entirely).  As we learn more about the genetic components of these neural processes, we may also be able to use various genetic engineering techniques to assist with the neural modifications required to achieve these goals.  The bottom line here is that people are products of their genes and environment, and by manipulating both of those causal constraints in more direct ways (e.g. through the use of neuroscientific techniques), we may be able to achieve previously unattainable abilities, perhaps in a remarkably short amount of time.  It goes without saying that these methods would also significantly affect our evolutionary course as a species, allowing us to enter new adaptive landscapes through a substantially enhanced ability to adapt.  This may be realized by finding ways to improve our intelligence, memory, or other cognitive faculties, effectively giving us the ability to engineer or re-engineer our brains as desired.

Neuroscience Arms Race

We can see that increasing our knowledge and capabilities within the neurosciences has the potential for drastic societal changes, some of which are already starting to be realized.  The impact that these fields will have on how we approach the problem of criminal, violent, or otherwise undesirable behavior cannot be overstated.  Trying to correct these issues by focusing our efforts on the neural or cognitive substrates that underlie them, as opposed to using the less direct and more external means (e.g. social engineering methods) we’ve relied on thus far, may lead to solutions that are far less expensive and far faster to realize.

As with any scientific discovery or the technology that follows from it, neuroscience has the power to bestow on us both benefits and disadvantages.  I’m reminded of the ground-breaking efforts made within nuclear physics several decades ago, whereby physicists not only gained precious information about subatomic particles (and their binding energies) but also learned how to release the enormous amounts of energy bound up in nuclear fusion and fission reactions.  It wasn’t long after these breakthrough discoveries were made before others used them to create the first atomic bombs.  Likewise, while our increasing knowledge within neuroscience has the power to help society improve by optimizing our brain function and behavior, it can also be used by various entities to manipulate the populace for unethical reasons.

For example, despite the large number of free-market proponents who claim that the economy need not be regulated by anything other than rational consumers and their choices of goods and services, corporations have clearly increased their use of marketing strategies that take advantage of many humans’ irrational tendencies (whether through “buy one get one free” offers, “sales” on items whose prices have been artificially raised, etc.).  Politicians and other leaders have been using similar tactics, taking advantage of voters’ emotional vulnerabilities on certain controversial issues that serve as little more than ideological distractions, reducing or eliminating any awareness or rational analysis of the more pressing issues.

These various entities are already making research and development efforts to take advantage of findings within neuroscience so that they can have greater influence over people’s decisions (whether those relate to consumers’ purchases, votes, etc.).  To give one example of these R&D efforts, it is believed that MRI (Magnetic Resonance Imaging) or fMRI (functional Magnetic Resonance Imaging) brain scans may eventually be able to reveal useful details about a person’s personality or their innate or conditioned tendencies (including compulsive or addictive tendencies, preferences for certain foods or behaviors, etc.).  This kind of capability (if realized) would allow marketers to maximize how many dollars they can squeeze out of each consumer by optimizing their choices of goods and services and how they are advertised.  We have already seen how purchases made on the internet, if tracked, begin to personalize the advertisements we see online (e.g. if you buy fishing gear online, you may subsequently notice more advertisements and pop-ups for fishing-related goods and services).  If possible, the information found using these types of “brain probing” methods could be applied to other areas, including political decision making.
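
To give a rough, purely illustrative sense of what such “brain probing” might look like computationally (and nothing more), the sketch below trains a simple classifier to predict a binary “preference” label from fabricated stand-in data playing the role of voxel activations.  Everything here is an assumption: the array shapes, the synthetic signal, and the label are placeholders, and real fMRI decoding involves far more preprocessing, modeling, and validation than this toy suggests.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fabricated stand-in for voxel activations: 200 "scans" x 50 "voxels".
n_scans, n_voxels = 200, 50
X = rng.normal(size=(n_scans, n_voxels))

# Fabricated binary label (e.g. "prefers product A" vs. "prefers product B"),
# weakly tied to a few voxels so the toy decoder has something to find.
signal = X[:, :5].sum(axis=1)
y = (signal + rng.normal(scale=2.0, size=n_scans) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A plain logistic-regression "decoder" over the synthetic features.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("toy decoding accuracy:", decoder.score(X_test, y_test))
```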

While these methods derived from the neurosciences may be beneficial in some cases, for instance by giving consumers more automated access to products that they may need or want (which will likely be a selling point used by these corporations to obtain consumer approval of such methods), they will also exacerbate unsustainable consumption and other personally or societally destructive tendencies, and they are likely to further reduce (or eliminate) whatever rational decision-making capabilities we still have left.

Final Thoughts

As we can see, neuroscience has the potential to (and is already starting to) completely change the way we look at the world.  Further advancements in these fields will likely redefine many of our goals as well as how we achieve them.  They may also allow us to solve many problems that we face as a species, far beyond simply curing mental illnesses or ailments.  The main question that comes to mind is: who will win the neuroscience arms race?  Will it be the humanitarians, scientists, and medical professionals who are striving to accumulate knowledge in order to help solve the problems of individuals and societies and to increase their quality of life?  Or will it be the entities that are trying to accumulate similar knowledge in order to take advantage of human weaknesses for the purposes of gaining wealth and power, thus exacerbating the problems we currently face?


The Co-Evolution of Language and Complex Thought

Language appears to be the most profound feature that has arisen during the evolution of the human mind.  It has led to incredible thought complexity, while also providing the foundation for even the simplest thoughts imaginable.  Many of us may wonder how exactly language is related to thought, and how the evolution of language has affected the evolution of thought complexity.  In this post, I plan to discuss what I believe to be some evolutionary aspects of psycholinguistics.

Mental Languages

It is clear that humans think in some form of language, whether as an interior monologue using our native spoken language and/or some form of what many call “mentalese” (i.e. a means of thinking about concepts and propositions without the use of words).  Our thoughts are likely accomplished by a combination of these two “types” of language.  The fact that young infants and aphasics (for example) are able to think is clear evidence that not all thoughts are accomplished through a spoken language.  It is also likely that this “mentalese” is an innate form of mental symbolic representation that is primary in some sense, supported by the fact that it appears to be necessary for spoken language to develop or exist at all.  The fact that words and sentences (at least non-iconic forms) have no intrinsic semantic content or value illustrates that this “mentalese” is in fact a prerequisite for understanding or assigning the meaning of words and sentences.  Complex words can always be defined by a number of less complex words, but at some point a limit is reached whereby the simplest units of definition are composed of seemingly irreducible concepts and propositions.  Furthermore, those irreducible concepts and propositions do not require any words to have meaning (for if they did, we would have an infinite regress of words being defined by words being defined by words, ad infinitum).  The only time we theoretically require symbolic representation of semantic content using words is if the concepts are to be easily (if at all) communicated to others.

While some form of mentalese is likely the foundation, or even the ultimate form, of thought, it is my contention that communicable language has had a considerable impact on the evolution of the human mind: not only in the most trivial or obvious way, whereby communicated words affect our thoughts (e.g. through inspiration, imagination, and/or reflection on new knowledge or perspectives), but also by serving as a secondary multidimensional medium for symbolic representation.  That is, spoken language (as well as its later offshoot, written language) has provided a form of combinatorial leverage somewhat independent of (although working in harmony with) the mental or cognitive faculties that innately exist for thought.

To be sure, spoken language has likely co-evolved with our mentalese, as the two seem to affect one another in various ways.  As new types or combinations of propositions and concepts are discovered, the spoken language has to adapt in order to make them communicable to others.  What interests me more, however, is how communicable language (spoken or written) has affected the evolution of thought complexity itself.

Communicable Language and Thought Complexity

Words and sentences, which primarily developed in order to communicate instances of our mental language to others, have also undoubtedly provided a secondary multidimensional medium for symbolic representation.  For example, when we use words, we are able to compress a large amount of information (i.e. many concepts and propositions) into small tokens of varying density.  This type of compression has provided a way to maximize our use of short-term and long-term memory, allowing more complex thoughts and mental capabilities to develop (whether that increase in complexity is defined as longer strings of concepts or propositions, or otherwise).
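
As a loose illustration of this token-based compression (an analogy only, since working memory does not literally store lists of strings), the short Python sketch below counts how many items must be held when a recurring multi-word concept is replaced by a single stand-in token; the example words and the “pet” token are arbitrary choices made for the sake of the demonstration.

```python
def compress(sequence, phrase, token):
    """Replace each occurrence of `phrase` (a list of words) with a single `token`."""
    out, i = [], 0
    while i < len(sequence):
        if sequence[i:i + len(phrase)] == phrase:
            out.append(token)       # one compact token stands in for the whole concept
            i += len(phrase)
        else:
            out.append(sequence[i])
            i += 1
    return out

# The same concept spelled out in full versus chunked into one word.
concept = "animal kept at home for companionship".split()
sentence = ["my"] + concept + ["sleeps", "all", "day"]

print(len(sentence))                            # 10 items to hold in mind
print(len(compress(sentence, concept, "pet")))  # 5 items: ['my', 'pet', 'sleeps', 'all', 'day']
```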

When we think of a sentence to ourselves, we end up utilizing a phonological/auditory loop, whereby we can better handle and organize information at any given moment by internally “hearing” it.  We can also visualize the words in multiple ways, including how the mouth movements of people speaking those words would look (and we can use our tactile and/or motor memory to mentally simulate how our mouths feel when those words are spoken).  If a written form of the communicable language exists, we can visualize the words as they would appear in writing (again with the aforementioned tactile/motor memory to mentally simulate how it feels to write them), and we can often visualize each glyph in multiple formats (i.e. different sizes, shapes, fonts, etc.).  This has provided a multidimensional memory tool, because it represents the semantic information in ways that our brain can perceive and associate with multiple senses (in this case through our auditory, visual, and somatosensory cortices).  In some cases, when a particular written language uses iconic glyphs (as opposed to arbitrary symbols), the user can also visualize the concept represented by the symbol in an idealized fashion.  Associating information with multiple cognitive faculties or perceptual systems means that more of the brain’s neural network patterns will be involved in the attainment, retention, and recall of that information.  For those of us who have successfully used various mnemonic devices and other memory-enhancing “tricks”, the efficacy and importance of communicable language, and its relationship to how we think about and combine various concepts and propositions, is clear.

By enhancing our memory, communicable language has served as an epistemic catalyst, allowing us to build upon our previous knowledge in ways that would likely have been impossible without it.  Once written language developed, we were no longer limited by our own short-term and long-term memory, for we had a way of recording as much information as we wished, which also allowed us to better formulate new ideas and to consider thoughts that would otherwise have been too complex to mentally visualize or keep track of.  Mathematics, for example, increased dramatically in complexity once we were able to represent the relationships between variables in written form: whereas previously we were limited by our short-term and long-term memory, written language eventually allowed us to formulate incredibly long (sometimes painfully long) mathematical expressions.  Once written language was converted further into an electro-mechanical form (i.e. through the use of computers), our “writing” media, information-acquisition mechanisms, and pattern-recognition capabilities were enhanced further still, providing yet another platform for increased epistemic or cognitive “breathing space”.  If our brains underwent particular mutations after communicable language evolved, this may have provided a way to ratchet ourselves into entirely new cognitive niches or capabilities.  That is, by providing us with new strings of concepts and propositions, communicable language may have created an unprecedented natural-selection pressure or opportunity (if an advantageous brain mutation accompanied this new cognitive input) for our brains to acquire entirely new and possibly more complex fundamental concepts or ways of thinking.

Summary

It seems evident to me that communicable language, once it had developed, served as an extremely important epistemic catalyst and multidimensional cognitive tool, and that it likely had a great influence on the evolution of the human brain.  While some form of mentalese was likely a prerequisite and precursor to any subsequent forms of communicable language, the cognitive “breathing space” that communicable language provided seems to have had a marked impact on the evolution of human thought complexity and on the amount of knowledge that we’ve been able to obtain from the world around us.  I have no doubt that the external linguistic tools we currently use (i.e. written and electronic forms of handling information) will continue to significantly alter the ongoing evolution of the human mind.  Our biological forms of memory will likely adapt in order to work more economically and effectively with those external media.  Likewise, our increasing access to new types of information may have provided (and may continue to provide) a natural selective pressure or opportunity for our brains to evolve to think about entirely new and potentially more complex concepts, thereby periodically increasing the lexicon or conceptual database of our “mentalese” (assuming that those new concepts provide a survival/reproductive advantage).