Is There Design in Nature?
By Neal Kendall
(Presented at the 2016 Science Symposium)
Table of Contents
- 1 Introduction
- 1.1 Author Notes
- 2 Complex Specified Information (CSI)
- 3 Causation
- 3.1 Material vs Teleological (“Agency”) Causation
- 3.1.1 Material Causation
- 3.1.2 Agency “Intelligent” Causation
- 3.2 Limitations of Material Causation
- 3.3 Is the Universe Causally Closed?
- 4 Complexity of Life
- 4.1 Life Primer
- 4.1.1 DNA Polymerase
- 4.1.2 DNA Helicase
- 4.1.3 RNA Transcription – Make RNA from DNA - RNA Polymerase
- 4.1.4 RNA Splicing – Making RNA Transcripts - Spliceosome
- 4.1.5 Ribosome Translation – Make Proteins from RNA
- 4.1.6 Bacteria Flagellum
- 4.1.7 Eye
- 4.2 Protein Sampling Problem
- 4.3 “Codes”
- 4.3.1 DNA Transcription-Translation Code
- 4.3.2 Epi-Genetic Code
- 4.3.3 Sugar Code
- 4.3.4 Membrane Code
- 4.3.5 “Endogenous” Electric Code
- 4.3.6 Summary of Codes
- 4.4 Summary – Complexity of Life
- 5 Origin of Life (Abiogenesis)
- 5.1 RNA World
- 5.2 Calculations of Probabilities on the Origin of Life
- 6 Neo-Darwinism
- 6.1 What is Neo-Darwinism
- 6.2 Current Status of Neo-Darwinism
- 7 Are Materialist Explanations for the Evolution of Life Valid?
- 7.1 Sudden Appearance of Complex Biologic Features
- 7.1.1 Fossil Record
- 7.1.2 Mechanisms of Rapid Evolutionary Change
- 7.1.3 De Novo Genes
- 7.1.4 Computer Simulations
- 7.2 Non-Randomness of Evolutionary Change
- 7.2.1 Epigenetics
- 7.2.2 Natural Genetic Engineering - Transposons
- 7.2.3 Horizontal Gene Transfer
- 7.2.4 Overlapping Codes
- 7.2.5 Edge of Evolution – Manyuan Long vs Doug Axe & Michael Behe
- 8 Convergent Evolution
- 8.1 Convergences at the Organism Level
- 8.2 Convergences at the Organ or Tissue Level
- 8.3 Convergences at the Molecular Level
- 9 Summary of Neo-Darwinian Evolution
- 9.1 Trends - Non-Randomness, Saltation
- 9.2 Visualizing Darwinian Evolution
- 9.3 God of the Gaps
- 9.4 Intelligent Design
- 9.5 Where is the Debate Heading?
- 10 Vitalism
- 10.1 What Has to Be Explained?
- 10.2 Why was Vitalism Dismissed?
- 10.3 Cell Intelligence
- 10.4 “Self Organization”
- 10.5 Molecular Location in the Cell
- 10.5.1 DNA Replication
- 10.5.2 RNA Transcription
- 10.5.3 RNA Splicing
- 10.5.4 mRNA Translation (Protein Synthesis)
- 10.6 Evolution
- 10.7 Summary - Vitalism
- 11 Emergence of Consciousness and Mind
- 11.1 A Brief Review of Current Neuroscience
- 11.2 Consciousness
- 11.2.1 Qualia - Perception (“The Hard Problem”)
- 11.2.2 Intentionality
- 11.3 Theories of Mind
- 11.3.1 Dualism
- 11.3.2 Epiphenomenalism
- 11.3.3 Eliminative Materialism
- 11.3.4 Type Physicalism or Identity Theory
- 11.3.5 Functionalism
- 11.4 Property Dualism – Emergence
- 11.4.1 Emergent Mind - In Relation to Theism / Atheism
- 11.4.2 Computational Theory
- 11.4.3 Problems with the Emergent Theory of Mind
- 11.5 Summary – Emergence of Consciousness and Mind
- 12 Falsifications of Materialism
- 12.1 Falsification of Materialism #1: Dreams
- 12.1.1 Terminology
- 12.1.2 Attributes of Dreams
- 12.1.3 Emulation of the Senses
- 12.1.4 What a Materialist Needs to Explain
- 12.1.5 Possible Materialist Explanations
- 12.1.6 Falsification Using Probabilities of Complex Specified Information
- 12.1.7 Near Death Experiences, End of Life Experiences, DMT and other “Hallucinations”
- 12.1.8 Summary - Dreams
- 12.2 Falsification of Materialism #2: Continuity of Thought
- 12.2.1 Materialism’s Claims
- 12.2.2 Complex Specified Information
- 12.2.3 Foreknowledge
- 12.2.4 Continuity of Thought and Free Will
- 12.2.5 Probabilities
- 12.2.6 Neo-Darwinism
- 12.2.7 Continuity of Thought - Summary
- 12.3 Falsification of Materialism #3: Constancy and Resumption of Self
- 12.3.1 Materialist Claims
- 12.3.2 Near Death Experiences
- 12.3.3 Summary – Continuity and Resumption of Self
- 12.4 Summary – Falsifications of Materialism
- 13 Are Materialist Objections to Substance Dualism Valid?
- 13.1 Causal Closure of the Universe
- 13.2 Correlations
- 13.2.1 Man with Almost no Brain
- 13.2.2 Girl with Half Brain Removed
- 13.2.3 Persistent Vegetative State
- 13.2.4 Persinger and the God Helmet
- 13.2.5 Callosotomy
- 13.3 Libet-Type Experiments and Free Will
- 13.4 Non-Local Consciousness
- 13.4.1 Consciousness and Quantum Physics
- 13.4.2 Radio Receiver Model
- 13.5 Summary
- 14 Mystical Experiences and Hallucinations
- 14.1 Near Death Experiences
- 14.1.1 Out-Of-Body
- 14.1.2 Tunnel and Light, Deceased Relatives, Divine Beings
- 14.1.3 Life Review
- 14.1.4 A Barrier
- 14.1.5 Ineffable Content
- 14.1.6 Time
- 14.1.7 Changed Lives
- 14.1.8 Notable Near Death Experience Cases
- 14.2 Neuroscientists Views on Near Death Experiences
- 14.2.1 Psychological Factors
- 14.2.2 N,N-Dimethyltryptamine (DMT)
- 14.2.3 Demon Haunted World – Alien Abduction Experiences
- 14.3 Deathbed Visions
- 14.4 After Death Communication
- 14.5 Induced After Death Communication
- 14.6 Summary – Mystical Experiences and Hallucinations
- 15 What Are These Mystical Experiences?
- 15.1 Out-of-Body Experiences
- 15.1.1 Dissociation – Out-of-Body
- 15.1.2 Near Death Experience – Out-of-Body
- 15.2 The AWARE Near Death Experience Study
- 15.3 Are Materialist Explanations of Near Death Experiences Reasonable?
- 15.3.1 Near Death Experience – Life Review
- 15.4 Summary – What are These Mystical Experiences
- 16 The Power of Love
- 16.1 Shared Death Experiences
- 16.2 Shared Induced After Death Communication
- 16.3 Summary – The Power of Love
- 17 Summary
- 17.1 Creative Complex Specified Information Flow
- 17.2 Is Materialism Waning?
- 17.3 Cultural War - Materialism vs Idealism
1 Introduction

This paper is a broad-brush approach to the question of whether there is intelligent design in nature—whether nature exhibits the signature of teleology. Design on the one hand and strictly material causes on the other constitute a binary proposition, so by the law of the excluded middle one can infer design if material causes are deemed insufficient to account for what we see in nature. Either intelligence is a necessary cause of the splendid complexity nature reveals, or it isn’t. Claiming that there is some contingency in nature does not nullify the design argument.
Materialists claim that natural processes can produce all cases of “apparent design” exhibited by nature. Teleologists claim that there are at least some attributes of nature that will forever elude purely natural causative explanations. The statements in The Urantia Book pertaining to living organisms and the human mind are without question supportive of the notion that intelligent purpose is necessary and that material processes are insufficient to account for the complexity of living organisms.
The Urantia midwayers have assembled over fifty thousand facts of physics and chemistry which they deem to be incompatible with the laws of accidental chance, and which they contend unmistakably demonstrate the presence of intelligent purpose in the material creation. [58:2.3] (P/ 665)
The approach I will take in this paper will be to show that material causes are insufficient to account for the repeated and rapid appearance of what I will refer to as “Complex Specified Information” (CSI), which I will define in detail below. The paper will focus on two broad categories of phenomena from which to infer design: 1) complex specified information exhibited by living organisms—their origin, evolution and basic operation, and 2) complex specified information exhibited by human consciousness, thought, imagination, mystical experiences and even hallucinations. The paper will depart from these themes a bit in the final section to discuss a positive approach to detecting design through the power of love as it relates to mystical experiences.
I will show that the vast amounts of complex specified information that nature exhibits preclude any theory that limits itself to strictly material causes. The strongest case against materialism relates to the qualities of human consciousness and mind: dreams, thought streams, mystical experiences and hallucinations. For the mystical experiences to count as powerful evidence against materialism, it is not necessary that you accept them as genuine revelations; it is only necessary that you accept that the personal accounts of the auditory and visual experiences are as they are described.
I will offer three “falsifications of materialism,” which are accessible to anyone with just a bit of reflection. These falsifications reveal levels of complex specified information that are quantifiable and intuitive, and I believe they are unassailable. In fact, I have presented them on a couple of well-known atheist forums as well as on the primary intelligent design forum, where many leading atheists participate, and no one has been able to offer substantive contravening evidence against any of them.
The paper will not cover the origin of the universe or the so-called fine-tuning of the universe’s physical parameters. I will comment briefly here: materialist scientists would have us believe that the universe sprang forth from nothing. When the arguments are examined closely, however, one quickly realizes that “nothing” is actually not nothing, but something. Frankly, I have not examined the Big Bang theory much at all. My hunch is that, much as in evolutionary science and neuroscience, theory far outpaces the evidence. It strikes me as hubris for scientists to think they know what was really going on in the first few instants of the universe 15 billion years ago. After all, we can’t even figure out what happened to the Lindbergh baby 80 years ago.
The fine-tuning argument, the centerpiece of theistic evolutionists’ claim for design, is under assault as well from materialist scientists who hold that the “many universes” model is a perfectly viable scientific theory. The many-universes model extends the probabilistic resources to infinity—whatever is needed to make the probabilities work—to remove the notion that the universe seems to be “balanced on the edge of a knife,” as physicist Paul Davies once put it.
1.1 Author Notes
This paper really has to be read on-line rather than printed, as there are many links to videos and debates. Reading on-line also saves trees.
This is very much a work in progress. It is really little more than a robust table of contents; there is so much more that could be said on each topic. I pulled it together over about a nine-month period while working in a demanding career, raising two teenage girls, supporting a wife, and attending to two needy dogs and a cat. It was a great challenge in multiplexing. That said, I did not have anyone editing or reviewing the paper, which is as close to a book as it is to an article. I have discovered over the years that I am not a great editor or proofreader of my own work, so I apologize in advance for missing words, redundancy, flipped phraseology, etc.
I am by nature a skeptical person, and my skepticism runs in all directions. Given that, I can say that there is no person and no single book in which I can put all my faith. Having said that, there is nothing in The Urantia Book related to the life sciences and to human consciousness and mind that I have discovered to be in error. Quite the contrary: in every case, what I have found is that the statements in The Urantia Book are very reasonable and are in fact being confirmed by the evidence as I understand it, despite what the vast majority of scientists might claim. The lone exception is vitalism, which is the most profound misalignment between the accepted theories of the life sciences and the statements in The Urantia Book. However, as I discuss in the paper, the issue of vitalism has been misunderstood, and its dismissal by virtually every scientist in the world is premature. In fact, I believe there is a great deal of evidence that vitalism is true.
I am not a scientist, nor am I an academically trained philosopher; I am a hobbyist in these areas. It is best to think of this paper as a report on recent scientific research that bears on the question of design in the universe. However, don’t discount the role of a reporter. Reporters, like industry analysts, often—usually—have a better perspective on what is really going on than those down in the trenches of a science or technology endeavor. If you wanted to know what was going on in a particular industry—what direction it is heading, what technological changes might be taking place that affect its products—my contention is that you would gain greater insight by consulting an industry analyst, whose role is to assess the direction of an industry for financial purposes, than by consulting an engineer busily writing code to develop a particular product for that industry. Over time, I have come to view my lack of formal academic training in science and philosophy as an asset rather than a liability: it frees the mind by avoiding the “echo chamber” of the unanimity of thought that plagues the universities.
With that, let’s get started.
2 Complex Specified Information (CSI)

To understand the primary purpose of this paper, it is necessary to understand the concept of “Complex Specified Information” (CSI). The concept was introduced by William Dembski, a philosopher, mathematician, and theologian who is a leading Intelligent Design theorist, in his book The Design Inference. It is a bit of a clumsy term, but I don’t know what else to use; sometimes I will make it even clumsier by adding the word “creative” in front of it. Complex specified information can best be understood as complex structures, functions, processes or systems that produce function or meaning, or that achieve something important. Whether complex information achieves something is most commonly assessed by whether it conforms to an independently given pattern.
Let’s take a closer look at each word in the phrase.
“Complex(ity)” involves the intricate arrangement of many components. Components can be physical things such as molecules, building materials, or people, or they can be abstract things such as language, thoughts, mental images, numbers, or human actions.
“Specified” means that these components in a complex system are arranged very specifically and that there is little or no tolerance for perturbation of the system in order to maintain its structure or function or what it achieves—that is to say, CSI systems are highly constrained.
“Information,” as defined initially by Claude Shannon, is the flow of something down a communication channel. It can perhaps best be thought of as a “ruling out” of alternatives: the serial unveiling of a string of characters rules out a large set of alternative possibilities. In this way, the string of characters has greater meaning—greater information—with each word disclosed in the string.
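The “ruling out” intuition can be made numeric. Here is a minimal sketch (the function names are my own, introduced only for illustration): a symbol drawn uniformly from an alphabet of size N carries log2(N) bits, so each additional character multiplies the number of ruled-out alternatives.

```python
import math

def bits_per_symbol(alphabet_size: int) -> float:
    # Shannon information of one symbol drawn uniformly from an
    # alphabet: log2 of the number of alternatives it decides among.
    return math.log2(alphabet_size)

def string_information_bits(length: int, alphabet_size: int = 26) -> float:
    # Total bits carried by a string of independent symbols; each new
    # symbol narrows the possibilities by a factor of alphabet_size.
    return length * bits_per_symbol(alphabet_size)

# One letter from a 26-letter alphabet carries about 4.7 bits;
# a 10-letter string singles out one of 26**10 possibilities.
print(round(bits_per_symbol(26), 2))
print(round(string_information_bits(10), 1))
```

On this accounting, a longer string rules out exponentially more alternatives, which is the sense in which information grows as the string unveils.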
Complex specified information systems display a high degree of intricacy which means they have a lot of dependencies and interdependencies between components. This is why they have many constraints. Any modification of the complex specified information arrangement or system is likely to break it, i.e. be ruinous to its meaning or function or whatever it achieves.
Because CSI systems have many components, interdependencies and constraints, they are highly improbable. The improbability arises by virtue of the relatively small set of arrangements that a complex specified information system could assume in order to achieve something, compared to the much larger set of possible arrangements that the same components could have assumed that achieve nothing and correspond to no independently known pattern.
2.2 Complex Specified Information vs Randomness
It is important to emphasize that having the attribute of improbability is not, in and of itself, sufficient to define a complex specified information system. Complexity can be confused with disorder—chaos. Disordered systems have some complexity in that they are, or could be, made of many components, and the arrangement of those components could be highly improbable. Given two strings: 1) a string of 1000 human text characters that yields no understanding, or 2) a string of 1000 human text characters from a Shakespeare sonnet, without knowing the origin of either string, how could one assess the relative probabilities of each? Consider the following two character strings, the first an arbitrary jumble and the second a line of a sonnet:

qkv zjxw umr fplt ghyn obce dsiw arno…

As from my soul which in thy breast doth lie: That is my home of love…
Because complex specified information systems are improbable and achieve something (usually by virtue of corresponding to some known pattern), they can be differentiated from a disordered system. Clearly in the two example strings of characters above, the second string achieves something and there would be a strong inference of design despite the fact that had they been generated randomly, the probability of each occurring would be the same.
Despite the obviousness of this, disorder and complexity are commonly confused and in fact, when debating in forums you can count on a materialist-atheist confusing the two. There is an entire cottage industry of these confused individuals who seek to convince people that the poor probabilities, especially as they relate to evolution, are manageable. A materialist will claim that an unlikely sequence of things is simply an illusion because any string of characters or DNA base pairs for example, has the same probabilities regardless of whether they render any meaning. And this is true.
The first obvious problem with this viewpoint is that by this type of reasoning anything, no matter how seemingly improbable, is not only possible but probable. Suppose you were to put an antenna out in the middle of the solar system and it captured a signal. When you decoded the signal with the ASCII character set, you found that the text was, word for word, the text of Genesis. Clearly, in that case, you would know that the signal was from an intelligent source. You would know this because it conforms to a pre-specified pattern. When you present the case this way to an adversary in debate, they quietly disappear from the forum.
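The detection step in this thought experiment is mechanical and can be sketched in a few lines. This is an illustrative toy (the helper names and sample phrase are mine): decode a bitstream as 8-bit ASCII, then check the result against an independently given pattern. It is the match to the pre-specified pattern, not the improbability alone, that licenses the design inference.

```python
def decode_ascii(bitstream: str) -> str:
    # Interpret a string of '0'/'1' characters as consecutive
    # 8-bit ASCII codes and return the decoded text.
    groups = [bitstream[i:i + 8] for i in range(0, len(bitstream), 8)]
    return "".join(chr(int(g, 2)) for g in groups)

def conforms_to_pattern(message: str, prespecified: str) -> bool:
    # The design inference turns on conformity to a pattern given
    # independently of the signal, not on improbability by itself.
    return message == prespecified

# Encode a sample phrase as a bitstream, then recover and check it.
signal = "".join(format(ord(c), "08b") for c in "In the beginning")
print(decode_ascii(signal))
print(conforms_to_pattern(decode_ascii(signal), "In the beginning"))
```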
2.3 Complex Specified Information vs Order
Complexity is also often confused with “order.” Ordered arrangements or systems are not the same as complex systems although ordered systems have some degree of complexity by virtue of their specificity. To be an ordered system, every component must be what it is for the order—the pattern—to be sustained. Therefore it is highly specific. But the ordered arrangement is deterministic whereas the complex specified information arrangement or system is not deterministic; it is “creative.”
One way to distinguish between order and complexity is to reflect on what would be required to create an ordered arrangement of components on the one hand versus a complex arrangement on the other. Let’s say you want to explain to someone how to produce a particular arrangement of human text characters, or, alternatively, how to write a computer program in a low-level programming language to produce that arrangement. What would the instruction set look like for producing an ordered arrangement versus the instruction set for creating the complex system? Would the instruction set in each case involve many instructions or few? Let’s take two character strings, the first ordered and the second a line of a sonnet:

abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz…

As from my soul which in thy breast doth lie: That is my home of love…
While both strings would be equally improbable were they generated randomly, the instruction set in machine-level programming code for the ordered pattern would be quite brief, because the pattern is deterministic and can therefore be produced by an algorithm: store the alphabet, increment a counter, fetch the letter at the counter’s position, and print it the proper number of times. As the number of characters increases, the complexity of the code hardly increases at all.
The code for something meaningful in human text (such as a Shakespeare sonnet) would require each character to be entered into the code individually. As the string gets longer—for example, to produce a short story—the complexity of the program increases linearly. And while you could easily write a program to generate any ordered character string, you really couldn’t produce a program to write poetry in any reasonable time frame. Materialists often confuse order with complexity, and from this confusion they claim that since nature can produce order, nature can produce complexity. In fact, even well-educated and seasoned atheists often confuse the two.
Despite the fact that an ordered string and a creative string are equally improbable to be produced by chance, the instruction set required to produce each string reveals the greater complexity of the Sonnet or story. Furthermore, the Sonnet is more meaningful in that it conforms to a known pattern. A Sonnet is creative; the ordered string is deterministic in that it can be created using an algorithm.
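The “length of the instruction set” intuition is essentially algorithmic (Kolmogorov) complexity, which cannot be computed exactly but is often approximated by the output size of a general-purpose compressor. A rough sketch (the comparison text is my own, and compressed size is only a crude proxy, not a definitive measure):

```python
import zlib

def compressed_size(text: str) -> int:
    # Approximate the length of the shortest "instruction set" for a
    # string by the size of its zlib-compressed form, in bytes.
    return len(zlib.compress(text.encode("utf-8"), 9))

# An ordered string: fully regenerated by a trivial rule ("repeat the
# alphabet"), so it compresses to almost nothing.
ordered = "abcdefghijklmnopqrstuvwxyz" * 10

# Meaningful English of comparable length: no short rule regenerates
# it, so it resists compression far more.
meaningful = ("Writing a poem is a complex endeavor because rhythm, "
              "rhyme, and metaphor constrain every word, while an "
              "ordered string is produced by a trivial repeating rule, "
              "a counter marching through a stored alphabet of letters.")

print(compressed_size(ordered) < compressed_size(meaningful))
```

The comparison echoes the point above: the ordered string has a short generating program, while meaningful text does not, even though both are equally improbable as random draws.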
2.4 Quantifying Complex Specified Information
Complex specified information can be quantified but it is difficult to quantify complexity in many cases. The easiest way to quantify complexity is to distill a complex system down to a human character description. Let’s say there are two widgets A & B that you are describing to an agreed upon level of detail. Widget A requires 1000 human text characters to describe and the Widget B requires 5000 human text characters to describe. Clearly the second widget is more complex.
Since a widget could be described in many different ways, establishing an absolute measure of complexity is next to impossible. But when comparing like things, a relative quantification of complexity can be done as follows: the relative complexity of Widget A is 26^1000 and the relative complexity of Widget B is 26^5000 (26 possible characters at each of 1000 or 5000 positions). It is quite a bit more involved than that, but as a rough comparison I think that makes the point.
Actually, the convention is to represent information quantity in binary form. To do this you could convert the text characters to their binary ASCII equivalents (ASCII is a code computers use to map binary digits to a human text character set), which assigns 8 binary bits to each character. In that case, the quantity for Widget A would be 2^8000. As I mentioned, though, there are many ways to describe something in human text; therefore the real quantity of complex specified information would be considerably less than that.
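The arithmetic above can be checked directly. A small sketch (the function names are mine): log2(26^1000), roughly 4700 bits, is the information-theoretic minimum for singling out one 1000-character description, while raw 8-bit ASCII spends 8000 bits on the same description, since ASCII is a redundant encoding.

```python
import math

def description_bits(n_chars: int, alphabet_size: int = 26) -> float:
    # Bits needed to single out one specific n-character string from
    # all possible strings over the alphabet: log2(alphabet_size**n).
    return n_chars * math.log2(alphabet_size)

def ascii_bits(n_chars: int) -> int:
    # Bits consumed by a raw 8-bit ASCII encoding of the description.
    return 8 * n_chars

# Widget A's 1000-character description from the text:
print(round(description_bits(1000)))  # roughly 4700 bits of content
print(ascii_bits(1000))               # 8000 bits as stored in ASCII
```

The gap between the two numbers is one concrete sense in which “the real quantity of complex specified information would be considerably less” than the raw ASCII count.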
Here are some examples of complexity: A house of cards is a complex system. The more cards stacked, the higher the complexity assuming the structure involved is equally sensitive to perturbation, i.e. equally constrained. A small house of cards can be more complex than a somewhat larger house of cards if the small house has far greater interdependencies and constraints.
Planning a wedding reception for 150 is far more complex than planning a luncheon for 12. Aside from the larger group, a wedding involves many more dependencies and constraints than a luncheon—flowers, music, seating arrangements, food selection, etc.—not to mention having to deal with a “bridezilla.” Building a house is complex; a two-story house is more complex than a single-story house, all other things being equal.
Writing a story or a poem is a complex endeavor. A poem would be more complex than a story of the same length because of the constraints imposed on poetry by rhythm, rhyme, metaphor, etc. Writing a song would be more complex than writing a poem of the same quality because now you have introduced music, with a melody, chords and a bass line. And of course living systems exhibit vast quantities of complex specified information. So does human intellect.
To summarize, complex specified systems have many components, have many dependencies as well as interdependencies, are highly constrained and therefore are highly improbable. Complex specified information systems achieve something—they conform to some known pattern. And they are non-deterministic.
3 Causation

Philosophy can be divided into two main categories: idealism and materialism. Idealism, in brief, asserts that mind is the ultimate foundation of all reality. That is not to say that material reality does not exist (although some make the claim that it is an illusion), just that mind is the fundamental reality. Materialism, or more properly physicalism, is the theory that everything is physical—that there is nothing beyond the physical. Physicalism and materialism are often used interchangeably; physicalism explicitly includes energy and the physical laws that nature abides by. Nevertheless, I will simply use the term “materialism” throughout.
Materialism is the most common form of monism which is the view that reality is composed of just one type of substance. For materialists that substance is matter. The other main category of monism, which is uncommon in Western thought, claims that mind is the fundamental universe “substance” and that matter is illusory.
For a modern scientific materialist there is no substance beyond matter and energy. This means that there is no such thing as an immaterial mind, for example, or a vital force that infuses life with its marvelous qualities, and further that particles themselves are not endowed with some quality of mind (panpsychism).
3.1 Material vs Teleological (“Agency”) Causation
One of the fundamental differences between the claims of materialism vs idealism pertains to the nature of causation. There are two general categories of causation that I want to define: 1) material causation and 2) agency or “intelligent” causation.
The claim made in this paper is that material causation cannot account for the creative complex specified information that we observe in nature, whether in living organisms, in our subjective experience, or in the behavior of others as we observe them. Therefore an inference of design—of teleology—can be made.
3.1.1 Material Causation
Material causation is lawful and deterministic. Mathematics can be used to describe nature because nature (in the “physical sciences,” anyway) conforms to laws. However, two types of randomness are also involved in material causes, despite the otherwise deterministic nature of material causation. The first is randomness in the epistemological sense: although a cause may be deterministic, we cannot discern its true workings due to practical limitations. Particles diffusing through water seem to act randomly, but they are really behaving according to physical law; we are simply limited in our ability to track it completely.
The other type of randomness pertains to quantum physics. The randomness related to quantum physics is often said to be “ontological randomness,” which is an academic way of saying that there is an inherent randomness in nature at the particle level. I discuss quantum physics a bit further down in this section.
3.1.2 Agency “Intelligent” Causation
Agency causation is causation involving mind which is commonly understood to be immaterial. I suggest that agency causation, intelligent causation and teleology are synonymous. Idealists, of course, claim that agency causation is the primary causation especially causation related to what we would call creativity. Mind—immaterial mind—an idealist would say, is the true cause of creative complex specified information.
Materialists will sometimes use the term agency as well. But when a materialist uses the term agency they of course do not mean that there is an immaterial mind at work. Rather, they mean that nature has acquired a secondary level cause—a system—that, although deterministic, exhibits the characteristics of minded agency. In other words these material systems might be said to “emulate” true agency. These “systems” are algorithmic. Algorithms are a carrying out of a pre-arranged plan in a deterministic way.
The fundamental difference between material agency and true minded agency is that a true agent can be creative—it can readily create complex specified information. Materialists claim that material causes can produce complex specified information by, in effect, emulating an agent, but acknowledge that such creativity is ultimately the result of randomness and determinism.
3.2 Limitations of Material Causation
Materialists view reality—nature—as a hierarchy of increasingly complex layers. At the lowest level of the material hierarchy are particles, described by physics; particles combine to form elements and molecules, which are studied by chemists; molecules combine to produce life, which is the domain of biology; living entities—cells—can combine in ways that produce mind (understood to be entirely material), which is the domain of psychology; and finally, the minds of humanity are collectively studied by sociology. Most materialists believe that the lower levels completely determine the attributes of the higher levels. This is called “reductionism.”
Some materialists believe that the lower levels of material reality do not fully determine the higher levels. They would suggest that nature has in effect emulated agency causation. Life itself, and the theory of the emergent physical mind—property dualism—are examples materialists would point to as exhibiting agency that has arisen through deterministic causes interspersed with some occasional randomness. This exception to reductionism—emergence—is the idea that new qualities emerge that cannot be inferred or predicted from an examination of the lower levels.
Because nature abides by laws, its ability to produce something that exhibits complex specified information must involve a stochastic (random) element. This random element serves as the input to a process that materialists claim can build complexity incrementally. Materialists believe they have discovered such a process for the most complex things we observe—living organisms. The process, of course, is the tandem interworking proposed by Neo-Darwinism: random mutation and natural selection. A good deal of this paper will be devoted to assessing whether this process is capable of producing the complexity that we see in living organisms.
Here is the very important thing to remember…The secret of materialism’s claim to be able to explain how complexity might arise through material causes stems from what scientists perceive to be the success of discovering a law of nature that can build complexity. That law of nature is natural selection. Natural selection, materialists claim, explains how life evolved including consciousness and higher level thought. Materialists acknowledge that natural selection cannot explain the origin of life (abiogenesis). But they are working on ways they can apply the principle of selection to do so. As you will see, they have not come close to doing that.
But natural selection can only build complexity, Neo-Darwinists would acknowledge—if it can build complexity at all—by an incremental process whereby a small random beneficial change occurs and is then locked in by natural selection. Stated another way, natural selection is not a sufficient material cause; it can at best be a necessary cause. Random mutation is the other necessary material cause. Together—random mutation and natural selection—materialists would claim, constitute a sufficient cause to explain complex specified information.
The fundamental problem with Neo-Darwinism, according to Intelligent Design proponents—teleologists—relates primarily to random mutation. There are limitations to natural selection as well, but the profound difficulties they point to concern the probabilities associated with random mutation. That Intelligent Design focuses almost exclusively on the mutation side of the Neo-Darwinian mechanism is commonly misunderstood.
The probability of a small, random, beneficial change occurring is far better than the probability of a large systemic beneficial change occurring, because the probability of a large change is prohibitive. If complexity is built by small incremental changes, each locked in by natural selection, then the process only has to overcome one modest improbability at a time, and an entire system can be assembled over vast amounts of time. For a large systemic change (one equivalent to several such small changes) to occur in a single step, the probabilities of each distinct change would have to be multiplied together, yielding a vanishingly small joint probability. The bottom line is that nature can make no leaps—“Natura non facit saltus”—given that it is limited to material causation.
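The arithmetic behind this contrast can be sketched in a few lines. The numbers here are purely hypothetical, chosen only to show how the probabilities combine; they are not empirical mutation rates.

```python
# Toy comparison of incremental change vs. a one-leap systemic change.
# p and k are hypothetical illustration values, not measured quantities.
p = 1e-6   # assumed probability of one small beneficial change
k = 5      # number of small changes composing one hypothetical large change

# With selection locking in each step, only one small improbability
# must be overcome at a time:
p_per_step = p

# For all k changes to occur together in a single leap, the individual
# probabilities multiply:
p_one_leap = p ** k

print(p_per_step)   # 1e-06
print(p_one_leap)   # ~1e-30
```

Even with these generous toy numbers, the single-leap probability is 24 orders of magnitude worse than any individual step.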
What we will see is that natural selection is not only inadequate to create complexity in the evolution of life, but especially incapable of creating complexity with respect to the origin of life. But it gets far worse for materialism. The aspect of reality that produces the greatest degree of complexity over the shortest period of time is the human mind. Explaining why this is the case will be the most important focus of the paper.
The origin of life is a separate study from Neo-Darwinism. It is recognized as “unsolved,” but materialists invoke “promissory materialism” when confronted with such profound gaps in the explanatory narrative of nature’s ability to produce complexity.
Consciousness is also an unsolved problem, but here again materialists will tell us, with the highest level of assurance, that they know consciousness and all thought are reducible to brain chemistry.
Rest assured, materialist scientists are busying themselves trying to show that these seemingly scientifically intractable problems—these apparent limitations of material causes—are not insurmountable. The goal of the scientific enterprise is to describe or model all of reality in purely naturalistic terms. The confidence they have that all phenomena will be explainable in purely naturalistic terms seems to have arisen from what they believe to have been the crowning achievement of science in the 20th century—the establishment of the Modern Synthesis, “Neo-Darwinism.” All other unsolved problems will eventually fall in line. The ultimate goal of many materialist scientists is to convince us all that they, and only they, are the experts and, more importantly, that eternal annihilation is our ultimate destiny.
3.3 Is the Universe Causally Closed?
Most materialists view reality—the universe—as a physically causally closed system that does not permit any outside influence. Classical physics seems to discount the possibility of an influence or interaction by some imagined immaterial agent or force. Classical physics was deterministic in theory and precluded the idea of free will and mind-brain interaction where mind is understood to be immaterial.
Therefore, materialists study reality under the assumption that natural laws govern all events, i.e. that there are no outside immaterial influences. This doctrinal exclusion of any outside immaterial forces is referred to as “Methodological Naturalism.” For some, methodological naturalism does not necessarily mean that other, nonmaterial influences may not exist but just that when conducting science they cannot be assumed to affect empirical research. Methodological naturalism started out being a method of inquiry but has evolved to a wholehearted acceptance of materialism for the vast majority of academic scientists.
Is the universe causally closed? Philosopher of mind William Hasker says this about the causal closure argument (as applied to substance dualism):
“The hoariest objection specifically to Cartesian dualism (but one still frequently taken as decisive) is that, because of the great disparity between mental and physical substances, causal interaction between them is unintelligible and impossible. This argument may well hold the all-time record for overrated objections to major philosophical positions.”
Tufts University philosopher Daniel Dennett uses the causal closure argument in his book Consciousness Explained and regards it as decisive:
“No physical energy or mass is associated with them [signals from the mind to the brain]. How then do they make a difference to what happens in the brain cells they must affect, if the mind is to have any influence over the body?”
The problem with this statement is that it is based on classical “Newtonian” physics. Quantum mechanics, at least the most commonly accepted interpretation, the “Copenhagen Interpretation,” nullifies the deterministic and mechanistic view of the universe by showing that nature is inherently probabilistic. By extension, quantum physics is an enabling theory for an open universe and therefore offers a denial of the principle of the causal closure of the universe.
It should be noted that there are some interpretations of quantum mechanics that deny that nature is inherently probabilistic and view nature as deterministic nonetheless. The “Many Worlds” interpretation is one such example, but it is a minority (although growing) view.
Quantum physicist Henry Stapp notes that extending classical physics to the brain/mind would have our thoughts controlled “bottom-up” by the deterministic motion of particles and fields. Stapp comments on Dennett’s statement (above),
“Classical physics allows no mechanism for a “top-down” conscious influence…[and] there’s a quantum loophole in Dennett’s argument: No mass or energy is necessarily required to determine which of the set of possible states a [quantum] wave function will collapse upon observation.”
Note that the “quantum wave function” is discussed below. Stapp continues,
“In view of the turmoil that has engulfed philosophy during the three centuries since Newton’s successors cut the bond between mind and matter, the re-bonding achieved by physicists during the first half of the twentieth century must be seen as a momentous development.
“The only objections I know to applying the basic principles of orthodox contemporary physics to brain dynamics are, first, the forcefully expressed opinions of some non- physicists that the classical approximation provides an entirely adequate foundation for understanding mind-brain dynamics, in spite of quantum calculations that indicate just the opposite; and second, the opinions of some conservative physicists, who, apparently for philosophical reasons, contend that the successful orthodox quantum theory, which is intrinsically dualistic, should be replaced by a theory that re-converts human consciousness into a causally inert witness to the mindless dance of atoms, as it was in 1900. Neither of these opinions has any rational basis in contemporary physics.”
The claim of the Copenhagen interpretation is that the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered “a final renunciation of the classical idea of ‘causality.’" There is an intrinsic randomness associated with observations of micro particles: photons, electrons, atoms, and even larger particles and molecules behave as “probability waves.”
The probability wave can perhaps best be thought of as an unrealized potential that materializes in a random, i.e. probabilistic, way. The probability wave is not really a physical wave; it is an abstraction representing a known probability distribution over the ways a particle will materialize once observed. Probability waves “collapse”—i.e. become distinct particles in a particular location—when they are “observed.” And they collapse with an inherent randomness. How does nature decide on a particular result? No one knows; nature gives only probabilities.
Here is how University of California, Santa Cruz physicists Bruce Rosenblum and Fred Kuttner summarize it in their book, Quantum Enigma: Physics Encounters Consciousness:
“Quantum physics does not tell the probability of where an object is, but rather the probability that, if you look, you will observe the object at a particular place. The object has no “actual position” before that position is observed. In quantum mechanics the position of an object is not independent of its observation at that position. The observed cannot be separated from the observer.”
What constitutes an “observation?” It is a measurement or detection resulting from an interaction between a classical object (a large object) and a quantum object (a micro particle). A human observer does not appear to be necessary although some claim that a human observer is ultimately necessary. This is the subject of some heated debates.
There is an uncertainty inherent in the properties of all wave-like systems—Heisenberg Uncertainty principle—that arises in quantum mechanics simply due to the matter-wave nature of all quantum objects. It is a fundamental property of quantum systems. This fundamental property means that the universe may not be causally closed and further that there is no way for human observers to determine whether it is or not.
4 Complexity of Life

The first step in understanding why it is implausible to suppose that Neo-Darwinism or any other purely materialist theory of evolution can account for the complex specified information exhibited by living organisms is to look at the staggering complexity of living organisms revealed by current research. We can then assess whether material causation can account for this complexity and, if not, ascribe it to agency causation—intelligent design.
You will be hearing a lot about microbiologist Dr. James Shapiro in this paper, so let me introduce him here in full. From Wikipedia: “James Shapiro was elected to Phi Beta Kappa in 1963 and was a Marshall Scholar from 1964 to 1966. He won the Darwin Prize Visiting Professorship of the University of Edinburgh in 1993. In 1994, he was elected as a fellow of the American Association for the Advancement of Science for ‘innovative and creative interpretations of bacterial genetics and growth, especially the action of mobile genetic elements and the formation of bacterial colonies.’ And in 2001, he was made an honorary officer of the Order of the British Empire for his service to the Marshall Scholarship program. In 2014 he was chosen to give the 3rd annual ‘Nobel Prize Laureate - Robert G. Edwards’ lecture.”
Shapiro describes the cell this way:

“The cell is a multilevel information-processing entity, and the genome is only a part of the entire interactive complex. They acquire information about external and internal conditions, transmit and process that information inside the cell, compute the appropriate biochemical or biomechanical response, and activate the molecules needed to execute that response.”
Bruce Alberts, president of the National Academy of Sciences:
“We have always underestimated cells. … The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. … Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.”
Neither James Shapiro nor Bruce Alberts would claim that naturalistic processes cannot account for these features of living organisms. But that is an assumption based on their materialist predisposition.
According to Intelligent Design proponent, medical doctor and molecular biologist Michael Denton:
“Molecular biology has shown that even the simplest of all living systems on the earth today, bacterial cells, are exceedingly complex objects. Although the tiniest bacterial cells are incredibly small, weighing less than 10^-12 gms, each is in effect a veritable micro-miniaturized factory containing thousands of exquisitely designed pieces of intricate molecular machinery, made up altogether of one hundred thousand million atoms, far more complicated than any machine built by man and absolutely without parallel in the nonliving world.”
4.1 Life Primer
Since proteins carry out all of life’s essential functions, assessing complexity in terms of proteins is a good place to start. In order to understand the role of proteins it is helpful to understand how they work in living organisms. Neo-Darwinists make light of the average person’s sense of incredulity when observing the complexity of living things and expressing doubt about the theory. Making light of, or being dismissive about, the complexity of life is not an argument; it is a tactic they use to defuse what is a real problem.
Please watch the videos at the links below, which give an overview of life’s essential processes (skip to the 1:15 mark and view to the 6:30 mark in the first video).
The functions shown are carried out by complex protein and RNA assemblies and molecular machines—Helicase, DNA polymerase, RNA polymerase, Spliceosome, Ribosomes, etc.
Proteins are made up of a series of concatenated amino acids (“residues”). A typical protein will have 300 or more—often many more—amino acids in a string, but some have fewer. The sequence of amino acids in a protein is fairly specific, meaning that a few changes here and there will often render a protein non-functional, i.e. unable to fold properly and therefore unable to catalyze a reaction or serve as any sort of useful structure. Proteins have various domains, or subsets of functional segments, that can be swapped and assembled like Legos.
The following subsections briefly discuss the complexity of some of the key molecular complexes you saw in the animations in terms of protein composition. These molecular complexes support the essential functions in living organisms. All the examples of molecular machines are subcellular, i.e. at the molecular level (except the eye).
I cannot come close to covering more than the tip of the iceberg of the complexity of these marvelous molecular devices. By way of example, there is an entire 300-plus-page book just on the Helicase—the molecular device that unwinds the DNA in preparation for DNA replication.
4.1.1 DNA Polymerase
DNA polymerase is the protein complex that is most central in DNA replication. A cell copies its DNA every cycle. Without the ability to copy DNA reliably there would be no life. There are a variety of types of DNA polymerase depending on the type of organism and they even vary depending on cell type. These protein complexes work in pairs to create two identical DNA strands from a single original double stranded DNA molecule.
DNA polymerase works with the Helicase and a variety of other protein complexes during the replication process. DNA polymerase synthesizes new strands of DNA at a rate of about 749 nucleotides per second. The error rate during replication is believed to be in the range of 10^-7 to 10^-8 per base, based on studies of E. coli and bacteriophage DNA replication. There is a proofreading and error-correction feature built into the DNA polymerase protein complex.
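To get a feel for what such an error rate means in practice, we can estimate the expected number of copying errors per replication. The genome size used here is an assumption for illustration (roughly the size of the E. coli genome, about 4.6 million base pairs); the error rates are the ones cited above.

```python
# Expected copying errors per genome replication.
# genome_size is an illustrative assumption (~E. coli scale).
genome_size = 4.6e6          # base pairs
for error_rate in (1e-7, 1e-8):
    expected_errors = genome_size * error_rate
    print(f"error rate {error_rate:g}: ~{expected_errors:.3f} errors per copy")
```

In other words, with proofreading the cell copies millions of bases and typically makes well under one error per replication.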
The DNA polymerase is composed of two protein domains, a polymerase domain and a proofreading domain. The polymerase domain is composed of three subdomains. There are perhaps as many as fifteen (15) different genes that produce the DNA polymerase protein machine.
4.1.2 DNA Helicase
In order for DNA Polymerase to replicate the DNA, the DNA has to be unzipped by a molecular machine called the DNA Helicase. There are other types of Helicases that facilitate a variety of metabolic processes related to RNA, such as translation, transcription, ribosome biogenesis, RNA splicing, RNA transport, RNA editing, and RNA degradation. The DNA Helicase moves along in front of what is called the “replication fork” (the splitting of the double-stranded DNA), which enables DNA replication. The DNA Helicase continuously opens and unwinds the DNA double helix with a rotational speed of up to 10,000 rotations per minute, which rivals the rotational speed of jet engine turbines. Here is an image of the DNA Helicase.
The DNA Helicase is very complex. There is an entire 400 page text book dedicated to describing its structures and functions.
4.1.3 RNA Transcription – Make RNA from DNA - RNA Polymerase
The RNA Polymerase copies one strand of the DNA double helix into mRNA in a process called transcription, along with several other molecules which are collectively referred to as the “Transcription Initiation Complex.” The RNA Polymerase complex is composed of twelve (12) protein subunits totaling about 3,000 amino acids. Two of the subunits are large; the rest are relatively small and unique to each type of polymerase. RNA polymerase transcribes the DNA at a rate of about 50 bases per second. A typical mRNA that codes for an average protein takes about 20 seconds to transcribe in a prokaryotic cell and about 3 minutes in a eukaryotic cell. Watch the animation at the link below to see this marvelous molecular machine in action.
4.1.4 RNA Splicing – Making RNA Transcripts - Spliceosome
The spliceosome is perhaps the most remarkable molecular machine in the cell. The spliceosome has been described as one of "the most complex macromolecular machines known, composed of as many as 300 distinct proteins and five RNAs". The small RNAs in each subunit are typically about 100 to 300 nucleotide base pairs long.
Watch the animation at the link below to see this astonishing machine at work on the initial mRNA transcript. When genes are transcribed from DNA, an mRNA is produced. But the protein-coding areas (“exons”) of the initial transcript are separated by long stretches of non-coding regions called “introns.” Introns typically make up 80 to 90 percent of a raw mRNA transcribed from DNA. These introns need to be removed in a process called splicing. The spliceosome cuts out these non-coding regions and rejoins the exons, i.e. the protein-coding segments.
4.1.5 Ribosome Translation – Make Proteins from RNA
The Ribosome is a highly complex molecular machine used to synthesize proteins from mRNA following transcription and splicing. The process of protein synthesis is called “translation.” The ribosome, along with a variety of other associated molecules, concatenates amino acids together as they are fed to it by distinct transfer RNAs, which carry each amino acid to the ribosome. The ribosome contains about eighty (80) distinct proteins and a variety of different RNAs. So the ribosome manufactures proteins and is itself composed of many proteins.
The following is a link to an animation of the ribosome in action:
4.1.6 Bacteria Flagellum
Molecular biologist and Intelligent Design proponent Michael Behe of Lehigh University advanced the concept of “Irreducible Complexity” in his groundbreaking 1996 book, Darwin’s Black Box. Irreducible Complexity simply means that in a system with many components, if some, many, most, or all of those components are essential for the system to perform its function, the system is said to be “irreducibly complex.” In other words, you cannot reduce its complexity beyond a certain point without destroying its function. By extension, this means that there is no way to build such an irreducibly complex system incrementally through Neo-Darwinian mechanisms.
In all molecular machines there are many essential components due to the interdependencies between components. You can demonstrate irreducible complexity by conducting what are called knock-out experiments. Knock-out experiments involve disabling the gene for a specific protein in the molecular machine to determine whether the protein produced by that gene is essential to build the molecular machine or to allow it to perform its function. Irreducible complexity is a specific instance of complex specified information as it applies to living organisms.
One of the examples Behe used in his book was the bacterial flagellum. The flagellum is composed of about forty (40) or more proteins, all of which are essential based on “knock out” experiments. For a detailed description of the flagellum and its operation refer to:
To counter the idea of irreducible complexity, supporters of Neo-Darwinism offer the idea of “cooption,” or exaptation, which claims that perhaps some of the components of the flagellum evolved to carry out some other function. The genes were then duplicated, mutated, coopted (reused), and gradually integrated to produce the new function—the flagellum.
More specifically they point out that ten (10) of the genes that code for proteins of the flagellum are also present in another molecular machine—the Type III secretory system—in the bacteria (TTSS). The TTSS pump transports proteins across the cell membrane of bacteria. Therefore, it is possible that the TTSS system evolved and some of its genes (all ten perhaps) were duplicated and coopted for the flagellum.
There are a few problems with this cooption theory. First, the flagellum has perhaps as many as forty (40) proteins, so you would still need to explain where the other thirty (30) came from.
Second, it appears that the flagellum arose prior to the TTSS, and therefore the flagellum could not have coopted the TTSS's proteins. Third, even if the TTSS evolved first and its ten genes were coopted, you would still need to explain the TTSS with its ten (10) new genes.
With this type of story-telling using the TTSS system in the case of the flagellum, Neo-Darwinists have pretty much declared victory, claimed irreducible complexity to have been debunked and moved on. In fact if you were to google “irreducible complexity” you would no doubt see many articles claiming that irreducible complexity has been debunked.
When Michael Behe’s book came out, James Shapiro declared in National Review that:
"There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations."
The whole debate has been a long, detailed, drawn-out one with a nasty edge to it. My view is that the concept of irreducible complexity has not been debunked at all—not even close. I find the explanations offered by Neo-Darwinists to be cartoonish. That irreducible complexity has not been debunked is no doubt what renowned atheist philosopher Thomas Nagel was referring to in his book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, when he said:
“Those who have seriously criticized these arguments [primarily the argument for irreducible complexity] have certainly shown that there are ways to resist the design conclusion; but the general force of the negative part of the intelligent design position—skepticism about the likelihood of the orthodox reductive view, given the available evidence—does not appear to me to have been destroyed in these exchanges.”
For a good overview of the debate without a lot of the typical nastiness, refer to the following video (view to the 20:10 mark):
Thus far we have been looking at complexity at the molecular machine level. These devices are all subcellular, and many of the functions they perform must have been in place at some level in the very earliest cells.

4.1.7 Eye

Let us now take a look at a multicellular organ—the eye.
In the Cambrian Explosion we see a vast array of complex new animal body plans with complex new features such as the eye. When looking at the complexity of a particular adaptation—an organ such as the eye—the best way to think about it is to look at the number of new cell types, i.e. tissues, and then also, if possible, how many new proteins would be required for each new cell type. Ultimately what we need to know is how many new proteins (and hence genes to define them) are required for a new complex feature. This will give us a rough sense of the complexity and the probabilities involved. This is difficult to do, but we can make an educated guess, based on the ratio of DNA sequences that yield a viable protein to the overall set of possible DNA sequences.
The Neo-Darwinian account of the evolution of the eye is really not evidence but rather a collection of imaginative stories. What science there is, is based on the most simplistic high-level description of how the eye might have evolved at the gross anatomical level.
Trilobites appeared as part of the Cambrian explosion. The trilobite eye is “very similar to the structure seen in the eyes of today's horseshoe crabs.”
From the diagram below you can see that there are at least seven cell types in the compound eye and they all have to fit together. It is not clear how many new proteins there are in each cell type of the compound eye. Clearly though, many new proteins would be necessary.
Also, an eye would have no value at all without a way to transmit the signal to the brain. And the brain would have to be equipped with a way to process the signal to do something related to survival, that being a requirement of evolution by natural selection.
And the information content expressed as the number of new proteins (and their complexity) is one thing, but this says nothing about how all the various types of cells that make up the eye are arranged symmetrically and in a way that allows the eye to function.
Moreover, what is often glossed over in Neo-Darwinian explanations of the eye is that the paleontological record of the Cambrian, for example, indicates that there were many new animals, and each new animal had a variety of new complex adaptations ranging from digestion to locomotion to the senses, including the eye. It is hard enough to imagine how these various features could have been culled out piece by piece through mutation and selection, but how natural selection acts on multiple disparate nascent features is a further, unresolved problem.
4.2 Protein Sampling Problem
Clearly there are many proteins involved in the essential functions of living organisms. And in order to make these proteins, many other proteins are required. It is a massive chicken-and-egg problem. But how would one gain a sense of whether or not a process like Neo-Darwinism could create these marvelous molecular devices and features? How could one quantify the complex specified information content?
Since proteins are required for any biologic function, one way to gain at least some insight into the scope of the problem would be to determine how plausible (or not) it is for nature to find viable proteins by chance, i.e. how rare they are. To do this one would need to assess the ratio of DNA sequences that code for viable proteins to the DNA sequences that do not. It is analogous to asking: what is the ratio of sequences of, say, 300 English words that form a meaningful passage to those sequences that are unintelligible gibberish?
The problem of conducting a random search through a large “space” of possible molecular sequences to find a viable one is often referred to as the “protein sampling problem.” It is fundamentally an information problem—a complex specified information problem, in fact. Neo-Darwinists do their best to ignore it by claiming that the powers of natural selection enable the process to skirt any improbabilities critics might throw their way. But Neo-Darwinism is a tandem process with two necessary causes—mutation and selection. Before selection can act, mutation must first produce a functional protein: the raw material of change. Furthermore, the process needs to discover viable DNA sequences that lead to viable proteins at each step along the way; otherwise they could not be selected.
Assessing the impact of the protein sampling problem is not easy given the size of protein space (20^300 combinations, assuming most proteins have about 300 amino acids; there are 20 different types of amino acids in living systems), but there have been estimates. Mathematician Hubert Yockey found that the probability of evolution finding the cytochrome-c protein sequence, which is only 150 amino acids (residues) long, is about one in 10^90. To put that number in perspective, a target the size of a grain of sand in the Sahara amounts to about one part in 10^20.
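The magnitudes cited above are easiest to compare on a log scale. This short sketch only restates the numbers from the text (the 20^300 sequence space, Yockey's one-in-10^90 estimate, and the one-in-10^20 grain of sand); it adds no new data.

```python
import math

# Order of magnitude of the sequence space for a 300-residue protein:
# 20^300 expressed as a power of ten.
log10_space = 300 * math.log10(20)
print(round(log10_space))        # 390 -> 20^300 is about 10^390

# How much smaller Yockey's 1-in-10^90 target is than the
# 1-in-10^20 grain of sand in the Sahara:
print(f"10^{90 - 20} times smaller than the grain-of-sand target")
```

So the full sequence space dwarfs even Yockey's already astronomical target by some 300 orders of magnitude.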
But this is a target for a specific protein, albeit a protein of modest size. What about the ratio of viable DNA-protein sequences to inviable DNA-protein sequences, assuming any function at all?
In the early 1990s, Robert Sauer, professor of biology at MIT, made an estimate for a protein of 92 residues (a small protein; most are about 300 amino acids, and some are 1,000). Sauer calculated the ratio of DNA sequences leading to viable proteins to those yielding no viable protein to be 1 in 10^64.
Biologist Doug Axe, working at Cambridge University, conducted the most recent study in 2004. Dr. Axe’s study is well documented in Stephen Meyer’s books Signature in the Cell and Darwin’s Doubt. Doug Axe and Stephen Meyer are Intelligent Design proponents. Axe conducted a study to determine the significance of the sampling problem for a protein with 150 amino acids.
The outcome of his research was that the ratio of viable to inviable DNA sequences is 1 in 10^77. In other words, for every DNA sequence that yields a viable protein, there are roughly 10^77 DNA sequences that do not. Obviously, for a larger (and more realistic) protein with 300 or more amino acids, the probability of finding a viable sequence becomes far smaller still.
Most functions in organisms require multiple proteins aggregated together to form a protein molecular machine or complex. We saw that multiple protein complexes and machines are required for any essential cellular function. This means that you would add the exponents (the 77 in 10^77) for each protein in any particular molecular device, and for all the devices that work together to achieve a particular function such as duplicating the DNA. Adding the exponents of all the proteins necessary to carry out any biologic function takes what is already an extraordinarily daunting problem in probabilities to one that is hopelessly implausible.
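The exponent-adding step can be made concrete with a small sketch. The 1-in-10^77 figure is Axe's estimate from the text; the ten-protein machine is a hypothetical example, not a measured count for any real complex.

```python
# Illustrative only: combining per-protein improbabilities.
# Multiplying probabilities means adding their exponents:
# (10^-77) * (10^-77) * ... = 10^-(77 + 77 + ...)
exponent_per_protein = 77     # Axe's viable/inviable ratio, from the text
n_proteins = 10               # hypothetical machine size

combined_exponent = sum([exponent_per_protein] * n_proteins)
print(f"joint probability ~ 1 in 10^{combined_exponent}")   # 1 in 10^770
```

Even a modest ten-protein machine pushes the joint improbability from 10^77 to 10^770 under this simple independence assumption.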
It is important to note that typically only about 50% of the amino acids in a protein have to be precisely what they are in order for the protein to support a function. In other words, the degree of specificity is about 50%. The calculations above account for that.
To put the number 1 in 10^77 in perspective, there are only about 10^65 atoms in the Milky Way. Scientists have estimated that about 10^30 organisms have lived on earth since the origin of life, and that there have been a maximum of about 10^43 trials, i.e. mutational attempts, in all the genomes of all the organisms that have ever lived. Yet the Cambrian explosion, the period during which most animal body plans appeared, allows for only about 20 million years. (There is some debate about that, but were one to take the fossil record at face value, these creatures appeared instantly.)
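Those "probabilistic resources" can be weighed directly against the target. For N independent trials each with a tiny success probability p, the chance of at least one hit is approximately N × p; the sketch below applies that to the text's figures (10^43 trials against a 1-in-10^77 target).

```python
# Probabilistic resources vs. target size, on a log10 scale.
# For tiny p, P(at least one hit in N trials) ~= N * p,
# so on a log scale the exponents simply subtract.
log10_trials = 43    # total mutational trials, per the text's estimate
log10_target = 77    # 1-in-10^77 viable-sequence ratio, per Axe

log10_hit = log10_trials - log10_target
print(f"P(at least one hit) ~ 10^{log10_hit}")   # ~ 10^-34
```

On these assumptions, exhausting every trial ever available still leaves the expected number of hits at about one in 10^34.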
Clearly, if this research holds up—if DNA sequences that yield viable proteins really are so sparse in sequence space—there is no plausible way that any of the necessary proteins in life could have arisen.
But there is one other possibility. Imagine a vast circle representing the space of all possible DNA sequences for, say, 300 amino acids. That is an enormous space: 20^300 sequences (there are 20 amino acids, i.e. 20 sets of 3 DNA base pairs that code for them). Then imagine inside that circle many dots, each representing a family of DNA sequences yielding viable proteins. The collection of dots would represent roughly 1/10^77 of the overall sequence space that has to be sampled, i.e. searched through. The question is: how tightly clustered are these viable sequences within the overall space? If they are tightly clustered, it is at least possible that, by a stroke of luck, nature could have hit the jackpot and found this isolated section of rich DNA-protein sequence space. From there, some would argue, incremental changes could have navigated to the others.
Doug Axe looked at this possibility and found that it is not the case. Proteins do not appear to be huddled together in a small sector of the vast sequence space where a lucky strike might yield a gold rush of proteins. The DNA sequences coding for viable proteins are dispersed throughout the sequence space. Therefore navigating from one viable sequence to another is highly improbable.
The depiction below shows several dots in a circle. In the analogy, the dots are the DNA sequences that code for viable proteins. The circle—all the empty space—is all the potential DNA sequences that yield nothing. The diagram is, of course, just a depiction and not intended to reflect reality; the vastness of the nothingness is grossly understated given the limitations of computer graphics.
Note that the dots are dispersed. This means that the DNA sequences between the large families of protein folds are very different. Finding one does not mean you have hit a lucky strike and could, with a few changes in the DNA sequence, find the rest. In other words, areas of viable proteins are dispersed and could not be traversed by Neo-Darwinian mechanisms, because there are vast areas of nothingness where no set of incremental steps—which have to be functional if Neo-Darwinism is true—could be taken to get from one dot to the next.
Darwinists attempt to counter these poor probabilities by reducing the number of amino acids required in early proteins and assuming less specificity. And, as always, they invoke the magical mystical powers of natural selection. But selection is of little help if you cannot navigate from one viable protein to the next along a smooth pathway where each step is useful—more useful than the previous. And it appears that there is no smooth pathway from protein to protein through sequence space.
4.3 “Codes”
Another aspect of complexity in living systems is the existence of codes. There are no codes known to have been produced by natural processes. Neo-Darwinists, of course, claim that the DNA transcription-translation code for making proteins from DNA was created by natural processes, but this is a supposition based on materialism. You have heard about the DNA code. But there is actually more than one code. I will first discuss the DNA transcription-translation code, since it is the best understood and most well known.
4.3.1 DNA Transcription-Translation Code
What is the DNA transcription-translation code? It is the way that the information in DNA is first transcribed into RNA and then translated to make proteins. It is the most essential process in all of biology. It is important to note that this is a code and not a template. There is a difference. A code involves an intermediary that allows for an unconstrained, one might say arbitrary, arrangement of things. In this case these “things” are the amino acids whose sequence defines a particular protein. Proteins, you recall, do all the essential work in the cell and form its structures. A template would be a case where the physical structure of the DNA itself directly defined the arrangement of protein components (amino acids).
In the diagram below, if you look closely you will see a transfer RNA molecule that carries an amino acid on one end and matches a set of three bases on the messenger RNA inside the protein-assembly molecular machine called the ribosome. So the sequences in the DNA indirectly create the sequences in the protein. It is a lot like a language. The four bases, read in groups of three called codons, form 64 possible combinations that map and match to a complementary set of three bases on the transfer RNAs. These codons specify the 20 amino acids that are concatenated to form proteins.
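The code-like (rather than template-like) character of this mapping can be sketched as a lookup table. The snippet below uses a small subset of the standard RNA codon table (the full table has 64 entries); the point is that the mapping is an arbitrary association, not a chemical necessity:

```python
# A small subset of the standard RNA codon table: codons map to amino acids
# (or to a stop signal) by convention, not by any direct chemical templating.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UGC": "Cys", "UAA": "STOP",
}

def translate(mrna: str) -> list:
    """Read an mRNA string three bases at a time, mapping codons to amino acids."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

In the cell, the dictionary lookup is performed physically by the transfer RNAs inside the ribosome.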
The DNA transcription-translation code is, of course, copied during DNA replication and is therefore central to genetic inheritance.
But there are other “codes,” some of which strike one more as templates.
4.3.2 Epi-Genetic Code
I will talk more about epigenetics in the following section. But for now I will only say that there is a code, an “epigenetic code” that involves different types of modifications that can be applied to the DNA molecule which act to inhibit the transcription (the making of a messenger RNA) of a particular sequence of DNA. This inhibiting of transcription disallows that sequence from being translated into proteins.
4.3.3 Sugar Code
The sugar code or glycocode which is not part of the DNA, appears to play an important role in the development of multicellular animals. According to Intelligent Design developmental biologist, Jonathan Wells, “surface glycans in early mouse embryos change in a highly ordered and stage specific manner. The research shows that they mediate cellular orientation, migration, and responses to regulatory factors during development.”
The sugar code is determined by a complex set of patterns in the sugar molecules on the cell surface—the cell membrane. These sugar molecules can be attached to lipids or proteins. Because of the complexity of the sugar molecule and the various forms these molecules can take, they can carry large amounts of information on the cell membrane surface. In fact, by some estimates the glycocode can convey far more information than the genome itself.
The sugar code is interpreted by proteins called lectins. “These lectins associate specifically with three dimensional structures of other molecules.” The information in the sugars attached to the cell membrane appear to affect cell to cell communication and how cells interact with one another during development.
4.3.4 Membrane Code
There are also patterns of the three dimensional arrangements of membrane-associated proteins, lipids and carbohydrates in the cell membrane that appear to play a role in the development of multi-cellular animals prior to the initiation of the genetic regulatory networks. The Membrane code, as Jonathan Wells calls it, “determines the spatial gradient for things in the cell.” They do this by providing targets and sources for intracellular transport and signaling.
4.3.5 “Endogenous” Electric Code
Perhaps the most interesting “code” is the endogenous electric field, although “code” might not be the proper term. “Endogenous” means generated from within. Electric fields emanate from the cell membrane during the embryonic stages of development and appear to provide information important to the spatial layout of a developing animal. The inside of every living cell is electrically negative with respect to its external environment.
That electric fields are involved in the development of multi-cellular animals is evidenced by the fact that perturbations of these fields result in abnormalities in, or a halting of, the developmental process.
Please view the video at the link below. It shows that electric fields—bioelectric signals—cause groups of cells to form patterns marked by differing membrane voltage levels. When a developing frog embryo is stained with a dye, the negatively charged areas shine brightly while other areas appear darker creating an "electric face."
4.3.6 Summary of Codes
There are a couple of important general points to be made about these codes. These codes—like all codes—have a source and a target, a coder and a decoder. This means there is a reciprocal dependency: there would be no point to an encoding structure without the ability to decode. As we learned in the section on Complex Specified Information, the quantity of complexity is determined in part by dependencies, and especially interdependencies. There is an obvious interdependency in a code by virtue of the need for an encoder and a decoder.
Moreover, and importantly (this is a controversial and contentious topic): since these other codes are involved in defining the form of a multi-cellular animal during development and yet are not specified in the DNA, they cannot be part of the Neo-Darwinian process.
Therefore, it would appear that there is good evidence, despite the outcry from ultra-Darwinists, that the DNA gene regulatory networks (genetic algorithms or programs), once hailed as the secret of how an animal is given its form (because there was no alternative), do not completely specify the form of an animal. This represents another unwelcome surprise for Neo-Darwinism. The subject is still somewhat unresolved, but the preponderance of evidence suggests that the DNA does not entirely specify the form of an animal.
Although it is true that DNA specifies the materials of a cell including all the components involved in these other codes, nevertheless, due to RNA splicing and editing processes that occur after transcription of RNA from DNA but before translation of proteins, DNA sequences do not fully specify the final functional forms of most membrane components. According to Jonathan Wells, “these networks must be localized in spatial domains for the embryo to differentiate into various cell types and organs, and those domains must be spatially ordered with respect to each other for the organism to develop its proper morphology.”
Furthermore, the information associated with the cell membrane that appears to account for some important architectural aspects of the form of multi-cellular animals is extraordinarily complex. And it is likely to be very specific. Therefore random changes to the information in the membrane are unlikely to be something that a purely naturalistic account of the evolution of life could subsume, and membrane information is therefore unlikely to serve as a supplement to materialism as the Neo-Darwinian explanation falters.
As an interesting aside: you might occasionally hear of an Intelligent Design scientist who questions even Darwin’s “fact of evolution,” i.e. common descent, and because of that you might be tempted to dismiss everything they say. However, one should know that this reluctance to accept common descent stems from the inconvenient finding just discussed (that the form of an animal appears not to be entirely determined by the DNA) and also from the number of new genes (“orphan genes”) in each new species. This second point I will address in the next section. Epigenetics and horizontal gene transfer (also discussed in the next section) are other viable reasons for questioning the doctrine of common descent. My feeling is that the truth or falsity of common descent is irrelevant to the question of design in nature—teleology.
4.4 Summary – Complexity of Life
There are many details I have glossed over in this section. The effect of omitting these details is to make Neo-Darwinism seem more plausible than it really is. For example, I have omitted any discussion of the energy requirements for these molecular machines to do their work, and I have not discussed the fact that proteins, in order to do anything, have to be folded in a precise three-dimensional way. In most cases proteins require another category of proteins, called chaperones, to do this. Some have claimed that if the protein folding process were entirely random, in other words not determined by physics and chemistry and unaided by chaperone proteins, then even a single 100-residue protein could not find its functional conformation by chance given the entire history of the universe.
Living systems clearly exhibit massive amounts of complex specified information and, it appears, not all of it is stored in the DNA. Materialists claim that this complexity can be assembled by emulating agency causation in place of true agency-design-through natural selection along with random mutation. The next few sections of this paper evaluate the viability of that claim.
5 Origin of Life (Abiogenesis)
Materialists believe that life is entirely mechanistic, meaning that no true agency causation is required to account for it. According to materialists, life simply emerged, somehow. But there is no good theory of how life arose from matter. There are plenty of theories, but the whole endeavor seems to have reached an impasse. The Urantia Book has this to say about the origin of life:
“When, in accordance with approved formulas, the physical patterns have been provided, then do the Life Carriers catalyze this lifeless material, imparting through their persons the vital spirit spark; and forthwith do the inert patterns become living matter.”
Clearly the statements in the Book are at odds with mainstream science. According to Michael Denton in Evolution: Still a Theory in Crisis:
“Absolutely no plausible well-developed hypothetical evolutionary sequence has ever been presented showing how the cell might have evolved via a series of simpler cell-like systems.
In Evolution: A Theory in Crisis I wrote: ‘Between a living cell and the most highly ordered non-biological system… there is a chasm as vast and absolute as it is possible to conceive.’ Thirty years on, the situation is entirely unchanged.
“No one has provided even the vaguest outlines of a feasible scenario, let alone a convincing one.”
Consider these statements by the pre-eminent atheist in the world—Richard Dawkins:
As a refresher, the following link is to an excellent video covering the essential functions of living organisms.
The origin of life is confounding for a few reasons. First, the complexity of life is astounding, as you have seen. There are many interdependencies: every fundamental process requires multiple molecular machines, which owe their existence to other molecular machines, which often owe their existence to the first set of molecular machines, among others. Second, unlike the evolution of life, which can invoke a selection process to lock in incremental change and therefore needs only incremental amounts of random chance (luck), the origin of life requires many enormously improbable leaps of luck, chance arrangements of molecular components in just the right way, before a set of replication and translation molecular devices exists. As evolutionary biologist Eugene Koonin comments:
“For biological evolution that is governed, primarily, by natural selection, to take off, efficient systems for replication and translation are required, but even barebones cores of these systems appear to be products of extensive selection.”
When Koonin says “extensive selection” he means that the complexity involved would seem to require a step-wise process involving natural selection because a chance arrangement of such a barebones “replication-translation” system would be prohibitive in terms of probabilities.
Somewhat surprisingly, it is not easy to find probability calculations on the origin of life. I suspect there are two reasons for this. First, the scope and complexity of the problem are immense. Second, to be motivated enough to perform such a calculation, you would have to be trying to demonstrate either that some alternative theory of the origin of life is more plausible than another or that no alternative is plausible. The former is of course the more common inhibiting factor, because the primary alternative hints at divine intervention. Therefore most materialist scientists are not motivated to conduct a calculation that they suspect, or know, would be somewhat embarrassing.
Those who embrace teleology on the other hand, such as Intelligent Design proponents, are highly motivated to determine these probabilities but have been hampered by the fact that the problem is so intractable.
5.1 RNA World
Earlier theories on the origin of life viewed the process as a purely chance event, a frozen accident in time despite its improbability. Nowadays the thinking is based on self-ordering principles that can help the process along. The hope of self-ordering principles is that deterministic laws of chemistry and ultimately physics can be relied upon to aid the process. These are things such as bonding affinities.
Self-ordering theories appear to have originated in the world of physics, where it is noted that some physical systems self-organize, such as water going down a drain in a vortex. But self-ordering in physics is deterministic and therefore involves relatively minimal complex specified information. Life, on the other hand, is information-rich. What is needed to produce life is an extraordinarily high level of complex specified information. As difficult as chance is, it nevertheless has to play an important role in the arrival of the first living cell from pure chemistry, assuming one sees that as the only possibility.
Researchers have to approach the problem from the most reasonable starting point. Because the RNA molecule is central to the transcription process and the protein translation process, and has been shown to be able to catalyze all of the biochemical reactions required for life, it is the obvious candidate to initiate the long, complex sequence of chemical evolution that is purported to have culminated in the origin of life. And relatively short RNA molecules capable of replication have been artificially produced in labs.
The proposals that rely on RNA first are collectively referred to as the “RNA world” hypotheses. The RNA world requires that ribosomal RNAs must once have performed essential biochemical functions now performed by proteins.
But there are several problems with the RNA world hypothesis. One problem is the “clutter problem” cited by Gerald Joyce, a professor at The Scripps Research Institute. A wide variety of organic biomolecules assemble through natural mechanisms, but only a few of these are useful in life. There is no clear natural way to sort through the clutter to produce a replicating-translating system.
A second problem is the ability to replicate some sort of a stored template (DNA is a template). Therefore assuming that the basic building blocks of life such as sugars, amino acids, lipids and nucleotides could have accumulated somewhere, and setting aside the clutter problem, the molecules would have to have assembled themselves into a replicating system. The fact that a living cell that has just died cannot be resurrected despite having all the necessary biomolecules informs us that this problem is a difficult one. Two possibilities have been proposed to address this problem.
The first possibility is to avoid the need for a replication template altogether. This assumes that the precursor biomolecules form what Stuart Kauffman calls an autocatalytic set where the members of the set collectively have the ability to synthesize (produce a copy of) every other member of the molecular assembly. The second possibility is that the first self-replicating system was an RNA ribozyme which can synthesize proteins. There is no clear indication whether either of these proposals is feasible. Currently no such set of autocatalytic biomolecules has ever been produced nor has any RNA molecule capable of replicating itself been identified.
The third problem related to the RNA world hypothesis is the origin of the genetic code. As you have seen, DNA is transcribed into RNA. The RNA consists of “codons,” sets of three bases; each codon codes for a specific amino acid. The amino acids are concatenated together in the process called protein translation. There is no plausible scenario for the evolution of the DNA-protein genetic code. In a recent critical paper summarizing this current impasse in the origin-of-life field, evolutionary biologists Eugene Koonin and Artem Novozhilov comment:
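The combinatorics of the codon scheme are easy to check: three bases drawn from a four-letter alphabet give more than enough combinations to cover the 20 amino acids, which is why the standard code is redundant (several codons per amino acid, plus stop signals).

```python
bases = "ACGU"
# Enumerate every possible three-base codon from the four RNA bases.
codons = [a + b + c for a in bases for b in bases for c in bases]
print(len(codons))  # 64 codons available for 20 amino acids plus stop signals
```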
“Despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made.
“At the heart of this problem [the origin of the translation code] is a dreary vicious circle: what would be the selective force behind the evolution of the extremely complex translation system before there were functional proteins? And, of course, there could be no proteins without a sufficiently effective translation system.”
The fourth problem is the information problem identified in the preceding section of this paper covering the complexity of life. This relates to the sparseness of DNA sequences that code for viable proteins compared to the vast overall set of possible DNA sequences. As a reminder, the most recent calculation, assuming a rather small protein of just 150 amino acids, is that the set of DNA sequences yielding a viable protein comprises only 1 in 10^77 of the overall possible set of DNA sequences. And this is probably an optimistic estimate, because most proteins are longer than 150 amino acids.
5.2 Calculations of Probabilities on the Origin of Life
There are a few recent calculations on the origin of life I want to cover briefly.
Eugene Koonin, referenced above, is one of the few evolutionary biologists who have offered a calculation —what he refers to as a “toy” model—on the origin of life. He set out to establish a barebones minimal set of biomolecules to carry out the essential functions of living organisms.
"Despite considerable experimental and theoretical effort, no compelling scenarios currently exist for the origin of replication and translation, the key processes that together comprise the core of biological systems and the apparent pre-requisite of biological evolution. The RNA World concept might offer the best chance for the resolution of this conundrum but so far cannot adequately account for the emergence of an efficient RNA replicase or the translation system.
Koonin goes on to sketch out a minimal set of biomolecules necessary for life:
“A ribozyme replicase consisting of ~100 nucleotides is conceivable, so, in principle, spontaneous origin of such an entity in a finite universe consisting of a single O region [observable universe] cannot be ruled out in this toy model (again, the rate of RNA synthesis considered here is a deliberate, gross over-estimate).
The requirements for the emergence of a primitive, coupled replication-translation system, which is considered a candidate for the breakthrough stage in this paper, are much greater. At a minimum, spontaneous formation of:
- 2 rRNAs with a total size of at least 1000 nucleotides
- ~10 primitive adaptors of ~30 nucleotides each, ~300 nucleotides in total
- at least one RNA encoding a replicase, ~500 nucleotides (lower bound)

is required.
“…even in this toy model that assumes a deliberately inflated rate of RNA production, the probability that a coupled translation-replication emerges by chance in a single O-region [observable universe] is P < [probability is less than] 10^-1018. Obviously, this version of the breakthrough stage can be considered only in the context of a universe with an infinite (or, in the very least, extremely vast) number of O-regions.”
To give you a sense of how vastly improbable the number 10^-1018 is, there is something called the universal probability bound, which is the product of the number of particles in the universe (10^80) and the number of the smallest increments of time (Planck times) that have elapsed since the beginning of the universe (10^59), giving 10^139. The universal probability bound represents the maximum number of possible events that could have occurred since the universe began. It is therefore a very charitable upper bound on what is possible for any chance event or series of events.
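The comparison can be done entirely with base-10 exponents; the figures below are the ones quoted in the text, not independent estimates:

```python
# Universal probability bound, as the text computes it:
log10_particles = 80        # particles in the observable universe: 10^80
log10_planck_ticks = 59     # Planck-time increments since the Big Bang (text's figure): 10^59
log10_bound = log10_particles + log10_planck_ticks  # 10^139 possible events

# Koonin's toy-model probability for a coupled replication-translation system:
log10_koonin = -1018        # i.e. P < 10^-1018

# Orders of magnitude by which the event falls below the bound:
shortfall = -log10_koonin - log10_bound
print(log10_bound, shortfall)  # 139 879
```

On these figures the event falls 879 orders of magnitude short of even the most charitable budget of opportunities.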
Koonin is a materialist and certainly has no fondness for teleology; he is not motivated by a need to demonstrate a Creator in calculating the probability of life arising from matter. Rather, he is showing that the only realistic way a materialist can maintain a purely naturalistic origin of life is by adopting one of the multiverse theories. This way, the probabilities are amortized over a greater, potentially infinite, number of universes of opportunity.
“The ‘many worlds in one’ version of the cosmological model of eternal inflation could suggest a way out of this conundrum because, in an infinite multiverse with a finite number of distinct macroscopic histories (each repeated an infinite number of times), emergence of even highly complex systems by chance is not just possible but inevitable."
One of the approaches to the problem that Koonin cites involves the “virus world” concept. The virus world envisions what Koonin refers to as a “communal” stage of evolution, with nascent populations of membrane-less genetic elements, more like viruses, prior to any distinction between viruses and membrane-bound cells.
According to Koonin,
“The notion of the virus world stems, primarily, from the fact that a set of genes encoding essential proteins involved in viral genome replication, packaging, and virion formation (virus hallmark genes), are shared by numerous groups of dissimilar and, otherwise, apparently, unrelated viruses.”
At some point, the separation occurred between self-sustaining cellular systems on the one hand and parasitic viruses on the other.
There is a lot of speculation in the virus world proposal, and that is fine; you have to start somewhere, of course. But ultimately a good deal of genetic material for replication and translation had to be in place for viruses as well. And that is a large part of the problem: where does the information to specify viable proteins come from?
No matter what scenario you propose, no matter how plausible, and no matter how abundant you suggest organic material, especially RNA, might have been, you still need to find a few hundred viable protein-coding sequences.
The calculations Koonin and others make are not really affected by any particular scenario, because they usually adopt a very favorable set of circumstances anyway. The result Koonin obtains, though wildly improbable, may be wildly optimistic as well. It certainly seems optimistic compared to the most recent and thorough calculation, done by a skeptic of materialism, Stephen Meyer, discussed further below.
The outcome of any particular calculation of the probability of abiogenesis is largely predicated on what the minimal set of biomolecules for life is assessed to be. An empirical way of determining this minimal set is to start with the most primitive cell known and degrade it by knocking out genes one by one until you have the minimal set of genes capable of sustaining life. Craig Venter approached the problem in this fashion. His team started with Mycoplasma and degraded this cell. The result is a set of 473 genes which appear to be required for a minimal cell. Four hundred seventy-three genes is still very complex, as Venter commented:
“We're showing how complex life is, even in the simplest of organisms. These findings are very humbling.”
The most recent, and probably most sound, such calculation on the origin of life has been made by Stephen Meyer, a leading Intelligent Design proponent. Meyer assumed only 250 proteins necessary for life, considerably fewer than Venter’s research indicates but more than Koonin’s. Meyer used Doug Axe’s research showing that viable proteins are 1 in 10^74 of sequence space. He then noted that Axe’s calculation did not take into account the bond type between amino acids or the left- or right-handedness of the amino acids required by life as we know it. From this he calculated that the probability of a viable protein of a modest length of only 150 amino acids is actually 1 in 10^164. From there a straightforward calculation with 250 proteins yields a probability of about 1 in 10^40,000, which coincidentally is what the late British cosmologist Sir Fred Hoyle calculated nearly 40 years ago.
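The composition of Meyer’s per-protein figure can be sketched as follows. The factor-of-one-half per bond and per residue is taken from Meyer’s published argument, an inference on my part, since the text gives only the input (1 in 10^74) and the output (1 in 10^164):

```python
from math import log10

n = 150  # amino acids in the model protein

# Chance that every inter-residue link is a peptide bond (~1/2 per link, n-1 links):
log10_peptide = (n - 1) * log10(2)   # ~44.9 orders of magnitude
# Chance that every residue is the left-handed isomer (~1/2 per residue):
log10_chiral = n * log10(2)          # ~45.2 orders of magnitude
# Axe's functional-fold estimate: 1 in 10^74
log10_function = 74

total = log10_peptide + log10_chiral + log10_function
print(round(total))  # ~164, i.e. 1 chance in ~10^164 per protein
```

The three factors multiply (their exponents add), which is how 10^74 becomes roughly 10^164.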
The fundamental problem is that there is no deterministic, lawful way for the viable sequences to arise through natural causes so as to produce the massive amounts of new complex specified information. Just as the sequence of letters on a page is not determined by the chemistry of the ink or paper, so the sequence of bases on the DNA molecule is not determined by physics or chemistry.
You can see this by looking at the construction of the DNA molecule. Note in the diagram below that there is no bond between the bases on either side (labeled T, G, A, C on the left and G, T, C, A on the right) along the spine of the molecule. Therefore the arrangement, or sequence, of bases is not determined in any way by physics or chemistry. And in fact it could not be any other way: if the molecule enforced a set of deterministic laws on the sequence, DNA would not have the freedom to produce the necessary range of molecules for the rich set of functions life exhibits.
This point—the distinction between order and complexity—is commonly confused, as I discussed in Section 2, Complex Specified Information. The following debate between atheist Peter Atkins and Intelligent Design theorist Stephen Meyer demonstrates how atheists miss this point (view the first 20 minutes) at:
DNA information is analogous to human text. If there were too many rules—constraints, order—about which letters could follow, then you could not form the rich variety of words to express all the concepts related to human knowledge, human artifacts and artistic renderings. So if we are looking for 30,000 facts to disprove that life is the result of accidental causes, perhaps we can start with this one.
6 Neo-Darwinism
The reigning theory of evolution is the Modern Synthesis, often referred to as “Neo-Darwinism.” The theoretical underpinnings of the Modern Synthesis were laid down from the 1920s through the 1940s. The names most commonly associated with the theory are Theodosius Dobzhansky, Ernst Mayr, and George Gaylord Simpson, but you should also include the population geneticists Ronald Fisher, J. B. S. Haldane, and Sewall Wright.
6.1 What is Neo-Darwinism
Neo-Darwinism proposes that life is mechanistic and follows the laws of physics and chemistry. The mechanism put forth to produce new adaptive features of life is a tandem process of random mutation and natural selection. Random mutation means that the changes (“mutations”) that natural selection acts upon are not in any way solicited by the organism. In other words, if and when mutations are beneficial to the organism, that is purely a matter of chance.
Mutation, which initially meant single point mutations in DNA base pairs, has had to be expanded rather considerably as researchers identify other sophisticated sources of change in DNA and RNA prior to protein translation. Some of these other mechanisms are exon shuffling, gene duplication, retropositioning of messenger RNA transcripts, lateral gene transfer, transfer of mobile genetic units or elements, and gene fission or fusion. But it is important to remember that, from a Neo-Darwinian perspective, these other mechanisms, despite their sophistication, must also be random, in other words not based on any intentional solicitation by the organism according to that organism’s need.
Because mutational changes are random, conventional Neo-Darwinism would expect them to be small and incremental. They would be expected to be small and incremental because it would be unreasonable to expect that a single chance change in DNA would produce a large scale improvement such as an entirely new protein or feature.
Random changes are thought of as arising stochastically (randomly, by chance) from distinct base-pair changes. This means that evolution must accumulate many changes to produce major change in genome and phenotype. This idea of gradual incremental change is best illustrated by Richard Dawkins’s metaphor in his book Climbing Mount Improbable: just as a human would have trouble climbing the steep face of a mountain, he could reasonably expect to climb it up the gentle slope.
Once a random change occurs, Neo-Darwinists argue, any change that offers even a slight benefit—increases the fitness of the organism—would, were it to happen enough times, be locked into the population by natural selection. So natural selection, along with genetic drift (whereby changes accumulate by chance even in the absence of an increase in fitness), slowly changes the genetic makeup of the population. Over time an accumulation of these changes results in a divergence of the parent and child populations, resulting in a new species.
Because the changes in the genome are random, the process is said to be directionless; the entire Neo-Darwinian process is said to have no target. Because there is no target, evolutionists who promote Neo-Darwinism as a nullification of Deism or Theism ask rhetorically: if the process is directionless (non-teleological), how could it be reconciled with a Designer? There would be nothing for a Designer to do.
Nobel Laureate Jacques Monod famously said:
“Chance alone is at the source of every innovation, of all creation in the biosphere. Pure chance, only chance, absolute but blind liberty...Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance. His duty, like his fate, is written nowhere.”
That is why Richard Dawkins has said that to suppose that Christian Theology is compatible with evolution is to misunderstand evolution and also that, “Darwinism made it possible to become an intellectually fulfilled atheist.”
Now there is some confusion on this point that Neo-Darwinism is undirected. The random changes, the "mutations," give Neo-Darwinism its random nature. But natural selection is often touted as supplying a kind of direction, because it would channel life only along somewhat narrow corridors of viability. Using natural selection in this way is the rhetorical technique Neo-Darwinists use to explain the ubiquity of convergence. The problem with explaining away convergence in this way is that the random mutations have to occur in the first place, and that is the problem, as I will explain later in the section on Convergent Evolution.
6.2 Current Status of Neo-Darwinism
Although the Intelligent Design group has assembled an impressive list of Neo-Darwinian skeptics, it is probably the case that the vast majority of biologists, not having kept up with the research and having been steeped in materialist science throughout much of their adult life, still accept the general Neo-Darwinian narrative or something like it, by default.
But there is also an intimidation factor. Anyone coming out of the closet as a supporter of Intelligent Design or offering comments which are sympathetic to Intelligent Design and perhaps even not denouncing Intelligent Design in a vigorous enough fashion, might be subject to a purge and be drummed out of the materialist—atheist academy. I suspect that there are many, many Intelligent Design sympathizers lurking in the corridors of academia. But like Reagan supporters, they better keep their mouths shut, lest they find themselves under the bright lights.
Incidentally, the “drumming out of the academic corps” so to speak of Intelligent Design proponents or sympathizers is similar to what Bruce Rosenblum and Fred Kuttner recount in their book, Quantum Enigma: Physics Encounters Consciousness, related to a physicist who might have some impure thoughts:
“An Einstein biographer tells that back in the 1950s a non-tenured faculty member in a physics department would endanger a career by showing any interest in the strange implications of quantum theory.”
At this point it would be helpful for you to watch the interview with David Berlinski, an agnostic mathematician and philosopher, who has been a critic of Neo-Darwinism (view the video at the link below up to the 11:06 mark):
Recent research, as I will briefly sketch out in this section, is pointing unequivocally away from Neo-Darwinism and toward Intelligent Design. In his book Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, renowned philosopher Thomas Nagel makes the following comment:
“Whatever one may think about the possibility of a designer, the prevailing doctrine—that the appearance of life from dead matter and its evolution through accidental mutation and natural selection to its present forms has involved nothing but the operation of physical law—cannot be regarded as unassailable. It is an assumption governing the scientific project rather than a well-confirmed scientific hypothesis.”
The opening statements of Dr. James Shapiro’s book entitled, Evolution: A View from the 21st Century are:
“Innovation, not selection, is the critical issue in evolutionary change. Without variation and novelty, selection has nothing to act upon. So this book is dedicated to considering the many ways that living organisms actively change themselves. Uncovering the molecular mechanisms by which living organisms modify their genomes is a major accomplishment of late 20th Century molecular biology. Conventional evolutionary theory made the simplifying assumption that inherited novelty was the result of chance or accident.” [Emphasis mine]
He goes on to explain that these innovations are not "random mutations." Rather, they are engineered by the cell using what he refers to as "Natural Genetic Engineering" techniques. Shapiro claims, based on our current knowledge of bacteria, that these natural genetic engineering techniques were present even in the earliest cells.
In 2008 a group of sixteen leading scientists assembled at the Konrad Lorenz Institute for Evolution and Cognition Research in Altenberg, Austria. The purpose of the conference was to discuss the relevance of the core claims of Neo-Darwinism. Science writer Suzan Mazur, who interviewed many of the participants in advance of the conference, wrote a book called The Altenberg 16, in which she commented:
“There are hundreds of other evolutionary scientists (non-Creationists) who contend that natural selection is politics, not science, and that we are in a quagmire because of staggering commercial investment in a Darwinian industry built on an inadequate theory."
One of the general themes of the conference was a shift in focus from natural selection as the primary driving force of evolution to something called self-organization. Darwinists take a dim view of those advancing self-organization theories because, as Eugenie Scott (whom you met in the evolution section) remarked, people can become confused over the difference between self-organization and teleology.
But there is a distinction between self-ordering and self-organization. Self-ordering implies that the molecular components, guided by the laws of physics and chemistry, can assemble complex structures.
Self-organization theories claim that biologic form is epigenetic, meaning that DNA does not determine the form of an organism. But these theories are short on specifics and long on hope, as they are unclear about how complex molecular arrangements, such as those we see in living systems, arise. The hope of any materialist, of course, is to uncover laws that can lead to these complex structures, but for now it is a mystery. For that reason, the term self-organization can be used to imply that teleological or even vitalistic forces are at work.
According to Denis Noble, physiologist at Oxford:
“All the central assumptions of the Modern Synthesis (often called Neo-Darwinism) have been disproven. Moreover, they have been disproven in ways that raise the tantalizing prospect of a totally new synthesis: one that would allow a re-integration of physiological science with evolutionary biology. It is hard to think of a more fundamental change for physiology, and for the conceptual foundations of biology in general…”
Dr. Noble will be heading up a conference “New trends in evolutionary biology: biological, philosophical and social science perspectives” sponsored by the Royal Society of London in partnership with the British Academy. The purpose of the conference is to discuss developments in evolutionary biology and adjacent fields that have produced calls for revision of the standard theory of evolution in support of an “Extended Synthesis.”
Of course many Neo-Darwinists are unhappy about this, as they are with the Templeton Foundation's grant of over $8 million for research by UK, Swedish and US researchers to "put a revisionist view of evolution, the so-called extended evolutionary synthesis, on a sounder footing." Many ultra-Darwinists, such as the University of Chicago's Jerry Coyne, are calling on these researchers to decline the grant money.
As I mentioned, Dr. Noble claims that the major assumptions of Neo-Darwinism are false. What are the major assumptions of Neo-Darwinism?
- Random mutations – That the changes in the genome that natural selection acts upon are not solicited by the environment to the benefit of the organism.
- Gradual change – That evolutionary change occurs gradually by the step-wise process of random mutation and natural selection.
- Central Dogma – That information flows only from genes to protein (“DNA makes RNA makes Protein” ).
- One gene, one protein – That there is a one-to-one relationship between genes and the proteins they code for: each gene codes for one protein.
- Natural selection – That natural selection is the primary mechanism driving evolution.
We will look at these in the next section of the paper.
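The Central Dogma assumption, "DNA makes RNA makes Protein," can be made concrete with a toy sketch of transcription and translation. The codon table below is deliberately truncated to a handful of real codon assignments; a complete table has 64 codons, and real gene expression involves the splicing and regulatory layers discussed elsewhere in this paper.

```python
# Toy illustration of the Central Dogma: DNA -> RNA -> protein.
# Codon table truncated for brevity; real cells use all 64 codons.
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna):
    # RNA polymerase copies the sequence, replacing T with U
    return dna.replace("T", "U")

def translate(rna):
    # The ribosome reads codons (triplets) until a stop codon
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE.get(rna[i:i + 3], "?")
        if aa == "STOP":
            break
        protein.append(aa)
    return "-".join(protein)

dna = "ATGTTTGGCAAATAA"
rna = transcribe(dna)       # "AUGUUUGGCAAAUAA"
protein = translate(rna)    # "Met-Phe-Gly-Lys"
```

This linear, one-way picture is exactly what the Central Dogma and one-gene-one-protein assumptions encode; the later discussion of splicing and overlapping codes is what calls it into question.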
Dan Graur, a molecular evolutionary bioinformatician at the University of Houston, recently weighed in on the ENCODE research. The ENCODE research project was a massive undertaking. Among other things, ENCODE concluded that the genome is very complex, far more complex than ever imagined, and that much of the DNA once considered junk is in fact used in a wide variety of complex regulatory functions. According to Science magazine, "Graur's atheism inflamed his anger at ENCODE." It's not surprising that Graur would become emotional over ENCODE given his blunt framing of the issue in a talk he gave in 2013:
“If the human genome is indeed devoid of junk DNA as implied by the ENCODE project, then a long, undirected evolutionary process cannot explain the human genome. If, on the other hand, organisms are designed, then all DNA, or as much as possible, is expected to exhibit function. If ENCODE is right, then Evolution is wrong.”
Clearly a growing number of notable scientists are now openly questioning Neo-Darwinism, and there is evidence behind all this, including the ENCODE data.
I think the science may be nearing a tipping point where there could be a confluence of younger scientists who are less inclined to submit to the old ways on the one hand and, on the other hand, established scientists nearing retirement who have harbored doubts about the theory and now see abandoning Neo-Darwinism as a way of establishing some rightful legacy for themselves. I don't like to make predictions, but my guess would be that in the next 20 years, it will be recognized by all but a few die-hards that Neo-Darwinism is, and has been for 70 years, a spectacular canard.
In this section I am going to present recent evidence that conflicts with the modern synthesis “Neo-Darwinism” or any purely naturalistic theory of evolution. One way to think about this is to imagine what the signature of a purely naturalistic process would look like on the one hand and what a teleological process would look like on the other. A natural process, according to the view held by evolutionary biologists, should be a gradual, step-wise process with incremental change. This is because true agency causation—intelligent causation—is excluded.
Also, since Neo-Darwinism involves random mutation as a necessary cause, it is inherently stochastic. Therefore, there is no target, no goal in the evolutionary process. If there is no target or goal, then one would not expect to see repeated patterns in biologic forms and functions. A teleological process could, however, involve rapid change and similar patterns in form and function.
7.1 Sudden Appearance of Complex Biologic Features
[Life forms] do not evolve as the result of the gradual accumulation of small variations; they appear as full-fledged new orders of life, and they appear suddenly. The sudden appearance of new species and diversified orders of living organisms is wholly biologic, strictly natural. [58:6.3‒4] (P. 669)
An abrupt appearance of highly complex organisms is a signature of design—design would be the default view in that case. Richard Dawkins, among many others, has said that you can count on some luck in evolution, but not too much, and that you can only hope to acquire complexity by natural means with luck coupled with a selection mechanism. Neo-Darwinism is an incremental process and takes time, lots of time. The problem for Neo-Darwinism is that recent research is squeezing the time available for complex living features to have arisen. Now it seems many evolutionary biologists are having to accept as possible that which they have declared impossible in the past.
Consider James Shapiro's own response to his rhetorical question:
“Do the sequences of contemporary genomes fit the predictions of change by “numerous, successive, slight variations,” as Darwin stated, or do they contain evidence of other, more abrupt processes, as numerous other thinkers had asserted? The data are overwhelmingly in favor of the saltationist school that postulated major genomic changes at key moments in evolution.”
“The data are overwhelming” he says; not a matter of interpretation any more. Saltation is the abrupt “leap” in change. Shapiro goes on to say:
“The results of sequence analysis have documented several types of genome alterations at key places in evolutionary history, alterations which are notable because they happened within a single generation and affected multiple cellular and organismal characters at the same time: horizontal transfers of large DNA segments, cell fusions and symbioses, and whole genome doublings (WGDs). These rapid multi-character changes are fundamentally different from the slowly accumulating small random variations postulated in Darwinian and neo-Darwinian theory.”
Evolutionary biologist Eugene Koonin writes:
“Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. …. The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic super- groups; and animal phyla. In each of these pivotal nexuses in life's history, the principal "types" seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate "grades" or intermediate forms between different types are detectable.”
The above statement is hopelessly incompatible with Neo-Darwinism. By including animal phyla in his list above, Koonin is including the Cambrian Explosion in what he regards as a “sudden emergence.”
These two statements, by leading figures in modern biology who are well versed in the current research and neither affiliated with Intelligent Design nor sympathetic to teleology, together corroborate what has been known about the paleontological record for decades.
In the following subsections, I will discuss some of the phenomena mentioned in these quotations. There is too much to cover in a paper of this scope. The primary point to keep in mind is that these examples of rapid change are also examples of rapid acquisition of complex specified information.
7.1.1 Fossil Record
“The higher protozoan type of animal life soon appeared, and appeared suddenly.” [65:2.4] (P. 732)
“You will not be able to find such connecting links between the great divisions of the animal kingdom nor between the highest of the pre-human animal types and the dawn men of the human races. These so-called "missing links" will forever remain missing, for the simple reason that they never existed.” [58:6.2] (P. 669)
The paleontological record has always been at odds with Darwinism. Darwin knew it at the time. In the Origin of Species, Darwin stated:
"The number of intermediate varieties, which have formerly existed on the earth, [must] be truly enormous…Why then is not every geological formation and every stratum full of such intermediate links? … Geology assuredly does not reveal any such finely graduated organic chain; and this, perhaps, is the most obvious and gravest objection which can be urged against my theory."
Since that time, things have not gotten much better for a Darwinian, gradualist explanation, and in many ways have gotten worse. No doubt there are imperfections in the fossil record. There are cases where an organism appears in one lower layer, is missing in the next layer up (more recent in time), only to appear again in a subsequent newer layer. But there is no denying the inconsistency between Darwinism and the paleontological record.
According to late evolutionary paleontologist Stephen Jay Gould:
"The absence of fossil evidence for intermediary stages between major transitions in organic design, indeed our inability, even in our imagination, to construct functional intermediates in many cases, has been a persistent and nagging problem for gradualistic accounts of evolution."
Robert Carroll, a paleontologist at McGill University echoes these thoughts, in Trends in Ecology and Evolution:
"The extreme speed of anatomical change and adaptive radiation during this brief time period requires explanations that go beyond those proposed for the evolution of species within the modern biota."
The Cambrian Explosion of animal types about 550 million years ago is perhaps the most well-known burst of evolutionary novelty that is difficult to explain. Most of the complex new animal groups that appear in the Cambrian are "fully formed," such as trilobites, echinoderms, brachiopods, molluscs, and chordates. There are many other such transitions as well; in fact, all transitions that I am aware of are sudden. The angiosperms (the "big bloom") and the mammalian radiation are other cases of rapid evolutionary change. The fossil record is not perfect, but the gaps are real and very unlikely ever to be closed.
In fact Gould said:
"Evolutionary biologists can no longer ignore the fossil record on the ground that it is imperfect."
The following is an interesting exchange between mathematician and philosopher David Berlinski, an Intelligent Design sympathizer and Neo-Darwinian critic, and atheist-Darwinist Eugenie Scott, who serves on the board of the National Center for Science Education, a pro-evolution nonprofit organization. (Watch just the first 55 seconds or so; chordates are animals with backbones.)
What is a gap? It is not enough for Neo-Darwinism to show similarities and a plausible transition from one putative creature to another, and another, to the final animal form we see today. The really difficult thing is to assess how many new and different structures and functions arose between any two animals in a sequence, and from there how many new cell types and new proteins are required.
Perhaps the best approach is to use examples that Neo-Darwinists commonly cite to show that evolution does proceed through gradual, incremental changes. Let’s look at the whale sequence and the “crown jewel” of paleontological sequences the reptilian to mammal transition and specifically how the jaw bones transitioned to the ear in mammals.
The two most commonly cited transitions in the paleontological record purporting to support Neo-Darwinism are actually quite weak unless we are limiting the discussion to common descent.
Neo-Darwinism makes a more extensive claim about a mechanism and it is a mechanism that suggests a gradual transition from one organism to another.
7.1.1.1 Whale Evolution
The evolution of land animals to sea mammals—whales—is commonly put forth as an excellent case of Neo-Darwinism. The whale sequence from a putative land mammal to a whale commenced, according to paleontologists, around 55 million years ago. But although the whale sequence might be enough to demonstrate common descent, it is nowhere near fine enough to support the gradual, incremental accretion of new features and functions required by the transition from a land mammal to a sea creature.
One thing to clarify at the outset is that when you see a fossil sequence represented, typically it is shown as below. Notice the branching off of the main line. The reason the sequences are drawn this way is because, for example, Rodhocetus is not envisioned to be a direct descendant of Kutchincetus. Rather, they both had a common ancestor. They know this because both creatures had features that the other did not have. Therefore, according to common descent and the supposed Darwinian mechanism, it is unrealistic to believe that the line of descent was direct. But notice that the common ancestor is missing in all cases.
David Berlinski discusses the whale fossil sequence along with a general critique of Darwinian evolution. I highly recommend you watch at least the first two-thirds of this interview. Regarding the whale transition sequence, Berlinski likens the transition from something like a cow to a whale to that of a car to a submarine. From this you can get an analogous sense of the difficulty. This part of the interview begins around the 11:00 minute mark.
A more technically relevant discussion can be found in the interview with biologist Richard Sternberg. Richard Sternberg has examined the requirements of the whale sequence mathematically in detail and concludes: "Too many genetic re-wirings, too little time." Some of the many changes are listed below and it is important to keep in mind that many of these changes would require multiple coordinated changes.
- Emergence of a blowhole, with musculature and nerve control
- Modification of the eye for permanent underwater vision
- Ability to drink sea water
- Forelimbs transformed into flippers
- Modification of skeletal structure
- Ability to nurse young underwater
- Origin of tail flukes and musculature
- Blubber for temperature insulation
The Sternberg interview podcasts can be accessed at:
What the Berlinski video and the Sternberg podcasts show is that there are many physiological changes that need to be made for a land mammal to evolve for life in the sea. Often when you look at a proposed fossil sequence, it is not clear to the average person what else is entailed in the transition from a functional standpoint. A set of physiological changes underlies each anatomical change. For any new function there are typically a variety of new cell types, and each new cell type has a variety of new proteins that have to be encoded. The really difficult part of evolution is to discover how all the new information—new protein sequences and the DNA sequences and RNA functions that produce them—arises.
7.1.1.2 Mammalian Jaw
The reptile-to-mammal transition is often cited as the "crown jewel" of fossil evidence for evolution. Technically this transition is not convergent evolution but rather parallel evolution, because the changes occur in parallel through the same set of creatures. However, it fits into this discussion because we are trying to determine whether mutations are random and therefore whether there is contingency and no direction. This reptile-to-mammal transition was used during the Dover trial in 2005. The Dover trial actually focused on common descent rather than specifically Neo-Darwinism, which is the primary issue in the debate with Intelligent Design proponents. As I mentioned, many Intelligent Design proponents accept common descent as essentially true, with some exceptions.
In the reptile-to-mammal transition, there are large physiological gaps between the creatures at each of the putative steps. Therefore this transition does not provide anything like definitive evidence for Neo-Darwinism, which requires a continuum of intermediates.
Furthermore, there is a complication for Neo-Darwinism: it appears that this reptile-to-mammal transition occurred at least a few times, and perhaps several times, according to Simon Conway Morris. A complex transition occurring several times is indicative of a pattern, and an extraordinarily unlikely pattern at that, given that very similar if not nearly identical successions of mutations would have had to occur. Natural selection can only add information to the genome if the information first arises in the form of random mutations. And here Neo-Darwinists are asking us to believe that the same set of highly unlikely mutations occurred a few or several times.
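To see why repetition compounds improbability, consider a back-of-the-envelope calculation. The per-step probability and step count below are purely illustrative placeholders of my own, not estimates from the literature; the point is only the arithmetic of independent repeats.

```python
# Toy arithmetic: if one transition needs k specific mutational steps,
# each with a (hypothetical) probability p of arising and fixing, then
# one transition has probability p**k, and n independent repeats of the
# same transition have probability (p**k)**n.
p = 1e-4   # illustrative per-step probability (assumption, not data)
k = 10     # illustrative number of required steps
n = 3      # transition repeated, per Conway Morris's reading

one_transition = p ** k              # 1e-40
n_transitions = one_transition ** n  # 1e-120

print(f"{one_transition:.0e}  {n_transitions:.0e}")
```

The multiplication assumes the repeats are independent, which is exactly what the randomness postulate requires; if mutations were channeled in some non-random way, this arithmetic would not apply.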
One would have to guess that with each transition, there would be new proteins and in all likelihood new cell types. We will see later in this section how difficult it is to evolve new proteins even when starting from a useful existing protein. Certainly over the course of several transitions from reptile to mammal and between any of the steps in the diagram above, there would be new tissues and therefore new cell types along with many new proteins which would be necessary to account for the differing biologic functions of those tissues in reptiles and mammals.
7.1.2 Mechanisms of Rapid Evolutionary Change
Several mechanisms have been identified that provide evidence for rapid, clearly non-Darwinian evolutionary change. James Shapiro:
“The results of sequence analysis have documented several types of genome alterations at key places in evolutionary history, alterations which are notable because they happened within a single generation and affected multiple cellular and organismal characters at the same time: horizontal transfers of large DNA segments, cell fusions and symbioses, and whole genome doublings (WGDs). These rapid multi-character changes are fundamentally different from the slowly accumulating small random variations postulated in Darwinian and neo-Darwinian theory.” [my emphasis]
We will take a look at symbioses and horizontal gene transfer a bit in this subsection. The paper will not address whole genome duplications or cell fusions.
Symbiogenesis is the process of the merging of two once independent prokaryotes or the absorption of one by the other. It is responsible for some of the most important innovations in evolution. It is believed that much of the photosynthetic capacity on earth came from cell mergers. Symbiogenesis is clearly an example of non-Darwinian change as there is nothing gradual about it and natural selection plays no role other than to cull out mistrials.
Symbiogenesis events are now known to account for the origin of eukaryotic cells (nucleated cells) from prokaryotes. Several key organelles of eukaryotes originated as separate single-celled organisms. Organelle is the term for a distinct functional unit in a composite cell (a cell resulting from symbiogenesis), typically the smaller, subordinate cell that has been incorporated into the larger cell. Mitochondria and plastids, once distinct bacteria (prokaryotes), were absorbed by another cell in endosymbiotic events around 1.5 billion years ago.
According to James Shapiro:
“Since all ‘higher’ and large multicellular organisms, including ourselves, are eukaryotes, the formation of the eukaryotic cell is arguably the single most important evolutionary event since the origin of life.”
Biologist Lynn Margulis, has been a tireless advocate of the theory of Symbiogenesis:
“Many ways to induce mutations are known but none lead to new organisms. Mutation accumulation does not lead to new species or even to new organs or new tissues… Even professional evolutionary biologists are hard put to find mutations, experimentally induced or spontaneous, that lead in a positive way to evolutionary change.” [Emphasis mine]
It was Carl Woese at the University of Illinois who demonstrated that the mitochondrion of eukaryotic cells descended from a specific type of bacterium.
Researchers believe there were likely additional cell mergers related to the advent of eukaryotic cells. Current theories suggest that the eukaryotic cell originated from a combining of bacteria and archaea (another domain of single-celled organisms).
Researchers familiar with cell mergers recognize that the process of making one organism out of two is, “…far from a simple process. There are countless questions still to be answered about what makes these mergers succeed.”
The question to ask here is how two cells could merge at all. The interworking of two once-independent cells would seem to me hopelessly complex. For example, cell mergers involve independent cells with their own genomes, so there must be active DNA transfer between genome compartments. Researchers have discovered that DNA sequences are routed from the organelle genomes to the cell's nuclear genome. Thus the cell's nucleus encodes most of the proteins in each of its organelles, even though the organelles have their own genomes and protein synthesis machinery. One wonders how that marvelous function just happened to arise virtually overnight.
Use whatever analogy you want: the merging of two literary novels, or two computer programs, or of two corporations …what would one expect at the chance swallowing of one distinct cell by another—both extraordinarily complex—without intelligent planning? I would expect pure chaos. Instead we have the greatest evolutionary event in history! The result of the chance meeting of two cells perhaps at a local night club. I marvel at the ability of Darwinists to accept these facts without even the slightest bit of questioning as to whether such a thing is at all plausible.
The entire process is more a matter of faith—a naturalistic inference—based on the obvious observation that organelles within eukaryotes greatly resemble bacteria. If these events happened it would have had to involve intelligence lurking somewhere.
7.1.3 De Novo Genes
Neo-Darwinian evolution by the tandem mechanism of random mutation and natural selection has always been said to require time, lots of time. The theory long ago denounced the saltationism offered by Richard Goldschmidt and whole-heartedly embraced gradualism: incremental, piecemeal change, climbing "Mount Improbable" by its gradual slope, as Dawkins claims. Given gradualism as the prevailing mechanism, one would not expect, and Darwinists did not expect, to find large numbers of new genes in each species or group of related species.
The mechanism believed to account for evolution and speciation involved "random" gene duplication followed by random mutation of the base pairs of the extra copy to produce new genes for new functions. But because these duplicated genes would require only a few mutations to achieve some new function, the "parent" genes that were duplicated would be identifiable as homologs of the "child" genes, i.e. the genes that developed after a few mutations of the duplicated copy. Certainly a lot of new cell types, and therefore a lot of new genes and proteins, would be required for entirely new life forms to have arisen, such as occurred in the Cambrian Explosion. But for closely related species, Darwinists did not expect large sets of novel genes.
However, as scientists sequence more genomes from different organisms, they are discovering that roughly 10-20% of each species' protein-coding sequences are new, in that they lack similarity in sequence to coding genes anywhere in the catalog of all known protein-coding genes. That is to say, they lack homologs. Because they have no relatives in other genomes, these genes are generically referred to as taxonomically restricted genes (TRGs), but are more commonly called "orphan genes," "ORFan genes" (from Open Reading Frame), or "de novo genes." (A taxon is a level of classification, such as species, genus, family, order, class or phylum.) Since these orphan genes lack detectable similarity to genes in other species, no clear indication of common descent can be inferred.
Orphan genes have been found in genomes sequenced from yeast to fruit flies to ants, and especially in the octopus, as well as in mice and men. Orphan genes are often short, and they produce small proteins. As one researcher commented,
“Rather than folding into a precise structure—the conventional notion of how a protein behaves—the proteins these new orphan genes code for have a more disordered architecture. That makes them a bit floppy, allowing the protein to bind to a broader array of molecules. In biochemistry parlance, these young proteins are promiscuous.”
The discovery of orphan genes was quite unexpected given evolutionary theory and the origin of orphan genes has been a mystery. As I mentioned, for most of the last 40 years, scientists thought that new genes arose from copies of existing genes—via gene duplication. The existing gene went on supporting its current function and the new copy of the gene became free to evolve through random mutation and selection. Gene duplication and subsequent evolution, it is believed, may account for some of these new “orphan genes” but it can only account for a minority of them because so many of these orphan genes are too far distant in sequence space for that to be the case.
Orphan genes present a strong challenge to the Darwinian story because the probabilities of random mutations turning a bit of “junk DNA” that had been lying around in the genome into an altogether new gene seem infinitesimally small. According to French biologist François Jacob, “the probability that a functional protein would appear de novo by random association of amino acids is practically zero.” This is because the junk DNA must accumulate mutations that allow it to be read by the cell or converted into RNA, as well as regulatory components that signify when and where the gene should be active. And like a sentence, the gene must have a beginning and an end—short codes that signal its start and end.
So where do orphan genes come from? Evolutionists believe that various fragments of non-coding DNA must somehow be spliced together and then acquire a promoter or transcription factor binding site (i.e. the regulatory elements), so that the new gene is expressed and makes a functional protein in the right place and at the right time. But acquiring a promoter or transcription factor binding site that turns inactive, noncoding DNA into expressed, functional DNA is thought by most to be highly improbable.
Nevertheless, reality has recently sunk in, and the once heretical explanation that many of these orphans arose out of so-called junk DNA, the mysterious stretches of noncoding DNA between genes, has quickly gained momentum.
Some thought that these new “orphan genes” were extraneous and served unimportant functions, which might make sense in a Darwinian framework. However, this does not seem to be the case at all; in fact, quite the opposite. Knockout experiments, in which these orphan genes are disabled, often trigger catastrophic failures, demonstrating that they code for critical protein functions. For example, when 200 new orphan genes in the fruit fly Drosophila melanogaster were silenced, more than 30 percent of the knockouts proved lethal to the fly.
Research presented at the Society for Molecular Biology and Evolution meeting in Vienna identified 600 potentially new human genes. Of the 600 human-specific genes the research team found, 80 percent are entirely new, having never been identified before.
Many of the new human genes are involved in the developing fetal brain, in the neocortex, the part of us that makes us different from our nearest relatives. A significantly large proportion of these young genes are expressed in the fetal or infant brain, and most are expressed in the evolutionarily newest part of the human brain, the neocortex. Remarkably, a number of these human-specific genes are expressed in the prefrontal cortex, which is implicated in complex cognitive behaviors.
How viable is the belief, held by paleontologists specializing in human origins, that 600 new genes could have arisen by chance and selection over the roughly 8-million-year history of hominoids, given the small population sizes, the long generational cycles, and the long gestation periods? I would suggest not at all likely, or even possible, in light of the evidence regarding the rarity of viable protein-coding gene sequences compared against the set of possible sequences that yield no viable proteins. How did the numerous new genes that facilitate higher-level thought just happen to arise, and arise so quickly? No one knows. And this is aside from the enormously complex integration of these new proteins into an already working system with what must be a delicate balance among a vast number of interdependent molecular components.
One must wonder how it is that the raw material for a gene that turns out to be essential for an organism just happened to be lying in wait, dormant, until somehow, through a highly improbable series of events, it acquired the necessary administrative components. Sounds like foresight to me.
7.1.4 Computer Simulations
There are several computer programs that attempt to model the Neo-Darwinian process and purport to show that the process can bring about complexity in living forms: Tierra, Avida, and others somewhat less sophisticated. Frankly, I have not looked at these very closely, but I thought I would mention some obvious shortcomings that are generally recognized by their critics.
Generally, when these evolutionary modeling programs are examined closely, invariably one of two things is discovered: 1) they smuggle in design—information—somewhere, or 2) they make unrealistic assumptions.
Tierra is a computer program that attempts to model the Cambrian Explosion using computer code as representative modules. But the main novel code that was generated was the result of the program shuffling around existing modules. The program did generate a few new genetic modules, but far too few to approach what is needed in the Cambrian. It is generally acknowledged that the program did not succeed in modeling the Cambrian Explosion.
Avida is another computer-code-based evolutionary modeling program. The program starts with a complete organism, so it does not directly apply to problems such as the Cambrian Explosion. Avida did produce some novel complexity, but the modules it achieved, which were supposed to represent real biological complexity, were far simpler than their biological counterparts. Therefore the program did not demonstrate anything approaching a solution to the problems of the evolution of biological complexity.
Population genetics—the study of genetic change in populations—is very complex. A positive outcome in an evolutionary or population-genetics model is largely based on assumptions about the following:
1. How often do beneficial mutations occur?
2. What is the ratio of positive mutations to ruinous mutations?
3. How much information is minimally necessary to produce some beneficial change?
4. How quickly, if at all, are beneficial changes locked in by natural selection?
5. How does a beneficial mutation contend with the overall ambient variation exhibited by organisms?
6. How likely is it that beneficial mutations will be lost as a result of ruinous mutations?
Depending on how you tweak these assumptions, you can make anything happen, or not, in a computer simulation. The way Darwinian evolution is supposed to work is that there is an incremental change in the genome; the change offers some improvement such that the affected organism is at least somewhat more likely to survive until reproduction. But when modeling this you have to be fair. You cannot create a model with unrealistic fitness functions that select for any change the programmer knows is on a trajectory toward where the programmer wants the program to go.
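To make the sensitivity to these assumptions concrete, here is a minimal toy model (my own sketch, not any published simulator; every parameter value is an illustrative assumption) that tracks the frequency of a single beneficial allele under selection, with beneficial mutations arising and ruinous mutations removing carriers:

```python
def allele_trajectory(generations=500, s=0.05, mu_b=1e-4, mu_d=1e-2):
    """Toy deterministic model: p is the frequency of a beneficial allele.

    s    -- assumed selective advantage of carriers (assumptions 4 and 5)
    mu_b -- assumed per-generation beneficial mutation rate (assumption 1)
    mu_d -- assumed per-generation rate of ruinous loss (assumptions 2 and 6)
    """
    p = 0.0
    for _ in range(generations):
        p = p + mu_b * (1 - p) - mu_d * p   # mutation gains and losses
        p = p * (1 + s) / (1 + s * p)       # standard haploid selection step
        p = min(max(p, 0.0), 1.0)           # keep p a valid frequency
    return p

# Generous assumptions drive the allele toward fixation...
favorable = allele_trajectory(s=0.05, mu_d=1e-3)
# ...while a slightly worse mutation ratio keeps it vanishingly rare.
unfavorable = allele_trajectory(s=0.001, mu_d=1e-2)
```

Nothing here is biologically calibrated; the point is only that modest changes to the assumed rates flip the outcome from near-fixation to near-absence, which is exactly the freedom a simulation author has.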
Here is a simplified explanation of how some of these programs might smuggle information into their models. Richard Dawkins, in The Blind Watchmaker, produced a program built around the phrase “methinks it is like a weasel.”
He starts with a scrambled set of letters and by copying them and randomly introducing errors, arrives at the target phrase.
…after many trials…
METHINKS IT IS LIKE A WEASEL
Here is the algorithm that can be inferred from the results:
1. Start with a random string of 28 characters.
2. Make 100 copies of the string (to simulate reproduction).
3. For each character in each of the 100 copies, with a probability of 5%, replace (mutate) the character with a new random character.
4. Compare each new string with the target string "METHINKS IT IS LIKE A WEASEL", and give each a score (the number of letters in the string that are correct and in the correct position).
5. If any of the new strings has a perfect score (28), halt. Otherwise, take the highest scoring string, and go to step 2.
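Those five steps are enough to reproduce the program's behavior. Here is a straightforward sketch in Python (my own reconstruction from the inferred algorithm above, not Dawkins' original code):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 26 capital letters plus the space

def score(candidate):
    """Step 4: count characters that are correct and in the correct position."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def weasel(copies=100, rate=0.05, seed=None):
    rng = random.Random(seed)
    # Step 1: start with a random string of 28 characters.
    parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while score(parent) < len(TARGET):
        generation += 1
        # Steps 2-3: make 100 copies, mutating each character with 5% chance.
        brood = ["".join(rng.choice(ALPHABET) if rng.random() < rate else c
                         for c in parent) for _ in range(copies)]
        # Step 5: keep only the highest-scoring copy for the next round.
        parent = max(brood, key=score)
    return generation
```

Because the selection step measures distance to a known target, the phrase reliably locks in within a few dozen generations; remove the target and the program goes nowhere.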
There are at least three major flaws with the program.
First, he invokes a target, so his selection is not based on present fitness; it is based on known future fitness. This is a no-no in evolutionary algorithms because it involves foresight, something natural processes do not have but a teleological process could.
Second, there appears to be no provision for a bad mutation spoiling the incremental trajectory toward the target phrase.
Third, it is far too simple. A 5% chance of a beneficial mutation is extraordinarily high. New biological features require new proteins, and in the case of the Cambrian, many proteins are required for even the simplest feature of a living creature. But as we have seen from the protein sampling problem, viable proteins may be as rare as 1 in 10^77, and perhaps far rarer. That is an extraordinarily low probability of hitting any of them.
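To put that number in perspective, here is a back-of-the-envelope comparison. The 10^77 rarity figure is the one cited in the protein sampling section; the 10^40 trial count is my own deliberately generous assumption about the total number of organisms that have ever lived:

```python
rarity = 10 ** 77    # cited odds against a random sequence being functional
trials = 10 ** 40    # generous assumed number of trials in life's history
expected_hits = trials / rarity
# Even granting 10^40 tries, the expected number of successes is about
# 10^-37 -- effectively zero.
```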
Dawkins’ attempt, although seemingly impressive to his readers at the time, is a cartoon. A better way to think about evolution is to form an analogy with writing. (I have used this before, but I will briefly discuss it here again.) Suppose that you are writing a textbook describing a new animal, and you agree to describe this animal down to the lowest level of detail. You start with a handful of phrases. Could you, by copying, randomly duplicating, and modifying phrases, produce anything, even with a person looking at each iteration and selecting whatever was meaningful to him? Of course not, because you would not be able to produce the new information from a few phrases.
Much of this theoretical discussion has been rendered moot anyway. Recent research is showing that evolutionary changes occur rapidly, which means that natural selection may play little or no role whatsoever in the incorporation of novel information into the genomes of living organisms.
7.2 Non-Randomness of Evolutionary Change
“A purposeful plan was functioning throughout all of these seemingly strange evolutions of living things, but we are not allowed arbitrarily to interfere with the development of the life patterns once they have been set in operation.” [65:3.1] (P. 733)
That mutations were perceived to be random was a philosophical assumption based on materialism. Given a materialist assumption, there was no conceivable way that mutations could be evoked by the organism, because that would involve foresight and planning, which are anathema to any materialist. It is also likely that adaptive mutations, as purposive changes are called, would violate the Central Dogma. The Central Dogma (depicted in the illustration below) is the notion that information flows one way, from genes to proteins, and not the other way.
For Darwinists, mutations have to be random. If mutations are shown to be evoked by the organism and to the benefit of the organism, then Neo-Darwinism is effectively dead. And in fact there is a great deal of evidence accumulating that mutations are not random.
One of the common claims, or pieces of evidence, purporting to support the notion that evolution proceeds by random mutation and natural selection is that bacteria adapt to antibiotics. But what is almost universally discovered when bacteria gain resistance is that the change causing the resistance is a diminishment of information rather than an addition of information to the genome. In other words, typically a function (an enzyme gene product that was in place) becomes broken because of a mutation, so an enzyme that had blocked the ability to gain resistance no longer serves as an impediment.
An example of this is the research conducted by Richard Lenski, which was proclaimed as resounding evidence of Neo-Darwinism. Lenski was looking at the ability of E. coli bacteria to adapt to feed on a different nutrient, citrate.
The study cited the “new” ability of E. coli to feed on citrate as evidence for random mutation and selection. The problem is that E. coli has always had the ability to feed on citrate but does not normally do so when oxygen is present. The research showed that E. coli “evolved” to feed on citrate even in the presence of oxygen after a prolonged period of time—25 years and 60,000 generations, about the equivalent of a million years of human evolution. But the reason E. coli normally do not feed on citrate when oxygen is present is that a switch represses CitT, the transporter that enables consumption of citrate, whenever oxygen is present. The mutation nullified the switch; no new gene was created.
Furthermore, when another lab, led by Scott Minnich, an intelligent design scientist working at the University of Idaho, worked on reproducing the results, they found that the “mutation” arose after just a few weeks, and it occurred repeatedly. This is not the type of result one would expect from Neo-Darwinism. According to Neo-Darwinism, evolution is contingent; random mutations that offered a benefit would not be expected to be commonplace. The fact that the mutation occurred so frequently, in a case where the need to feed on citrate was pressing, is more in line with a design inference, since the mutations would not appear to be random. Bacteria have the ability to adapt to feed on just about anything. This alone makes for a strong inference of design.
James Shapiro discovered transposition in bacteria. Shapiro was a close friend of Barbara McClintock, the first woman to receive an unshared Nobel Prize in Physiology or Medicine, and carried on her work. Here is a summation of Shapiro’s view on the non-randomness of evolution:
“It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns of change, and genome sequence studies confirm distinct biases in location of different mobile genetic elements.”
“As already noted, though, the key question is not so much whether changes are truly random…but whether they are chance events from the viewpoint of function. The evidence is that both the speed and the location of genome change can be influenced functionally.” [My emphasis throughout]
There was some prior experimental evidence (prior to Lenski’s experiments) indicating that mutations were random, but that evidence, according to Shapiro, “is remarkably thin.” It was based on experiments performed in 1943 by Luria and Delbrück, who wanted to determine whether viruses could induce mutations. Textbooks present the famous Luria-Delbrück experiment as the definitive demonstration that mutations must occur prior to selection and are therefore random.
The experiments used separate colonies of bacteria and subjected them to lethal viral infections. The team found that the number of surviving bacteria differed widely between the colonies. Luria and Delbrück concluded from this that the mutations enabling resistance to the virus must have been in place prior to the infection, present in varying degrees in each of the colonies, and that therefore the mutations were not induced by the viruses. This would mean that mutations were random; they had not been evoked by the presence of the virus.
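The logic of the experiment is easy to simulate. In the sketch below (my own illustrative model, with arbitrary culture sizes and an arbitrary mutation rate), resistance mutations arise spontaneously during growth, so a mutation early in a culture's history founds a large resistant clone; the resulting “jackpot” cultures produce exactly the wide colony-to-colony variation Luria and Delbrück observed:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's simple Poisson sampler (adequate for the small rates here)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def fluctuation_test(cultures=50, generations=20, mu=2e-7, seed=0):
    """Return the number of resistant cells in each of several cultures.

    Each culture grows by doubling from a single cell; a division at
    generation g mutates with probability mu, and the mutant's
    descendants keep doubling until the culture is plated.
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(cultures):
        resistant = 0
        for g in range(generations):
            mutations = poisson((2 ** g) * mu, rng)       # ~2^g cells divide
            resistant += mutations * 2 ** (generations - g - 1)
        counts.append(resistant)
    return counts

counts = fluctuation_test()
# If the virus *induced* resistance at plating time, counts would be
# Poisson-distributed (variance roughly equal to the mean). Spontaneous
# mutation during growth instead gives mostly zeros plus rare huge
# jackpots -- variance far above the mean.
```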
James Shapiro claims that the results could not have been any different because of the lethal nature of the virus infection. Here is Shapiro’s comment on this famous experiment:
“Given the lethal nature of the selecting virus Luria and Delbrück used, there was in fact no other possible outcome. Infection was invariably lethal, and only preexisting resistant mutants could survive. Nonetheless, this experiment was cited for over six decades as proof that virus infection could not induce a genetic change to resistance.
One has to be careful with the word ‘proof’ in science. I always said that conventional evolutionists were hanging a very heavy coat on a very thin peg in the way they cited Luria and Delbrück. The peg broke in the first decade of this century.”
When Shapiro says “the lethal nature of the selecting virus” could not have produced any other result, he means that all the existing bacteria that lacked the resistance-providing mutation could not possibly have survived. If mutations can be induced by the intelligence within a bacterium in response to environmental insults, as Shapiro is claiming, the bacteria would have to have had some ability to survive the infection and carry out the mechanism that invokes these mutations. But they could not do this given the severe conditions of the test, “given the lethal nature of the selecting virus.”
He goes on to elaborate on what he means by “the peg broke in the first decade of this century.” Astonishingly enough, it turns out that bacteria keep a sort of memory bank of past infections in their DNA. When an infection occurs, the bacterium stores part of the virus in its DNA, and this record is activated when a similar viral infection occurs, to offer resistance. The mechanism involves “small interfering RNAs” that mediate the defense against viruses.
Shapiro Fermilab Lecture
In 2010, Dr. Shapiro gave a lecture to a packed house at Fermilab. As recounted by Perry Marshall, who attended the lecture, in his book Evolution 2.0, a small group huddled around Shapiro afterward, “…peppering him with questions.” At one point an attendee asked Shapiro incredulously, “You mean mutations aren’t random?” “No sir,” replied Shapiro, “they’re not random at all.”
In the following subsections we are now going to look at several pieces of evidence that have persuaded Dr. Shapiro, and many others who have followed the current research, that mutations are in fact not random. Some of the evidence is direct evidence of observation during laboratory research and some of the evidence is circumstantial and based on inference.
Mutations such as DNA copying errors were always assumed to be the result of things like ultraviolet radiation and ionizing radiation, as well as other physical phenomena. However, as Shapiro and others point out, studies of radiation-induced mutation in bacteria strongly indicate that cellular repair systems are necessary for nearly all of the mutational effects of ultraviolet radiation and the vast majority of those of ionizing radiation.
Southern Illinois University neuroscientist David G. King recently wrote:
“The dismissive dictum, ‘Mutations are accidents,’ has grown obsolete…[mechanisms for] the spontaneous, non-accidental production of genetic variation are deeply embedded in genomic architecture.”
The view that mutations were random persisted throughout much of the early decades of Neo-Darwinism. It is now known, as stated by Steve Talbott, New Atlantis contributing editor and senior researcher at The Nature Institute,
“We are no longer free to imagine that evolution waits around for “accidents” to knock genes askew so as to provide new material for natural selection to work on. The genome of every organism is actively and insistently remodeled as an expression of its context.
Genetic sequences get rewritten, reshuffled, duplicated, turned backward, “invented” from scratch, and otherwise revised in a way that prominently advertises the organism’s accomplished skill in matters of genomic change…it is now indisputable that genomic change of all sorts is rooted in the remarkable ‘expertise of the organism as a whole.’”
There are two dogmas or laws in evolutionary biology: 1) the Central Dogma, which states that information in the cell flows from DNA to RNA to proteins and not the other way (often stated as “DNA makes RNA makes protein”); and 2) the Weismann Barrier, sometimes referred to as the Second Law of evolutionary biology, which states that the soma (body) cells are isolated from the germline (sex) cells, such that any effects on the soma cells cannot affect the sex cells, the gametes.
There was a French naturalist, Jean-Baptiste Lamarck, who worked in the early 1800s and advanced a theory that acquired traits could be inherited. He is famous (or infamous) for this supposedly erroneous view. One example commonly cited is that giraffes’ necks became long because they stretched their necks to reach the few remaining leaves at the top of a tree; the stretching was inherited and passed down to their progeny, and over time each incremental change was inherited, giving them longer necks. This was called Lamarckian evolution. Darwinism, and specifically Neo-Darwinism, has been viewed as the final repudiation of Lamarckian evolution. Throughout the past 60 years or so, if you wanted to get a laugh at a dinner party with a bunch of evolutionary biologists, you could tell a Lamarckian evolution joke.
Here is the really funny thing: Lamarck was correct, although in a more limited sense than the giraffe story would indicate. Epigenetics, the finding that acquired traits can be inherited, is perhaps the most stunning of the many reversals that have occurred over the past 20 years in evolutionary biology. Lifestyles that one adopts can affect one’s offspring. The claim of epigenetics is that not all inheritance is in the genes; some traits are inherited through mechanisms over and above (“epi”) the genes. Interestingly, Darwin himself became more Lamarckian in subsequent editions of The Origin of Species.
Epigenetics works by controlling the expression of genes, i.e. determining which genes are transcribed or activated—in effect, which genes are turned on and which are turned off. These epigenetic controls over gene expression can be set by environmental factors. The alterations may or may not be heritable, although using the term "epigenetic" for processes that are not heritable is controversial. Unlike genetics, which is based on changes to the DNA sequence (the genotype), the changes in gene expression or cellular phenotype studied in epigenetics have other causes.
Here are some excellent video presentations on epigenetics given by Nessa Carey, former Senior Lecturer in Molecular Biology at Imperial College London:
Although the germline is separated from the rest of the body (the soma cells) during early development in most animals, it appears that changes to body cells can nonetheless affect the germ cells. Recently, researchers have demonstrated that environmental effects on body tissues (neurons) of the roundworm can affect genes for multiple generations through the action of what are called non-coding RNAs. As James Shapiro puts it:
“…non-coding RNA provides a molecular interface between life history events and genome alteration.”
Aside: Junk DNA/Junk RNA: Recall that RNA is the molecule transcribed from DNA in a process called transcription. Transcription is the first step in the production of proteins; the second step is translation. For decades it has been known that only a small fraction of the DNA encodes proteins. The rest of the DNA has been referred to as “junk DNA.” That term, however, is being challenged increasingly by new research. In fact, the findings of the ENCODE project show that at least 80% of the DNA is transcribed, and perhaps all of it will prove to be once all the cell types in the human genome are investigated.

This finding is causing quite a stir and is the topic of a very bitter and contentious debate, with the ENCODE researchers and the Intelligent Design folks claiming that junk DNA is not in fact junk on one side, and the ultra-Darwinists (Neo-Darwinist diehards) on the other. One ultra-Darwinist, biologist Dan Graur, has threatened to try to shut the ENCODE project down and has even claimed that, “If ENCODE is right (about a variety of things, especially junk DNA), then evolution is wrong.” Intelligent Design proponents long ago suggested that all or most of what has been called junk DNA would prove to be useful in some way. Michael Denton, for example, in his 1998 book Nature’s Destiny, said flat out that if junk DNA is really junk, “then my whole theory falls apart.”

With each passing week there are more research papers revealing that DNA segments once thought to be junk are producing non-coding RNAs with very important regulatory functions. The ultra-Darwinists have acknowledged that the ENCODE finding that 80 percent of the DNA is transcribed is probably true, but they now suggest that many of the resulting RNA transcripts (those not used for the manufacture of proteins) are “junk RNA.” Ultra-Darwinists have too much invested to admit that all the DNA is actually useful.
Some of the “junk RNA” they are referring to consists of the non-coding RNAs that appear to be involved in epigenetics. Other non-coding RNAs appear to be involved in an array of regulatory roles. Over 2,000 microRNAs have been identified in humans that interact with messenger RNAs (the RNAs that carry a DNA transcript to the ribosome for protein manufacture). Many of these microRNAs are regulated as part of epigenetics.
Researchers have discovered that mobile non-coding RNA can travel from body cells into germ cells. A few varieties of non-coding RNAs appear to be involved: microRNAs and double-stranded RNAs, both short strands of just a couple dozen nucleotides. These mobile RNAs enter the germline and deliver gene-specific regulatory information acquired from somatic cells, and the resulting changes can be inherited across generations. This is a mechanism by which the environment elicits transgenerational effects in animals.
So like the messenger RNA (mRNA) that carries genetic information to the ribosome (the molecular machine that manufactures proteins in the process called translation), the non-coding RNA carries epigenetic information to other cells and to the next generation.
There are multiple dynamic modifications that regulate gene transcription in a systematic way, and this system is often referred to as the “histone code.” The mechanisms that produce such changes are histone modification and DNA methylation. These alter how genes are expressed but do not affect the underlying DNA sequence. (Histones are the proteins that DNA is wound around to create the compact heterochromatin of a chromosome.) By modifying the amino acids in the histone proteins, the histone’s shape is changed, and as a result access to the genes wound around the histone can be repressed or promoted.
There are several ways a histone can be modified. The most common, and the most studied, are methylation and acetylation. These modifications work as a system to repress or promote gene expression.
Methylation of DNA and histones causes nucleosomes to pack tightly together; transcription factors cannot bind the DNA, and genes are not expressed.
Histone acetylation results in loose packing of nucleosomes; transcription factors can bind the DNA, and genes are expressed.
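The repress/promote logic just described reduces to a tiny truth table. This sketch is a drastic simplification of the real histone code, which involves dozens of interacting marks, and the tie-breaking rule is my own assumption for illustration:

```python
def accessible(methylated: bool, acetylated: bool) -> bool:
    """Is a gene available for transcription under these two marks?

    Methylation packs nucleosomes tightly (gene off); acetylation loosens
    them (gene on). In this toy model methylation wins when both marks are
    set -- an illustrative assumption, not established biology.
    """
    if methylated:
        return False      # tight packing: transcription factors blocked
    return acetylated     # loose packing, and expression, only if acetylated
```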
Neo-Darwinists, after denying that epigenetic adaptation existed, because it was inconsistent with the theory in many ways, are now, in effect, trying to co-opt it. But epigenetics is clearly non-Darwinian for several reasons.
Neo-Darwinian change propagates through the population from a single mutation occurring in a single individual whereas epigenetics works on many individuals across a population.
Epigenetic changes result in a fitness improvement, but not under the organism’s present conditions.
The advantage epigenetics provides is for some unforeseen condition in the future. Therefore, there is no reason to think that an epigenetic infrastructure could have been built by natural selection, because there would be no incremental improvement to select.
Unlike evolutionary changes, which are not induced by the environment, epigenetic changes do result from effects of the environment. Because of this, evolutionary change through epigenetics is not random and it is repeatable.
Importantly, in terms of complex specified information, clearly, the mechanism to capture changes in the soma cells and transport them to germ cells is exquisitely complex. The discovery of epigenetics requires that Darwinists explain another vast new layer of complexity that we are only beginning to understand.
Epigenetics sounds a lot like a plan put in place by an intelligent designer to ensure adaptability in a world in flux.
Denis Noble, a British biologist at the University of Oxford and one of the pioneers of systems biology, summarizes epigenetics in the following statement:
“The available evidence not only suggests an intimate interplay between genetic and epigenetic inheritance, but also that this interplay may involve communication between the soma and the germline. This idea contravenes the so-called Weismann barrier, sometimes referred to as Biology’s Second Law, which is based on flimsy evidence and a desire to distance Darwinian evolution from Lamarckian inheritance at the time of the Modern Evolutionary Synthesis. However, the belief that the soma and germline do not communicate is patently incorrect.”
The discovery that acquired traits could be inherited was a stunning surprise to evolutionary biologists. Denis Noble then asks rhetorically:
“So what went wrong in the mid-twentieth century that led us astray for so long [regarding epigenetics and randomness]? The answer is that all the way from the Weismann barrier experiments in 1893 (which were very crude experiments indeed) through to the formulation of the central dogma of molecular biology in 1970, too much was claimed for the relevant experimental results, and it was claimed too dogmatically.”
Noble goes on to comment that the esteemed British theoretical evolutionary biologist and geneticist John Maynard Smith was aware, a decade earlier, of experimental evidence that called the Central Dogma into question. Noble asks:
“So why, given his extraordinary (but completely correct) admission, did Maynard Smith not revise his view of the mechanisms of evolution? The reason he gave in 1999 was that ‘it is hard to conceive of a mechanism whereby it could occur; this is a problem.’”
7.2.2 Natural Genetic Engineering - Transposons
As we have seen, there is a growing realization that evolutionary change is not random at all. Much of the evolutionary change that occurs is not necessarily the “evolution” of new proteins but rather the reuse and recombination of existing protein-coding domains (“exons”) to produce new enzymes and proteins. In order to reuse protein domains, a cell also has to establish the regulatory mechanisms that flank the protein-coding areas. The transcription of mRNAs from DNA requires promoters, enhancers, and initiators, as you may recall from the transcription animations.
There are several mechanisms cells use to build new proteins and their regulatory components. We cannot cover all of them, but one in particular is quite intriguing and important.
Henrik Kaessmann of the Center for Integrative Genomics in Switzerland recently documented the variety of techniques organisms use to diversify and enlarge their genomes. “Transposable elements” are especially important in that regard. Transposable elements are DNA sequences that can change their position within the genome. They are known to “mobilize, amplify and rearrange exons (protein coding sequences in the DNA).”
“Natural Genetic Engineering” is a term that James Shapiro has coined to describe, “all the biochemical mechanisms cells have to cut, splice, copy, polymerize and otherwise manipulate the structure of internal DNA molecules, transport DNA from one cell to another, or acquire DNA from the environment.” Shapiro claims that these mechanisms are non-random innovations that cells employ to produce adaptive change. But he cautions that “non-random does not mean strictly deterministic.”
Shapiro uses the term “mobile genetic elements” to generically describe these features. Transposable elements—primarily transposons and retrotransposons—are the primary mobile genetic elements. So transposons are key players in Shapiro’s natural genetic engineering. Transposons are often called “jumping genes” because of the way they mysteriously move around in the genome. Transposons were discovered by Shapiro’s mentor, Barbara McClintock, who was awarded the Nobel Prize for this discovery.
Transposons make up a large portion of the genome, as you can see from the depiction below.
There are two classes of transposable elements. Class I transposons copy DNA segments and paste them into other locations. There are a few varieties of Class I transposons: long terminal repeats (LTRs), long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs), which interestingly enough are specific for each family of mammals (rodents, carnivores, primates, etc.).
Class I transposons achieve copy and paste by something called reverse transcription. Reverse transcription involves first transcribing the DNA into RNA and then reverse transcribing the RNA back into DNA at a different location in the genome. A special enzyme—reverse transcriptase—is used to accomplish this, and the genes that encode this enzyme are conveniently carried within the transposon itself.
Class II transposons use a cut and paste mechanism, which employs the protein transposase to effect the insertion and excision.
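As an illustration only (my sketch, not part of the cited research), the difference between the two mechanisms can be shown as string operations in Python; real transposition is enzyme-mediated and far more intricate:

```python
# Toy model of the two transposition mechanisms on a genome string.
# Purely illustrative: in vivo, Class I uses an RNA intermediate plus
# reverse transcriptase, and Class II uses transposase.

def class1_copy_paste(genome: str, start: int, end: int, dest: int) -> str:
    """Class I (retrotransposon): the element is copied and inserted
    elsewhere; the original copy remains in place."""
    element = genome[start:end]
    return genome[:dest] + element + genome[dest:]

def class2_cut_paste(genome: str, start: int, end: int, dest: int) -> str:
    """Class II (DNA transposon): the element is excised and reinserted;
    the original location loses it, so genome length is unchanged."""
    element = genome[start:end]
    remainder = genome[:start] + genome[end:]
    # Destination index is relative to the genome after excision.
    return remainder[:dest] + element + remainder[dest:]

genome = "AAAATTTTGGGGCCCC"
print(class1_copy_paste(genome, 4, 8, 16))  # TTTT kept in place, copy appended
print(class2_cut_paste(genome, 4, 8, 12))   # TTTT moved to the end
```

Note that the copy-paste result is longer than the original (genome enlargement), while the cut-paste result is the same length with the element relocated.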
Transposons have historically been dismissed as junk DNA, and many ultra-Darwinists steadfastly refuse to acknowledge that they may serve an important role in evolution and continue to insist that they are junk. Yet it has been shown beyond any reasonable doubt that transposable elements are important for genome function, organization, renovation and evolution. In fact, James Shapiro has documented “more than 280,000 functional elements in the human genome derived from mobile elements (transposons).”
A team of researchers from the United States, Canada, Spain, and the United Kingdom recently noted, “It is now undeniable that transposable elements…have had an instrumental role in sculpting the structure and function of our genomes.”
An organism’s traits, its “phenotype,” are not encoded by single “genes.” The concept of a gene in the sense of a one-to-one relationship to a protein no longer has any real meaning. It is now known that all cell and organism character traits are expressed from networks of coding sequences whose expression is coordinated by shared transcriptional control signals. Transposons are key elements in this coordination of regulatory elements: they distribute preexisting transcription control signals to the dispersed coding regions in the genome.
There are many examples in the literature testifying to the function of transposable elements. One of the most stunning examples was made by Yale University researchers. The Yale team discovered that a network of 1,532 genes, which had been recruited for expression in the human uterus, are controlled by a set of regulatory elements that had been dispersed and coordinated by the work of transposons. Günter Wagner, lead researcher on the project, commenting on the results of the study said,
“We used to believe that changes only took place through small mutations in our DNA that accumulated over time. But in this case we found a huge cut-and-paste operation that altered wide areas of the genome to create large-scale morphological change.”
So it appears the feature of humans that enabled longer gestation periods was the result of the handiwork of these transposable elements …elements that Darwinists once dismissed as junk, and in some cases, still dismiss as junk. It is hard to imagine that such control was the result of a random, haphazard mechanism. In fact, it could be argued, such precision, in light of the end result, would strongly seem to suggest design—direction toward a goal.
7.2.3 Horizontal Gene Transfer
Transposons are examples of intra-cellular transfer of DNA from one location to another within a cell. But inter-cellular and inter-organism transfer of DNA is also common through “horizontal gene transfer,” another non-Darwinian mechanism of adding information to the genome. Darwinian mechanisms work through “vertical transfer” of DNA from parent to offspring.
Horizontal transfer of DNA occurs between cells of different organisms, including different groups and kingdoms of organisms. Horizontal transfer, once thought to be a common and exclusive skill of microorganisms, is now known to occur also in multicellular animals, including humans. In fact, the same mechanism bacteria use to exchange genome segments with each other also works to transfer DNA to the cells of multicellular eukaryotes (organisms with a nucleus).
A recent article in Science Daily summarizes this:
“Many animals, including humans, acquired essential 'foreign' genes from microorganisms cohabiting their environment in ancient times, according to research published in the open access journal Genome Biology. The study challenges conventional views that animal evolution relies solely on genes passed down through ancestral lines, suggesting that, at least in some lineages, the process is still ongoing.
“The transfer of genes between organisms living in the same environment is known as horizontal gene transfer (HGT). It is well known in single celled organisms and thought to be an important process that explains how quickly bacteria evolve, for example, resistance to antibiotics.”
Horizontal gene transfer is so common in fact that a Darwinian tree of life no longer really exists. Instead the tree of life has been replaced by a bush or matrix according to the late Carl Woese and Eugene Koonin. Koonin makes the following comment:
“The realization that HGT (Horizontal Gene Transfer) is extremely widespread among prokaryotes, which was one of the principal early discoveries of the genomic revolution, led to a reappraisal of the TOL (Tree of Life) concept…as long as the HGT is quantitatively substantial…Molecular phylogeneticists will have failed to find the ‘true tree.’"
It is now known that horizontal gene transfer has contributed to the evolution of many, perhaps all, animals and that these processes are ongoing. Recent studies have shown that there is no barrier between the transfer of genetic information from bacteria to the DNA in animal germ cells. For example, according to James Shapiro, “investigators found virtually the whole genome of a bacterium integrated into the chromosomes of its insect host.”
These horizontal transfers of DNA between groups and kingdoms are not random events in the sense of having no benefit to the host within whose genome the transfer is incorporated. There are many cases where the adaptive significance of the transferred DNA is well documented. According to James Shapiro,
“The genomic data is overwhelming in documenting the fundamental importance of horizontal transfer in the evolution of bacterial and archaeal genomes. Horizontal transfer may in fact be a major driver of evolutionary novelty because it permits the acquisition of DNA encoding complex traits in a single event.”
Surprisingly, many of the genes in plants and animals that have been acquired through horizontal gene transfer came from viruses. In fact, according to Steve Talbott, “there is good evidence that viruses have played a major role in contributing to the genomes of more complex organisms, including mammals and humans.” One has to ask how the genes within viruses would just happen to be useful to higher multi-cellular organisms.
So again we see examples of non-Darwinian mechanisms: sudden and non-random evolutionary change. There is no parent-to-offspring inheritance involved in horizontal gene transfer, which is what Neo-Darwinism requires. New genetic information is shared throughout the ecosystem directly between organisms. The mechanisms organisms use to take in DNA from the genomes of other organisms, especially organisms of entirely different kinds, are only now beginning to be understood. But we can bet that those mechanisms will be extraordinarily complex and mediated by specialized enzymes, two features that are elusive to any naturalistic explanation.
7.2.4 Overlapping Codes
An overlapping gene is a gene whose nucleotide sequence partially overlaps with the nucleotide sequence of another gene. A single nucleotide sequence can therefore be involved in the transcription of multiple RNA transcripts. The shared nucleotide sequence may be read in alternate reading frames (the starting points from which each gene’s RNA transcript is read).
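Alternate reading frames are easy to demonstrate with a toy translation in Python (my sketch, not from the cited studies; the codon table is truncated to just the codons used, and real translation involves start and stop signals this ignores):

```python
# The same nucleotide string yields different amino acid sequences
# depending on where reading starts. Codon table truncated to the
# codons that appear in the example; unknown codons become "?".

CODONS = {"ATG": "M", "GCA": "A", "TGC": "C", "AAT": "N",
          "TGG": "W", "CAT": "H"}

def translate(seq: str, frame: int) -> str:
    """Translate seq starting at offset `frame`, three bases per codon."""
    out = []
    for i in range(frame, len(seq) - 2, 3):
        out.append(CODONS.get(seq[i:i + 3], "?"))
    return "".join(out)

dna = "ATGGCATGCAAT"
print(translate(dna, 0))  # MACN  (Met-Ala-Cys-Asn)
print(translate(dna, 1))  # WHA   (Trp-His-Ala)
```

Shifting the start point by one base produces an entirely different protein sequence from the same DNA, which is exactly why a mutation in a shared region can disturb both encoded products at once.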
Conventional thought was that each gene encodes one protein. This simplistic view was overturned by the discovery of RNA editing and RNA splicing, which assemble different mRNAs for protein manufacture from distinct sets of RNA transcripts. But even given RNA splicing and editing, it was thought that each DNA sequence was read from a single starting point (reading frame).
When overlapping coding sequences were first identified in viruses, it caused quite a stir. It has been known for some time that overlapping codes existed in bacteria, but researchers assumed that the phenomenon would not exist in eukaryotic cells (cells with a nucleus).
However, in yet another unexpected (and unwelcome) surprise for Neo-Darwinism, it appears that genes do overlap in a wide variety of multi-cellular organisms. The ENCODE project has confirmed that overlapping genes are common in higher genomes, where a given DNA sequence routinely encodes multiple overlapping messages. This means that a single nucleotide or set of nucleotides can contribute to two or more genetic codes.
Recently, protein coding regions of 700 species were analyzed, and the analysis showed that virtually all forms of life have extensive overlapping information in their genomes including mammals.
Codependency between codons (each set of 3 base pairs that encodes an amino acid) of overlapping protein-coding regions imposes a unique set of evolutionary constraints. Although dual coding would seem to be nearly impossible by chance, it turns out that a number of human transcripts contain overlapping coding regions as well; 40 candidate sequences have been identified thus far.
No one knows how overlapping genes could have evolved. But it is hard to reconcile them with Neo-Darwinism, because a multi-functioning DNA sequence would be subject to multiple constraints and any random change (mutation) would perturb at least one of the gene products. As the researchers from one recent study said:
“As the number of overlapping codes increases, the rate of potential beneficial mutation decreases exponentially, quickly approaching zero.”
Given the commonality of overlapping codes in higher genomes it would seem that the random mutations which are purported to drive evolution under a Neo-Darwinian framework, would be diminishingly rare, if they could occur at all. It is a great mystery for any naturalistic explanation, but fits nicely in a design paradigm.
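A back-of-the-envelope sketch of why the rate falls exponentially: suppose a fraction p of random point mutations is tolerable (non-deleterious) under a single code, and treat tolerability under each of k overlapping codes as independent. Both the value of p and the independence are my illustrative assumptions, not measured facts, but they convey the shape of the argument:

```python
# If a site participates in k overlapping codes and a random mutation
# must be tolerable under every one of them, the tolerable fraction
# shrinks geometrically: p ** k.

def tolerable_fraction(p: float, k: int) -> float:
    """Fraction of mutations tolerable under k independent overlapping codes."""
    return p ** k

for k in range(1, 5):
    print(k, tolerable_fraction(0.1, k))
# With p = 0.1 the fractions are 0.1, 0.01, 0.001, 0.0001: an exponential
# decline in k, matching the quoted claim of rates "quickly approaching zero".
```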
Think of it this way. Imagine trying to write a paragraph that had one meaning when read from its beginning, but quite another meaning when the reading started a character or two later. Think of how much easier it would be to generate two separate character strings, one for each intended meaning.
7.2.5 Edge of Evolution – Manyuan Long vs Doug Axe & Michael Behe
The difficult problems in evolution are related to creating complex new organs which require many new cell types and new genes for new proteins for each cell type. These new cell types often require new proteins with entirely new protein structures, i.e. new folds.
As we have seen, the protein sampling problem presents a severe challenge to that. Again, the protein sampling problem is the very small set of gene sequences that code for viable proteins versus the vast set of gene sequences that produce no viable proteins. We said that the protein sampling problem is analogous to the small number of word or letter sequences that produce meaningful text versus the vast set that yield no viable meaning. The best research tells us that the number of viable sequences within the set of total possible sequences is about 1 in 10^77.
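The scale of that claim can be made concrete with rough arithmetic (illustrative only; the 150-residue protein length is my example, and the 1-in-10^77 figure is taken as given from the discussion above):

```python
# Order-of-magnitude sketch of the protein sampling problem.
import math

residues = 150                       # illustrative protein length
total_sequences = 20 ** residues     # all possible amino acid strings
functional_fraction = 1e-77          # quoted estimate of viable fraction

# Sequence space spans roughly 10^195 possibilities...
print(round(math.log10(total_sequences), 1))

# ...and at 1 viable sequence per 10^77, a blind search would need on the
# order of 10^77 random draws to expect a single success.
expected_trials = 1 / functional_fraction
print(expected_trials)
```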
But what about creating a single new gene-protein from an existing protein? Can evolution by mutation and selection even create a new gene from a similar gene?
The seminal paper on the origin of “new genes” is one authored by Manyuan Long, “The Origin of New Genes,” which is commonly cited by Neo-Darwinists to support the theory. In fact, the paper was cited in the Dover trial in 2005. The Long paper cites several other studies purporting to show that new genes can be created by the Neo-Darwinian mechanism of random mutation and natural selection.
The first point to mention is that the papers cited in the Long article start with an existing gene so they really aren’t completely “new genes” in the sense of entirely new functions which typically require new protein folds. And this is what would be required in the Cambrian for example. What the Long paper pertains to is the creation of “new genes” that can catalyze some new reaction. In all cases, these studies begin with a functional gene and by comparing sequence similarities make the claim that gene B was evolved from gene A.
Here is some background. Genes that are related to one another through sequence similarity are called “homologs.” When two genes are homologous with one another within a particular species, they are called “paralogs.” Neo-Darwinists assume that one of the genes in the homologous pair evolved from the other. The process by which this is assumed to have happened is gene duplication followed by random mutations of some set of the base pairs in the sequence. The source gene is duplicated, during crossover for example, and the duplicate is free to mutate since there is already a gene performing the function.
But there has always been a difficulty with this scenario of gene duplication and mutation as a mechanism of creating new genes. The reason for the difficulty is that the way evolution is supposed to work is that each step—each mutation—in this case, each random base pair substitution, has to yield some sort of a selective advantage. In other words, if it is an enzyme (a type of protein that catalyzes a reaction), each base pair substitution of the duplicated gene would have to produce a new enzyme that could achieve some new function in order for natural selection (and evolution) to be meaningful.
This is actually a subject of heated debate between those espousing the neutral theory, which posits that random genetic drift can negate the strict need for each step in a putative transition from one functional enzyme to another to be beneficial, and the ultra-Darwinists, who insist that each step must be beneficial. The debate is unresolved, so I am going to set it aside for now.
The Long paper specifically, and Neo-Darwinists generally, simply assume that related genes are homologous—that their separate functions occurred through evolutionary mechanisms. But do they really know that? Is it possible for one gene to be duplicated and then mutated to produce a new enzyme randomly?
This is a very important question because, remember, a central tenet of the scientific method is falsification. In fact, those who claim that Intelligent Design is not science do so on the basis that Intelligent Design cannot be falsified. Whether that is true, or even important, is a topic for debate. Scientists should make every effort to falsify their theories, and specifically in this case, there should be an attempt to falsify the idea that new genes result from gene duplication and subsequent random mutation.
Darwinists have not really attempted to falsify Neo-Darwinism, because that would likely lead to a re-evaluation of methodological naturalism and, by extension, a re-evaluation of materialism. A quick read of the Long paper reveals that the perceived relatedness of two genes is assumed to be via Darwinian mechanisms. So the intention of the Long paper was not to falsify Neo-Darwinism, but to confirm it under the assumption that it was already true! Intelligent Design scientists, however, have attempted to falsify Neo-Darwinism in this vein.
There are two research efforts to look at. The first is a study conducted by Doug Axe of the Biologic Institute. The second, a study conducted by Robert Summers of the Research School of Biology at the Australian National University, was an attempt to confirm or deny a specific claim made by Michael Behe in his book, The Edge of Evolution.
Doug Axe, Ann Gauger – The Origin of New Genes
Doug Axe is an Intelligent Design scientist working for the Biologic Institute. Axe wanted to look at the validity of the claims in the Long paper that the evolution of new genes is commonplace and that the process by which new genes arise is gene duplication followed by random mutation and natural selection.
There were two questions to look at, given two related enzymes that Neo-Darwinists assume were homologous paralogs, i.e., that one was derived from the other through Neo-Darwinian mechanisms: 1) How many base pair substitutions would be required to transition from the source gene to the new gene? And 2) Can evolution by random mutation and selection achieve that number of base pair substitutions in a realistic time frame?
Axe selected two enzymes that were assumed to be paralogs within a large ‘superfamily’ of presumed homologs, the pyridoxal-5-phosphate (PLP) dependent transferases in common bacteria. There are about fifty (50) structurally similar enzymes that share a common fold in this family. The two enzymes he chose were Kbl and BioF, and the transition he wanted to verify or falsify was from Kbl to BioF. In this case there were 250 base pair differences. The first goal was to determine the minimal number of base pair substitutions that would be required in Kbl to achieve some level of BioF function. Axe concluded that only six or seven base pair substitutions were required to make the transition.
The second goal was to assess if evolution could achieve this transition in a reasonable time. This would require knowing the population size, reproduction cycle and mutation rate.
The conclusion of Axe’s research was that, “some 10^30 or more generations would elapse before a BioF-like innovation that is paralogous to Kbl could become established. This places the innovation well beyond what can be expected within the time that life has existed on earth, under favorable assumptions.”
If evolution cannot achieve new protein functions when several changes are required, what limited set of changes represents the threshold at which Neo-Darwinism could achieve a new function? If not six (6) or seven (7) mutations, could evolution by random mutation and natural selection achieve a new gene if only three (3) or four (4) mutational changes were required?
Michael Behe – The Edge of Evolution
Biochemist Michael Behe of Lehigh University looked at this same question in his second book, The Edge of Evolution. Michael Behe, as you recall, is an Intelligent Design scientist and author of Darwin’s Black Box, which introduced the concept of irreducible complexity. To review briefly, irreducible complexity refers to very complex molecular machines composed of multiple necessary protein components: eliminate one or two of the genes and the entire molecular machine can no longer perform its function.
Like Behe’s first book, The Edge of Evolution was subjected to a torrent of criticism from the Neo-Darwinian establishment. And, like irreducible complexity, which still stands after all the dust has settled, so too, it appears, does one of Behe’s major points in The Edge, which relates to simple adaptation.
The problem with falsifying Neo-Darwinism using molecular machines, as Behe did in Black Box, is that the complexity of these molecular machines does not readily lend itself to quantitative analysis. The calculations are overwhelming. William Dembski has done them for the bacterial flagellum, and the results are quite unfavorable to Neo-Darwinism, of course. But because the quantification of such complex adaptations was difficult, Neo-Darwinists were free to slip past the strong inference of design that these molecular machines offered, using a variety of story-telling techniques and by offering descriptive terms as solutions, for example “co-option” as an explanation for the bacterial flagellum.
But if the problem is simplified to basic adaptations, mathematical probabilities can perhaps be applied. That is what Behe did in this second book, The Edge of Evolution.
Behe looked at the ability of the malaria parasite (Plasmodium falciparum) to adapt—gain resistance to—the medication chloroquine. Noting the fact that resistance required “specific changes in a particular malarial protein (called PfCRT) for the development of resistance to chloroquine” he asked how long should it take to acquire resistance assuming that the resistance was strictly by virtue of a random set of changes.
Behe first determined that two mutations were required for Plasmodium falciparum to acquire resistance to chloroquine. He calculated that, if we understand molecular biology correctly and the probabilities of random mutations are what Neo-Darwinism expects them to be, it would require an inordinate number of samples (organisms over a specific period of time) for Plasmodium falciparum to acquire resistance to chloroquine. Behe inferred from this that complex organs could not, then, evolve by Neo-Darwinian means.
Neo-Darwinism requires that each distinct random mutation, each base pair substitution, has to offer some incremental benefit. But in the case of malaria resistance, Behe was claiming that two mutations are required to offer resistance to chloroquine; any single mutation would not help. He then predicted that resistance should result only after a large number of opportunities, i.e., a large population of Plasmodium falciparum organisms. He calculated that resistance to chloroquine would require 10^20 Plasmodium falciparum organisms.
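The shape of the 10^20 figure can be sketched as a simple multiplication of probabilities. The per-mutation rate below is my illustrative assumption chosen to make the arithmetic transparent, not Behe's measured value:

```python
# If neither mutation alone confers any benefit, both must be present
# together before selection can act, so (treating the events as
# independent) the joint probability is the product of the singles.

p_single = 1e-10        # assumed per-organism rate for one specific change
p_both = p_single ** 2  # joint probability of both specific changes

organisms_needed = 1 / p_both
print(f"{organisms_needed:.1e}")  # on the order of 1e+20 organisms per origin
```

This is why requiring even one extra coordinated mutation multiplies the waiting population so dramatically: each additional required change multiplies the denominator by another factor of p_single.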
So, if resistance to chloroquine requires 10^20 organisms to acquire the necessary two random mutations in a specific protein, how on earth could one imagine that evolution by random mutation and natural selection could produce, for example, an eye in roughly 20 million years in the Cambrian, given the much smaller population sizes and the longer breeding cycles of multi-cellular animals?
Behe’s conclusion related to the malaria parasite was widely misunderstood. His critics thought that he was saying that the two mutations had to occur simultaneously in the same organism. He was not in fact saying that. Behe was simply saying that two random mutations are required and that, if evolution is true, then the first step has to be beneficial, otherwise it would not be preserved by natural selection. (There is debate on this point, however, as I mentioned, between the neutral theorists and the ultra-Darwinists; the latter, such as Dawkins and Jerry Coyne to name a few, insist that any change, in order to be preserved, has to offer some selective benefit.)
But in any case, Behe’s statements in The Edge of Evolution were recently vindicated by the research of Robert L. Summers et al. of the Australian National University. Behe noted that if evolution by natural selection must skip an intermediate step in a long and relentlessly detailed evolutionary pathway in order to attain a beneficial state (in this case, one of the two mutations for resistance), the probability of reaching that state decreases exponentially:
“I argued that the evolution of many protein interactions would fall into the skip-step category, that multi-protein complexes in the cell were beyond the reach of Darwinian evolution, and that design extended very deeply into life.”
So it seems, perhaps, that evolution by random mutation and natural selection might be limited to perhaps two, or even just one, random base pair change. The conclusion here is that some other, non-random mechanism is at work.
8 Convergent Evolution
Recall from the Introduction of this paper that we said that complex specified information systems have many components, have many dependencies and interdependencies, and therefore are highly constrained, highly improbable, and non-deterministic. Further, complex specified information systems achieve something: they conform to some known pattern. Conforming to a known pattern is the second side of the design coin, so to speak. This section, on convergences in the evolution of life, pertains to this idea of conforming to a known pattern.
We have seen in the section above on “Complexity of Life” that life is extraordinarily complex. And in the previous section we looked at how complex features arose rapidly. What is left now in order for one to gain a strong inference of design without delving into the details of evolutionary biology is to show how common the sudden appearance of extraordinarily complex features is in the evolution of life.
Convergent evolution is the independent evolution of similar features in species of different lineages. Convergent evolution creates similar structures or functions that were not present in the likely last common ancestor of those groups.
Convergent evolution reveals a pattern, and a pattern suggests a direction: teleology. If, as Darwinists say, evolution is contingent and directionless, why is there so much evidence to the contrary? The evidence of direction revealed by convergence is massive. Darwinists have had to explain away convergent evolution, and they use natural selection to do that. But it is important to keep in mind that in order for natural selection to act, the same random mutations would have had to occur in the first place. And therein lies the problem for Darwinists.
Assigning a name to the phenomenon “convergent evolution” and adopting it into their vernacular does not mean it is consistent with the Neo-Darwinian program. Convergent evolution is at odds with Neo-Darwinism because it shows direction and direction with complexity involved. Stephen Gould went to great effort to deny direction in evolution and also deny a trend toward complexity, even while recognizing convergent evolution, in two of his books, Wonderful Life and Full House.
Convergent evolution, the same complex features occurring over and over again, occurs at all levels: molecular, tissue and organism. The examples are too numerous to list them all, or even many of them. Simon Conway Morris has written two books on the topic, with a third near release. Although Morris is a Darwinist, he uses convergent evolution, which has a teleological quality to it, to inform his Christian viewpoint.
My personal opinion is that Neo-Darwinism, given that the mechanism of random mutation is necessary, would have to be directionless. This, I believe, is true despite the role of natural selection, which could conceivably have the capability of channeling random variation along a limited set of pathways; you still have to have the same series of random mutations to work with.
Wikipedia has a list of a couple hundred convergences: https://en.wikipedia.org/wiki/List_of_examples_of_convergent_evolution
Here are just a few of the more striking examples from that list:
8.1 Convergences at the Organism Level
- Multicellular organisms arose independently in brown algae (seaweed and kelp), plants, and animals.
- The pronghorn of North America, while not a true antelope and only distantly related to them, closely resembles the true antelopes of the Old World, both behaviorally and morphologically. It also fills a similar ecological niche and is found in the same biomes.
- The marsupial Tasmanian devil has many resemblances to the placental hyena: similar skull morphology, large canines, and crushing carnassial molars.
- The wombat, a marsupial, has many resemblances to the groundhog, a placental.
8.2 Convergences at the Organ or Tissue Level
- Koalas of Australasia have evolved fingerprints, indistinguishable from those of humans.
- Echolocation in bats and whales also both necessitate high frequency hearing. The protein prestin, which confers high hearing sensitivity in mammals, shows molecular convergence between the two main clades of echolocating bats, and also between bats and dolphins. Other hearing genes also show convergence between echolocating taxa.
- The forebrain structure responsible for vocal learning (learning, not instinct) is very similar in hummingbirds, songbirds, and parrots, even though these types of birds are not closely related.
- Leaves have evolved multiple times. They have evolved not only in land plants, but also in various algae, like kelp.
8.3 Convergences at the molecular Level
According to Fazale Rana, who lists hundreds of molecular convergences in his book The Cell’s Design, evolutionary biologists recognize five different types of molecular convergence:
- Functional convergence describes the independent origin of biochemical functionality on more than one occasion.
- Mechanistic convergence refers to the multiple independent emergences of biochemical processes that use the same chemical mechanisms.
- Structural convergence results when two or more biomolecules independently adopt the same three-dimensional structure.
- Sequence convergence occurs when either proteins or regions of DNA arise separately but have identical amino acid or nucleotide sequences, respectively.
- Systemic convergence is the most remarkable of all. This type of molecular convergence describes the independent emergence of identical biochemical systems.
Here are a few of the many examples.
- The protein prestin that drives the cochlea amplifier and confers high auditory sensitivity in mammals, shows numerous convergent amino acid replacements in bats and dolphins, both of which have independently evolved high frequency hearing for echolocation. This same signature of convergence has also been found in other genes expressed in the mammalian cochlea.
- Antifreeze proteins are a perfect example of convergent evolution. Small proteins from different organisms, each with a flat, threonine-rich surface, have been independently selected to bind to the surface of ice crystals. “These include two proteins from fish, the ocean pout and the winter flounder, and three very active proteins from insects, the yellow mealworm beetle, the spruce budworm moth, and the snow flea.”
- Hemoglobins in jawed vertebrates and jawless fish evolved independently. The oxygen-binding hemoglobins of jawless fish evolved from an ancestor of cytoglobin which has no oxygen transport function and is expressed in fibroblast cells.
These are just a few examples. There are many hundreds more and probably thousands yet to be discovered.
One striking case (astounding might be a better word), analogous to convergent evolution and displaying both complexity and conformance to a pattern, is the discovery of a camera-like eye in a single-celled organism. This single-celled planktonic organism “evolved” an eye called an ocelloid. The structure of the ocelloid is “eerily similar” to the mammalian eye. Since the ocelloid occurs in a single-celled creature, it was assembled from subcellular molecular components such as proteins, rather than from the entire cells of various types that assemble the similar mammalian camera eye.
Commenting on the ocelloid, lead author of the paper presenting the finding, Greg Gavelis, a zoology PhD student at UBC, said:
"It's an amazingly complex structure for a single celled organism to have evolved. It contains a collection of subcellular organelles that look very much like the lens, cornea, iris and retina of multicellular eyes found in humans and other larger animals. The ocelloid is among the most complex subcellular structures known.”
Recall the definition of complex specified information: complex information that achieves something, as assessed by whether it conforms to an independently given pattern. The complexity of this “eye” alone is difficult, and in fact I believe impossible, to explain through a Darwinian paradigm. But this eye is both complex and conforms to an independently given pattern: the multicellular eye of an animal. If you are looking for a single piece of evidence, a smoking gun, that Neo-Darwinism is false, I could offer no better single testament than this. This feature exhibits both staggering complexity and a pattern, a goal, all in one.
In the previous sections of the paper I have discussed modern evolutionary theory and have shown that the key principles of Neo-Darwinism, which have been regarded as unassailable by nearly all the scientists in all the universities across the globe for several decades, have been, or are in the process of being, shown to be false. The information problem, the appearance of complex specified information in living systems, is becoming far more of a problem for materialism, not less of a problem as is commonly assumed, as research continues to unveil the inner workings of the cell.
9.1 Trends - Non-Randomness, Saltation
Recent studies of the protein sampling problem show that locating a single viable DNA string coding for a functional protein would require far more time than the history of the earth allows. And yet living organisms are far more complex than ever imagined, meaning that far more of these elusive proteins would be required than previously thought. Biologists are running out of superlatives to describe the dazzling complexity of the myriad molecular machines, made up of dozens and even hundreds of proteins, that do all the work of keeping living organisms alive: “astounding,” “unimagined,” “extraordinary”… where else to go from there?
Research is demonstrating that evolutionary change, far from being gradual as Neo-Darwinism would require, is abrupt as the fossil record has always indicated.
[Life forms] do not evolve as the result of the gradual accumulation of small variations; they appear as full-fledged new orders of life, and they appear suddenly. The sudden appearance of new species and diversified orders of living organisms is wholly biologic, strictly natural. [58:6.3‒4] (P. 669)
Mechanisms such as symbiogenesis and “natural genetic engineering” produce rapid, systemic change using a variety of sophisticated mechanisms unimagined just a few decades ago. De novo or orphan genes, which make up 10 to 30% of each organism’s genome, appear out of the “dark matter” of the genome.
Nor does evolutionary change appear to be random. The “proof” offered in textbooks purporting to show that mutations are random, the famous Luria-Delbrück experiments of the 1940s, has evaporated. As James Shapiro puts it, “conventional evolutionists were hanging a very heavy coat on a very thin peg in the way they cited Luria and Delbrück. The peg broke in the first decade of this century.” In its place stands an intelligent system of bacterial adaptation to infection.
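To see what the fluctuation test actually measured, the two rival hypotheses can be sketched in a few lines of Python. This is a toy model, not the 1943 protocol; the mutation rate, number of cultures, and number of generations are round, illustrative values I have chosen for the sketch:

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's method; adequate for the small means used in this sketch
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def fluctuation(cultures=500, generations=20, mu=1e-6):
    """Resistant counts per culture if mutations arise DURING growth."""
    counts = []
    for _ in range(cultures):
        total = 0
        for g in range(1, generations + 1):
            pop = 2 ** g                       # population after g doublings
            mutants = poisson(pop * mu)        # new mutants at this doubling
            total += mutants * 2 ** (generations - g)  # final clone size
        counts.append(total)
    return counts

def fano(xs):
    # variance-to-mean ratio: ~1 for Poisson, >>1 for "jackpot" cultures
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / m

during_growth = fluctuation()
mean_count = sum(during_growth) / len(during_growth)
# Rival "directed mutation" hypothesis: resistance appears only upon
# exposure, so per-culture counts would be Poisson with the same mean.
on_exposure = [poisson(mean_count) for _ in range(500)]

print(fano(during_growth))  # far above 1: rare jackpot cultures
print(fano(on_exposure))    # near 1
```

With mutations arising during growth, early mutants found large resistant clones, so the variance across cultures greatly exceeds the mean; under the induced-mutation hypothesis the ratio stays near 1. That dispersion difference is what Luria and Delbrück measured.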
“A purposeful plan was functioning throughout all of these seemingly strange evolutions of living things, but we are not allowed arbitrarily to interfere with the development of the life patterns once they have been set in operation.” [65:3.1] (P. 733)
More and more apparently sophisticated, purposeful mechanisms related to “natural genetic engineering,” including horizontal gene transfer and, again, mobile genetic elements such as transposons, are demonstrating with little doubt that evolutionary change is not random. The discovery of overlapping genes, which are “impossible by chance,” is also strong evidence for the non-random nature of change.
Randomness is being falsified in the lab as well. Two empirical studies show that evolution by random mutation cannot even account for an adaptation requiring more than a single base pair mutation in any reasonable time frame.
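The kind of waiting-time arithmetic behind such claims can be sketched as follows. The mutation rate and population size here are round, hypothetical numbers chosen only to illustrate the scaling; they are not taken from the studies themselves:

```python
# Hedged sketch of waiting-time arithmetic for a specific double mutation,
# assuming independent point mutations. Both numbers below are illustrative
# assumptions, not values from the studies cited in the text.
mu = 1e-9           # mutations per site per replication (a rough typical rate)
population = 1e9    # replications per generation

single_per_generation = mu * population          # expected hits per generation
double_per_generation = (mu ** 2) * population   # both sites in one genome

print(single_per_generation)      # ~1: a specific single mutation shows up
                                  # about every generation
print(1 / double_per_generation)  # ~1e9 generations to expect a specific pair
```

The point of the sketch is the squaring: each additional required mutation multiplies the waiting time by roughly the reciprocal of the per-site mutation rate, which is why adaptations needing two or more coordinated changes are so hard to reach in any reasonable time frame.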
Perhaps the most earth-shattering discovery of all has been epigenetic inheritance, which involves purposive changes, resulting from environmental interactions, that affect the germ cells. The mechanism is largely unknown but clearly involves an extraordinarily complex series of interactions among a myriad of biochemical molecules. Epigenetics has resurrected, to some degree, Lamarckian evolution, the inheritance of acquired traits, an idea ridiculed for over a century. Epigenetics, in one grand sweep, has washed away both the Central Dogma and the “Second Law” of evolutionary biology, the Weismann Barrier, two principles thought to be unassailable even as recently as the turn of the century. (The Weismann Barrier is the belief that hereditary information (DNA) moves only from germline cells to somatic cells and never in reverse.)
Moreover, given the ubiquity of convergent evolution, it is difficult to claim that evolutionary change does not have a target. The same extraordinarily complex features and mechanisms appear time and time again at all levels: molecular, tissue and organism.
You may or may not be persuaded that the scientific consensus of modern evolutionary theory is, or even could be, incorrect. Obviously I have assigned a high level of credibility to James Shapiro, Eugene Koonin, and other materialist skeptics of the modern synthesis. None of them is in any way sympathetic to the idea that a Creator could have been involved in the origin of life or its unfolding. These researchers are simply following the evidence, and the evidence is extraordinarily difficult to square with Neo-Darwinism or any purely naturalistic account of evolution.
Again, Eugene Koonin:
“Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. … The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic supergroups; and animal phyla. In each of these pivotal nexuses in life's history, the principal "types" seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate "grades" or intermediate forms between different types are detectable.”
And James Shapiro:
“It is difficult (if not impossible) to find a genome change operator that is truly random in its action within the DNA of the cell where it works. All careful studies of mutagenesis find statistically significant non-random patterns of change, and genome sequence studies confirm distinct biases in location of different mobile genetic elements.”
“As already noted, though, the key question is not so much whether changes are truly random (there can be no such thing independent of context) but whether they are chance events from the viewpoint of function. The evidence is that both the speed and the location of genome change can be influenced functionally.”
“Do the sequences of contemporary genomes fit the predictions of change by “numerous, successive, slight variations,” as Darwin stated, or do they contain evidence of other, more abrupt processes, as numerous other thinkers had asserted? The data are overwhelmingly in favor of the saltationist school that postulated major genomic changes at key moments in evolution.”
“We are no longer free to imagine that evolution waits around for “accidents” to knock genes askew so as to provide new material for natural selection to work on. The genome of every organism is actively and insistently remodeled as an expression of its context. Genetic sequences get rewritten, reshuffled, duplicated, turned backward, “invented” from scratch, and otherwise revised in a way that prominently advertises the organism’s accomplished skill in matters of genomic change…it is now indisputable that genomic change of all sorts is rooted in the remarkable expertise of the organism as a whole.”
David G. King:
“The dismissive dictum, ‘Mutations are accidents,’ has grown obsolete…the spontaneous, non-accidental production of genetic variation are deeply embedded in genomic architecture.”
And Denis Noble:
“All the central assumptions of the Modern Synthesis (often called Neo-Darwinism) have been disproven.”
None of these scientists are Intelligent Design proponents. They are simply stating what the evidence shows. But the problem for Neo-Darwinists is that, in accepting this new research, there is no purely naturalist successor to Neo-Darwinism in hand, and my view is that one will never be found.
9.2 Visualizing Darwinian Evolution
One way to gain a sense of the problem facing Neo-Darwinism is to imagine writing down all that we understand about living organisms in human text. Let’s say we are not going to use illustrations and that we have to describe all we know about life down to the lowest level of detail. It should be clear that such an endeavor would require a multi-volume set. I have no idea how many words or pages or volumes, but I do know that it would be quite large. The helicase alone, for example, would fill an entire book, and that is just one molecular machine, one necessary for even the most basic cell.
Now imagine that you had to construct those words using a random search, even one aided by a selection method. The first problem is that you are starting with a stack of empty sheets of paper. Let’s say you have a team of a million monkeys busily typing away and you could select from the gibberish any viable statements, anything even remotely useful as an explanation. Of course this is cheating a bit, because naturalistic evolution has no target.
Clearly there are many interdependencies of meaning as you build the books, arranging the various texts and so forth. This is not a perfect analogy, but I think it is useful for getting a sense of the problem. The protein sampling problem alone says that the chance occurrence of a single short protein of 150 residues that does anything at all is no better than one in 10^77. This is analogous to human text, and the analogy is revealing. The chance of one of our monkeys stumbling onto a single useful phrase on any subject is vanishingly small. According to Michael Denton, the ratio of meaningful 100-letter sentences to possible 100-letter strings has been estimated at one in 10^100. There are only about 10^80 atoms in the universe.
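The numbers above can be checked directly. The 1-in-10^77 functional fraction and Denton's 1-in-10^100 estimate are the figures cited in the text; the search budget in the last step is a deliberately generous assumption of mine, and the rest is simple arithmetic:

```python
import math

# Sequence space for a 150-residue protein drawn from 20 amino acids:
log10_sequences = 150 * math.log10(20)
print(round(log10_sequences))   # about 10^195 possible chains

# If ~1 in 10^77 of those does anything at all (the estimate cited above),
# a blind search expects on the order of 10^77 draws before a single hit.
functional_fraction = 10.0 ** -77

# Denton's text analogy: meaningful vs. possible 100-letter strings,
# compared against a wildly generous search budget of one trial per atom
# in the universe (~10^80) per second for ~4e17 seconds:
meaningful_fraction = 10.0 ** -100
trials = 10 ** 80 * 4 * 10 ** 17
expected_successes = trials * meaningful_fraction
print(expected_successes)       # well below one expected success
```

Even with every atom in the universe "typing" once per second for the age of the universe, the expected number of meaningful 100-letter strings found remains below one, which is the force of the analogy.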
9.3 God of the Gaps
There is a persistent fear among those purporting to be the intellectually informed religious, expressed as a cautionary tale: one had better not take a stand as to where a Creator might play a role in the origin or evolution of life, because these openings, where a Creator is envisioned to act, will surely close. This is commonly referred to as the “God of the gaps” fallacy. In fact, the gaps are not being filled. The gaps are widening with no end in sight. With each new finding come more questions, more mysteries, and more intractable problems for any purely naturalistic accounting of life.
There is a corollary to the “God of the gaps” to which materialists commonly succumb, referred to as “promissory materialism.” No matter how complex and how improbable any materialist solution might seem, materialists can and do simply shrug their shoulders and say, “They will figure it out.” My belief is that the gulf between what we see in the complexity of life and what can be resolved through purely naturalist explanations will forever remain, and in fact widen.
9.4 Intelligent Design
I have sided with the Intelligent Design proponents because their views are being confirmed by the research. I believe that they have been more intellectually honest than their Neo-Darwinian counterparts. Time and time again I have seen the Intelligent Design proponents offer well-reasoned critiques of the theory, and time and time again they are met with scorn, ridicule, specious arguments, the conflation of Intelligent Design with Creationism, ad hominem attacks, hand waving, arguments from authority… anything and everything but a well-reasoned response.
Increasingly, the Intelligent Design scientists are getting the best in the many debates to the extent that an objective viewer can assess. Here is an example of a debate between two well-known Intelligent Design scientists and their atheist counterparts:
It is commonly said that Intelligent Design is not science. Is that true? And does it really even matter? Intelligent Design proponents reject methodological naturalism. Why, they ask, limit the range of hypotheses before really understanding the ultimate causes of nature? So if you define science as pertaining strictly to material causation, then you could argue that at least part of what Intelligent Design proposes is not science.
Academic scientists claim that Intelligent Design is not science because it cannot be falsified. Can Intelligent Design be falsified? Yes, I believe it can. Scientists could demonstrate through laboratory experimentation that a complex feature can evolve, or has evolved, by random chance and natural selection. They have not done that. I have addressed this in the segments on Michael Behe and Doug Axe. Alternatively, paleontologists could produce evidence of a fossil sequence showing a continuous, gradual evolution of a complex feature. No such sequence exists. The fossil sequences they do have, however, are probably adequate to demonstrate the truth of Darwin’s primary claim of common descent.
Regardless of whether Intelligent Design can be falsified, at least much of what Intelligent Design seeks to achieve clearly is science. Science requires falsifiability; that is part of the scientific method. But Neo-Darwinists are not eager to falsify their own theory, because the alternative is unthinkable for them: that the religionists and philosophers have something important to say. Intelligent Design endeavors to falsify Neo-Darwinism based on knowledge of living systems and the probabilities of mathematics. I think they are close to achieving that. If falsification of a theory using mathematics and knowledge of chemistry and biology is not science, I don’t know why not.
I agree with the renowned atheist philosopher Thomas Nagel, who said in his recent book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False:
“The problems that these iconoclasts [Intelligent Design proponents] pose for the orthodox scientific consensus should be taken seriously. They do not deserve the scorn with which they are commonly met. It is manifestly unfair.”
The reality is that, regardless of whether Intelligent Design is science, the important question, as philosopher Brad Monton has pointed out, is whether it is correct:
“One of the main lines of attack against intelligent design is to argue that intelligent design isn’t science. Even though I’m an atheist, I wanted to defend intelligent design by taking issue with this line of attack. Ultimately, what we really want to know isn’t whether intelligent design is science – what we really want to know is whether intelligent design is true. We could, if we wanted, agree with . . . Judge Jones that intelligent design is not science. But if it turns out that intelligent design is true, would the fact that it’s not science really matter?”
9.5 Where is the Debate Heading?
The fact that Neo-Darwinists never question their theory, even when presented with current research revealing unimagined complexity that has arisen rapidly and repeatedly, producing the same types of features, should raise a red flag. But it hasn’t. If Neo-Darwinists were in fact as intellectually honest as they claim, they would acknowledge the current limitations of their theory in light of recent evidence.
Neo-Darwinists do not acknowledge these problems publicly because acknowledging them would open the door to a reassessment of methodological naturalism and avail an unwelcome intrusion by philosophers and religionists.
If you think that the complexity exhibited by these protein machines could have been produced by random processes, you are of course not alone. I, however, find that utterly incredible, and I agree with philosopher James Barham, who said:
“The gradual crumbling of the Darwinian consensus and the rise of a new theoretical outlook in biology is one of the most significant but under-reported news stories of our time. It's a scandal that science journalists have been so slow to pick up on this story. For, make no mistake about it, the story is huge. In science, they don't come any bigger.”
I also agree with Thomas Nagel when he said:
“Whatever one may think about the possibility of a designer, the prevailing doctrine that the appearance of life from dead matter and its evolution through accidental mutation and natural selection to its present forms has involved nothing but the operation of physical law—cannot be regarded as unassailable. It is an assumption governing the scientific project rather than a well-confirmed scientific hypothesis.”
I am less tentative in my statements than Nagel, of course. Nagel appears to be somewhat limited in his knowledge of current research. He has also been immersed in academia for virtually his entire adult life, so I can understand how he might have difficulty stepping back to gain an entirely new perspective.
The details of the evolutionary debate are quite complex. As I mentioned, Neo-Darwinists challenge everything and concede nothing. But what is undeniable is that there are three trends that contravene Neo-Darwinism, or any purely materialist theory of evolutionary change. Taken together, these trends provide a very strong intuition that something teleological is going on, even if no driving mechanism seems apparent or can be identified. Here are the three trends:
- The increasing understanding of the spectacular complexity of living systems.
- The rapidity with which this complexity is now known to have arisen.
- The ubiquity of convergent evolution of these spectacularly complex adaptations, at all levels.
A high quotient in the ratio of complexity over time (c/t), coupled with the fact that this is a repeated phenomenon (convergent evolution), is strong evidence for teleology. And there is another important point about the interworking of complexity and convergence, which Israeli physicist Lee Spetner recently made:
“The big problem for neo-Darwinian evolution is that they must show that the probability of getting the right mutations at the right time is large enough to make evolution work. We know the mutation rates (approximately) but we don't know what fraction of them will be adaptive in any particular situation. It turns out that if we assume the fraction is large enough to make evolution work, then there are too many evolutionary pathways to allow convergent evolution.”
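Spetner's trade-off can be put in toy numerical form. All of the figures below are invented purely for illustration; they are not his calculations:

```python
# Toy illustration of the trade-off Spetner describes: granting MORE
# adaptive pathways makes adaptation easier, but makes independent
# convergence on the SAME pathway less likely. Numbers are invented.
possible_paths = 10 ** 6

for adaptive_fraction in (10.0 ** -5, 10.0 ** -3, 10.0 ** -1):
    adaptive_paths = possible_paths * adaptive_fraction
    # two lineages each picking an adaptive path at random:
    p_two_lineages_converge = 1 / adaptive_paths
    print(adaptive_fraction, adaptive_paths, p_two_lineages_converge)
```

As the assumed adaptive fraction grows from 10^-5 to 10^-1, adaptation gets easier, but the chance that two independent lineages land on the same pathway falls from 1 in 10 to 1 in 100,000, which is the tension Spetner points to.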
For all the talk and all the studies related to Neo-Darwinism, one might think that there are robust theories about how complex living functions arose. But as I mentioned in a previous section, when Michael Behe’s book Darwin’s Black Box came out and made the claim that there were no detailed accounts (that he could find) in the literature of how complex features could have arisen, the Neo-Darwinian community was outraged; they circled the wagons and assured us all that there were many such explanations. Then James Shapiro declared in National Review that:
"There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations."
And despite the on-going claims, no one has found any to this day as far as I know.
It is hard for many to accept that those in a profession which above all requires open, rational, and honest inquiry could be held captive for over seven decades by what can only be described as perhaps the greatest intellectual canard in human history. It appears that nearly all the academics, in all the universities, in all the world, have collectively engaged in what can best be described as a massive fool’s errand. Like lemmings aimlessly running this way and that, they have sought to find evidence for a vacuous theory based solely on an erroneous a priori assumption.
As to the question of when Neo-Darwinism will finally expire …it is anyone’s guess. I think Max Planck said it best about the way an old theory dies and is replaced by a new theory:
“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.”
In the conclusion of Mind and Cosmos, Thomas Nagel addresses this conundrum of human behavior,
“I have argued patiently against the prevailing form of naturalism, a reductive materialism that purports to capture life and mind through its neo-Darwinian extension. But to go back to my introductory remarks, I find this view antecedently unbelievable—a heroic triumph of ideological theory over common sense. The empirical evidence can be interpreted to accommodate different comprehensive theories, but in this case the cost in conceptual and probabilistic contortions is prohibitive. I would be willing to bet that the present right-thinking consensus will come to seem laughable in a generation or two—though of course it may be replaced by a new consensus that is just as invalid. The human will to believe is inexhaustible.” [Emphasis mine]
If scientists could be so wrong about Neo-Darwinism—a theory that has had near universal acceptance for seven decades—what else could they be wrong about? Could they be wrong about the very nature of life itself? We will look at this in the next section.
10 Vitalism
As I mentioned in the Introduction, this paper is a work in progress. That statement is truer of this section on vitalism than of any other. I cannot offer any knock-down evidence that vitalism is true. The best I can do is offer some insights that could lead to an inference that vitalism might well be true. Some of what I offer here is speculation, but I think there is a sound basis for the speculations I offer. With that disclaimer, let me continue.
The previous sections of this paper focused on the complex specified information problem in living organisms with respect to their origin and evolution. The purpose of those sections was to assess the viability of natural processes and to ask whether a design inference is reasonable, and indeed necessary, to explain the origin and evolution of life.
This section addresses a different problem. Fundamentally, what is important to understand is the nature of causation in the operation of the living cell. This question goes deeper than the information problem related to the genome. Can material causation explain the workings, the operations, the moment-by-moment functions of the living cell? I suggest that it cannot.
10.1 What Has to Be Explained?
Vitalism is the metaphysical doctrine that living organisms possess a non-physical inner force or energy that gives them the essential property of life. That is the textbook definition, anyway. Vitalists believe that the laws of physics and chemistry alone cannot explain life’s functions and processes.
There are very few, if any, reputable scientists that I am aware of who believe in something like vitalism. You could probably fit all the scientists who countenance something like vitalism in a phone booth and still have room for a few small farm animals. Rupert Sheldrake and Jon Lieff are two of the few reputable scientists who appear to believe in some form of vitalism. Of course, by countenancing vitalism one instantly becomes disreputable in the eyes of the established scientific community. If there are others, they are keeping quiet. Interestingly, vitalism is widely accepted in chiropractic science. The Intelligent Design proponents do not claim to believe in vitalism; if they do, they are not saying so. Their focus has been primarily on the information problem.
The Urantia Book, however, is unambiguous on this point:
“The vital spark—the mystery of life—is bestowed through the Life Carriers, not by them. In every living plant or animal cell, in every living organism—material or spiritual—there is an insatiable craving for the attainment of ever-increasing perfection of environmental adjustment, organismal adaptation, and augmented life realization. These interminable efforts of all living things evidence the existence within them of an innate striving for perfection.”
My interpretation of these statements, or at least one way of thinking about them, is to suggest that the living cell is endowed with an immaterial mind of sorts. In other words, there is agency—intelligent causation—acting at the level of the individual cell.
The claim of a modern-day vitalist, and in fact my tentative claim, is this: just as the laws of physics and chemistry, even in light of natural selection, are not sufficient to account for the complex arrangements of organic molecules from an evolutionary perspective, so too are they insufficient to account for the complex functional arrangement and dynamic movement of molecules within the cell. Above all, the laws of physics and chemistry, material causes, are not sufficient to account for the behavior of cells themselves.
10.2 Why was Vitalism Dismissed?
My contention is that vitalism has never been falsified. It has been dismissed prematurely based on faulty reasoning and an a priori commitment to materialism. Let’s look at why vitalism has been dismissed.
First and foremost, vitalism was dismissed because it conflicted with the wave of materialist thought that became dominant by at least the early part of the 20th century. Vitalism conflicted with science’s methodological naturalism and its subsequent wholesale adoption of materialism. Furthermore, there was no way for science to directly falsify vitalism, and it therefore cluttered up research by introducing a large set of unknowns.
Finally, in some sense the problems that might call for a vitalist explanation were largely unknown, because the complexity of living organisms was largely unknown. During the time Neo-Darwinism was being formulated, the cell was not known to be anywhere near as complex as we now know it to be. Without knowing that there were many molecular machines, proteins, and molecular components that needed to be localized properly in the cell, a materialist would obviously have no reason to fret about such a thing. Generally, people do not go around looking for problems they do not know exist.
Also, vitalism conflicted with what was perceived to be the causal closure of spacetime. And indeed, had Newtonian “classical” physics not been overturned by quantum mechanics, such an objection might be sustainable. Quantum mechanics, however, allows for a causally open universe. I discuss this in more detail in the subsection Is the Universe Causally Closed?
Modern claims that vitalism has been falsified, by Francis Crick and Richard Dawkins for example, revolve around the fact that DNA, once heralded as the secret of life, is known to be composed of purely chemical compounds, and further, that these compounds can be synthesized in the lab along with many other organic molecules.
The belief that the validity of vitalism hinged on whether organic molecules could be synthesized in the lab was really a straw-man argument, although in the early part of the 20th century vitalists did commonly believe that this could not be done. I do not think that the ability to synthesize organic molecules in the lab has any bearing at all on whether vitalism is true.
There are three aspects of life, let me call them mysteries, that remain unresolved and that bear on the issue of vitalism. They pertain to the complex functioning of living cells: how cells seem to know what they are doing and where they are going; how cells repair themselves; and how the molecular components of the cell get to where they need to be, when they need to be there, to accomplish the marvelous functions of living organisms. But vitalism also comes into play in evolution, specifically in how complex new features of life arise.
10.3 Cell Intelligence
Brian J. Ford of Gonville and Caius College, University of Cambridge, has long been an advocate of Systems Biology, which focuses on the intelligence of the cell itself as a whole. It is still a materialist viewpoint, but one that hints at a complexity that strains material explanations.
Here are his remarks on the extraordinary ability of single celled organisms to repair themselves following a catastrophic breach.
“I have examined in detail the repair of an Antithamnion cell that has been captured on…time-lapse video [of] a cell that was torn open with a fine dissecting needle. The empty and broken cell wall remained in two portions that were separated as clearly as cutting a drinking-straw with scissors. Antithamnion then embarked upon a remarkable sequence of events that restored the empty cell wall to full function. Close examination of the video frame-by-frame, allows one to observe how it is not merely the cell contents that are restored: the broken and displaced cell wall itself is also repaired and reinstated. It is not merely patched, like a bicycle tire, but meticulously realigned and permanently healed.”
You can view this fascinating video at:
There is no real materialist explanation here that makes sense (limiting the discussion to the last part of the video, concerning repair of the breached cell). The complexity of the repair is astounding. And that repair functionality, were it attributable to material causes, would have had to be acquired over time by Neo-Darwinian mechanisms. But Neo-Darwinian mechanisms are incremental: each incremental improvement in a complex process such as this would have had to occur and then be locked in by natural selection. In the case of a complete catastrophic rupture of a cell, however, how could there be any incremental set of random changes? Either a ruptured cell is repaired and lives, or it is not. There would seem to be no step-wise process possible. Keep in mind that we are not just talking about information content, as we were in the previous sections. Cell repair is a real-time process in a single cell that would seem to require the coordination of a multitude of molecular components.
I talked about morphogenesis in the section on the Complexity of Life. Morphogenesis, the formation of multi-cellular organisms during development, is perhaps the most complex process known. It subsumes the complexity of many other biologic functions; indeed, one could claim that it subsumes all of them.
Multi-cellular development requires that cells know their precise location. Research shows that cells make calculations based on inputs from their environment. Cells “use data from morphogenic fields of diffusible molecules and electrical gradients and complex networks of genes.” Cells signal back and forth with each other to help determine their exact location. Knowing their location is necessary in order to determine what type of tissue to differentiate into.
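A standard toy model of how a cell might read its position from a chemical gradient is Lewis Wolpert's "French flag" model. The sketch below illustrates only the general idea; the gradient shape and thresholds are arbitrary choices of mine, not data from any system discussed above:

```python
import math

# Wolpert's "French flag" sketch: a morphogen diffuses from a source at
# x = 0 and decays with distance; each cell compares the local
# concentration against fixed thresholds and differentiates accordingly.
def fate(concentration):
    if concentration > 0.66:
        return "tissue A"   # near the source
    if concentration > 0.33:
        return "tissue B"   # middle of the field
    return "tissue C"       # far from the source

# Exponential gradient with an arbitrary decay length of 4 cell widths:
concentrations = [math.exp(-x / 4) for x in range(10)]
fates = [fate(c) for c in concentrations]
print(fates)  # tissue A nearest the source, then B, then C
```

Even this cartoon shows the flavor of "positional information": identical cells reach different fates purely by reading a shared gradient. The real systems described above layer many such signals, electrical gradients, and gene networks on top of one another.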
Dr. Jon Lieff’s comments are intriguing:
“How can an individual cell be so intelligent? How can an individual cell integrate so many different kinds of information and then act in concert with many other cells? Where does this intelligence lie? Where is the direction for all of this? The cell must be tied to other information sources such as mind.”
10.4 “Self Organization”
I mentioned in the Section on the Complexity of Life that there is now a good deal of evidence that the form of a multi-cellular organism is not determined entirely by the gene regulatory network in the DNA—genetic programs. The claim is a bit controversial and probably unsettled. But the evidence is quite credible as I understand it. There is an analogous problem in the form or architecture of the cell itself. The question being asked now is whether the architecture and form of the cell is specified entirely by the genes. The answer here, too, appears to be perhaps not.
The growing sense supported by research is that the architectural structure of the cell emerges epigenetically (has causes beyond the genes) through self-organization. “Emerges” and “self-organizes” are words used by biologists when they observe a phenomenon that exhibits complexity but cannot be—or has not yet been—ascribed to material causes.
According to the molecular biologist Roy Britten,
“There are no known genes that individually encode large amounts of information specifying the structure or patterns of development. There is also no reason to believe that…spatial specifications might be concealed in the vast stretches of noncoding DNA.
“Self-assembly is the logical replacement for potential overarching regulatory concepts,” for which no specific mechanisms can presently be identified.”
“An organism will assemble automatically from parts (macromolecules, structures and cells) specified by nuclear control factors . . . Without global control systems, information for form is in the genes for structural proteins, adhesion molecules, control factors, signaling molecules, and their control regions.”
Self-organization in biology means that molecular order or arrangement emerges through local interactions of molecules where there is no apparent template, energy exchange, or intelligence involved. The first thing to note about this definition is that a scientist adopting methodological naturalism would have to assume, a priori, that intelligence was not involved. So the question is: How can local causation result in order and complexity?
Molecular biologists believe that ribosomes and spliceosomes, among many other complex molecular structures, self-organize. But is that simply a retreat from common sense based on an aversion to teleology? Or are there local causes that can produce order and complexity? Were one to suggest that material causes could produce order and complexity at the molecular level, one might fairly surmise that nature is infused with just the right properties to produce order, certainly; but complexity is quite another story. Complexity—information—has an arbitrary quality to it; order does not. Order is the result of lawful, deterministic processes; complexity is not.
The need for energy exchange is a principal reason why many materialists regard the universe as causally closed. Quantum mechanics changed all that. So, if it is the case that there is no energy exchange involved in self-organization, could it be proposed that the self-organization of complex molecular machines and structures might involve the invisible hand of mind? How could the question ever be answered? I am not sure it can. But I am a bit out of my element here.
Molecular biologists who make claims about the various phenomena falling under the banner of “self-organization” make these claims because the molecular components are subject to constraints. And it is these constraints that, by prohibiting some alternatives, channel the actions toward a consistent, ordered set of results. It is analogous to convergent evolution. The world expert on convergent evolution is Simon Conway Morris. A simplified version of his claim is that it is the constraints nature imposes on the Neo-Darwinian mechanisms of random mutation and natural selection that channel adaptations along predictable and repeatable paths.
According to University of Washington microbiologist Franklin Harold, a complete cell is necessary for self-organization. And it is only the cell that can provide the constraints which, he believes, channel the local activities of molecules in the cell into the splendid molecular machines we observe. Here are a couple of interesting statements by Harold that pertain to the self-assembly of cell membranes:
“It is a most curious fact, known from the early days of electron microscopy but seldom mentioned in the literature: phospholipid bilayer membranes readily self-assemble in the test tube, but rarely if ever do so in the living cell. On the contrary, the major classes of cellular membranes (plasma membrane, endoplasmic reticulum, nuclear membrane, and those of mitochondria and chloroplasts) all grow, and they grow by extension of an existing membrane. Polarity and membrane type are maintained during growth.
“Cavalier-Smith, who has made much of membrane heredity in recent years, distinguishes between “genetic” membranes, which always arise by growth and division of membranes of the same type (e.g., the plasma membranes of bacteria and the inner and outer mitochondrial membranes), and “derived” ones, which form by differentiation from dissimilar membranes (e.g., the eukaryotic plasma membrane). Genetic membranes, like DNA, appear to have been passed from one generation to the next since the dawn of cellular life.”
If I understand the point here, one might perhaps sum up Harold’s comment as: “Life begets life.”
10.5 Molecular Location in the Cell
Regarding the first question, how organic molecules in the living cell get to where they need to be when they need to be there: every astute observer who reads about cell division, transcription, translation, or any other function within the cell asks this very question.
If agency, intelligence, exists at the level of the individual cell through an endowment of mind, as I suspect is the case, then it seems reasonable to suspect that this endowment might also play a causative role in localizing the essential molecular components of the cell to their proper locations at the proper time.
Watch the animations of life at the link below: https://www.youtube.com/watch?v=Kzgnl5-8WAk
In the previous sections we asked, how does the information that specifies these molecular machines come into place? That is the information problem—the complex specified information problem. The other question people ask all the time is: How do these molecular components get to where they need to be when they need to be there…they seem to know where they are going? The causation available to materialism is quite limited in this regard, random diffusion principally.
Now of course when we look at animations of these processes, we recognize that they are simulations, but the animators have made every attempt to provide a realistic view of what must be happening in the cell. To the extent that these animations fall short of reality, it is not clear whether they do so in a way that makes vitalism seem more or less plausible.
John Travis writing in Science Magazine comments,
“If you think air traffic controllers have a tough job guiding planes into major airports or across a crowded continental airspace, consider the challenge facing a human cell trying to position its proteins. The latest analyses suggest that some of our cells make more than 10,000 different proteins. And a typical mammalian cell will contain more than a billion individual protein molecules. Somehow, a cell must get all its proteins to their correct destinations—and equally important, keep these molecules out of the wrong places. While research addressing this challenge has already produced a Nobel Prize, biologists stress that the mystery of how cells place their protein repertoire is far from solved.”
When talking about how organic molecules get to where they need to be when they need to be there, the general problem is that there are many millions of molecules in the relatively cavernous space of eukaryotic cells (cells with a nucleus), such as those of animals. These molecules have to get to where they need to be, but there are obstacles.
Generally, these molecules can react with many other molecules in the cell to some extent. So the question arises: how does it happen that a molecule destined for a DNA polymerase during DNA duplication, for example, avoids being interrupted by any of the multitude of other molecules it could react with?
Most biologists believe that the motion of macromolecules within the aqueous solutions common in cells is governed by random Brownian motion. But random motion does not seem plausible, especially in larger cells, as Laurieanne Dent comments,
“This [Brownian motion] may work in bacteria where molecules are in close proximity to one another where it is observed that “each soluble enzyme contacts every other enzyme and substrate once every second.”
Since the pioneering works by Einstein and von Smoluchowski, it is universally accepted that transport of mesoscopic particles in simple solvents is governed by Brownian diffusion. It is also recognized that this paradigm dramatically fails to describe the motion of molecules in complex biological media, such as the interior of cells. Three decades of biophysical investigations have characterized a number of ‘anomalous’ phenomena associated with the translational motion of molecules in cells.”
Dent goes on to say that, “forty years ago, it was shown that the targeting of the lac repressor [a transcription factor in bacteria] to its DNA-binding site occurred up to 1,000 times faster than the predictions of diffusion and random collision.”
It is known that the crowding effect of molecules in a cell produces what is called anomalous subdiffusion, which is diffusion slower than pure Brownian motion. There is a technological limitation on the ability to track a distinct particle in continuous motion; the best that can be done is a set of freeze frames. The complexity involved in empirical research into whether molecular components are entirely governed by mechanical forces as they make their way to a reaction site, unaided by a vital intelligent force, precludes any definitive assessment. Certainly, scientists are not considering a vital force and are not conducting experiments that might bring the question into better focus. For now the best we can do is make a rough assessment as to whether molecular movement and location in the cell is purely natural or aided by intelligence.
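To get a feel for the scales in question, the standard three-dimensional diffusion relation ⟨r²⟩ = 6Dt gives a characteristic crossing time of t = L²/6D. The sketch below is a rough, illustrative calculation only; the diffusion coefficient is an assumed ballpark value for a mid-sized protein in cytoplasm, not a measurement:

```python
# Rough estimate of the time for a protein to diffuse a given distance
# by pure Brownian motion, using the 3D mean-square-displacement
# relation <r^2> = 6*D*t, which rearranges to t = L^2 / (6*D).
# The diffusion coefficient is an assumed, illustrative value
# (~10 square micrometers per second for a mid-sized protein).

D = 10.0  # diffusion coefficient, um^2 per second (assumed)

def diffusion_time(distance_um, D=D):
    """Characteristic time (seconds) to diffuse distance_um micrometers."""
    return distance_um ** 2 / (6.0 * D)

# A bacterium is ~1 um across; eukaryotic cells are ~10-30 um.
for label, L in [("bacterium (1 um)", 1.0),
                 ("small eukaryote (10 um)", 10.0),
                 ("large eukaryote (30 um)", 30.0)]:
    print(f"{label}: ~{diffusion_time(L):.3f} s")
```

Because the time grows with the square of the distance, diffusion that looks instantaneous in a bacterium becomes seconds-scale across a large eukaryotic cell. And merely crossing the cell is a far easier task than finding one specific nanometer-scale binding site, which is the search problem the lac repressor result above bears on.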
Let’s take a closer look at some of these complex functions to help us identify why Brownian motion might not be a sufficient cause to explain how organic molecules in the cell get to where they need to be when they need to be there. We will look at four (4) key functions: 1) DNA Replication, 2) DNA Transcription, 3) RNA Splicing and 4) RNA Translation (protein synthesis).
When discussing complexity at the information level above, we looked at the number of molecular machines in the transcription complex. We noted that there were several machines, each comprised of many proteins, each in turn comprised of amino acid sequences. Here we are asking a question related to dynamics; in other words, how these molecules get to where they need to be.
10.5.1 DNA Replication
Observations… In order to copy DNA, the molecular components (bases) have to find their way to the DNA polymerase. How does that happen? The animation is slowed down so humans can see what is happening. DNA replication occurs at a rate of 50 nucleotides per second in humans. How do the Helicase and the other machines know how to get to the proper location and when to get there?
Notice that the “leading” strand is copied forward, but the lagging strand is copied in reverse. To understand what I mean by that, look at the videos at the following links; they can explain in imagery much better than I can in words.
There is an important complication not shown. In order to copy DNA, the components that are paired up with the separated strand have to be localized at the right time and place, and presumably they have to be oriented correctly to fit in the channel of the DNA polymerase. And since each of the four bases can fit in the channel, there has to be some provision whereby an incorrect pairing is identified and extruded. But a complication arises: how does the helicase become aware of this and compensate by halting the unwinding? It seems the helicase would have to stop and start constantly, and immediately, to account for cases where an incorrect base pair interfered with DNA replication. This would, of course, be a repeated phenomenon.
Another related consideration not shown in the animation, and I suspect not known, is what happens when the assembly of machines has to wait for the correct base on one of the copied strands while the other strand has already acquired its correct base pair. There is a dual dependency here. If the DNA polymerase copying the leading strand receives the proper base, but the lagging strand has to wait for the correct base, what happens? If it were left up to random chance and diffusion, it would seem one side or the other would always be waiting on the other.
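The 50-nucleotides-per-second figure invites some back-of-the-envelope arithmetic. Assuming a human genome of roughly 3.2 billion base pairs (an approximate, widely cited figure, not taken from the text), a single replication fork at that rate would need about two years; this is why eukaryotic cells replicate from many origins in parallel, each origin opening two forks. The sketch below is illustrative arithmetic only:

```python
# Back-of-the-envelope: how long would copying the human genome take
# with a single replication fork at the rate quoted in the text,
# and how much does parallelism help? Numbers are approximate,
# illustrative assumptions.

GENOME_BP = 3.2e9   # human genome size, base pairs (approximate)
FORK_RATE = 50.0    # nucleotides per second per fork (from the text)

single_fork_years = GENOME_BP / FORK_RATE / (3600 * 24 * 365)
print(f"One fork alone: ~{single_fork_years:.1f} years")

def replication_hours(n_origins):
    """Naive copy time in hours if n_origins fire at once,
    each opening two forks moving in opposite directions."""
    return GENOME_BP / (2 * n_origins * FORK_RATE) / 3600

# Even a thousand simultaneously active origins brings the time down
# to roughly the scale of a mammalian S phase (several hours).
print(f"1,000 origins: ~{replication_hours(1000):.1f} hours")
```

Real cells are understood to license tens of thousands of origins that fire in a staggered temporal program rather than all at once, so this is only a scale argument, but it does show why a single-fork picture of replication cannot be right.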
10.5.2 RNA Transcription
Let’s take a closer look at transcription. Transcription, again, is the way cells copy DNA in order to create an RNA template, which is used to synthesize proteins during translation. View the videos at:
The transcription pre-initiation complex, according to Wikipedia, “is a large complex of proteins that is necessary for the transcription of protein-coding genes” in eukaryotes and archaea. It includes the RNA polymerase and six general transcription factor proteins. The RNA polymerase is itself a large complex of 12 protein subunits. Collectively, the transcription initiation complex is an extraordinarily complex molecular machine.
So the interesting questions here are, first, how all the components of the transcription initiation complex locate themselves nearly simultaneously at the correct location on the DNA, and second, a similar question for the activator proteins, which must also locate themselves on a nearby segment of DNA. They then have to cause a bending of the chromosome in order to precisely bring their activator region to the enhancer region of the transcription initiation complex.
A collection of activator proteins is also required; it has to be assembled on the DNA somewhere, somehow, in order to initiate transcription. This raises the question as to how the activator sequence “knows” how to find the precise enhancer region of the transcription initiation complex.
10.5.3 RNA Splicing
RNA splicing is perhaps the most complex and mysterious process in living organisms. Conventional biochemistry viewed the relationship between genes and proteins as very simple, reflected by the truism—“DNA makes RNA makes Proteins”—of the now bygone age when the Central Dogma ruled.
It is now known, and this is a recent discovery, that a “gene” is a rather amorphous term. A gene strictly speaking today might be a sequence of DNA base pairs. But a “gene” does not simply make one protein. Between DNA transcription and RNA translation (to make proteins) there are two extraordinarily complex activities that occur: 1) RNA editing and 2) RNA splicing.
RNA editing involves the changing of an mRNA transcript to make a new or different sequence, which is then translated into a protein. The process is mediated by a set of molecular machines. RNA splicing is more complex.
RNA splicing is perhaps the most complex distinct function in the cell, rivaled only by the copying of the DNA molecule itself. In RNA splicing the mRNA transcript is chopped up and reassembled into multiple new mRNAs, which are then translated into proteins. In extreme cases there is a single gene that is spliced into 18,000 distinct mRNA transcripts to create 18,000 separate protein products. So the wonderment we all had when we discovered that we humans have roughly the same number of “genes” as a fruit fly was diminished when it was learned that a single stretch of DNA can produce many, many proteins.
But the question arises as to where the intelligence comes from that can direct the splicing process in such a complex way. No one really knows. One thing that is becoming increasingly apparent is that the stretches of DNA that do not code for proteins, yet are transcribed into RNAs, do appear to play a role in mediating the RNA editing and splicing process.
Let’s take a look at RNA splicing a bit more closely.
I think these animation videos speak for themselves. How these marvelously complex spliceosomes know how to self-assemble, how they know where to self-assemble, how they know what and where to splice and how they know when to disassemble are complete mysteries. “Self-assembly” is a magic word that describes a process presumed to be strictly material but ostensibly involves teleology. Were one to insist that the RNA splicing process is strictly the result of local material causes, I would be tempted to ask them how many certificates of ownership they had to the Brooklyn Bridge. RNA Splicing is just one of the many profound mysteries of the living cell.
10.5.4 mRNA Translation (Protein Synthesis)
Let’s take a look at mRNA translation.
Although these videos primarily relate to how the molecular components of proteins get to the ribosome, a related problem is how the resulting proteins themselves get properly located in the cell. One solution offered is a zip-code-like system. John Travis comments in Science Magazine:
“Researchers have observed the precise positioning of more than 3000 different types of mRNA during early fruit fly development. More than 70% exhibited clear localization. That’s a ‘staggeringly large number… It’s almost as if every mRNA coming out of the nucleus knows where it’s going,’ one researcher commented.”
Further research has shown that mRNAs appear to have built in location codes that some are calling zip codes to help RNA localize itself in the cell. But a zip code system is not sufficient for localization any more than a zip code could be said to cause a piece of mail to arrive in your mail box.
Consider the Kinesin protein used for transporting large cargo in the cell. The fact that these marvelous molecular machines use a system of roads in the cell still does not explain how they know which roads to take in order to deliver their cargo to the proper location. View the video of the Kinesins at work at the link below.
Disclaimer: These are just a few of the problems I have been able to put down on paper given what has been only a cursory look at some of these processes—replication, transcription, translation, and splicing. I have not had the time to fully investigate and assess what other profound problems related to molecular location in the cell might exist in these and the many other essential functions of living organisms. I am sure the comments I have offered in the immediately preceding subsections represent only the tip of the iceberg of the profound problems lying in wait for any explanation limiting itself to material causes of molecular location in the cell. I am sure that for a person with the time and insight to scrutinize these astounding processes in the living cell, other profound problems will be forthcoming. “The devil is in the details,” as my engineering friends are fond of cautioning me.
Regarding the second question that bears on vitalism, how complex novel features in life develop and evolve, the atheist philosopher and Neo-Darwinian critic James Barham has said:
“We are finally beginning to realize, on the basis of irrefutable empirical evidence, as well as more careful analysis of Darwinian theory itself, that purposeful action in living things is an objectively real phenomenon that is presupposed, not explained, by the theory of natural selection.”
James Shapiro uses the term “vitalism” to describe the cognition that cells seem to have in order to evolve new features. There are a lot of questions and debates about what Shapiro is actually saying. Consider these jaw dropping statements:
“Natural genetic engineering generates different kinds of variation from those produced by classical mutations, one gene at a time. Rearrangements can take place at multiple locations at once and shuffle entire domains from one protein to another, producing novel combinations quickly and abruptly, perhaps even purposefully.” [Emphasis mine]
Notice he says, “novel combinations quickly.” Anything novel and that occurs in combinations is complex. So he is hinting at purposive change and then says so! Shapiro continues…
“Genomes, it seems, are built to evolve not at the petty pace of classical genetics, but in leaps that entail rearrangement of the genetic architecture or the import of foreign information. So could this be the way that organisms generate the multiple, coherent variations that seem required to manufacture complex organelles such as eyes or flagella? At the end of the day, that is a question that must be answered by experiment; we are not there yet, but the technology to address such issues is coming to hand.”
Massive systemic evolutionary change occurring in just a few generations is hopelessly incompatible with material processes. Shapiro continues…
“If experiments show that cells can make distinct appropriate natural genetic engineering responses to different adaptive challenges, we need to figure out how they do so. This almost certainly would prove to be more than a strictly mechanical process. How do cells carry out their computations to make useful goal oriented responses? A successful answer to that question will certainly involve cybernetics.” [Emphasis mine]
Shapiro uses the term cybernetics, which is the science of control and communication in systems. But cybernetics, since it is a science, would also be deterministic and algorithmic, would it not? How can an algorithm produce novelty? I don’t think it can, by definition, unless the constraints were defined in extreme detail. An algorithm is simply a latent manifestation, a postponement, of the application of prior intelligence—it is a plan, as Noam Chomsky has said. Shapiro concludes…
“If such investigations take evolution science into areas that are more than strictly material, so be it. As long as we stay within the realm of natural processes, there are no boundaries on what science can address.” [Emphasis mine]
What does Shapiro mean when he says, “This almost certainly would prove to be more than a strictly mechanical process” and “If such investigations take evolution science into areas that are more than strictly material, so be it”? It is not clear how something that can be demonstrated to be more than a strictly mechanical process, and that moves science into areas that are not strictly material, can fit into the realm of natural processes.
Clearly, if by “cognition” Shapiro means strictly natural causation (which is no doubt how most biologists interpret his statements, despite his hinting that this may not be an appropriate constraint), then such cognition is what I referred to earlier as an emergent property.
Recall that in the earlier section on Causation, I introduced the concept of “emulated agency.” But emergence from a physical system is still limited to material causes, because it would have to be the result of an algorithm programmed into cells that involves only chemical interactions.
A key point here is that Shapiro would not be saying that these processes may be “more than strictly material” were it not the case that living cells seemed to know what they are doing as they evolve new features.
Shapiro claims that cells have sophisticated techniques for evolving new adaptations. When asked how these marvelous capabilities, which he collectively calls natural genetic engineering, evolved, he demurs, saying only that he doesn’t know, that they were in place in the earliest cells, and that more research is required.
Others, such as William Dembski, have pointed out that unless Shapiro is claiming supernatural causes, he must revert to Neo-Darwinian mechanisms to account for how natural genetic engineering techniques evolved. I agree with Dembski; I don’t think there is any middle ground, because material, natural causes versus immaterial, intelligent causes constitutes a binary proposition and would preclude any “third way.”
There are profound problems for anyone claiming that cognition, developed through evolutionary means, could account for any complex adaptation. First off, Shapiro for example, is primarily using experimental evidence related to relatively simple adaptations. By extension he is assuming these same techniques must certainly be required for complex adaptations such as the eye.
The simple adaptations, e.g. color changes or changes in food preference in bacteria, are adaptations that one could, with only a bit of imagination and effective storytelling, accept as the plausible result of purely natural evolution, because it would be reasonable to think that the cell could have encountered a similar environmental situation in the past and, through a series of chemical interactions, plausibly developed a simple algorithm to account for them. Personally I don’t find this all that plausible, but for the moment let’s accept that it is.
Nevertheless, the primary problem in evolutionary science is not how a bacterium could have evolved an algorithm to know how to react when starved of one type of sugar for example but rather how a multi-cellular animal could evolve an eye along with many other complex features. Natural genetic engineering, as currently described would seem to offer no help whatsoever.
Here is the problem: Shapiro states that the Neo-Darwinian mechanisms of random mutation and natural selection are incapable of accounting for what is observed in living cells as they develop adaptations, and he claims that cells had this capability early on. So he is saying that Neo-Darwinian mechanisms cannot account for even these simple changes, yet three dilemmas are left unresolved:
1) How can it be imagined that these marvelous qualities of natural genetic engineering have to be invoked to explain even rather simple adaptations, while the far more complex set of functions that Shapiro collectively calls natural genetic engineering would itself have had to evolve through Neo-Darwinian means? The dilemma arises because Shapiro has deemed Neo-Darwinian mechanisms insufficient to account for even simple adaptations.
2) How can any naturalistic process, which is what natural genetic engineering is, produce extraordinarily complex novel features such as the eye, which require coordination with a vast array of additional, novel structures and functions?
3) How could these natural genetic engineering techniques, which must have arisen by Neo-Darwinian means, have the foreknowledge to engineer extraordinarily complex adaptations far in the future, and do so time and time again?
Cybernetics might be fine at producing novel things by chance if they were similar to what the system has been programmed to produce. If we are limiting ourselves to natural causes, which Shapiro certainly seems to be, cybernetic algorithms could only hope to produce very simple novel things by material chance.
Something like the eye is a much different thing. When asked how the eye evolved, Shapiro has said that it probably evolved in a series of large steps. I am not sure that helps much. Evolving complex new features is the central question of evolutionary biology. Neither Neo-Darwinism, nor natural genetic engineering, appears to be the solution.
While we can agree with Shapiro’s observations and how they conflict with Neo-Darwinism, we do not have to accept his conclusions, which are based on a materialist presupposition. Instead we are free to formulate our own alternative theories.
One such alternative is to conclude that information has been infused into living systems by intelligent divine agents.
10.7 Summary - Vitalism
In this section we looked at evidence suggesting that vitalism has not been falsified. There is much more work to do on this topic, and I am afraid it will have to wait for a future paper. We now leave the study of biology and turn our attention to the creative complex specified information exhibited by the human mind, where the greatest evidence for intelligent purpose in the universe is revealed.
Now we begin the lengthy evaluation of complex specified information exhibited by the human mind. It is in the human mind that the greatest amounts of complex specified information are generated, and over the briefest periods of time. Therefore the strongest case against materialism is made in the arena of mind.
In this section I will discuss a few of the problems related to consciousness and introduce the various philosophical theories of mind, particularly the most popular materialist theory of mind: the emergent mind incorporating the computational theory of mind, a variant of property dualism.
11.1 A Brief Review of Current Neuroscience
The best way to think of the workings of the brain, insofar as its purported ability (purported by materialists) to account for consciousness and thought, is that these qualities of mind are produced by sequences of patterns of neural firings. Much of the following is paraphrased from several sources.
There are two basic types of cells in the brain: neurons and glial cells. Neurons are the most relevant, since they are the ones that “fire” in patterns, which is what is said to give rise to all these marvelous qualities of consciousness and mind.
Neuron cells have dendrites and axons in addition to the central part of the cell. The dendrites are branch extensions. The axons are long, thin structures that connect to other neurons. The axons connect one neuron to many other neurons at the synapses of other neurons’ dendrites. There is actually a gap where the neurons connect called the synaptic gap or cleft.
The functioning of a neuron is such that an electrical charge builds up and propagates down the axon to the axon terminal. As the charge grows it reaches a threshold and causes a release of neurotransmitter molecules across the synaptic gap. When this happens the neuron is said to have “fired.” The neurotransmitters stimulate the dendrites of the connected neuron in a way that either increases or decreases (inhibits) its electrical charge.
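The charge-buildup-and-threshold behavior just described is commonly modeled in computational neuroscience as a “leaky integrate-and-fire” neuron. The sketch below is a minimal toy version; the threshold, leak factor, and input values are arbitrary illustrative choices, not measured parameters:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential
# integrates incoming stimulation, leaks back toward its resting
# value, and "fires" (then resets) when it crosses a threshold.
# All parameter values are arbitrary and purely illustrative.

def simulate(inputs, threshold=1.0, leak=0.9, v_rest=0.0):
    """Return the time steps at which the neuron fires."""
    v = v_rest
    spikes = []
    for t, stim in enumerate(inputs):
        v = leak * (v - v_rest) + v_rest + stim  # leak, then integrate input
        if v >= threshold:
            spikes.append(t)  # the neuron "fires": neurotransmitter release
            v = v_rest        # potential resets after the spike
    return spikes

# Steady sub-threshold excitation charges the neuron up to threshold
# periodically; alternating inhibitory input suppresses firing.
print(simulate([0.3] * 12))       # periodic spikes
print(simulate([0.3, -0.3] * 6))  # no spikes
```

The excitatory/inhibitory distinction in the text corresponds to the sign of each input: positive stimulation pushes the potential toward threshold, negative stimulation pulls it away.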
A neural network is the interconnection of a multitude of neurons. There is a lot of complexity in how these neurons can connect together into neural networks, and describing how all that gets done is beyond the scope of this paper.
To summarize a bit, though: brain activity is comprised of patterns of neurons firing in many neural networks. Not all neural networks are active at any one time. Conscious experience is believed by materialists to be correlated with particular neural processes in the brain; these are referred to as the “neural correlates of consciousness.”
The following videos provide an excellent overview of how the brain works.
Most philosophers are in agreement that consciousness is the single most confounding mystery of our time. There are no good theories about how material processes can give rise to our conscious sense of self and self-awareness. It is not even clear how one would even go about trying to understand what it is from a material perspective. Thomas Nagel puts it this way:
“The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms or how the world appears to their different particular points of view.”
The consciousness problem does not readily lend itself to a quantifiable denial of materialism, although its elusive nature and intractability would obviously suggest something other than a material explanation. Part of the problem is that the neural correlates of consciousness are somewhat contradictory. As a result many neuroscientists have proposed a concept called “specificity.”
The theory of specificity claims that there is a general mapping between the type of experience and a specific subset of neurons, but that the level or intensity of neural firings varies. This means there is a decoupling of cause and effect, which is not what one would expect from a materialist explanation. What one would expect from a materialist explanation is that the same patterns of neural firings would yield the same mental experience. Current research seems to show that this is not the case. This means that neuroscience is perhaps not falsifiable and therefore not true science.
Giulio Tononi has offered what seems to be one of the favored theories to explain the failure to find a one-to-one mapping between neural activity and conscious mental experience. Briefly, Tononi, in noting that there is a mapping between specific areas of the brain and a particular mental experience, but not in terms of the extent or level of neural firing, proposes that there is a threshold that must be eclipsed in a particular neural complex in order for a particular conscious mental experience to emerge. He calls this threshold Φ (Phi). As long as this threshold is met, the conscious mental experience arises within that complex.
Clearly this is simply a description of what is going on, based on the failure to find what would be a scientifically based correlation between physical activity in the brain and conscious experience. It seems entirely untenable to me that neural firings could give rise to consciousness at all. But to the extent I could be persuaded that this might be possible, it would be on the basis of a very tight and predictable correlation between neural activity and mental experience.
11.2.1 Qualia - Perception (“The Hard Problem”)
Philosophers of the mind believe consciousness poses the most baffling problem in the science of the mind, typified by this statement by David Chalmers:
“There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain.”
Qualia refers to the subjective aspect of sensory experience. Philosopher David Chalmers coined the phrase “The Hard Problem” of consciousness to refer to qualia: the subjective aspect of sensation, such as the experience of the color blue or the taste of honey. Qualia are not limited to sensory experience; they also extend to other subjective feelings such as grief or joy. How the neurochemistry of the brain could give rise to the subjective experience of color or taste or mood, etc. is a complete mystery.
Although neuroscientists do not claim to know how the brain creates consciousness and qualia, they are sure that it does. As a typical pronouncement on the topic I offer Joe Herbert, Emeritus Professor of Neuroscience at the University of Cambridge, who commented in a forum about Ray Kurzweil’s book, How to Create a Mind,
“I cannot tell you what happens in your brain to make you feel hungry, or angry, or recognize your friends, or plan your future. I cannot tell you how the brain can malfunction, and produce states that we label as ‘depression’ or ‘schizophrenia.’ But we do know that these states depend on different patterns of neural activity… [these] assemblies of neurons do exactly that [produce these above subjective experiences]. This is not homunculus talk, but acceptance of a simple fact: the brain generates the mind, as wings enable flight. We just don’t know how.”
Qualia are experiences that are known only to the person experiencing them, and knowing them is all there is to know. There is no third-person objective method for understanding someone else’s subjective experience. And qualia cannot be conveyed to someone else who has not experienced them.
Philosopher of mind Frank Jackson imagined a thought experiment—Mary’s Room—to explain qualia and why it poses such an intractable problem for science. The problem identified is referred to as the knowledge argument. Here is the description of the thought experiment:
“Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like 'red', 'blue', and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence 'The sky is blue'. (...) What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?”
Jackson believed that Mary did learn something new: she learned what it was like to experience color.
“It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism [materialism] is false.”
There has been a lot of ink spilled on this topic and I will have to declare somewhat arbitrarily that the rest is beyond the scope of this paper.
The hard problem of consciousness pertains primarily to concrete things, i.e. sensory perception, as opposed to abstract things such as intentionality (discussed below) and thought in general. Personally I do not think qualia are the only “hard problem of consciousness” and probably not even the hardest problem related to consciousness. I think abstract thought—intentionality (discussed below)—and the puzzle of where thoughts and other creative images come from are equally difficult to explain and more quantifiable.
Abstract thought seems more perplexing because, at least with sensory input, it can be imagined that there is an analog representation of it. In other words, it is easier to understand how the color red, for example, could be produced by a distinct neural spike train and, with a bit of imagination, how that might affect the visual cortex in some defined and analogous way to produce the color red. How that signal is interpreted by consciousness, such that it is experienced in our subjective consciousness, is the qualia mystery.
However, with abstract thought, there isn’t any sort of analogous neural correlate. Therefore, it seems that a code of some sort would be necessary to explain how abstract thoughts are represented in the physical brain given a materialist perspective.
Here is an example…you see a red square. The light is transduced to produce a neural spike train which is routed to your visual cortex and somehow it is recognized in your consciousness. (No one knows how, that is the qualia problem.) There is, it seems, an analogous relationship between a color or a shape which are concrete things, and the physical brain’s spike trains that occur.
But while you are looking at a red square, your mind (materialists would say, brain) might drift off to symbolism, and you begin thinking about the Red Square in Moscow. You might incorrectly have the notion that the name comes from the association of the color red with Communism. You begin thinking about the concept of Communism which is not a concrete thing at all. Somehow, all that is entailed in your concept of Communism is represented in your brain according to materialists. It could not be represented in a way that was analogous to anything physical such as a red square though. So, it seems, for abstract thoughts there must be a code of some sort as I mentioned above. This idea of representing abstract thoughts in the brain takes us to the next intractable problem related to the mind/brain issue—Intentionality.
Intentionality is often presented as a mental phenomenon that eludes any physicalist account of the brain. The name originates with the scholars of the Middle Ages and does not imply intention as in free will. It is most commonly defined as “the aboutness” of something. It is perhaps best understood as the ability of the mind (or brain) to be about something, stand for something, or represent something. That is the standard definition. But a better definition is that intentionality is simply all or most thought that we experience that is not directly related to perception. If you are not perceiving anything, your mind is not blank; you are thinking about something. That phenomenon of thinking about something is intentionality.
For example, whereas qualia is the perceptional experience of seeing or feeling or tasting a red apple, intentionality is thinking about seeing or tasting or feeling a red apple. Qualia involves the experience of walking in the woods, intentionality is thinking about—imagining—the experience of walking in the woods.
When describing intentionality, philosophers of mind often cite Franz Brentano’s classic formulation:
“Every mental phenomenon includes something as object within itself although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it.”
Certainly intentionality, like qualia, is an unsolved problem in neuroscience and the philosophy of the mind from a materialist perspective. But like every other intractable problem, most neuroscientists claim that they will eventually explain intentionality in purely naturalistic terms. When all else fails, invoke promissory materialism.
If you watch the following exchange between atheist neuroscientist Steven Novella and neurosurgeon Eben Alexander you will see what I mean about the assurance neuroscientists have (proceed to the 1:04:22 to 1:06:33 minute mark of the video).
11.3 Theories of Mind
Generally neuroscientists are monists who embrace materialism. They assert that the mind is reducible to the brain, but that is an a priori assumption. They start with the premise that materialism is true. Therefore, in their view, there just has to be a way to explain all mental phenomena by brain chemistry.
Given the confidence with which materialist neuroscientists deliver their message, one might think that neuroscience has coalesced around one or two theories as to how exactly the brain gives rise to the consciousness and the mind. That is not the case at all. The number of theories in the philosophy of mind that have come and gone is embarrassingly large and diverse. It seems that there are as many theories about how the brain gives rise to the mind as there are theories about why the Cubs haven’t won a pennant in 70 years.
Virtually all theories are monistic-materialist, meaning that they are predicated on the belief that there is one type of substance—matter (and energy)—and that somehow this matter and energy brings forth all phenomena, including consciousness and thought.
There are a few different ways to categorize these theories of mind. You can group them as materialist or idealist. In this case the materialist theories would clearly dwarf the idealist theories. You could also subcategorize the materialist theories of mind as reductionist or non-reductionist. Or you could categorize them into dualistic or non-dualistic. I will focus on dualism for now.
The following subsections will briefly describe some of the theories in the philosophy of mind—all of them monistic materialist—that are now, or have been in the recent past, put forth. It is important to note, at least for me, that since I have very little doubt that the brain cannot give rise to what we experience as mind, I expect any attempt to explain mental phenomena through material causation to be vague and a bit peculiar. In reading through the various philosophies of mind, that certainly seems to be the case.
11.3.1 Dualism
The term dualism as it applies to theories of mind is a bit counterintuitive. There are two broad categories of dualism: 1) Property Dualism and 2) Substance Dualism.
Property dualism is the idea of the emergent material mind. It is a monistic theory, meaning that there is only one substance in the world, matter (and energy), and that somehow a new, non-reductive property, the material mind, emerges from it. This is the most popular view currently.
Substance dualism is the belief that mind is a different “substance” altogether. The word substance can be confusing since substance might normally imply some type of substance which most people would infer to be a material thing. But I think it is best perhaps to think of the word to mean substantial. The mind is a substantially different thing. This substantially different thing is the immaterial mind and soul.
The Urantia Book is clearly espousing a substance dualistic view. It speaks unkindly about property dualism and emergence.
“To say that mind "emerged" from matter explains nothing. If the universe were merely a mechanism and mind were unapart from matter, we would never have two differing interpretations of any observed phenomenon. The concepts of truth, beauty, and goodness are not inherent in either physics or chemistry. A machine cannot know, much less know truth, hunger for righteousness, and cherish goodness.” [195:6.11] (P. 2077)
Because the concept of the emergent mind is the most popular and realistic materialist approach to explaining human consciousness and thought, I am going to spend a good deal of time enumerating its flaws later in this section of the paper. For now I wanted to briefly discuss a few other materialistic theories and terminology of mind.
11.3.2 Epiphenomenalism
Epiphenomenalism is a form of property dualism but with no downward causative powers. It claims that what we experience as our mind is real (unlike, say, eliminative materialism, which I will discuss below) and that this mind does emerge from the physical brain. But epiphenomenalism posits a one-way causal relationship in that causation runs from the physical to the mental only. There is no downward causation from the mental to the physical. There is no free will, no ability to control one’s thoughts or actions.
Thomas Huxley, who held an epiphenomenalist view of mind, compared mental events to a steam whistle that contributes nothing to the work of a locomotive. William James likened epiphenomenalism to a shadow “upon the steps of a traveler.” Epiphenomenalism would be falsified by anything, aside from common sense, that shows that there is free will. I put forth what I feel is the best case one could make for free will in the falsification of materialism I called “Continuity of Thought.”
Another area that seems to undermine epiphenomenalism is neuroplasticity, well described by Jeffrey Schwartz in his book, You Are Not Your Brain: the phenomenon whereby lasting changes to the brain can be achieved through intentional practice. Intentional practice is a function of a mind. Neuroscientists used to think that the brain’s wiring did not change once one reached adulthood. It is now known that that is not the case at all. Some changes are small, affecting a few neurons, but others are large-scale remappings, which are generally limited to children.
The key point is that neuroplasticity involves a mental event affecting a brain event. This is an indication of a downward causal effect going in the direction of the mental to the physical which epiphenomenalism denies.
Neuroplasticity functions in part through what is called synaptic pruning, whereby individual connections in the brain are continuously being created and removed. This is the meaning of the colloquialism “neurons that fire together, wire together; neurons that fire apart, wire apart.”
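The “fire together, wire together” slogan is commonly formalized as a Hebbian update rule. The following is a toy sketch with invented parameters, not a model of actual synaptic chemistry:

```python
def hebbian_update(weight, pre_active, post_active, rate=0.1, decay=0.05):
    """Toy Hebbian rule: strengthen a synapse when the pre- and
    post-synaptic neurons fire together; otherwise weaken it, and
    treat a weight that decays to zero as a pruned connection."""
    if pre_active and post_active:
        return weight + rate         # fire together -> wire together
    return max(0.0, weight - decay)  # fire apart -> wire apart

# A synapse that is repeatedly co-activated grows stronger...
w = 0.5
for _ in range(3):
    w = hebbian_update(w, pre_active=True, post_active=True)
# ...while an idle synapse decays toward zero and is pruned.
pruned = hebbian_update(0.03, pre_active=False, post_active=False)
```

The sketch only captures the qualitative claim: correlated activity strengthens a connection, and disuse weakens it until it is removed.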
In some cases brain activity normally associated with a given function can be relocated following brain injury. Even novel functions can be achieved, such as a blind child’s ability to learn echolocation. This would be an extreme example of neuroplasticity—the mind affecting the brain. The video at the link below is very interesting and very heartwarming.
11.3.3 Eliminative Materialism
If you are looking for a theory in the philosophy of mind to shake your head at, I offer Eliminative Materialism. Eliminative materialism, advanced by Paul and Patricia Churchland, notes that there are no clear neural correlates established for certain types of mental phenomena, such as beliefs and desires, and that they are not likely to be discovered. They conclude from this that these beliefs and desires, among other everyday mental experiences, are illusions. Those who continue embracing these types of mental experiences as real are said to be engaging in “folk psychology.” The root “eliminate” comes from the idea that you eliminate those mental phenomena that you cannot tie to the empirical evidence of neuroscience.
It seems to me that Eliminative Materialism is itself a belief and would therefore have to be categorized as an illusion. I suppose if I sat down with the Churchlands over a glass of wine they might present a loophole. But any loophole they put forth would certainly be an expression of a desire on their part to pursue some end and would that not then itself also constitute an example of a belief? I think it would. After listening to an interview with Patricia Churchland, I decided that I could eliminate this materialist philosophy on the grounds that it does not correlate with reality. Listen for yourself (start at the 4:30 mark).
11.3.4 Type Physicalism or Identity Theory
Type Physicalism, sometimes called Identity Theory, is a reductive materialist theory which claims that mental events and physical events have one to one correlations, and in fact they are the same thing. There are a few variants of identity theory. The attribute which seems to vary is just how tight (how specific) the correlation between physical brain states and mental states are claimed to be. There is pure identity theory, type identity theory and token identity theory with lesser degrees of correlation in the order listed. Frankly I am not sure I understand the distinctions.
Type identity theory groups mental events into categories rather than tightly specified correlations. An example commonly used is a type of mental event, such as the category of “mental pain” which should be correlated with a category of physical event, in this case a specific type of neuron firing event (C-fibers in this case).
Type identity theory was proposed in part to overcome the “multiple realizability” objection to its predecessor—identity theory—which posited a stricter correlation between mental and physical events. The claim of multiple realizability is that mental states can be realized in various types of systems—computers, for example, as well as living organisms.
Identity theory would assert that mental events will always have a specific physical correlate. But given that a variety of organisms can perceive a specific color of red, let’s say, it is hard to imagine that the neural correlates would be the same between a puma and a Swede, for example. Type physicalism is therefore a loosening up of identity theory to accommodate an obvious problem.
If identity theorists are saying a physical state is the mental state, then in order to be a scientific theory they have to somehow provide a causal account of how the one gives rise to the other. They have not done that. If the brain gives rise to the mind, then it would seem that there would have to be a precise correlation between the events in the brain and the mental events. And if there isn’t some sort of tight causal correlation, then it really isn’t science, is it, because it is not deterministic nor falsifiable nor demonstrable by empiricism. In other words, if neuroscientists are allowed to decouple physical causes from mental effects in any way such that repeatable experiments cannot be performed to produce the same cause-and-effect relationship between the physical and the mental, then it is not science.
Functionalism, which I will just briefly mention, was another attempt at overcoming the objection of multiple realizability, in that it accommodates the idea that similar mental phenomena can occur over very different substrates.
The primary aim of functionalism is to arrive at a theory that is compatible with the rise of computational science. Functionalism is not really an alternative to identity theory because, unlike identity theory, it does not posit any sort of causal mapping between physical events in the brain and mental events in the material mind. Functionalism simply redefines mental phenomena in terms of outcome rather than physical cause. In functionalism, mental states are identified by the functions they induce. A machine—a computer—could have the same “mental state” as a person as long as the same functions were realized in each.
So imagine a thought stream: Thought 1 Thought 2 Thought 3…
What functionalism is saying is that the mental state of Thought 2 can be defined in terms of the following causal relations:
Thought 2 has as its cause, Thought 1 and it causes Thought 3.
Functionalism is assumed to be a materialist explanation. But as can be seen, it is left unsaid what physical cause, if any, causes a thought to occur. The assumption is that a mental state causes another mental state, but then you are left with the same problem: what is a mental state in terms of a physical explanation?
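The functionalist claim that a mental state is exhausted by its causal role is often illustrated with a “machine table” in the spirit of Hilary Putnam’s machine-state functionalism. Here is a hypothetical sketch of the thought stream above; the state names are invented for illustration:

```python
# Hypothetical "machine table" (after Putnam): each mental state is
# identified solely by its causal role -- which state leads into it
# and which state it produces -- with nothing said about the physical
# substrate (neurons, silicon, etc.) that realizes it.
machine_table = {
    "thought_1": "thought_2",  # Thought 1 causes Thought 2
    "thought_2": "thought_3",  # Thought 2 causes Thought 3
}

def next_state(state):
    """Advance the thought stream one step; returns None when the
    table specifies no successor state."""
    return machine_table.get(state)

print(next_state("thought_1"))  # -> thought_2
```

Notice that the table says nothing about what physically realizes any state, which is exactly the gap identified in the paragraph above.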
Functionalism is really not a serious contender as a causal explanation of the mind/brain problem and therefore does not bear on the question of materialism. In effect functionalism simply avoids the questions of consciousness, qualia and intentionality.
11.4 Property Dualism – Emergence
As an outsider with frankly limited exposure to the various theories, it seems that there may be a consensus emerging around the emergent mind. Many variants of emergent theories have incorporated the computational theory of mind and it has become the dominant view in cognitive science. In this subsection I will first discuss the emergent mind and the computational theory in brief and then discuss its many flaws.
11.4.1 Emergent Mind - In Relation to Theism / Atheism
Thomas Nagel describes the interplay of reductionism and emergence in his book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False, in this way:
“Many—perhaps most—philosophers of mind are still committed to the reductionist project; they think of the difficulties I have described merely as problems that need to be solved in carrying it out successfully.”
But, Nagel goes on to say,
“Materialism requires reductionism; therefore the failure of reductionism requires an alternative to materialism.”
But materialist scientists are not going to seek an alternative to materialism related to consciousness and mind that involves a nonmaterial solution. Emergence is an anti-reductionist, materialist account of the mind/body problem that posits a material mind emerging from the physical activity of the brain. That anti-reductionism seems to be gaining favor in academic circles is no doubt a recognition of the profound problems with accounting for human consciousness and thought through bottom-up (reductionist) causation alone, and at the same time a recognition that the qualities we all experience in our subjective consciousness cannot be dismissed, as eliminative materialism has attempted to do.
Emergence, in most uses, is synonymous with property dualism. Generally, emergence describes a phenomenon whereby some new property or feature arises from a lower-level physical system. And this new emergent property is something that could not be inferred and was not expected from the lower-level physical system.
Emergent property dualists can either be atheists or theists. Property dualism is the favored approach to the mind/body problem among theistic evolutionists (theists who generally accept the scientific narrative on evolution).
Property dualism is also the only viable position for an atheist who has the good sense to realize that our thoughts and sense of free will (at least in some limited sense) are not illusory although as I mentioned, I think true free will is incompatible with the emergent theory of mind.
Some of the leading property dualists are: Benjamin Libet, David Chalmers, Stuart Kauffman, William Hasker, Thomas Nagel on the atheist or agnostic side and Philip Clayton, Nancey Murphy and Robert Russell on the theistic side and I suppose Francis Collins and Simon Conway Morris might fall into that category as well despite not being philosophers of the mind. (There are a host of others I am leaving out of course.) Generally most theistic evolutionists are more likely to embrace property dualism than substance dualism. It may come as a surprise to some, that many, perhaps most, academic theologians have abandoned the idea of the immaterial mind and soul altogether.
But emergence—property dualism—is really more of a description or an observation than an explanation. There aren’t any analogies or examples provided that are meaningful in any way to help one understand how a mind could emerge from the physical brain. In that sense it is sort of a magic word. Bernardo Kastrup puts this nicely in his book, Why Materialism is Baloney: How True Skeptics Know There Is No Death and Fathom Answers to Life, the Universe, and Everything:
“The problem here is that, unless one is prepared to accept magic, the emergent properties of a complex system must be deducible from the properties of the lower-level components of the system. For instance, we can deduce – and even predict – the shape of sand ripples from the properties of grains of sand and wind. We can put it all in a computer program and watch simulated sand ripples form in the computer screen that look exactly like the real thing. But when it comes to consciousness, nothing allows us to deduce the properties of subjective experience – the redness of red, the bitterness of regret, the warmth of fire – from the mass, momentum, spin, charge, or any other property of subatomic particles bouncing around in the brain. This is the hard problem of consciousness.”
To expand on what Kastrup is saying: all these marvelous qualities of mind are not detectable in the DNA—at all; nor are they inferable from the inner workings of the brain. DNA provides instructions on how to make a neuron. It does not provide an instruction book on how the brain connects and establishes the patterns that materialists purport give rise to consciousness or mind. So where does the intelligence, revealed by our rich subjective lives, come from? It is a mystery, well beyond consciousness itself. Assigning a word—“emergence”—to the mystery is all materialists have been able to do. But it is an appeal to magic.
In debate, the exchange between a property dualist and a substance dualist often takes the form of “what is the alternative?”, a question asked by the property dualist of the substance dualist. As a substance dualist, I always point out to an adversary in debate that the burden of explanation between my category of claim and their category of claim is not symmetrical. I don’t have to explain how a divine source could endow us with consciousness, free will, and an immaterial mind, nor do I have to explain how it interacts with the physical brain. But a property dualist, subscribing to emergent materialism, does have to explain how consciousness and mind could arise from the physical brain.
If a materialist were to claim that they needn’t account for how the brain becomes the mind, then this would mean that at least those aspects of nature—consciousness and mind—are opaque to human reason and beyond science, and further that neuroscience is not science at all in the most fundamental sense. Very few atheists want to make that claim, but David Chalmers appears to be one who does.
So the discussion typically ends in an impasse with the property dualists ending the exchange with an invocation of promissory materialism. It is as if a materialist were to say, “Well we know consciousness and the mind arises from the brain, but we have no idea how that happens and we may never know.”
Emergence is almost always assumed to reside within a materialist framework. There are exceptions, as in Steve McIntosh’s excellent book, Evolution’s Purpose: An Integral Interpretation of the Scientific Story of Our Origins. McIntosh uses the term emergence in a fundamentally different way. As a self-described panentheist, he would argue, presumably, that an immaterial mind has emerged. This would be a form of substance dualism if I understand his point correctly.
11.4.2 Computational Theory
The emergent material mind is often thought of as a computation device. No computer has ever been produced that has consciousness or free thought. Computers can do marvelous things but they are deterministic and can create minimal novelty only by incorporating a bit of randomness here and there.
Under the hood, computers are pretty much the same as they were 50 years ago or so from a hardware standpoint. They still use the same basic gate level functions. But the large library of software functions that have accumulated over time and the speed and capacity with which computers now operate, gives the impression that they possess more capabilities than they really do. They can produce nothing like the creativity exhibited by human intellect.
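To illustrate the point that digital hardware still rests on the same primitive gate-level functions, here is a sketch showing how XOR, like any other Boolean operation, can be composed entirely from NAND gates:

```python
def nand(a, b):
    """NAND is functionally complete: every other Boolean gate can be
    composed from it, which is why it serves as a hardware primitive."""
    return not (a and b)

def xor(a, b):
    """XOR built from four NAND gates -- the same gate-level
    construction that has been used in binary adders for decades."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

print(xor(True, False))  # -> True
print(xor(True, True))   # -> False
```

Everything a modern computer does is layered, deterministically, on compositions of such gates; the apparent sophistication comes from scale and accumulated software, not from any new kind of primitive.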
There has been a lot of press about the idea that computers may soon rival human intellect and perhaps surpass us in many respects. This is the “Singularity,” an idea advanced by Ray Kurzweil. Some strong AI enthusiasts do not necessarily believe that computers can become conscious in the sense that we are, but Kurzweil does. However, Kurzweil does not even attempt a theory of consciousness in his book, How to Create a Mind: The Secret of Human Thought Revealed. In fact he focuses primarily on pattern recognition, which pertains to perception and not intentionality.
And it is intentionality that encompasses the creative side of human intellect which is much more of an intractable problem for any brain based description of human thought.
The main point of concern about the singularity is that computer learning could advance to a breakthrough point where humans could not control them. Others have echoed similar concerns such as Stephen Hawking, Elon Musk and Bill Gates. This leads me to suggest that there is a fine line between genius and idiocy.
It seems much of the concern about the singularity relates to the recent success of IBM’s Watson computer which handily defeated Ken Jennings in Jeopardy. Not all, or even perhaps most, strong AI proponents accept that this event is a quantum leap toward the singularity, but some do.
On this topic, I agree with Steven Pinker and Noam Chomsky in claiming that such a singularity is not near now, nor perhaps will it ever be near. Chomsky in particular refers to this idea of a singularity as “science fiction.” (View from the 8:00 mark to the 21:30 mark):
There are several things to keep in mind here with Watson. The Watson computer runs on some very complex hardware that was created by a team of human engineers over many years, from the CPU to the field-programmable gate arrays to the precise circuit traces on the multi-layer circuit board. The operating software was created by many software engineers. The specific Jeopardy program was again created by a large team of engineers who were leveraging the accomplishments of past engineers, etc.
Watson can be used for a variety of very useful applications. For example, artificial intelligence systems such as Watson can quicken diagnosis and eliminate human error in medical applications. But the reality is that the "smarts" are of course entirely dependent at every step on human engineers.
To prepare Watson for any artificial intelligence application, a "corpus" of data is assembled, and this step is entirely dependent on humans. The data is entered into a vast database of files. The data is then "curated," that is, culled through to eliminate irrelevant, out-of-date and poorly regarded information. Curating also entails establishing a weighting system that Watson can use to diagnose more precisely. Curating is entirely a human task.
Once curated, the data is loaded into Watson in a process called "ingestion," during which Watson indexes the data. Since computers are good at matching things, ingestion is exclusively a Watson endeavor (although the software that allows Watson to ingest the data was previously created by humans).
The next step is machine learning which also requires human intervention. The learning begins with uploading a series of questions and answers to help seed the information and establish "ground truth". This process, in effect, enables an algorithmic transformation of data to information; “syntax to semantics.”
The entire success of Watson is predicated on how vast the raw information uploaded to it by a team of humans is, how effectively the data is curated, and how well the system is trained. Then and only then can Watson offer any help at all.
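The corpus, curation, ingestion and training pipeline described above can be sketched in miniature. The function names, the weighting scheme, and the toy corpus below are my own illustrative inventions, not IBM's actual Watson tooling, which is vastly more elaborate:

```python
# Illustrative sketch of the corpus -> curate -> ingest -> answer pipeline.
# All names and the weighting scheme are hypothetical, invented for this example.

def curate(corpus, min_weight=0.5):
    """Human-driven step: discard poorly regarded documents and keep
    a reliability weight on the rest."""
    return [(doc, weight) for doc, weight in corpus if weight >= min_weight]

def ingest(curated):
    """Machine step: build an inverted index mapping words to documents."""
    index = {}
    for doc, weight in curated:
        for word in doc.lower().split():
            index.setdefault(word, []).append((doc, weight))
    return index

def answer(index, question):
    """Return the highest-weighted document sharing a word with the question."""
    candidates = []
    for word in question.lower().split():
        candidates.extend(index.get(word, []))
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[1])[0]

corpus = [
    ("aspirin treats headaches", 0.9),
    ("leeches treat headaches", 0.2),   # poorly regarded; removed by curation
]
index = ingest(curate(corpus))
print(answer(index, "what treats headaches"))  # -> aspirin treats headaches
```

Note that every "smart" step here, the corpus, the weights, the curation threshold, is supplied by the human author of the program, which is the point being made in the text.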
There are a lot of things that one can worry about these days...Nuclear war seems to be making a comeback in my mind, antibiotic resistance, the national debt, the solvency of social security, the economic future, identity theft, the growth of secular humanism, the depopulation of the developed world, etc. Somewhere down along with my concern that bell bottom pants may come back in style is my fear that computers are going to rise up and rule the world.
If it started to look like Watson and his cohorts were gaining the upper hand, all we would need to do is place a call to Vladimir Putin, who, Ted Koppel claims in his book Lights Out, has the capability to take down our power grid. This would lead to the deaths of tens of millions of people, but thankfully, a real crisis would have been averted—we humans will have put Watson in his place, so to speak.
11.4.2 Turing Test and Chinese Room
Much of the discussion about Watson’s victory in Jeopardy! pertains to what is called the “Turing Test.” What is the Turing Test? Mathematician Alan Turing, who broke the Nazi Enigma code during WWII, proposed that if a human observer could not distinguish a human from a machine during a conversation with greater than 70% accuracy, then the machine would pass the “Turing Test.” Although Turing did not necessarily claim that such a machine would be conscious or possess the thinking capabilities of humans, many advocates of what is called the “strong Artificial Intelligence” (strong AI) position, such as Kurzweil, do.
The Turing test is an extremely low bar to pass with respect to achieving something like human intelligence. Even if the Turing Test were to be passed, this would mean almost nothing. Kurzweil disagrees,
“The progress that has been achieved in systems like Watson should give anyone substantial confidence that the advent of Turing-level AI [artificial intelligence] is close at hand. If one were to create a version of Watson that was optimized for the Turing test, it would probably come pretty close…If a machine can pass the Turing test we can declare it to be conscious—that is, if it talks like a conscious being it must be a conscious being.”
As you can infer from the quote above, at least some strong AI proponents believe that a machine that could pass the Turing Test might also someday acquire consciousness and essentially equivalent human faculties. For the strong AI proponent, it is only a matter of time. If this seems far-fetched, it is only necessary to recognize that for a materialist it is perfectly reasonable: a computer is software running on hardware, and the brain, a materialist would claim, is simply software running on “wetware.”
Philosopher John Searle, a property dualist, countered Turing’s claim with a thought experiment he called the “Chinese Room.” Searle asks you to imagine that you are locked in a room and that you know no Chinese. You are given a set of rules, written in English, for manipulating Chinese characters. These rules enable you to respond in written Chinese to a literate native Chinese speaker outside the room. If the Chinese speaker becomes convinced that the person in the room (you) actually understands Chinese, then the room passes the Turing Test even though you understand nothing, which undermines the test itself.
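The Chinese Room can be made concrete as a program. The "rule book" below is a simple lookup table whose rules I have invented for illustration; the operator of the table, like the person in Searle's room, produces fluent replies without understanding a single symbol:

```python
# A toy "Chinese Room": a rule book (here a lookup table) maps incoming
# symbol strings to outgoing symbol strings. Whoever executes the rules
# need not understand either side. The rules are invented for illustration.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会.",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(message):
    """Follow the rule book mechanically; no understanding is involved."""
    return RULE_BOOK.get(message, "请再说一遍.")  # default: "Please say that again."

print(chinese_room("你好吗?"))  # -> 我很好, 谢谢.
```

The replies are convincing exactly insofar as the rule book's authors made them convincing, which mirrors the point made below about Watson and its programmers.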
By extension, Searle argued that if a computer program allows a computer to carry on an intelligent conversation in a written language, the computer executing the program does not understand the conversation either. This means the Turing Test is not a viable way of determining whether a computer possesses human mental faculties. Perhaps Watson did pass the Turing Test in a limited sense. But does that really mean anything?
No, it doesn’t. To observe that a computer can ostensibly carry on a conversation with a person for a while, and to jump from that to the conclusion that computers possess consciousness and “understand” higher-level thought, is utter foolishness. Daniel Dennett’s claim that AI need only address syntax, and that semantics comes for free, is complete nonsense, and the Chinese Room thought experiment, along with a whole lot of common sense, demonstrates that. If semantics were equivalent to syntax then learning would simply be rote.
When a conversation between a computer and a human takes place, the reality is that the communication is occurring between the person and, in absentia, the programmer(s) who programmed the computer.
And of course it should go without saying that Watson understands precisely nothing about what it "says." All Watson is doing is manipulating symbols based on what it has been preprogrammed to do and parroting them out through a voice synthesizer. When Ken Jennings lost in Jeopardy!, I am sure he did not congratulate Watson; he congratulated the programming team.
11.4.3 Problems with the Emergent Theory of Mind
There are profound problems with the emergent theory of mind, that is, with property dualism. We already know that no materialist account of the mind has anything substantive to say as to how the physical brain can give rise to consciousness, self-awareness, qualia or intentionality. No one really denies this, but materialist scientists point out that there are theories and that people are busy working on it. Of course, that could be said of anything; it does not mean a solution is near at hand or even possible.
Beyond consciousness, qualia and intentionality, as I will show in the next section of this paper, a materialist account cannot explain complex specified information and particularly, creative, complex specified information. My claim would be that it cannot—even in principle—explain these qualities of mind and I detail why in my three Falsifications of Materialism offered in the next section.
But setting these intractable problems aside, there are two additional categories of profound problems with the theory of the emergent material mind that I will deal with in this section. They are: 1) Explaining how these marvelous qualities of mind could have evolved, and 2) Explaining how these functions of mind can operate under a materialist-mechanist framework to produce the marvelous qualities of mind we experience.
11.4.3.1 Problems Related to Evolution of Human Consciousness and Mind
Let me first address the evolutionary issue broadly for all attributes of mind. Notre Dame analytic philosopher Alvin Plantinga’s evolutionary argument against naturalism is commonly cited. Plantinga argued that evolution and naturalism are irreconcilable. He reasoned that if human mental faculties had in fact evolved, they would have been selected exclusively for their survival value at the time they were acquired, not for producing beliefs that are true, especially about matters far removed from survival. There is therefore every reason to doubt the resulting “truths” those faculties give rise to, including naturalism and evolution themselves.
I actually think a stronger argument in this vein is to simply say that there would be no survival value which would accrue to early humans for higher level thought. And since higher level thought does in fact exist, naturalistic evolution must be false, and by extension, materialism itself.
Beyond this, there are other problems with the viability of the notion that evolutionary theory can account for human consciousness and mind. The fundamental evolutionary problem that a materialist is confronted with regarding the evolution of human consciousness and mind is how evolution could produce these qualities given that these qualities are not even directly expressed in the genes.
Evolution, as it is currently understood, can only produce change if the information in the genome directly produces the phenotype and if the associated genes are affected. In the case of the brain, while it is true that the genes for the neurons are transcribed into RNA and then translated into proteins, the end result is merely the neurons themselves. How those neurons connect, and therefore give rise to the specific neural firing patterns that are purported to account for consciousness and thought, is not expressed in the genes.
There is not anywhere near enough information in the genome to specify the connections within the brain. There are about 3 billion DNA base pairs in human DNA and most of them are not involved in the development of the structures and functions of the brain. But there are an estimated 100 trillion synaptic connections in the human brain and new connections are being made all the time.
To explain the evolution of human thought and consciousness, a materialist would have to explain how a relatively small quantity of information in the genome—those genes that code for proteins and enzymes to produce the neurons—could give rise to a vast and nearly unlimited quantity of information exhibited by human intelligence essentially equaling the sum total of human knowledge.
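The disparity described in the preceding two paragraphs can be put in back-of-envelope numbers, using the figures cited above (3 billion base pairs, 100 trillion synapses). The bits-per-synapse assumption below is deliberately minimal, my own simplification for the sake of the comparison:

```python
# Back-of-envelope comparison of the figures cited above: the information
# capacity of the genome vs. the count of synaptic connections in the brain.

base_pairs = 3e9           # human genome, base pairs (figure cited above)
bits_per_base = 2          # 4 possible nucleotides -> 2 bits each
genome_bits = base_pairs * bits_per_base   # 6e9 bits, roughly 750 megabytes

synapses = 100e12          # estimated synaptic connections (figure cited above)
# Even granting a bare 1 bit per synapse (connected or not, ignoring
# weights and timing entirely), the synapse count dwarfs the genome:
ratio = synapses / genome_bits
print(f"genome: {genome_bits:.0e} bits; synapses: {synapses:.0e}; ratio ~{ratio:.0f}x")
```

On these assumptions the synaptic wiring outstrips the genome's total capacity by a factor of more than ten thousand, and that is before noting that most of the genome is not devoted to the brain at all.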
Information—complexity does not come for free. You cannot put an ounce of information into a black box and expect a ton of new information to come out. It is only in the mind of a materialist where a “magic black box” exists, where vast amounts of rich complex specified information can arise for free and this is revealed in every wave of their hand or shrug of their shoulder when confronted with these types of profound problems.
Furthermore, even if the genetic basis for specifying the brain structures and functions were discovered (and I don’t see how that is even possible), there is not anywhere near enough time to have evolved these functions given the sparse population sizes and the long breeding cycles of early hominids. We have seen that it is highly improbable that naturalistic processes could account for even a single new gene in perhaps as long as the history of the earth, let alone for this exquisite capability we all have in the brief several million years over which it is purported to have evolved.
For me, this problem related to the evolution of human consciousness and thought is intractable, and a substance dualist need go no further than this to dismiss materialism altogether.
11.4.3.2 “Administrative” and “Operational” Problems Associated with a Mechanistic Account of Human Consciousness and Mind
I want to move on now and discuss what I will call the “administrative and operational” related problems with a materialist emergent mind theory. Can the mind be understood as a computational entity that has emerged from the physical brain? Despite its popularity the idea is untenable.
To see this, it is helpful to think of how a computer works. We know that computers have “applications” that do all these marvelous things we are used to with word processing and video editing, spreadsheets etc. If we are to entertain the computational theory of mind—an emergent mind—these applications are analogous to the functions related to thought such as knowledge of how to play chess.
What must always be kept in mind is that all functions of the emergent mind are thought to be sequences of neural firing patterns. All thought, analysis, knowledge, memory recall, memory storage, learning, perception and consciousness itself are envisioned as sequences of neural firing patterns.
Consciousness and its Resources
In a computational theory of mind there would have to be something analogous to an operating system. An operating system in a computer mediates between the software applications and the underlying hardware, mother board and input and output operations. If we keep with the analogy between a computer and an emergent mind, consciousness would best be thought of as the operating system but it is of course much more than that.
Consciousness would also have to use the resources of the brain—the neural networks. But there is no high level plan here with consciousness in the brain. In the emergent theory of mind consciousness arose from a vast array of neurons which were doing what they were doing strictly in response to prior local cause—no higher level purpose whatsoever. Reductionism does not allow that of course. According to the emergent theory, a degree of complexity was eclipsed and mind emerged. If that strikes you as magic I would like to congratulate you for your insight.
The first fundamental question is: If consciousness and all its administrative capabilities emerged from the physical brain, how does this splendid function of consciousness avoid perturbing the very neural resources that have given rise to it? There is no reason to believe that consciousness, having emerged from the physical brain, could have any “knowledge” of what resources were “off limits.” An operating system in a computer would need to know how to store the various memories and thinking functions. But how would consciousness —the supposed “operating system” in the emergent mind—know which resources were available and which were not? Believing that consciousness would have this knowledge is ascribing omniscience to an array of dispersed neurons each of which would be doing what they do based solely on local antecedent causation.
Thinking and Learning
The second fundamental problem with the theory of the emergent material mind pertains to our ability to think and learn. We are not born with knowledge. We learn. In a matter of a few years, humans somehow engender vast amounts of complexity from within. So how would learning happen given a materialist perspective? How do the specific sequences of neural firing patterns, that represent the ability to play chess for example, get put in place?
We learn to play chess. The function to play chess in our minds would be analogous to a computer application that plays chess. A computer chess game is created by a human; a human learns to play chess from within. These functions of thought must be, according to materialism, sequences of specific neural firing patterns. These neural firing patterns would of course have to be quite complex, roughly comparable in complexity to a computer chess game: many lines of code and a database, all put in place by a team of very smart humans.
The only mechanism that materialists can offer as a way for nature to create complexity such as these thinking programs in the mind is a two-step process such as Neo-Darwinian evolution. But learning has nothing to do with evolution because these programs are clearly not stored in our genes. The DNA for the brain makes neurons. It does not specify neural networks, let alone neural firing patterns. Learning is something that is gained after the genetics have been established.
This learning process, it seems, must involve developing the requisite neural firing patterns that represent knowledge. The process of gaining knowledge of the game of chess would start with an existing sequence of neural firing patterns representing a particular level of knowledge of the game. As we learn more about chess, this sequence is replaced by a new sequence of neural firing patterns that represents our new, elevated knowledge of chess: a synthesis of the old knowledge and the new information. Some other neural function has to have been involved to cause this transformation of one sequence of neural firing patterns into another. This cause, this learning function, must itself be a sequence of neural firing patterns. And this sequence of neural firing patterns for learning must be specific, such that it can cause the specific set of changes reflecting our new knowledge of chess. This is depicted in the diagram below.
But there is a problem. The learning function is not, and could not be, specific or predesigned in any way to perform the transformation from one level of knowledge of the game of chess to another higher level of knowledge of the same game of chess. How could it? The origin of that learning process, which is just a complex sequence of neural firing patterns, and how it happens to be able to create this knowledge transformation is a mystery.
Moreover, this function of learning, this sequence of neural firing patterns whose intervention produced the sequence of neural firing patterns that represents our new, elevated knowledge of chess, must itself have been caused. And this cause also must have been a synthesis of two specific sequences of neural firing patterns. The chain of causation goes on with no resolution in sight, because there is no baseline source of intelligence, no true agency causation, represented in the DNA or in the neural network structures of the brain to halt the chain of dependency. This is called an infinite regress. Infinite regress problems like this often surface with the emergent theory of mind because there is no true agent, i.e. intelligent, causation to halt the regress.
What is an infinite regress? An infinite regress is a case in which there is no intelligent cause to halt a series of dependencies. Imagine a bunch of four-year-old boys in a neighborhood. None of them can tie their own play sneakers when the sneakers are on their own feet, because their moms always do this. But they can tie sneakers that are off their feet, or on someone else’s feet, because that is the way they practiced on their younger sisters. So Bobby calls Billy to come out and play. Billy says he can’t, because his mommy says he can’t play without his sneakers on. So Billy calls Danny to come over and tie his sneakers. Danny does not have his sneakers on either and is bound by the same parental rule. He calls Timmy. Timmy also does not have his sneakers on. Timmy calls Billy, only to find out that Billy has the same problem. So none of these little guys can leave the house to go out and play because of a regress, a circular regress in this case.
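The shoe-tying story above can be translated directly into code. Each boy's ability to go out depends on another boy who has the same unmet dependency, so resolving any boy's shoes eventually revisits a boy already in the chain and never bottoms out (the names and the cycle-detection helper are my own framing of the story):

```python
# A toy translation of the circular-regress story above: each boy needs
# another boy, who needs another, with no base case anywhere in the chain.

NEEDS_HELP_FROM = {"Billy": "Danny", "Danny": "Timmy", "Timmy": "Billy"}

def can_go_out(boy, visited=None):
    """Try to resolve a boy's shoe-tying dependency chain."""
    visited = visited or set()
    if boy in visited:
        return False          # we are back where we started: no base case exists
    visited.add(boy)
    return can_go_out(NEEDS_HELP_FROM[boy], visited)

print(can_go_out("Billy"))    # -> False: the regress never resolves
```

Without the `visited` check the function would simply recurse forever, which is the computational face of a regress with no intelligent cause to halt it.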
Neuroscientists have done an impressive job of mapping the various areas of the brain and which areas correspond with which particular functions. They understand the architecture of the brain from a correlative standpoint. But this says nothing about how these thinking functions are created, i.e. how learning occurs in terms of neural activity, or whether it is even possible. The question as to where the intelligence exhibited by the mind’s ability to learn comes from, under the assumption that the mind is a material emergent mind, remains unanswered. I suppose there are theories, but none that I have run across gives me any sense whatsoever that a purely material account of learning is tenable.
Perception and Recognition
The third fundamental operational problem with the emergent theory of mind relates to perception and recognition. I only want to talk briefly about perception, first in relation to focus and then in relation to recognition.
Imagine you are in a restaurant having a conversation with someone. The restaurant is busy and there are conversations at the adjacent tables. Some are very interesting; more interesting than your guest perhaps. So your auditory attention vacillates back and forth between your guest and those at the next table. But your ears are taking in the same sounds either way and a spike train transmits them to your auditory cortex. But how can an emergent mind account for focus? An emergent mind would be deterministic but when you have multiple conversations or multiple visual items, competing for attention, what accounts for the fact that you can shift from one conversation to the other if not an outside attribute, i.e. free will?
The best audio editing programs would have a very hard time separating one source’s audio signal from another’s once they are mixed, if it can be done at all. The complexity of such programs, in terms of lines of code, is probably on the order of tens of thousands of lines of computer code. But somehow, we humans can arbitrarily tune in and tune out very similar sounds. How did we acquire that function?
Now I want to talk about perception in the context of recognition. Please look at the image below.
Recognize it? How long did it take to recognize the image? Probably less than a second, right? But how did your brain recognize the image so quickly given that your mind is purported to be algorithmic—like a computer? Computers recognize things in one of two ways. Typically, computers recognize things based on a precise match between characters or distinct pixels of an image. But in order to recognize something, this would mean that there had to be a precise match of something, or a very close match, to something stored in memory.
However, I am quite certain that, pixel for pixel, almost none of the pixels of the image above, received through the eyes and transmitted to the visual cortex, would match anything in anyone’s memory. The colors are different, and the arrangement of the pixels in the image is, in all likelihood, like no other image of the Statue of Liberty you might have stored in your brain, or anyone else’s for that matter. And in any case, the speed with which you recognized the image means that churning away trying to match it against the many millions of images we must have stored is entirely implausible.
Precise matches are not very useful in the real world for AI recognition. Facial recognition type programs on the other hand are useful. Facial recognition programs draw lines between recognizable points on the face such as the eyes, the center of the nose, bottom of the ears or bottom of the lip. The facial recognition program then measures the angles of the triangles created by these lines and compares them with the angles calculated for images in its database.
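The landmark-angle scheme just described can be sketched in a few lines. The point of comparing angles rather than pixels is that angles are unchanged when the whole face is scaled, as the example shows. The landmark coordinates below are made up for illustration; real systems use many more landmarks and more robust measures:

```python
import math

# Sketch of the landmark-angle idea described above: form a triangle from
# facial landmarks (here the two eyes and the nose tip) and compare its
# interior angles, which are invariant under scaling. Coordinates are invented.

def angles(a, b, c):
    """Interior angles (degrees) of triangle abc, via the law of cosines."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    ab, bc, ca = dist(a, b), dist(b, c), dist(c, a)
    A = math.degrees(math.acos((ab**2 + ca**2 - bc**2) / (2 * ab * ca)))
    B = math.degrees(math.acos((ab**2 + bc**2 - ca**2) / (2 * ab * bc)))
    return A, B, 180.0 - A - B

face = [(0, 0), (4, 0), (2, 3)]              # left eye, right eye, nose tip
same_face_zoomed = [(0, 0), (8, 0), (4, 6)]  # same face at twice the scale
print(angles(*face))
print(angles(*same_face_zoomed))  # same angles despite the different scale
```

Comparing the two printed triples shows identical angles, which is why such a program need not match pixels or colors at all.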
Facial recognition done in this manner avoids the scaling problem involved in a pixel by pixel match and the need to match each pixel in terms of color which is nearly impossible given the different scales and coloring and shading of any image perceived by a human or a machine and an image in memory or a database. But humans can recognize faces at an early age. Babies can recognize their mothers very early on and they can recognize them in different lighting and from various angles.
How do they do that? There is no facial recognition program in the DNA. So are we to believe that babies create a complex program to recognize faces, a program whose complexity even a large team of our best computer programmers would have difficulty matching?
How is it that a team of talented programmers would require months or years to develop a program for facial recognition, yet a human baby can do it in a few weeks? How is the human program for facial recognition created? I ask the same question as above: where does the intelligence, the complexity, come from that enables humans to develop a program for recognition? Complexity does not come for free under a materialist perspective. The complexity cannot come from within; it must come from without. It must come from an immaterial mind.
The same thing could be said for intelligent character recognition programs. Character recognition programs are not good at recognizing human scribbles beyond a certain point such as the following note from Lincoln.
But humans can do this.
The authority on how the brain might recognize a pattern is Ray Kurzweil. According to Kurzweil, “there are about 300 million neural pattern recognizers in the neocortex.” These pattern recognition modules are comprised of specific arrangements of dendrites and axons (the portions of the neurons that connect neurons together). Kurzweil claims that these pattern recognizers decompose the shapes of text for example. And there are specific pattern recognizers for specific types of shapes. The outputs of these pattern recognition modules are somehow combined with the outputs of other pattern recognition modules to recognize some pattern.
How all these modules got put in place, how they recall shapes in a database to match, how they are coordinated with one another, and how they interact with one another is unclear. But this mechanism of pattern matching would only work if what is being perceived has something to match with in its database, would it not? Yet we view things every day that we have never seen before and can recognize and categorize them very quickly.
How can this recognition occur so quickly? Searching through an entire database of distinct shapes comprising the image of the Statue of Liberty above, for example, would be what computer scientists call a very processor-intensive function. But you were able to recognize the image in less than a second. The brain does not work that fast: neurons fire about 200 times a second, whereas computer clocks work on the scale of gigahertz (billions of cycles, analogous to neuron firings, per second). Anyone who has ever waited for a search for a text string in a large Microsoft Word document could, with a bit of thought, realize that the brain could not complete a serial search and match an image as quickly as we do.
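The speed claim above is simple arithmetic. Using the 200-firings-per-second figure cited in the text, and assuming (my own round number) a store of ten million images searched serially at an absurdly generous one firing per comparison:

```python
# Arithmetic behind the speed argument above: a serial search at neural
# firing rates cannot scan a large image store in under a second.

firing_rate_hz = 200         # neuron firings per second (figure cited above)
images_in_memory = 10e6      # assumed store of ten million images (my round number)
steps_per_comparison = 1     # absurdly generous: one firing per image compared

seconds_needed = images_in_memory * steps_per_comparison / firing_rate_hz
print(f"{seconds_needed:.0f} seconds")   # 50000 seconds, roughly 14 hours
```

Even on these charitable assumptions a serial scan would take hours, not the sub-second time in which we actually recognize images.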
I could have degraded that image of the Statue of Liberty significantly more than I did and you still would have recognized it. If Kurzweil’s proposal is the best humanity has to offer by way of an explanation of perception and recognition, we haven’t gotten very far at all.
Moreover, recognizing a static image such as the Statue of Liberty is one thing. Recognizing a place you have been as a child and now returning presently while driving in a car is something far more complex. The number of lines of code necessary—the complexity necessary—for dynamic recognition where the images are moving and much of the setting has changed, would stagger the imagination of a materialist (if only a materialist would admit such a thing as imagination existed).
Memory Storage and Recall
The fourth fundamental operational problem with the emergent materialist theory of mind pertains to memory. If materialism is true and a mind emerged from the neural activity of the brain, then there must be a memory storage and recall system in place to store thoughts, images, knowledge, and life experiences. And like every other function in the brain under a materialist framework, these memory management functions are nothing but complex sequences of specific neural firing patterns.
In order for memory recall to work, there would have to be something analogous to a computer’s lookup table, with pointers to a database, metadata describing the various stored information, and hyperlinks between memories. And this would have to be quite complex, of course. But the problems don’t stop with the complexity of the memory management system.
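The kind of bookkeeping just described, a lookup table with locations, metadata and links between memories, looks something like the following. The structure and all its keys are my own illustration of the analogy, not a model anyone has proposed for the brain:

```python
# A minimal sketch of the lookup-table analogy described above: an index
# mapping memory keys to storage locations, plus metadata and links.
# All names and fields are invented for illustration.

memory_index = {
    "first_chess_game": {
        "location": "region_A7",
        "metadata": {"kind": "episodic", "year": 1995},
        "links": ["chess_rules"],         # hyperlink to a related memory
    },
    "chess_rules": {
        "location": "region_C2",
        "metadata": {"kind": "semantic"},
        "links": [],
    },
}

def recall(key):
    """Look up a memory's location and the locations of linked memories."""
    entry = memory_index[key]
    related = [memory_index[k]["location"] for k in entry["links"]]
    return entry["location"], related

print(recall("first_chess_game"))  # -> ('region_A7', ['region_C2'])
```

The point of the sketch is that even this trivial index is itself a designed piece of machinery, which sets up the regress discussed below: the index that manages storage must itself be stored and managed.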
A backup system would be needed for various memories. Experiments on animals show that memory is very resilient. By correlating animal behavior with brain architecture, researchers have tried to isolate where certain memories are stored. Surgical procedures thought to remove memory storage locations do not, in most cases, remove the memory. This led one expert to suggest that “memories seem to be stored everywhere and yet nowhere in particular.” So the memory management system would have to catalog locations and backup locations, perhaps many of them.
The memory management system must itself be a complex and dynamic mental function, given that the information whose storage and retrieval it manages is ever growing, ever changing, and of diverse types. But the memory management system is itself a mental function, a complex, specific sequence of neural firing patterns, that would in turn need to be stored in the brain.
How is the storage and recall of the memory management system itself recalled and stored? An auxiliary memory management process perhaps? But an auxiliary memory management process would also be nothing but a complex sequence of neural firing patterns. This auxiliary memory management program would also have to be stored and recalled. So it appears as though we may encounter another infinite regress here.
Infrastructure Maintenance and Preparation
The fifth fundamental operational problem with the emergent mind theory involves the need to prepare a neurological infrastructure to run these putative thinking programs. Assuming the emergent mind theory is correct and the mind is similar to a computer, our minds must then have something like an operating system along with a vast and growing variety of mental functions such as playing chess and facial recognition, etc. Aside from the question as to how these mental functions could have arisen, we also have to explain how they operate over the brain’s neural infrastructures such that a reliable outcome is achieved.
Computers are designed in such a way that the infrastructure, the data and address buses, is tightly controlled. The voltage levels, resistance, capacitance and inductance on a computer data bus, for example, are kept within a rather narrow range so that heat, power variance and electromagnetic interference have negligible effects. A computer scientist could know precisely what these electrical levels are at any point on an address or data bus at any given point in a running program, were he or she provided with all the necessary information and willing to take the time to do so. A computer’s applications run over an infrastructure that is pre-prepared for them at all times to ensure a known-good state.
But the brain has no such provisions; nor would a mind that purportedly emerged from the brain have any knowledge of those necessary provisions if they existed. How could it? It seems there would have to be a function to prepare a neural infrastructure to ensure the proper outcome of some other function. But if there are functions for normalizing a neural infrastructure, to ensure a known-good state with the proper initial conditions in preparation for a mental function to operate, then these preparatory functions would themselves be sequences of neural firing patterns and would need a neural infrastructure prepared for them in order to produce a reliable outcome, and so on, in another infinite regress of functions.
The sixth fundamental operational problem with the emergent computational theory of mind involves the necessary interaction between putative functional thinking programs.
This paragraph and the next are a bit of a review of the segment above on learning, but bear with me. Let’s say you are learning to play chess. At any time, you have an existing level of knowledge of the game. According to the computational emergent mind theory, this knowledge, this function, would be a specific sequence of neural firing patterns stored somewhere in the brain.
As you learn more about the game of chess by observing others or pulling insights from memory that are analogous to aspects of chess, this new information has to be integrated with your current knowledge. That is what the learning process is. In order to integrate these new insights or memories—which are themselves sequences of neural firing patterns—into your current knowledge of chess, they would have to somehow affect the current set of neural firing patterns that account for that knowledge. This means that the neural firing patterns representing your existing knowledge of chess would have to converge, at some point, over the same set of neural networks with the sequence of neural firing patterns that represent your learning function. There would have to be a synthesis of this convergence that produced a new neural firing pattern accounting for your new level of knowledge of chess. Refer to the diagram below.
But how could the outcome of two converging patterns of neural firings that had never associated before result in anything other than chaos? It would be like saying that you could take the words that describe each of these—your current knowledge of chess and your new insights—and randomly place them on a page in order to get a description of your now more complete knowledge of chess in written form. These sets of neural firing patterns are new to the brain so an appeal to some pre-existing integration program (which itself would have to be explained) doesn’t work. It takes intelligence to ensure an outcome. This is all setting aside the fact that since we are conscious during the learning process, yet another complex set of neural firing patterns—those representing consciousness—would also have to somehow converge on this neural network without affecting the outcome. How would that magic happen?
Computers are designed in such a way that the interface between the operating system and the applications is tightly controlled. Computer programs interface with one another in a very precise manner through a set of registers under program control. The discrete functions ensure a consistent result. Nothing is left to chance. The brain could have nothing like this.
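The kind of tightly specified program interface described above can be sketched in a few lines of code. This is a minimal illustration, not any real machine’s calling convention; the register names and routines are invented for the example.

```python
# A toy "register file": two routines exchange data only through a fixed
# set of named registers under an explicit calling convention.
REGISTERS = {"A": 0, "B": 0, "RESULT": 0}

def add_service():
    # The callee reads only the agreed-upon operand registers and writes
    # only the agreed-upon result register. Nothing is left to chance.
    REGISTERS["RESULT"] = REGISTERS["A"] + REGISTERS["B"]

def caller(x, y):
    # The convention is explicit: operands go in A and B,
    # the result comes back in RESULT.
    REGISTERS["A"] = x
    REGISTERS["B"] = y
    add_service()
    return REGISTERS["RESULT"]

print(caller(7, 5))  # 12
```

Because the interface is fully specified in advance, the two routines interoperate reliably even though neither knows anything about the other’s internals—precisely the provision the brain is argued to lack.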
To perform any of these functions such as recalling a mental function from memory to play chess for example and do so at any point, it seems obvious to us in our subjective experience that we have the ability to direct our focus of attention in a precise way. Directing one’s thoughts is an act of free will. Is true free will compatible with the emergent theory of material mind?
This could be a long discussion and I will have more to say about free will in one of my Falsifications of Materialism in the next section of the paper. But the short answer is that if the mind is envisioned to be an emergent computational system, then no, there could be no true free will. There would be no true free will because all mental functions arise from deterministic interactions of molecules internal to the brain. There can be randomness related to quantum mechanics, but that is not the same thing as the openness that true free will entails.
The apparent conflict between what we feel is true about our ability to direct our thoughts on the one hand, and the determinism of nature on the other, is left unresolved by the emergent theory of mind. At best you could view free will as computational decision making, and that appears to be what “compatibilism” is. (Compatibilism is the belief that materialism and free will are reconcilable.) But what compatibilism offers is a demotion of free will in that it equates it to algorithmic decision making.
11.5 Summary – Emergence of Consciousness and Mind
In this section I have discussed the variety of theories of mind. I have spent a lot of time discussing the profound problems with property dualism—an emergent material mind. The problems of consciousness, self-awareness and qualia are untouched by any emergent theory. But consciousness and qualia—and I should include intentionality as well—are only part of the difficulties.
How an emergent material mind could have evolved through Neo-Darwinian processes is an unsolvable problem, given that there is no direct genetic mapping between the genotype (the DNA) and the phenotype (the brain) and given that the available time and population sizes were insufficient. And the general features a materially emergent mind must have in managing resources, learning, integration, memory storage and recall present profound obstacles, which I referred to as administrative and operational.
But these administrative and operational problems, and especially consciousness, qualia and intentionality, are difficult to think about in a way that would lend itself to a quantitative falsification.
In my research and involvement in forums on these topics, I have not been given even the slightest hint that any neuroscientist has the faintest clue as to how these mental functions could be developed, how these mental functions can be accessed at any time from any point, how they can interact with other mental functions, how they can be recalled and stored, or how they could be activated and operated over a neural infrastructure such that a reliable result is achieved.
If mental functions and their interactions are to be reliable as we sense that they are in our daily subjective experience, then the precise neural firing patterns would have to be controlled in some way. There is no provision for such a thing in any theory positing an exclusively material mind. Only true agency—intelligent—causation can do this. The vagueness of the descriptions of emergent mind theories betrays the futility of such theories.
When one reads accounts of how the emergent mind might work, one reads about property x supervening on property y and emerging from this or that other property. It strikes me as pure nonsense. The vagueness of the accounts is the result of intelligent people seeking to explain the greatest of all enigmas while operating from a false initial premise—materialism. They are doing the best they can with the hand they have been dealt. But perhaps it is time for them to discard their hand and draw some new cards.
In the next section of this paper I will offer three falsifications of materialism which lend themselves to a more quantitative approach to falsifying the idea of the emergent material mind and by extension, falsifications of materialism. These falsifications are more intuitive and accessible to anyone and make for a much stronger case against materialist theories of mind.
The problem with relying on philosophy and cutting-edge scientific research to attempt to understand whether materialism is true or not is that, for most people, the complexity of the terminology, concepts and science is overwhelming. One quickly gets lost in a confusing conceptual morass of new terms, processes and phenomena one knows nothing about. Endless claims and counterclaims and references to this study or that quickly become too difficult to sort out. It takes years of persistent investigation to sift through the nonsense in order to make sense of it all and to establish credibility among the various participants in the debate. It is not easy.
On the one hand you have what appears to be a consensus of modern science with its affinity for materialism, and on the other hand you have the intuitive sense that a physical system—the brain—could not possibly account for your subjective mental experience. Nevertheless, that is what virtually all academic neuroscientists will tell you. There are detractors, but were one simply to weigh the two sides in academia, materialists would vastly outnumber the substance dualists or idealists.
I have always thought that if it is the case that God exists, it should not be so difficult for the average person to discern this truth by means other than faith and a vague sense of something greater. In other words, for the rational and open minded, it should be the case that with a bit of persistence one should be able to make a rational case for the existence of God or at least deny the alternative.
I have provided a bit of that in the previous section. But I believe that I have found a few more clear and quantifiable falsifications that are accessible to all and I will present them herewith. I have put these out on a few forums, both Intelligent Design and atheistic, and no one has laid a glove on them in my opinion. I believe that they transcend science. They are in-principle immune to promissory materialism. In other words they are not likely to be nullified by future science discoveries. And most importantly they are intuitive, simple to understand and they are based on reason. For me, these “falsifications of materialism” as I call them, are the most persuasive evidence and have given me the greatest assurance that the mind is not reducible to the brain.
These falsifications of materialism are aimed primarily at the young believing adult who is being told by their professors in college that there is no God. But regardless of how persuasive I believe they are, they are unlikely to persuade someone who has already been infected by modern scientism, the belief that no pronouncement of modern science presented as a consensus could ever be wrong.
I have shown that Neo-Darwinism’s failings undercut this claim, this appeal to authority, but a hardened materialist will be unmoved by anything other than scientific consensus. When all else fails, those who have been gullible enough to buy into science-based materialism hook, line and sinker will simply shrug their shoulders, wave their hand or invoke promissory materialism and change the subject.
I will make the very generous assumption (generous to materialism) that the unsolved and intractable problems of consciousness, qualia and intentionality, as well as the administrative problems I enumerated and discussed in the previous section related to the emergent theory of mind, are off limits. I will just put them on the shelf and suggest that even if those problems were to be resolved, the emergent theory of mind still fails based on the creative complex specified information that the human mind exhibits. For these falsifications I will assume that the emergent theory of mind is plausible. In other words, I will not hinder materialism with strict reductionism. The emergent theory of mind, as I have detailed, is an appeal to magic, but I want to make every effort to give materialism its best chance at succeeding in explaining creative complex specified information.
12.1 Falsification of Materialism #1: Dreams
I was in Amsterdam several months ago driving with our distributor through the streets. It was evening and we encountered a massive bridge made of tree trimmings with rail cars going underneath it. I had a sense that someone had told me about this bridge somewhere in Amsterdam, but I had never seen it before even though I had been there several times and walked around the town. The trade show was over for the day and we were looking for a quick bite to eat before going back to the venue to watch a few presentations. We gave up, made a u-turn and went back to the venue. We walked in and saw the large convention center auditorium with chairs being set up and people hanging out; the auditorium was very well lit and huge. I had never seen such a grand, bright room anywhere.
I needed to find a bathroom. I asked a woman who was ushering at the venue, and she motioned that it was downstairs. There were steel bar-type gates that partially blocked the way downstairs, kind of like what you would find in a supermarket. I continued down and saw the entryway to the washroom, very posh but with strange urinals I had never seen before.
I scurried upstairs back into the large auditorium with people milling about and setting up chairs. The chairs were kind of cheap looking given the otherwise large and grand venue. I saw a coworker of mine sitting at a table. There was a middle-aged woman sitting on the table who then began to lean over toward my coworker as though intending to flirt. He saw me and waved me over, so I continued walking and sat down. I glanced around the room and then toward the chair where I would sit at the table. The woman was now practically lying on the table with her head resting on her right hand, facing me. She was looking at me and I was looking directly at her. I had never seen this woman before. She was expressionless with perhaps a slight smile. She was fair skinned with a few light freckles, full cheeks and a slight overbite. She had a full head of sandy hair cut mid-length. I said hello and she said nothing, just a coy smile.
As I was leaving the venue I walked downstairs to exit and saw a wedding going on. Odd, I thought, that a wedding would be held in a convention center, but I continued and went out the other door, which resembled more of a hotel door. They motioned for me to go to the right, but I knew my car was up and to the left in a lot. I started to walk through a waterfall area and pond and almost slipped; I caught myself twice. They said to walk on the flat dry rocks. I did, and was able to grab onto some trees and bushes to make my way up toward the lot.
This was all a dream. There was no bridge made of tree trimmings, no venue, no auditorium, no woman. It was so real, but it was all something that my mind had put together for my nightly entertainment. Dreams are a true poor man’s theatre.
Dreams are astounding, spectacular if you really think about what has to be going on to produce them. And dreams without a doubt disprove materialism in that they disprove the notion that the brain accounts for the mind. In fact, if I had to offer one knock-down proof that is accessible to anyone as a falsification of materialism it would be dreams.
There are other mental phenomena that exhibit the signature of design, but dreams are easier to quantify because of the imagery they produce in our sleeping consciousness. Dreams produce vast amounts of creative, complex specified information, and they do so spontaneously, continuously and instantaneously. My claim is that no material set of causes could explain the spontaneous and instantaneous appearance of creative, complex specified information.
The following are some terms that I will be using in this falsification related to dreams. I am going to make a comparison between the video that we see on say an Ultra High Definition TV and the imagery that appears in our dreams. When I am speaking of the image of a dream I will use the following terms:
Image plane – refers to the dream’s presentation of the image “canvas” appearing in one’s consciousness during a dream. This is analogous to a projector screen or TV screen or one’s visual plane as viewed through the eyes during waking consciousness.
Image content (or imagery) – refers to the mental image content of a dream and all the various components that make up the image plane. This is analogous to the image content of a movie or real life as seen through the visual system during waking consciousness. These are the image items in a dream such as a table, chair, cup, mountain, tree, boat, person etc. anything that can appear in the imagery of a dream. And of course these components can all be in motion so there is a dynamic quality to the image content.
Image frame – refers to the dynamic refreshing of the image content in a dream. This is analogous to a video frame in a movie. It is the current (instantaneous) image, which is replaced by another and another in immediate succession to give the impression of motion.
Image element – represents the resolution or granularity of the imagery in the dream. This is analogous to a video picture element (pixel) in video systems. If a coffee cup appears in your dream image, it is the distinct elements that comprise the cup—each one of the many required to form the image of a cup. They are not recognizable as distinct entities, as they are too small. But by inference we know that they must exist in order for an image with details to be rendered during a dream.
Note: For the most part, to simplify things, I am going to primarily discuss the image content of a dream. I am going to only briefly discuss all the other intractable difficulties related to dreams, such as the dialog that occurs during a dream, the abstract meaning of the visual elements, the abstract thoughts that accompany one’s dreams, the tactile elements and so forth. I will simply note that they are problematic as well in the context of a discussion of the mind’s ability to emulate the senses during a dream.
12.1.2 Attributes of Dreams
Based on my own experience with dreams and discussions with others, dreams have the following attributes:
1) The imagery of a dream is novel, i.e. it is creative, meaning that the imagery is new (and unique actually). Dream imagery is not a direct replaying out of memory. However, the context or topic of our dreams is typically similar to what we experience in reality so in that limited sense, dreams can be based on memories. For example, I have dreams about things that are going on in my life and often with persons I interact with. But when these contextually similar elements appear in my dream, the image of them is not directly lifted out of memory –it is always different. So dreams are creative in that they produce new imagery even though the subject or topic of a dream is, more often than not, familiar.
2) The quality of the imagery of a dream is real in the sense that when we are dreaming, in most cases (except lucid dreaming), we are not aware that we are dreaming, and the imagery presented seems as real and of essentially the same quality as the imagery that we experience through the visual system during waking consciousness. In fact I often dream that I am glad that what I am doing in the dream is not a dream, or wish that it were a dream.
I think that our minds deliver imagery during daydreams that is similar in clarity to the imagery that appears in our nightly dreams. The difference is that during waking consciousness our minds are in flux and overloaded with perception from our senses. During dreams, at night, our quieted minds avail themselves of clear images. It is a bit like looking at a reflection in a lake. When the water is calm, the reflection is clear; when the water is stirred up, the image is highly distorted and unclear.
3) The image content of a dream is restricted to personal human experience. Our dreams include both natural items such as trees and dirt and grass and sky and humans as well as human artifacts such as bats and balls, and tables and chairs and computers. But our dreams typically do not include topics or objects never experienced.
4) The image content of a dream exhibits complexity. Our dream imagery must be comprised of vast numbers of distinct image elements and the associated, underlying brain components that would have to change rapidly over time.
5) The imagery of a dream exhibits a very high degree of specificity in that the distinct image elements are tightly interrelated with one another to form objects and trees and settings and people. Tight interrelatedness means that each image element that makes up an object or a face or a tree in a dream has to be what it is in order for the object to appear real as it normally would through the visual system. This is really another way of stating that dreams are real or appear real while in sleeping consciousness. Tight interrelatedness of image elements means that they are interdependent and therefore highly constrained. Highly constrained means that they are highly specific. Each image element is in effect specified by the adjacent image elements (think picture elements, “pixels” in video) that together makeup an object or face or tree. And these objects are specified by the overall setting of the imagery. The image elements are also specified temporally by the image content of the preceding image frames and the successive image frames.
6) The imagery of a dream is ultra high definition, at the very least equivalent to what they call Ultra High Definition II video—actually quite a bit better in all attributes, as I will explain below.
A rough comparison can be made between the information content of the imagery in a dream and the information content of Ultra High Definition II video. Ultra High Definition II video has 8000 picture elements (pixels) per line and 4000 lines. Each picture element can be one of about 1,000,000,000 (one billion, 2^30) color values. There are 120 frames of video per second.
The imagery in a dream is probably superior in quality to current Ultra High Definition II video in terms of the resolution (granularity) of the imagery, the colors represented, and the smoothness (rate of refresh). For example, I am certain that the smoothness of the imagery of a dream would require a refresh rate greater than 120 image frames a second and is capable of displaying more than 1 billion (2^30) color values. The human eye can see about twice as many colors as even the best TVs that you have watched can display.
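The scale of the comparison above can be made concrete with a back-of-the-envelope calculation, using only the figures already quoted (8000 × 4000 pixels, 2^30 color values, 120 frames per second). This is a rough quantification of the video analogy, not a measurement of the brain.

```python
# Raw (uncompressed) information rate of Ultra High Definition II video,
# using the figures quoted above.
pixels_per_frame = 8000 * 4000       # 32 million picture elements
bits_per_pixel   = 30                # 2**30 ~ 1.07 billion color values
frames_per_sec   = 120

bits_per_frame = pixels_per_frame * bits_per_pixel   # 960,000,000 bits
bits_per_sec   = bits_per_frame * frames_per_sec     # 115,200,000,000 bits/s

print(bits_per_frame)                # 960000000
print(bits_per_sec / 8 / 1e9)        # 14.4 (gigabytes per second)
```

In other words, if dream imagery is even roughly comparable to this standard, it represents on the order of a gigabit of specified image information per frame, refreshed more than a hundred times per second.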
7) The imagery of a dream is dynamic in the sense that the human artifacts, natural items and persons that appear in dreams are often involved in motion. The motion can be absolute or relative. Absolute motion can be produced by the objects themselves moving, such as a ball moving across the dream’s image plane. Relative motion can come about by virtue of the dreamer shifting their viewing reference.
8) The imagery of a dream is continuous in the sense that there are no gaps in the imagery of a dream once it starts. There is always a next image frame filled with appropriate content queued up and brought into one’s sleeping consciousness.
So the imagery in a dream is novel (creative), real, high quality, highly complex, highly specific, dynamic and continuous.
12.1.3 Emulation of the Senses
One of the remarkable features of dreaming is that dreams emulate—act as—at least three of the five senses: vision, auditory, tactile. The difference between dreams and the perceptual senses is that with dreams the information content—all of it—is auto-generated. There are no incoming perceptual signals that are transduced into some other signal to feed the visual or auditory cortex to produce the sensations.
That there are areas of the brain that can emulate the senses is, in a sense, a stunning example of convergence. Emulation is a duplication of an existing and extraordinarily complex set of functions, i.e. the perception of the senses themselves. Obviously the molecular functions of vision, hearing and touch must be very complex—even though how they are recognized in our consciousness is not understood (qualia, the “hard problem” of consciousness)—because the cascade of electrochemical molecular interactions in the perceptual senses is known to be very complex.
Yet somehow, materialists would have us believe, these other parts of the brain have precisely duplicated these extraordinarily complex signaling functions, and have done so without the benefit of an incoming signal from the senses. These duplicated functions precisely emulate the senses and must also produce a set of similar signals as well as the content itself, i.e. the imagery, the sounds, the smells, the touches, and the feelings of gravity and balance, etc.
Moreover, when we dream we also have thoughts that go along with the imagery, sounds and tactile sensations as well as the dialog. These thoughts and the other “sensory” content are generated simultaneously or nearly simultaneously. In order for a dream to be coherent, the various emulated sensory components would have to be synchronized with one another and with our thoughts and then transferred to the seat of consciousness wherever that would be and whatever that might be.
A materialist would have to believe that all this heterogeneous content, this eclectic mix of the abstract thoughts, visual imagery as well as the auditory and tactile sensations, is somehow generated from the brain and that the disparate content is marshalled together, bound together, synchronized and routed across distinct sets of neuronal infrastructures where they could then be presented to the neural components which allegedly give rise to consciousness. This is impossibly complex and would require both foresight and top down causation.
So you have heterogeneous content, some presumably analog like—analogous to the physical senses—and some abstract content such as the thoughts and dialog in a dream. Abstract content would seem to need a coding scheme of some sort to be represented as there is no analogous physical quality to the abstract thought of “liberty” for example. This would all mean that there is a mix of encoded signals and analog signals occurring over a neural infrastructure. How would all these disparate signals be delineated and sorted out such that they could be treated—decoded—appropriately? No one has a clue how all this could possibly work.
Frankly, all of this is a bit speculative, but whichever way a materialist attempts to explain the ability of dreams to auto-generate sensory signals, generate novel content, integrate it, bind it together, synchronize it and route it, he will encounter immense intractable difficulties.
But the emulation of the senses is not the really hard part for a materialist to explain with respect to dreams. The really difficult part is to explain where the creative complex specified information comes from.
12.1.4 What a Materialist Needs to Explain
As I mentioned in the introduction of this section, I want to be very charitable to materialism in this exercise. I have put on the shelf so to speak a whole list of intractable problems: consciousness, qualia, intentionality, as well as the administrative and operational problems of the emergent mind theory. In fact I am also going to give materialism a pass on the question as to how the ability to emulate the senses could have arisen. This enables us to focus on just a few aspects of the problem—the creative content of a dream.
The emergent theory of mind is only somewhat more plausible than strict reductionism as a way of explaining dreams. Dreams require massive coordination of resources, something only a top-down causative theory could address. In the context of a theory positing an emergent mind arising from the brain, this would mean that when a dream starts, suddenly, somehow, an unknown material mechanism—a set of processes the genesis of which is a complete mystery—activates and sequesters a large array of brain components to produce a sequence of image frames and readies them for presentation to consciousness.
The material process would have to demarcate these brain components based on which specific set of brain components would produce the coloring for this or that specific set of image elements in the dream. In all likelihood this would require many brain components to define the color of each distinct image element (equivalent to each “pixel” ) given that the range of colors that each image element could assume is so large (roughly one billion possibilities).
The material process would also have to coordinate between these brain components and arrange them to produce the colors for the multitude of image elements in the visual plane of the dream such that coherent image objects (e.g. a tea cup on a table) and a coherent image frame were produced. Everything in the image frame in a dream is visually coherent.
The material process would have to synchronize the array of brain components such that the imagery would be refreshed in a coordinated fashion across all the brain components associated with each image element in an image frame, just as a frame sync generator does in video processing.
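The synchronization requirement described above can be sketched in code. This is a toy illustration of the frame-sync idea, with invented component names; it shows only why every contributor to a frame must be locked to the same frame count for the image to be coherent.

```python
# Each "component" renders its portion of an image frame and tags it with
# the frame number it was rendered for. (Names invented for illustration.)
components = {
    "ball":  lambda n: (n, f"ball@{n}"),
    "field": lambda n: (n, f"field@{n}"),
    "crowd": lambda n: (n, f"crowd@{n}"),
}

def present(n):
    # Gather each component's contribution for frame n.
    pieces = {name: fn(n) for name, fn in components.items()}
    # A coherent frame requires every piece to belong to the SAME frame;
    # a single out-of-sync contributor would corrupt the whole image.
    if any(frame != n for frame, _ in pieces.values()):
        raise RuntimeError("out of sync: frame would be incoherent")
    return [data for _, data in pieces.values()]

print(present(0))  # ['ball@0', 'field@0', 'crowd@0']
print(present(1))  # ['ball@1', 'field@1', 'crowd@1']
```

In video hardware this gating is done by a common sync signal designed into the system in advance; the argument here is that nothing in the brain is known to play that role for dream imagery.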
The material mechanism would also have to ensure that there was a new frame of imagery ready to present to our sleeping consciousness so that the dream imagery appeared dynamic and uninterrupted.
But here is the really hard part: The material mechanism would have to somehow configure all the brain components such that the visual content was meaningful in the context of the subject or topic of the dream. This means that each visual element in each image frame must be precisely what it is in order for the dream to be visually (and audibly and tactilely) consistent.
Here is an example. If I am playing catch with a football in my dream as has happened, there is a precise set of image elements that would have to be precisely what they are to form the image of a football in each image frame. This precise set of image elements would have to be defined by a corresponding precise set of brain components if materialism is true. And this would have to occur frame after frame to give the dream its sense of reality and image coherency.
In the case of a football in motion, this would mean that these brain components would have to be arranged such that the image of the football was in a slightly different location in each successive image frame to produce the impression of a moving football. Again, the underlying brain components that produce the distinct image elements comprising the football would each have to be precisely what they are in color and could be nothing else, image frame after image frame.
You can see these problems by looking at the three frames of video below. Clearly these images are not from a dream (that would be a real trick). However, this type of image content could appear in a dream. Notice the relative positions of the ball, the left fielder and the players in the background in each frame. The distinct image elements that comprise the ball, the left fielder and all the other items in the image frame would have to be what they are in terms of color and could be no other color in order to provide a visually coherent set of components in the set of image frames throughout a dream.
But the brain components responsible for rendering the ball for example would have to change from frame to frame. The question arises: Does the set of neurons that account for the ball always account for the ball no matter where the ball is in the image frame? Or does a specific set of neurons account for a constant position in the image frame? What happens when you change your viewing perspective in a dream? No one has a clue how any of this could possibly work. It is a massive creative complex specified information problem not to mention a massive binding, coordinating and synchronizing problem.
And what is more remarkable is that during dreams we see and interact with people—people we have never seen before, in settings we have never been to before. Where does this rich set of creative imagery come from? I don’t think there is any strictly material explanation possible, not now; not ever.
How all this happens is a complete and utter mystery. Science involves explaining causation in physical phenomena. The task facing a materialist is to explain the unidentified material causative mechanisms sketched out above. My claim is that a materialist, brain-only accounting for dream imagery is impossible even in principle. Vast quantities of novel, dynamic, complex specified information cannot be produced—especially instantaneously—even through intelligent top-down causation as in a gaming system. And again, I have focused primarily on the imagery, which sets aside another set of intractable problems, i.e. emulation of the senses, identification of the brain components where useful information is emerging, binding or integration of the brain components harboring partial sets of the coherent imagery, synchronizing them, handling heterogeneous types of data, not to mention consciousness itself and qualia.
12.1.5 Possible Materialist Explanations
Let’s look at some possible materialist explanations.
There are really two possibilities from a materialist standpoint: 1) The content of a dream just spontaneously appears randomly, as would be required by a reductionist view of the mind. I think almost everyone can discount that. That leaves us with 2) a program of some sort running in the brain to account for dream imagery. This would be the only possible solution and would require an emergent theory of mind—property dualism.
Materialists explain complexity through a Darwinian process involving incremental random changes locked in by some selection process. The problem is that it should be obvious that dream imagery is not something that could have been programmed by natural selection, for a few reasons. First, our dream imagery is specific to individuals in our present time and often involves recently invented human artifacts. Secondly, there is not enough DNA in the genome to account for this information, nor is the structure of the brain precisely defined or specified by our DNA. Any program that arose would have to be epigenetic—occurring beyond the genes and involving “self-organization.” So a Darwinian explanation makes no sense unless you deplete Darwinian necessity of all meaning. (And that assumes there is any meaning left to natural selection in the first place.)
A memory-capture, image primitive scheme at first blush might seem plausible. An image primitive story would involve the brain somehow capturing and preparing all varieties of “sensory” elements acquired through the visual, auditory and sensory systems and then storing them. These “sensory primitives” would have to then be prepared as generic components for later integration into a dream. This would be similar to a gaming system rendering content on the fly.
A verbally skilled materialist might be able to weave a “plausible” story together around a memory-capture, image primitive scheme that could deceive all varieties of naive foolish young adults in academia or even adults stricken with the mental disease of materialism. But for the skeptical and deep thinkers among us it will not wash; not even close. Here’s why:
First, you would need a complex generic capture mechanism for image primitives in the first place that saves off snippets of imagery for reuse. How could that have evolved if these programs could not possibly be in the DNA? Why would it have evolved if there is no incremental (immediate) use for it?
Secondly, you would need a generic preparation function for each image (or auditory or tactile) primitive that would prepare image, auditory and tactile primitives for instantiation into a dream sequence. Generic functions are by definition not useful for specific cases, so what incremental selective value would such a thing have?
Third, when these image, auditory and tactile primitives were to be integrated into a dream, they would have to be instantiated instantaneously for the specific context and visual, audible and sensory content of the dream. Instantiation would involve preparing the generic primitives for imagery in terms of scale, color, “viewing” angle, and for auditory in terms of volume, location, pitch and so forth. Analogous preparation functions would be necessary for tactile “sensations” as well. There would have to be a set of functions for integrating these sensory primitives. And this primitive method would have to accomplish these multi-sensory integrating functions dynamically throughout the sequence of the dream such that it resulted in an experience indistinguishable from normal life in waking consciousness, i.e. perceived motion, dialog, sounds and sense of touch, all synchronized with one another. Anyone who has worked at all with video editing, sound editing and photo editing programs knows how difficult it would be to tweak a set of images that were even 90% similar to a specific multimedia intent. Unless it is a direct replay of memory, the integration would be impossibly hard.
Fourth, the instantiated image primitive(s) would have to be introduced at just the right location and orientation in the image plane.
Fifth, the instantiated, emulated sensory primitives would have to be introduced at precisely the right time in the dream sequence.
Sixth, the emulated sensory primitives would have to be stored and recalled for use which would require some complex and dynamic indexing system.
Seventh, the recall mechanism would have to have a recognition system that would recognize usable primitives and instantiate them in a timely manner. This would require top down knowledge of the content of the dream and foreknowledge as to what was coming next.
Eighth, dreams often include images of things which are entirely unfamiliar and could not have been derived from any sort of primitive based on past memory.
Ninth, the image primitives, once prepared for instantiation, would have to be coordinated with tactile and auditory sensations and integrated with dialog and the dream’s thought stream.
There is more and probably much more I am leaving out, but I will stop there.
All of this is extraordinarily complex, would require top down planning and foreknowledge, and is probably not much easier than randomly generating the images from scratch, which is the only other alternative available to materialism.
12.1.6 Falsification Using Probabilities of Complex Specified Information
So how would you go about falsifying materialism based on the characteristics of dreams? For now let me be extraordinarily charitable to the task of a materialist to explain dreams, and again limit the focus to quantifying the probabilities of the imagery itself, leaving the auditory and tactile content out. And I will also ignore the ability to emulate the senses through analogous signaling and other integration and synchronization problems I listed above (as well as many others that I did not even mention).
Since materialist explanations for dreams are essentially limited to chance, notwithstanding the emergent material mind theory, we would need to calculate the probability of each brain component being arranged precisely how it would have to have been versus how it could have been. However, we do not know how many brain components are involved in producing the imagery in a dream (under the assumption that materialism is true), and we do not know how many different states each brain component could be in. No one knows this. Therefore, we cannot come up with a definitive super set from which to calculate the probabilities for the brain itself in order to falsify materialism.
Instead we have to make an alternative calculation, a proxy calculation, based on a rough assessment of the information content of a dream compared to the information content of Ultra High Definition II video. For example, each image element (think pixel on a TV) in a dream could be one of at least 1 billion color values (1/1,000,000,000). There would be roughly 32,000,000 image elements (analogous to pixels) per image frame in the image plane of a dream. And there would be roughly 120 image frames per second, or 1,200 total for a 10 second dream.
The numbers above probably understate the case against a materialist explanation because, as I mentioned, the imagery in dreams is in all likelihood of far greater quality (and therefore quantity) than what we could see on the best Ultra HD II (“8K”) TVs.
So what are the odds of any distinct image element (pixel) being precisely what it has to be throughout all 1,200 image frames? The calculation would be: 1/1,000,000,000^1,200. What are the odds of all image elements in any specific image frame being precisely what they have to be in order to form the objects and setting and faces and trees, etc. in a single distinct image frame of a dream? The calculation would be: 1/1,000,000,000^32,000,000. What are the odds of all image elements of all image frames being precisely what they have to be in order to produce the imagery in a brief 10 second dream? The calculation would be: 1/1,000,000,000^(32,000,000+1,200).
The resulting probabilities are so diminishingly small that they are not worth considering. There have been at most 10^139 events in the universe since its beginning. This number (10^139) is the product of the number of particles in the universe and the number of Planck times since the universe began. It is called the universal probability bound.
Clearly the probabilities of an alignment of brain components to produce just the imagery of a dream (leaving out the auditory, tactile, dialog, thought, integration and synchronization factors) are well beyond anything possible. The number 1,000,000,000 raised to the power of 32,001,200 is hundreds of millions of orders of magnitude larger than the total number of events that have occurred since the origin of the universe.
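These figures can be checked with a few lines of arithmetic. A minimal sketch (Python), working in powers of ten to keep the numbers tractable; the parameters are the assumed values stated above, not measured quantities:

```python
import math

# Assumed parameters from the text (illustrative, not measured):
COLORS = 10**9        # possible color values per image element
PIXELS = 32_000_000   # image elements per frame
FRAMES = 120 * 10     # 120 frames/s for a 10 second dream = 1,200 frames

# Express each probability as the exponent (log10) of its denominator.
one_pixel_all_frames = FRAMES * math.log10(COLORS)    # one element, all frames
one_frame_all_pixels = PIXELS * math.log10(COLORS)    # all elements, one frame
full_dream = (PIXELS + FRAMES) * math.log10(COLORS)   # all elements, all frames

print(one_pixel_all_frames)  # 10800.0 -> odds of 1 in 10^10,800
print(one_frame_all_pixels)  # 288000000.0 -> 1 in 10^288,000,000
print(full_dream)            # 288010800.0 -> 1 in 10^288,010,800

# Compared with the universal probability bound of 10^139 events:
print(full_dream - 139)      # ~288 million orders of magnitude beyond the bound
```

The arithmetic confirms that 1,000,000,000^(32,000,000+1,200) equals 10^288,010,800, which exceeds 10^139 by roughly 288 million orders of magnitude.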
It is often claimed that the origin of life is the most vexing complexity problem that materialists have to address. These probabilities related to dreams show otherwise. The complex specified information exhibited by dreams vastly exceeds the complex specified information related to the origin of life. Recall that Stephen Meyer’s calculation for the chance occurrence of a basic living cell was 1 in 10^40,000. Eugene Koonin’s calculation was 1 in 10^1,019.
12.1.7 Near Death Experiences, End of Life Experiences, DMT and other “Hallucinations”
Despite the strong case just presented, there may be some lingering doubt about my claim that dreams cannot be produced by the physical brain. Perhaps, some might reason, dreams could be a direct playing out of stored memories by some programmatic function even though there is no evidence that they are. I have already discussed this idea that dreams could be a replaying of memory in some detail and rejected it. Here are a few more considerations that make that untenable.
It is important to note that there are other non-perceptual mental phenomena that fall into the category of hallucinations (dreams could be viewed as hallucinations). These other categories of hallucinations clearly involve novel content and therefore could not possibly be a programmatic replaying of stored memory content. I will discuss these in more detail in later sections of the paper, but for now I will just mention them. The visual, auditory and intentionality content of near death experiences, end of life experiences, N,N-Dimethyltryptamine (DMT) trips and alien abductions, which are all said to be “more real than real,” offer what is clearly novel and ineffable content. Therefore, obviously, these subjective phenomena cannot be the result of a programmatic replaying of images and other content from memories.
Therefore, even if a materialist were to persist in the claim that dreams are a direct replaying of stored memory content—despite the infeasibility of that—or a programmatic assembly of stored elements, they cannot make that claim about these other “hallucinations.” Whether these other phenomena are hallucinations will be the subject of later sections of the paper. Some clearly are; some may not be.
12.1.8 Summary - Dreams
The imagery in dreams produces massive amounts of creative complex specified information, millions of orders of magnitude beyond the universal probability bound. This is a profound problem, but it is only the tip of the iceberg when discussing the problems dreams pose for materialism. Aside from this complex specified information problem, there are the intractable problems related to the emulation of at least three (3) of the five (5) senses; the identification of brain components where useful, related information is emerging; binding or integrating these brain components which are harboring partial sets of the related imagery; synchronizing the emerging related information; and handling heterogeneous types of information—analog information such as colors and shapes, and encoded abstract information from the thoughts and dialog—that are associated with the imagery. And this treatment leaves out the unsolved “hard problems of consciousness”…conscious awareness itself and qualia.
For the brain to produce dreams and all that is entailed in that would require that the brain had foreknowledge, had omniscience about its emerging resources and possessed top-down causative control of immense complexity. Even a gaming system cannot do what dreams would require, because a gaming system cannot emulate all the senses and cannot create novel images, let alone novel, dynamic imagery. Furthermore, a gaming system is programmed, consisting of perhaps a million lines of highly specific programming code written by an intelligent human, which rides on top of a large set of embedded code, including an operating system and a circuit board, all engineered meticulously by a large number of intelligent humans. And it goes without saying that gaming systems are not conscious and do not generate abstract thoughts, which are so tightly integrated with the content of dreams.
12.2 Falsification of Materialism #2: Continuity of Thought
In language, an alphabet represents the mechanism of materialism, while the words expressive of the meaning of a thousand thoughts, grand ideas, and noble ideals—of love and hate, of cowardice and courage—represent the performances of mind within the scope defined by both material and spiritual law, directed by the assertion of the will of personality, and limited by the inherent situational endowment. [195:7.21] (P. 2080)
The next falsification I would like to offer is the phenomenon of the continuity of thought. Have you ever noticed that your mind is always presented with a continuous stream of related thoughts? There are seldom, if ever, any gaps where your mind is blank. There always seems to be a single, whole, intact thought present in our waking conscious awareness. I suppose there are exceptions such as seizures. Remarkably, barring interruption and internal shifts in context, each distinct thought in a sequence of thoughts is related to the adjacent thoughts in time—those before and after and in the context of one’s experiences. This is true whether we are rehashing a similar set of thoughts from memory, or when we are daydreaming or problem solving when our imaginations are heightened and presenting us with a novel sequence of thoughts.
Even more astounding is when these streams of thought are found to be creative and unique in human history and contribute to the advancement of human knowledge, human artifacts, artistic renderings and expressions of goodness in new and fundamental ways. Can this marvelous quality of mind be reconciled with materialism, which posits only the physical brain to account for human consciousness and thought? No, it cannot be; not even in principle.
It is not known how thoughts could arise in the brain, not to mention how they are represented in the brain or how they could be rendered in our consciousness much less what consciousness is. For many people these intractable problems of consciousness and thought are enough to dismiss materialism from the start. But materialism’s grip on Western thought has conditioned the educated class into thinking that there are no plausible alternatives to a brain-only hypothesis of human consciousness and thought. Only by thinking about the details of our conscious thoughts and about what would have to be the case for materialism to be true, does materialism’s brain-only theory fall completely apart. We saw that in dreams and now we turn to thought streams.
12.2.1 Materialism’s Claims
Materialism’s accounting of human intellect is either reductionist which requires strict adherence to bottom up causation, or emergent which posits top down causation which is derived from bottom up causation.
Bottom up causation means that it is the sequences of molecular neural events—patterns of neural firings—that give rise to one’s thoughts and direct them to our conscious awareness for rendering, somehow. But no one has a clue how that is accomplished. Therefore, under a reductionist view, the thoughts that appear in our conscious awareness are entirely determined by the prior local causal chain of molecular neural events. But if our thoughts are produced and determined by the prior causal chain of neural events in the brain, then they would not be expected or necessitated, in any way, to produce coherent, continuous sequences of related mental events, i.e. thoughts recognizable to our conscious experience as associated with one another. There would be no expectation that adjacent brain states (similar configurations of the brain components in sequence) would result in “adjacent” (tightly related) mental states.
This decoupling of local causation at the physical level and information and meaning at the mental experience level is a fundamental fact that reductive materialism is bound by. Simply put, physical processes in the brain, if reductive, cannot possibly have any way of knowing what set of physical sequences in the brain would give rise to coherent mental sequences of thought. But materialism would seem to require that. Without such knowledge of the mapping between physical brain states and mental experiential states, materialism is left with either blind chance or determinism offered by the emergent mind theory, neither of which could possibly produce the rich creative mental lives we all experience.
The emergent materialist mind—property dualism—fares only slightly better than a purely reductionist theory of mind. First off, it should be pointed out that even an emergent theory of mind is ultimately a reductionist theory. The emergent theory does hold out the possibility of programmatic top down control over thought streams, assuming we wave away the difficulties as to how such programs could have evolved or even operate, as described in the previous section of the paper.
But a programmatic system—an algorithm—because it would be deterministic, cannot produce novel complex specified information except by introducing some randomness and adding extremely tight constraints. And even then algorithms could produce only modest amounts of novel content. Obviously, we all have new thoughts each day as we experience and learn new things.
The emergent mind posited by many philosophers of mind, in principle, could produce some limited top-down causation. But we have discussed the many problems with the emergent material mind in the previous section. This Falsification takes these problems a step further.
12.2.2 Complex Specified Information
The sequences of molecular neural events that materialism claims give rise to our thoughts would have to be precise and they would have to be specific. They would have to be precise and specific because there are an incalculable array of thoughts that arise in our minds and these thoughts must then have an incalculable number of physical arrangements to underlie them if materialism is true.
Imagine an insight that you have had or bit of knowledge that you have acquired. Then think of the innumerable ways in which it could be slightly modified even in very subtle ways. Each version of this insight would have—must have if materialism is true—a slightly different underlying neural signature otherwise they would not be distinguishable from thoughts which were slightly different. Also, since these physical processes—these sequences of molecular neural events—would have to interface with other putative physical processes, a predictable outcome could only result if the processes themselves, and the interface between them, were precise and specific. This is something I discussed in the previous section.
For a materialist to deny that this correspondence between physical brain states and mental states is necessary would be to deny that science has anything to say about neuro-“science,” because science is all about uncovering and verifying repeatable phenomena. If one could claim that there were no consistent correlations between mental events and brain events, then the whole endeavor of neuroscience would not be falsifiable.
Because thoughts and insights unfold over time, they would have underlying sequences of arrangements—neural firing patterns—not just static arrangements. Think of these as patterns of neural firings. That thinking must—if materialism is true—involve sequences of physical brain events rather than simply physical brain structures, is obvious because learning occurs very quickly, far too quickly to be founded on a new physical infrastructure of the brain.
Once the first thought in a stream of related thoughts is brought forth in our conscious awareness, the subsequent thoughts appear to be constrained by the content (the meaning) of the initial thought and increasingly so with each new thought as this collection of emerging thoughts matures into a complete new insight. The underlying physical processes which materialism claims give rise to these thoughts would, therefore, also be increasingly constrained and more tightly specified as more thoughts were brought forth just as the configurations in my brain causing the movements of my hands and fingers would have to be increasingly constrained as I type out this sentence.
Therefore, under a materialist assumption, in order for a continuous, coherent stream of related thoughts to occur, an enormous number of molecular components in the brain would have to be continuously arranged in increasingly precise and specific ways. The sheer number of molecular components involved betrays a very high degree of complexity. These streams of thought exhibit extraordinary quantities of complex specified information and usually, creative complex specified information.
Especially noteworthy are the spontaneous emergence of unique and novel thoughts that lead to an expansion of human knowledge in profound and important ways. Although each of us have unique and novel thought streams (insights) each day, most are not significant in this regard.
If materialism is true, its account of such unique and novel phenomena would entail that the underlying local causation in the brain results in a unique sequence of arrangements—precise patterns of neural firings—in the brain; patterns of neural firings that these components would have never produced before. In and of itself that is not significant because by chance, local physical causation of components in the brain will almost always result in unique patterns of neural firings.
But what is special about the complexity here is the types of unique, precise and complex patterns of neural firings. These complex patterns of neural firings (to produce a fundamental truth about the nature of reality) would be highly specified and convey information at the mental level that has meaning—important meaning—in human discourse. This is a spectacular occurrence of complexity conforming to a known-pattern which is a signature of teleology.
These sequences of precise patterns of neural firings would comprise an infinitesimally small set of possible dynamic configurations of the brain’s molecular components, the vast majority of which would convey absolutely no useful information at all in human discourse. (This all of course assumes that a sequence of neural firings can produce anything at all in our subjective mental experience as materialism claims.)
In addition to a material mechanism to account for the generation of continuous sequences of novel, complex, specified neural firing patterns, there would have to be a physical process in the brain that would somehow know in advance where the specific neural circuits were that were incubating a spontaneously emerging thought, or know whether the outcome of a physical process was producing a thought that was useful in an existing sequence of related thoughts.
This physical process would also have to know how these thoughts were structured and how they were bounded within the neural circuits such that a whole, distinct, coherent thought could be captured, sequestered, transmitted and presented to our consciousness in a timely fashion.
These physical processes in the brain would have to pass these precise and structured neural events, which purportedly represent distinct thoughts, to another putative physical process, which would serialize them properly with other neural events representing other emerging thoughts and prepare them for rendering in our conscious experience. How these physical processes would know where and when these useful, related neural events—those that were to give rise to related thoughts—were emerging, how they were structured and bounded, and how they should be sequenced and rendered in our consciousness are intractable mysteries.
These seemingly omniscient and clairvoyant physical processes of engendering coherent, contextually relevant thoughts, locating and identifying them as they emerge, sequencing them and preparing them for rendering in our consciousness would have to be repeated continuously and unerringly throughout the entire life of a human—and indeed all humans—such that our conscious awareness was continuously presented with a coherent stream of related thoughts.
These putative physical processes of the brain would have to account for the seamless rendering of a continuous stream of thoughts despite interruptions from our senses. They would have to be able to continuously reassert prior thought streams following interruptions and integrate them with our memories and with any new information presented through the senses. They would have to allow for a rehashing or reviewing of each thought stream as one struggles for meaning.
12.2.4 Continuity of Thought and Free Will
Now, before summarizing this falsification of materialism, I want to talk about free will and offer another approach as to how thought streams falsify materialism: by demonstrating that free will exists. Materialists generally regard free will as an illusion. “Compatibilists” attempt to reconcile materialism with free will, but they do so by demoting free will to decision making—the type of decision making that you would find in a programmatic algorithm.
Materialists point to the Benjamin Libet type experiments as evidence that free will does not exist. I will discuss the Libet experiments in more detail in the next section of the paper. Some materialists also, ironically enough, use thought streams, the very phenomenon that I am discussing here as evidence against free will.
Please watch the following lecture by atheist neuroscientist Sam Harris from the 5:45 mark to the 14:00 minute mark and note the discussion about thoughts arising in the brain near the 12:30 mark.
Harris trips all over himself in this lecture. Now is not the time for a detailed rebuttal, but what he is trying to claim is that thoughts just arise unsolicited, randomly, in the brain. They do in a sense, but as I point out, they are related in context through a coherent thought stream. Harris could not be giving a lecture and staying on topic if that were not the case. In fact he tries to make light of the way the brain (I would say mind) pops certain thoughts, such as “snow-shoeing,” into the mind (he would say material mind), as though that were evidence that our mental thought flows were random. But importantly, notice that he then returns right back to his topic. How does that happen if he has no control over his thoughts?
There are shifts in our thoughts between topics. Some are driven by events in one’s life such as sadness, a concern, or something immediate like a bird flying by the window. Some—many in fact—topical shifts in our thoughts certainly appear to be driven by our own directive—our free will. I want to focus on these thought streams, i.e. those that arise following a directive that we appear to give our minds. I think this offers the best evidence that free will truly exists and is not some illusion as materialists would claim. A materialist explanation (or more accurately an explaining away) of free will does not work in this case. Here is why…
For materialism to account for a related stream of thoughts that appears to our subjective conscious experience as something directed by our free will would mean that the material brain merely planted the notion that we were going to direct our thoughts to a particular topic. A materialist must, to be consistent with his or her philosophy, propose that the brain fools itself into thinking it is in control, the sense of control being an epiphenomenon—an undirected insight. Materialists generally do not deny this. This, in and of itself, is quite peculiar, but it is not what is important here with respect to a proof of free will.
So…how could a material brain—a computational device—present a pre-planned set of actions to your conscious awareness? It seems that this would require foresight; foresight as to what the brain was going to do next. But a computational device could have no such foresight, no such knowledge. It is not possible for a physical brain, whether a materialist proposes a reductive account or an emergent account, to know what complex, novel, unique arrangements a set of brain components were going to configure themselves into in the future, much less which such configurations would just happen to give rise to a set of related thoughts—mental events. Again, creative thoughts cannot be attributed to deterministic causes. And most of our thought streams contain at least some novelty and uniqueness, if they are not entirely novel and unique.
Here is an example: let’s say there is some problem that you are struggling with at work, and let’s say it is an entirely new problem, one you have never encountered before. On the drive in you tell yourself that you want to think about this problem and solve it. After a few false starts, interruptions and a brief tussle with self-awareness, i.e. thinking that you are trying to think about something and being aware of it, your mind begins to bring forth a series of thoughts related to the problem.
To repeat the important point: invariably these thoughts, in instances such as the example above, are unique and often quite creative and novel. Unique, creative and novel thoughts would mean that the brain arranged itself in ways it had never arranged itself before, but ways that just happened to fall within a narrow topical frame of reference at the mental level. And, if we are to accept materialism, the brain would have had to tip you off beforehand as to what it was going to do. It is a great mystery, or should be if you are a materialist, and it should give a materialist pause for thought. For someone who believes in an immaterial aspect to mind, it is no surprise at all; it is what you would expect.
Here is an experiment; you can try this on the ride home or whenever, to prove the point. Let’s say your daughter gives you a puzzle. You have a chicken, a fox and chicken feed. You need to get all of them to an island with a rowboat. But the chicken cannot be left alone with the chicken feed and the fox cannot be left alone with the chicken. Only you and one of the others—chicken, chicken feed, fox—can fit in the row boat on any one trip. So how do you get them all to the island?
Your daughter wants to know if you can figure it out. You are rushing off to work and do not have time. She says mom figured it out in 2 minutes. Now you are really motivated, but not ready for a contest at the moment, so you give her a kiss goodbye and tell her you will think about it on the drive in. You get in the car and start thinking about something you saw, but you are subconsciously aware that you have to get back to this thing in the back of your mind about the fox and chicken. Finally you put yourself into autopilot and begin thinking about the problem. Your mind brings forth a series of thoughts related to the problem. It takes you 3 minutes to figure out the puzzle.
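Incidentally, the puzzle itself is small enough to check mechanically. A minimal breadth-first search over its states (a Python sketch; the function and variable names are my own, purely illustrative) confirms that the shortest solution takes seven crossings:

```python
from collections import deque

ITEMS = ("fox", "chicken", "feed")
# Pairs that cannot be left on a bank without you present.
FORBIDDEN = [{"fox", "chicken"}, {"chicken", "feed"}]

def safe(bank):
    """A bank left unsupervised is safe if it holds no forbidden pair."""
    return not any(pair <= bank for pair in FORBIDDEN)

def solve():
    # State: (items still on the starting bank, your side: 0 = start, 1 = island)
    start, goal = (frozenset(ITEMS), 0), (frozenset(), 1)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (here, side), path = queue.popleft()
        if (here, side) == goal:
            return path                    # breadth-first => shortest plan
        your_bank = here if side == 0 else frozenset(ITEMS) - here
        for cargo in [None, *sorted(your_bank)]:   # row alone, or take one item
            moved = {cargo} if cargo else set()
            new_here = here - moved if side == 0 else here | moved
            # Only the bank you just left is unsupervised after the crossing.
            left_behind = new_here if side == 0 else frozenset(ITEMS) - new_here
            state = (frozenset(new_here), 1 - side)
            if safe(left_behind) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "alone"]))
    return None

plan = solve()
print(len(plan), plan)   # 7 crossings; the chicken goes first and last
```

The search is purely mechanical, of course: it only enumerates a state space that was fully defined in advance by the rules of the puzzle.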
So let’s see how a materialist explanation would work here. To believe that the brain is all there is would mean that the brain configured itself to produce a continuous pattern of neural firings, such that it produced an epiphenomenal thought that you were going to think about this fox-chicken-feed problem. This occurs at Time0, let’s say. At Time1 through TimeX you were thinking about the fox-chicken-feed problem.
The thoughts you had during that time were, for the most part, barring interruption, a continuous stream related to the fox-chicken-feed problem. But your brain tipped you off, somehow, that it was going to “think” about that problem at Time0; a problem—a set of thoughts—that you had never experienced before. How did the mechanical brain, which in the best case is a set of algorithms, “know” that it was going to configure its components in just such a precise way, in accordance with the topic, and logically find a solution? This would require foresight, would it not? The brain had never arranged itself in these configurations before. An algorithm cannot do what it has not been programmed to do.
And of course all this says nothing at all about how extraordinarily difficult it is even to imagine how an algorithm could orchestrate a vast number of neurological components to bring about the epiphenomenal thoughts to which the brain was supposedly giving rise.
Despite the intuitive implausibility of materialist claims given the foregoing, it is not possible to adequately quantify the probabilities. There are at least two reasons for this. First, we cannot know the scope (the superset) of the possible alternative brain states within which any coherent, continuous thought stream would reside, because materialism cannot tell us how thoughts are, or could be, generated in the brain, or how many physical components would be required to represent them.
But we do know that the superset of possible brain states, given the number of neurons and connections, would be vast, and that the probabilities of landing on a specific sequence of brain states that might produce a specific series of coherent mental states would be very unkind to materialism’s brain-only hypothesis, if the calculation could be done at all.
Secondly, thoughts have no obvious material qualities at all and therefore cannot be quantified except by a proxy calculation using symbolic language, which would grossly understate the complexity involved—syntax is not the same as semantics—and would therefore be excessively charitable to materialism. But materialism would fail miserably nonetheless, even if we were to grant that concession. Let’s take a look.
We can arrive at some rough calculations of probabilities while being extraordinarily charitable to materialism in the manner described above, using syntax as a proxy for semantics. Were you to tap into your subvocalizations (the symbolic [and silent] serializations of your thoughts) and transcribe them, you would probably arrive at a few pages of words during this fox-chicken-feed experience.
According to information theory, with each subvocalized word there is a ruling out of many alternative words. The more words in a coherent text string, the more specific the string becomes and the more information and complexity is conveyed, because vast sets of possible word configurations are ruled out with each phrase.
There are about five thousand words in the average person’s working vocabulary. Let’s assume that about 2,500 subvocalized words would be required to express your thoughts during the period of time you were thinking about the fox-chicken-feed problem. This means that a very rough calculation of the probability for the brain to generate thoughts (words as a charitable proxy) that are within a specific topic is about 1 in 5,000^2500 (the 5,000 words possible in the average vocabulary, raised to the power of the number of subvocalized words needed to express your thoughts, which we said was 2,500).
This very large number represents the superset of possible word configurations within which your thought stream—represented by words—related to the fox-chicken-feed problem resides. Now there would be many ways to express the thoughts you had related to the resolution of this problem. There might be ten billion word combinations that could lead to a resolution of the problem; there might even be 10 trillion viable word combinations. But whatever that number is, it would be infinitesimally small compared with the vast superset of possible combinations of 2,500 words based on a vocabulary of 5,000 words.
The superset of possible word configurations is clearly a prohibitively large number no matter how many viable word combinations there could be. The probabilities would be tens of thousands of orders of magnitude beyond the universal probability bound of 10^139. And this assumes that a material brain would even be able to consistently produce an arrangement of neurons firing that led to anything meaningful at all, setting aside the problem of how material processes could give rise to thoughts in the first place!
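The rough arithmetic behind these figures can be checked in log space, since the number itself is far too large to write out. A short sketch, taking the vocabulary size, word count and viable-combination count as the assumptions stated above:

```python
import math

VOCABULARY = 5_000   # assumed average working vocabulary
WORDS_USED = 2_500   # assumed subvocalized words in the thought stream

# log10 of the superset size 5000^2500, computed without building the integer
superset_log10 = WORDS_USED * math.log10(VOCABULARY)
print(f"superset of word configurations ~ 10^{superset_log10:.0f}")  # ~10^9247

# Granting even 10 trillion (10^13) viable word combinations barely dents it:
odds_log10 = superset_log10 - 13
print(f"odds against hitting a viable combination ~ 1 in 10^{odds_log10:.0f}")  # ~10^9234
```

At roughly 10^9247, the superset does indeed exceed the 10^139 bound cited above by over nine thousand orders of magnitude.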
Just one more thing to tie up here…your daughter said your spouse figured out the problem in 2 minutes, and it took you 3 minutes. You report to your daughter, by text, that you solved the problem in a minute and a half.
Absent an immaterial mind, materialism is left with the physical brain. The brain then has to account for everything we experience in our mental lives. This is an enormous burden. According to materialism, each quality of “mind” is underwritten by a physical process in the brain. The only explanation materialism has to offer as to how all these marvelous qualities of mind could have arisen (and arisen so quickly), is evolutionary theory –Neo-Darwinism.
According to Neo-Darwinism each of these processes would have had to have been assembled piecemeal using the tandem mechanism of random mutation and natural selection. But there are serious problems with this that cannot be overcome, even in principle.
One obvious problem with an evolutionary accounting for the brain is that so many of the features and qualities of mind exhibit the signature of modern humanity. It is hard to accept that the brain could have been configured by evolution in the distant past to harbor a vast set of latent capabilities which when manifested would just happen to be useful in the context of 21st century humans. It is one thing to have the general capability for something but quite another thing to explain the specific causes that could bring forth vast quantities of novel, complex specified information spontaneously, continuously and near instantaneously and that offer value to modern humanity!
Secondly, in order for evolution to have produced a brain with the capabilities and qualities of mind we all experience, the physical processes which materialism purports give rise to them in the brain would have to be encoded and stored in the DNA. These configurations might then be subject to “random mutational” changes such that they could be selected. However, the configurations for these processes cannot be identified or even inferred from the DNA. The buildout of the brain is epigenetic; the trillions of synaptic connections are not specified in the genes.
So where does all this complexity come from? And where is it stored? Think of it this way: if materialism is true and if science is the only pathway to truth, then it is reasonable to say that nature, and in fact all reality, is transparent to human reason, i.e. that all reality can be modeled in the brain. In effect, then, the brain could be said to have the capability of subsuming the complexity of all reality. Yet the complexity of the DNA—especially those more limited segments that produce the proteins used in the brain—is hopelessly insufficient to account for the total complexity of reality.
Furthermore, this complexity involved in abstract thought would have had to have arisen in the lives of far too few individuals during the brief evolutionary period within which the descent of modern man is believed to have occurred.
There are many more problems with an evolutionary explanation but I will have to leave it at that for now.
12.2.7 Continuity of Thought - Summary
I have briefly sketched out the intractable difficulties of a materialist account involving the continuity of human thought. It is perhaps the greatest of all mysteries along with consciousness itself. If any of this sounds at all plausible to you by material explanations alone, then let me suggest that you have been irreparably brainwashed by the scientism which has come to dominate Western civilization.
If it is unreasonable to believe that these marvelous qualities of mind that we all experience continuously can be explained by an electrochemical “machine” of sorts, i.e. the brain, then we have to consider alternatives such as mind/brain dualism and dismiss materialism as a false hypothesis.
And in fact it is unreasonable to believe that material processes in the brain could account for these qualities of mind. Setting aside the intractable difficulties in explaining how abstract thoughts are represented in the brain and rendered in consciousness or even what consciousness is, there is no reason to suspect that physical processes would have the foreknowledge to identify specific areas in the vastness of the brain that just happened to be readying themselves to produce a specific, coherent stream of thoughts that have meaning in human discourse.
And there is also no reason to believe that it is likely or even possible for the brain –unaided by an immaterial mind –to arrange its components in such a way that it would generate a succession of complex, specified neural firing patterns continuously and unerringly throughout one’s life.
These problems are fundamental and will not surrender to an entreaty to promissory materialism because foreknowledge and spontaneous generation of novel, continuous, complex specified information is required and these cannot be accounted for by physical processes in the brain.
Let me close with a supreme example of human thought. To believe that the streams of thought Einstein must have experienced as he sought the solution to the problems whose eventual resolution became a fundamental truth about reality –relativity—happened as a result of continuous sequences of chance or deterministic arrangements of molecular neural events, is such a draft on common sense that one would have to conclude—given the general acceptance of materialism—that any belief, no matter how foolish and no matter how contrary to direct human experience, could come to be accepted if wrapped in the sophistication of intellectualism and delivered with the full authority of science. One has to wonder at the irony as to how a method of inquiry—science—which has been spectacularly successful, with its intention to seek truth empirically through open rational inquiry, could lead us down a dead end path and become like that which it sought to counter—the tyranny of an overbearing institutional religion which itself had departed from its own charter.
12.3 Falsification of Materialism #3: Constancy and Resumption of Self
The next falsification of materialism I would like to offer involves what I would call the resiliency or constancy of self and the resumption of self. One of the striking things about our experience as conscious, thinking humans is how constant our sense of self, our identity, is, and also how constant our ability to think, our knowledge and our experience are. We forget things occasionally, but we carry with us at all times a sense of identity, of who we are, and a general catalog of experiences. Never in my life has there been any suspension or change of my conscious sense of who I am, other than during sleep. Throughout our lives our brains change considerably. A myriad of new synaptic connections are formed, especially in the early years. Yet one’s identity is immutable.
Aside from these ongoing modifications of the brain, there are catastrophic changes as well. Those who have experienced surgery under general anesthesia or suffered cardiac arrest have had their brains shut down and consciousness suspended even if only briefly.
Near death experiences represent a more profound disruption of consciousness often involving complete cessation of detectable brain activity. Yet we know from countless surgeries conducted under general anesthesia and near death experiences that one’s consciousness, sense of self and mental faculties, i.e. memories, knowledge, beliefs, etc. are usually fully restored even in extreme cases following the event. One would not expect this regardless of what philosophy of mind a materialist subscribes to.
Why is it that our sense of self, our cognition, our knowledge and life experiences are so constant even when the brain is subjected to change and catastrophic effects? What material causal processes in the brain could account for this constancy of self?
Near death experiences are dismissed by materialists as hallucinations resulting from a brain in distress; this despite the fact that many near death type experiences occur when the subject is not near death. Nevertheless, materialists believe that by dismissing near death experiences as hallucinations they are safeguarding their materialist world view.
The reality is that when materialists make this claim they are unwittingly embracing an explanation that disproves materialism. If near death experiences are hallucinations, they cannot be hallucinations of a material brain, they can only be hallucinations of an immaterial mind.
The reason is simple: the brain, being an electrochemical computer in a sense, cannot possibly generate vast quantities of novel, continuous, unique, complex specified information spontaneously especially when it involves unearthly and ineffable visual and abstract mental content which accompany near death experiences. As we have just discussed, the brain cannot even account for the complex specified information we experience in our nightly dreams or thought streams.
12.3.1 Materialist Claims
It is not known, or even imaginable, how our mental experiences could be reducible to physical phenomena in the brain. Nevertheless, this is what materialists believe. According to
materialism’s favored approach these days—the emergent mind—consciousness and all the mental phenomena we experience are the result of complex molecular interactions in the brain—neural firing patterns. It must be the case, then, that all these complex mental processes for consciousness itself, memory management, resource management, and all knowledge and analytical thinking functions would have to be stored in the brain somehow, and probably in multiple places. These thinking functions and memories would have to be known to other thinking functions in order for our mental lives to be as rich and agile as we know them to be. We have looked at the complexity of all this, found that no neuroscientist has any idea how it could work, and identified profound problems with the emergent theory of mind.
12.3.2 Near Death Experiences
What would happen—what should happen—under a materialist accounting of mental phenomena, if the precise and specific causal sequences of events in the brain—all those stored thought programs and memories represented by neural firing patterns, from which all mental phenomena are believed by materialists to be derived—were to be disrupted in a catastrophic way?
Many such cases of complete disruption have occurred; indeed they occur quite commonly. I want to focus on three well-known near death experience cases: musician Pam Reynolds, neurosurgeon Dr. Eben Alexander and orthopedic surgeon Dr. Mary Neal.
Pam Reynolds had a large aneurysm deep in the base of her brain. In order to remove the aneurysm, the medical team would have to use a procedure referred to as “standstill” whereby all molecular activity in her brain would be halted. The surgery was a success. The surgeon removed the aneurysm and Pam arose from the dead so to speak.
I discuss the near death experience in the section on mystical experiences but for now I just want to focus on the fact that Pam Reynolds was under deep anesthesia and in “standstill” which involved chilling her body to about 60 degrees Fahrenheit and then draining all the blood from her head. This was the only way to remove the aneurysm—to halt all molecular activity in her brain. There is no doubt about any of this.
The second case is neurosurgeon, Dr. Eben Alexander. In Dr. Alexander’s case CT scans of his brain revealed that his neo-cortex was badly damaged and not functioning due to bacterial meningitis. There were “…severe alterations in the cortical function and dysfunction of extraocular motility indicating damage to the brain stem.”
Mary Neal, an orthopedic surgeon, was kayaking in Chile when she became trapped under a waterfall. She was submerged without oxygen for at least 15 minutes and perhaps as long as 25 minutes. Typically, brain damage is said to begin when the brain is starved of oxygen for 5-7 minutes, unless the person is submerged in very cold water.
12.3.2.1 Resumption of Self following Near Death Experiences
For the primary point I am making in this article, it really does not matter whether or not Pam Reynolds, Eben Alexander or Mary Neal had the subjective experiences associated with near death experiences that they claim. Personally, I have little doubt that they experienced what they claim. What matters for the case I am going to make here is that their brains were severely affected. Pam was effectively brain dead throughout “standstill.” Eben Alexander was in a coma for seven days. Mary was submerged for perhaps 15 to 25 minutes. In Eben’s and Pam’s cases, these facts about the adverse effects on their brains are known with certainty based on medical records.
In each of these cases, following resuscitation, these individuals’ consciousness, sense of self, knowledge, memories and presumably all, or nearly all, of their mental capabilities were restored. That their sense of self and all other complex mental phenomena were restored is an inference that can be made by watching interviews with them on YouTube and reading accounts of interviews with them.
Note: In the section of this paper on Mystical Experiences there are links to interviews with each of these three persons. You can jump ahead now and view them or wait until I discuss them.
Just to cite one example, shortly after Pam Reynolds regained consciousness, she recognized the Eagles’ song “Hotel California” and commented on a particular line in the song in a clever way to the attending physician. In order to do this, she would have to have been conscious, cognizant of who she was and what had happened to her, recognized the song, understood the meaning of the lyrics and applied that meaning differently, in a metaphorical way. All these mental phenomena are extraordinarily complex and would necessarily have extraordinarily complex material processes underlying them if materialism is true.
In order to reestablish one’s consciousness, sense of self, beliefs, knowledge, memories and all associated cognitive abilities following complete cessation of, or damage to, the brain, under a materialist perspective some prior set of conditions would have had to have been reestablished and resynchronized throughout the brain. But by what set of complex material causes could a prior set of conditions have been preserved and reestablished throughout the 100 trillion synaptic connections in the brain? And how could such a marvelous function have evolved in the first place?
There could have been nothing like an orderly shutdown of Pam Reynolds’s brain given the nature of the general anesthesia and the “standstill” process, nor could there have been an orderly shutdown in Dr. Alexander’s or Mary Neal’s case. There must have been countless molecular reactions interrupted, neurotransmitters half-built, synapse firings aborted and synaptic connections partially constructed as, in Pam Reynolds’s case for example, she transitioned through deep general anesthesia, to a cooling down of her body, and to “standstill” without any blood in her brain.
The delicate balance of interdependencies that must have existed during their prior sequences of neural events would have been irreparably lost. There would be no conceivable way to restore the prior conditions to any sort of “known good” state. Rather, a new set of “initial conditions” would have asserted itself upon resuscitation, and given materialism’s strict bottom-up causation, the molecular activity would continue in accordance with this new set of local causal sequences. But it would have been completely random which synapses, within which neurons, within which areas of their brains, came up first and began operating.
To gain just a hint of the complexity involved, imagine if you stored a computer’s boot loader, operating system and application programs in volatile memory and then pulled the power plug. What would you expect to happen when you plugged the power cord back in?
To think that the precise, specific set of complex brain processes that materialism alleges give rise to consciousness, one’s sense of self, memories, knowledge, beliefs and cognition could reestablish themselves, strictly through material causation, following complete cessation of brain function, is an appeal to miracles, but without any human testimony or empirical evidence to support it.
Even if somehow the storage of all these programs, experiences and consciousness itself—which are all said to be instantiated as precise patterns of neural firings—managed to be maintained, all record of where they resided in the brain, and therefore of how they could be recalled, would be lost.
Here is another way to think about it…the emergent theory of mind proposes that the mind, a specific mind—yours and mine—emerged from a complex interaction of brain components. There is a uniqueness to each of us. If materialism is true, then this uniqueness must be predicated on a precise and specific set of neural sequences—a specific signature of at least some subset of neural components. What are the chances that, following complete disruption of all neural activity, and in some cases severe damage to the neural infrastructure of the brain, the same “person” would emerge?
Calculating the probabilities for the material causation required to bring about the necessary causal sequence of events to restore the same person cannot be done and is utterly pointless. The only reasonable conclusion is that there is some sort of immaterial quality we are endowed with—mind—that orchestrates the resumption of all the necessary brain functions to re-establish the person and all their accompanying mental faculties.
A materialist might suggest that although the neural events of her brain were disrupted in a catastrophic way, nonetheless the structures of her brain were intact. They might go on to suggest that since the structures of her brain were preserved, this could account for the resumption of her “self.” I think this is hopelessly implausible but I will address it nonetheless.
First, it is implausible because thought and consciousness are processes, envisioned to be a sequence of neural firings, and the pattern would have to be precise. Even if one were to stubbornly adopt the view that the preservation of the structure (at least to some degree) could account for the resumption of self, this explanation would not work well for Eben Alexander, who had severe disruption of the structure of parts of his neo-cortex and brain stem.
12.3.3 Summary – Continuity and Resumption of Self
According to the emergent theory of mind—property dualism—consciousness and all the functions and qualities of our minds emerged somehow from a complex series of neural firing patterns. I have argued in the previous section of this paper that this is highly implausible. Now, in this section, and specifically in the subsection on the Continuity and Resumption of Self, I have put forth a case which strains this already tenuous proposal well beyond its breaking point. To think that these marvelous qualities of mind emerged in the first place betrays an enormous faith in material properties. But to imagine that, following severe disruption to the brain, these very same qualities could re-emerge and be instantiated so as to bring forth the same sense of identity, with all the same mental faculties and memories, is hopelessly implausible.
12.4 Summary – Falsifications of Materialism
The primary focus of this paper is to show that there is design in nature. The method for showing this is to demonstrate that the complex specified information exhibited by nature, within living systems and through qualities of mind such as dreams and thought streams, defies any materialist explanation. Later we will discuss mystical experiences and hallucinations in the same context of complex specified information.
In this section and the previous section, I have embarked on my own form of eliminative materialism by eliminating materialist proposals for the mind. This leaves some form of substance dualism which is consistent with The Urantia Book.
But there are a couple of operational difficulties left to address related to substance dualism. When I have presented the falsifications of materialism to materialist-atheists on various forums (occasionally with neuroscientists involved), they have never met them head-on. They sometimes quibble with a few minor points here and there, but they never address the information problem directly, even those who seem to have taken the time to understand it. Generally, a few tactics are used to counter any anti-materialist proposal of consciousness and mind.
If it is a neuroscientist, they often attempt obfuscation by throwing out a bunch of terminology in hopes they can impress you or intimidate you into compliance. Stony Brook neurosurgeon Michael Egnor calls this "neuro-babble." This doesn't work, because when you cut through the terminology there is really nothing there, and what typically happens is that these folks never respond further.
More commonly, the run-of-the-mill atheist-materialist will do what I have been doing in this paper, but in reverse: they attempt to take down the idea of substance dualism rather than defend their materialist proposal. There are three ways they go about doing this.
They will often simply invoke authority, dismissing any non-materialist proposal as a non-scientific claim and asserting that virtually all neuroscientists agree it has been established that consciousness arises from the brain because neural correlates for consciousness have been identified. And with that, they feel they have won the day; no further comment is required. They will often acknowledge that some difficulties remain with any materialist proposal of mind, but go on to ask why one would entertain the foolish and superstitious notion of an immaterial mind, concluding with remarks comparing such proposals to phlogiston or the flat earth.
Materialists who are a bit more thoughtful and curious (and probably older) may claim that the universe is causally closed, so any proposal for an immaterial mind, which would require an interaction between a non-physical "substance" (immaterial mind) and material substance (the physical brain), is inviable because, they will say, any such proposal violates the laws of physics. I will address the claim that the universe is causally closed in the next section. For now, I will simply say that the notion of causal closure of the universe is a holdover from classical (Newtonian) physics, which was entirely deterministic. Quantum mechanics changed all that, but you still encounter arguments based on it.
More commonly, a materialist-atheist will cite the obvious causal correlations between brain and "mind." The most common point is that effects on the brain clearly affect the mind: drink a margarita and our minds do in fact seem to be affected. Moreover, experiments that perturb the brain can induce predictable outputs. Similarly, using functional MRI (fMRI) machines, neuroscientists can in some cases predict which of a few items a person is "thinking" about by examining the signature of the fMRI output. It is all far more involved than that, but I will address these objections to substance dualism briefly in the next section of this paper.
There are no viable explanations for consciousness or intentionality and no resolution for the complex specified information problem especially as related to human subjective experience. I hope that I have made that clear at this point.
In this section I am going to address the common claim by materialists that the idea of an immaterial mind, such as that proposed by substance dualists, is untenable. We will discuss common objections to substance dualism such as causal closure and the obvious correlations between the physical brain and our minds, as well as experiments purporting to disprove free will.
13.1 Causal Closure of the Universe
In Section 3 we talked about the materialist claim that reality—the universe—is a physically causally closed system that does not permit any outside influence. We said that classical physics seems to discount the possibility of an influence or interaction by some imagined immaterial agent or force. But quantum mechanics, or at least its most commonly accepted interpretation, the “Copenhagen Interpretation,” nullifies this deterministic and mechanistic view of the universe by showing that nature is inherently probabilistic.
The claim of the Copenhagen interpretation is that the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead “must be considered a final renunciation of the classical idea of ‘causality.’" There is an intrinsic randomness associated with observations of microparticles such as photons, electrons and atoms; even larger particles and molecules behave as “probability waves.” So quantum physics enables true agency causation and nullifies the materialist claim of a causally closed universe, or at the very least suggests that the universe may be open.
But in order for quantum physics to be relevant here, there should at least be some plausible proposal as to how a bidirectional interaction between an immaterial mind and a material brain might occur. There are a few theories as to how an immaterial mind might interface with the physical brain through quantum indeterminacies, proposed by Nobel Prize winner Sir John Eccles, quantum physicist Henry Stapp and quantum theorist Evan Harris Walker.
Neurologist and Nobel Prize winner Sir John Eccles famously said:
“I maintain that the human mystery is incredibly demeaned by reductionism, with its claim in promissory materialism to account eventually for all of the spiritual world in terms of patterns of neural activity. This belief must be classed as a superstition…. We are spiritual beings with souls in a spiritual world, as well as material beings with bodies and brains existing in a material world.
“I would maintain that this possibility of a future existence [immortality] cannot be denied on scientific grounds.”
Eccles speculates that consciousness affects brain activity by manipulating the way neurotransmitter chemicals are released into the synaptic gap, exerting control at microsites within the synapse. Eccles envisions a two-way interaction between brain and mind, with the
“brain receiving from conscious mind in a willed action and in turn transmitting to mind in a conscious experience.”
Henry Stapp envisions the interface between mind and matter at the level of the calcium ion, which is much smaller than the neurotransmitter molecules involved at the synaptic junction. The firing of a synapse, which releases neurotransmitters into the synaptic gap, is triggered by the flow of electrically charged calcium atoms (“ions”). Calcium ions are certainly small enough to demonstrate quantum effects.
Quantum theorist Evan Harris Walker has developed the most detailed and comprehensive model of quantum consciousness so far. Walker places the interaction between mind and matter at the level of the electron, which is almost one hundred thousand times less massive than the calcium ions.
Each of these proposals for quantum-based mind-brain interaction is at least somewhat speculative. No one knows how this could work. And since these scientists proposing interactionism are dualists and nonmaterialists, they represent minority opinions in science today.
The main point of this subsection is that the interaction between a putative immaterial mind and the physical brain does not appear to be precluded by physics. The universe, there is reason to believe, is not causally closed.
Perhaps the most important evidence for materialism is the obvious correlation between the brain and what we perceive as our subjective mental state. This has been a common sentiment even among Christian intellectuals and is expressed by Nancey Murphy, Professor of Christian Philosophy at Fuller Theological Seminary, in the following comment:
“Currently, localization studies by contemporary neuroscientists—that is, finding specific regions or distributed systems in the brain associated with particular cognitive and emotional functions—provide some of the most compelling evidence that it is the brain, rather than a mind or soul, that is responsible for these capacities.”
My general response is that this line of argument confuses correlation with causation, and necessary causation with sufficient causation. We have all heard the analogy of an untuned piano or a damaged radio receiver and its obvious effect on the sound of the music or the radio reception. That does not mean that the piano is sufficient to explain the music.
The topic of brain/mind correlation is very involved, and I can only cover a couple of the more serious objections to the idea of an immaterial mind based on correlations.
The first point is that there are peculiar cases, such as people with severe brain damage or with part of the brain missing, that even the loosest imaginable correlation between brain states and mental states (a correlation which would still qualify as a materialist, reductionist philosophy) cannot explain.
If the brain is sufficient to account for the mind, then one would expect any marked damage to the physical brain, or anything missing from it, to be so disruptive that nothing even resembling normal consciousness or thought would remain. There are a few cases I will mention, aside from the obvious cases of drug-induced hallucinations.
13.2.1 Man with Almost no Brain
The first case involves a normal middle-aged man who was missing most of his brain. You can read the article at the following links:
Here is a quote from the article:
“How the man was able to function normally remains a mystery, but then again, so do many aspects of the brain's operation. The best explanation scientists gave is that the brain is plastic and highly adaptable.
“While the enormous “holes” in these brains seem dramatic, the bulk of the grey matter of the cerebral cortex, around the outside of the brain, appears to be intact and in the correct place – this is visible as the dark grey ‘shell’ beneath the skull. What appears to be missing is the white matter, the nerve tracts that connect the various parts of the cerebral cortex with each other, and with the other areas of the brain.”
13.2.2 Girl with Half Brain Removed
Another case involves a young girl who had daily seizures caused by a degenerative disease on one side of her brain called Rasmussen’s syndrome. Neurosurgeons saw no alternative but to perform the extraordinary procedure of removing half of her brain. Here is a link to a video about this little girl, who seems quite normal post-op.
13.2.3 Persistent Vegetative State
Here is another remarkable case, involving a woman who incurred massive brain damage in an accident. She was in a persistent vegetative state by all measurable criteria. The doctors carried out a remarkable experiment using a functional MRI system, which revealed that she may have been consciously aware of all that was going on around her. It is a well-known case in the literature. Here is a link to the article, though the full article is behind a paywall:
This case sparked a debate between militant atheist neuroscientist Steven Novella and Stony Brook neurosurgeon Michael Egnor who is a Thomist substance dualist and Intelligent Design proponent.
I do not want to condense this exchange because Dr. Egnor, who I think clearly gets the better of it, offers some brilliant insights on the mind/brain question. You can read the exchange at the link below, which is Dr. Egnor’s final response in the debate:
13.2.4 Persinger and the God Helmet
Michael Persinger, a cognitive neuroscience researcher at Laurentian University in Sudbury, Ontario, created a stir a while back when he announced that he had experimental proof that he could induce a spiritual experience in subjects using a device, “The God Helmet,” that pulses magnetic fields at various locations in close proximity to the brain. Even Richard Dawkins gave it a try, but he says nothing really happened.
Persinger claimed that about two thirds of the subjects “sensed a spiritual presence” when subjected to the magnetic fields, some of whom referred to it as a “spirit guide”; but so did a third of the subjects in a control group who were not subjected to magnetic fields at all.
In any case the effects, ambiguous as they were, could not be reproduced in a double-blind study led by a Swedish team. The Swedish team found no spiritual effects at all.
The following video recounts an experiment with a person who had a near death experience. This person tried on the God Helmet and compares the two experiences. He also underwent a polygraph test related to his experience.
My general comment is that perturbing a physical system, which is what Persinger undoubtedly believes the mind to be, in a significant way would be more likely to break something than to subtly modify some abstract property of the system. Computer engineers take great pains to ensure that all the components and traces of a circuit board are immune to the effects of heat and especially electromagnetic interference (EMI).
Any significant influence on the voltage levels of a board, e.g. flipping a logical bit from 0 to 1 or from 1 to 0, would adversely affect the precision and specificity of a computer and produce bizarre results, if any at all. The reason airlines ask you to turn off your cell phone is that the EMI from the phone may couple onto the wires that control the flaps and other avionic sensing systems. I would not expect nuance at the mental level when the brain is subjected to large magnetic fields. I would expect disruption.
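To make the point concrete, here is a toy sketch (my own illustration, not drawn from any source cited in this paper) of why a single flipped bit, the kind of corruption EMI shielding guards against, changes a value drastically rather than subtly:

```python
# Toy illustration: one bit of EMI-style corruption produces a wildly
# different value, not a small perturbation.

def flip_bit(value: int, bit: int) -> int:
    """Flip a single bit of an integer, as stray interference might."""
    return value ^ (1 << bit)

letter = ord("A")                          # ASCII 65, binary 0100_0001
corrupted = flip_bit(letter, 5)            # flip bit 5 (value 32)
print(chr(letter), "->", chr(corrupted))   # 'A' -> 'a' (65 ^ 32 = 97)

reading = 10_000                           # e.g. a hypothetical sensor value
print(reading, "->", flip_bit(reading, 13))  # 10000 ^ 8192 = 1808
```

One flipped bit turns a capital letter into a different character and a sensor reading into a number off by thousands; nothing about the corruption is gentle or nuanced.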
Callosotomy is a procedure that severs the corpus callosum connecting the two hemispheres of the brain. It is usually a last resort to treat what is called refractory epilepsy. Some claim that, in effect, two distinct persons are created. Atheists have seized on this to make light of the idea of the soul by claiming that here a human doctor is in effect creating a second person, and presumably ensouling it at the same time. Recently the militant atheist neuroscientist V.S. Ramachandran made a point of this in a speech:
Despite the rantings and ravings of this flim-flam man, upon closer examination the apparent “two persons” is really a single person who may at times be confused, because with no communication between the left and right brain their perception and actions are affected. What counts is what they perceive.
Were a person who had a callosotomy to lie down and be sensory deprived and unable to take any action, they would then be left with their thoughts. They would still have perception—qualia—but their inner life could consist primarily of their intentionality, i.e. their thought streams, which would be about something. The real test as to whether a “person” comprises one person or two persons is whether they feel they have a single identity, a single sense of self and if they have a single thought stream at any one time. Frankly the subject is a bit unresolved in my mind but the balance of evidence leads me to believe that there is a single identity, a single will, and a single thought stream. Part of that inference of mine comes from watching interviews such as the following:
Also it is instructive to look at how a materialist alternative to substance dualism (the view I subscribe to) fares in assessing the results of a callosotomy. What would one expect, from a materialist perspective and its view that the brain accounts for all human experience, were one to perform a radical procedure such as a callosotomy? I would not expect merely a bit of confusion when one deals physically with things on the left or right side. I would expect complete disruption of everything, including consciousness. The fact that you can perform such a radical procedure on the brain and suffer only a relatively minor set of peculiarities is nothing short of stunning from a materialist perspective, and in fact demonstrates that the brain, while necessary, is not sufficient to account for human consciousness and thought.
13.3 Libet-Type Experiments and Free Will
The experiments conducted by Dr. Benjamin Libet, and others like them, are often put forth as evidence against free will and, by extension, against the idea of an immaterial mind. Libet was a researcher in physiology at the University of California, San Francisco. Libet’s experiments formed one of the main premises of Sam Harris’ book Free Will, which sought to debunk the notion of true free will. The book, which was really more of a pamphlet, was widely panned on Amazon.
Libet was trying to establish correlations between brain activity and conscious experience. He used electrodes attached to the brain to detect brain activity. Modern versions of the test use much more accurate functional MRI machines.
The most important of these experiments involved measuring electrical activity in the brain while volunteers were asked to move their wrists. What he found was that brain activity associated with the act of moving one’s wrist occurred about half a second before one’s conscious awareness of the intent to move. He called this preceding brain activity the “readiness potential.” This was unexpected. What one might at first expect from a substance dualist assumption is that the awareness or intention to move would precede the brain activity, and then the wrist would move.
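The sequence can be laid out on a simple timeline. The figures below are the approximate values commonly cited for Libet-type studies (my assumption, not measurements reported in this paper); times are in milliseconds before the wrist movement at t = 0:

```python
# A rough timeline of the Libet result, using commonly cited approximate
# figures (assumed for illustration; not data from this paper).
events = {
    "readiness potential begins":    -550,
    "conscious awareness of intent": -200,
    "veto still possible until":     -100,   # Libet's 100-200 ms veto window
    "wrist moves":                      0,
}

for name, t in sorted(events.items(), key=lambda kv: kv[1]):
    print(f"{t:+5d} ms  {name}")

gap = events["conscious awareness of intent"] - events["readiness potential begins"]
print(f"Brain activity precedes conscious awareness by about {gap} ms")
```

On these figures the readiness potential leads awareness by roughly a third of a second, while the conscious veto window sits between awareness and the act itself.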
The experimental results have been duplicated many times using functional MRI machines. In some cases, the brain activity is said to precede the conscious intent to move by a few seconds. From these experiments, Harris and others, advancing an atheist agenda, have concluded that there is no free will.
Interestingly, Libet himself did not believe that his experiments disproved free will. In fact, Libet defended the idea of free will and thought that his experiments confirmed it. With a bit of self-reflection, one can understand why. The following is one of the essential statements of Libet’s summation of his experiments:
“Potentially available to the conscious function is the possibility of stopping or vetoing the final progress of the volitional process, so that no actual muscle action ensues. Conscious-will could thus affect the outcome of the volitional process even though the latter was initiated by unconscious cerebral processes. Conscious-will might block or veto the process, so that no act occurs.
“The existence of a veto possibility is not in doubt. The subjects in our experiments at times reported that a conscious wish or urge to act appeared but that they suppressed or vetoed that. In the absence of the muscle's electrical signal when being activated, there was no trigger to initiate the computer's recording of any RP (Readiness Potential, i.e. the electric signal detected in the brain of the intention to act) that may have preceded the veto; thus, there were no recorded readiness potentials with a vetoed intention to act. We were, however, able to show that subjects could veto an act planned for performance at a pre-arranged time. They were able to exert the veto within the interval of 100 to 200 milliseconds before the pre-set time to act. A large readiness potential preceded the veto, signifying that the subject was indeed preparing to act, even though the action was aborted by the subject...
“The role of conscious free will would be, then, not to initiate a voluntary act, but rather to control whether the act takes place. We may view the unconscious initiatives for voluntary actions as 'bubbling up' in the brain. The conscious-will then selects which of these initiatives may go forward to an action or which ones to veto and abort, with no act appearing.
“My conclusion about free will, one genuinely free in the non-determined sense, is then that its existence is at least as good, if not a better, scientific option than is its denial by determinist theory.”
I think Libet's conclusion is essentially correct. We freely choose whether or not to act on the impulses that “bubble” up in our unconscious mind. We have veto power.
Some commenters on Libet’s work, who use the experiments as evidence of free will, have, in my opinion, made the mistake of concluding that we have no control over the thoughts and impulses that “bubble” up in our unconscious. As I show in the previous section on Continuity of Thought, it seems quite obvious that we have executive level control of our minds in that we can specify the topic of our thoughts but not precisely how or what is presented.
In order to reason and analyze things, it could not be any other way. Think of how unworkable life would be if we could not instruct our minds to focus on this or that topic. As I write this segment, my mind is bringing forth useful insights related to free will and the Libet experiments. Barring an interruption or an occasional glitch, the stream of related thoughts is continuously providing me insights that I select from. It is a marvelous quality of mind and an essential one for the human experience.
One final note on the Libet experiments: it strikes me as odd to discuss how the experiments prove or disprove free will without noting that, in order for the experiments to be carried out at all, a participant has to decide whether or not to go along with the instructions, not to mention show up at the clinic to begin with.
13.4 Non-Local Consciousness
In this section of the paper we are looking at materialist objections to the idea of substance dualism—an immaterial mind.
For those who do not accept the current materialist view that the mind is a derivative of the brain, there are alternative views in which an immaterial mind interacts with the brain in normal life. One such alternative view is referred to as “non-local consciousness.” In the non-local consciousness view, consciousness is not spatially localized in the brain. The brain is envisioned as a filtering mechanism, a radio receiver or tuner, through which non-local consciousness ordinarily interacts with the body.
Non-local consciousness would be considered a form of monistic idealism, I suppose, which holds that consciousness, not matter, is the fundamental substance of all being. It is monist because it holds that there is only one fundamental thing in the universe, and idealist because it holds that that one fundamental thing is consciousness.
There are three lines of evidence for non-local consciousness: 1) The fact of the unresolved “hard problem” of consciousness. 2) The apparent non-necessity of the brain at least in some cases during a near death experience. 3) The “encounter” between human consciousness and physics, specifically related to quantum physics.
We have already discussed the “hard problem of consciousness,” which I have indicated is by no means the only hard problem of consciousness. We will discuss whether the evidence from near death experiences shows that the brain is unnecessary at least on some occasions. Here I wanted to talk briefly about “consciousness’ encounter with physics,” as physicists Bruce Rosenblum and Fred Kuttner put it in their book, The Quantum Enigma.
13.4.1 Consciousness and Quantum Physics
There is a great deal of debate, and I might add very contentious and confusing debate, about just how embedded consciousness might be in physics, specifically quantum physics.
Minimally it seems that the manner in which a person chooses to run an experiment can determine whether a set of particles behaves as a wave or a particle.
Some of the contentious points concern what effect observation has on the behavior of microparticles and what constitutes an “observation.”
13.4.1.1 Double Slit Experiment
According to theoretical physicist Richard Feynman, all of the peculiarities of quantum mechanics are exhibited in the so-called “double slit” experiment. But there is another way of running the double slit test that is easier to understand, called the “two box” experiment. The following is my own description of the experiment, but I encourage you to view the video at the link below.
The main stumbling block most people encounter in understanding quantum mechanics is the concept of a “probability wave.” The probability wave can perhaps best be thought of as an unrealized potential that materializes in a random, i.e. probabilistic, way. The probability wave is not really a physical wave; it is an abstraction that represents a known probability distribution of the way a particle will materialize once observed.
Imagine you have two evacuated boxes (no other matter within them) that are capable of capturing a small particle such as an atom, an electron, or a photon, such that once you capture the particle it remains in the box until you allow it to escape. There are ways, using mirrors (or other techniques depending on the type of particle), to fire single particles toward these boxes such that each has an equal chance of going into either box.
So let’s say that you did that. You fired millions of particles one by one through the test apparatus and checked where each ended up. After firing each particle, you opened a small slit in both boxes at the same time, directing the particle toward a piece of film so that you could see where it ended up as it left the box, i.e. you could detect it by observation. Intuitively one would expect to see particles arranged in two places, like the distribution of dots on a rifle target, along the two straight-line trajectories from the hole or slit in each box to the film.
But that is not in fact what happens. What happens is what is called an interference pattern, i.e. a series of alternating groupings of dots where the particles hit the film. (This will be clearer if you have viewed the videos.) All physicists will tell you that the occurrence of an interference pattern means that the particles left the boxes as though they were a pair of waves, with one wave emanating from each box through its slit. These waves interact in much the same way two water waves would interact in a pond. This is not what was expected. This is evidence of the probability wave I described above. The only explanation in this case, and this is pretty much universally agreed upon, is that there must have been something of each particle in both boxes at the same time. This is called a superposition state. In this case the “particle” acted like a wave until it was detected (“observed”) by the detection film.
Now, let’s say you conduct the same experiment, but instead of opening up (with a hole or slit) both boxes at the same time after each particle is fired into them, you open them one at a time. Now what would you expect? Had you not seen the results of the prior experiment, there would be no reason to suspect that the results would differ between the two experiments. But they do.
If you open up one box at a time, you do not get the interference pattern. You get two collections or groupings of dots indicating that the particles left the boxes as a particle and went in a straight line to the detection film. But the particle will have come from one box or the other, not both.
Here is an important point: once you look in one box, i.e. once you open it, the particle is either in that first box or it is not. If it is in that first box, it proceeds in a straight line toward the detection film where it can be observed. If it is not in that first box, nothing happens. But having opened the first box and found the particle there, you will not find any particle in the second box. Conversely, if you open the first box and do not find the particle there, then when you open the second box the particle will always be there. So in this second experiment, opening one box at a time, the act of looking meant that the particle materialized in one box or the other once you observed either of the two boxes. It did not leave the box as a probability wave.
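The two predicted detection patterns can be sketched numerically. The following is my own toy model, not a calculation from the paper: all numbers (wavelength, box separation, film distance) are illustrative, and unit-amplitude waves are assumed. When the waves from the two boxes are added before squaring (both boxes opened together), the pattern shows dark fringes where the probability drops to zero; when the probabilities are added separately (boxes opened one at a time), no fringes appear.

```python
import cmath
import math

wavelength = 1.0   # illustrative units
d = 5.0            # separation between the two boxes
L = 100.0          # distance from the boxes to the detection film

def amplitude(x: float, source_y: float) -> complex:
    """Unit-amplitude wave from a source at height source_y to film point x."""
    r = math.hypot(L, x - source_y)
    return cmath.exp(1j * 2 * math.pi * r / wavelength)

def both_open(x: float) -> float:
    """Boxes opened together: amplitudes add FIRST, then square (wave behavior)."""
    return abs(amplitude(x, +d / 2) + amplitude(x, -d / 2)) ** 2

def one_at_a_time(x: float) -> float:
    """Boxes opened separately: probabilities add (particle behavior)."""
    return abs(amplitude(x, +d / 2)) ** 2 + abs(amplitude(x, -d / 2)) ** 2

xs = [i * 0.05 for i in range(-400, 401)]
print("darkest point, both open:    ", round(min(both_open(x) for x in xs), 3))
print("darkest point, one at a time:", round(min(one_at_a_time(x) for x in xs), 3))
# Both-open dips to ~0 at the dark fringes; one-at-a-time never does.
```

The only difference between the two functions is whether the addition happens before or after the squaring, which is exactly the difference between the interference pattern and the two plain groupings of dots described above.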
13.4.1.2 Quantum Enigma – The Observer Effect
Here is how Bruce Rosenblum and Fred Kuttner summarize it in their book, The Quantum Enigma: Physics Encounters Consciousness:
“Quantum physics does not tell the probability of where an object is, but rather the probability that, if you look, you will observe the object at a particular place. The object has no “actual position” before that position is observed. In quantum mechanics the position of an object is not independent of its observation at that position. The observed cannot be separated from the observer.
We experience an enigma because we believe that we could have done other than what we actually did. A denial of this freedom of choice requires our behavior to be programmed to correlate with the world external to our bodies. The quantum enigma arises from our conscious perception of free will. This mystery connecting consciousness with the physical world displays physics’ encounter with consciousness.”
So, depending on how you chose to conduct the experiment: 1) opening both boxes at the same time, or 2) opening them one at a time, the “particles” acted either as (a) probability waves or (b) distinct materialized particles. The effect of the observer’s choice of how to carry out the experiment on microparticles is what Kuttner and Rosenblum call the “Quantum Enigma.”
The Quantum Enigma is just one of the bizarre implications of quantum mechanics and the double slit experiment. Entanglement, which Einstein rejected as “spooky action at a distance” but which has nonetheless been verified, the controversial idea that the past can be affected by the present, and the probabilistic nature of reality are all aspects of quantum mechanics, but they are beyond the scope of this paper and certainly of this section.
The point of all this discussion of the “quantum enigma” is that, if it can be shown that in some way human choice or observation can affect the actions of particles—matter—then one could make the case that consciousness is a viable candidate for the fundamental thing in the universe. The hierarchy of materialism would then be reversed, as shown below.
Standard Monistic Materialist Structure
Alternative Structure “Consciousness-Fundamental”
One note on the depictions above: if vitalism is true, in the form I have surmised it could take, then Biology would move down to the same plane as Psychology.
13.4.2 Radio Receiver Model
One way to think about non-local consciousness is that the brain tunes us into a specific frequency, just as a radio receiver would. When we think about something specific, we are in effect continually narrowing our brain’s filter, excluding the thoughts we are uninterested in at the moment because we are intent on focusing on the topics we are interested in. Pim Van Lommel is one proponent of non-local consciousness:
There are many proposals as to what the tuning mechanism might be. One’s DNA has been proposed as the tuning mechanism, for example. But it is our free will that tunes the tuning mechanism, and we do this through our thoughts. These willful acts of tuning enable us to control our thoughts, which is what we all subjectively feel is the case. One’s own introspection can gauge the validity of this theory, though it seems perfectly plausible to me at first blush; certainly it is far more plausible than the materialist accounts.
Yes, thoughts seem to just mysteriously arise in minds, but as I mentioned in the falsification of materialism on continuity of thought, these thoughts arise consistently with what we are interested in, in effect with what we have tuned to. And this, far from being evidence of a materialist brain as Sam Harris suggests in his book, is actually powerful evidence for free will, because the thoughts that arise are related to one another in sequence.
One final note: the non-local theory of consciousness would, of course, explain why the mind is affected by the brain. If you damage a radio receiver, of course you will degrade the signal received.
The discussions in this section are defensive measures on my part in that they make the counter case against materialist objections to substance dualism by denying these objections. But what about a positive approach? Are there any lines of evidence suggesting that there is an immaterial mind, irrespective of any problem associated with the materialist idea that the brain is all there is? These will be the subject of the final sections of the paper.
In the previous two sections of this paper we concluded that dreams, thought streams, and the constancy and resumption of self, constituted strong evidence for falsifying materialism based on the amounts of creative complex specified information they generate. In the immediately preceding section, we discussed the viability of the idea of an immaterial mind (substance dualism) despite the common objections of materialists. We found that the objections carry little weight and are primarily based on a materialist presupposition.
Mystical experiences and various hallucinations offer another dimension to the information-based argument against materialism presented in the previous sections of the paper. This added dimension pertains to the creative element they exhibit. Dreams and thought streams, as I pointed out, are creative as well, but their content or aboutness remains grounded in earthly subjects and topics. Near death experiences and some hallucinations breach that general rule that the mind seems to abide by, in that they provide rich, ineffable visual and auditory content.
These mystical experiences have another quality that evidences design and precludes materialistic explanations. They produce complex information that is specified, and specified in often precise, repeatable ways. It is one thing to produce a set of images in the imagination but quite another when the collections of these images, across the many individuals who have experienced them, reveal themselves to be so similar.
Most of the discussion in this section pertains to near death experiences and other related end of life experiences. I briefly touched on near death experiences in one of my falsification of materialism entitled, “Constancy and Resumption of Self.” I noted that even if all three of the examples of near death experiences I cited were outright fabrications, the logic of the falsification was unaffected. In that case, I just needed to show that the mind of a person whose brain was severely affected could still experience a resumption of self and all that that entails.
In this section I am going to assume that the subjective accounts of at least some of the persons purporting to have mystical experiences, especially near death experiences, but also various types of hallucinations, are true. When I say “true” I mean only that these people are accurately conveying what they experienced, in other words, they are not just making these stories up—they really believe these things happened to them.
Now of course the current scientific community and medical profession assume that all these near death and end-of-life phenomena are hallucinations of the physical brain. In this section, I will discuss why these phenomena cannot be hallucinations of the physical brain, while acknowledging that they could be hallucinations of an immaterial mind.
There will be a bit of seemingly incidental information presented as well that does not play into the main point of the section but helps establish some background for the concluding sections.
14.1 Near Death Experiences
There are many near death experience accounts going back in history to Plato’s recounting of the story of the soldier Er, thought to be killed in battle. Er revived on the funeral pyre, claiming that he had seen the world beyond. In more recent times, the psychologist Carl Jung had a near death experience. Near death experiences are not a new phenomenon.
Here is an interview with a world-renowned expert on near death experiences, Pim Van Lommel, a retired Dutch cardiologist and author of Consciousness Beyond Life: The Science of Near Death Experiences. The interview was conducted by Ian McNay of Conscious TV. This video interview is a good introduction to near death experiences.
According to oncologist and near death experience researcher Jeffrey Long, in his book Evidence of the Afterlife: The Science of Near-Death Experiences:
“There are a variety of estimates as to how common near death experiences are. Kenneth Ring and Michael Sabom estimate that about 30 percent of those individuals who come close to death report an NDE. An English study of 63 survivors of cardiac arrest found that only about 10 percent experienced an NDE. A Dutch study of 344 patients found that although 12 percent reported a deep NDE, after adjusting for multiple crisis events (several of the patients had experienced more than one cardiac arrest), the researchers estimated the true frequency of the reported experience to be about 5 percent.”
Near death experience accounts run the gamut from the sublime to ridiculous. The link below is to the Near Death Experience Research Foundation list of exceptional near death experiences.
Further below I have some links to video interviews and documentaries on some of the more spectacular near death experiences.
If you have looked at these accounts, you will no doubt marvel at the variety of experiences. I think Stony Brook neurosurgeon Dr. Michael Egnor expressed it best when he said:
“NDE's are varied and complex things, and my take on them is that they most likely represent a spectrum of experiences -- fraud, delusion, dreaming, drug effects and a real core of actual experiences of the afterlife. In a way, they're like religion, broadly understood. Lots of chaff around a core of wheat.”
A typical near death experience could involve an out-of-body experience, a trip through a dark tunnel with a white light at the end, a meeting with deceased relatives, an encounter with a divine personage, a view of cities and of wonderful, never-before-seen colors and music, a life review, and a trip back to the earthly body. There is an ineffable quality throughout much of the experience. One of the common comments is that time is very different there; experiencers describe it as there being no real past, present, and future in the way we understand them. Often near death experiencers say that life here on earth is an illusion. Let’s briefly look at these components.
14.1.1 Out-of-Body Experiences
14.1.1 Out-of-Body Experiences
There have been many people who claim to have had out-of-body experiences, perhaps many thousands if we are to believe the surveys. Often an out-of-body experience comes as the initial event of a near death experience. I will have much more to say about out-of-body experiences in the next section of the paper.
14.1.2 Tunnel and Light, Deceased Relatives, Divine Beings
Often during a near death experience there is a sensation of flying or floating through a tunnel toward a light at its end. At the end of the tunnel, emerging from the light, there is often a meeting with deceased relatives—seldom if ever living ones. In most cases, the person encounters someone close to them who has passed on, and the deceased takes on a different form. Chris Carter, in his book Science and the Near-Death Experience: How Consciousness Survives Death, states that,
“The deceased are often described as looking better than remembered, and the presence is sometimes described as a being of light. Communication with the deceased or the presence is usually described as telepathic.”
14.1.3 Life Review
The life review is also very intriguing. Life review accounts describe a panoramic visual movie shown very quickly, offered in the presence of some divine person who comments on each event. The ultra-real imagery of the life review is often said to be from a third-person perspective, so it cannot be a replaying of memory. It must be spontaneously generated imagery that represents genuine experiences. A study by Jeffrey Long showed that persons having a life review indicated that all the images of their life replayed during the review were genuine. I will have more to say about life reviews in a later section of the paper.
14.1.4 A Barrier
Often the experiencer is advised of a barrier through which they are not allowed to pass. Often this occurs in the presence of one of these divine beings.
14.1.5 Ineffable Content
The visual and audible content of a near death experience is said to be unearthly and ultra-real, the most real thing these persons have ever experienced. The places these experiencers visit are analogous to earthly things, but their composition is different: buildings, grass, trees, and beings are translucent, as though made of light. Here are the comments of near death experiencers as conveyed in J. Steve Miller’s book Near-Death Experiences as Evidence for the Existence of God and Heaven: A Brief Introduction in Plain Language:
“I was there. I was on the other side. It’s simply too much for human words. Our words, which are so limited, can’t describe it. It was real – as real as me sitting across from you and talking to you now. Nothing could ever convince me otherwise. I didn’t have to think, I knew everything. I passed through everything. At once I realized: there’s no time or space here.
“I saw the most dazzling colors, which was all the more surprising because I’m color-blind.
“All the pain vanished and I began to experience the most wonderful feelings. I couldn’t feel a thing in the world except peace, comfort, ease. I felt that all my troubles were gone. I’ve never felt so relaxed. I’ve never felt this happy before. It was so emotional that I can’t possibly describe it. I was overcome with a feeling of peace that I’d never known on earth…. An overwhelming feeling of love came over me, not the earthly feeling I was quite familiar with, but something I can’t describe.
“What I saw was too beautiful for words. I was looking at a magnificent landscape full of flowers and plants that I couldn’t actually name. It all looked hundreds of miles away. And yet I could see everything in detail. It was both far away and close. It was completely three-dimensional and about a thousand times more beautiful than my favorite holiday destination in spring. I was always surrounded by loving spiritual beings of light.
“I saw a bright light, and on my way there I heard beautiful music and I saw colors I’d never seen before. The light… was of a kind that I’d never seen before and that differs from any other kind such as sunlight. It was white and extremely bright, and yet you could easily look at it. It’s the pinnacle of everything there is. Of energy, of love especially, of warmth, of beauty. I was immersed in a feeling of total love. …from the moment the light spoke to me, I felt really good – secure and loved. The love which came from it is just unimaginable, indescribable. It was a fun person to be with! And it had a sense of humor, too – definitely! I never wanted to leave the presence of this being.”
14.1.6 Time Is Different
Many near death accounts describe time as being different. In his book, Proof of Heaven, neurosurgeon Eben Alexander describes this:
“Because I experienced the nonlinear nature of time in the spiritual world so intensely, I can now understand why so much writing on the spiritual dimension can seem distorted or simply nonsensical from our earthly perspective. In the worlds above this one, time simply doesn’t behave as it does here. It’s not necessarily one-thing-after-another in those worlds. A moment can seem like a lifetime, and one or several lifetimes can seem like a moment.
But though time doesn’t behave ordinarily (in our terms) in the worlds beyond, that doesn’t mean it’s jumbled, and my own recollections from my time in coma were anything but.”
14.1.7 Changed Lives
The evidence is overwhelming that those who have a near death experience view it as a life-changing event. And the evidence shows that changes do occur and that they persist: people become more compassionate and less concerned with material things.
14.1.8 Notable Near Death Experience Cases
The best way to understand the near death experience is to listen to those who claim to have experienced one. The following accounts are among the most spectacular and well-known—Michael Egnor’s “core of wheat.”
14.1.8.1 Vicki Noratuk
One of the striking things about these near death experiences is that blind people can see—even persons blind from birth. Interestingly, those who become blind after birth do dream with visual imagery, but those blind from birth do not. One of the most exceptional near death experience cases involving a woman blind from birth is that of Vicki Noratuk. The first link is an account of her case; it is a bit of a dramatization. The second link is an interview with her.
14.1.8.2 Pam Reynolds
The most studied and best-known near death case is that of Pam Reynolds. Her physical condition was closely monitored as she underwent a very risky operation to remove an aneurysm deep in her brain. In order to repair the aneurysm, the doctors had to chill her body to 60 degrees and drain all the blood from her brain for about an hour. This procedure is called “standstill.” For an hour there would be no molecular activity at all in her brain.
Pam did live through the operation and recounted what is the most spectacular near death experience ever. She had two out-of-body experiences: one at the beginning of her near death experience and a second as it ended. Importantly, Pam claimed that her near death experience was continuous from the time of her first out-of-body experience (while anesthetized but not yet in standstill, watching the doctors open her skull) to the second out-of-body experience just prior to her resuscitation. This would mean that the near death experience occurred while there was no brain activity whatsoever, the doctors having drained all the blood from her head. The continuous nature of the experience cannot be verified, however. Pam’s near death experience occurred in 1991; she has since passed away.
The following links describe Pam Reynolds’s remarkable near death experience.
14.1.8.3 Mary Neal
Mary Neal is an orthopedic spine surgeon who had a near death experience while kayaking in Chile. The following links recount her story.
14.1.8.4 Eben Alexander
Eben Alexander is a Duke-trained neurosurgeon who contracted E. coli bacterial meningitis and lapsed into a coma, during which he had a prolonged near death experience. The following is an interview with Dr. Alexander:
14.2 Neuroscientists’ Views on Near Death Experiences
The claim of mainstream neuroscience is that near death experiences are hallucinations of the physical brain brought on by oxygen deprivation or some other physiological phenomenon. But the rapidity with which consciousness is extinguished, judging from known neural correlates of consciousness, provides very little time, perhaps only a few seconds, for these rich, ultra-real, ineffable near death experiences to occur. There is evidence of a brief flurry of brain activity just as a person is losing consciousness, and some scientists have taken that as evidence that the brain is constructing these near death hallucinations. I will say more about the viability of neuroscience explanations in the next section of the paper.
14.2.1 Psychological Factors
Some skeptics claim that a psychological component plays a role in these near death experiences alongside a physiological one. Skeptic Susan Blackmore, for example, believes that near death experiences are hallucinations induced by some sort of wishful need in a time of great distress, a kind of defense mechanism. However, in order to posit this type of psychological factor, one would need to posit an emergent physical mind (property dualism) to account for top-down causation, which would be required to explain near death experiences as the result of a wishful, defensive mechanism. But we have already looked at the idea of the emergent mind and found it extraordinarily problematic.
14.2.2 N,N-Dimethyltryptamine (DMT)
Perhaps the most serious alternative contender for inducing near death experiences (alternative to the notion that they are genuine) is that they are produced by the same means as hallucinogens such as ketamine (“special K”) and especially N,N-dimethyltryptamine (DMT). DMT’s effects have been described by Dr. Rick Strassman, who conducted experiments in the 1990s and recorded them in his book, DMT – The Spirit Molecule. Strassman describes DMT as “the most powerful hallucinogen known to man and science.” According to psychedelics expert Terence McKenna, “You cannot imagine a stranger drug or a stranger experience than DMT.”
DMT is a naturally and commonly occurring molecule emitted by the pineal gland, a gland deep in the center of the brain. Descartes proposed that the pineal gland was somehow the “seat of the soul.” The pineal gland is sometimes referred to as “the third eye,” as in reptiles it has a lens, a cornea, and a retina.
There are many similarities between DMT experiences and near death experiences. DMT does produce some euphoric and, some would say, “spiritual” experiences, such as the sensing of a presence, the feeling that the universe is a loving and caring place, and the loss of linear time. And some of the imagery in a DMT trip is peculiar and ineffable, particularly the visual imagery and the non-sequential nature of time.
Some claim that all, or nearly all, the phenomena reported during near death experiences have been duplicated by DMT. Having read many accounts of near death experiences and DMT trips and having watched many interviews, I don’t believe that is the case at all. There is nothing like a life review, no visit with deceased relatives, no barrier, no return to the body, no logical conclusion to the experience as there is in a near death experience.
Furthermore, some of the DMT experiences are quite bizarre, involving aliens performing various medical procedures. Consider these statements from Dr. Strassman relating the accounts of some participants in the study who claim to have encountered aliens, a somewhat common experience:
“I was in a void of darkness. Suddenly, beings appeared. They were cloaked, like silhouettes. They were glad to see me. They indicated that they had had contact with me as an individual before. They seemed pleased that we had discovered this technology. I felt like a spiritual seeker who had gotten too far off course and, instead of encountering the spirit world, overshot my destination and ended up on another planet. They wanted to learn more about our physical bodies. They told me humans exist on many levels. … Somehow we had something in common. They told me to “embrace peace.”
“I went directly into deep space. The aliens knew I was coming back and they were ready for me. They told me there were many things they could share with us when we learn how to make more extended contact. Again, they wanted something from me, not just physical information. They were interested in emotions and feelings. I told them, “We have something we can give you: spirituality.” I guess what I really meant was Love. I tried to figure out how to do this. I felt a tremendous energy, brilliant pink light with white edges, building on my left side. I knew it was spiritual energy and Love.”
Here are a few links of persons recounting their DMT trips. You can compare these accounts with the interviews of near death experiencers provided above.
A neuroscientist recounts her spiritual DMT experience. This first one is very good and I highly recommend watching it:
A scientist working on a documentary, “The Spirit Molecule,” recounts his experience with ayahuasca—a drinkable version of DMT used by indigenous peoples of South America.
Another good DMT trip account involving aliens:
https://www.youtube.com/watch?v=GmvTbsc04xg
Terence McKenna, expert on psychedelics:
14.2.3 Demon Haunted World – Alien Abduction Experiences
Interestingly, Rick Strassman believes that the alien abduction phenomenon and DMT trips are too close in experiential content to be coincidental. An obvious conclusion is that alien abduction experiences result from a spontaneous release of DMT from the pineal gland. DMT is thought to play a role in dreams as well. Here is how Dr. Strassman describes alien abductions and his concluding thoughts about them in relation to DMT trips:
“The resemblance of the alien abductions of ‘experiencers’ to the contacts described by our own DMT volunteers is undeniable. How can anyone doubt, after reading our accounts … that DMT elicits ‘typical’ alien encounters? If presented with a record of several of our research subjects’ accounts, with all references to DMT removed, could anyone distinguish our reports from those of a group of abductees?”
In The Demon-Haunted World, Carl Sagan, noting that the alien abduction accounts are remarkably similar to stories of demon abduction common throughout history, comments:
"There is no spaceship in these stories. But most of the central elements of the alien abduction account[s] are present, including sexually obsessive non-humans who live in the sky, walk through walls, communicate telepathically, and perform breeding experiments on the human species. Unless we believe that demons really exist, how can we understand so strange a belief system, embraced by the whole Western world (including those considered the wisest among us), reinforced by personal experience in every generation, and taught by Church and State? Is there any real alternative besides a shared delusion based on common brain wiring and chemistry?"
Although Sagan points to an interesting mystery, I think his comment attributing alien abductions solely to brain wiring and chemistry—really what he means is neural patterns of firing—is quite foolish. It is foolish for the same reason believing that dreams could be produced by neural firing patterns is foolish.
But believing that alien abductions are hallucinations of the physical brain is even more foolish than believing that dreams could be produced solely by the physical brain, because the complex specified information flowing into the mind of a person experiencing an alien abduction is clearly creative. This creative and unearthly content is that added dimension of the information argument I mentioned in the introduction to this section. Alien abductions could not possibly be derived from memory and because of that and because of the massive created complex specified information they produce, could not possibly be spontaneously generated by the brain or any physical system.
The late John E. Mack, a former professor at Harvard Medical School, was the leading expert on the alien abduction phenomenon. He describes many accounts in his book Passport to the Cosmos. The following comments summarize the alien abduction experience:
“The apparent expansion of psychic or intuitive abilities, a heightened reverence for nature with the feeling of having a life-preserving mission, the collapse of space/time perception, a sense of entering other dimensions of reality or universes, the conviction of possessing a dual human/alien identity, a feeling of connection with all of creation, and related transpersonal experiences—all are such frequent features of the abduction phenomenon that I have come to feel that they are, at least potentially, basic elements of the process.
“One of the properties of this other reality, or realities, is the different experience of time, space, and dimensionality within them. One of the first things Andrea observed when she began to examine her experiences was that ‘I went through a tunnel and I lost all conception of time.’ She stressed what a shock it was to experience that time was collapsing. She has learned to think beyond linear time to feel that the past, present, and future are one.”
“As the event [alien abduction] begins, consciousness is disturbed by a bright light, humming sounds, strange bodily vibrations or paralysis . . . or the appearance of one or more humanoid or even human-appearing strange beings in their environment. The sense of high-frequency vibrations many abductees report may cause them to feel as if they are coming apart at the molecular level. Some find themselves in familiar environments, like ‘a park with swings,’ and figures ‘emerge’ out of the background. Abductees also often find themselves on some type of examining or treatment table. Experiencers are absolutely under the aliens’ control. Despite the obviously unexpected and bizarre nature of what they are undergoing, there is no doubt in their minds that it really is happening. Thus, they describe their experiences as ‘more real than real.’ The individual may ‘float’ or otherwise make their way ‘into a curved enclosure that appears to contain computer-like and other technical equipment.’ Once the person arrives, ‘[s]trange beings are seen busily moving around doing tasks the experiencers do not really understand.’ Abductees commonly report seeing energy-filled tunnels and cylinders of light in these environments. The ‘typical’ alien looks like the ones portrayed commonly in the media: large head, skinny body, big eyes, small or no mouth, gray skin.
“Abductees report ‘that the beings appear to be greatly interested in our physicality and emotionality, seeming, as is said of angels, to envy our embodiment . . . they need something that only human love can provide.’ This may even take the form of alien-human sexual encounters. These experiences ‘can range from cold and bodiless to ecstatic, beyond what is known to them in earthly love.’ ‘The experience of connection between one or more of the alien beings and the abductees with whom they relate is a powerful and consistent aspect of the experience. . . . Experiencers often report that the aliens are urgently notifying them that Earth is in danger. Their abduction relates to this, inasmuch as they either provide reproductive material for the hybrid project or decide to spread the message of environmental degradation to a wider audience.
This is the transformational and spiritual nature of the encounter: ‘[t]he collapse of space/time perception, a sense of entering other dimensions of reality or universes . . . a feeling of connection with all of creation.’ Abductees’ sense of belonging in that realm may be so acute as to create a yearning for it—a desire ‘not to come back.’ Many abductees no longer feared death, knowing that their consciousness would survive the body’s death.”
[Emphasis mine throughout]
For the overall purposes of this paper, in trying to falsify materialism based on the spontaneous flow of creative complex specified information in our conscious experience, it doesn’t much matter whether near death experiences, DMT trips, or even alien abduction accounts are real. They all offer additional powerful evidence against materialism in and of themselves, because the experience is perceived to be real (“more real than real”), and as such they exhibit massive amounts of creative complex specified information. The physical brain, or any physical system, cannot possibly spontaneously generate vast amounts of creative, ineffable, complex specified information; in fact, it cannot generate any complex specified information at all.
14.3 Deathbed Visions
According to some studies, about 40 percent of dying patients report some kind of deathbed vision. There are a few phenomena related to deathbed visions. The term deathbed vision can describe experiences of those attending the death of a loved one: in some cases they perceive a mist leaving the body, sometimes in the form of a being. Other deathbed-type “visions” involve a feeling, such as an intense sense of love and comfort that things will be okay, and this may occur even in those not physically present.
But the term deathbed vision more often refers to an event where the dying person appears to see something; their expressions and comments suggest a divine vision of some sort. For example, according to his wife, the famed tech entrepreneur Steve Jobs, just before he passed away, looked at his wife and his kids, then appeared to look past them and uttered the words, “Oh Wow, Oh Wow, Oh Wow.”
Thomas Edison is purported to have said, “It’s very beautiful over there” just before taking his last breath.
The wife of the late film critic, Roger Ebert recounted this remarkable story during the film critic’s final days:
“On April 4, [Roger] was strong enough again for me to take him back home. My daughter and I went to pick him up. … We just sat there on the bed together, and I whispered in his ear. I didn’t want to leave him. I sat there with him for hours, just holding his hand. Roger looked beautiful. He looked really beautiful. I don’t know how to describe it, but he looked peaceful, and he looked young. The one thing people might be surprised about—Roger said that he didn’t know if he could believe in God. He had his doubts. But toward the end, something really interesting happened. That week before Roger passed away, I would see him and he would talk about having visited this other place. I thought he was hallucinating. I thought they were giving him too much medication. But the day before he passed away, he wrote me a note: ‘This is all an elaborate hoax.’ I asked him, ‘What’s a hoax?’ And he was talking about this world, this place. He said it was all an illusion. I thought he was just confused. But he was not confused. He wasn’t visiting heaven, not the way we think of heaven. He described it as a vastness that you can’t even imagine. It was a place where the past, present, and future were happening all at once. It’s hard to put it into words.”
Another phenomenon related to deathbed visions is “terminal lucidity.” Terminal lucidity occurs when a terminal patient who has lapsed into a coma or suffered a mental decline, even a prolonged one, suddenly becomes lucid and communicative just prior to death. Terminal lucidity is one of the more common end-of-life experiences.
Here is a case reported by skeptic Jesse Bering in Scientific American:
“When my mother died in early 2000, we had a final farewell that some researchers might consider paranormal. At the time, it did strike me as remarkable—and after all these years, I still can’t talk about it without getting emotional. The night before she died at the age of 54 (after a long battle with ovarian cancer), I was sleeping in my mother’s bedroom alongside her. The truth was that I’d already grieved her loss a few days earlier, from the moment she lapsed into what the Hospice nurses had assured us was an irretrievable coma. So at this point, waiting for her body to expire as a physical machine wasn’t as difficult as the loss of “her” beforehand, which is when I’d completely broken down. It had all happened so quickly and, I suppose being young and in denial about how imminent her death really was, I hadn’t actually gotten around to telling her how very grateful I was to have had her as my mom and how much I loved her. But then, around 3am, I awoke to find her reaching her hand out to me, and she seemed very much aware. She was too weak to talk but her eyes communicated all. We spent about five minutes holding hands: me sobbing, kissing her cheeks, telling her everything I’d meant to say before but hadn't. Soon she closed her eyes again, this time for good. She died the next day.”
Peter Fenwick, a neuropsychiatrist and world-renowned expert on the mystical experiences of death and dying, was interviewed on the web show Conscious TV. It is a fascinating interview and gives great insight into the phenomena that often occur in the dying process.
14.4 After Death Communication
Some surveys show that roughly half the people mourning the recent passing of a loved one claim to have had some kind of contact with the deceased in the months following the death. The contact is not always visual, or even sensory; there are a variety of phenomena. Some involve visual images; others are exclusively auditory, telepathic, or emotional, i.e., a sense of warmth and love. When you talk to people who have had these experiences, as I have, you find they are certain that what they experienced was not a dream or hallucination.
14.5 Induced After Death Communication
Typically, after death communication experiences occur spontaneously. But after death communication experiences can be induced as well. Michael Crichton in his book, Travels, recounts his experience in the “astral plane” and this account is typical of some of the after death communication accounts:
“The idea of astral travel didn’t seem too alarming, and I tried it with Gary. It is, after all, just another kind of guided meditation in an altered state. … I saw my grandmother, who had died while I was in medical school. She waved to me, and I waved back. ‘Do you see anybody else?’ Gary said. Then: ‘Yes. My father.’ … I hadn’t had an easy time with my father. Now he was showing up while I was vulnerable, in an altered state of consciousness. I wondered what he would do, what would happen. He approached me. My father looked the same, only translucent and misty, like everything else in this place. I didn’t want to have a long conversation with him. I was quite nervous. Suddenly he embraced me. In the instant of that embrace, I saw and felt everything in my relationship with my father, all the feelings he had had and why he had found me difficult, all the feelings I had had and why I had misunderstood him, all the love that was there between us, and all the confusion and misunderstanding that had overpowered it. I saw all the things he had done for me and all the ways he had helped me. I saw every aspect of our relationship at once, the way you can take in at a glance something small you hold in your hand. It was an instant of compassionate acceptance and love… this incredibly powerful experience had already happened, complete and total, in a fraction of a second. By the time Gary had asked me, by the time I burst into tears, it was finished. My father had gone. We never said a word. There was no need to say anything. I couldn’t really explain it to Gary—I couldn’t really explain it to anybody—but part of my astonishment at the experience was at the speed with which it had occurred. …In less time than I took to open my mouth to speak, something extraordinary and profound had happened to me. And I knew it would last. My relationship with my father had been resolved in a flash. 
There hadn’t even been time to cry, and now that it was over, crying seemed after-the-fact.”
After death communications can also be induced in clinical settings by a procedural variant of Eye Movement Desensitization and Reprocessing (EMDR). This was a recent discovery by Dr. Allan Botkin, who practices in Lincolnshire, IL. These cases are well documented in Botkin’s book, Induced After Death Communication: A Miraculous Therapy for Grief and Loss. Many of his patients were suffering from post-traumatic stress disorder from the Vietnam war. Here is Dr. Botkin’s account of his first successful induced after death communication, involving a veteran who had a tragic experience with a young Vietnamese woman:
“Just as they got all of the children onto the truck, shots rang out and bullets zipped past. Risking their lives, Sam and the other soldiers quickly pulled the children off the truck to the relative safety of the ground. The shooting stopped as quickly as it had started, and they began to put the children back onboard. Nearly all of the children were back on the truck when Sam realized he didn't see Le. He walked to the rear and saw her lying face down with a spot of blood on her back. Sam rolled her over and was horrified to see that her front torso was blown open from a bullet that had entered from behind. Sam sat on the ground, holding her lifeless body, and cried. Other soldiers eventually had to pull Sam away and take Le's body to bury her.
“For the remainder of his tour in Vietnam, he numbed the pain of his profound loss with anger and rage, volunteering for dangerous patrols to kill any enemy he could find or be killed himself. After Vietnam, he returned to the States and fathered a daughter, but then avoided her for years because she triggered anger, guilt, deep sadness over Le's death, and gruesome images of Le's dead body.
“For nearly twenty-eight years, Sam spent most of his days secluded in the basement of his home, separated physically and psychologically from his family. To help him open up and work through the grief that was dominating his life, I decided to use core-focused EMDR.
Sam sobbed quietly from the overwhelming pain of his grief.
“I asked him to focus on his sadness while I administered the first set of eye movements. As I expected, the sadness that had held him isolated in grief for twenty-eight years increased. I gave him more sets of eye movements and his sadness began to decrease. While tears ran down his face, I administered a final eye movement procedure and asked him to close his eyes. Neither of us was prepared for what happened next.
“The tears that had been flowing from his closed eyes suddenly stopped, and he smiled broadly. He giggled softly. When he opened his eyes, he was euphoric. ‘When I closed my eyes, I saw Le as a beautiful woman with long black hair in a white gown surrounded by a radiant light. She seemed genuinely happier and more content than anyone I have ever known.’ Sam's tear-reddened face glowed. ‘She thanked me for taking care of her before she died. I said, ‘I love you, Le,’ and she said ‘I love you too, Sam,’ and she put her arms around me and embraced me. Then she faded away.’
“Sam was ecstatic and absolutely convinced that he had just communicated with Le. ‘I could actually feel her arms around me,’ he proclaimed. As Sam's psychologist, I wasn't sure what to make of what he was telling me. I assumed that the agony of his grief had somehow produced a hallucination based on fantasy or wishful thinking. I had never witnessed or heard of such a response during psychotherapy.”
The induced after death communications appear quite similar to normal, spontaneous after death communications. And according to Dr. Botkin, they are similar in “quality and impact” to near death experiences:
“I've had the opportunity to talk to a few hundred people who have experienced NDEs. Most of these reports were provided by people who were normal, psychologically healthy individuals. The NDEs were often life changing for those who experienced them. One patient named Pete had both an NDE and an IADC experience. He described them as having the same quality and psychological impact.”
Now it is important to note that when Dr. Botkin is inducing these experiences, he is not doing so in a suggestive way as might be the case in hypnosis. In other words, he is not describing a particular visual setting or any other specific imagery. He is simply putting the patient’s mind into a state that appears to facilitate the person’s own experience.
The fact that these experiences can be induced might seem to argue that they are not genuine revelations but rather hallucinations of the mind. Again though, these after death communications cannot be hallucinations of the physical brain for the same reasons dreams and near death experiences cannot be. Skeptics who invoke brain hallucination engendered by wishful thinking as a causal explanation are mistaken.
Not to belabor the point, but there are two reasons these mystical experiences cannot be hallucinations of the physical brain: 1) “Wishful thinking” would be a form of top-down causation and therefore an attribute of an emergent mind, not of the physical brain, which is necessarily limited to bottom-up causation due to the reductive nature of material explanations. And as we have seen, an emergent material mind, as in property dualism, would be required to explain top-down causation, and such theories are fraught with profound problems. 2) The information problem, i.e. the spontaneous and instantaneous appearance of novel coherent information in the form of these visions, could not be the result of any high-level algorithm, for reasons I detail in the section on dreams. The physical brain, or the putative algorithms suggested by materialists for reflection and thought, could not produce a novel coherent multi-sensory phenomenon.
14.6 Summary – Mystical Experiences and Hallucinations
The primary focus of this paper is to show that there is design in nature. The method is to demonstrate that the complex specified information exhibited by nature, within living systems, and especially through the qualities of the mind in dreams and thought streams as well as these mystical experiences and hallucinations, defies any materialist explanation.
The near death experiences, end of life experiences, DMT trips and alien abductions provide an added dimension to the argument based on complex specified information because they produce what is clearly creative content that is ineffable and ultra-real: “more real than real.” Furthermore, they produce complex information that is specified in often precise, repeatable ways across many subjects. So, just as convergent evolution adds an extra dimension to the case for design in the evolution of life, so too does the pattern of repeated mystical experiences across subjects offer an added inference of design.
It is these added creative dimensions of these experiences—the creativity, the ineffability and the repeated conformance to patterns—that extinguish any doubt (if there was any) that the physical brain—even while positing an emergent physical mind—could possibly account for them. A purely physical system, such as the brain, no matter how well programmed one imagines it to be, cannot account for this level of creative complex specified information, especially given that this content is produced spontaneously. These experiences must, therefore, be produced by an immaterial mind.
The assumption all along in this section has been that the brain, while not a sufficient condition for consciousness and thought, is nevertheless a necessary condition. One of the questions we will ask in the next section is whether in some cases the brain is necessary at all.
In the previous section we have talked about mystical experiences such as near death experiences, deathbed visions, after death communications and even hallucinations involving DMT trips and alien abductions. These phenomena offer extensions of the cases made in previous sections related to dreams and thought streams. Mystical experiences and the types of hallucination experiences discussed are all powerful evidence against materialism by virtue of the ineffable, unearthly, ultra-real, creative complex specified information they exhibit.
Modern neuroscientists believe that consciousness and mind are reducible to the physical brain and therefore cease at the moment of death. Substance dualists and others, who believe in an immaterial mind, would say that the brain, while a necessary condition for consciousness and thought, is not a sufficient condition.
One of the questions asked in this section is whether in some cases the brain is necessary at all. Demonstrating that consciousness and thought can exist independent of the physical brain (and body) takes us one step closer to assessing the prospect of surviving death. In order to do this, we are going to take a closer look at some aspects of near death experiences and especially out-of-body experiences.
In this section we will also take a closer look at materialist explanations for near death experiences.
If you wish you can review some of the near death experience accounts I discussed in the previous section, as I have repeated the links here:
The following links provide accounts of Vicky Noratuk.
The following links describe Pam Reynolds remarkable near death experience.
Mary Neal, an orthopedic spine surgeon:
Neurosurgeon Eben Alexander contracted E. coli bacterial meningitis and lapsed into a coma:
A fundamental problem with near death experiences is determining precisely when the near death experience actually occurs. In other words, is the near death experience occurring just as, or just after, a person loses consciousness? Out-of-body experiences offer the hope of time anchors to show when the near death experiences are actually occurring. Time anchors are associations between the testimony of the person experiencing the out-of-body condition and the medical professionals attending to them. Time anchors offer a way of establishing the time of an out-of-body experience.
And that has in fact been done on many occasions and there are some very spectacular cases of corroboration. Often a person recounts having watched their own resuscitation and in many cases can describe what was going on while they were unconscious and their brains were ostensibly in a flat-lined state.
Some experts claim that these accounts demonstrate beyond any reasonable doubt that the physical brain—in specific cases—is not necessary for consciousness. The case of Pam Reynolds was one such case. There are many others. And my intent here is not to rehash any of them. You can listen to the interviews and links above and the many accounts in the references I provide. But always remember that they are second and third hand accounts.
15.1 Out-of-Body Experiences
For many, the out-of-body experience is the most interesting aspect of near death experiences because it offers the potential to show that the brain, in some cases, is entirely unnecessary. A single verified veridical out-of-body experience would disprove materialism and show that the physical brain is unnecessary to support consciousness in some cases. If we are to believe the surveys, thousands, perhaps many tens of thousands, of people claim to have had out-of-body experiences.
Often out-of-body experiences come as the initial event of a near death experience. But out-of-body experiences occur under other conditions as well. Fighter pilots, when subjected to excessive g-forces, can experience out-of-body conditions as well as other aspects that some say are similar to near death experiences. Sometimes the visions pilots have are described as “dreamlets.” But these do not appear to have the ineffable quality of near death experiences. And there is little or no consistent structure to these pilot dreamlets.
15.1.1 Dissociation – Out-of-Body
The psychological phenomenon of dissociation can involve an out-of-body experience. More typically, dissociation is benign, characterized by a sense of detachment often accompanied by memory gaps. But during extreme psychological trauma, when a person’s mind is faced with what appears to be an existential threat, or with a choice where both options involve a perceived existential threat, an out-of-body condition can result. The following is an example from a dissociation forum:
“About 54 years ago, when I was a girl of 8 I was raped by an elderly (50-60 yr old) friend of my grandfather. My brother was sleeping in the other bed in the farm shed where we were on ‘holiday’ with the man. I struggled and screamed thinking I was being murdered. Then I realized I could not die as the man would also kill my 6 yr old brother. I had to stay alive to protect him. Instantly I left my body, and remember that my head was hovering from the corner of the ceiling on top of my brother where he was sleeping. I wanted to flee out of the building, but was forced to stay to protect my brother.
“I was not looking at myself, but at my brother sleeping at the other side of the room away from where I was physically. I was conscious that I could not leave altogether because I had the responsibility of keeping him from being attacked and murdered too. I was not focused on my body, but on him.
“After some time, I remember hearing distant sobbing and shuddering sounds, and thought to myself ‘someone is crying’. I became concerned about that person too, and was distracted from my brother by slowly looking towards the sound. At that moment I found myself back in my body lying under the weight of the man, and the deep shuddering sobbing sounds were coming from me.
“I sometimes think I was near death, or had died, and my spirit left my body. I certainly had no awareness of my own body or what was happening to me, during that time.”
The standard psychological explanation for dissociation events like this is that there is nothing supernatural about them: it is just the brain tricking us into thinking we have left the body so that we can survive by not going insane at that moment. But that cannot be the case if what the person is seeing and hearing is veridical, i.e. reflects objective reality. There are other problems as well with the idea of the brain “tricking us,” in that it requires top-down causation, and as I have shown, there are profound problems with the idea of an emergent physical mind (property dualism).
If we are to accept the account offered by this woman at face value, both her visual and auditory “senses” were detached, as she hears sobbing from a distance and is looking down on her brother. In order for this to be a hallucination of the brain, the brain would have had to construct a dynamic set of images simulating an entirely different vantage point. Either that, or the eyes and ears would have to become disembodied. But I think we can all agree about the impossibility of that.
Dissociation events are different from the type of out-of-body experiences that occur as part of near death experience. In a near death experience, the implication is that there is no corporeal consciousness because the brain is shutting down. But in the dissociation case described above, the body and brain are intact. It would seem that there was something related to consciousness, in the case described above, remaining in the body as “she” continued crying and sobbing and must then have had some sense of awareness of what was going on even though she had no memory of it.
Dissociation out-of-body incidents clearly do not occur in controlled environments where a definitive assessment can be made as to whether they are veridical. Furthermore, in line with the primary goal in this section, because the body and brain are intact, it is more difficult to use them to show that consciousness can exist apart from the physical brain. What is needed to demonstrate with greater certainty that consciousness can exist without the physical brain is to show both that consciousness can become detached (non-localized) and that during that period of non-local consciousness the brain was inactive.
15.1.2 Near Death Experience – Out-of-Body
There are out-of-body experiences that occur in hospitals, where the testimony of an individual with a near death experience can be compared with medical records. In most cases, these out-of-body experiences are retrospective in nature (interviews after the experience, sometimes well after the experience) and therefore subject to confabulation, error or even deceit. However, there are prospective studies as well that offer a way to determine whether or not an out-of-body experience is veridical by corroboration with medical staff.
Pam Reynolds’ experience, although retrospective, is an important case. Her statements about her visual experience from above during her out-of-body experience, and the later corroboration with medical personnel, indicate that consciousness existed while she was under deep anesthesia but not yet in a standstill condition. However, as I discussed, she claimed that her entire near death experience was continuous, from the first out-of-body experience while under deep anesthesia as they were cutting her skull open, to her second out-of-body experience when she was watching the defibrillation attempt to correct her heartbeat during her resuscitation. This period would have included the period when she was in “standstill” with all the blood drained from her brain and no molecular activity possible.
There have been prospective cases as well, and there have been some spectacular claims of corroboration involving dentures and shoes on ledges, among others. There is often a time lag between an interview with a person and the purported event. In any case, for a person reading about these incidents third hand, though interesting, they are still subjective and anecdotal.
15.2 The AWARE Near Death Experience Study
A five-year study on near death experiences, called the AWARE (AWAreness during REsuscitation) study, concluded in December of 2012. The primary purpose of the AWARE study was to demonstrate the veridical nature of out-of-body conditions during near death experiences. To test this, researchers placed cardboard placards with numbers on top of shelves in emergency rooms in hospitals in Southampton, England, and in other areas as well.
The AWARE study ended without any viable near death out-of-body experiences occurring in the rooms where the numbers were placed. However, there was one very intriguing case that comes quite close to confirming the veridical nature of out-of-body experiences. The case has been recounted by near death expert, and one-time skeptic, Dr. Sam Parnia in his book, Erasing Death: The Science That Is Rewriting the Boundaries Between Life and Death. The account given in the interview does appear to have been corroborated by medical personnel, although the interview was conducted some months after the event.
The individual involved was a 57-year-old male social worker who had a mild heart attack. He was admitted to the hospital for cardiac catheterization. During the catheterization procedure he went into ventricular fibrillation, a condition that is fatal unless the proper heartbeat can be restored immediately. When the heart stops or goes into fibrillation, a person loses consciousness within seconds—fifteen to twenty seconds at the most, but typically less than that.
According to Dr. Parnia, following an interview with the patient:
“The patient specifically recalled feeling that he had been above his own body and had been looking down. He said he had seen people in the room around him and that they had given his heart electrical shock treatment (defibrillation) twice. He said he had a bird’s-eye view, while looking from above himself, of all that was happening to him below.”
The man claims that he heard the commands from the AED; the second command would have occurred at least two minutes after his heart went into ventricular fibrillation, long after he would have lost consciousness. The information that he provided about the people in the room, especially pertaining to the male attendant, whom he had not seen come in prior to losing consciousness, was verified.
He recalls being “beckoned” to the corner of the room by a beautiful woman with angelic hair and features:
“She had lovely curly hair. It wasn’t blond but it wasn’t dark, if you know what I mean. She just had lovely features about her. I would say she was an angel… I can still see her now if I want to. I felt that she knew me, I felt that I could trust her, and I felt she was there for a reason but I didn’t know what that was.”
This case is a good illustration of the difficulty with assessing whether near death experiences falsify materialism and also provide evidence of an afterlife. Although unlikely, this person could have been lying. He could have guessed about the man in the room or assembled the information about the AED and the man from information he later learned. I don’t think that is the case, but it is possible. It is much less likely, though, given that so many others have had similar out-of-body events in their lives. Although the evidence related to near death experiences, as it bears on the falsification of materialism, is compelling, it is not definitive in my opinion.
But the alternative explanations don’t seem at all compelling. I have talked about the creative complex specified information problem and have shown that these near death experiences cannot be hallucinations of the physical brain, nor does DMT appear to be an adequate explanation. In the subsection immediately following, I will discuss other problems with the materialist explanations for near death experiences.
15.3 Are Materialist Explanations of Near Death Experiences Reasonable?
Modern neuroscientists believe that consciousness and mind are reducible to the physical brain and therefore cease at the moment of death. Therefore, out of necessity, materialists believe that the conscious experiences during near death experiences are hallucinations of the physical brain.
The previous sections of this paper explain in detail why this materialist view of mystical experiences is untenable from the standpoint of information generation—creative complex specified information. And this of course assumes that we are setting aside the problems a materialist explanation has related to consciousness, qualia and intentionality. Now we are going to discuss some other qualities about near death experiences that provide further evidence that they cannot be explained by the physical brain.
Materialists, when they are not claiming that the personal accounts of near death experiences are hallucinations or fabrications, or the effects of ketamine or DMT, claim that they are evidence of residual consciousness in the few seconds following an event that leads to loss of consciousness as the brain is under duress. But is this really tenable?
We have looked at the out-of-body experience and have tentatively concluded that they offer evidence that consciousness can exist outside the brain. But because there is a subjective component to them and because they are typically third hand accounts, I do not consider them as definitive evidence.
How do materialists explain near death experiences? I mentioned in a previous section that a release of DMT might explain near death experiences, but as you might recall, there were differences between DMT experiences and near death experiences. Also, that there might be a release of DMT as the brain is starved for oxygen is just an observation, not an explanation. You still have to account for the cause of the experience.
One thought is oxygen deprivation. But again, oxygen deprivation is an observation, not a cause. Furthermore, oxygen deprivation typically causes confusion, not lucidity. In the 1996 Everest tragedy, experienced climbers thought all their oxygen tanks were empty and discarded them. In their confusion—due to diminished oxygen—they did not realize that they were not operating them correctly. Also, there are many cases where people experience “fear death experiences”—attributes of a near death experience when one is fearful of death but not physically affected. Therefore, oxygen deprivation does not seem to be a viable explanation at all.
Neuroscientists claim to have identified neural correlates of consciousness. These are patterns of brain activity that correlate with consciousness. There are inconsistencies with these, as I pointed out a few sections earlier. In any case, it is highly unlikely that these neural correlates of consciousness are intact during the period the near death experience is occurring, though it is difficult to determine that for sure. But recall from the case of the woman in the persistent vegetative state, discussed in an earlier section, that it seems possible that consciousness could occur even when all the normal neural correlates of consciousness are completely absent. However, remember that if there is consciousness with very little brain activity, this is evidence that contravenes materialism.
In order to assess whether any materialistic explanation can account for near death experiences, it is helpful to briefly walk through what would have to be the case in order to sustain such an explanation. The assumption of materialists is that these experiences are hallucinations of the brain that occur in the interval of time beginning with the extinction of normal waking consciousness and ending when there is no residual brain activity at all. This transition is a common occurrence during cardiac arrest, where the time is assessed to be about fifteen seconds, and no more than twenty seconds.
A materialist would have to believe that this rich, ineffable mental content is either produced by the normal neural correlates of consciousness, or that there is an auxiliary seat of consciousness generated by some other area of the brain. Of the two choices, the more tenable theory would be that it is the same components of the brain that bring forth our normal waking consciousness, even if they cannot be fully detected. Positing an auxiliary seat of consciousness that kicks in just when the primary consciousness fails seems hopelessly implausible, though only slightly more implausible than the alternative.
In order to believe that the near death experiences are produced by the diminishing normal consciousness-producing neural infrastructure in the short period of time between loss of consciousness and a completely inactive brain, one must assume that the effect of oxygen starvation (or whatever phenomenon triggered the experience) is to enhance consciousness by accelerating time and enhancing imagination so as to produce the creative, unique, ultra-real, ineffable visual, auditory and abstract thought associated with these near death experiences. And just as in the case of dreams where you have areas of the brain that emulate the physical senses, the same thing happens in a near death experience. But the emulation of the normal senses during a near death experience is of an enhanced quality.
The acceleration of time is a particularly daunting problem for a materialist explanation because the content, as in the case of dreams, is generated from within and now we have to imagine how this rich content is engendered in the brain and accelerated in sync with the other emulated senses and in sync with thought itself. The sounds, the visual imagery and the thoughts would all have to be accelerated in sync with one another to create a coherent phenomenon.
Furthermore, there is every indication, not only that these people are fully conscious, but that they also possess all their mental faculties. And since they can remember the events, their memory storage functions must also be intact and have remained intact throughout the severe effects on the brain.
15.3.1 Near Death Experience – Life Review
Time acceleration and memory access and recall are particularly evident during a life review. And because the life review occurs deeper in the near death experience, a better case can be made that the brain is non-functional when the life review occurs. But we need to be careful, because the experience could occur in a fraction of a second.
Some accounts of life reviews claim that all of a person’s memories are replayed, but that these events are often seen from a different perspective. In a life review, there is often commentary by a divine being, who is often only “sensed,” and who is warm, caring, loving and not judgmental. The person going through the life review claims to feel the emotions of the persons they have offended or privileged in some way. The experience is transforming.
Often persons comment that they never realized how each seemingly trivial encounter with another person could have had such a profound influence on that person’s life and that it has a ripple effect permeating through humanity. It is quite clear that little acts of kindness and caring can be profoundly important and that each encounter with another human being should be treated as precious.
Here is an example from J. Steve Miller’s book, Near-Death Experiences as Evidence for the Existence of God and Heaven: A Brief Introduction in Plain Language:
“My whole life so far appeared to be placed before me in a kind of panoramic, three- dimensional review, and each event seemed to be accompanied by an awareness of good and evil or by an insight into its cause and effect. Throughout, I not only saw everything from my own point of view, but also I knew the thoughts of everybody who’d been involved in these events, as if their thoughts were lodged inside me. It meant that I saw not only what I had done or thought but even how this had affected others, as if I was seeing with all-knowing eyes. And throughout, the review stressed the importance of love. I can’t say how long this life review and insight into life lasted; it may have been quite long because it covered every single subject, but at the same time it felt like a split second because I saw everything at once. It seemed as if time and distance didn’t exist. It was clear to me why I’d had cancer. Why I had come into this world in the first place. What role each of my family members played in my life, where we all were within the grand scheme of things, and in general what life is all about. The clarity and insight I had in that state are simply indescribable.”
Here are a couple more accounts from oncologist Jeffrey Long’s book, Evidence of the Afterlife: The Science of Near-Death Experiences:
“I went into a dark place with nothing around me, but I wasn’t scared. It was really peaceful there. I then began to see my whole life unfolding before me like a film projected on a screen, from babyhood to adult life. It was so real! I was looking at myself, but better than a 3-D movie as I was also capable of sensing the feelings of the persons I had interacted with through the years. I could feel the good and bad emotions I made them go through. I was also capable of seeing that the better I made them feel, and the better the emotions they had because of me, [the more] credit (karma) [I would accumulate] and that the bad [emotions] would take some of it back … just like in a bank account, but here it was like a karma account to my knowledge.”
“While in the light I had a life review and saw everything I… ever did in my life; every thought, word, deed, action, inaction was shown to me. The review was very fast, but I seemed to comprehend everything easily despite the speed. At that moment, I’m not sure exactly when, someone or something began giving me an examination of conscience, and in the blink of an eye images from my life began passing before me, beginning with my childhood. Each image had its counterpart, or as if the actions of my life were being put into a balance. Everything I ever thought, did, said, hated, helped, did not help, should have helped was shown in front of me, the crowd of hundreds, and everyone like [in] a movie. How mean I’d been to people, how I could have helped them, how mean I was (unintentionally also) to animals! Yes! Even the animals had had feelings. It was horrible.
“I fell on my face in shame. I saw how my acting, or not acting, rippled in effect towards other people and their lives. It wasn’t until then that I understood how each little decision or choice affects the world. The sense of letting my Savior down was too real. Strangely, even during this horror, I felt a compassion, an acceptance of my limitations by Jesus and the crowd of others.”
“All of a sudden in my mind from left to right like an IMAX movie, I saw all the very important moments of my life up to that present time. Most of the earlier moments in my life … I had long forgotten about until this happened. I had mixed feelings about this but mostly was peaceful. I saw my childhood and felt the emotions my actions created in others. I learned that many of the things I thought I did “wrong” were not necessarily wrong. I also learned of opportunities to love others that I passed up. I learned that no matter what has been done to me, there is more to the story that my ego might not see or understand. My life has [changed] because I take into account more the feelings of others when I act.”
For me, the willingness to believe that the brain, as it is being starved for oxygen, could assemble such unique, ineffable, ultra-real mental content from within, access all memories, accelerate and synchronize them, changing the perspective of the experiences and all the while instilling loving values, is a “yardstick for lunacy.”
15.4 Summary – What are These Mystical Experiences
In this section and the previous section of the paper we have talked about mystical experiences such as near death experiences, deathbed visions, after death communications and even hallucinations involving DMT trips and alien abductions. These are all powerful evidence against materialism by virtue of the creative complex specified information they generate. For this type of disproof of materialism, we can accept that the physical brain, although a necessary condition for consciousness and thought, is not a sufficient condition to account for them.
In this section we entertained the idea that the brain might not be necessary at all in some cases to support consciousness and thought. This would offer an even stronger and unassailable case against materialism that would go beyond a falsification of materialism based on complex specified information. I am going to leave this issue in an unresolved state but with the preponderance of evidence, somewhat subjective as it is, supporting the notion that consciousness could exist outside the physical brain.
But we can say with a high degree of certainty that there is an immaterial mind, apart from the physical brain, based on the previous lines of evidence and augmented by the evidence presented in this section.
Given that, what can we say about our nature and destiny? It seems reasonable to conclude that any immaterial mind must be an endowment by a higher intelligence. But that does not give us complete added insight about our destiny. Exploring other avenues which may enable us to gain a greater degree of assurance about our destiny is the subject of the next section which will depart from the main theme of this paper and discuss the possibility that these mystical experiences are in some sense genuine revelations.
Thus far we have shown that materialism is in all likelihood false: it probably cannot account for the complex specified information related to the origin of life, the evolution of life, and the essential operations of the living cell, and it definitely cannot account for the complex specified information exhibited by common subjective mental experiences such as dreams, continuity of thought and constancy of self, and by the further mystical experiences and hallucinations discussed above. For these types of falsifications of materialism, we accept that the physical brain, although not a sufficient condition to account for consciousness and thought, is at least a necessary condition.
But in the immediately preceding section we went a step further and asked whether these mystical experiences could tell us if, in some cases, the physical brain is necessary at all for consciousness and thought. The tentative conclusion was that the question of the necessity of the physical brain during out-of-body experiences, and near death experiences in general, remains unresolved, because the experiences are entirely subjective and, from our perspective, anecdotal. That subjectivity leaves some residual doubt as to whether the brain is necessary in these cases. But on balance the evidence is nevertheless compelling.
In this section we are going to approach a falsification of materialism in a different manner, one that goes much further. We ask whether there are any other aspects of mystical experiences that would enable us to corroborate them as genuine. In other words, we are not only going to try to eliminate the notion that these mystical experiences are hallucinations, but we are also going to entertain the idea that they could be genuine divine revelations in some limited sense.
The Urantia Book says:
“There can be no exhibition of any sort of personality or ability to engage in communications with other personalities until after completion of survival. Those who go to the mansion worlds are not permitted to send messages back to their loved ones. It is the policy throughout the universes to forbid such communication during the period of a current dispensation.” 112:3.7 (1230.5)
The statement above from The Urantia Book is unambiguous on this point. It rules out the near death experiencer’s encounters with deceased relatives as being genuine or real.
But what, then, are these mystical experiences? Are they simply hallucinations of an immaterial mind? That is certainly my starting assumption, because there is enough similarity between DMT trips and alien abductions on the one hand and near death experiences on the other for me to suspect that near death experiences might be hallucinations.
However the veridical nature of the out-of-body experiences, the ineffable content, the commonality of the experiences, aspects of the life review and the logical end point of near death experiences, are difficult to reconcile with any hallucination theory. And there is another confounding factor in these mystical experiences that we will look at next—the most astounding of all human mental phenomena.
16.1 Shared Death Experiences
What other criteria could be used to distinguish hallucinations of the mind from genuine divine revelation? Stated another way, what information could render the hallucination theory untenable? Since hallucinations would always be specific to an individual, any type of mystical experience shared with other individuals would call the hallucination theory into question, would it not? If these phenomena are hallucinations of the mind, how likely is it that multiple persons could experience the same hallucination at the same time? Not very likely at all, would be my answer.
I now want to discuss what I believe is probably the most astounding of all human mental phenomena. These are phenomena that tie all three of these categories of mystical experiences—near death experiences, deathbed visions, after death communications—together. And the human quality that ties them together and gives them validity is the power of love.
Let’s start with what Ray Moody refers to as “shared death experiences.” Moody claims that there is a phenomenon, a sort of blending of near death experiences and deathbed visions, whereby multiple persons in the room of a dying loved one can experience the same sort of deathbed vision.
In his book, Glimpses of Eternity: Sharing a Loved One’s Passing from This Life to the Next, Moody claims there are many hundreds of such cases. In fact, Moody himself claims to have had a shared death experience when his mother was passing, in the presence of several other family members who also experienced the end-of-life visions. Here is an example from Moody’s book:
“The first thing that happened when my mother passed was the light changed intensity and grew much brighter real fast. All kinds of things started happening at once, such as a kind of rocking motion that went through my whole body. It was like my whole body rocked forward one time real quick and then instantly I was seeing the room from a different angle from above and to the left side of the bed instead of the right side. It was like I was viewing my mother’s body from the wrong side according to where I was stationed in the room.
This rocking forward motion was very comfortable, and not at all like a shudder and especially not like when a car you are riding in lurches to the side and you get nauseous. I did not feel uncomfortable but in fact the opposite, I felt far more comfortable and peaceful than I ever felt in my life. I don’t know whether I was out of my body or not because all the other things that were going on held my attention. I was just glued to scenes from my mother’s life that were flashing throughout the room or around the bed. I cannot even tell whether the room was there anymore or if it was, there was a whole section of it I hadn’t noticed before. I would compare it to the surprise you would have if you had lived in the same house for many years but one day you opened up a closet and found a big secret compartment you didn’t know about. This thing seemed so strange and yet perfectly natural at the same time. The scenes that were flashing around in midair contained things that had happened to my mother, some of which I remembered but others I didn’t. I could see her looking at the scenes too, and she sure recognized all of them, as I could tell by her expression as she watched. This all happened at once so there is no way of telling if that matches the situation. The scenes of my mother’s life reminded me of the old-fashioned flashbulbs going off. When they did, I saw scenes of my mother’s life like in one of the 3-D movies of the 1950s. By the time the flashes of her life were going on, she was out of her
body. I saw my father, who passed seven years before, standing there where the head of the bed would have been. By this point the bed was kind of irrelevant and my father was coaching my mother out of her body. That was amusing because in life he had been a football coach at the high school I attended. Frankly, I felt a little disappointed that he still had that coaching mentality, as if he had not moved on to better things since his death. I looked right into his face and a recognition of love passed between us, but he went right back to focusing on my mother. He looked like a young man, although he was seventy-nine when he died. There was a glow about him or all through him— very vibrant. He was full of life. One of his favorite expressions was ‘look alive!’ and he sure did look alive when he was coaching my mother out of her body. A part of her that was transparent just stood right up, going through her body, and she and my father glided off into the light and disappeared.”
It is hard to know what to make of these rather spectacular accounts. Again, this is third hand, but there are apparently many such accounts. Moody ends his book with this comment about shared death experiences:
“Even after all of these years I still wonder, if these aren’t proof of life after life, what are they?”
Since shared death experiences are by their very nature corroborated, skeptics are hard pressed to dismiss them as anecdotes; they cannot easily be hallucinations because they are not specific to an individual. I suppose a healthy bit of skepticism is well and good, but at some point it strains one’s credulity to dismiss all of them as fabrications, especially given their number and the fact that multiple persons bear witness to them.
As an interesting aside, not specifically related to shared death experiences but to what you might call a shared life experience: if you have watched the show Bizarre Foods with Andrew Zimmern, specifically the episode in which he is in Africa with the Bushmen, he claims to have experienced a phenomenon similar to a shared death experience during a Bushman ritual. The YouTube video is accessible at the following link (fast-forward to the 3:00 mark and view to the 5:30 mark):
The video does not tell the whole story. The account in Zimmern’s book is more detailed:
“Xaxe, a great hunter, healer, and shaman, laid hands on me…for about twenty-five or thirty seconds, but it felt like he had only touched me for a split second. Time stood still. I literally had a short out-of-body experience. I could see him touching me from just above my body, almost like I was floating six feet off the ground, watching myself. All of a sudden, I was back in my body observing an image of him thumbing through the book that contained all the pictures and moments in my life. I saw images of my childhood I hadn’t remembered in years, pictures of my mother and me walking on a beach and shelling, very strong images. At the time, both during his touch and immediately afterward, I described it as him flipping through the pages of my life. I felt he was curious and wanted to see what I was all about.”
What is not clear about this account is whether or not Xaxe was presented with the same set of images that Zimmern was, but that seems to be a reasonable inference given his comments further on in the book. Also, it is not clear whether or not what Andrew Zimmern saw in his out-of-body experience was veridical. And, when Zimmern says he saw the images of Xaxe thumbing through the pictures in his book of life, it is not clear whether the pictures were actual photos or images of incidents in his life from a different perspective. In other words, were they images of him walking on the beach that were not actual photos and therefore that he had not seen before? And were these images from a different perspective? There is no way of knowing without talking to Zimmern.
But here is the really interesting part: when Westerners are privileged with a life review during a near death experience, they describe the experience as though watching a 3D, 360-degree or panoramic movie. But when those in some cultures of the developing world experience a “life review” as part of a near death experience, they describe it as watching a great teacher flipping through the pages of the experiencer’s “book of life,” identical to the way Andrew Zimmern described it.
16.2 Shared Induced After Death Communication
Allan Botkin, whom you met in the previous section, pioneered induced after death communication therapy for treating the grieving. But the experience—whether real or not—of having communicated with a deceased loved one, induced by whatever means, could easily be dismissed as a hallucination of the mind. However, Dr. Botkin discovered a stunning twist to his Induced After Death Communication therapy which, like shared death experiences, is strongly suggestive of an afterlife. This twist corroborates the nature of both shared death experiences and near death experiences and would seem to rule out hallucination of the mind.
While training other therapists, Botkin discovered that in certain circumstances, the same vision presented itself in both the patient and an empathetic observer in the room! Here is how he describes his first experience with “shared” induced after death communication:
“The first time I became aware of the shared IADC phenomenon, I was inducing an IADC while a psychologist I was training observed. The patient wanted to resolve his grief by having an IADC of his deceased uncle, with whom he had spent much of his youth. While I induced the IADC with the patient, the observing psychologist closed his eyes and performed the eye movements himself to relax. Images appeared to the psychologist in training: a vivid scene of a swampy area with cattails, a pond, and a willow tree. He felt as though he was lying on the grass with the pond at eye level. It made no sense to him, so he opened his eyes and continued to observe the patient and me. The patient had not yet begun speaking, so the psychologist had no knowledge of what the patient was experiencing during the IADC. When the patient opened his eyes after the IADC, he said he saw a swamp scene. He felt like he was lying in the grass looking at the swamp. The psychologist in training was surprised at this coincidence and asked, ‘Did you see cattails?’ The patient said, ‘Yes,’ not expecting that to be an unusual statement since he had said it was a swamp. The psychologist then said, ‘Did you see a pond and a willow tree?’ The patient was clearly surprised. ‘Yes,’ he said. “How did you know that?’ The psychologist explained what he had done and the two continued to compare notes with great accuracy between their reports of what they had experienced. One part of the scene did not match, however. The psychologist asked, ‘Did you see the ducks fly overhead?’ The patient said, ‘No, that's the one thing you've described that I didn't see.’ I asked the patient, ‘Why did you see a swamp?’ The patient answered, ‘The swamp was in the backyard of my uncle's farm. I used to play there and would lie in the grass by the pond.’”
Here is Dr. Botkin’s commentary on these shared induced experiences:
“For me, witnessing them [shared induced after death communication] was a turning point in my view of IADCs. To preclude the afterlife explanation, skeptics may have to argue that two people sharing an IADC really aren't sharing a perception of spirits of the deceased, but are somehow magically, telepathically sharing the same hallucination.”
He goes on to say…
“In all cases of shared IADCs we have recorded, the observers had experienced their own IADC before the shared IADC episode. That seems to sensitize them to the IADC experience.
“The observers also had a rapport with the IADC experiencer from listening to the experiencer's story and understanding the accompanying grief. My colleagues and I did attempt to replicate shared IADCs under controlled conditions. In our informal experiment, we used two observers who had had several successful shared IADCs. As a control, the observer had no knowledge of the patient's issues because we had the observer come into the session just before the induction, completely uninformed about the patient's case. When we tried it this way, the shared IADC did not occur. The stronger the empathy of the therapist, the greater the likelihood of tapping into an IADC.
“Dr. Mannelli, the psychotherapist from Milwaukee who experienced one of the shared IADCs, finished his account with these words, which may be the profound insight into shared IADCs: ‘I firmly believe that the energy of an IADC is linked to the power of love.’”
16.3 Summary – The Power of Love
Given the statement in The Urantia Book precluding communication between living persons and the deceased on the one hand, and the corroborative nature of the shared death experiences, both induced and non-induced, on the other, one might ask whether there is an alternative explanation: could these types of mystical experiences, while not necessarily real representations, nevertheless be genuine revelations in some sense?
One possibility is that these phenomena could be facsimiles of our eternal future. After all, if our minds are more than our brains, then the creative information that we have talked about so much in this paper has to have come from somewhere outside of us. My suggestion is that these mystical experiences could in fact be facsimiles of sorts, induced in our minds at particular moments when we are in the greatest need for love.
I will end this chapter with the words of Dr. Eben Alexander in his second book, The Map of Heaven:
“But the rules of how things work there—the laws of heaven’s physics, if you will—are different from ours. The one rule we need to remember from here, however, is that we end up, in the end, where we belong, and we are led by the amount of love we have in us, for love is the essence of heaven. It is what it is made of. It is the coin of the realm.
“We are wise to apply that principle in our earthly lives as well—to truly love ourselves as the divine, eternal spiritual beings that we are, and pass along that love to our fellow beings and to all of creation. By serving as conduits for the unconditional love of the Creator for the creation, by showing compassion and forgiveness, we bring healing energy of infinite capacity into all levels of this material realm. That’s also why the main quality required of us if we are to catch a glimpse of this zone while alive on earth is not great intellect, nor great bravery, nor great cunning, fine as all those qualities are. What it takes is honesty. Truth can be approached in a thousand different ways. But because, as Plato himself said, like attracts like, what we need in order to apprehend truth more than anything else is to be truthful ourselves, and honest about the goodness and waywardness at work inside us. On this, voices as disparate as Buddha’s, Jesus’, and Einstein’s are unanimous.
“Like understands like. The universe is based on love, but if we have no love in ourselves, the universe will be shut off from us. We will spend our lives triumphantly declaring that the spiritual world doesn’t exist because we have failed to awaken the love in ourselves that alone will render this most obvious of facts visible to us. You cannot come to truth dishonestly. You cannot come to it telling lies to yourself, or to others. You cannot come bringing only a superficial sliver of yourself, while your larger, deeper self is left behind. If you want to see all of heaven, you have to bring all of yourself, or else just stay home.”
In the Introduction of this paper, I contended that materialism and design constitute a binary proposition. My claim has been that if it can be demonstrated that information flow resulting in the complex specified information that nature exhibits, including the origin of life, the evolution of
life, the fundamental operation of life and especially human intellect, cannot be accounted for by material causes, then design—teleology—can be inferred.
17.1 Creative Complex Specified Information Flow
Throughout this paper I have endeavored to show that material causes cannot, in fact, account for the creative complex specified information exhibited by living organisms and human intellect. Although, on the matter of the evolution of life, the debate could be regarded by some as unsettled, there is nevertheless an undeniable trend in the evidence toward the idea that material causes cannot account for this creative complexity that we see. With each passing day, it seems, there are new discoveries depicting immense complexity with “mechanisms” unimagined just a few decades ago that are extraordinarily difficult to reconcile with any material process.
Added to that is the ubiquity of convergent evolution on the molecular level, the organ system level and the organism level. Living organisms exhibit complexity, the complexity arises suddenly, and the complexity conforms to a repeated and independently established pattern. When you look at this—the sudden and repeated appearance of astounding complexity—teleology is the obvious inference unless one has been infected with the disease of scientism that lurks through the corridors of academia and is now spreading through Western culture.
The origin of life is an unsolved problem, and there are no intellectually honest calculations that I am aware of indicating that life can arise from chemistry in any sort of reasonable time, despite the ongoing assurances and promises that a solution is near. The most favorable estimate for abiogenesis (which is not to say the most reasonable estimate) is one chance in 10^1018, which is about 900 orders of magnitude short of what is even theoretically possible anywhere in the universe and for the entire duration of the universe.
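The scale of that gap can be made concrete with simple exponent arithmetic. The figures below are assumptions for illustration only: the abiogenesis odds read as 1 chance in 10^1018 (the "most favorable estimate" cited above), set against a commonly cited "universal probability bound" of roughly 10^150 total elementary events available over the observable universe's entire history. On those assumed figures, the shortfall comes out near the "about 900 orders of magnitude" stated above.

```python
# Back-of-the-envelope check of the orders-of-magnitude shortfall.
# Both figures are assumptions for illustration, not taken from the paper's sources:
#   - abiogenesis odds: 1 chance in 10**1018
#   - universal probability bound: ~10**150 elementary events conceivably
#     available anywhere in the universe, for the universe's whole duration
ABIOGENESIS_EXP = 1018      # odds are 1 in 10**ABIOGENESIS_EXP
UNIVERSAL_BOUND_EXP = 150   # ~10**UNIVERSAL_BOUND_EXP possible "trials"

# Work with the exponents directly: 10**1018 vastly exceeds floating-point
# range, but the gap in orders of magnitude is just a difference of exponents.
shortfall = ABIOGENESIS_EXP - UNIVERSAL_BOUND_EXP
print(f"shortfall: roughly {shortfall} orders of magnitude")  # roughly 868
```

Under these assumed figures the shortfall is 868 orders of magnitude; a smaller assumed bound (some authors use ~10^118) would put it even closer to 900.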
Vitalism has never been falsified, despite the universal belief that it has been. The intelligence exhibited by the cell (during morphogenesis, in molecular localization within the cell, and through evolution in the building of new complex adaptations) presents unresolved problems with no clear materialist account to resolve them, other than the assurance that it could not be any other way.
The paper also detailed several profound problems with the favored materialist theory of mind—property dualism. These go beyond the already materially intractable problems related to human consciousness, qualia and intentionality.
More decisively, the paper presented three falsifications of materialism based on the creative complex specified information flows we all experience in our subjective inner lives. Dreams, continuity of thought streams, and constancy and resumption of self cannot be reconciled with materialism even in principle, given the massive, spontaneous, instantaneous and creative flow of complex specified information, the top-down causation required, and the foresight that would seem to be required to produce these subjective phenomena.
Mystical experiences such as out-of-body experiences, near death experiences, and especially shared death experiences and shared induced after death communication experiences, offer further and definitive evidence that the widely held belief among Western intellectuals that the brain gives rise to consciousness and thought is false. The shared mystical experiences also corroborate the apparent revelatory testimony of individual near death experiences of an afterlife, even if these are not true representations of reality. I speculate that perhaps these mystical experiences are facsimiles of our destiny, induced in our minds at particular moments when we are in our greatest need.
17.2 Is Materialism Waning?
The Urantia Book says that the worst of the materialistic age is over:
At the time of this writing the worst of the materialistic age is over; the day of a better understanding is already beginning to dawn. The higher minds of the scientific world are no longer wholly materialistic in their philosophy, but the rank and file of the people still lean in that direction as a result of former teachings. [195:6.4 (2076.9)]
That may be the case. But it may also be that it merely seemed that way in the early part of the 20th century; it is possible that the events of World War II changed things. Philosopher Thomas Nagel offers an interesting comment in his book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False:
“There has always been resistance to dualism, but for several centuries after Descartes, it expressed itself primarily through idealism, the view that mind is the ultimate reality and the physical world is in some way reducible to it. This attempt to overcome the division from the direction of the mental extends from Berkeley—who rejected the primary- secondary quality distinction and held that physical things are ideas in the mind of God— to the logical positivists, who analyzed the physical world as a construction out of sense data. Then, in a rapid historical shift whose causes are somewhat obscure, idealism was largely displaced in later twentieth-century analytic philosophy by attempts at unification in the opposite direction, starting from the physical.” [Emphasis mine]
I think some of the answer to this conundrum lies in the rise of the computational sciences, as Stephen Robbins points out in his book, Time and Memory: A Primer on the Scientific Mysticism of Consciousness. In the early days of computing (Robbins seems to suggest around 1972), philosophers of mind took notice of the fact that the neural firings of the brain constituted a binary-state system, just like computers. Neurons either fire or they do not. From that, many concluded that the brain was just like a computer, and materialism, if it had been waning, was re-energized.
Materialism and scientism have been widely adopted and in my experience, are gaining adherents as young adults matriculate through the higher educational system. Materialism and scientism are the primary impediments to a renaissance in true religion. Religious fundamentalism is also an impediment but nowhere near as significant.
Materialism is now so ingrained in the minds of the educated social classes that many seem impervious to anything calling it into question. The underlying assumption of materialism is that science is the primary, and perhaps the only, way of ascertaining truth. This mindset is referred to as “scientism.” A shrug of the shoulders, a wave of the hand and a dismissive comment such as, “they’ll figure it all out,” is the common rejoinder one encounters when offering a well-reasoned argument, with supporting evidence, contravening materialism to educated people indoctrinated with materialism and scientism.
17.3 Cultural War - Materialism vs Idealism
The most fundamental question in philosophy is whether materialism or idealism is true. And that is the essence of much of what we refer to when discussing the “cultural war” in Western culture. Make no mistake about it; it is a war, and both sides know it. It is a war for the soul of man. One side sees humanity as incidental, an accident with no transcendent purpose. For them, love is an illusion, simply the result of random firings of neurons. On the other side, are those who believe in a transcendent loving God who has a divine purpose for all of us. For these folks, love is real and an eternal adventure awaits us.
On one side you have the members of the new atheist movement. These are secular humanists who appear to firmly believe not only that God does not exist, but also that belief in God is damaging to the humanist goal of the “perfectibility of man.”
At its best, secular humanism espouses the notion that “God is our chosen name for the ceaseless creativity in the natural universe, biosphere, and human cultures,” as Stuart Kauffman puts it. But that is not what is really going on, as Berlinsky argues (fast-forward to the 22:19 mark and view to the 30:00 mark).
The primary tactic of these secular humanists is to show that the idea of a Creator —“The God Hypothesis” —is superfluous. These folks seek to demean the human experience; to eviscerate the hopes and longings of those who believe in an eternal destiny in the presence of a loving God. And far from wanting to keep this dire view to themselves, they in fact insist on informing us all that our fate is nowhere spelled out and that there is no hope of salvation.
On the other side of the cultural war you have theistic philosophers and scientists who hold that material causes are insufficient to account for the origin and evolution of living organisms as well as human consciousness and mind. These scholars believe that an intelligent designer—a Creator—is necessary to explain the creative complexity exhibited here on earth. How that is achieved is left unsaid. Many of these teleologists—the ones who are most visible in the cultural war—fall under the banner of “Intelligent Design.”
Now one might think that those who put forth the positive message of a loving God, who have fought most vigorously to defend that proposition—to defend the God Hypothesis—and who espouse the notion that we are part of a divine plan with an eternal purpose, would be deemed “the good guys” in popular culture. If you thought that, you would be wrong. It is these folks, it seems, for whom the greatest scorn has been reserved. They are continuously subjected to scorn and ridicule, and vilified in the public square: the media, education, academia, the entertainment industry, Wikipedia and now even religious institutions. If ever there were a championship case of collective confusion as to who is wearing the white hats and who is wearing the black hats, this is it.
It is only the Intelligent Design group who seeks an end to the complete secularization of science education by introducing the idea that design is a plausible hypothesis to guide research.
“The complete secularization of science, education, industry, and society can lead only to disaster. During the first third of the twentieth century Urantians killed more human beings than were killed during the whole of the Christian dispensation up to that time. And this is only the beginning of the dire harvest of materialism and secularism; still more terrible destruction is yet to come.” [195:8.13] (P. 2082)
For many on the outside looking in, meaning and purpose in life hinges on the outcome of this cultural war.
People can engage in the battle or sit at home and hope for the best, but as Edmund Burke once said,
“All that is required for evil to triumph is for good men to do nothing.”