In Darwin’s Dangerous Idea, Daniel Dennett has described natural selection as an algorithmic process. In this post, I want to look at what this algorithm is. But let me begin with an introduction to algorithms in general. We normally think of an algorithm as a set of instructions for completing a task. A recipe, for example, is an algorithm of sorts. But we usually use the term with respect to computer programs. A computer completes various tasks by following a set of instructions given to it in a programming language. Note that the algorithm is not identical with the specific set of instructions given. Different code can express the same algorithm. Here are two examples of code which express the same algorithm:
10 print "Hello World!"
20 goto 10
and
while (1) { printf("Hello World!\n"); }
The first is in BASIC, and the second is what the same algorithm looks like in C or PHP. The result of each piece of code is to repeatedly print a line of text that says “Hello World!”.
More precisely, each piece of code corresponds to a different algorithm at the machine level. Even when the same code is used by two different languages, C and PHP in this case, the underlying algorithms used to run the code at the machine level are different. But those details are usually unimportant to the programmer. The programmer normally concerns himself with an algorithm at a higher level of abstraction, one at which the same algorithm may be expressed differently by different languages or with different code in the same language. For example, here is another way to express, in PHP, the same algorithm illustrated above:
do { echo "Hello World!\n"; } while (true);
The important thing to note is that we can understand how these different pieces of code express the same algorithm without understanding the details, which also involve algorithms, of how they get executed.
Computers normally run many more advanced algorithms than the one used in the illustration here. When you view video on a website, this is made possible by detailed and complicated algorithms that translate the contents of a computer file to images on your monitor and sound in your speakers. When you surf the web, you rely on algorithms. When you scroll the text on this page (or page down), you are relying on algorithms to get things done for you. Short of using your computer as a paperweight or a footstool, there is little you do with it that does not make use of algorithms. A computer is a machine designed specifically for running algorithms, and living in the computer age, you are well aware of many of the amazing things computers do by running algorithms.
Now you may be thinking, especially if you’re of a creationist bent, that if natural selection is an algorithm, it must have a programmer, just as all the algorithms running on your computer have programmers. I hope to explain the algorithm of natural selection in enough detail that you will understand why it doesn’t require a programmer.
Let’s begin with the tautology that, other things being equal, more stable things outlast less stable things. The more stable something is, the longer it is likely to last. Given a universe full of unstable things, things are going to get more stable over time. Unstable things don’t last, and when they cease to be, their components fall into different arrangements, whether with each other or with other things. Most of the configurations that things fall into may be unstable, and those won’t last. But when things fall into stable configurations, however rare that may be, they will last longer. Given enough time, more and more things will fall into stable configurations. Given that stable things last, the stable things will accumulate. And given that unstable things do not last, the unstable things will not accumulate. So, the overall trend is for stability to increase in the universe, and this is by an unprogrammed algorithmic process that is grounded in the logic of stability.
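To see that no programmer is needed, here is a toy sketch of this process in C, the same sort of code used above. Everything in it is my own illustrative assumption: each “thing” gets a number between 0 and 1 representing its chance of holding together for one time step, and whatever falls apart is replaced by a new random arrangement. Nothing in the code aims at stability, yet the average stability of what remains keeps climbing.

#include <stdio.h>
#include <stdlib.h>

#define N 10000     /* number of "things" in the toy universe */
#define STEPS 100   /* how long to run */

int main(void)
{
    double stability[N];   /* chance of holding together for one step */
    srand(42);

    /* Start with arrangements of completely random stability. */
    for (int i = 0; i < N; i++)
        stability[i] = (double)rand() / RAND_MAX;

    for (int t = 0; t < STEPS; t++) {
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            /* Anything that falls apart re-forms as a new random arrangement. */
            if ((double)rand() / RAND_MAX > stability[i])
                stability[i] = (double)rand() / RAND_MAX;
            sum += stability[i];
        }
        if (t % 10 == 0)
            printf("step %3d: average stability %.3f\n", t, sum / N);
    }
    return 0;
}

The only ingredients are random rearrangement and the brute fact that whatever is stable sticks around.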
Once there are a sufficient number of stable things in the universe, these stable things will become building blocks for the formation of new things. Again, many of the formations that these building blocks fall into may well be unstable. But the stable ones, however rarely they occur, will accumulate. This will eventually produce a new level of larger building blocks. The same procedure will repeat over and over, and given enough time, there will be countless stable things at multiple scales. The biggest stable things we know of in the universe are galaxy clusters, which are made up of galaxies, which are made up of solar systems, which are made up of stars and planets, etc. The smallest building blocks we know of are subatomic particles, such as quarks. Protons and neutrons are made up of quarks. Atoms are made up of protons, neutrons, and electrons. Molecules are made from atoms. Larger pieces of matter are made of molecules. And the things we see around us in the world are composed of various, often complex, arrangements of these larger pieces of matter.
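The same idea can be pushed one level up. In the following sketch (again my own toy model, with arbitrary rates and an arbitrary cutoff), two things of the same size occasionally fall into a stable combined arrangement, and the composite is then available to combine again. Run it and the things spread out across several levels of composition, with the basic blocks most common and each higher level rarer.

#include <stdio.h>
#include <stdlib.h>

#define POOL 2000          /* slots in the toy universe */
#define ROUNDS 1000000     /* random encounters to simulate */
#define BIND_CHANCE 0.1    /* chance an encounter forms a stable composite */
#define MAX_LEVEL 9        /* arbitrary cutoff on composite size */

int main(void)
{
    static int level[POOL];            /* every slot starts as a level-0 basic block */
    int counts[MAX_LEVEL + 1] = {0};
    srand(7);

    for (long r = 0; r < ROUNDS; r++) {
        int a = rand() % POOL;
        int b = rand() % POOL;
        /* Two things of the same size occasionally bind into one stable
           composite; slot a becomes the composite, and slot b is treated
           as fresh basic material so the pool size stays fixed. */
        if (a != b && level[a] == level[b] && level[a] < MAX_LEVEL &&
            (double)rand() / RAND_MAX < BIND_CHANCE) {
            level[a]++;
            level[b] = 0;
        }
    }

    for (int i = 0; i < POOL; i++)
        counts[level[i]]++;
    for (int l = 0; l <= MAX_LEVEL; l++)
        printf("level %d things: %d\n", l, counts[l]);
    return 0;
}

Nothing tells the program to build big things; big things simply accumulate because, once formed, they last.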
So far, we have been looking at self-sustaining things, arrangements of matter that are naturally stable. Given an unstable universe, things keep falling into different arrangements. Given enough time, some arrangements of matter pop up that have the ability to replicate themselves by drawing on nearby resources. However rarely this happens, it only needs to happen once. Once there is one thing that can replicate itself, the copies it makes of itself will also be able to replicate themselves. And so on. Even if the replicators do not last much longer than it takes to make new copies of themselves, copies of them will spread around, making more copies, and so on, so that even far in the future, there may be copies of them many generations removed. This process follows from the logic of replication, and like the tendency toward stability, it needs no programmer. The process of copying oneself is one that generations of replicators can repeat indefinitely.
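Here is the logic of replication in the same toy style, with made-up numbers of my own. Rather than simulate individual replicators, the sketch tracks the expected number of them by age. No individual lasts more than three steps, but copies of copies keep the line going, and the population grows without any further input.

#include <stdio.h>

#define STEPS 20
#define LIFESPAN 3         /* steps an individual replicator survives */
#define COPY_CHANCE 0.6    /* chance per step of producing one copy */

int main(void)
{
    /* Expected number of replicators at each age; one founder to start. */
    double by_age[LIFESPAN] = {1.0};

    for (int t = 0; t < STEPS; t++) {
        double total = 0.0, births = 0.0;
        for (int a = 0; a < LIFESPAN; a++) {
            total += by_age[a];
            births += by_age[a] * COPY_CHANCE;
        }
        printf("step %2d: population %.1f\n", t, total);

        /* Everyone ages one step; the oldest die; newborns enter at age 0. */
        for (int a = LIFESPAN - 1; a > 0; a--)
            by_age[a] = by_age[a - 1];
        by_age[0] = births;
    }
    return 0;
}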
Although some, even many, replicators may make perfect copies of themselves, some may make imperfect copies of themselves. This introduces variation in the design of a replicator. In some instances, a copying error may prevent the copy from being able to replicate. But when this happens, the change will not spread to new generations. In other instances, it will not impair the ability to replicate, and the change will spread through multiple generations. In some instances, the change in design will give the new replicators who have it an advantage over other replicators. Since the replicators will be competing for limited resources in their local area, those with an advantage will tend to out-replicate those without it. Thus, improvements to a replicator’s design will spread more than impairments will. Even if most copying errors are more harmful than helpful, it is mainly the helpful ones that will spread. So, among replicators, there is a general trend toward spreading new properties that help them replicate better, and improvements to design, but not impairments, will accumulate over time.
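The following sketch (my own construction; the population size, mutation rate, and the sizes of helpful and harmful errors are all made up) puts variation and competition together. The population is held at a fixed size to stand in for limited local resources, parents are chosen in proportion to how well their design replicates, and copies occasionally differ from their parents, usually for the worse. Even so, the average replication ability of the population tends to creep upward, because the rare helpful errors are the ones that spread.

#include <stdio.h>
#include <stdlib.h>

#define POP 500           /* limited resources: the population cannot exceed this */
#define GENERATIONS 200
#define MUT_CHANCE 0.1    /* chance a copy differs from its parent */
#define HARMFUL 0.8       /* most copying errors are harmful ...            */
#define HARM 0.2          /* ... and subtract this much replication ability */
#define HELP 0.1          /* the rare helpful errors add only this much     */

static double frand(void) { return (double)rand() / RAND_MAX; }

int main(void)
{
    double fit[POP], next[POP];
    srand(1);
    for (int i = 0; i < POP; i++)
        fit[i] = 1.0;                  /* everyone starts with the same design */

    for (int g = 0; g < GENERATIONS; g++) {
        double total = 0.0;
        for (int i = 0; i < POP; i++)
            total += fit[i];
        if (g % 20 == 0)
            printf("generation %3d: average replication ability %.2f\n", g, total / POP);

        /* Fill the next generation, choosing parents in proportion to how
           well their design replicates. */
        for (int i = 0; i < POP; i++) {
            double pick = frand() * total, run = 0.0;
            int parent = 0;
            for (int j = 0; j < POP; j++) {
                run += fit[j];
                if (run >= pick) { parent = j; break; }
            }
            next[i] = fit[parent];
            if (frand() < MUT_CHANCE)                   /* an imperfect copy */
                next[i] += (frand() < HARMFUL) ? -HARM : HELP;
            if (next[i] < 0.0)
                next[i] = 0.0;                          /* a broken design cannot replicate */
        }
        for (int i = 0; i < POP; i++)
            fit[i] = next[i];
    }
    return 0;
}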
There are two general improvements that can enhance a replicator’s ability to replicate itself. One is to speed up the rate of replication, and the other is to increase the longevity of the replicator, so that it has more time to make more copies of itself. One way to speed up replication is to increase the ability to assimilate resources, such as making the replicator actively respond to the presence of resources by moving nearer to them or by grabbing hold of them. This involves a primitive stimulus-response system that can detect potential resources and take appropriate action regarding them. Given that some replicators will try to assimilate other replicators, one way to increase the longevity of a replicator is to steer it away from replicators that would assimilate it. This also makes use of a stimulus-response system. In both cases, it helps a replicator when it can direct its actions based on cues from its environment. So, when copying errors led to replicators with such a stimulus-response system, they gave the replicators that had it an important advantage, and those replicators got copied more.
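The arithmetic behind these two routes to improvement is simple enough to show directly. With made-up numbers of my own, a replicator’s expected lifetime output is just its copying rate times its lifespan, so doubling either one doubles the number of copies it leaves behind.

#include <stdio.h>

int main(void)
{
    /* Hypothetical designs: expected lifetime copies = copying rate * lifespan. */
    struct { const char *name; double copies_per_day; double lifespan_days; }
    designs[] = {
        { "baseline replicator",  1.0, 10.0 },
        { "copies twice as fast", 2.0, 10.0 },
        { "lasts twice as long",  1.0, 20.0 },
    };

    for (int i = 0; i < (int)(sizeof designs / sizeof designs[0]); i++)
        printf("%-22s expected copies: %.0f\n", designs[i].name,
               designs[i].copies_per_day * designs[i].lifespan_days);
    return 0;
}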
With replicators preying on each other, advantages that made a replicator a better predator or better at avoiding predators helped it make more copies of itself. This also led to an arms race between predator and prey. Advantages to one made things more difficult for the other, such that it took new advantages to continue to do as well. This competition between predator and prey helped drive the engine of natural selection to keep selecting for new improvements. Think of it this way. If a species of replicator had a super-advantage that always trumped other advantages, other changes to it would not matter, and even the most advantageous changes to it would not accumulate. But if the advantages a species gains have a limited time of usefulness before new advantages are required, the species will continue to accumulate the new advantages that help it continue to reproduce.
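Here is one more toy sketch of that arms race (entirely my own illustration, with arbitrary numbers). Predators and prey each carry a single heritable “skill” number, a replicator’s success depends only on how its skill compares with the average skill on the other side, and copies inherit the parent’s skill with small errors. Both averages tend to ratchet upward together, while neither side gains a lasting lead, which is what keeps the selection engine running.

#include <stdio.h>
#include <stdlib.h>

#define POP 500
#define GENERATIONS 300
#define ERROR_SIZE 0.1     /* how far a copy's skill can stray from its parent's */

static double frand(void) { return (double)rand() / RAND_MAX; }

/* Map the gap between my skill and the other side's average skill to a
   positive fitness that always rewards a bigger gap. */
static double advantage(double d)
{
    double a = d < 0 ? -d : d;
    return 1.0 + d / (1.0 + a);
}

/* Choose a parent in proportion to fitness. */
static int pick_parent(const double *fitness, double total)
{
    double pick = frand() * total, run = 0.0;
    for (int i = 0; i < POP; i++) {
        run += fitness[i];
        if (run >= pick) return i;
    }
    return POP - 1;
}

int main(void)
{
    double predator[POP] = {0}, prey[POP] = {0};   /* one heritable "skill" each */
    double pred_fit[POP], prey_fit[POP], new_pred[POP], new_prey[POP];
    srand(3);

    for (int g = 0; g < GENERATIONS; g++) {
        double pred_mean = 0.0, prey_mean = 0.0;
        double pred_total = 0.0, prey_total = 0.0;

        for (int i = 0; i < POP; i++) {
            pred_mean += predator[i] / POP;
            prey_mean += prey[i] / POP;
        }
        if (g % 50 == 0)
            printf("generation %3d: mean predator skill %.2f, mean prey skill %.2f\n",
                   g, pred_mean, prey_mean);

        /* A predator does well when its skill beats the average prey's, and
           a prey does well when its skill beats the average predator's. */
        for (int i = 0; i < POP; i++) {
            pred_fit[i] = advantage(predator[i] - prey_mean);
            prey_fit[i] = advantage(prey[i] - pred_mean);
            pred_total += pred_fit[i];
            prey_total += prey_fit[i];
        }

        /* Reproduce, with small copying errors in the inherited skill. */
        for (int i = 0; i < POP; i++) {
            new_pred[i] = predator[pick_parent(pred_fit, pred_total)] + (frand() - 0.5) * ERROR_SIZE;
            new_prey[i] = prey[pick_parent(prey_fit, prey_total)] + (frand() - 0.5) * ERROR_SIZE;
        }
        for (int i = 0; i < POP; i++) {
            predator[i] = new_pred[i];
            prey[i] = new_prey[i];
        }
    }
    return 0;
}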
Another advantage that helped increase the longevity of replicators was the ability to work together with relatives as a team or to form symbiotic relationships with unrelated replicators. Advantages such as these eventually led to cellular life, then to multi-cellular life, then to packs or herds of animals working together, then, in the case of humans, to civilization. One advantage that improved the chances of teamwork was some kind of signalling. Communication between members of a team, however primitive it may be at first, would better enable them to work together as a unit. Among humans, this has led to language, to writing, to the printing press, and to the internet.
I could go into greater detail, but I have now provided enough of an understanding of the algorithm behind natural selection. Let me summarize. Stable things last longer and accumulate over time. As stable things accumulate, they become building blocks for new arrangements of matter. The process repeats itself indefinitely, leading to ever larger and more complicated types of stable things. Some new arrangements of matter gain the ability to replicate, and they make copies of themselves. Imperfect replication leads to changes in replicator designs, and the advantageous designs get replicated more. This process repeats indefinitely, leading, perhaps slowly, to ever more sophisticated replicators. At some point, some replicators became sophisticated enough to be called lifeforms, and some lifeforms gained the sophistication to be conscious and intelligent. The details of how this happened may be buried in time, but the general algorithm that produced us depends on nothing more than the nature of stability and replication, and these do not require a conscious designer or programmer.
Now the question turns to how the universe started in the first place. One form of the cosmological argument for God’s existence maintains that since everything has a cause, there must have been an uncaused cause of all existence, and this is alleged to be God. The trouble with assuming that this first cause is God is that God would be a vastly complicated entity whose own origin remains unexplained. To assume that an omniscient, omnipotent, omnibenevolent deity has always existed, completely uncaused by anything, leaves a huge mystery about the nature of the universe. But there is less of a mystery if we take the first cause to be not God but formless, unstable matter. There is no complexity to this that needs to be explained. And given the existence of such stuff, the algorithm of natural selection can account for the complexity and appearance of design we witness in the universe. To account for the universe by saying God made it does not account for the origin of complexity and order, but the algorithm I have described, combined with the existence of some undefined something, does account for it.