What Is Information?

9-26-2008_IDG-Blog
[Image: Deoxyribonucleic Acid, or DNA.]


 

In order to develop this argument and avoid equivocation, it was necessary to carefully define what type of information was present in the cell (and what type of information might, based upon our uniform experience, indicate the prior action of a designing intelligence). Indeed, part of the historical scientific method of reasoning involves first defining what philosophers of science call the explanandum – the entity that needs to be explained. As the historian of biology Harmke Kamminga (1986: 1) has observed, “At the heart of the problem of the origin of life lies a fundamental question: What is it exactly that we are trying to explain the origin of?” Contemporary biology had shown that the cell was, among other things, a repository of information. For this reason, origin-of-life studies had focused increasingly on trying to explain the origin of that information. But what kind of information is present in the cell? This was an important question to answer because the term “information” can be used to denote several theoretically distinct concepts.

 

In developing a case for design from the information-bearing properties of DNA, it was necessary to distinguish two key notions of information from one another: mere information-carrying capacity, on the one hand, and functionally-specified information, on the other. It was important to make this distinction because the kind of information that is present in DNA (like the information present in machine code or written language) has a feature that the well-known Shannon theory of information does not encompass or describe.

 

During the 1940s, Claude Shannon at Bell Laboratories developed a mathematical theory of information (1948: 379–423, 623–56) that equated the amount of information transmitted with the amount of uncertainty reduced or eliminated by a series of symbols or characters (Dretske, 1981: 6–10). In Shannon’s theory, the more improbable an event, the more uncertainty it eliminates, and thus the more information it conveys. Shannon generalized this relationship by stating that the amount of information conveyed by an event is inversely proportional to the prior probability of its occurrence. The greater the number of possibilities, the greater the improbability of any one being actualized, and thus the more information is transmitted when a particular possibility occurs.¹¹

 

Shannon’s theory applies easily to sequences of alphabetic symbols or characters that function as such. Within a given alphabet of x possible characters, the occurrence or placement of a specific character eliminates x-1 other possibilities and thus a corresponding amount of uncertainty. Or, put differently, within any given alphabet or ensemble of x possible characters (where each character has an equiprobable chance of occurring), the probability of any one character occurring is 1/x. In systems where the value of x can be known (or estimated), as in a code or language, mathematicians can easily generate quantitative estimates of information-carrying capacity. The greater the number of possible characters at each site, and the longer the sequence of characters, the greater the information-carrying capacity – or Shannon information – associated with the sequence.
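To make this arithmetic concrete, here is a minimal Python sketch (my illustration, not part of the original text; the function name and example values are my own). It computes the carrying capacity of a sequence drawn from an alphabet of x equiprobable characters.

```python
import math

def carrying_capacity_bits(alphabet_size: int, sequence_length: int) -> float:
    """Shannon information-carrying capacity, in bits, of a sequence of
    equiprobable characters: I = -log2(p), where p = (1/x)^n."""
    p = (1.0 / alphabet_size) ** sequence_length  # probability of one specific sequence
    return -math.log2(p)  # equals sequence_length * log2(alphabet_size)

# One character from a 26-letter alphabet eliminates 25 alternatives (~4.7 bits):
print(carrying_capacity_bits(26, 1))   # 4.700439718141092
# Capacity grows with both alphabet size and sequence length:
print(carrying_capacity_bits(26, 10))  # ~47.0 bits
```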

 

The way that nucleotide bases in DNA function as alphabetic or digital characters enabled molecular biologists to calculate the information-carrying capacity of those molecules using the new formalism of Shannon’s theory. Since at any given site along the DNA backbone any one of four nucleotide bases may occur with equal probability (Küppers, 1987: 355-369), the probability of the occurrence of a specific nucleotide at that site equals 1/4, or .25. The information-carrying capacity of a sequence of a specific length n can then be calculated using Shannon’s familiar expression (I = –log₂p) once one computes a probability value (p) for the occurrence of a particular sequence n nucleotides long, where p = (1/4)ⁿ. The probability value thus yields a corresponding measure of information-carrying capacity for a sequence of n nucleotide bases (Schneider 1997: 427-441; Yockey 1992: 246-258).

___________

¹¹ Moreover, information increases as improbabilities multiply. The probability of getting four heads in a row when flipping a fair coin is 1/2 × 1/2 × 1/2 × 1/2, or (1/2)⁴. Thus, the probability of attaining a specific sequence of heads and/or tails decreases exponentially as the number of trials increases. The quantity of information increases correspondingly. Even so, information theorists found it convenient to measure information additively rather than multiplicatively. Thus, the common mathematical expression (I = –log₂p) for calculating information converts probability values into informational measures through a negative logarithmic function, where the negative sign expresses an inverse relationship between information and probability.
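As a worked illustration of the calculation just described (a sketch of mine, not the author’s; the function name and example lengths are arbitrary): since each of the four bases is assumed equiprobable, each base contributes log₂4 = 2 bits of carrying capacity, so an n-base sequence carries 2n bits.

```python
import math

def dna_capacity_bits(n: int) -> float:
    """Capacity of n nucleotide bases, four equiprobable bases per site:
    I = -log2((1/4)^n) = n * log2(4) = 2n bits."""
    # Computed per site and scaled by n, which avoids floating-point
    # underflow in (1/4)**n for long sequences.
    return n * -math.log2(1.0 / 4.0)

print(dna_capacity_bits(1))    # 2.0 bits per base
print(dna_capacity_bits(100))  # 200.0 bits for a 100-base sequence
```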

 

Though Shannon’s theory and equations provided a powerful way to measure the amount of information that could be transmitted across a communication channel, it had important limits. In particular, it did not and could not distinguish merely improbable (or complex) sequences of symbols from those that conveyed a message or performed a function. As Warren Weaver made clear in 1949, “The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning” (Shannon and Weaver, 1949: 8). Information theory could measure the information-carrying capacity of a given sequence of symbols, but it could not distinguish the presence of a meaningful or functional arrangement of symbols from a random sequence.
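This limitation is easy to demonstrate. In the sketch below (my example strings, with a 27-symbol alphabet of capital letters plus a space assumed equiprobable), an English sentence and a random string of the same length receive exactly the same carrying capacity:

```python
import math
import random
import string

ALPHABET = string.ascii_uppercase + " "  # 27 symbols, assumed equiprobable

def capacity_bits(text: str) -> float:
    """Carrying capacity only: length * log2(alphabet size)."""
    return len(text) * math.log2(len(ALPHABET))

meaningful = "TIME AND TIDE WAIT FOR NO MAN"
random_text = "".join(random.choice(ALPHABET) for _ in range(len(meaningful)))

# Identical measures: the theory is blind to meaning and function.
print(capacity_bits(meaningful))   # ~137.9 bits
print(capacity_bits(random_text))  # ~137.9 bits
```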

 

As scientists applied Shannon information theory to biology, it enabled them to render rough quantitative measures of the information-carrying capacity (or brute complexity or improbability) of DNA sequences and their corresponding proteins. As such, information theory did help to refine biologists’ understanding of one important feature of the crucial biomolecular components on which life depends: DNA and proteins are highly complex, and quantifiably so. Nevertheless, the ease with which information theory applied to molecular biology (to measure information-carrying capacity) created confusion about the sense in which DNA and proteins contain “information.”

 

Information theory strongly suggested that DNA and proteins possess vast information-carrying capacities, as defined by Shannon’s theory. When molecular biologists have described DNA as the carrier of hereditary information, however, they have meant much more than that technically limited sense of the term. Instead, leading molecular biologists defined biological information so as to incorporate the notion of specificity of function (as well as complexity) as early as 1958 (Crick, 1958: 144, 153). Molecular biologists such as Monod and Crick understood biological information – the information stored in DNA and proteins – as something more than mere complexity (or improbability). Crick and Monod also recognized that sequences of nucleotides and amino acids in functioning bio-macromolecules possessed a high degree of specificity relative to the maintenance of cellular function. As Crick explained in 1958, “By information I mean the specification of the amino acid sequence in protein […] Information means here the precise determination of sequence, either of bases in the nucleic acid or on amino acid residues in the protein” (1958: 144, 153).

 

Since the late 1950s, biologists have equated the “precise determination of sequence” with the extra-information-theoretic property of “specificity” or “specification.” Biologists have defined specificity tacitly as ‘necessary to achieving or maintaining function.’ They have determined that DNA base sequences are specified, not by applying information theory, but by making experimental assessments of the function of those sequences within the overall apparatus of gene expression (Judson, 1979: 470-487). Similar experimental considerations established the functional specificity of proteins.

 

In developing an argument for intelligent design based upon the information present in DNA and other bio-macromolecules, I emphasized that the information in these molecules was functionally-specified and complex, not just complex. Indeed, to avoid equivocation, it was necessary to distinguish:

 

  • “information content” from mere “information-carrying capacity,”
  • “specified information” from mere “Shannon information,” and
  • “specified complexity” from mere “complexity.”

 


 

The first term in each of these couplets refers to sequences in which the function of the sequence depends upon the precise sequential arrangement of the constituent characters or parts, whereas the second term refers to sequences that do not necessarily perform functions or convey meaning at all. The second terms denote sequences that may be merely improbable or complex; the first terms denote sequences that are both complex and functionally-specified.

 

In developing an argument for intelligent design from the information-bearing properties of DNA, I acknowledged that merely complex or improbable phenomena or sequences might arise by undirected natural processes. Nevertheless, I argued – based upon our uniform experience – that sequences that are both complex and functionally-specified (rich in information content or specified information) invariably arise only from the activity of intelligent agents. Thus, I argued that the presence of specified information provides a hallmark or signature of a designing intelligence. In making these analytical distinctions and applying them to the analysis of biological systems, I was greatly assisted by my conversations and collaboration with William Dembski, who was at the same time (1992-1997) developing a general theory of design detection, which I discuss in detail below.

 

In the years that followed, I published a series of papers (see Meyer 1998a: 519-56; Meyer 1998b: 117-143; Meyer 2000a: 30-38; Meyer 2003a: 225-285) arguing that intelligent design provides a better explanation than competing chemical evolutionary models for the origin of biological information. To make this argument, I followed the standard method of historical scientific reasoning that I had studied in my doctoral work. In particular, I evaluated the causal adequacy of various naturalistic explanations for the origin of biological information, including those based on chance, on law-like necessity, and on the combination of the two. In each case, I showed (or the scientific literature showed) that such naturalistic models failed to explain the origin of specified information (or specified complexity or information content) starting from purely physical/chemical antecedents. Instead, I argued, based on our experience, that there is a cause – namely, intelligence – that is known to be capable of producing such information. As the pioneering information theorist Henry Quastler (1964: 16) pointed out, “Information habitually arises from conscious activity.” Moreover, based upon our experience (and the findings of contemporary origin-of-life research), it is clear that intelligent design or agency is the only type of cause known to produce large amounts of specified information. Therefore, I argued that the theory of intelligent design provides the best explanation for the information necessary to build the first life.¹²
