Creationist Definition of Information

Discussion in 'Science' started by Gup20, Apr 17, 2005.

  1. Gup20

    New Member

    May 11, 2004
    Dr. Werner Gitt Audio:

    Dr. Werner Gitt Article:


    Information, science and biology

    by Werner Gitt

    Energy and matter are considered to be basic universal quantities. However, the concept of information has become just as fundamental and far-reaching, justifying its categorisation as the third fundamental quantity. One of the intrinsic characteristics of life is information. A rigorous analysis of the characteristics of information demonstrates that living things intrinsically reflect both the mind and will of their Creator.

    Information confronts us at every turn both in technological and in natural systems: in data processing, in communications engineering, in control engineering, in the natural languages, in biological communications systems, and in information processes in living cells. Thus, information has rightly become known as the third fundamental, universal quantity. Hand in hand with the rapid developments in computer technology, a new field of study—that of information science—has attained a significance that could hardly have been foreseen only two or three decades ago. In addition, information has become an interdisciplinary concept of undisputed central importance to fields such as technology, biology and linguistics. The concept of information therefore requires a thorough discussion, particularly with regard to its definition, with understanding of its basic characteristic features and the establishment of empirical principles. This paper is intended to make a contribution to such a discussion.
    Information: a statistical study

    With his 1948 paper entitled ‘A Mathematical Theory of Communication’, Claude E. Shannon was the first to devise a mathematical definition of the concept of information. His measure of information, which is given in bits (binary digits), possessed the advantage of allowing quantitative statements to be made about relationships that had previously defied precise mathematical description. This method has an evident drawback, however: information according to Shannon does not relate to the qualitative nature of the data, but confines itself to one particular aspect that is of special significance for its technological transmission and storage. Shannon completely ignores whether a text is meaningful, comprehensible, correct, incorrect or meaningless. Equally excluded are the important questions as to where the information comes from (transmitter) and for whom it is intended (receiver). As far as Shannon’s concept of information is concerned, it is entirely irrelevant whether a series of letters represents an exceptionally significant and meaningful text or whether it has come about by throwing dice. Indeed, paradoxical though it may sound, considered from the point of view of information theory, a random sequence of letters possesses the maximum information content, whereas a text of equal length, although linguistically meaningful, is assigned a lower value.

    The definition of information according to Shannon is limited to just one aspect of information, namely its property of expressing something new: information content is defined in terms of newness. This does not mean a new idea, a new thought or a new item of information—that would involve a semantic aspect—but relates merely to the greater surprise effect that is caused by a less common symbol. Information thus becomes a measure of the improbability of an event. A very improbable symbol is therefore assigned correspondingly high information content.

    Before a source of symbols (not a source of information!) generates a symbol, uncertainty exists as to which particular symbol will emerge from the available supply of symbols (for example, an alphabet). Only after the symbol has been generated is the uncertainty eliminated. According to Shannon, therefore, the following applies: information is the uncertainty that is eliminated by the appearance of the symbol in question. Since Shannon is interested only in the probability of occurrence of the symbols, he addresses himself merely to the statistical dimension of information. His concept of information is thus confined to a non-semantic aspect. According to Shannon, information content is defined such that three conditions must be fulfilled:

    Summation condition: The information contents of mutually independent symbols (or chains of symbols) should be capable of addition. The summation condition views information as something quantitative.

    Probability condition: The information content to be ascribed to a symbol (or to a chain of symbols) should rise as the level of surprise increases. The surprise effect of the less common ‘z’ (low probability) is greater than that of the more frequent ‘e’ (high probability). It follows from this that the information content of a symbol should increase as its probability decreases.

    The bit as a unit of information: In the simplest case, when the supply of symbols consists of just two symbols, which, moreover, occur with equal frequency, the information content of one of these symbols should be assigned a unit of precisely 1 bit. The following empirical principle can be derived from this:

    Theorem 1: The statistical information content of a chain of symbols is a quantitative concept. It is given in bits (binary digits).

    According to Shannon’s definition, the information content of a single item of information (an item of information in this context merely means a symbol, character, syllable, or word) is a measure of the uncertainty existing prior to its reception. Since the probability of its occurrence may only assume values between 0 and 1, the numerical value of the information content is always positive. The information content of a plurality of items of information (for example, characters) results (according to the summation condition) from the summation of the values of the individual items of information. This yields an important characteristic of information according to Shannon:
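    The three conditions above can be illustrated in a few lines of Python (a sketch of Shannon's definition, not part of Gitt's paper; the example probabilities are hypothetical, not measured letter frequencies):

```python
import math

def information_content(p: float) -> float:
    """Shannon information content, in bits, of a symbol with occurrence probability p."""
    return -math.log2(p)

# Probability condition: a rarer symbol carries a greater surprise effect.
# (The probabilities below are illustrative only.)
assert information_content(0.01) > information_content(0.15)

# Summation condition: for mutually independent symbols, probabilities
# multiply, so the information contents add.
p1, p2 = 0.5, 0.25
assert math.isclose(information_content(p1 * p2),
                    information_content(p1) + information_content(p2))

# The bit as a unit: two equally probable symbols yield exactly 1 bit each.
assert information_content(0.5) == 1.0
```

    Note that the numerical value is always positive, since probabilities lie between 0 and 1.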

    Theorem 2: According to Shannon’s theory, a disturbed signal generally contains more information than an undisturbed signal, because, in comparison with the undisturbed transmission, it originates from a larger quantity of possible alternatives.

    Shannon’s theory also states that information content increases directly with the number of symbols. How inappropriately such a relationship describes actual information content becomes apparent from the following situation: If someone uses many words to say virtually nothing, then, according to Shannon, in accordance with the large number of letters, this utterance is assigned a very high information content, whereas the utterance of another person, who is skilled in expressing succinctly that which is essential, is ascribed only a very low information content.

    Furthermore, in its equation of information content, Shannon’s theory uses the factor of entropy to take account of the different frequency distributions of the letters. Entropy thus represents a generalised but specific feature of the language used. Given an equal number of symbols (for example, languages that use the Latin alphabet), one language will have a higher entropy value than another language if its frequency distribution is closer to a uniform distribution. Entropy assumes its maximum value in the extreme case of uniform distribution.
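    The entropy referred to here is the probability-weighted average of the information contents of the individual symbols. A minimal sketch (not from the original paper) showing that, for a fixed symbol supply, a uniform distribution maximises it:

```python
import math

def entropy(probs) -> float:
    """Average information content (Shannon entropy) in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Over the same four-symbol supply, the uniform distribution has maximum entropy.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.70, 0.10, 0.10, 0.10]
assert math.isclose(entropy(uniform), 2.0)   # log2(4) bits/symbol
assert entropy(skewed) < entropy(uniform)
```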
    Symbols: a look at their average information content

    If the individual symbols of a long sequence of symbols are not equally probable (for example, text), what is of interest is the average information content for each symbol in this sequence as well as the average value over the entire language. When this theory is applied to the various code systems, the average information content for one symbol results as follows:

    In the German language: I = 4.113 bits/letter

    In the English language: I = 4.046 bits/letter

    In the dual system: I = 1 bit/digit

    In the decimal system: I = 3.32 bits/digit

    In the DNA molecule: I = 2 bits/nucleotide
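    The values for the dual system, the decimal system and DNA follow directly from log2(n) for n equally probable symbols; the German and English figures lie below log2(26) because letter use in natural languages is non-uniform. A quick check (illustration only, not part of the paper):

```python
import math

# For n equally probable symbols, the information content is log2(n) bits/symbol.
assert math.log2(2) == 1.0               # dual (binary) system: 1 bit/digit
assert round(math.log2(10), 2) == 3.32   # decimal system: 3.32 bits/digit
assert math.log2(4) == 2.0               # DNA alphabet (A, T, G, C): 2 bits/nucleotide

# Upper bound for 26 equiprobable Latin letters; the quoted German (4.113)
# and English (4.046) averages fall below it because of non-uniform frequencies.
assert round(math.log2(26), 2) == 4.70
```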

    The highest information density

    The highest information density known to us is that of the DNA (deoxyribonucleic acid) molecules of living cells. This chemical storage medium is 2 nm in diameter and has a helix pitch of 3.4 nm (see Figure 1). This results in a volume of 10.68 × 10⁻²¹ cm³ per spiral. Each spiral contains ten chemical letters (nucleotides), resulting in a volumetric information density of 0.94 × 10²¹ letters/cm³. In the genetic alphabet, the DNA molecules contain only the four nucleotide bases, that is, adenine, thymine, guanine and cytosine. The information content of such a letter is 2 bits/nucleotide. Thus, the statistical information density is 1.88 × 10²¹ bits/cm³.
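    These density figures can be reproduced by treating one turn of the helix as a cylinder of diameter 2 nm and height 3.4 nm (a back-of-the-envelope check, not from the original paper):

```python
import math

# One helix turn as a cylinder: radius 1 nm, pitch (height) 3.4 nm, in cm.
radius_cm = 1e-7
pitch_cm = 3.4e-7
volume_per_turn = math.pi * radius_cm**2 * pitch_cm   # cm³ per spiral
assert math.isclose(volume_per_turn, 10.68e-21, rel_tol=0.01)

letters_per_turn = 10                                 # nucleotides per spiral
letter_density = letters_per_turn / volume_per_turn   # letters/cm³
assert math.isclose(letter_density, 0.94e21, rel_tol=0.01)

bit_density = 2 * letter_density                      # 2 bits/nucleotide
assert math.isclose(bit_density, 1.88e21, rel_tol=0.01)
```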

    Proteins are the basic substances that compose living organisms and include, inter alia, such important compounds as enzymes, antibodies, haemoglobins and hormones. These important substances are both organ- and species-specific. In the human body alone, there are at least 50,000 different proteins performing important functions. Their structures must be coded just as effectively as the chemical processes in the cells, in which synthesis must take place with the required dosage in accordance with an optimised technology. It is known that all the proteins occurring in living organisms are composed of a total of just 20 different chemical building blocks (amino acids). The precise sequence of these individual building blocks is of exceptional significance for life and must therefore be carefully defined. This is done with the aid of the genetic code. Shannon’s information theory makes it possible to determine the smallest number of letters that must be combined to form a word in order to allow unambiguous identification of all amino acids. With 20 amino acids, the average information content is 4.32 bits/amino acid. If words are made up of two letters (doublets), with 4 bits/word, these contain too little information. Quartets would have 8 bits/word and would be too complex. According to information theory, words of three letters (triplets) having 6 bits/word are sufficient and are therefore the most economical method of coding. Binary coding with two chemical letters is also, in principle, conceivable. This, however, would require a quintet to represent each amino acid and would be 67 per cent less economical than the use of triplets.
    Figure 1

    Figure 1. The DNA molecule—the universal storage medium of natural systems. A short section of a strand of the double helix with sugar-phosphate chain reveals its chemical structure (left). The schematic representation of the double helix (right) shows the base pairs coupled by hydrogen bridges (in a plane perpendicular to the helical axis).
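    The word-length argument above is a counting exercise: a word of n letters from a 4-letter alphabet can distinguish 4^n states. A short sketch (illustrative, not from the paper):

```python
import math

amino_acids = 20

# Minimum information needed to identify one of 20 amino acids unambiguously.
assert round(math.log2(amino_acids), 2) == 4.32   # bits/amino acid

# A word of n letters from the 4-letter genetic alphabet encodes 4**n states.
assert 4**2 < amino_acids     # doublets (4 bits/word): only 16 words, too few
assert 4**3 >= amino_acids    # triplets (6 bits/word): 64 words, sufficient

# A binary alphabet would need quintets, since 2**4 < 20 <= 2**5; using
# 5 letters per word instead of 3 costs 2/3, i.e. about 67% more letters.
assert 2**4 < amino_acids <= 2**5
assert round((5 - 3) / 3, 2) == 0.67
```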
    Computer chips and natural storage media

    Figures 1, 2 and 3 show three different storage technologies: the DNA molecule, the core memory, and the microchip. Let’s take a look at these.

    Core memory: Earlier core memories were capable of storing 4,096 bits in an area of 6,400 mm² (see Figure 2). This corresponds to an area storage density of 0.64 bits/mm². With a core diameter of 1.24 mm (storage volume 7,936 mm³), a volumetric storage density of 0.52 bits/mm³ is obtained.
    Figure 2

    Figure 2. Detail of the TR440 computer’s core-memory matrix (Manufacturer: Computer Gesellschaft Konstanz).

    1-Mbit DRAM: The innovative leap from the core memory to the semiconductor memory is expressed in striking figures in terms of storage density; present-day 1-Mbit DRAMs (see Figure 3) permit the storage of 1,048,576 bits in an area of approximately 50 mm², corresponding to an area storage density of 21,000 bits/mm². With a thickness of approximately 0.5 mm, we thus obtain a volumetric storage density of 42,000 bits/mm³. The megachip surpasses the core memory in terms of area storage density by a factor of 32,800 and in terms of volumetric storage density by a factor of 81,000.
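    The improvement factors follow directly from the raw figures (the text rounds them); a quick check, not part of the original paper:

```python
import math

core_area_density = 4096 / 6400              # bits/mm², approx. 0.64
core_vol_density = 4096 / 7936               # bits/mm³, approx. 0.52

dram_area_density = 1_048_576 / 50           # bits/mm², approx. 21,000
dram_vol_density = dram_area_density / 0.5   # 0.5 mm thick: approx. 42,000 bits/mm³

# The factors of roughly 32,800 and 81,000 quoted in the text:
assert math.isclose(dram_area_density / core_area_density, 32_800, rel_tol=0.01)
assert math.isclose(dram_vol_density / core_vol_density, 81_000, rel_tol=0.01)
```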
    Figure 3

    Figure 3. The 1-Mbit DRAM—a dynamic random-access memory for 1,048,576 bits.

    DNA molecule: The carriers of genetic information, which perform their biological functions throughout an entire life, are nucleic acids. All cellular organisms and many viruses employ DNAs that are twisted in an identical manner to form double helices; the remaining viruses employ single-stranded ribonucleic acids (RNA). The figures obtained from a comparison with man-made storage devices are nothing short of astronomical if one includes the DNA molecule (see Figure 1). In this super storage device, the storage density is exploited to the physico-chemical limit: its value for the DNA molecule is 45 × 10¹² times that of the megachip. What is the explanation for this immense difference of 45 trillion between VLSI technology and natural systems? There are three decisive reasons:


    The DNA molecule uses genuine volumetric storage technology, whereas storage in computer devices is area-oriented. Even though the structures of the chips comprise several layers, their storage elements only have a two-dimensional orientation.

    Theoretically, one single molecule is sufficient to represent an information unit. This most economical of technologies has been implemented in the design of the DNA molecule. In spite of all research efforts on miniaturisation, industrial technology is still within the macroscopic range.

    Only two circuit states are possible in chips; this leads to exclusively binary codes. In the DNA molecule, there are four chemical symbols (see Figure 1); this permits a quaternary code in which one state already represents 2 bits.

    The knowledge currently stored in the libraries of the world is estimated at 10¹⁸ bits. If it were possible for this information to be stored in DNA molecules, 1 per cent of the volume of a pinhead would be sufficient for this purpose. If, on the other hand, this information were to be stored with the aid of megachips, we would need a pile higher than the distance between the earth and the moon.
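    Both claims can be sanity-checked from the densities given earlier (the pinhead size and the earth-moon distance are assumed figures, not stated in the paper):

```python
library_bits = 1e18                      # estimated knowledge in the world's libraries

# DNA: statistical information density from the earlier calculation.
dna_volume_cm3 = library_bits / 1.88e21
assert dna_volume_cm3 < 1e-3             # about half a cubic millimetre of DNA

# A pinhead of roughly 4.7 mm diameter (assumed) has a volume of about 0.054 cm³,
# so the DNA needed is on the order of 1 per cent of it.
pinhead_cm3 = (4 / 3) * 3.14159 * 0.235**3
assert 0.005 < dna_volume_cm3 / pinhead_cm3 < 0.02

# Megachips: 1,048,576 bits per chip, each about 0.5 mm thick, stacked in a pile.
chips_needed = library_bits / 1_048_576
pile_height_km = chips_needed * 0.5e-6   # 0.5 mm = 0.5e-6 km
earth_moon_km = 384_400                  # mean earth-moon distance (assumed figure)
assert pile_height_km > earth_moon_km
```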
    The five levels of information

    Shannon’s concept of information is adequate to deal with the storage and transmission of data, but it fails when trying to understand the qualitative nature of information.

    Theorem 3: Since Shannon’s definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning.

    In order to be able adequately to evaluate information and its processing in different systems, both animate and inanimate, we need to widen the concept of information considerably beyond the bounds of Shannon’s theory. Figure 4 illustrates how information can be represented as well as the five levels that are necessary for understanding its qualitative nature.
    Level 1: statistics

    Shannon’s information theory is well suited to an understanding of the statistical aspect of information. This theory makes it possible to give a quantitative description of those characteristics of languages that are based intrinsically on frequencies. However, whether a chain of symbols has a meaning is not taken into consideration. Also, the question of grammatical correctness is completely excluded at this level.
    Level 2: syntax

    In chains of symbols conveying information, the stringing-together of symbols to form words as well as the joining of words to form sentences are subject to specific rules, which, for each language, are based on consciously established conventions. At the syntactical level, we require a supply of symbols (code system) in order to represent the information. Most written languages employ letters; however, an extremely wide range of conventions is in use for various purposes: Morse code, hieroglyphics, semaphore, musical notes, computer codes, genetic codes, figures in the dance of foraging bees, odour symbols in the pheromone languages of insects, and hand movements in sign language.

    The field of syntax involves the following questions:


    Which symbol combinations are defined characters of the language (code)?

    Which symbol combinations are defined words of the particular language (lexicon, spelling)?

    How should the words be positioned with respect to one another (sentence formation, word order, style)? How should they be joined together? And how can they be altered within the structure of a sentence (grammar)?

    Figure 4

    Figure 4. The five mandatory levels of information (middle) begin with statistics (at the lowest level). At the highest level is apobetics (purpose).

    The syntax of a language, therefore, comprises all the rules by which individual elements of language can or must be combined. The syntax of natural languages is of a much more complex structure than that of formalised or artificial languages. Syntactical rules in formalised languages must be complete and unambiguous, since, for example, a compiler has no way of referring back to the programmer’s semantic considerations. At the syntactical level of information, we can formulate several theorems to express empirical principles:

    Theorem 4: A code is an absolutely necessary condition for the representation of information.

    Theorem 5: The assignment of the symbol set is based on convention and constitutes a mental process.

    Theorem 6: Once the code has been freely defined by convention, this definition must be strictly observed thereafter.

    Theorem 7: The code used must be known both to the transmitter and receiver if the information is to be understood.

    Theorem 8: Only those structures that are based on a code can represent information (because of Theorem 4). This is a necessary, but still inadequate, condition for the existence of information.

    These theorems already allow fundamental statements to be made at the level of the code. If, for example, a basic code is found in any system, it can be concluded that the system originates from a mental concept.
    Level 3: semantics

    Chains of symbols and syntactical rules form the necessary precondition for the representation of information. The decisive aspect of a transmitted item of information, however, is not the selected code, the size, number or form of the letters, or the method of transmission (script, optical, acoustic, electrical, tactile or olfactory signals), but the message it contains, what it says and what it means (semantics). This central aspect of information plays no part in its storage and transmission. The price of a telegram depends not on the importance of its contents but merely on the number of words. What is of prime interest to both sender and recipient, however, is the meaning; indeed, it is the meaning that turns a chain of symbols into an item of information. It is in the nature of every item of information that it is emitted by someone and directed at someone. Wherever information occurs, there is always a transmitter and a receiver. Since no information can exist without semantics, we can state:

    Theorem 9: Only that which contains semantics is information.

    According to a much-quoted statement by Norbert Wiener, the founder of cybernetics, information cannot be of a physical nature:

    ‘Information is information, neither matter nor energy. No materialism that fails to take account of this can survive the present day.’

    The Dortmund information scientist Werner Strombach emphasises the non-material nature of information when he defines it as ‘an appearance of order at the level of reflective consciousness.’ Semantic information, therefore, defies a mechanistic approach. Accordingly, a computer is only ‘a syntactical device’ (Zemanek) which knows no semantic categories. Consequently, we must distinguish between data and knowledge, between algorithmically conditioned branches in a programme and deliberate decisions, between comparative extraction and association, between determination of values and understanding of meanings, between formal processes in a decision tree and individual selection, between consequences of operations in a computer and creative thought processes, between accumulation of data and learning processes. A computer can do the former; this is where its strengths, its application areas, but also its limits lie. Meanings always represent mental concepts; we can therefore further state:

    Theorem 10: Each item of information needs, if it is traced back to the beginning of the transmission chain, a mental source (transmitter).

    Theorems 9 and 10 basically link information to a transmitter (an intelligent information source). Whether the information is understood by a receiver or not does nothing to change its existence. Even before they were deciphered, the inscriptions on Egyptian obelisks were clearly regarded as information, since they obviously did not originate from a random process. Before the discovery of the Rosetta Stone (1799), the semantics of these hieroglyphics was beyond the comprehension of any contemporary person (receiver); nevertheless, these symbols still represented information.

    All suitable formant devices (linguistic configurations) that are capable of expressing meanings (mental substrates, thoughts, contents of consciousness) are termed languages. It is only by means of language that information may be transmitted and stored on physical carriers. The information itself is entirely invariant, both with regard to change of transmission system (acoustic, optical, electrical) and also of storage system (brain, book, computer system, magnetic tape). The reason for this invariance lies in its non-material nature. We distinguish between different kinds of languages:


    Natural languages: at present, there are approximately 5,100 living languages on earth.

    Artificial or sign languages: Esperanto, sign language, semaphore, traffic signs.

    Artificial (formal) languages: logical and mathematical calculations, chemical symbols, shorthand, algorithmic languages, programming languages.

    Specialist languages in engineering: building plans, design plans, block diagrams, bonding diagrams, circuit diagrams in electrical engineering, hydraulics, pneumatics.

    Special languages in the living world: genetic language, the foraging-bee dance, pheromone languages, hormone language, signal system in a spider’s web, dolphin language, instincts (for example, flight of birds, migration of salmon).

    Common to all languages is that these formant devices use defined systems of symbols whose individual elements operate with fixed, uniquely agreed rules and semantic correspondences. Every language has units (for example, morphemes, lexemes, phrases and whole sentences in natural languages) that act as semantic elements (formatives). Meanings are correspondences between the formatives, within a language, and imply a unique semantic assignment between transmitter and receiver.

    Any communication process between transmitter and receiver consists of the formulation and comprehension of the sememes (sema = sign) in one and the same language. In the formulation process, the thoughts of the transmitter generate the transmissible information by means of a formant device (language). In the comprehension process, the combination of symbols is analysed and imaged as corresponding thoughts in the receiver.
    Level 4: pragmatics

    Up to the level of semantics, the question of the objective pursued by the transmitter in sending information is not relevant. Every transfer of information is, however, performed with the intention of producing a particular result in the receiver. To achieve the intended result, the transmitter considers how the receiver can be made to satisfy his planned objective. This intentional aspect is expressed by the term pragmatics. In language, sentences are not simply strung together; rather, they represent a formulation of requests, complaints, questions, inquiries, instructions, exhortations, threats and commands, which are intended to trigger a specific action in the receiver. Strombach defines information as a structure that produces a change in a receiving system. By this, he stresses the important aspect of action. In order to cover the wide variety of types of action, we may differentiate between:


    Modes of action without any degree of freedom (rigid, indispensable, unambiguous, program-controlled), such as program runs in computers, machine translation of natural languages, mechanised manufacturing operations, the development of biological cells, the functions of organs;

    Modes of action with a limited degree of freedom, such as the translation of natural languages by humans and instinctive actions (patterns of behaviour in the animal kingdom);

    Modes of action with the maximum degree of freedom (flexible, creative, original; only in humans), for example, acquired behaviour (social deportment, activities involving manual skills), reasoned actions, intuitive actions and intelligent actions based on free will.

    All these modes of action on the part of the receiver are invariably based on information that has been previously designed by the transmitter for the intended purpose.
    Level 5: apobetics

    The final and highest level of information is purpose. The concept of apobetics has been introduced for this reason by linguistic analogy with the previous definitions. The result at the receiving end is based at the transmitting end on the purpose, the objective, the plan, or the design. The apobetic aspect of information is the most important one, because it inquires into the objective pursued by the transmitter. The following question can be asked with regard to all items of information: Why is the transmitter transmitting this information at all? What result does he/she/it wish to achieve in the receiver? The following examples are intended to deal somewhat more fully with this aspect:


    Computer programmes are target-oriented in their design (for example, the solving of a system of equations, the inversion of matrices, system tools).

    With its song, the male bird would like to gain the attention of the female or to lay claim to a particular territory.

    With the advertising slogan for a detergent, the manufacturer would like to persuade the receiver to decide in favour of its product.

    Humans are endowed with the gift of natural language; they can thus enter into communication and can formulate objectives.

    We can now formulate some further theorems:

    Theorem 11: The apobetic aspect of information is the most important, because it embraces the objective of the transmitter. The entire effort involved in the four lower levels is necessary only as a means to an end in order to achieve this objective.

    Theorem 12: The five aspects of information apply both at the transmitter and receiver ends. They always involve an interaction between transmitter and receiver (see Figure 4).

    Theorem 13: The individual aspects of information are linked to one another in such a manner that the lower levels are always a prerequisite for the realisation of higher levels.

    Theorem 14: The apobetic aspect may sometimes largely coincide with the pragmatic aspect. It is, however, possible in principle to separate the two.

    Having completed these considerations, we are in a position to formulate conditions that allow us to distinguish between information and non-information. Two necessary conditions (NCs; to be satisfied simultaneously) must be met if information is to exist:

    NC1: A code system must exist.

    NC2: The chain of symbols must contain semantics.

    Sufficient conditions (SCs) for the existence of information are:

    SC1: It must be possible to discern the ulterior intention at the semantic, pragmatic and apobetic levels (example: Karl v. Frisch analysed the dance of foraging bees and, in conformance with our model, ascertained the levels of semantics, pragmatics and apobetics. In this case, information is unambiguously present).

    SC2: A sequence of symbols does not represent information if it is based on randomness. According to G.J. Chaitin, an American informatics expert, randomness cannot, in principle, be proven; in this case, therefore, communication about the originating cause is necessary.

    The above information theorems play a role not only in technological applications (for example, computer technology); they embrace all information wherever else it occurs (for example, in linguistics and in living organisms).
    Information in living organisms

    Life confronts us in an exceptional variety of forms; for all its simplicity, even a monocellular organism is more complex and purposeful in its design than any product of human invention. Although matter and energy are necessary fundamental properties of life, they do not in themselves imply any basic differentiation between animate and inanimate systems. One of the prime characteristics of all living organisms, however, is the information they contain for all operational processes (performance of all life functions, genetic information for reproduction). Braitenberg, a German cyberneticist, has submitted evidence ‘that information is an intrinsic part of the essential nature of life.’ The transmission of information plays a fundamental role in everything that lives. When insects transmit pollen from flower blossoms, (genetic) information is essentially transmitted; the matter involved in this process is insignificant. Although this in no way provides a complete description of life as yet, it touches upon an extremely crucial factor.

    Without a doubt, the most complex information-processing system in existence is the human body. If we take all human information processes together, that is, conscious ones (language) and unconscious ones (information-controlled functions of the organs, the hormone system), this involves the processing of 10²⁴ bits daily. This astronomically high figure is higher by a factor of 1,000,000 than the total human knowledge of 10¹⁸ bits stored in all the world’s libraries.
    The concept of information

    On the basis of Shannon’s information theory, which can now be regarded as being mathematically complete, we have extended the concept of information as far as the fifth level. The most important empirical principles relating to the concept of information have been defined in the form of theorems. Here is a brief summary of them:¹


    No information can exist without a code.

    No code can exist without a free and deliberate convention.

    No information can exist without the five hierarchical levels: statistics, syntax, semantics, pragmatics and apobetics.

    No information can exist in purely statistical processes.

    No information can exist without a transmitter.

    No information chain can exist without a mental origin.

    No information can exist without an initial mental source; that is, information is, by its nature, a mental and not a material quantity.

    No information can exist without a will.

    The Bible has long made it clear that the creation of the original groups of fully operational living creatures, programmed to transmit their information to their descendants, was the deliberate act of the mind and the will of the Creator, the great Logos Jesus Christ.

    We have already shown that life is overwhelmingly loaded with information; it should be clear that a rigorous application of the science of information is devastating to materialistic philosophy in the guise of evolution, and strongly supportive of Genesis creation.²

    1. This paper has presented only a qualitative survey of the higher levels of information. A quantitative survey is among the many tasks still to be performed.
    2. This paper has been adapted from a paper entitled ‘Information: the third fundamental quantity’ that was published in the November/December 1989 issue of Siemens Review (Vol. 56, No. 6).

    Professor Werner Gitt completed a ‘Diplom-Ingenieur’ at the Technical University of Hanover/Germany in 1968, and subsequently completed the required research for his doctorate in Engineering at the Technical University of Aachen, graduating summa cum laude with the prestigious Borchers Medal. Since 1971 he has worked at the German Federal Institute of Physics and Technology, Brunswick, as Head of Data Processing, and since 1978 as Director and Professor at the Institute.
  3. Gup20
    The marvellous ‘message molecule’

    by Carl Wieland

    When someone sends a message, something rather fascinating and mysterious gets passed along. Let's say Alphonse in Alsace wants to send the message, 'Ned, the war is over. Al'. He dictates it to a friend; the message has begun as patterns of air compression (spoken words). His friend puts it down as ink on paper and mails it to another, who puts it in a fax machine. The machine transfers the message into a coded pattern of electrical impulses, which are sent down a phone line and received at a remote Indian outpost where it is printed out in letters once again. Here the person who reads the fax lights a campfire and sends the same message as a pattern of smoke signals. Old Ned in Nevada, miles away, looks up and gets the exact message that was meant for him. Nothing physical has been transmitted; not a single atom or molecule of any substance travelled from Alsace to Nevada, yet it is obvious that something travelled all the way.

    This elusive something is called information. It is obviously not a material thing, since no matter has been transmitted. Yet it seems to need matter on which to 'ride' during its journey. This is true whether the message is in Turkish, Tamil or Tagalog. The matter on which information travels can change without the information having to change. Air molecules being compressed in sound waves; ink and paper; electrons travelling down phone wires; semaphore signals—whatever—all involve material media used to transmit information, but the medium is not the information.

    This fascinating thing called information is the key to understanding what makes life different from dead matter. It is the Achilles' heel of all materialist explanations of life, which say that life is nothing more than matter obeying the laws of physics and chemistry. Life is more than just physics and chemistry; living things carry vast amounts of information.

    Some might argue that a sheet of paper carrying a written message is nothing more than ink and paper obeying the laws of physics and chemistry. But ink and paper unaided do not write messages—minds do. The alphabetical letters in a Scrabble® kit do not constitute information until someone puts them into a special sequence—mind is needed to get information. You can program a machine to arrange Scrabble® letters into a message, but a mind had to write the program for the machine.

    How is the information for life carried? How is the message which spells out the recipe that makes a frog, rather than a frangipani tree, sent from one generation to the next? How is it stored? What matter does it 'ride' on? The answer is the marvellous 'message molecule' called DNA. This molecule is like a long rope or string of beads, which is tightly coiled up inside the centre of every cell of your body. This is the molecule that carries the programs of life, the information which is transmitted from each generation to the next. *

    Some people think that DNA is alive—this is wrong. DNA is a dead molecule. It can't copy itself—you need the machinery of a living cell to make copies of a DNA molecule. It may seem as if DNA is the information in your body. Not so—the DNA is simply the carrier of the message, the 'medium' on which the message is written. In the same way, Scrabble® letters are not information until the message is 'imposed' on them from the 'outside'. Think of DNA as a chain of such alphabet letters linked together, with a large variety of different ways in which this can happen. Unless they are joined in the right sequence, no usable message will result, even though it is still DNA.

    Now to read the message, you need a pre-existing language code or convention, as well as machinery to translate it. All of that machinery exists in the cell. Like man-made machinery, it does not arise by itself from the properties of the raw materials. If you just throw the basic raw ingredients for a living cell together, without information nothing will happen. Machines and programs do not come from the laws of physics and chemistry by themselves. Why? Because they reflect information, and information has never been observed to come about by unaided, raw matter plus time and chance. Information is the very opposite of chance—if you want to arrange letters into a sequence to spell a message, a particular order has to be imposed on the matter.

    When living things reproduce, they transmit information from one generation to the next. This information, travelling on the DNA from mother and from father, is the 'instruction manual' which enables the machinery in a fertilized egg cell to construct, from raw materials, the new living organism—a fantastic feat. This is in a new combination so that children are not exactly like their parents, although the information itself, which is expressed in the make-up of those children, was there all along in both parents. That is, the deck was reshuffled, but no new cards were added.

    Just how much space does DNA need to store its information? The technological achievements of humankind in storing information seem sensational. Imagine how much information is stored on a videotape of a movie, for example—you can hold it all in one hand. Yet compared to this, the feat of information miniaturization performed by DNA is nothing short of mind-blowing. For a given amount of information, the room needed to store it on DNA is about a trillionth of that for information on videotape—i.e. it is a million million times more efficient at storing information.1

    How much information is contained in the DNA code which describes you? Estimates vary widely. Using simple analogies, based upon the storage space in DNA, they range from 500 large library books of small-type information, to more than 100 complete 30 volume encyclopaedia sets. When you think about it, even that is probably not enough to specify the intricate construction of even the human brain, with its trillions of precise connections. There are probably higher-level information storage and multiplication systems in the body that we have not even dreamed of yet—there are many more marvellous mysteries waiting to be discovered about the Creator's handiwork.

    Not only is the way in which DNA is encoded highly efficient—even more space is saved by the way in which it is tightly coiled up. According to genetics expert Professor Jérôme Lejeune, all the information required to specify the exact make-up of every unique human being on Earth could be stored in a volume of DNA no bigger than a couple of aspirin tablets! 2 If you took the DNA from one single cell in your body (an amount of matter so small you would need a microscope to see it) and unravelled it, it would stretch to two metres!

    This becomes truly sensational when you consider that there are 75 to 100 trillion cells in the body. Taking the lower figure, it means that if we stretched out all of the DNA in one human body3 and joined it end to end, it would stretch to a distance of 150 billion kilometres (around 94 billion miles). How long is this? It would stretch right around the Earth's equator three-and-a-half million times! It is a thousand times as far as from the Earth to the sun. If you were to shine a torch along that distance, it would take the light, travelling at 300,000 kilometres (186,000 miles) every second, five-and-a-half days to get there.
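    The figures above can be checked with a short script. The cell count and the 2 m of DNA per cell are the article's own numbers; the equatorial circumference, mean Earth-sun distance, and speed of light are standard reference values not given in the article:

```python
# Check the article's DNA-length arithmetic from its own figures:
# 75 trillion cells, about 2 metres of DNA per cell.
cells = 75e12
dna_per_cell_m = 2.0

total_km = cells * dna_per_cell_m / 1000.0
print(f"total DNA length: {total_km:.2e} km")  # 1.50e+11 km (150 billion km)

equator_km = 40075.0  # Earth's equatorial circumference
print(f"trips around the equator: {total_km / equator_km:.2e}")

earth_sun_km = 1.496e8  # mean Earth-sun distance
print(f"multiples of the Earth-sun distance: {total_km / earth_sun_km:.0f}")

days = total_km / 3e5 / 86400.0  # light at 300,000 km per second
print(f"light travel time: {days:.1f} days")
```

    The results come out close to the article's round numbers: about 3.7 million trips around the equator, roughly 1,000 Earth-sun distances, and about 5.8 days of light travel.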

    But the really sensational thing is the way in which the information carried on DNA in all living things points directly to intelligent, supernatural creation, by straightforward, scientific logic, as follows:

    1. The coded information used in the construction of living things is transferred from pre-existing messages (programs), which are themselves transmitted from pre-existing messages.

    2. During this transfer, the fate of the information follows the dictates of message/information theory and common sense. That is, it either stays the same, or decreases (mutational loss, genetic drift, species extinction) but seldom, probably never, is it seen to increase in any informationally meaningful sense.

    Deduction from observation No. 2

    3. Were we to look back in time along the line of any living population, e.g. humans (the information in their genetic programs) we would see an overall pattern of gradual increase the further back we go.


    4. No population can be infinitely old, nor contain infinite information. Therefore:

    Deduction from points 3 and 4

    5. There had to be a point in time in which the first program arose without a pre-existing program—i.e. the first of that type had no parents.

    Further observation

    6. Information and messages only ever originate in mind or in pre-existing messages. Never, ever are they seen to arise from spontaneous, unguided natural law and natural processes.


    Deduction from points 5 and 6

    7. The programs in those first representatives of each type of organism must have originated not in natural law, but in mind.

    This is totally consistent with Genesis, which teaches us that the programs for each of the original 'kind' populations, with all of their vast variation potential, arose from the mind of God at a point in time, by special, supernatural creation. These messages, written in intricate coded language, could not have written themselves, as far as real, observational science can tell us.

    Once the first messages were written, they also contained instructions to make machinery with which to transmit those messages 'on down the line'. DNA, this marvellous 'message molecule', carries that special, non-material something called information, down through many generations, from its origin in the mind of God.

    Similarly, in our example at the beginning, Ned could read the message which originated in the mind of Al without ever seeing him.

    There is another set of messages from that Genesis Creator, namely the Bible. In the book of Romans, chapter 1, we read (vv. 18-20),

    'For the wrath of God is revealed from heaven against all ungodliness and unrighteousness of men, who hold the truth in unrighteousness; Because that which may be known of God is manifest in them; for God hath shewed it unto them. For the invisible things of him from the creation of the world are clearly seen, being understood by the things that are made, even his eternal power and Godhead; so that they are without excuse'.

    These verses seem even more appropriate in our day, when we have been privileged to be able to decipher some of the biological language written on DNA by the living Word, Jesus Christ the Logos Creator, during those six days of creation. The most wonderful message from the Logos, though, is surely John 3:16, 'For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life'.
    References and notes

    1. New Scientist, November 26, 1994, p. 17.
    2. Jérôme Lejeune, Anthropotes (Rivista di studi sulla persona e la famiglia), Istituto Giovanni Paolo II, Rome, 1989, pp. 269-270.
    3. Remember that each cell contains the total information—i.e. there are multiple copies of the same blueprint, one in each cell. The two-metre distance is enough to specify you.

    *Actually, it is as if there were two 'strings of beads' running in parallel—the information on one is from your mother, on the other from your father. Now think of these two parallel strings cut into 23 pieces—these are your chromosomes. The beads are like alphabet letters, and the particular order of the beads is what gives the information. Individual stretches (or 'sentences') of information are called genes. Theoretically, each cell contains all the information that specifies you.
  4. Gup20

    Order or chaos?

    by Martha Blakefield

    Does chaos glorify God? Don’t worry, I’m not referring to your linen closet or a typical Sunday morning at your house. The chaos I’m talking about is a new area of scientific study termed ‘chaos theory.’

    Scientific thought took a new turn when Newton discovered that the laws which account for a falling apple and those that describe the moon’s orbiting the earth were one and the same. Ever since he discovered and formulated the laws that govern motion in our universe, scientists have assumed that the universe runs like a clock, explained by a few simple laws. Scientists described what seemed like complicated systems in terms of comparatively simple equations. They thought that they could look at the world, figure out how it works, write an equation to describe it, then plug in any numbers and be able to predict any outcome. Some scientists have thought that they would eventually discover how to describe everything in the universe in simple, mathematical terms. Some have even thought they would find one set of equations that describes how the entire universe formed and operates—a ‘theory of everything.’

    But even as scientists figure out equations for more and more of the universe’s systems, they are continually baffled by unexplained phenomena and systems that seem to act against the laws they have set forth to explain these actions. Wobbles in the orbits of planets, turbulence in the airflow patterns of a plane’s wing, the changing size of animal populations—every once in a while these systems and others fail to conform to the simple equations scientists have worked out for them.

    These unexplained phenomena have aroused the curiosity of the scientific community. Scientists are finding chaos where they thought they would find order. But then, looking more closely, they are finding unexplained order in what looked like chaos. With the development of faster, more powerful computers, they have been able to test equations they have been relying on for years. They have found that, under certain conditions, some of these equations produce ‘chaotic’ results. Then they realized that these systems that seemed to be so disordered were actually following strange and intricate patterns.

    When Edward Lorenz, a meteorologist, programmed a model of the weather into a computer, he got strange results. Lorenz found that minute differences in initial weather conditions produced drastic changes in the outcome. Meteorologists had long suspected this was so. In fact, they had given the idea a name—‘the butterfly effect.’ The name was based on ‘the half-whimsical belief that a butterfly flapping its wings in Asia could affect the weather in New York a few days or weeks later.’1

    Plants show similar repetitive structures in, for example, the veins on a leaf or a tree's branching limbs. (Leaf photo by Tom Wagner)
    When Lorenz created equations to describe these differences and fed them into a computer which graphed the results, he found that these ‘chaotic’ equations produced evidence of an unusual kind of predictability. The line of the graph traced a twisted figure-eight, a multi-dimensional butterfly shape. The strange part is that although the line always described essentially the same shape over and over again, it never traced exactly the same path, and no point on the graph ever intersected any other point. Since Lorenz’s discovery, scientists have found many more of these ‘strange attractors’, as the phenomena are now called.
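    Lorenz's system can be sketched in a few lines. The equations and the classic parameter values (sigma = 10, rho = 28, beta = 8/3) are standard in the chaos literature rather than given in this article, and simple Euler integration is used for brevity:

```python
# Lorenz's three weather equations, stepped forward with simple Euler
# integration. Two trajectories that start almost identically end up
# far apart: the "butterfly effect" described above.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x, y, z, steps=5000):
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

a = trajectory(1.0, 1.0, 1.0)
b = trajectory(1.0, 1.0, 1.000001)  # differs by one part in a million

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"final separation: {separation:.2f}")
```

    The two starting points differ by one part in a million, yet after a few thousand steps they are separated by a distance comparable to the size of the attractor itself; this is the sensitivity to initial conditions the article describes.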

    Put simply, the equations repeatedly describe the same general shape but never repeat themselves precisely. Other chaotic equations form complex branching patterns that duplicate themselves repeatedly, but on a diminishing scale—each branching pattern a replica of the last but much smaller, just as we see in the structure of many plants (see photo, right).

    All chaotic systems seem to have an unusual sensitivity to initial conditions. They are systems in which seemingly inconsequential changes turn into major differences in outcome. Scientists have found evidence of ‘chaos’ in astronomy, epidemiology, meteorology, air turbulence, the stock market, and the human body. It is in the study of the human body that some scientists are beginning to realize just how important chaos is. Ary Goldberger of Harvard Medical School believes he has discovered not only that the rhythm of the human heart is chaotic, but that chaos in the heart is necessary. When he compared the variations in the heartbeats of a healthy person to those of one suffering from heart disease, the healthy heartbeat was actually the more chaotic.2

    This has opened some scientists’ eyes to the possibility that chaotic behaviour may not be an abnormality, but a characteristic essential to the design of some systems.
    Photos by Tom Wagner and Stewart Lawson

    Branching structures, all with clearly visible patterns of self-similarity, can be found all around us…and even in us. Look at the photographs (above). A tree’s main limbs branch out in all directions, and they in turn have smaller branches, which have twigs, again branching off into smaller shoots…all different, yet similar. It is interesting also to observe the way dried-out mud cracks into patterns which, though different, show the same concept of self-similarity on every scale. The same is true of ice crystal formation; the branches of a river tributary system seen from space; the intricate branching of the airways in our lungs; and the branching patterns of an electrical discharge. There are many other examples of the same sort of ‘fractal’ patterns, as they are called.

    When we consider the exquisitely complicated patterns found in chaotic systems, it appears the theory was misnamed. ‘Chaos’ ordinarily describes any kind of disorder or confusion. In this case, what appeared to be chaos, on closer examination is another layer of more complex order in this universe God created. Scientists use the word ‘chaos’ to indicate simple things that behave in complicated and unexpected ways—things that surprise us and confound our ability to predict how they will behave in the future. Some are coming up with different names for this phenomenon as they learn more about it: ‘complexification’ and ‘the science of surprise.’

    ‘Traditionally, experts have blamed these surprises on outside influences or imperfect data … . But now scientists, studying the world around us with the aid of powerful computers, are beginning to realize that surprise is inevitable. Systems such as the weather … have surprise built into them. They will always behave in unexpected ways, no matter how well we understand them. It is in their nature to do things we can’t predict.’3

    Still, scientists are hoping these new equations could provide a method of predicting future behaviour of systems more accurately than at present. And many years from now, when we think we have these new laws of our complex world all worked out, no doubt we’ll discover another set of phenomena that defy our statements of natural law.

    The wise scientist realizes that the all-knowing, all-powerful Creator would create a universe that will take the lifetime of humanity and longer to understand fully. In that way the creation reveals the nature of the Creator (Romans 1:20).

    ‘It is the glory of God to conceal a thing: but the honour of kings is to search out a matter’ (Proverbs 25:2).
    Chaos theory: no help for evolution

    Occasionally it is claimed that the discovery of patterns of order in seeming chaos is a bright star of hope for evolutionists. They feel it holds promise for their struggle to explain how disordered chemicals could have assembled themselves into the first self-reproducing machine, in opposition to the relentless tendency to universal disorder.

    However, present indications point to this being an illusory hope. One of the classic examples of such ‘order out of chaos’ is the appearance of hexagonal patterns on the surface of certain oils as they are being heated. The minute the heating stops, this pattern vanishes once again into a sea of molecular disorder.

    These patterns, like the swirls of a hurricane, are not only fleetingly short-lived, but are simple, repetitive structures which require negligible information to describe them. The information they do contain is intrinsic to the physics and chemistry of the matter involved, not requiring any extra ‘programming.’

    Living things, on the other hand, are characterized by truly complex, information-bearing structures, whose properties are not intrinsic to the physics and chemistry of the substances of which they are constructed; they require the pre-programmed machinery of the cell.

    This programming has been passed on from the parent organisms, but had to arise from an intelligent mind originally, since natural processes do not write programs.

    Any suggestion that the two issues are truly analogous denies reality.

    1. Christopher Lampton, Science of Chaos: Complexity in the Natural World, Franklin Watts, New York, p. 68, 1992.
    2. Ref. 1, p. 78.
    3. Ref. 1, p. 13.
  5. Paul of Eugene (New Member, joined Oct 30, 2001)
    Information as described in these passages, real meaning as opposed to random bits of noise: how could that come about? The theory of evolution, interpreted in terms of information, explains how it could.

    It requires that life already exist, of course, and the theory does not explain how life actually came to be.

    Here's the way it works. Consider a living organism with a certain genome. In accordance with the reproductive pattern of all living things, it will reproduce until the population reaches the carrying capacity for its way of life. Reproduction continues, but not all the offspring survive; the population remains steady over time.

    Now, we introduce some mutations into the genome. Mutations are never planned. They happen by chance, they affect the protein-coding machinery of the cells randomly, and most of them therefore represent a degradation of the genome.

    Now the organisms continue to reproduce. Those in the population that happen to be stuck with mutations that hinder reproduction reproduce less well. Those genes therefore become less and less frequent in the population.

    On the other hand, those that happen to be blessed with mutations that help their reproduction (and we all know there will be far fewer of these than the other kind) will become amplified over time, due purely to this reproductive edge.

    In this way, genes that help reproduction more successfully than the existing genes gradually replace the former, less effective genes.

    This, after many generations, is finally new information. The organism has acquired, in a perfectly natural fashion, with no direct intelligent input, a genome that is better at reproducing.
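    The scenario above can be sketched as a toy simulation. The population size, the 5% reproductive edge, and the starting frequency are made-up illustrative numbers, not biological data:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Toy model of the scenario described above: a population held at its
# carrying capacity, in which one variant ("B") reproduces slightly
# better than the original ("A"). Each generation, offspring are
# sampled in proportion to parental fitness.
POP = 1000
FITNESS = {"A": 1.0, "B": 1.05}  # hypothetical 5% reproductive edge

population = ["A"] * (POP - 100) + ["B"] * 100  # B starts at 10%

for generation in range(300):
    weights = [FITNESS[g] for g in population]
    population = random.choices(population, weights=weights, k=POP)

freq_B = population.count("B") / POP
print(f"frequency of B after 300 generations: {freq_B:.2f}")
```

    Run repeatedly with different seeds, the beneficial variant spreads to nearly the whole population in almost every trial, while variants with fitness below 1.0 are driven out, which is the replacement process the post describes.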

    No matter how many fine words are strung together in assertions along these lines, it will be necessary to show what in this scenario must inevitably fail to produce the claimed information; otherwise the attack on evolution based on this principle is fatally flawed.

  6. New Member (joined May 8, 2002)
    More spam. Just great. And to top it all off, you still have not ever responded to the replies when you have previously spammed this information on us.

    It is funny that there must be a Creationist definition of information. That should be a red flag: the way information is treated in the germane fields must pose a problem for their argument, or they would not need to redefine it.

    Before I repeat my response to the last time this drivel was posted, could you please pull out the part where it tells us exactly what information is, and where it gives us a well-defined, step-by-step process for determining whether an alleged mutation really is new information or not?

    I was really hoping that you could give us the Gup20 summary to show us what you think "information" is but I guess this will have to do.

    We'll look at your first link. Gitt is trying to make it look as if he is using Shannon information, yet his third theorem distances him from Shannon.

    "Theorem 3: Since Shannon’s definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning."

    So we are no longer talking about Shannon information. This is key since he is trying to build upon Shannon to make his case. Above, he had stated that "On the basis of Shannon’s information theory, which can now be regarded as being mathematically complete." What good is a mathematically complete theory if you are going to later say that it does not apply? He wants to use the acceptance of Shannon to support his theory while later telling us why Shannon's theory does not apply.

    Of course he has to get away from Shannon, because Shannon allows random sequences to be information, while Gitt's second sufficient condition says "A sequence of symbols does not represent information if it is based on randomness." In his next sentence he then contradicts himself, saying "According to G. J. Chaitin, an American informatics expert, randomness cannot, in principle, be proven."

    So he is basing his work on Shannon except when he is not basing his work on Shannon. And random sequences, which are information for Shannon, are not information for Gitt, though he admits that there is no way to determine whether something is truly random or not.
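    The disputed point, that Shannon's measure rewards statistical unpredictability and ignores meaning entirely, can be made concrete in a few lines; the example strings are arbitrary:

```python
import math
from collections import Counter

def shannon_entropy_per_symbol(text):
    """Empirical Shannon entropy in bits per symbol: H = -sum(p * log2 p)."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = "abcdefgh" * 5  # all eight symbols equally frequent
skewed = "aaaaaaab" * 5   # mostly one symbol, highly repetitive

print(shannon_entropy_per_symbol(uniform))  # 3.0, the maximum for 8 symbols
print(shannon_entropy_per_symbol(skewed))   # about 0.54
```

    A string in which all eight symbols are equally frequent reaches the maximum of log2(8) = 3 bits per symbol, exactly the "random dice-throw" case, while a highly repetitive string scores far lower; nothing in the calculation asks whether either string means anything.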

    So now we are left with nothing but Gitt's theorems and his necessary and sufficient conditions. He offers no mathematical basis for these theorems; Shannon, in contrast, DOES support his position with the mathematics. The whole point of Gitt finally boils down to his tenth theorem: "Theorem 10: Each item of information needs, if it is traced back to the beginning of the transmission chain, a mental source (transmitter)." By this point, since he abandoned the mathematical underpinnings when he abandoned Shannon, this is nothing more than an unsubstantiated assertion. And a circular one at that, since his assertion and his conclusion are the same.

    We also have empirical evidence that contradicts this assertion. As I have shown, we can get new "information" through duplication and mutation. I have shown much empirical evidence of cases where this has happened to show that it is not just a theoretical construct. I have also given you plenty of examples of mutations that led to new traits.
