I was once challenged by a self-confessed atheist Darwinist in this way:
Are you holding back then? Do you have some groundbreaking evidence that shows that evolution is false? I’m sure the scientific community would love to hear about it.
Here is my initial response:
Information: For this entry we’re talking about biologically meaningful information – semantic information, or more specifically still, biosemiotics. Shannon information is useful in biology as well, but not at the level required for ID, which concerns both descriptive and prescriptive information.
Complexity: Here ID refers to specified complexity – and this is not an IDist invention – it was first used by Leslie Orgel. Complexity alone is insufficient. A long string of random letters for example is complex but not specified. A string of letters from a Shakespearean sonnet is both complex and specified.
Here I quote Dr. David L. Abel of The Origin of Life Science Foundation:
Semantic (meaningful) information has two subsets: Descriptive and Prescriptive. Prescriptive Information (PI) instructs or directly produces nontrivial formal function (Abel, 2009a). Merely describing a computer chip does not prescribe or produce that chip. Thus mere description needs to be dichotomized from prescription. Computationally halting cybernetic programs and linguistic instructions are examples of Prescriptive Information. “Prescriptive Information (PI) either tells us what choices to make, or it is a recordation of wise choices already made.” (Abel, 2009a)
Not even Descriptive semantic information is achievable by inanimate physicodynamics (Pattee, 1972, 1995, 2001). Measuring initial conditions in any experiment and plugging those measurements appropriately into equations (e.g., physical “laws”) is formal, not physical. Cybernetic programming choices and mathematical manipulations are also formal.
DNA strings are formed through the selection of one of four nucleotides at each locus in a string. These programming choices at quaternary decision nodes in DNA sequences must be made prior to the existence of any selectable phenotypic fitness (the GS Principle; Abel, 2009b). Natural selection cannot explain the programming of genetic PI that precedes and prescribes organismic existence.
No one has ever observed PI flow in reverse direction from inanimate physicodynamics to the formal side of the ravine—the land of bona fide formal pragmatic “control.” The GS Principle states that selection for potential function must occur at the molecular-genetic level of nucleotide selection and sequencing, prior to organismic existence (Abel, 2009b, d). Differential survival/reproduction of already-programmed living organisms (natural selection) is not sufficient to explain molecular evolution or life-origin (Abel, 2009b). Life must be organized into existence and managed by prescriptive information found in both genetic and epigenetic regulatory mechanisms. The environment possesses no ability to program linear digital folding instructions into the primary structure of biosequences and biomessages. The environment also provides no ability to generate Hamming block codes (e.g. triplet codons that preclude noise pollution through a 3-to-1 symbol representation of each amino acid) (Abel and Trevors, 2006a, 2007). The environment cannot decode or translate from one arbitrary language into another. The codon table is arbitrary and physicodynamically indeterminate. No physicochemical connection exists between resortable nucleotides, groups of nucleotides, and the amino acid that each triplet codon represents. Although instantiated into a material symbol system, the prescriptive information of genetic and epigenetic control is fundamentally formal, not physical.
If you understood that, then you’ll realize that the above facts by themselves already refute Darwinism at the most fundamental level – that of encoded meaningful information.
Douglas Axe, for example, comments on the recent and controversial experiments by Durrett and Schmidt that supposedly contradict Behe’s Edge of Evolution:
By way of analogy, you might easily cause your favorite software to crash by changing a bit or two in the compiled executable file, but you can’t possibly convert it into something altogether different (and equally useful) by such a simple change, or even by a series of such changes with each version improving on the prior one. To get a substantially new piece of software, you would need to substantially re-engineer the original code knowing that your work wouldn’t pay off until it’s finished. Darwinism just doesn’t have the patience for this.
Furthermore, returning to the first question, it seems that even humble binding-site conversions are typically beyond the reach of Darwinian evolution. Durrett and Schmidt conclude that “this type of change would take >100 million years” in a human line, which is problematic in view of the fact that the entire history of primates is thought to be shorter than that.
Might the prospects be less bleak for more prolific species with shorter generation times? As it turns out, even there Darwinism appears to be teetering on the brink of collapse. Choosing fruit flies as a favorable organism, Durrett and Schmidt calculate that what is impossible in humans would take only “a few million years” in these insects. To get that figure, however, they had to assume that the damage caused by the first mutation has a negligible effect on fitness. In other words, they had to leap from “the mutation need not be lethal” to (in effect) ‘the mutation causes no significant harm’. That’s a big leap.
What happens if we instead assume a small but significant cost—say, a 5% reduction in fitness? By their math it would then take around 400 million years for the binding-site switch to prove its benefit (if it had one) by becoming fully established in the fruit fly population.  By way of comparison, the whole insect class—the most diverse animal group on the planet—is thought to have come into existence well within that time frame. 
Do you see the problem? On the one hand we’re supposed to believe that the Darwinian mechanism converted a proto-insect into a stunning array of radically different life forms (termites, beetles, ants, wasps, bees, dragonflies, stick insects, aphids, fleas, flies, mantises, cockroaches, moths, butterflies, etc., each group with its own diversity) well within the space of 400 million years. But on the other hand, when we actually do the math we find that a single insignificant conversion of binding sites would reasonably be expected to consume all of that time.
The contrast could hardly be more stark: The Darwinian story hopes to explain all the remarkable transformations within 400 million years, but the math shows that it actually explains no remarkable transformation in that time.
If that doesn’t call for a serious rethink, it’s hard to imagine what would.
But it gets a lot worse.
Axe also estimated – experimentally, not theoretically (via site-directed mutagenesis experiments on a 150-residue protein-folding domain within a β-lactamase enzyme) – that the probability of finding a functional protein among the possible amino acid sequences corresponding to a 150-residue protein is about 1 in 10^77!
If the universe is indeed some 13.7 billion years old, then using the Planck length (the smallest possible distance, about 10^-33 centimeters), the Planck time (which allows at most about 10^43 possible events per second), and the estimated number of elementary particles in the universe (about 10^80), the number of possible events in the universe since the Big Bang comes out to roughly 10^139. That’s using Dembski’s very conservative calculation.
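As a sanity check, the arithmetic behind this figure can be sketched in a few lines of Python. The inputs are the numbers quoted above (10^80 particles, 10^43 events per second from the Planck time, 13.7 billion years); the result is an order-of-magnitude estimate only, landing within a couple of orders of magnitude of the ~10^139 cited:

```python
import math

# Figures quoted above -- all order-of-magnitude estimates
PARTICLES = 1e80        # elementary particles in the observable universe
EVENTS_PER_SEC = 1e43   # maximum events per second (inverse of the Planck time)
AGE_YEARS = 13.7e9      # approximate age of the universe
SECONDS_PER_YEAR = 3.156e7

age_seconds = AGE_YEARS * SECONDS_PER_YEAR  # ~4.3e17 seconds since the Big Bang

# Upper bound on elementary events: every particle changing state
# at the Planck rate for the entire age of the universe.
log10_events = (math.log10(PARTICLES)
                + math.log10(EVENTS_PER_SEC)
                + math.log10(age_seconds))

print(f"total possible events ~ 10^{log10_events:.0f}")
```

Adding the exponents directly (80 + 43 + ~18) gives roughly 10^141; the spread between that and 10^139 simply reflects how roughly these inputs are known.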
Other scientists have given much smaller results, like University of Pittsburgh physicist Bret van der Sande’s estimate of 10^92 for the probabilistic resources available in the universe – a much less favorable number for the supposed evolutionary time frame than Dembski’s. Worse, of course, is that this number applies since the beginning of the universe – not the beginning of Earth!
MIT computer scientist Seth Lloyd has calculated that the most bit operations the universe could have performed in its history (assuming the entire universe were given over to this single-minded task) is 10^120, meaning that a specific bit operation with an improbability significantly greater than 1 chance in 10^120 will likely never occur by chance. None of these probabilistic resources is sufficient to render the chance hypothesis plausible. Dembski’s calculation is the most conservative and gives chance its “best chance” to succeed. But even his calculation confirms the implausibility of the chance hypothesis, whether chance is invoked to explain the information necessary to build a single protein or the information necessary to build the suite of proteins needed to service a minimally complex cell.
The probability of producing a single functional 150-amino-acid protein by chance stands at about 1 in 10^164 (when including the probability of having only peptide bonds and only L-amino acids – the “L” forms dominate in life on earth). “If you mix up chirality, a protein’s properties change enormously. Life couldn’t operate with just random mixtures of stuff.” – Ronald Breslow, Ph.D., University Professor, Columbia University.
Chirality: The term chiral describes an object that is non-superposable on its mirror image – the concept of handedness, as in right and left hands.
See http://en.wikipedia.org/wiki/Chirality_%28chemistry%29 – section on biology
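As a rough illustration (not Meyer’s exact derivation) of how these factors combine, one can multiply the document’s own numbers: a 1-in-2 chance per linkage that it is a peptide bond (149 linkages), a 1-in-2 chance per residue that it is the L-form (150 residues), and Axe’s 1-in-10^77 estimate for a functional sequence. The exact exponent depends on how each factor is estimated, so the product lands near, though not exactly at, the quoted 1 in 10^164:

```python
import math

N = 150                       # residues in the protein
log10_half = math.log10(0.5)  # each two-way chemical "choice" costs ~0.301 orders of magnitude

log10_peptide  = (N - 1) * log10_half  # 149 linkages must all be peptide bonds
log10_chiral   = N * log10_half        # 150 residues must all be L-amino acids
log10_function = -77                   # Axe's estimate for hitting a functional fold

total = log10_peptide + log10_chiral + log10_function
print(f"combined probability ~ 1 in 10^{-total:.0f}")
```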
Thus, for each functional sequence of 150 amino acids, there are at least 10^164 other possible nonfunctional sequences of the same length. Therefore, to have a good (i.e., better than 50-50) chance of producing a single functional protein of this length by chance, a random process would have to generate (or sample) more than half of the 10^164 nonfunctional sequences corresponding to each functional sequence of that length. Unfortunately, that number vastly exceeds the most optimistic estimate of the probabilistic resources of the entire universe – that is the number of events that could have occurred since the beginning of its existence.
To see this, notice again that to have a better than 50-50 chance of generating a functional protein by chance, more than half of the 10^164 sequences would have to be produced. Now compare that number (0.5 x 10^164) to the maximum number of opportunities – 10^139 – for that event to occur in the history of the universe. Notice that the first number (0.5 x 10^164) vastly exceeds the second (10^139).
There is a better chance of pinpointing a single specific atom within the entire universe entirely by luck than of a single functional 150-amino-acid protein arising the same way! And that’s a small protein.
The above is partly from Stephen Meyer’s Signature in the Cell
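The two comparisons above reduce to simple subtraction of exponents; here is a minimal sketch using the figures already quoted (10^80 atoms, odds of 1 in 10^164 for the protein, 10^139 available events):

```python
# Base-10 exponents of the figures quoted above
log10_atoms   = 80    # ~10^80 atoms/elementary particles in the universe
log10_protein = 164   # odds of 1 in 10^164 for one functional 150-residue protein
log10_events  = 139   # ~10^139 possible events since the Big Bang

# How much worse is the protein search than picking one marked atom by luck?
gap = log10_protein - log10_atoms   # 164 - 80 = 84 orders of magnitude

# Even using every event in cosmic history as a trial, the expected
# number of successes is 10^(139 - 164) = 10^-25 -- effectively zero.
shortfall = log10_events - log10_protein

print(f"protein search is 10^{gap} times harder than finding one atom")
print(f"expected successes over cosmic history ~ 10^{shortfall}")
```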
Remember that the above numbers are estimates, since no one knows the exact age of the universe or the earth, and probabilities are often subject to other unknown variables. But the above calculations only apply to getting a single functional protein! Not a fully functional cell! Not even DNA.
Even if the odds are much better than this, they are still so bad as to merit a verdict against Darwinism’s chance-and-selection hypothesis! In Darwinism everything is supposed to be super easy for evolution! Even if the final numbers are orders of magnitude off, their implications are still so far beyond the realm of reasonable expectations as to warrant a complete abandonment of the whole chemical origin-of-life scheme.
Furthermore, if the origin of life is physically impossible by chance and necessity, then what are the chances that the same processes could cause the evolution of some ancient ‘self-replicator’ into more than 10 million highly specified, well-adapted life forms? The answer is that those chances are not much better at all!
Add genetic entropy to the problem and you’ll understand why neo-Darwinism is a waste of time and a real science stopper.
The facts, yes facts, about genetic entropy are devastating to NDE. If the primary mechanism of mutations + selection is shown to be inadequate, then the whole of NDE is undone. And this has already been shown to a degree requiring a negative verdict! Mutations, the prime source of genetic variation, are largely near-neutral (very slightly deleterious), many are deleterious (some fatal), and a very rare few are beneficial.
Atheist Sir F. Hoyle commented on this problem:
“I am convinced it is this almost trivial simplicity that explains why the Darwinian theory is so widely accepted, why it has penetrated through the educational system so completely. As one student text puts it, `The theory is a two-step process. First variation must exist in a population. Second, the fittest members of the population have a selective advantage and are more likely to transmit their genes to the next generation.’ But what if individuals with a good gene A carry a bad gene B, having the larger value of |s|. Does the bad gene not carry the good one down to disaster? What of the situation that bad mutations must enormously exceed good ones in number? … The essential problem for the Darwinian theory in its twentieth century form is how to cope with this continuing flood of adverse mutations, a far cry indeed from the trite problem of only the single mutation in (1.1). Supposing a favourable mutation to occur among the avalanche of unfavourable ones, how is the favourable mutation to advance against the downward pressure of the others?” (Hoyle, F., “Mathematics of Evolution,” Acorn Enterprises: Memphis TN, 1999)
“Two points of principle are worth emphasis. The first is that the usually supposed logical inevitability of the theory of evolution by natural selection is quite incorrect. There is no inevitability, just the reverse. It is only when the present asexual model is changed to the sophisticated model of sexual reproduction accompanied by crossover that the theory can be made to work, even in the limited degree to be discussed …. This presents an insuperable problem for the notion that life arose out of an abiological organic soup through the development of a primitive replicating system. A primitive replicating system could not have copied itself with anything like the fidelity of present-day systems …. With only poor copying fidelity, a primitive system could carry little genetic information without L [the mutation rate] becoming unbearably large, and how a primitive system could then improve its fidelity and also evolve into a sexual system with crossover beggars the imagination.” (Hoyle, F., “Mathematics of Evolution,” Acorn Enterprises: Memphis TN, 1999)
Renowned geneticist Dr. John Sanford’s recent work in this area is also highly revealing. Here is what he said on the endeavor itself (my bold):
Late in my career, I did something which for a Cornell professor would seem unthinkable. I began to question the Primary Axiom [neo Darwinism]. I did this with great fear and trepidation. By doing this, I knew I would be at odds with the most “sacred cow” of modern academia. Among other things, it might even result in my expulsion from the academic world. Although I had achieved considerable success and notoriety within my own particular specialty (applied genetics), it would mean I would have to be stepping out of the safety of my own little niche. I would have to begin to explore some very big things, including aspects of theoretical genetics which I had always accepted by faith alone. I felt compelled to do all this, but I must confess I fully expected to simply hit a brick wall. To my own amazement, I gradually realized that the seemingly “great and unassailable fortress” which has been built up around the primary axiom is really a house of cards. The Primary Axiom is actually an extremely vulnerable theory, in fact it is essentially indefensible. Its apparent invincibility derives mostly from bluster, smoke, and mirrors. A large part of what keeps the Axiom standing is an almost mystical faith, which the true-believers have in the omnipotence of natural selection. Furthermore, I began to see that this deep-seated faith in natural selection was typically coupled with a degree of ideological commitment which can only be described as religious. I started to realize (again with trepidation) that I might be offending a lot of people’s religion!
To question the Primary Axiom required me to re-examine virtually everything I thought I knew about genetics. This was probably the most difficult intellectual endeavor of my life. Deeply entrenched thought patterns only change very slowly (and I must add — painfully). What I eventually experienced was a complete overthrow of my previous understandings. Several years of personal struggle resulted in a new understanding, and a very strong conviction that the Primary Axiom was most definitely wrong. More importantly, I became convinced that the Axiom could be shown to be wrong to any reasonable and open-minded individual. This realization was exhilarating, but again frightening. I realized that I had a moral obligation to openly challenge this most sacred of cows. In doing this, I realized I would earn for myself the most intense disdain of most of my colleagues in academia, not to mention very intense opposition and anger from other high places.
In his book, which I will not attempt to quote extensively, he notes:
One of the most astounding recent findings in the world of genetics is that the human mutation rate (just within our reproductive cells) is at least 100 nucleotide substitutions (misspellings) per person per generation (Kondrashov, 2002). Other geneticists would place this number at 175 (Nachman and Crowell, 2000). These high numbers are now widely accepted within the genetics community. Furthermore, Dr. Kondrashov, the author of the most definitive publication, has indicated to me that 100 was only his lower estimate — he believes the actual rate of point mutations (misspellings) per person may be as high as 300 (personal communication). Even the lower estimate, 100, is an amazing number, with profound implications. When an earlier study revealed that the human mutation rate might be as high as 30, the highly distinguished author of that study concluded that such a number would have profound implications for evolutionary theory (Neel et al. 1986).
Moreover, there are strong theoretical reasons for believing there is no truly neutral nucleotide position. By its very existence, a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding and nucleosome binding. If a nucleotide carries absolutely zero information, it is then by definition slightly deleterious – as it slows cell replication and wastes energy. Just as there are really no truly neutral letters in an encyclopedia, there are probably no truly neutral nucleotide sites in the genome. Therefore there is no way to change any given site without some biological effect – no matter how subtle. Therefore, while most sites are probably “nearly neutral”, very few, if any, should be absolutely neutral. – Dr. John Sanford, Cornell geneticist, Genetic Entropy.
The most recent paper on mutation rates is this: http://www.nature.com/news/2009/090827/full/news.2009.864.html – which basically confirms the 100–200 figure.
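The per-generation figures quoted above can be turned into a simple accumulation sketch. The assumption here (which is the genetic-entropy premise itself, not an independent result) is that near-neutral mutations are invisible to selection and therefore simply pile up along a lineage:

```python
# New mutations per person per generation, as quoted above
RATE_LOW, RATE_MID, RATE_HIGH = 100, 175, 300

def accumulated(rate_per_generation, generations):
    """Total new mutations inherited along one lineage, assuming
    none are removed by selection (the genetic-entropy premise)."""
    return rate_per_generation * generations

# Over 1,000 generations (~25,000 years at 25 years per generation):
for rate in (RATE_LOW, RATE_MID, RATE_HIGH):
    print(f"{rate}/generation -> {accumulated(rate, 1000):,} mutations")
```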
And so much for “junk DNA”:
The ENCODE consortium’s major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active.
Also – you still must account for semantic information in biological systems. And it is that information, along with the complex algorithms that process it, that makes Darwinism unfeasible.
Materialism, by very definition, cannot account for the existence of semantic information in living things. That kind of information absolutely requires intelligence – no exceptions exist.
I have repeated this next fact over and over again and never gotten any refutation other than mere denial! Code, by definition, implies intelligence, and the genetic code is real code – mathematically identical to language, computer codes, etc. – all of which can only arise by intelligent convention of symbols.
The fact that the genetic code is real code and not merely analogous to code is another devastating fact against NDE.
Moreover, the genome contains meta-information, and there is now evidence of meta-programming as well.
Meta-information is information about information, and we now know the genome contains such structures. But meta-information cannot arise without knowledge of the original information.
Meta-programming is even more solid evidence of intelligence at work.
We now know that in yeast DNA alone there are more than 300 nano-machines at work performing various tasks in the cell, many of which are performed concurrently. Yet concurrency in information-processing systems cannot arise without pre-knowledge of the tasks requiring coordinated action!
Stuart Pullen, in his book Intelligent Design or Evolution (available for reading online), rightly calls this information “molecular knowledge”.
Read his book to see why a chance-and-necessity OOL hypothesis is utterly impossible.
His mathematical analysis of the chance-necessity scenario is utterly devastating to any chance OOL hypothesis, and thus could be equally devastating to the hypothesis of the Darwinian evolution of life merely by applying the same principles to complex bio-machines.
In short the nature of cellular information systems in the genome literally rules out chance and necessity for any viable origin theory. An intelligence HAD to be intimately involved in its formation and function.
Worse still for NDE, we now know that the genome contains many poly-functional and thus constrained sequences. But this poly-functionality really stretches the credibility of any chance + necessity hypothesis ever having any chance at all of success!
In any poly-functional-constrained system, undoing – by random mutation – any one function necessarily undoes the whole.
As Sanford states,
This “complex interwoven (poly-functional) network” throughout the entire DNA code makes the human genome severely poly-constrained to random mutations (Sanford; Genetic Entropy, 2005; page 141). This means the DNA code is now much more severely limited in its chance of ever having a hypothetical beneficial mutation, since almost the entire DNA code is now proven to be intimately connected to many other parts of the DNA code. Thus even though a random mutation to DNA may be able to change one part of an organism for the better, it is now proven much more likely to harm many other parts of the organism that depend on that one particular part being as it originally was. Since evolution was forced, by the established proof of Mendelian genetics, to no longer view the whole organism as what natural selection works upon, but instead as a collection of multiple independent genes that can be selected or discarded as natural selection sees fit, this “complex interwoven network” finding is extremely bad news, if not absolutely crushing, for the “Junk DNA” population genetics scenario of evolution (the modern neo-Darwinian synthesis) developed by Haldane, Fisher and Wright (pages 52 and 53: Genetic Entropy: Sanford 2005).
One of the greatest mathematicians of the 20th century was Kurt Gödel.
The formation within geological time of a human body by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components. The complexity of the living things has to be present within the material [from which they are derived] or in the laws [governing their formation] -Kurt Gödel
We could also add the implications of self correction mechanisms within the genome as further evidence of design since no correction can be made to any complex system without knowledge of its correct system state and thus no such mechanism can arise randomly.
I won’t get into apoptosis and the rest here but you can read my post on Programmed Cell Death.
ID is a necessity in OOL (origin of life) and OOS (origin of species) explanations. The only thing we can reliably say of Darwinian mechanisms is that adaptation and variation occur – but only in a limited way – within the “kind”.
Now, since evolutionists are always asking what taxonomic category the biblical kind is here is my own answer: The “kind” probably corresponds best with the taxonomic ‘family’.
I.e. – no lizard to dog, frog to prince, bacteria to banana, banana to monkey, Darwinist to squid, etc. is even possible given the above humongous improbabilities.