''In it we critique, for instance, Richard Dawkins METHINKS*IT*IS*LIKE*A*WEASEL (p. 1055). Question: When Dawkins introduced this example, was he arguing pro-Darwinism? Yes he was. In critiquing his example and arguing that information is not created by unguided evolutionary processes, we are indeed making an argument that supports ID.''
But does he even use the algorithm which Richard Dawkins had proposed? This is very doubtful...
Description of the algorithms
Here is the lengthy description which Dawkins gave in his book ''The Blind Watchmaker'':
So much for single-step selection of random variation. What about cumulative selection; how much more effective should this be? Very very much more effective, perhaps more so than we at first realize, although it is almost obvious when we reflect further. We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before:
WDLTMNLT DTJBKWIRZREZLMQCO P
It now 'breeds from' this random phrase. It duplicates it repeatedly, but with a certain chance of random error - 'mutation' - in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL. In this instance the winning phrase of the next 'generation' happened to be:
WDLTMNLT DTJBSWIRZREZLMQCO P
Not an obvious improvement! But the procedure is repeated, again mutant 'progeny' are 'bred from' the phrase, and a new 'winner' is chosen. This goes on, generation after generation. After 10 generations, the phrase chosen for 'breeding' was:
MDLDMNLS ITpSWHRZREZ MECS P
After 20 generations it was:
MELDINLS IT ISWPRKE Z WECSEL
By now, the eye of faith fancies that it can see a resemblance to the target phrase. By 30 generations there can be no doubt:
METHINGS IT ISWLIKE B WECSEL
Generation 40 takes us to within one letter of the target:
METHINKS IT IS LIKE I WEASEL
And the target was finally reached in generation 43. A second run of the computer began with the phrase:
Y YVMQKZPFfXWVHGLAWFVCHQXYOPY,
passed through (again reporting only every tenth generation):
Y YVMQKSPFTXWSHLIKEFV HQYSPY
YETHINKSPITXISHLIKEFA WQYSEY
METHINKS IT ISSLIKE A WEFSEY
METHINKS IT ISBLIKE A WEASES
METHINKS IT ISJLIKE A WEASEO
METHINKS IT IS LIKE A WEASEP
and reached the target phrase in generation 64. In a third run the computer started with:
GEWRGZRPBCTPGQMCKHFDBGW ZCCF
and reached METHINKS IT IS LIKE A WEASEL in 41 generations of selective 'breeding'.
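Dawkins never published the program itself, so details such as the population size and the mutation rate are unknown. A minimal Python sketch that is at least consistent with his description could look like this (POP_SIZE and MUT_RATE are pure assumptions):

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 27 characters
POP_SIZE = 100    # assumption: number of 'progeny' per generation
MUT_RATE = 0.05   # assumption: chance of a copying error per letter

def score(phrase):
    # Fitness: how many letters match the target. Nothing is latched;
    # a correct letter can mutate away again in a later generation.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase):
    # Duplicate the phrase with a certain chance of random error.
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                   for c in phrase)

def weasel():
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        # Breed a population of mutant copies and keep the best one.
        parent = max((mutate(parent) for _ in range(POP_SIZE)), key=score)
    return generation

print(weasel())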
And this is Dembski's weasel algorithm:
Partitioned search is a "divide and conquer" procedure best introduced by example. Consider the L = 28 character phrase
METHINKS ∗ IT ∗ IS ∗ LIKE ∗ A ∗ WEASEL. (19)
Suppose that the result of our first query of L = 28 characters is
SCITAMROFN ∗ IYRANOITULOVE ∗ SAM. (20)
Two of the letters {E, S} are in the correct position. They are shown in a bold font. In partitioned search, our search for these letters is finished. For the incorrect letters, we select 26 new letters and obtain
OOT ∗ DENGISEDESEHT ∗ ERA∗NETSIL. (21)
Five new letters are found, bringing the cumulative tally of discovered characters to {T, S, E, ∗, E, S, L}. All seven characters are ratcheted into place. The 19 new letters are chosen, and the process is repeated until the entire target phrase is found.
Well, at least it is short.
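A sketch of it is short, too. Here is the partitioned search as quoted above, with N = 27 characters and the ∗ rendered as a space:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # N = 27, L = 28

def partitioned_search():
    phrase = [random.choice(ALPHABET) for _ in TARGET]   # first query
    queries = 1
    while True:
        locked = [c == t for c, t in zip(phrase, TARGET)]
        if all(locked):
            return queries
        # Correct letters are ratcheted into place; only the
        # incorrect positions are redrawn for the next query.
        phrase = [c if ok else random.choice(ALPHABET)
                  for c, ok in zip(phrase, locked)]
        queries += 1

print(partitioned_search())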
Obvious differences
- While Dawkins mentions a couple of strings per generation (''It duplicates it repeatedly''), Dembski mentions only one string per query.
- The mutation rate of Dawkins's example seems to be much lower than Dembski's, at least judging from the first two generations:
Dawkins:
1) WDLTMNLT*DTJBKWIRZREZLMQCO*P
2) WDLTMNLT*DTJBSWIRZREZLMQCO*P
Dembski:
1) SCITAMROFN*IYRANOITULOVE*SAM
2) OOT*DENGISEDESEHT*ERA*NETSIL
(changes in bold font)
- Dawkins gives the number of generations for three experiments: 43, 64, and 41. The expected number of generations G for Dembski's algorithm, i.e. E(G) = Σ_{k≥0} [1 − (1 − (1 − 1/N)^k)^L], is 104.55, where N is the size of the alphabet and L the number of letters of the phrase, i.e. N = 27 and L = 28.
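The expected value can be checked numerically, e.g. in Python:

# E(G) = sum over k >= 0 of 1 - (1 - (1 - 1/N)**k)**L, with N = 27, L = 28
N, L = 27, 28
EG = sum(1 - (1 - (1 - 1/N)**k)**L for k in range(10_000))
print(round(EG, 2))   # 104.55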
Claimed differences
Dembski has always stressed that in the algorithm a letter, once found, is ''latched'' or ''ratcheted'' into place. The strings given as examples in Dawkins's book don't exclude that such latching is happening, but Dawkins has stated repeatedly that in his algorithm latching does not take place.
Conclusion
Even this superficial comparison of the two algorithms shows that they are very different: two kinds of weasels, so to speak.
How about:
Start with a string of 'n' letters.
A): Vary by random choice of one of:
1) Cutting at two points at once (one cut could be at either end of the string) and rearranging the pieces:
ABCDEFGHI -> ABC|DEF|GHI -> DEFABCGHI
2) Adding a random letter at a random point:
ABCDEFGHI -> ABCjDEFGHI
3) Changing one letter:
ABCDEFGHI -> ABCDjFGHI
4) Removing a letter from a random point:
ABCDEFGHI -> ABCDEFHI
B): Examine the result against a dictionary to see whether any subsection is now a valid word; if it is, lock it.
Return to A).
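For what it's worth, here is a rough Python sketch of this proposal. The toy dictionary and the way ''lock it'' is enforced (a mutation is rejected if it destroys an already-found word) are assumptions; the comment leaves those details open:

import random
import string

ALPHABET = string.ascii_lowercase
DICTIONARY = {"it", "is", "like", "me", "thinks", "weasel"}   # toy dictionary (assumption)

def random_cut(s):
    # 1) Cut at two points (a cut may fall at either end) and rearrange:
    # ABCDEFGHI -> ABC|DEF|GHI -> DEFABCGHI
    i, j = sorted(random.randint(0, len(s)) for _ in range(2))
    return s[i:j] + s[:i] + s[j:]

def insert_letter(s):
    # 2) Add a random letter at a random point.
    i = random.randint(0, len(s))
    return s[:i] + random.choice(ALPHABET) + s[i:]

def change_letter(s):
    # 3) Change one letter.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def remove_letter(s):
    # 4) Remove a letter from a random point (keep at least two letters).
    if len(s) <= 2:
        return s
    i = random.randrange(len(s))
    return s[:i] + s[i + 1:]

def words_in(s):
    # B) Every subsection (of length >= 2) that is a valid dictionary word.
    return {s[i:j] for i in range(len(s))
            for j in range(i + 2, len(s) + 1) if s[i:j] in DICTIONARY}

def step(s, locked):
    # A) Vary by one randomly chosen mutation; then reject the result
    # if it loses an already-locked word.
    op = random.choice([random_cut, insert_letter, change_letter, remove_letter])
    t = op(s)
    if all(w in t for w in locked):
        return t, locked | words_in(t)
    return s, locked

s = "".join(random.choice(ALPHABET) for _ in range(9))
locked = set()
for _ in range(2000):
    s, locked = step(s, locked)
print(s, locked)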
If your fitness function (a kind of oracle) states only whether your string is correct or not, you need up to N^L trials. If the oracle gives you the number of correct letters (as in Dawkins's example), you can construct a search which needs at most N*L steps. And if it returns the places of the correct letters (as for Dembski), you have a variation of the hangman game, so you need at most N trials.
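To illustrate the middle case: with an oracle that reports only the number of correct letters, the following sketch (all names illustrative) finds the phrase within N*L queries by cycling each position through the alphabet and watching whether the count rises:

import string

ALPHABET = string.ascii_uppercase + " "   # N = 27
TARGET = "METHINKS IT IS LIKE A WEASEL"   # hidden from the searcher

queries = 0
def oracle(phrase):
    # The oracle reports only HOW MANY letters are correct.
    global queries
    queries += 1
    return sum(a == b for a, b in zip(phrase, TARGET))

guess = list(ALPHABET[0] * len(TARGET))   # start with all 'A's
best = oracle("".join(guess))
for i in range(len(guess)):
    for letter in ALPHABET[1:]:
        trial = list(guess)
        trial[i] = letter
        s = oracle("".join(trial))
        if s > best:                      # the count rose: position i is now correct
            guess, best = trial, s
            break

print("".join(guess), "in", queries, "queries")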
It isn't about finding a better algorithm, it's just about the properties of the ones that were presented.