Friday, September 26, 2014

Conservation of Information in Evolutionary Search - Talk by William Dembski - part 2

For an introduction to this post, take a look here. This is quite a short section, with some annotations from me.

Part 2: 09' 40" - 12' 45''

Topics: What is a search?

William Dembski: We talked about information. Let's now look at that second key term, "search". What is a search? There are seven key components in a search.

William Dembski: You have a search space, you have a target - we are looking for something in the search space. There is initialization - where do we start off? There is a query limit - how many things in the search space can we check out? There is query feedback - when we have checked out, when we have located some item - what is it telling us about itself in terms of how it relates to the target? There is an update rule - once we have queried something, what do we query next? And then finally a stop criterion - when do we stop? How do we know that we have done enough? This is very general.
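The interplay of these seven components can be sketched as a generic loop. Here is a minimal Python illustration of my own (not Dembski's code); the comments mark where each of the seven components enters:

```python
# A sketch of a search built from the seven components Dembski lists:
# search space, target, initialization, query limit, query feedback,
# update rule, and stop criterion. My illustration, not his.
import random

def search(init, query_limit, feedback, update, stop):
    current = init()                    # initialization
    for _ in range(query_limit):        # query limit
        fb = feedback(current)          # query feedback
        if stop(current, fb):           # stop criterion
            return current
        current = update(current, fb)   # update rule
    return None                         # budget exhausted, target not found

# Example: blind random search for the single target 7 in the space 0..99.
random.seed(0)
space = range(100)                      # search space
target = {7}                            # target
found = search(
    init=lambda: random.choice(space),
    query_limit=5000,
    feedback=lambda x: x in target,     # here feedback simply reveals target membership
    update=lambda cur, fb: random.choice(space),
    stop=lambda cur, fb: fb,
)
print(found)
```

As we will see below, the annotators' dispute with Dembski turns exactly on the `feedback` slot: whether the feedback is allowed to tell you that you are in the target.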

William Dembski: Let me say something about the query limit, because that will always be involved. Fact is, even though there may be multiple universes, our own universe is very small; there is not a whole lot of computational power in it. The best supercomputers now are operating in petaflops, $10^{15}$ to $10^{16}$, there are less than $10^{18}$ seconds in the year, no research group that I know has ever operated for more than $10^2$ or one hundred years. The number of researchers seems to be bounded by $10^{10}$. Actually, those numbers I gave you add up to $10^{45}$. So, m for all practical purposes is always to be bounded by $10^{40}$, I think that is safe to say. If you are unhappy with that, if you are a really theory-based person asking what the absolute limit is, Seth Lloyd, a quantum computational theorist at MIT, sets the absolute computational limit of the universe to $10^{120}$. That is the most computations that can ever be done. A computation is going to be involved in search, that is the assumption that I make. Especially if you are representing search in silico [???] about the limit [???] anything that we are looking at in our lifetime, even with Moore's law.
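Multiplying out figures of this kind is easy to check. Here is my own back-of-envelope version, using the physical number of seconds in a year (about $3.15 \times 10^7$) together with the other quantities quoted in the talk; the product indeed stays below the $10^{40}$ bound:

```python
# Back-of-envelope check (my arithmetic, not Dembski's slide): multiplying
# operations per second, seconds per year, years, and researchers gives a
# crude ceiling on the number of queries m any real search could issue.
import math

flops = 1e16              # petaflop-scale supercomputer, 10^15..10^16 ops/s
seconds_per_year = 3.15e7 # ~365.25 * 24 * 3600
years = 1e2               # a hundred-year research program
researchers = 1e10        # quoted bound on the number of researchers

m_bound = flops * seconds_per_year * years * researchers
print(f"practical query bound m ~ 10^{math.log10(m_bound):.0f}")
assert m_bound < 1e40     # comfortably inside Dembski's stated 10^40
```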

William Dembski: These are the seven key components of search. There is a connection with information, obviously: in finding a target, a search produces information. It gets to the target and rules out things that are not in the target, and thereby realizes one possibility to the exclusion of others. So searches produce information in the sense I have just described.
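This sense of "producing information" can be made concrete with the standard measure (the same one used in part 1 of the talk): narrowing $|\Omega|$ possibilities down to a target of size $|T|$ yields $\log_2(|\Omega|/|T|)$ bits. A small illustration of my own:

```python
# Information produced by a successful search, in the standard sense:
# locating the target T narrows |Omega| possibilities down to |T|,
# i.e. log2(|Omega| / |T|) bits. My formulation, not a quote from the talk.
import math

def bits_produced(omega_size, target_size):
    return math.log2(omega_size / target_size)

print(bits_produced(1024, 1))    # one item out of 1024: 10.0 bits
print(bits_produced(1024, 64))   # a 64-element target: 4.0 bits
```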

  • William Dembski's definition of a search differs crucially from the definitions of virtually everybody else, for whom searches form just a subset of optimization problems. Dembski and his collaborators separate the target from the feedback. While everybody else is trying to find the optimum of a function (e.g., the characteristic function of a subset $T$ of the search space $\Omega$), and will say that elements in the inverse image of the optimum are in the target, this kind of feedback isn't enough for Dembski: you may have found an element of $\Omega$ with an optimal feedback, but it may or may not lie in the target. In a game of Hangman with Dembski, after guessing the letters F and O for a three-letter word and writing them down as a solution, you may think that FOO is the solution, but even after writing the word out, Dembski would inform you that the real target was BAR. Or in evolutionary terms: some Darwin finch may have quite a good beak for its purpose, and its species may flourish, but in reality its niche should be occupied by a unicorn.
  • More interestingly, the given seven elements of a search are quite different from the description of a search in their paper "A General Theory of Information Cost Incurred by Successful Search", which Dembski announced as one of the three key theoretical publications on CoI! At least, the new elements of a search don't sound as pompous as the former arrangement of the initiator, the terminator, the inspector, the navigator, the nominator, and the discriminator. Now, the search space $\Omega$ and the target $T$ made the list, the initialization is the former initiator, the query limit $m$ and the stop criterion are the terminator, query feedback is the inspector, and the update rule seems to supplant navigator and nominator. Most importantly, the discriminator is gone.
    I had an interesting exchange with Winston Ewert - one of the authors of the paper - at my blog and at a thread at Uncommon Descent: Questioning Information Cost. In fact, I think that was one of the most fruitful discussions I had with a proponent of Intelligent Design for quite a while.
    Winston Ewert was able to clear up some of my misconceptions about their concept, and replace them with new objections. One of my main problems was that in their model even exhaustive searches do not necessarily find the target; in fact, on average, an exhaustive search performs only as well as a single random guess.
    I can only assume that Dembski, Marks, and Ewert finally recognized that this is indeed a problem for their framework, and perhaps have dropped the poor discriminator unceremoniously. At least, that would answer the question from my exchange with Ewert (I'd like to know whether this "general framework" is still in use) with a no.
  • I don't think much of those calculations of computational limits of the universe. Combinatorics leads to big numbers without great fuss: there are $52! \approx 8.07 \times 10^{67}$ ways to arrange a single deck of cards, many more than can be computed under Dembski's limit of $10^{40}$. With two identical decks, I can find $\frac{104!}{2^{52}} \approx 2.29 \times 10^{150}$ ways to arrange them, more than Seth Lloyd's limit of $10^{120}$. And still, card games are played - even solitaire...
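The two card-deck counts are easy to verify in a few lines of Python:

```python
# Checking the card-deck counts against the two computational limits.
import math

single_deck = math.factorial(52)            # orderings of one deck
two_decks = math.factorial(104) // 2**52    # two identical decks: each of the
                                            # 52 cards appears twice, so divide
                                            # by 2 for every duplicated pair

print(f"52!       ~ 10^{math.log10(single_deck):.0f}")
print(f"104!/2^52 ~ 10^{math.log10(two_decks):.0f}")
assert single_deck > 10**40    # exceeds Dembski's practical limit
assert two_decks > 10**120     # exceeds Seth Lloyd's absolute limit
```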
Previous: Part 1 - Introduction, What is information?
