Sunday, July 14, 2013

Dembski's, Ewert's and Marks's Concept of a Search Applied to Exhaustive Searches

At Uncommon Descent, Winston Ewert, co-author of the paper A General Theory of Information Cost Incurred by Successful Search, writes:
"The search is defined to be a six-tuple consisting of the initiator, terminator, inspector, navigator, nominator, and discriminator. The paper studies the question of picking a search at random, and that would imply picking each of the six components at random. We did not consider it necessary to specifically state that each individual component was also selected at random. That would seem to be implied.
So, let $\Omega = \{\omega_1, \omega_2, \dots, \omega_N\}$ be our finite search space with $N$ elements. We are looking for a single element $\omega_k$, so we try to maximize the fitness function $f = \chi_{\omega_k}$, the characteristic function of the target. To keep everything finite, we don't allow repetitions, i.e., each point can be visited only once during the search. This is - as Macready and Wolpert observed - always possible by keeping a look-up table, and thus doesn't change the set-up. Therefore, our search is completed in at most $N$ steps.
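Here is a minimal sketch of this set-up (the names and the uniformly random visiting order are illustrative choices of mine, not notation from the paper): with no discriminator interfering, an exhaustive search returns the target every time.

```python
import random

N = 100                                     # size of the search space Omega
omega = list(range(N))                      # Omega = {0, 1, ..., N-1}
target = random.randrange(N)                # the sought element omega_k

def f(x):
    """Fitness f = chi_{omega_k}, the characteristic function of the target."""
    return 1 if x == target else 0

def exhaustive_search(order):
    """Visit each point exactly once (no repetitions) and return the
    point at which f was observed to attain its maximum."""
    best = order[0]
    for x in order[1:]:
        if f(x) > f(best):
            best = x
    return best

order = random.sample(omega, N)             # look-up table: every point once
assert exhaustive_search(order) == target   # succeeds with probability one
```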
(BTW: The claim that "each of the six components [is picked] at random" seems not to apply to the inspector: this is a fixed function for a given search - in our case, the inspector returns the value of the fitness function. Of course, one can say that we pick the inspector at random from a set containing just this one possible inspector.)
Let's take a look at all the searches which are ended by their terminator only after the $N$-th step, i.e., the subset of all exhaustive searches. The prize question: What is the probability of finding the target in such an exhaustive search? Until now, everyone looking at such problems would have thought that this probability is one: we certainly visited $\omega_k$ and spotted that the function $f$ takes its maximum there. But in the world of Dembski, Ewert, and Marks it is not, as a random discriminator takes its toll - and discriminators aren't obliged to return the target even if it was found and identified...
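A small simulation makes the oddity concrete. It assumes one plausible reading of a random discriminator, namely that it simply hands back a uniformly chosen visited point, regardless of the fitness values recorded along the way:

```python
import random

N = 100
trials = 100_000
hits = 0

for _ in range(trials):
    target = random.randrange(N)
    visited = random.sample(range(N), N)  # exhaustive: every point seen once
    # The search has certainly evaluated f at the target, but a random
    # discriminator is free to report any visited point whatsoever:
    guess = random.choice(visited)
    hits += (guess == target)

print(hits / trials)  # approx. 1/N = 0.01 - nowhere near 1
```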
Counterintuitive? That is a flattering description: the discriminator's purpose seems to be to turn even a search that is successful by all human standards into a mere guess, just to fit the idée fixe that each search can be "represented" by a measure on the search space.
Addendum: We can drop the condition of not having repetitions in our searches and just look at those searches which are terminated only after the whole search space has been visited: terminators with this property exist. Such searches have length at least $N$, but can be much longer. The result is the same: the probability of finding the target during a complete enumeration of the search space is (much) less than one. I have to ask: What good is a model in which an exhaustive search doesn't fare much better than a single guess?
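Under the same hedged reading of a random discriminator as a uniform choice among the visited rows of the search matrix, this is immediate: if the enumeration has length $L \ge N$ and visits the target $\omega_k$ exactly $m$ times, then

$$P(\text{success}) = \frac{m}{L},$$

which is $1/N$ for a repetition-free exhaustive search ($L = N$, $m = 1$), and typically smaller still for longer enumerations.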

4 comments:

  1. Dembski, Ewert, and Marks (posting under Ewert's name at UD) expect you to have parsed their Chatty-Cathy presentation of "general targeted search" just as they do. So let's take a quick look at some ways in which they've hanged themselves with their informality.

    1. It is possible for the inspector $O_\alpha$ to map the sample space $\Omega$ to tuples in $Y_1 \times \cdots \times Y_n$, i.e., $O_\alpha(x) = (O_1(x), \ldots, O_n(x))$. It's not easy to preclude this, because tuples may be encoded as scalars. In Example 3.3, the two SAT scores may be encoded as $f(x) = 600 v(x) + q(x) - 200$, where $v(x)$ and $q(x)$ are the verbal and quantitative scores, respectively, for applicant $x$ (see the sketch after this list). What appears to be one inspection may be many.

    2. Worse, the inspector operates without restriction on the partial search matrix, including $\{x_1, \ldots, x_{i-1}\}$ in the first row, and thus may perform additional inspections of previously inspected elements of $\Omega$ while inspecting the new entry $x_i$ for the first time. It is possible for $\alpha_i$ to be a tuple of tuples.

    3. The navigator also operates without restriction on the partial search matrix. Thus what I've said about $\alpha_i$ applies also to $\beta_i$. Simply calling $O_\beta$ the "navigator" and sharing your feelings about it does not define it adequately.

    4. The discriminator operates without restriction on the search matrix, and — surprise, surprise! — can "surreptitiously" inspect the elements of $\Omega$ in the first row yet again.
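    Regarding item 1, a small sketch of my own (assuming, as in Example 3.3, SAT-style scores that are multiples of 10 between 200 and 800) shows that the single scalar losslessly encodes both inspections:

    ```python
    def encode(v, q):
        """One scalar fitness value concealing two SAT scores."""
        return 600 * v + q - 200

    def decode(f):
        """Recover both scores, assuming multiples of 10 in [200, 800]."""
        t = f + 200               # t = 600*v + q
        v = (t // 6000) * 10      # the 600*v part jumps in steps of 6000
        q = t - 600 * v
        return v, q

    scores = range(200, 801, 10)
    assert all(decode(encode(v, q)) == (v, q)
               for v in scores for q in scores)
    ```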

    There's more I could say. But I think it's enough to point out that DEM can no longer keep track of the number of queries (now called inspections). Their work is, in a word, sloppy.

  2. a) A joint account? Not necessarily, though I think that W. Ewert's answers reflect the position of all the authors.

    b) Yes, the discriminator starts a search all on its own, with (or without) some additional knowledge.

    c) I think that I'm doing a straightforward reading of the paper "Some Multiobjective Optimizers are Better than Others" by David Corne and Joshua Knowles: in the single-objective case we look at the best value so far, while in the multiobjective case things become more complicated - an archive and a discriminator have very little in common.

    d) At the moment, I have to draw the conclusion that, in the general framework of Dembski, Ewert, and Marks, searches which have enumerated the complete search space will on average find the target with the same probability as a random guess. I don't see how a concept which allows for this is of much use.

    e) It is irksome that searches which are the same in any conventional framework (i.e., they use the same fitness function, terminate after the same number of queries, and visit the same points - even in the same order) will be "represented" by different elements of $M(\Omega)$, just because they have different "discriminators". Separating the fitness function and the target doesn't make much sense to me.

    Replies
    1. I've finally gotten around to rereading Corne and Knowles. DEM can model the evolution of an archive with a suitably restricted Markov process on populations. However, the last inspected population, which a coherent discriminator must yield, is not an archive, because the elements of the population are not associated with fitness values.

      By the way, if performance is measured on the histogram of fitness values, as in the NFL framework, then archiving also matters in the single-objective case. But this is actually a departure from the NFL framework, where "algorithms" are merely samplers.

  3. With the original formulation of Dembski and Marks, $N$ iterations of uniform sampling without replacement maximized active information, even though uniform sampling is uninformed. That is, work was counted as prior information of the problem:

    http://boundedtheoretics.blogspot.com/2009/12/work-is-not-information.html
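    A worked instance, using the definitions $I_\Omega = -\log_2 p$ (endogenous information) and $I_+ = \log_2(q/p)$ (active information) from Dembski and Marks: a single uniform query hits the target with probability $p = 1/N$, while $N$ uniform queries without replacement succeed with probability $q = 1$, so

    $$I_+ = \log_2\frac{q}{p} = \log_2 N = I_\Omega,$$

    the maximal possible value, although no problem-specific information was used at all.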

    Dembski and Marks have never admitted that this was a severe defect in their analysis, and probably never will. But they've circumvented it now by forcing "search" to guess a single element of the sample space. Despite all of their talk about generality, they've greatly restricted what they can model.
