Abstract
In their 2010 paper "The Search for a Search: Measuring the Information Cost of Higher Level Search", the authors William A. Dembski and Robert J. Marks II present as one of two results their so-called Horizontal No Free Lunch Theorem. One consequence of this theorem is their remark: "If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search." This is quite surprising, as in the tradition of the No Free Lunch theorem one would expect the performances to be equally good (or bad). Using only very basic elements of probability theory, this essay shows that their remark is wrong, as is their theorem.
Definitions
The situation is quite simple: there is a set $\Omega_1$, the search space, with $N$ elements, so w.l.o.g. $\Omega_1 = \{1,\,\ldots,\,N\}$. Let $T_1 \subset \Omega_1$ denote the target; at first assume that $T_1$ consists of a single element. To find this element you have $K$ guesses, and you aren't allowed to guess the same element twice. Such a sequence of $K$ different guesses can be described as an ordered $K$-tuple $(n_1,\,n_2,\,\ldots,\,n_K)$ of $K$ pairwise different numbers and is called a search. The $\frac{N!}{(N-K)!}$ different searches form the space $\Omega_K$.
A search strategy is a probability measure $\mathbf{Q}$ on $\Omega_K$. Such a measure can be given by assigning a probability $\mathbf{Q}((n_1,\,n_2,\,\ldots,\,n_K)) = q_{(n_1,\,n_2,\,\ldots,\,n_K)}$ to each of the (pure) strategies $(n_1,\,n_2,\,\ldots,\,n_K)$ in $\Omega_K$: $q_{(n_1,\,n_2,\,\ldots,\,n_K)}$ is the probability of choosing the strategy $(n_1,\,n_2,\,\ldots,\,n_K)$ from $\Omega_K$. So obviously, $q_{(n_1,\,n_2,\,\ldots,\,n_K)} \ge 0\; \forall (n_1,\,n_2,\,\ldots,\,n_K)\in\Omega_K$ and $\sum_{(n_1,\,n_2,\,\ldots,\,n_K) \in \Omega_K} q_{(n_1,\,n_2,\,\ldots,\,n_K)} = 1$.
Random search (i.e., the random search strategy) corresponds to the uniform distribution $\mathbf{U}$ on $\Omega_K$, i.e., $\mathbf{U}((n_1,\,n_2,\,\ldots,\,n_K)) = \frac{(N-K)!}{N!}$.
Any other search strategy is called an assisted search. This shows the small scope of the concept of an assisted search in the papers of Dembski and Marks: it is not possible to encode the influence of an oracle or a fitness function into the measure $\mathbf{Q}$, so most examples presented by the authors (like partitioned search or the easter egg hunt) are not covered by this model.
The performance $\mathcal{P}(\mathbf{Q},\delta_{T_1})$ of a search strategy for finding an element $T_1 \in \Omega_1$ is defined as the probability of naming $T_1$ in one of the $K$ guesses of the search.
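These definitions are small enough to be tried out in code. The following is a minimal sketch in Python for a toy-sized search space; all names (omega_K, random_strategy, performance) are my own choices, not notation from the paper.

```python
# Minimal sketch of the model, assuming a search space small enough
# that Omega_K can be enumerated exhaustively.
import math
import random
from itertools import permutations

N, K = 6, 3  # search space {1, ..., N}, K guesses without repetition

# Omega_K: all ordered K-tuples of pairwise different elements
omega_K = list(permutations(range(1, N + 1), K))
assert len(omega_K) == math.factorial(N) // math.factorial(N - K)

def random_strategy(searches, rng):
    """An arbitrary search strategy Q: a probability measure on Omega_K."""
    weights = [rng.random() for _ in searches]
    total = sum(weights)
    return {s: w / total for s, w in zip(searches, weights)}

def performance(Q, target):
    """P(Q, delta_T): probability that one of the K guesses names the target."""
    return sum(q for s, q in Q.items() if target in s)

rng = random.Random(1)
Q = random_strategy(omega_K, rng)  # one arbitrary assisted search
```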
Some calculations
Using the characteristic function
$\chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(T_1)= \left\lbrace \begin{array}{cl}
1 & \text{if } T_1 \in \{n_1,\,n_2,\,\ldots,\,n_K\} \\
0 & \text{otherwise}
\end{array} \right.$
this probability can be written as:
$\mathcal{P}{(\mathbf{Q},\delta_{T_1})} = \sum_{(n_1,\,n_2,\,\ldots,\,n_K) \in \Omega_K} q_{(n_1,\,n_2,\,\ldots,\,n_K)} \cdot \chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(T_1)$
The characteristic function is independent of the order of the elements $n_1, n_2, \ldots, n_K$. So the expression can be streamlined by using, instead of the space $\Omega_K$ of ordered $K$-tuples, the space $\mathfrak{P}_K(\Omega_1)$ of subsets $\{n_1,\,n_2,\,\ldots,\,n_K\}$ of $\Omega_1$ with $K$ elements. This space has ${N \choose K}$ elements. $\mathbf{Q}$ on $\Omega_K$ immediately gives rise to a measure on $\mathfrak{P}_K(\Omega_1)$, again called $\mathbf{Q}$, by setting:
$ \mathbf{Q}(\{n_1,\,n_2,\,\ldots,\,n_K\}) = \sum_{\sigma \in S_K} q_{(n_{\sigma(1)},\,n_{\sigma(2)},\,\ldots,\,n_{\sigma(K)})}.$
Here, $S_K$ is the group of the $K!$ permutations of $\{1,\,2,\,\ldots,\,K\}$. Thus, the probability to search successfully for $T_1$ is
$\mathcal{P}{(\mathbf{Q},\delta_{T_1})} = \sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} \mathbf{Q}(\{n_1,\,n_2,\,\ldots,\,n_K\}) \cdot \chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(T_1).$
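Continuing the sketch from above, this symmetrization is just a regrouping of the probabilities by the underlying set of guesses (project_to_subsets is again a hypothetical name of mine):

```python
from collections import defaultdict

def project_to_subsets(Q):
    """Push Q from ordered K-tuples down to K-element subsets by
    summing the K! orderings of each subset (the sum over S_K above)."""
    Q_sets = defaultdict(float)
    for s, q in Q.items():
        Q_sets[frozenset(s)] += q
    return Q_sets

# Both representations yield the same success probability for a target:
Q_sets = project_to_subsets(Q)
assert abs(performance(Q, 1) - sum(q for s, q in Q_sets.items() if 1 in s)) < 1e-12
```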
But how does this search perform on average, assuming that the underlying measure is uniform? The underlying measure gives the probability with which a target element is chosen: if there is a uniform underlying measure $\mathbf{U}_{\Omega}$, each element $1,2,\ldots, N$ is the target with a probability of $\frac{1}{N}$. The average performance is given by:
$\mathcal{P}{(\mathbf{Q},\mathbf{U}_{\Omega})} =\sum_{l=1}^N \frac{1}{N}\sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} \mathbf{Q}(\{n_1,\,n_2,\,\ldots,\,n_K\}) \cdot \chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(l)$
$= \frac{1}{N}\sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} \mathbf{Q}(\{n_1,\,n_2,\,\ldots,\,n_K\}) \cdot \left(\sum_{l=1}^N \chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(l)\right)$
$= \frac{1}{N}\sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} \mathbf{Q}(\{n_1,\,n_2,\,\ldots,\,n_K\}) \cdot K $
$=\frac{K}{N} \sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} \mathbf{Q}(\{n_1,\,n_2,\,\ldots,\,n_K\})$
$=\frac{K}{N}.$
This holds for every measure $\mathbf{Q}$ on the space of the searches, in particular for the uniform measure, leading to the Special Remark: If no information about a search for a single element exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in the same search performance as random search.
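The computation above can be checked numerically with the sketch developed so far; for any strategy $\mathbf{Q}$ the average over the $N$ possible targets comes out as $\frac{K}{N}$ (here $0.5$ for $N=6$, $K=3$):

```python
# Average performance over a uniformly chosen single-element target.
avg = sum(performance(Q, t) for t in range(1, N + 1)) / N
print(avg, K / N)  # identical up to floating-point rounding, for any Q
```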
But perhaps this result depends on the special condition of a single-element target? What happens when $T_m = \{t_1, t_2, \ldots, t_m\} \in \mathfrak{P}_m(\Omega_1)$? Then a search is seen as a success if it identifies at least one element of the target $T_m$, thus
$\mathcal{P}{(\mathbf{Q},\delta_{T_m})} = \sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} q_{\{n_1,\,n_2,\,\ldots,\,n_K\}} \cdot \max_{t \in T_m}\{\chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(t)\}.$
Calculating the average over all elements of $\mathfrak{P}_m(\Omega_1)$ results in:
$\mathcal{P}{(\mathbf{Q},\mathbf{U}_{\mathfrak{P}_m(\Omega_1)})} = \frac{1}{{N \choose m}} \sum_{T \in \mathfrak{P}_m(\Omega_1)} \sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} q_{\{n_1,\,n_2,\,\ldots,\,n_K\}} \cdot \max_{t \in T}\{\chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(t)\}$
$=\frac{1}{{N \choose m}} \sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} q_{\{n_1,\,n_2,\,\ldots,\,n_K\}} \sum_{T \in \mathfrak{P}_m(\Omega_1)} \max_{t \in T}\{\chi_{\{n_1,\,n_2,\,\ldots,\,n_K\}}(t)\} $
The inner sum counts the $m$-element subsets $T$ that contain at least one of the $K$ guesses; as exactly ${N-K \choose m}$ of the ${N \choose m}$ subsets avoid all of them, this gives:
$=\frac{1}{{N \choose m}} \sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} q_{\{n_1,\,n_2,\,\ldots,\,n_K\}} \cdot \left({N \choose m} - {N-K \choose m}\right) $
$= \frac{{N \choose m} - {N-K \choose m}}{{N \choose m}} \sum_{\{n_1,\,n_2,\,\ldots,\,n_K\} \in \mathfrak{P}_K(\Omega_1)} q_{\{n_1,\,n_2,\,\ldots,\,n_K\}} $
$= \frac{{N \choose m} - {N-K \choose m}}{{N \choose m}} = 1-\frac{(N-K)(N-K-1)\ldots (N-K-m+1)}{N(N-1)\ldots (N-m+1)}.$
Again the probability of a successful search, and by definition the performance, is on average independent of the measure used on the space of the searches! (Note that the expression becomes $\frac{K}{N}$ for $m=1$, and $1$ for $N-K < m$.) The result of this calculation is the Less Special Remark: If no information about a search for a target exists other than the number of elements the target consists of, so that the underlying measure is uniform, then, on average, any other assumed measure will result in the same search performance as random search.
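The closed form can again be checked against the sketch; the average over all ${N \choose m}$ targets of size $m$ agrees with the formula for every strategy $\mathbf{Q}$ (performance_m is another hypothetical helper):

```python
from itertools import combinations

def performance_m(Q, target_set):
    """Success: at least one of the K guesses lies in the target set."""
    return sum(q for s, q in Q.items() if target_set & set(s))

m = 2
targets = [set(c) for c in combinations(range(1, N + 1), m)]
avg = sum(performance_m(Q, T) for T in targets) / len(targets)
closed_form = 1 - math.comb(N - K, m) / math.comb(N, m)
print(avg, closed_form)  # both 0.8 for N = 6, K = 3, m = 2
```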
As the average performance of random search and of any other search strategy doesn't differ on the targets of any fixed cardinality, it doesn't differ on mixtures of these classes either, so the next remark is elementary. General Remark: If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in the same search performance as random search.
Obviously this General Remark contradicts the remark of Dembski and Marks: If no information about a search exists, so that the underlying measure is uniform, then, on average, any other assumed measure will result in negative active information, thereby rendering the search performance worse than random search.
Consequences for the Horizontal No Free Lunch Theorem
There is much criticism of this theorem out there, e.g., "A Note Regarding 'Active Entropy'". The previous paragraphs add to this: Dembski's and Marks's remark is a direct consequence of the theorem, and as the remark is wrong, the theorem can't be right either.
How is it possible for two mathematicians to err on such a crucial point? Perhaps they were seduced by the elegance of their argument: the calculations in the sections above are simple, but a little bit clumsy. Dembski and Marks smartly augmented the original search space $\Omega_1$ and identified it with the space of pure search strategies $\Omega_K$. For each original target $T_1$ a corresponding target $\tilde{T_1}$ can be found in $\Omega_K$, such that a pure search $(n_1, \ldots, n_K)$ is successful exactly when it is an element of $\tilde{T_1}$. So the problem of searching with $K$ guesses on $\Omega_1$ is reduced to a single guess on $\Omega_K$. The problem they overlooked: the corresponding targets aren't nice subsets. In particular, if $K>1$, the corresponding sets for two different targets in the original space always have a non-empty intersection.
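This overlap is easy to see in the sketch: every search that guesses both $a$ and $b$ lies in the lifted targets of both original targets.

```python
# Lifted targets of two different single-element targets a != b:
a, b = 1, 2
lifted_a = {s for s in omega_K if a in s}
lifted_b = {s for s in omega_K if b in s}
print(len(lifted_a & lifted_b))  # positive whenever K > 1
```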
Nevertheless, Dembski and Marks use an exhaustive partition of the augmented space in their proof of the Horizontal No Free Lunch Theorem, but there is no such partition into subsets which makes sense within the framework of their argument.
Conclusion
Dembski and Marks have introduced their concept of Active Information using a great number of examples. Their model for an assisted search works only for one or two of these examples. The theoretical results presented in The Search for a Search seem not to work at all. That leaves Active Information as a new performance measure, but it is hard to see how it improves the understanding of the concept of information in connection with search algorithms.