Comments on DiEbLog: Dembski's, Ewert's and Marks's Concept of a Search Applied to Exhaustive Searches

I've finally gotten around to rereading Corne and Knowles. DEM can model the evolution of an archive with a suitably restricted Markov process on populations. However, the last inspected population, which a coherent discriminator must yield, is not an archive, because the elements of the population are not associated with fitness values.

By the way, if performance is measured on the histogram of fitness values, as in the NFL framework, then archiving also matters in the single-objective case. But this is actually a departure from the NFL framework, where "algorithms" are merely samplers.

Tom English (https://www.blogger.com/profile/03887540845396409340), 2014-02-14

With the original formulation of Dembski and Marks, $N$ iterations of uniform sampling without replacement maximized active information, even though uniform sampling is uninformed. That is, work was counted as prior information about the problem:

http://boundedtheoretics.blogspot.com/2009/12/work-is-not-information.html

Dembski and Marks have never admitted that this was a severe defect in their analysis, and probably never will. But they've circumvented it now by forcing "search" to guess a single element of the sample space.
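This point about uniform sampling can be made concrete with a small sketch (my own, not from the comment): under the active-information measure $I_+ = \log_2(q/p)$, $N$ draws without replacement from a space of size $|\Omega|$ with a single-element target succeed with probability $q = N/|\Omega|$, so the "information" grows as $\log_2 N$ even though no problem knowledge is used. The space size below is an arbitrary illustration.

```python
import math

def active_information(q: float, p: float) -> float:
    """Active information I+ = log2(q / p): how far a search's success
    probability q exceeds the blind-guess baseline p, in bits."""
    return math.log2(q / p)

omega = 1024                 # |Omega|: size of the sample space (illustrative)
p = 1 / omega                # baseline: one uniform guess at a one-element target

for n in (1, 2, 16, 256):    # n uniform draws without replacement
    q = n / omega            # success probability: n distinct points inspected
    print(f"n={n:3d}  I+ = {active_information(q, p):.1f} bits")
# I+ = log2(n): "active information" accrues from mere work, not knowledge
```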
Despite all of their talk about generality, they've greatly restricted what they can model.

Tom English (https://www.blogger.com/profile/03887540845396409340), 2013-07-17

a) A joint account? Not necessarily, though I think that W. Ewert's answers reflect the position of all the authors.

b) Yes, the discriminator starts a search all on its own, with (or without) some additional knowledge.

c) I think that I'm doing a straightforward reading of the paper "Some Multiobjective Optimizers are Better than Others" by David Corne and Joshua Knowles: <i>in the single-objective case we look at the best value so far, while in the multiobjective case things become more complicated</i>. An archive and a discriminator have very little in common.

d) At the moment, I have to draw the conclusion: <b>In the general framework of Dembski, Ewert, and Marks, searches which have enumerated the complete search space will on average find the target with the same probability as a random guess.</b> I don't see how a concept which allows for this is of much use.

e) It is irksome that searches which are the same in any conventional framework (i.e., they use the same fitness function, terminate after the same number of queries, and look up the same points, even in the same order) will be "represented" by different elements of <b>M</b>(Ω), just because they have different "discriminators".
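The bolded conclusion above can be illustrated with a toy Monte Carlo sketch (my own construction, not from DEM or from Corne and Knowles). The discriminator here returns a uniformly chosen inspected point, standing in for a discriminator that has no usable fitness information; exhaustively querying the space then buys nothing over a blind guess.

```python
import random

random.seed(0)
omega = list(range(100))      # toy sample space, |Omega| = 100
trials = 100_000
hits = 0
for _ in range(trials):
    target = random.choice(omega)   # single-element target
    inspected = omega[:]            # exhaustive search: every point is queried
    # the discriminator must still commit to one element; without usable
    # fitness information its pick is no better than uniform
    hits += random.choice(inspected) == target
print(hits / trials)   # close to 1/|Omega| = 0.01, the blind-guess rate
```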
Separating the fitness function and the target doesn't make much sense to me.

DiEb (https://www.blogger.com/profile/02099109109735165335), 2013-07-15

Dembski, Ewert, and Marks (posting under Ewert's name at UD) expect you to have parsed their Chatty-Cathy presentation of "general targeted search" just as they do. So let's take a quick look at some ways in which they've hanged themselves with their informality.

1. It is possible for the inspector $O_\alpha$ to map the sample space $\Omega$ to tuples in $Y_1 \times \cdots \times Y_n$, i.e., $O_\alpha(x) = (O_1(x), \ldots, O_n(x))$. It's not easy to preclude this, because tuples may be encoded as scalars. In Example 3.3, the two SAT scores may be encoded as $f(x) = 600\,v(x) + q(x) - 200$, where $v(x)$ and $q(x)$ are the verbal and quantitative scores, respectively, for applicant $x$. What appears to be one inspection may be many.

2. Worse, the inspector operates without restriction on the partial search matrix, which includes $\{x_1, \ldots, x_{i-1}\}$ in the first row, and thus may perform additional inspections of previously inspected elements of $\Omega$ while inspecting the new entry $x_i$ for the first time. It is possible for $\alpha_i$ to be a tuple of tuples.

3. The navigator also operates without restriction on the partial search matrix. Thus what I've said about $\alpha_i$ applies also to $\beta_i$. Simply calling $O_\beta$ the "navigator" and sharing your feelings about it does not define it adequately.

4. The discriminator operates without restriction on the search matrix, and — surprise, surprise! — can "surreptitiously" inspect the elements of $\Omega$ in the first row yet again.

There's more I could say.
But I think it's enough to point out that DEM can no longer keep track of the number of queries (now called inspections). Their work is, in a word, sloppy.

Tom English (https://www.blogger.com/profile/03887540845396409340), 2013-07-15
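As a footnote to point 1 in the last comment: the scalar encoding $f(x) = 600\,v(x) + q(x) - 200$ is easy to realize and invert. The decoding step below is my own sketch and assumes $200 \le q(x) < 800$, so that $q(x) - 200$ stays below the base 600; nothing in the formalism marks such an $f$ as one observation rather than two.

```python
def encode(v: int, q: int) -> int:
    """Pack verbal and quantitative scores into one scalar, f = 600*v + q - 200."""
    return 600 * v + q - 200

def decode(f: int) -> tuple[int, int]:
    """Recover (v, q); valid while 200 <= q < 800, so that q - 200 < 600."""
    v, r = divmod(f, 600)
    return v, r + 200

f = encode(540, 710)   # one "inspection" value...
print(f, decode(f))    # ...carrying two scores: (540, 710)
```

(At the boundary $q = 800$ this particular encoding collides with $(v+1, 200)$; a slightly larger base avoids that, and the point stands either way.)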