<h1>An Amazon Review: Still waiting for the ultimate book on Intelligent Design</h1><p><i>DiEbLog, by DiEb, 2018-02-23</i></p>I wrote a <a href="https://www.amazon.com/gp/customer-reviews/R200B5D2O5MI7R/ref=cm_cr_srp_d_rvw_ttl?ie=UTF8&ASIN=9813142146">review at amazon</a> for <a href="http://evoinfo.org/people.html">Dr. Robert J. Marks II's, Dr. Dr. William A. Dembski's, and Dr. Winston Ewert's</a> book <a href="https://www.amazon.com/Introduction-Evolutionary-Informatics-Robert-Marks/dp/9813142146/ref=cm_cr_srp_d_product_top?ie=UTF8">Introduction to Evolutionary Informatics (1st Edition)</a>: <blockquote style="background:Lavender">We are all waiting for the ultimate book on Intelligent Design, written by R. Marks and W. Dembski. Instead we get a "textbook", another attempt to explain the concepts to laymen. I got the impression that the authors used this setting to avoid the necessary rigour: they just do not define terms like "search" which they use hundreds of times. This allows for a lot of hand-waving, like the following sentence on p. 174:<br><br>"We note, however, the choice of an algorithm along with its parameters and initialization imposes a probability distribution over the search space"<br><br>That unsubstantiated claim is essential for their following proofs on "The Search for a Search"!<br><br>And then there are details like this one:<br><br>p. 130: "For the Cracker Barrel puzzle [we got] an endogenous information of I = 7.15 bits"<br>p. 138: "We return now to the Cracker Barrel puzzle. We showed that the endogenous information [...] is I = 7.4 bits"<br><br>I tried to solve this conundrum, but I came up with I = 7.8 bits. 
I contacted the authors, but got no reply.</blockquote>Not surprisingly, I gave it only two stars. <h3>Some Details on the Cracker Barrel Puzzle</h3> A more complete quote from p. 130 is: <blockquote>For the Cracker Barrel puzzle, all of the 15 holes are filled with pegs and, at random, a single peg is removed. This starts the game. Using random initialization and random moves, simulation of four million games using a computer program resulted in an estimated win probability p = $0.007\,0$ and an endogenous information of $$I_\Omega = − \log_2\,p\;=\;7.15\,bits.$$</blockquote> They didn't calculate the exact value; instead, they simulated the puzzle 4,000,000 times. A simulation is the easiest way to program one's way to a result - but how good is it? It should be <i>pretty good</i>: performing one simulation is a Bernoulli trial with a probability of success $p_t$, the theoretical probability of winning a single game by chance. Repeating 4,000,000 Bernoulli trials leads to a binomial experiment $B(4,000,000; p_t)$, so the standard error of the estimate is $\sigma = 0.000\,042$ - that's why stating four digits after the decimal point isn't overconfident: assuming that there is no systematic error, the probability that the actual value $p_t$ lies within $0.007\,00 \pm 0.000\,05$ is $77\%$.<p>Giving three significant digits for $I_\Omega$ oversells the power of their experiment slightly: this implies that they expect $p_t$ to lie in the interval $[0.007\,017;0.007\,066]$ with a reasonable probability - but that probability is at best about $44\%$. <p>Confining themselves to only two significant digits on p. 138: $I_\Omega = 7.4\;bits$ yields much more reliable results: again, assuming that there is nothing systematically wrong with their calculation, they can say that $p_t$ lies in $[0.005\,72;0.006\,13]$ with a probability of more than $99.999\,99\%$! Well done...<p>Or not: it is very improbable that both values are correct. 
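Both the simulation the book describes and the binomial error estimate above can be sketched in a few lines of Python. The jump table for the 15-hole triangular board is my own reconstruction, and I assume "random moves" means a uniform choice among all currently legal jumps - neither detail is taken from the book:

```python
import math
import random

# Directed jumps (from, over, to) on the 15-hole triangular board,
# numbered row by row: 0 / 1 2 / 3 4 5 / 6 7 8 9 / 10 11 12 13 14.
JUMPS = [(0, 1, 3), (3, 1, 0), (0, 2, 5), (5, 2, 0), (1, 3, 6), (6, 3, 1),
         (1, 4, 8), (8, 4, 1), (2, 4, 7), (7, 4, 2), (2, 5, 9), (9, 5, 2),
         (3, 4, 5), (5, 4, 3), (3, 6, 10), (10, 6, 3), (3, 7, 12), (12, 7, 3),
         (4, 7, 11), (11, 7, 4), (4, 8, 13), (13, 8, 4), (5, 8, 12), (12, 8, 5),
         (5, 9, 14), (14, 9, 5), (6, 7, 8), (8, 7, 6), (7, 8, 9), (9, 8, 7),
         (10, 11, 12), (12, 11, 10), (11, 12, 13), (13, 12, 11),
         (12, 13, 14), (14, 13, 12)]

def random_game(rng):
    """Play one game: remove one peg at random, then make uniformly
    random legal jumps until stuck; return True iff one peg remains."""
    board = (1 << 15) - 1                # all 15 pegs set in a bitmask
    board &= ~(1 << rng.randrange(15))   # random initialization
    while True:
        legal = [m for m in JUMPS
                 if board & (1 << m[0]) and board & (1 << m[1])
                 and not board & (1 << m[2])]
        if not legal:
            break
        a, b, c = rng.choice(legal)
        board = (board & ~(1 << a) & ~(1 << b)) | (1 << c)
    return bin(board).count("1") == 1

def estimate(n, seed=1):
    """Estimated win probability and its binomial standard error."""
    rng = random.Random(seed)
    wins = sum(random_game(rng) for _ in range(n))
    p = wins / n
    return p, math.sqrt(p * (1 - p) / n)

# The error estimate used above: for n = 4,000,000 and p = 0.007 the
# standard error is sqrt(p*(1-p)/n), roughly 0.000042, and the chance of
# landing within +/- 0.00005 of the true value is 2*Phi(z) - 1, about 77%.
sigma = math.sqrt(0.007 * 0.993 / 4_000_000)
within = math.erf(0.00005 / sigma / math.sqrt(2))
```

Under these assumptions, a run like `estimate(20_000)` lands far closer to $p \approx 0.004\,5$ than to the book's $p = 0.007\,0$; an exact enumeration of all games is omitted here.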
Very, very improbable: using the most favourable estimates, the second result should occur with a probability of less than $10^{-98}$ if the first experiment was correctly implemented. It is even worse the other way around: $10^{-112}$. <p><b>Which value is correct?</b><p>The answer is not surprising: both are wrong - the three authors somehow botched the implementation of even the easiest way to approach the question - a simulation. How can I be so cock-sure? I simulated it myself - 4,000,000 times - and got a value of $p = 0.004\,5$. Then, I calculated the theoretical value by enumerating all possible games and their respective probabilities: again, $p = 0.004\,5$. Then, I published part of my code at <a href="http://theskepticalzone.com/wp/prof-marks-gets-lucky-at-cracker-barrel/">The Skeptical Zone</a>, and thankfully, <a href="http://theskepticalzone.com/wp/prof-marks-gets-lucky-at-cracker-barrel/comment-page-1/#comment-210273">Roy</a> and <a href="http://theskepticalzone.com/wp/prof-marks-gets-lucky-at-cracker-barrel/comment-page-1/#comment-210455">Corneel</a> also implemented a simulation - which got compatible results. Lastly, <a href="http://boundedtheoretics.blogspot.de">Tom English</a> programmed <a href="http://theskepticalzone.com/wp/prof-marks-gets-lucky-at-cracker-barrel/comment-page-1/#comment-210264">the problem much more cleverly</a>, getting exactly the same results as I did (I just had to wait much longer for mine...)<p>Why didn't the authors do the same? <h1>The Search Problem of William Dembski, Winston Ewert, and Robert Marks</h1><p><i>DiEbLog, by DiEb, 2018-01-29</i></p><div style="font-weight: 600;font-size: 70%;margin-bottom: 2.5em"><div><em><a href="http://www.worldscientific.com/worldscibooks/10.1142/9974#t=toc">Introduction to Evolutionary Informatics</a>,</em> by Robert J. 
Marks II, the “<a href="http://superscholar.org/features/20-most-influential-christian-scholars/">Charles Darwin of Intelligent Design</a>”; William A. Dembski, the “<a href="https://billdembski.com/inteldes.htm">Isaac Newton of Information Theory</a>”; and Winston Ewert, the “<a href="http://boundedtheoretics.blogspot.com/2011/06/revised-id-thesis-describes-plagiarism.html">Charles Ingram of Active Information</a>.” World Scientific, 332 pages. </div><div style="margin-top: 0.4em"><a href="https://lccn.loc.gov/2016014235">Classification</a>: Engineering mathematics. Engineering analysis. (<a href="https://www.loc.gov/aba/cataloging/classification/lcco/lcco_t.pdf">TA347</a>)<br><a href="https://lccn.loc.gov/2016014235">Subjects</a>: Evolutionary computation. Information technology–Mathematics.<sup><a href="#fn1" id="ref1">1</a></sup></div></div><i><b>Search</b></i> is a central term in the work of Dr. Dr. William Dembski jr, Dr. Winston Ewert, and Dr. Robert Marks II (DEM): it appears in the title of a couple of papers written by at least two of the authors, and it is mentioned hundreds of times in their textbook "<i>Introduction to Evolutionary Informatics</i>". Strangely - and unlike the other central term <i><b>information</b></i> - it is not defined in this textbook, and neither are <i><b>search problem</b></i> and <i><b>search algorithm</b></i>. Luckily, dozens of examples of searches are given. I took a closer look to find out what DEM see as the <i>search problem</i> in the "<i>Introduction to Evolutionary Informatics</i>" and how their model differs from those used by other mathematicians and scientists. <a name='more'></a><h2>A Smörgåsbord of Search Algorithms</h2>In their chapter 3.8 "<i>A Smörgåsbord of Search Algorithms</i>", DEM present <table><th colspan=2 style="text-align:center">Table 3.2. 
A list of some search algorithms.</th><tr><td><ul><li>active set method<sup>38</sup></li><li>adaptive coordinate descent<sup>39</sup></li><li>alpha–beta pruning<sup>40</sup></li><li>ant colony optimization<sup>41</sup></li><li>artificial immune system optimization<sup>42</sup></li><li>auction algorithm<sup>43</sup></li><li>Berndt–Hall–Hall–Hausman algorithm<sup>44</sup></li><li>blind search</li><li>branch and bound<sup>45</sup></li><li>branch and cut<sup>46</sup></li><li>branch and price<sup>47</sup></li><li>Broyden–Fletcher–Goldfarb–Shanno (BFGS) method<sup>48</sup></li><li>Constrained optimization by linear approximation (COBYLA)<sup>49</sup></li><li>conjugate gradient method<sup>50</sup></li><li>CMA-ES (covariance matrix adaptation evolution strategy)<sup>51</sup></li><li>criss-cross algorithm<sup>52</sup></li><li>cross-entropy optimization<sup>53</sup></li><li>cuckoo search<sup>54</sup></li><li>Davidon’s variable metric method<sup>55</sup></li><li>differential evolution<sup>56</sup></li><li>eagle strategy<sup>57</sup></li><li>evolutionary programs<sup>58</sup></li><li>evolutionary strategies</li><li>exhaustive search</li><li>Fibonacci search<sup>59,60</sup></li><li>firefly algorithm<sup>61</sup></li><li>Fletcher–Powell method<sup>62</sup></li><li>genetic algorithms<sup>63</sup></li></ul></td><td><ul><li>glowworm swarm optimization<sup>64</sup></li><li>golden section search<sup>65,66</sup></li><li>gradient descent<sup>67</sup></li><li>great deluge algorithm<sup>68</sup></li><li>harmony search<sup>69</sup></li><li>imperialist competitive algorithm<sup>70</sup></li><li>intelligent water drop optimization<sup>71</sup></li><li>Karmarkar’s algorithm<sup>72</sup></li><li>Levenberg–Marquardt algorithm<sup>73</sup></li><li>Linear, Quadratic, Integer and Convex Programming<sup>74</sup></li><li>Nelder–Mead method<sup>75</sup></li><li>Newton–Raphson method<sup>76</sup></li><li>one-at-a-time search<sup>77</sup></li><li>particle swarm 
optimization<sup>78</sup></li><li>pattern search<sup>79</sup></li><li>POCS (alternating projections onto convex sets)<sup>80</sup></li><li>razor search<sup>81</sup></li><li>Rosenbrock methods<sup>82</sup></li><li>sequential unconstrained minimization technique (SUMT)<sup>83</sup></li><li>shuffled frog-leaping algorithm<sup>84</sup></li><li>simplex methods<sup>85</sup></li><li>simulated annealing<sup>86</sup></li><li>social cognitive optimization<sup>87</sup></li><li>stochastic gradient search<sup>88</sup></li><li>stochastic hill climbing<sup>89</sup></li><li>Tabu search<sup>90</sup></li><li>Tree search<sup>91</sup></li><li>Zionts–Wallenius method<sup>92</sup></li></ul></td></tr></table>A smörgåsbord indeed - and it is not at all clear to me how this list was constructed: DEM just write "<i>[a]n incomplete list of search algorithms<sup>37</sup> is provided in Table 3.2.</i>" and give as a footnote Donald Knuth's third volume of "The Art of Computer Programming: Sorting and Searching". But obviously, this list is not taken from the book, as<ol><li>Knuth's definition of a search covers only a finite search space:<blockquote>"In general, we shall suppose that a set of N records has been stored, and the problem is to locate the appropriate one. As in the case of sorting, we assume that each record includes a special field called its <i>key</i>."</blockquote></li><li>some of the methods were developed after 1973, the year Knuth's book was published according to DEM</li></ol>I assume it never hurts to mention Donald Knuth. Fortunately, the footnotes in the table (which are listed at the end of the chapter) are a little bit more to the point. To save jumping back and forth, I added the given source to every item in the list in a second column. I looked up some of them and tried to find out which kind of search problem the authors of the papers have in mind - this I put into the third column<sup><a href="#fn2" id="ref2">2</a></sup>. 
<table><tr><th style="text-align:center">method</th><th style="text-align:center">source</th><th style="text-align:center">search problem</th></tr> <tbody><tr border-bottom="border-width: 2px; border-color: green"><td>active set method </td><td style="background:lightgrey">J. Nocedal and S. Wright, <i>Numerical Optimization</i> (Springer Science & Business Media, 2006).</td><td>optimization problem: $\min_{x \in \mathbb{R}^n}f(x)$ subject to $\begin{cases}c_i(x) = 0, &i \in \mathcal{E}\\c_i(x) \ge 0,& i \in \mathcal{I} \end{cases}$</td> </tr> <tr><td>adaptive coordinate descent </td><td>I. Loshchilov, M. Schoenauer, and M. Sebag, “Adaptive Coordinate Descent.” In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (ACM, 2011), pp. 885–892.</td><td>(separable) continuous optimization problems</td> </tr> <tr><td>alpha–beta pruning </td><td>Donald E. Knuth and Ronald W. Moore, <i>“An analysis of alpha-beta pruning.”</i> Artif Intel, 6(4), pp. 293–326 (1976).</td><td>"searching a large tree of potential continuations" (p. 234)</td> </tr> <tr><td>ant colony optimization </td><td>M. Dorigo, V. Maniezzo, and A. Colorni, <i>“Ant system: optimization by a colony of cooperating agents.”</i> IEEE Transactions on Systems, Man, and Cybernetics — Part B, 26(1), pp. 29–41 (1996).</td><td>stochastic combinatorial optimization (<a href="http://ieeexplore.ieee.org/document/484436/">here</a>)</td> </tr> <tr><td>artificial immune system optimization </td><td>Leandro N. de Castro and J. Timmis, <i>Artificial Immune Systems: A New Computational Intelligence Approach</i> (Springer, 2002), pp. 57– 58.</td><td></td> </tr> <tr><td>auction algorithm </td><td>Dimitri P. Bertsekas, <i>“A distributed asynchronous relaxation algorithm for the assignment problem.”</i> Proceedings of the IEEE International Conference on Decision and Control, pp. 1703–1704 (1985).</td><td></td> </tr> <tr><td>Berndt–Hall–Hall–Hausman algorithm </td><td>Ernst R. Berndt, Bronwyn H. 
Hall, Robert E. Hall, and Jerry A. Hausman, <i>“Estimation and inference in nonlinear structural models.”</i> Annals of Economic and Social Measurement, 3(4), pp. 653–665 (1974).</td><td>non-linear least squares problems</td> </tr> <tr><td>blind search</td><td></td><td></td> </tr> <tr><td>branch and bound </td><td>Patrenahalli M. Narendra and K. Fukunaga, <i>“A branch and bound algorithm for feature subset selection.”</i> IEEE Transactions on Computers, 100(9), pp. 917–922 (1977).</td><td></td> </tr> <tr><td>branch and cut </td><td>M. Padberg and G. Rinaldi, <i>“A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems.”</i> SIAM Rev, 33(1), pp. 60–100 (1991).</td><td></td> </tr> <tr><td>branch and price </td><td>Cynthia Barnhart, Ellis L. Johnson, George L. Nemhauser, Martin W.P. Savelsbergh, and Pamela H. Vance, <i>“Branch-and-price: Column generation for solving huge integer programs.”</i> Operations Research, 46(3), pp. 316–329 (1998).</td><td></td> </tr> <tr><td>Broyden–Fletcher–Goldfarb–Shanno (BFGS) method </td><td style="background:lightgrey">J. Nocedal and Stephen J. Wright, <i>Numerical Optimization</i>, 2nd edition (Springer-Verlag, Berlin, New York, 2006).</td><td></td> </tr> <tr><td>Constrained optimization by linear approximation (COBYLA) </td><td>Thomas A. Feo and Mauricio G.C. Resende, <i>“A probabilistic heuristic for a computationally difficult set covering problem.”</i> Op Res Lett, 8(2), pp. 67–71 (1989).</td><td></td> </tr> <tr><td>conjugate gradient method </td><td>A.V. Knyazev and I. Lashuk, <i>“Steepest descent and conjugate gradient methods with variable preconditioning.”</i> SIAM J Matrix Anal Appl, 29(4), pp. 1267–1280 (2007).</td><td>linear system with a real symmetric positive definite matrix of coefficients $A$</td> </tr> <tr><td>CMA-ES (covariance matrix adaptation evolution strategy) </td><td>Y. Akimoto, Y. Nagata, I. Ono, and S. Kobayashi. 
<i>“Bidirectional relation between CMA evolution strategies and natural evolution strategies.”</i> Parallel Problem Solving from Nature, PPSN XI, pp. 154–163 (Springer, Berlin Heidelberg, 2010).</td><td></td> </tr> <tr><td>criss-cross algorithm </td><td>Dick den Hertog, C. Roos, and T. Terlaky, <i>“The linear complementarity problem, sufficient matrices, and the criss-cross method.”</i> Linear Algebra Appl, 187, pp. 1–14 (1993).</td><td></td> </tr> <tr><td>cross-entropy optimization </td><td>R.Y. Rubinstein, <i>“Optimization of computer simulation models with rare events.”</i> Eur J Ops Res, 99, pp. 89–112 (1997).<p style="background:lightgrey"> R.Y. Rubinstein and D.P. Kroese, <i>The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning</i> (Springer-Verlag, New York, 2004).</p></td><td> given a set $\mathcal{X}$ and an $\mathbb{R}$-valued function $S$ on $\mathcal{X}$, determine $\max_{\textbf{x} \in \mathcal{X}} S(\textbf{x})$ (<a href="https://people.smp.uq.edu.au/DirkKroese/ps/tutslides.pdf">here p.4</a>)</td> </tr> <tr><td>cuckoo search </td><td>X.S. Yang and S. Deb, <i>“Cuckoo search via Lévy flights.”</i> World Congress on Nature & Biologically Inspired Computing (NaBIC 2009). IEEE Publications, pp. 210–214. arXiv:1003.1594v1.</td><td></td> </tr> <tr><td>Davidon’s variable metric method </td><td>W. C. Davidon, “Variable metric method for minimization.” AEC Research Development Rept. ANL-5990 (Rev.) (1959).</td><td></td> </tr> <tr><td>differential evolution </td><td>P. Rocca, G. Oliveri, and A. Massa, <i>“Differential evolution as applied to electromagnetics.”</i> Antennas and Propagation Magazine, IEEE, 53(1), pp. 
38–49 (2011).</td><td></td> </tr> <tr><td>eagle strategy </td><td>Xin-She Yang and Suash Deb, <i>“Eagle strategy using Lévy walk and firefly algorithms for stochastic optimization.”</i> Nature Inspired Cooperative Strategies for Optimization (NICSO 2010) (Springer Berlin Heidelberg, 2010), pp. 101–111.</td><td></td> </tr> <tr><td>evolutionary programs </td><td>Jacek M. Zurada, R.J. Marks II and C.J. Robinson; Editors, <i>Computational Intelligence: Imitating Life</i> (IEEE Press, 1994).<br> M. Palaniswami, Y. Attikiouzel, Robert J. Marks II, D. Fogel, and T. Fukuda; Editors,<i> Computational Intelligence: A Dynamic System Perspective</i> (IEEE Press, 1995).</td><td></td> </tr> <tr><td>evolutionary strategies</td><td></td><td></td> </tr> <tr><td>exhaustive search</td><td></td><td></td> </tr> <tr><td>Fibonacci search</td><td>David E. Ferguson, <i>“Fibonaccian searching.”</i> Communications of the ACM, 3(12), p. 648 (1960).<br> J. Kiefer, <i>“Sequential minimax search for a maximum.”</i> Proceedings of the American Mathematical Society, 4(3), pp. 502–506 (1953).</td><td></td> </tr> <tr><td>firefly algorithm </td><td>Xin-She Yang, <i>“Firefly algorithms for multimodal optimization.”</i> In Stochastic Algorithms: Foundations and Applications (Springer Berlin Heidelberg, 2009), pp. 169–178.</td><td></td> </tr> <tr><td>Fletcher–Powell method </td><td>R. Fletcher and M.J.D. Powell, <i>“A rapidly convergent descent method for minimization.”</i> Computer J. (6), pp. 163–168 (1963).</td><td></td> </tr> <tr><td>genetic algorithms </td><td>David E. Goldberg, <i>Genetic Algorithms in Search Optimization and Machine Learning</i> (Addison Wesley, 1989).<br> R. Reed and R.J. Marks II, <i>“Genetic Algorithms and Neural Networks: An Introduction.”</i> Northcon/92 Conference Record (Western Periodicals Co., Ventura, CA, Seattle WA, October 19–21, 1992), pp. 293–301.</td><td></td> </tr> <tr><td>glowworm swarm optimization </td><td>K.N. Krishnanand and D. 
Ghose.<i> “Detection of multiple source locations using a glowworm metaphor with applications to collective robotics.”</i> Proceedings of the 2005 IEEE Swarm Intelligence Symposium (SIS 2005), pp. 84–91 (2005).</td><td></td> </tr> <tr><td>golden section search</td><td>A. Mordecai and Douglass J. Wilde. <i>“Optimality proof for the symmetric Fibonacci search technique.”</i> Fibonacci Quarterly, 4, pp. 265–269 (1966).</td><td></td> </tr> <tr><td>gradient descent </td><td style="background:lightgrey">Jan A. Snyman, <i>Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms</i> (Springer Publishing, 2005).</td><td>constrained optimization problem $\min_{\mathbf{x}}f(\mathbf{x})$, $\mathbf{x} = [x_1, x_2,\ldots, x_n]^T \in \mathbb{R}^n$ subject to $\begin{cases}g_j(\mathbf{x}) \le 0, & j=1,2, \ldots, m \\ h_j(\mathbf{x})=0,& j=1,2,\ldots,r\end{cases}$</td> </tr> <tr><td>great deluge algorithm </td><td>Gunter Dueck, <i>“New optimization heuristics: the great deluge algorithm and the record-to-record travel.”</i> J Comp Phys, 104(1), pp. 86–92 (1993).</td><td></td> </tr> <tr><td>harmony search </td><td>Zong Woo Geem, <i>“Novel derivative of harmony search algorithm for discrete design variables.”</i> Applied Mathematics and Computation, 199, (1), pp. 223–230 (2008).</td><td></td> </tr> <tr><td>imperialist competitive algorithm </td><td>Esmaeil Atashpaz-Gargari and Caro Lucas, “Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition.” 2007 IEEE Congress on Evolutionary Computation (CEC 2007), pp. 
4661–4667 (2007).</td><td></td> </tr> <tr><td>intelligent water drop optimization </td><td>Hamed Shah-Hosseini, <i>“The intelligent water drops algorithm: a nature-inspired swarm-based optimization algorithm.”</i> Int J Bio-Inspired Comp, 1(1/2), pp. 71–79 (2009).</td><td></td> </tr> <tr><td>Karmarkar’s algorithm </td><td>Narendra Karmarkar, <i>“A new polynomial-time algorithm for linear programming.”</i> Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, pp. 302–311 (1984).</td><td></td> </tr> <tr><td>Levenberg–Marquardt algorithm </td><td>Kenneth Levenberg, <i>“A Method for the Solution of Certain Non-Linear Problems in Least Squares.”</i> Quart App Math, 2, pp. 164–168 (1944).</td><td></td> </tr> <tr><td>Linear, Quadratic, Integer and Convex Programming </td><td style="background:lightgrey">Alexander Schrijver, <i>Theory of Linear and Integer Programming</i> (John Wiley & Sons, 1998).<br> Yurii Nesterov, Arkadii Nemirovskii, and Yinyu Ye, <i>“Interior-point polynomial algorithms in convex programming.”</i> Vol. 13. Philadelphia Society for Industrial and Applied Mathematics (1994).</td><td>given a subset $\Pi \subset \Sigma^* \times \Sigma^*$, where $\Sigma$ is some alphabet, then the search problem is: given string $z \in \Sigma^*$, find a string $y$ such that $(z,y) \in \Pi$, or decide that no such string $y$ exists.</td> </tr> <tr><td>Nelder–Mead method </td><td>K.I.M. McKinnon, <i>“Convergence of the Nelder–Mead simplex method to a non-stationary point.”</i> SIAM J Optimization, 9, pp. 148–158 (1999).</td><td></td> </tr> <tr><td>Newton–Raphson method </td><td style="background:lightgrey">E. Süli and D. Mayers, <i>An Introduction to Numerical Analysis</i> (Cambridge University Press, 2003).</td><td></td> </tr> <tr><td>one-at-a-time search </td><td>A.H. Boas,<i> “Modern mathematical tools for optimization,”</i> Chem Engrg (1962).</td><td></td> </tr> <tr><td>particle swarm optimization </td><td>J. Kennedy and R. 
Eberhart, <i>“Particle Swarm Optimization.”</i> Proceedings of IEEE International Conference on Neural Networks IV, pp. 1942–1948 (1995).</td><td></td> </tr> <tr><td>pattern search </td><td>A. W. Dickinson, <i>“Nonlinear optimization: Some procedures and examples.”</i> Proceedings of the 19th ACM National Conference (ACM, 1964), pp. 51–201.</td><td></td> </tr> <tr><td>POCS (alternating projections onto convex sets) </td><td>Robert J. Marks II, <i>Handbook of Fourier Analysis & its Applications</i> (Oxford University Press, 2009).</td><td></td> </tr> <tr><td>razor search </td><td>J.W. Bandler and P.A. Macdonald,<i> “Optimization of microwave networks by razor search.”</i> IEEE Trans. Microwave Theory Tech., 17(8), pp. 552–562 (1969).</td><td></td> </tr> <tr><td>Rosenbrock methods </td><td>H.H. Rosenbrock, <i>“An automatic method for finding the greatest or least value of a function.”</i> Comp. J., 3, pp. 175–184 (1960).</td><td></td> </tr> <tr><td>sequential unconstrained minimization technique (SUMT) </td><td>John W. Bandler, <i>“Optimization methods for computer-aided design.”</i> IEEE Transactions on Microwave Theory and Techniques, 17(8), pp. 533–552 (1969).</td><td></td> </tr> <tr><td>shuffled frog-leaping algorithm </td><td>Muzaffar Eusuff, Kevin Lansey, and Fayzul Pasha, <i>“Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization.”</i> Engineering Optimization, 38(2), pp. 129–154 (2006).</td><td></td> </tr> <tr><td>simplex methods </td><td>M.J. Box, <i>“A new method of constrained optimization and a comparison with other methods.”</i> Computer J., (8), pp. 42–52 (1965).<br> J.A. Nelder and R. Mead,<i> “A simplex method for function minimization.”</i> Computer J., 7, pp. 308–313 (1965).</td><td></td> </tr> <tr><td>simulated annealing </td><td>S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi, <i>“Optimization by simulated annealing.”</i> Science, 220(4598), pp. 
671–680 (1983).</td><td></td> </tr> <tr><td>social cognitive optimization </td><td>X.-F. Xie, W. Zhang, and Z. Yang, <i>“Social cognitive optimization for nonlinear programming problems.”</i> Proceedings of the First International Conference on Machine Learning and Cybernetics, 2, pp. 779–783 (Beijing, 2002).</td><td></td> </tr> <tr><td>stochastic gradient search </td><td style="background:lightgrey">James C. Spall, <i>Introduction to Stochastic Search and Optimization</i> (2003).</td><td>Find the value(s) of a vector $\mathbf{\theta} \in \Theta$ that minimize a scalar-valued loss function $L(\mathbf{\theta})$, or: Find the value(s) of $\mathbf{\theta} \in \Theta$ that solve the equation $\mathbf{g}(\mathbf{\theta}) = \mathbf{0}$ for some vector-valued function $\mathbf{g}(\mathbf{\theta})$</td> </tr> <tr><td>stochastic hill climbing </td><td>Brian P. Gerkey, Sebastian Thrun, and Geoff Gordon, <i>“Parallel stochastic hillclimbing with small teams.”</i> Multi-Robot Systems. From Swarms to Intelligent Automata, Volume III, pp. 65–77. (Springer Netherlands, 2005).</td><td></td> </tr> <tr><td>Tabu search </td><td>F. Glover, <i>“Tabu Search — Part I.”</i> ORSA J Comput, 1(3), pp. 190–206 (1989). <i>“Tabu Search — Part II”</i>, ORSA J Comput, 2(1), pp. 4–32 (1990).</td><td></td> </tr> <tr><td>Tree search </td><td>Athanasios K. Sakalidis, <i>“AVL-Trees for Localized Search.”</i> Inform Control, 67, pp. 173–194 (1985).<br> R. Seidel and C.R. Aragon, <i>“Randomized search trees.”</i> Algorithmica, 16(4–5), pp. 464–497 (1996).</td><td></td> </tr> <tr><td>Zionts–Wallenius method </td><td>S. Zionts and J. Wallenius, <i>“An interactive programming method for solving the multiple criteria problem.”</i> Manage Sci, 22(6), pp. 652–663 (1976).</td><td></td> </tr></tbody></table> The cited texts are often scientific papers: these expect their readers to be familiar with the general framework of their specific problems, and will not define the term "search problem" from scratch. 
Instead, they will just mention the specific problem which they tackle - like <i>stochastic combinatorial optimization</i>.<p> But there are some textbooks, too: I tried to look them up and to quote what their authors define as their <i>search problem</i>. Nocedal and Snyman both describe the classical optimization problem: here, the search space is a subset of an $n$-dimensional vector space $V$ over $\mathbb{R}$, as $V$ is restricted by some equations and inequalities. Finding the target means minimizing an $\mathbb{R}$-valued function on this set.<p> On the other hand, Wolpert and Macready - in their classic paper "<i>No Free Lunch Theorems for Optimization</i>"<sup><a href="#fn1" id="ref1">1</a></sup> - look at two finite sets, $\mathcal{X}$ and an ordered $\mathcal{Y}$, and wish to minimize a function $f: \mathcal{X} \rightarrow \mathcal{Y}$ by finding an $x \in \mathcal{X}$ such that $f(x)$ is minimal.<p> What do all these approaches have in common? A set to search and a function to be optimized. In most cases, the range of the function is $\mathbb{R}$, $\mathbb{Z}$, or $\mathbb{N}_0$, but for some problems (as mentioned in section 3.7.2. <i>Pareto optimization and optimal sub-optimality</i> of <i>Introduction to Evolutionary Informatics</i>), another partially ordered set will be used. I will ignore this for the time being and just choose an ordered set for my definition of an optimization problem, which should cover virtually all the cases discussed above: <blockquote style="background:wheat"><div style="text-align:center"><i><b>General Optimization Problem</b></i></div>given:<ul><li>a set $\Omega$</li><li>an ordered set $\mathcal{Y}$ and</li><li>a function $f: \Omega \rightarrow \mathcal{Y}$</li></ul>find $x \in \Omega$ such that $f(x) = \min f$</blockquote> As said before, optimizing and searching are virtually the same, but to stress the character of a search I introduce a target - something which is mentioned in all the searches of DEM. 
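To make the definition above concrete, here are a few lines of Python (the function names and the toy example are mine): under this definition the target is <i>determined</i> by $f$, so an exhaustive search cannot miss it.

```python
def exhaustive_search(omega, f):
    """Return the target T = {x in omega : f(x) = min f} by checking
    every element - in this framework the target is *defined* by f."""
    values = {x: f(x) for x in omega}
    best = min(values.values())
    return {x for x, v in values.items() if v == best}

# Toy search space and function: which integers make |x^2 - 2| minimal?
target = exhaustive_search(range(-10, 11), lambda x: abs(x * x - 2))
# target == {-1, 1}: note that a target may contain more than one element.
```

The sketch also illustrates why the target is written as a set $T$ rather than a single point: $f$ may attain its minimum at several elements of $\Omega$.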
So, my search problem is: <blockquote style="background:wheat"><div style="text-align:center"><i><b>General Search Problem</b></i></div>given:<ul><li>a set $\Omega$</li><li>an ordered set $\mathcal{Y}$</li><li>a target $T \subset \Omega$ and </li><li>a function $f: \Omega \rightarrow \mathcal{Y}$ such that $T = \{\tau \in \Omega | f(\tau)=\min f \}$</li></ul>find $x \in T$</blockquote>Nothing substantial has changed; the definition just became a little more verbose. I am quite sure that most authors of the papers in the table would accept this as a good attempt at a definition - but is it the search problem which DEM have in mind? <p>On page 48, they provide an example of a search credited to <a href="https://en.wikipedia.org/wiki/Walter_Bradley_(engineer)">Walter Bradley</a>: <blockquote><i>Kirk is an armadillo foraging for grubs when he is bitten by a spider that makes him blind. Kirk wants to return to his armadillo hole, but is disoriented. He knows, though, that his hole is at the lowest elevation in the immediate area, so he balls up and rolls downhill to his hole. When Kirk does this, he is not explicitly seeking his hole. His surroundings are fortuitously designed to take him there. Kirk’s target is thus implicit in the sense it is not specifically sought, but is a result of the environment’s action on him. He can bounce off of trees and be kicked around by playful kids. And repeated trials of rolling down the hill might take drastically different paths. But ultimately, Kirk will end up in his hole at the bottom of the hill. Kirk reaches his home because of information he acquires from his environment. The environment must be designed correctly for this to happen.</i></blockquote>Here, $\Omega$ is Kirk's habitat and $f$ is the elevation. What is surprising is that DEM make a distinction between the minimum of the function $f$ and Kirk's intended target $T$, his burrow. 
Luckily, both coincide, but DEM imply that this is not necessarily the case!<p> Next, they revisit their "pancake search example": here, the taste of the pancake as a function depends smoothly on a variety of factors like amount of ingredients, oven temperature, baking time, etc. - the possible combinations of which make up $\Omega$. On this $\Omega$, a cook looks for the best taste by optimizing the taste function. Now, they restrict $\Omega$ by additional conditions to $\Omega'$, such that the original extremum of $f$ does not lie in the new restricted set. <p> For the definitions of optimization/search problem above, this does not pose a problem: there is now the set $\Omega'$ to search on, looking for the optimum of $f|_{\Omega'}$. Though the new solution will taste worse than the original one, the new target is the solution of the new restricted problem.<p> Not so for DEM: "<i>If, however, the environment is constrained in a negative way, the target may never be found even if it was available prior to the alteration.</i>"<p> That is the great difference between the problems which all other scientists discuss and those of DEM: DEM have decoupled the optimum of the function and the target, arriving at quite another version of a search problem: <blockquote style="background:wheat"><div style="text-align:center"><i><b>DEM's Search Problem</b></i></div>given:<ul><li>a set $\Omega$</li><li>an ordered set $\mathcal{Y}$</li><li>a target $T \subset \Omega$ and </li><li>a function $f: \Omega \rightarrow \mathcal{Y}$</li></ul>find $x \in T$</blockquote> <h2>The Problems with DEM's Search Problem</h2>First, there is of course the problem of applicability: it is not clear how any of DEM's results is relevant to the problems in the table, as those concern fundamentally different problems.<p>Then there is a problem of procedure: for a search or optimization algorithm, generally some information about $\Omega$ is given and (a finite number of) values of $f$ can be 
obtained. If $T$ is independent of $f$, how is it ever possible to say that a target was hit? This additional information can only be given <i>ex cathedra</i> afterwards! <p>Not every one of the search algorithms stated in the table will always identify the target, but in many cases this is possible - at least in theory: where feasible, an exhaustive search will always give you the target. Not so for DEM: even if you have calculated $f$ for all elements of $\Omega$ and found the optimum, this does not have to be the intended target, which still has to be revealed. <h2>Why do DEM use their definition?</h2>I would like to answer this question using <a href="https://en.wikipedia.org/wiki/Weasel_program">Dawkins's weasel</a>. Then <ul><li>$\Omega$ is the set of strings of 28 characters chosen from the alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZ plus * as a sign indicating a space</li><li>$T=$<span style="font-family:monospace">METHINKS*IT*IS*LIKE*A*WEASEL</span></li><li>$f$ is given by the number of correct letters - a number from 0 to 28.</li></ul>Imagine someone has programmed an algorithm using $f$ which will find the target in 100% of all runs. The big question: How will it fare for the target string <span style="font-family:monospace">I*REALLY*DO*NOT*LIKE*WEASELS</span>? <ol><li>My answer would be <i>fantastic</i>: if <span style="font-family:monospace">I*REALLY*DO*NOT*LIKE*WEASELS</span> is the target, then it is the optimum of $f$, so the $f$ used for finding this phrase is the number of letters in common with <span style="font-family:monospace">I*REALLY*DO*NOT*LIKE*WEASELS</span>...</li> <li>DEM's answer would be <i>abysmal</i>: though the target is <span style="font-family:monospace">I*REALLY*DO*NOT*LIKE*WEASELS</span>, $f$ still is defined as the number of common letters with <span style="font-family:monospace">METHINKS*IT*IS*LIKE*A*WEASEL</span>. 
The algorithm would result in <span style="font-family:monospace">METHINKS*IT*IS*LIKE*A*WEASEL</span></li></ol>The advantage for DEM is stated on p. 173: "We note, however, the choice of an algorithm along with its parameters and initialization imposes a probability distribution over the search space." Indeed, it does in their case - and it will not work with my definition. This probability distribution may appear absolutely counterintuitive to any practitioner of optimization problems, but it is the basic building block for many of DEM's most important results. <h2>How does DEM's search problem work for evolution?</h2>Some interesting characters make a cameo in DEM's textbook: not only Bob and Monica, the pirates X, Y, and Z, Melody and Maverick, but also God and Abraham. In this spirit I would like to invent a dialogue between God and Darwin's Bulldog:<br><table><tr><td>Bulldog:</td><td>"The horse is a marvellous creature: fully adapted to its niche, really, survival of the fittest at play"</td></tr><tr><td>God: </td><td>"Oh no, that one is a total disaster - it may function better than any other creature in its environment, but I was aiming for pink unicorns"</td></tr></table>In short: I think that DEM's model does not work for the usual optimization and search problems in mathematics. It is even worse as a model applied to the real world. <h2>Perhaps these are all strawmen?</h2>It could be that I have erected an elaborate strawman, and that the search problem which I attributed to DEM has nothing to do with their ideas. In this case, it should be easy for DEM - or their apologists - to come forward with their definition. Or perhaps - if I am right - they may just wish to explain why their model is not horrible.</p> <hr /><sup id="fn1">1. Still thankful for the nice header, Tom!<a href="#ref1" title="Jump back to footnote 1 in the text.">↩</a></sup><br><sup id="fn2">2. Obviously, the work is not completed yet. 
I will look up more in the future - and I will be grateful for any contribution to this project!<a href="#ref2" title="Jump back to footnote 2 in the text.">↩</a></sup><br><sup id="fn3">3. David H. Wolpert, William G. Macready <i>"No Free Lunch Theorems for Optimization"</i>, IEEE Transactions on Evolutionary Computation Vol. 1, No. 1, April 1997, p. 68<a href="#ref3" title="Jump back to footnote 3 in the text.">↩</a></sup>DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com1tag:blogger.com,1999:blog-1689592451067041352.post-71000707533417045542018-01-18T05:13:00.002-08:002018-01-18T05:16:22.997-08:00Prof. Marks gets lucky at Cracker Barrel<div style="font-weight: 600;font-size: 70%;margin-bottom: 2.5em"><div><em><a href="http://www.worldscientific.com/worldscibooks/10.1142/9974#t=toc">Introduction to Evolutionary Informatics</a>,</em> by Robert J. Marks II, the “<a href="http://superscholar.org/features/20-most-influential-christian-scholars/">Charles Darwin of Intelligent Design</a>”; William A. Dembski, the “<a href="https://billdembski.com/inteldes.htm">Isaac Newton of Information Theory</a>”; and Winston Ewert, the “<a href="http://boundedtheoretics.blogspot.com/2011/06/revised-id-thesis-describes-plagiarism.html">Charles Ingram of Active Information</a>.” World Scientific, 332 pages. </div><div style="margin-top: 0.4em"><a href="https://lccn.loc.gov/2016014235">Classification</a>: Engineering mathematics. Engineering analysis. (<a href="https://www.loc.gov/aba/cataloging/classification/lcco/lcco_t.pdf">TA347</a>)<br><a href="https://lccn.loc.gov/2016014235">Subjects</a>: Evolutionary computation. 
Information technology–Mathematics.<sup><a href="#fn1" id="ref1">1</a></sup></div></div> <div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-LBzLOfk-DRA/WmCcN3pd02I/AAAAAAAAK_g/qWhzeduKwbQK4wljm2dfN8EsIezIpX0gwCLcBGAs/s1600/cb.1.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://2.bp.blogspot.com/-LBzLOfk-DRA/WmCcN3pd02I/AAAAAAAAK_g/qWhzeduKwbQK4wljm2dfN8EsIezIpX0gwCLcBGAs/s1600/cb.1.png" data-original-width="200" data-original-height="200" /></a></div>Yesterday, I was looking again through "Introduction to Evolutionary Informatics" when I spotted the <i>Cracker Barrel puzzle</i> in section 5.4.1.2 <i>Endogenous information of the Cracker Barrel puzzle</i> (p. 128). The rules of this variant of a triangular peg-solitaire are described in the text (or can be found in Wikipedia's article on the <a href="https://en.wikipedia.org/wiki/Peg_solitaire">subject</a>). The humble authors then describe a simulation of the game to calculate how probable it is to solve the puzzle using moves at random: <blockquote>A search typically requires initialization. For the Cracker Barrel puzzle, all of the 15 holes are filled with pegs and, at random, a single peg is removed. This starts the game. 
Using random initialization and random moves, simulation of four million games using a computer program resulted in an estimated win probability p = 0.0070 and an endogenous information of $$I_\Omega = -\log_2 p = 7.15\,bits.$$ Winning the puzzle using random moves with a randomly chosen initialization (the choice of the empty hole at the start of the game) is thus a bit more difficult than flipping a coin seven times and getting seven heads in a row</blockquote> Naturally, I created such a simulation in R for myself: I encoded all thirty-six moves that could occur in a matrix <code>cb.moves</code>, each row indicating the jumping peg, the peg which is jumped over, and the place on which the peg lands. And here is the little function which simulates a single random game: <blockquote><code>cb.simul <- function(pos){<br /> # pos: boolean vector of length 15 indicating the positions of the pegs<br /> # a move is allowed if there is a peg at the start position & on the field which is<br /> # jumped over, but not at the final position<br /> allowed.moves <- pos[cb.moves[,1]] & pos[cb.moves[,2]] & (!pos[cb.moves[,3]])<br /> # if no move is allowed, return the number of pegs left<br /> if(sum(allowed.moves)==0) return(sum(pos))<br /> # otherwise, choose an allowed move at random<br /> number.of.move <- ((1:36)[allowed.moves])[sample(1:sum(allowed.moves),1)]<br /> pos[cb.moves[number.of.move,]] <- c(FALSE,FALSE,TRUE)<br /> return(cb.simul(pos))<br />}</code></blockquote>I ran the simulation 4,000,000 times, changing the start position at random. But as a result, my estimated win probability was $p_e=0.0045$ - only two thirds of the number in the text. How can this be? Why were Prof. Marks et al. so much luckier than I? I re-ran the simulation, checked the code, washed, rinsed, repeated: no fundamental change. So, I decided to take a look at all possible games and at the probabilities with which they occur. 
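First, an aside: how large is the discrepancy statistically? Each simulated game is a Bernoulli trial, so over four million games the standard error of the estimated win probability is tiny - a quick back-of-the-envelope sketch:

```r
# Each simulated game is a Bernoulli trial; over n games the estimated win
# probability has standard error sqrt(p * (1 - p) / n).
n <- 4e6
p <- 0.0045                     # win probability from my runs
se <- sqrt(p * (1 - p) / n)     # approximately 0.0000335
gap.in.sigmas <- (0.0070 - 0.0045) / se
round(gap.in.sigmas)            # the two estimates are some 75 standard errors apart
```

So sampling noise cannot explain a gap of 0.0025 - one of the two numbers has to be systematically wrong.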
The result was this little routine:<blockquote><code>cb.eval <- function(pos, prob){<br /> # pos: boolean vector of length 15 indicating the positions of the pegs<br /> # prob: the probability with which this state occurs<br /> # cb.number and cb.prob are global vectors of length 15, initialized with zeros<br /> # a move is allowed if there is a peg at the start position & on the field which is<br /> # jumped over, but not at the final position<br /> allowed.moves <- pos[cb.moves[,1]] & pos[cb.moves[,2]] & (!pos[cb.moves[,3]])<br /> if(sum(allowed.moves)==0){<br />  # end of a game: prob now holds the probability that this game is played<br />  nr.of.pegs <- sum(pos) # number of remaining pegs<br />  cb.number[nr.of.pegs] <<- cb.number[nr.of.pegs]+1 # count the games ending with this number of pegs<br />  cb.prob[nr.of.pegs] <<- cb.prob[nr.of.pegs] + prob # add the probability of this game<br />  return()<br /> }<br /> # moves are still possible: for each move, the next stage is evaluated<br /> for(k in 1:sum(allowed.moves)){<br />  d <- pos<br />  number.of.move <- ((1:36)[allowed.moves])[k]<br />  d[cb.moves[number.of.move,]] <- c(FALSE,FALSE,TRUE)<br />  cb.eval(d, prob/sum(allowed.moves))<br /> }<br />}</code></blockquote>I now calculated the probabilities for solving the puzzle for each of the fifteen possible starting positions. The result was $$p_s=0.0045.$$ This fits my simulation, but not that of our esteemed and humble authors! What had happened? 
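By the way, both routines rely on the matrix <code>cb.moves</code> of the thirty-six legal jumps. It does not have to be typed in by hand: assuming - as a convention of my own - that the holes are numbered row-wise, 1 at the top and 11 to 15 in the bottom row, the matrix can be generated from the geometry of the board:

```r
# Generate the matrix cb.moves of the 36 legal jumps on the 15-hole board.
# Convention (an assumption of this sketch): hole (r,k) is the k-th hole in
# row r, numbered row-wise from 1 at the top to 15 at the bottom right.
hole.nr <- function(r, k) r * (r - 1) / 2 + k
on.board <- function(r, k) (r >= 1) && (r <= 5) && (k >= 1) && (k <= r)
# the six directions of a triangular grid: right, left, down-left,
# down-right, up-right, up-left
dirs <- rbind(c(0, 1), c(0, -1), c(1, 0), c(1, 1), c(-1, 0), c(-1, -1))
cb.moves <- NULL
for (r in 1:5) for (k in 1:r) for (d in 1:6) {
  r2 <- r + 2 * dirs[d, 1]
  k2 <- k + 2 * dirs[d, 2]
  if (on.board(r2, k2)) # jump from (r,k) over the neighbour to (r2,k2)
    cb.moves <- rbind(cb.moves, c(hole.nr(r, k),
                                  hole.nr(r + dirs[d, 1], k + dirs[d, 2]),
                                  hole.nr(r2, k2)))
}
stopifnot(nrow(cb.moves) == 36) # 18 lines of three holes, two directions each
```

With this numbering, for instance, the row <code>c(1, 2, 4)</code> is the jump of peg 1 over peg 2 into hole 4 along the left edge of the triangle.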
<h2>An educated guess</h2> <div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-a0_CZJHeqsY/WmCcWc6gPAI/AAAAAAAAK_o/C1mr6j7D8kwXWjuxPPNkZfrXAAQROKyfQCLcBGAs/s1600/cb.2.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://4.bp.blogspot.com/-a0_CZJHeqsY/WmCcWc6gPAI/AAAAAAAAK_o/C1mr6j7D8kwXWjuxPPNkZfrXAAQROKyfQCLcBGAs/s1600/cb.2.png" data-original-width="200" data-original-height="200" /></a></div>I found it odd that the authors ran 4,000,000 simulations - 1,000,000 or 10,000,000 seem to be more commonly used numbers. But when you look at the puzzle, you see that it was not necessary for me to look at all fifteen possible starting positions - whether the first peg is missing in position 1 or position 11 does not change the quality of the game: you could rotate the board and perform the same moves. Using symmetries, you find that there are only four essentially different starting positions: the black, red, and blue groups with three positions each, and the green group with six positions. For each group, you get a different probability of success: <table><tr><td>group</td><td style="color:black">black</td><td style="color:green">green</td><td style="color:red">red</td><td style="color:blue">blue</td></tr><tr><td>prob. of choosing this group</td><td>.2</td><td>.4</td><td>.2</td><td>.2</td></tr><tr><td>prob. of success</td><td>.00686</td><td>.00343</td><td>.00709</td><td>.001726</td></tr></table>One quite obvious explanation for the result of the authors is that they did not run one simulation with a random starting position 4,000,000 times, but simulated the game 1,000,000 times for each of the four groups. 
Unfortunately, they then either did not cumulate their results but took only those of the black or the <span style="color:red">red</span> group (or both), or they only thought they switched starting positions from one group of simulations to the next, but indeed always used the black or the <span style="color:red">red</span> one. <h2>Is it a big deal?</h2>It is easily corrected: instead of "For the Cracker Barrel puzzle, all of the 15 holes are filled with pegs and, at random, a single peg is removed." they could write "For the Cracker Barrel puzzle, all of the 15 holes are filled with pegs and one peg at the tip of the triangle is removed." If the book were actually used as a textbook, the simulation of the Cracker Barrel puzzle would be an obvious exercise. I doubt that it is used that way anywhere, so no pupils were annoyed. It is somewhat surprising that such an error occurs: it seems that the program was written by a single contributor and not checked. That seems to have been the case in previous publications, too. Perhaps the authors thought that the program was too simple to be worthy of their full attention - and the more complicated stuff is properly vetted. OTOH, it could be a pattern.... Well, it will certainly be changed in the next edition. 
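As a sanity check, weighting the success probabilities of the four symmetry classes by the chance of starting in each class recovers the overall win probability - a short sketch using the numbers from the table above:

```r
# Probability that the initially empty hole falls into each symmetry class
weights <- c(black = 3/15, green = 6/15, red = 3/15, blue = 3/15)
# Success probabilities per class, from the exact enumeration
p.class <- c(black = 0.00686, green = 0.00343, red = 0.00709, blue = 0.001726)
p.overall <- sum(weights * p.class)
round(p.overall, 4)                # 0.0045, matching the exact calculation
mean(p.class[c("black", "red")])   # 0.006975 - i.e. the reported p = 0.0070
```

The reported p = 0.0070 is what you get from the black and red classes alone, not from a properly weighted mixture of all four.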
DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-56116732394983820522017-07-17T16:22:00.001-07:002017-07-17T16:26:25.601-07:00 A letter to Winston Ewert<i>Winston Ewert, William Dembski, and Robert Marks have written a new book, <b>"Introduction to Evolutionary Informatics"</b>. Fair to say, I do not like it very much - so I wrote a letter to Winston Ewert, the most accessible of the "humble authors"...</i> <hr /> Dear Winston,<br /> congratulations on publishing your first book! 
It took me some time to get to read it (though I'm always interested in the output of the Evo Lab). Over the last couple of weeks I've discussed your oeuvre on various blogs. I assume that some of you are aware of the arguments at UncommonDescent and TheSkepticalZone, but as those are not peer-reviewed papers, the debates may have been ignored. Fair to say, I'm not a great fan of your new book. I'd like to highlight my problems by looking into two paragraphs which irked me during the first reading: In your section about "Loaded Die and Proportional Betting", you write on page 77:<blockquote><i>The performance of proportional betting is akin to that of a search algorithm. For proportional betting, you want to extract the maximum amount of money from the game in a single bet. In search, you wish to extract the maximum amount of information in a single query. The mathematics is identical</i></blockquote>This is at odds with the previous paragraphs: proportional betting doesn't optimize a single bet, but a sequence of bets - as you have clearly stated before. I'm well aware of Cover's and Thomas's "Elements of Information Theory", but I fail to see how their chapter on "Gambling and Data Compression" is applicable to your idea of a search. I tried to come up with an example, but if I have to search two equally sized subsets $\Omega_1$ and $\Omega_2$, and the target is to be found in $\Omega_1$ with a higher probability than in $\Omega_2$, proportional betting isn't the optimal way to go! Does proportional betting really extract the maximum amount of information in a single guess? <p />Then there is this following paragraph on page 173: <blockquote><i>One’s first inclination is to use an S4S search space populated by different search algorithms such as particle swarm, conjugate gradient descent or Levenberg-Marquardt search. Every search algorithm, in turn, has parameters. 
Search would not only need to be performed among the algorithms, but within the algorithms over a range of different parameters and initializations. Performing an S4S using this approach looks to be intractable. We note, however, the choice of an algorithm along with its parameters and initialization imposes a probability distribution over the search space. Searching among these probability distributions is tractable and is the model we will use. Our S4S search space is therefore populated by a large number of probability distributions imposed on the search space.</i></blockquote> How a search is identified with/represented by/translated into a probability distribution is central to your theory. It's quite disappointing that you are glossing over it in your new book! While you generally give quite an extensive bibliography, it is surprising that you do not cite any mechanism which translates an algorithm into a probability distribution. <p />Therefore I do not know whether you are thinking of the mechanism described in "Conservation of Information in Search: Measuring the Cost of Success": this one results in every exhaustive search finding its target. Or are you talking about the "representation" in "A General Theory of Information Cost Incurred by Successful Search": here, all exhaustive searches will on average do at best as well as a single guess (and yes, I think that this is counter-intuitive). As you are talking about $\Omega$ and not any augmented space, I suppose you have the latter in mind...<p /> But if two of your own "representations" result in such a difference between probabilities ($1$ versus $1/|\Omega|$), how can you be comfortable with making such a wide-reaching claim like "each search algorithm imposes a probability distribution over the search space" without further corroboration? Could you - for example - translate the damping parameters of the Levenberg-Marquardt search into such a probability distribution? 
I suppose that any attempt to do so would show a fundamental flaw in your model: the separation between the optimum of the function and the target....<p /> I'd appreciate it if you could address my concerns - at <a href="https://uncommondescent.com/informatics/who-thinks-introduction-to-evolutionary-informatics-should-be-on-your-summer-reading-list/">UD</a>, <a href="http://theskepticalzone.com/wp/introduction-to-evolutionary-informatics/">TSZ</a>, or my <a href="http://dieben.blogspot.com">blog</a>.<p />Thanks,<br />Yours Di$\dots$ Eb$\dots$<p />P.S.: I have to add that I find the bibliographies quite annoying: why can't you add the number of the page if you are citing a book? Sometimes the terms which are accompanied by a footnote cannot be found at all in the given source! It is hard to imagine what the "humble authors" were thinking when they sent their interested readers on such a futile search! DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-23663265266908735872016-02-02T02:53:00.000-08:002016-02-02T04:15:53.770-08:00Some Pies for "The Skeptical Zone"<table><tbody><tr><td><a href="http://2.bp.blogspot.com/-zzoTTT7hwfY/VrB8M9zlIbI/AAAAAAAAFfU/E1qngnyJR2I/s1600/TSZ-2015-21.png" imageanchor="1"><img border="0" src="http://2.bp.blogspot.com/-zzoTTT7hwfY/VrB8M9zlIbI/AAAAAAAAFfU/E1qngnyJR2I/s320/TSZ-2015-21.png" width="300"/></a></td><td><a href="http://2.bp.blogspot.com/-gyd9BBljoFg/VrB8YEUlIuI/AAAAAAAAFfc/sPH3HkBhTXg/s1600/TSZ-2015-22.png" imageanchor="1"><img border="0" src="http://2.bp.blogspot.com/-gyd9BBljoFg/VrB8YEUlIuI/AAAAAAAAFfc/sPH3HkBhTXg/s320/TSZ-2015-22.png" width="300"/></a></td></tr><tr><td>In 2015, some 45,000 comments were made at <a href="http://theskepticalzone.com/wp/">The Skeptical Zone</a>. Here are the top ten of the commentators (just a quantitative, not a qualitative judgement). I'll stick to the color scheme for all figures in this post... 
</td><td>"The Skeptical Zone" has a handy "reply to"-feature, which allows you to address a previous comment (with or without inline quotation). It is used to varying degrees - and though some don't use it at all, nearly 50% of all comments were replies.</td></tr></tbody></table><a name='more'></a><table><tbody><tr><td><a href="http://2.bp.blogspot.com/-ZbitwEjXS5I/VrB8vgp2d4I/AAAAAAAAFfk/cGlAFEyxNc0/s1600/TSZ-2015-23.png" imageanchor="1"><img border="0" src="http://2.bp.blogspot.com/-ZbitwEjXS5I/VrB8vgp2d4I/AAAAAAAAFfk/cGlAFEyxNc0/s320/TSZ-2015-23.png" width="300"/></a></td><td><a href="http://3.bp.blogspot.com/-dh5GPdgPcpg/VrB82e1JRqI/AAAAAAAAFfs/qalLy7TGBCg/s1600/TSZ-2015-24.png" imageanchor="1"><img border="0" src="http://3.bp.blogspot.com/-dh5GPdgPcpg/VrB82e1JRqI/AAAAAAAAFfs/qalLy7TGBCg/s320/TSZ-2015-24.png" width="300"/></a></td></tr><tr><td>While the previous figure showed who made replies, this one shows who received them. </td><td>Editors at "The Skeptical Zone" are also allowed to make postings and create new threads.</td></tr><tr><td><a href="http://2.bp.blogspot.com/-6h_wZSU0v4g/VrB87dVELuI/AAAAAAAAFf0/e0tmzdpT028/s1600/TSZ-2015-25.png" imageanchor="1"><img border="0" src="http://2.bp.blogspot.com/-6h_wZSU0v4g/VrB87dVELuI/AAAAAAAAFf0/e0tmzdpT028/s320/TSZ-2015-25.png" width="300"/></a></td><td><a href="http://3.bp.blogspot.com/--5YCuTDXKqU/VrB9AuQcn0I/AAAAAAAAFf8/kMzqXDZMSqo/s1600/TSZ-2015-26.png" imageanchor="1"><img border="0" src="http://3.bp.blogspot.com/--5YCuTDXKqU/VrB9AuQcn0I/AAAAAAAAFf8/kMzqXDZMSqo/s320/TSZ-2015-26.png" width="300"/></a></td></tr><tr><td>How popular are these threads? Here is the number of comments the editors gathered with their threads.</td><td>Quite another question: A comment can be a short remark, a well-thought-out argument, or just an orgy of copying-and-pasting. How much text did the commentators write? 
Here is the length of the plain texts given in the comments - again, just a quantitative, not a qualitative deliberation.</td></tr><tr><td></td><td></td></tr></tbody></table><table><tbody><tr><td><a href="http://4.bp.blogspot.com/--tQ1roFZbYo/VrCJOt-mK0I/AAAAAAAAFgM/yhY-hEDRNMU/s1600/TSZ-2015-13.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/--tQ1roFZbYo/VrCJOt-mK0I/AAAAAAAAFgM/yhY-hEDRNMU/s400/TSZ-2015-13.png" width="600"/></a></td></tr><tr><td>This figure gives an impression of how many comments were attracted over time by threads sorted by the editors who had created them.</td></tr></tbody></table><table><tbody><tr><td><a href="http://1.bp.blogspot.com/-Il6x7dgGsUc/VrCKTY6NrmI/AAAAAAAAFgU/MAFLGG0jb-4/s1600/TSZ-2015-08-50.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-Il6x7dgGsUc/VrCKTY6NrmI/AAAAAAAAFgU/MAFLGG0jb-4/s400/TSZ-2015-08-50.png" width="600"/></a></td></tr><tr><td>And here is the network of those who created - or received - at least 50 replies.</td></tr></tbody></table>DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com2tag:blogger.com,1999:blog-1689592451067041352.post-42547932844362647692016-01-27T04:55:00.002-08:002016-01-27T06:01:29.095-08:00"Uncommon Descent" and "The Skeptical Zone" in 2015Since 2005, <a href="http://www.uncommondescent.com/">Uncommon Descent</a> (UD) - founded by William Dembski - has been <i>the</i> place to discuss intelligent design. Unfortunately, the moderation policy has always been one-sided (and quite arbitrary at the same time!) Since 2011, the statement "You don't have to participate in UD" is no longer answered with gritted teeth only, but with a real alternative: Elizabeth Liddle's <a href="http://theskepticalzone.com/wp/">The Skeptical Zone</a> (TSZ). So, how were these two sites doing in 2015? 
<h2>Number of Comments 2005 - 2015</h2> <table><tbody><tr><th>year</th><th>2005</th><th>2006</th><th>2007</th><th>2008</th><th>2009</th><th>2010</th><th>2011</th><th>2012</th><th>2013</th><th>2014</th><th>2015 </th></tr><tr><th>UD </th><td> 8,400</td><td>23,000</td><td>22,400</td><td>23,100</td><td>41,100</td><td>24,800</td><td>41,400</td><td>28,400</td><td>42,500</td><td>53,700</td><td>53,100 </td></tr><tr><th>TSZ </th><td> - </td><td> - </td><td> - </td><td> - </td><td> - </td><td> - </td><td> 2,200</td><td>15,100</td><td>16,900</td><td>20,400</td><td>45,200 </td></tr></tbody></table>In 2015, there were still 17% more comments at UD than at TSZ. <div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-RQtLnJjKuZk/Vqh-mSBFMdI/AAAAAAAAFdA/0qsC8_VGvUg/s1600/UD-2015-02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-RQtLnJjKuZk/Vqh-mSBFMdI/AAAAAAAAFdA/0qsC8_VGvUg/s400/UD-2015-02.png" /></a></div><a name='more'></a>Though UD is still going strong, there is a slight downward trend (yellow line) in the daily number of comments. <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Spf-9A4gAog/Vqh_VKcy5AI/AAAAAAAAFdI/SoZuoOhxAuE/s1600/TSZ-2015-02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-Spf-9A4gAog/Vqh_VKcy5AI/AAAAAAAAFdI/SoZuoOhxAuE/s400/TSZ-2015-02.png" /></a></div>The upward trend at TSZ is much stronger, but it is fuelled by the very weak participation in the first couple of months of 2015. 
This can be seen when comparing the number of comments on a monthly basis, too: <div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-7ALoNLTbvs8/VqiAG5OCTdI/AAAAAAAAFdU/tcuoTN1o5Zg/s1600/UD-2015-04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-7ALoNLTbvs8/VqiAG5OCTdI/AAAAAAAAFdU/tcuoTN1o5Zg/s400/UD-2015-04.png" /></a></div> <div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-IE0xisG94Yc/VqiAQstMVqI/AAAAAAAAFdc/u2Y_dcTrogA/s1600/TSZ-2015-04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-IE0xisG94Yc/VqiAQstMVqI/AAAAAAAAFdc/u2Y_dcTrogA/s400/TSZ-2015-04.png" /></a></div> There are many ways in which both sites interact with each other: the editors on both blogs may react to the same event, raising the number of comments on both sites. Or an editor, disgruntled with one site, may take his energy to the other one. Overall, there is a slightly negative correlation (adj. R²=.256) between the numbers of comments per week: <div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-jEBX4NQZtk4/VqiO8CFWRcI/AAAAAAAAFds/KbO6RTIeFy0/s1600/UD-2015-06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-jEBX4NQZtk4/VqiO8CFWRcI/AAAAAAAAFds/KbO6RTIeFy0/s400/UD-2015-06.png" /></a></div> There is one big difference between both sites: the number of posts. On TSZ, there have been 265 threads with comments, while this number was 1,741 at UD (there were another 200 without any comments). 
Therefore, the number of comments per thread is smaller at UD than at TSZ: <div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8SGLaiLJ7AE/Vqi0IwJvFNI/AAAAAAAAFe8/tUydFEu-6SE/s1600/UD-2015-05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-8SGLaiLJ7AE/Vqi0IwJvFNI/AAAAAAAAFe8/tUydFEu-6SE/s400/UD-2015-05.png" /></a></div> At UD, most of the posts (16% or 271 out of 1,741) gathered between five and eight comments (or 1,700 - 3.2% - of the 53,100 total comments in 2015), while at TSZ, most of the threads (20% or 56 out of 265) had between 65 and 128 comments (or 5,000 - 11% - of the 45,200 comments). <div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ND827mAvEBk/Vqi0T82jBEI/AAAAAAAAFfE/yMHFCqBASXE/s1600/TSZ-2015-05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-ND827mAvEBk/Vqi0T82jBEI/AAAAAAAAFfE/yMHFCqBASXE/s400/TSZ-2015-05.png" /></a></div> This difference is shown in this stream of comments. With the notable exception of the thread <a href="http://www.uncommondescent.com/intelligent-design/mystery-at-the-heart-of-life/">Mystery at the Heart of Life</a>, even the busiest posts aren't active for longer than a month at UD: <div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-mG3tDbcNga8/VqiTm2Se07I/AAAAAAAAFek/0ov3XN-fwbc/s1600/UD-2015-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-mG3tDbcNga8/VqiTm2Se07I/AAAAAAAAFek/0ov3XN-fwbc/s400/UD-2015-03.png" /></a></div> In fact, on average an article at UD gets comments over a period of 5.3 days. This average is 23.7 days for TSZ. 
Certainly eternal threads like <a href="http://theskepticalzone.com/wp/wine-cellar/">Moderation Rules</a> and <a href="http://theskepticalzone.com/wp/noyau/">Noyau</a> play a role here, but other, mainly philosophical topics are discussed over great periods of time, too. <div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-nemqhtZkIQE/VqiTtiwF5HI/AAAAAAAAFes/7RYAgV08ET0/s1600/TSZ-2015-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-nemqhtZkIQE/VqiTtiwF5HI/AAAAAAAAFes/7RYAgV08ET0/s400/TSZ-2015-03.png" /></a></div> My personal favourites of 2015 unfortunately got very few comments: Winston Ewert's offer to <a href="http://www.uncommondescent.com/intelligent-design/ask-dr-ewert/">Ask Dr. Ewert</a> at UD, Tom English's excellent reply <a href="http://theskepticalzone.com/wp/a-question-for-winston-ewert/">A Question for Winston Ewert</a> at TSZ, and then <a href="http://www.uncommondescent.com/intelligent-design/dr-ewert-answers/">Dr. Ewert Answers</a>, again at UD - which were commented on fewer than eighty times <i>in total</i>. I had hoped for a discussion about the mathematical aspects of Intelligent Design (see my posts). Unfortunately, the design side didn't show any interest in anything but a token interaction. Another chance missed. <p><b>Note:</b> UD and TSZ both use <a href="https://wordpress.org/about/">WordPress</a>, so they should have numerous ways to get statistics for their sites. I could look only from the outside, crawling the threads and comments. Though I'm fairly sure that I got all the visible data, I cannot guarantee that I paint the real picture absolutely accurately. 
DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-60305824044870699292016-01-26T06:37:00.000-08:002016-01-26T06:37:20.910-08:00The "Discovery Institute" trembles before the mighty powers of DiEbLog!Just kidding. It isn't. But they published some of the pages the absence of which I had criticized in my previous <a href="http://dieben.blogspot.de/2016/01/omg-discovery-institute-is-comitting.html">post</a>: John G. West wrote an article on <a href="http://www.evolutionnews.org/2016/01/dennis_prager_w102539.html">Dennis Prager Was Right: Atheists Are More Open-Minded on ID than Some United Methodist Officials</a>, in which he included further pages from the poll which the Discovery Institute (DI) had commissioned on the subject of being snubbed by the United Methodist Church. <p>I assume that this little blog mainly flies under the RADAR of the DI, but they most probably follow astutely the very amusing <a href="https://sensuouscurmudgeon.wordpress.com/">Sensuous Curmudgeon</a>, where I raised the <a href="https://sensuouscurmudgeon.wordpress.com/2016/01/25/a-poll-on-discovery-institute-vs-methodists/#comment-98792">problem earlier</a>. <p> So, as I had guessed, there <i>was</i> a question Q9, regarding the religious beliefs of the participants of the study. Why did the DI need an extra day to put a spin on the answers to this question? Did they think it to be especially juicy, so that they were able to get yet another article from it? Or were they annoyed that one third of the participants of the poll identified themselves as agnostics or atheists?<p> Let's wait and see for Q8 - the question about the participants' degree of education. Perhaps some scientists named Steve were involved - that result could be unpleasant... 
DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-78145700799221295132016-01-26T01:11:00.001-08:002016-01-26T04:23:23.165-08:00OMG - The Discovery Institute is Committing Censorship!!!11!!1!<i><b>Does the <a href="http://www.discovery.org/">Discovery Institute</a> (DI) want to keep its much coveted <a href="http://www.evolutionnews.org/2016/01/for_darwin_day_102536.html">Censor of the Year Award</a> for itself this year?</b></i><p> If you are interested in this kind of thing, you will have noticed the tantrum <a href="http://www.discovery.org/p/18">John G. West</a> and his friends are collectively throwing over at <a href="http://www.evolutionnews.org/">Evolution News & Views</a> (EN&V) because they were somewhat rebuffed by the <a href="http://www.umc.org/">United Methodist Church (UMC)</a>. Here is some background as it presents itself to me (EN&V's viewpoint may differ): The UMC holds its <i>General Conference</i> once every four years. In May 2016, it will take place at the <i>Oregon Convention Center</i>. Sponsors and exhibitors may rent booths at the center to present themselves to the estimated 6,500 participants of the event. The DI was willing to pay the <a href="https://www.signup4.net/Upload/GENE17A/201619E/GC2016%20Exhibitor%20%20Sponsor%20PolicyProcedures.pdf">900 to 1,200 dollar fee to become an exhibitor</a>, but their application was turned down. There may have been various problems, but unfortunately for them, it did not seem to meet the fourth criterion for eligibility: <blockquote><b>Proven Business Record:</b> Purchasers must have a proven business record with their products/services/resources. 
Exhibits are not to provide a platform to survey or test ideas; rather, to provide products/services/resources which are credible and proven.</blockquote> It is fair to say that the DI has not recovered from this blow yet - over the last eight days, at least fourteen articles have been published on this matter at EN&V. One of the highlights was this <a href="http://www.evolutionnews.org/2016/01/new_poll_most_a102538.html">New Poll: Most Americans Turn Thumbs Down on United Methodist Ban on Intelligent Design</a>: The DI spent the money it had saved on the booth to have a survey performed by <a href="https://www.surveymonkey.com">SurveyMonkey</a>. It asked: <blockquote>The United Methodist Church recently banned a group from renting an information table at the Church’s upcoming general conference because the group supports intelligent design—the idea that nature is the product of purposeful design rather than an unguided process. Some have criticized the ban as contrary to the United Methodist Church’s stated commitment to encourage “open hearts, open minds, open doors.” Rate your level of agreement or disagreement with the following statements:</blockquote><blockquote>1. The United Methodist Church should not have banned an intelligent design group from renting an information table at its conference.</blockquote><blockquote>2. The United Methodist Church’s ban on the intelligent design group seems inconsistent with the Church’s stated commitment to encourage “open hearts, open minds, open doors.”</blockquote>What surprised me: though the question was obviously leading, still 30% didn't agree with the first statement and 22% didn't agree with the second one! 
Or, as the DI describes it: <blockquote>More than 70% of the 1,946 respondents to the nationwide survey agreed that “the United Methodist Church should not have banned an intelligent design group from renting an information table at its conference.” More than 78% of respondents agreed that “the United Methodist Church’s ban on the intelligent design group seems inconsistent with the Church’s stated commitment to encourage ‘open hearts, open minds, open doors.’”</blockquote>But here is the catch: Though EN&V announced that the "full report" can be downloaded from <a href="http://www.discovery.org/scripts/viewDB/filesDB-download.php?command=download&id=11931">here</a>, it is obvious from the pagination that <b> at least two pages are missing!</b> <p><span style="font-size:large;color:red;font-weight:bolder;background-color:yellow">Enter panic mode: OMG! The Discovery Institute is censoring its report! What are they covering up? Are they beating puppies? Like Darwin! They should get their own Censorship Award!!!!11!!1</span><p>The truth is a little bit less sinister: SurveyMonkey asks you about your age (Q11), your gender (Q12), your income (Q13), your party affiliation (Q10), and the region you live in (Q14). What is surprisingly missing are questions about your religious orientation and your education. These two characteristics are of obvious interest for a poll like this one - so, I am guessing that the questions Q8 and Q9 were about these matters. Maybe the results did not please the DI and were thus omitted from the final report. 
<p><span style="font-size:small">Edit: Instead of trying to claim that it was meant to be ironic, I just corrected an embarrassing spelling mistake in the headline...</span>DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com6tag:blogger.com,1999:blog-1689592451067041352.post-56935040994043601752015-05-31T03:01:00.000-07:002015-06-01T05:16:52.548-07:00Uncommon Descent in Numbers - 2nd edition<a href="http://dieben.blogspot.com/2012/04/uncommon-descent-in-numbers.html">Three years ago</a>, I put up some pictures showing the number of comments and threads at <a href="http://www.uncommondescent.com/">Uncommon Descent</a>. Now seems to be a good occasion to update some of this information. <h2>1. Google Trends</h2><a href="http://www.google.de/trends/explore#q=Uncommon%20Descent&cmpt=q&tz=">Look for yourself</a>: The phrase <i>Uncommon Descent</i> was most searched for in 2008. After that, everybody had bookmarked the site, so further googling became unnecessary. The same holds true for <a href="http://pandasthumb.org/">The Panda's Thumb</a> - <a href="http://www.google.de/trends/explore#q=Uncommon%20Descent%2C%20Pandas%20thumb&cmpt=q&tz=">both sites are equally popular...</a><h2>2. Threads per Month</h2><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-cQ-79GxANqw/VWrIywhutCI/AAAAAAAAEnk/_ktz1onhpCI/s1600/ud-monthly-threads.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em; width: 50%"><img border="0" width="475" src="http://4.bp.blogspot.com/-cQ-79GxANqw/VWrIywhutCI/AAAAAAAAEnk/_ktz1onhpCI/s1600/ud-monthly-threads.png" /></a></div>The number of new threads per month peaked in 2011, but is still on a high level - though it seems to be decreasing. What makes all the difference is "News" - a.k.a. Denyse O'Leary - adding her <i>news items</i>. 
While in 2011/2012 those were often left uncommented, since 2013 they have attracted the attention of her fellow editors (though I got the impression that some commentators use them for their <i>off-topic</i> remarks, while others just cannot let the copious factual inaccuracies stand uncommented.) <a name='more'></a><h2>3. Threads per Author and Year</h2><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-ECLzzw3_gpI/VWrKz3QPcMI/AAAAAAAAEnw/-CwEcD_1PUk/s1600/ud-threads-per-year.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" width="475" src="http://1.bp.blogspot.com/-ECLzzw3_gpI/VWrKz3QPcMI/AAAAAAAAEnw/-CwEcD_1PUk/s1600/ud-threads-per-year.png" /></a></div>Over the last four-and-a-half years, Denyse O'Leary contributed the majority of new threads (as "O'Leary" and "News"). Cornelius Hunter regularly uses Uncommon Descent to raise attention for his blog, while the president and chief enforcer Barry Arrington delights us more and more with his insights. <h2>4. Comments per Month</h2><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-kRPJz-5SOcY/VWrNrnonYcI/AAAAAAAAEn8/shg0UQlZhEY/s1600/ud-monthly-comments.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" width="475" src="http://4.bp.blogspot.com/-kRPJz-5SOcY/VWrNrnonYcI/AAAAAAAAEn8/shg0UQlZhEY/s1600/ud-monthly-comments.png" /></a></div>The public interest in Uncommon Descent may be decreasing, but the interest in debate isn't. 
It peaked in Nov 2014 with nearly 9,300 comments in a single month, discussing topics like <a href="http://www.uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/">An attempt at computing dFSCI for English language</a>, <a href="http://www.uncommondescent.com/atheism/heks-suggests-a-way-forward-on-the-ks-bomb-argument/">HeKS suggests a way forward on the KS “bomb” argument</a>, and <a href="http://www.uncommondescent.com/?s=Evolution+driven+by+laws%3F+Not+random+mutations%3F">Evolution driven by laws? Not random mutations?</a>. This spike was probably a result of the <i>general amnesty</i>, which allowed free contribution without throttling by the moderation queue (see next section). <h2>5. Commenters per Month</h2><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-CZB-XnNOx0g/VWrP16dM6QI/AAAAAAAAEoI/DF7RVK1epZ8/s1600/ud-monthly-editors.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" width="475" src="http://1.bp.blogspot.com/-CZB-XnNOx0g/VWrP16dM6QI/AAAAAAAAEoI/DF7RVK1epZ8/s1600/ud-monthly-editors.png" /></a></div>In Oct 2014, Barry Arrington announced a <a href="http://www.uncommondescent.com/intelligent-design/ud-announces-general-amnesty/">general amnesty</a> for all banned commenters, a step which perhaps didn't increase the number of commentators per month as much as hoped. Furthermore, the policy was quickly (and silently) revoked, and the banning returned to a "normal" level. <h2>6. Mathematics at Uncommon Descent</h2><a href="http://www.uncommondescent.com/">Uncommon Descent</a> was founded by William A. Dembski, the "Isaac Newton of Information Theory". Though it is the premier blog in favour of <i>intelligent design</i>, there isn't much mathematics happening over there. 
One practical reason for this is that there is not only no $\LaTeX$ extension: Uncommon Descent doesn't allow anything but ASCII in the comments - even an Ω will be replaced by a "?" when the comment appears - and basic html tags like <sup></sup> or <sub></sub> cannot be used, either. Besides, there aren't any mathematicians in the current list of authors - and even when William A. Dembski edited <i>Uncommon Descent</i> regularly, he seldom addressed questions of a mathematical nature. Hopefully, this will change with <a href="http://www.uncommondescent.com/intelligent-design/dr-ewert-answers/">Dr. Winston Ewert...</a><h2>7. Personal Note</h2>I started commenting at <i>Uncommon Descent</i> in 2008, and have contributed some 500 comments. For most of the time, I tried to contribute to the mathematical aspects of <i>intelligent design</i>. You have to be quite determined to do so: until Barry Arrington's general amnesty, my comments didn't appear directly, but had to be vetted by one of the moderators - a process which could take days! What it took to get even some indisputable facts recognized by the "other side" can be seen in this thread: <a href="http://www.uncommondescent.com/intelligent-design/evolutionary-informatics-lab-website-receives-facelift/">Evolutionary Informatics Lab website receives facelift</a>... Currently, I'm blocked: I had asked about the disappearance of the numerous comments of <i>Aurelio Smith</i>. I did so three times in a row, as I thought it was a technical glitch which made my question disappear - but it turned out to be design, or better: the will of the designer. 
Perhaps it is fitting that <a href="http://www.uncommondescent.com/intelligent-design/ask-dr-ewert/">this is my last conversation at</a> <i>Uncommon Descent</i>: <div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-yYlOUz6nB7c/VWraxmtDW9I/AAAAAAAAEoY/DMO8IB8boDE/s1600/Bildschirmfoto%2Bvom%2B2015-05-30%2B14%253A16%253A07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" width="475" src="http://2.bp.blogspot.com/-yYlOUz6nB7c/VWraxmtDW9I/AAAAAAAAEoY/DMO8IB8boDE/s1600/Bildschirmfoto%2Bvom%2B2015-05-30%2B14%253A16%253A07.png" /></a></div><h2>8. Shout-out to <i>kairosfocus</i></h2>The <i>crime</i> that got me blocked was asking about Aurelio Smith's comments. I know that your line of reasoning is <i>"He was blocked, therefore he must be guilty of a nefarious crime"</i>, but as so often, you are wrong. <p><b>Update:</b> I wrote an email to <a href="http://bankruptcylawyer4denver.com/">Barry Arrington</a>, linking to my blog and telling him that I'd like to interact with Winston Ewert on <i>Uncommon Descent</i>. Shortly after, Barry Arrington informed me that my email address had been removed from the block list.DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com6tag:blogger.com,1999:blog-1689592451067041352.post-63096796948243729882015-05-25T00:59:00.001-07:002015-05-31T06:49:55.154-07:00The Natural Probability on M(Ω)<p>Two weeks ago, Dr. Winston Ewert announced at <a href="http://www.uncommondescent.com/intelligent-design/ask-dr-ewert/">Uncommon Descent</a> a kind of <i>open mike</i>. He put up a page at <a href="https://www.google.com/moderator/#15/e=21afd2&t=21afd2.40">Google Moderator</a> and asked for questions. Unfortunately, not many took advantage of this offer, but I added three questions off the top of my head. 
The experience made me revisit the paper <a href="http://www.worldscientific.com/doi/abs/10.1142/9789814508728_0002">A General Theory of Information Cost Incurred by Successful Search</a>, and when I tried - as usual - to construct simple examples, I ran into further questions - so, here is another one:</p><p>In their paper, the authors W. Dembski, W. Ewert, and R. Marks (DEM) talk about something they call the <i>natural probability</i>:<blockquote>Processes that exhibit stochastic behavior arise from what may be called a <i>natural probability</i>. The natural probability characterizes the ordinary stochastic behavior of the process in question. Often the natural probability is the uniform probability. Thus, for a perfect cube with distinguishable sides composed of a rigid homogenous material (i.e., an ordinary die), the probability of any one of its six sides landing on a given toss is 1/6. Yet, for a loaded die, those probabilities will be skewed, with one side consuming the lion’s share of probability. For the loaded die, the natural probability is not uniform.</blockquote>This <i>natural probability</i> on the search space translates through their idea of lifting to the space of measures $\mathbf{M}(\Omega)$:<blockquote>As the natural probability on $\Omega$, $\mu$ is not confined simply to $\Omega$ but lifts to $\mathbf{M}(\Omega)$, so that its lifting, namely $\overline{\mu}$, becomes the natural probability on $\mathbf{M}(\Omega)$ (this parallels how the uniform probability $\mathbf{U}$, when it is the natural probability on $\Omega$, lifts to the uniform probability $\overline{\mathbf{U}}$ on $\mathbf{M}(\Omega)$, which then becomes the natural probability for this higher-order search space).</blockquote>As usual, I look at an easy example: a loaded coin which always shows heads. So $\Omega=\{H,T\}$ and $\mu=\delta_H$ is the natural measure on $\Omega$. What happens on $\mathbf{M}(\Omega)= \{h\cdot\delta_H + t\cdot\delta_T|0 \le h,t \le 1; h+t=1 \}$? 
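This can be probed numerically before doing any integrals. Here is a minimal Monte Carlo sketch of my own (an illustration, not code from the paper): I identify a measure $h\cdot\delta_H + (1-h)\cdot\delta_T$ with the point $h \in [0,1]$, draw $h$ uniformly to sample from $\overline{\mathbf{U}}$, and draw $h$ with density $2h$ - the candidate density for the lifted measure, derived below - to sample from $\overline{\delta_H}$:

```python
import random

random.seed(42)
N = 200_000

# Identify theta = h*delta_H + (1-h)*delta_T with the point h in [0,1].

# Lifting of the uniform probability U: h ~ Uniform(0,1).
h_u = [random.random() for _ in range(N)]

# Candidate lifting of delta_H: density 2h w.r.t. the uniform lifting.
# Inverse-CDF sampling: CDF(h) = h^2, so h = sqrt(u) with u ~ Uniform(0,1).
h_d = [random.random() ** 0.5 for _ in range(N)]

# Barycentres: the H-mass of the averaged measure is just the mean of h.
bary_U  = sum(h_u) / N   # expect 1/2 (DEM's identity: the average is U itself)
bary_dH = sum(h_d) / N   # expect 2/3 (not 1, i.e., not delta_H!)

print(f"H-mass of the barycentre of U-bar:       {bary_U:.3f}")
print(f"H-mass of the barycentre of delta_H-bar: {bary_dH:.3f}")
```

The $H$-mass comes out near $1/2$ for $\overline{\mathbf{U}}$, but only near $2/3$ - not $1$ - for $\overline{\delta_H}$, which is exactly the oddity discussed in the following.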
Luckily, $$(\mathbf{M}(\{H,T\}),\mathbf{U}) \cong ([0,1],\lambda).$$ Let's jump through the hoops:<ol><li>The Radon-Nikodym derivative of $\delta_H$ with respect to $\mathbf{U}$ is $f(H) = \frac{d\delta_H}{d\mathbf{U}}(H) = 2$, $f(T) = \frac{d\delta_H}{d\mathbf{U}}(T) = 0$</li><li>Let $\theta \in \mathbf{M}(\{H,T\})$, i.e., $\theta= h\delta_H + t\delta_T$. Then$$\overline{f}{(\theta)} = \int_{\Omega} f(x)d\theta(x)$$ $$=f(H)\cdot\theta(\{H\}) + f(T) \cdot\theta(\{T\})$$ $$=2 \cdot h$$</li></ol>Here, I have the density of my natural measure on $\mathbf{M}(\Omega)$ with regard to $\overline{\mathbf{U}}$, $$d\overline{\delta_H}(h\cdot\delta_H + t\cdot\delta_T) = 2 \cdot h \cdot d\overline{\mathbf{U}}(h\cdot\delta_H + t\cdot\delta_T).$$ But what is it good for? For the uniform probability, DEM showed the identity $$\mathbf{U}=\int_{\mathbf{M}(\Omega)}\theta d\overline{\mathbf{U}} .$$ Unfortunately, for $\int_{\mathbf{M}(\Omega)}\theta d\overline{\delta_H}$, I get nothing similar: $$\int_{\mathbf{M}(\Omega)}\theta d\overline{\delta_H} = \frac{2}{3}\delta_H + \frac{1}{3}\delta_T$$</p><p>So, again, what does this mean? Wouldn't the Dirac delta function be a more <i>natural</i> measure on $\mathbf{M}(\Omega)$?</p><p>I hope that Dr. Winston Ewert reacts to all of the questions before <i>Google Moderator</i> shuts down for good on June 30, 2015...</p>DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-1973537133363914592015-05-11T10:46:00.002-07:002015-05-31T06:50:28.871-07:00Five Years of "The Search for a Search"<p><span style="font-weight:700">The <a href="https://www.fujipress.jp/JACIII/index.html"><i>Journal of Advanced Computational Intelligence and Intelligent Informatics</i></a> published the paper <a href="http://evoinfo.org/publications/search-for-a-search/">The Search for a Search: Measuring the Information Cost of Higher Level Search</a> by William A. Dembski and Robert J. 
Marks II (DM) in its July 2010 edition. With the fifth anniversary of the publication coming up, it seems appropriate to revisit a pet peeve of mine... </span></p><div class="separator" style="clear: both; text-align: center;"><a href="http://commons.wikimedia.org/wiki/File:Shell_game_in_Berlin.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-A-I3PUsXKfo/VVXGRJcIVII/AAAAAAAAEYg/4l1v2c9NftQ/s320/800px-Shell_game_in_Berlin.jpg" /></a></div>(Shell game performed on Karl-Liebknecht-Straße in Berlin, photograph by <a href="http://commons.wikimedia.org/wiki/User:E.asphyx">E.asphyx</a>) <p>Imagine a shell game. You have observed the con artist for a while, and now you know:<ol><li>The pea ends up under each of the three shells (left, middle, and right) with the same probability, i.e., $$P(Pea=left)=P(Pea=middle)=P(Pea=right)=1/3$$</li><li>If the pea ends up under the left or the middle shell, you are able to track its way. So, in these cases, you will find the pea with probability 1: $$P(Finding\,Pea|Pea=left)=P(Finding\,Pea|Pea=middle)=1$$</li><li>However, if the pea ends up under the right shell, 999 times out of 1000 you make a mistake during your tracking and will be convinced that it is under the left or the middle shell - the probability of finding this pea is 1/1000: $$P(Finding\,Pea|Pea=right)=1/1000$$</li></ol></p><p>You are invited to play the game. Should you use your knowledge (method $M_1$), or should you choose a shell at random (method $M_2$)?<a name='more'></a> Let's calculate the average probability of finding the pea using your knowledge: $$AM_1= P(Pea=left) \cdot P(Finding\,Pea|Pea=left)$$ $$+ P(Pea=middle) \cdot P(Finding\,Pea|Pea=middle)$$ $$+ P(Pea=right) \cdot P(Finding\,Pea|Pea=right)$$ $$AM_1 = \frac{1}{3} \cdot 1 + \frac{1}{3} \cdot 1 + \frac{1}{3} \cdot \frac{1}{1000} = \frac{2001}{3000} \approx \frac{2}{3} $$</p><p>What is the average probability when choosing a shell at random? 
As the pea is in the left, middle or right position with the same probability, we get $$AM_2 = \frac{1}{3} $$</p><p>So, $M_1$ or $M_2$? The answer seems to be obvious: you should stick to the first method, as it wins twice as often. Not so fast, say Drs. Dembski and Marks. You should calculate the average of the active information - the <i>active entropy</i>: $$H_1 = \frac{1}{3} \cdot \log_2 \frac{1}{1/3} + \frac{1}{3} \cdot \log_2 \frac{1}{1/3} +\frac{1}{3} \cdot \log_2 \frac{1/1000}{1/3} $$ $$= \log_2(\sqrt[3]{1 \cdot 1 \cdot \frac{1}{1000}}) - \log_2(\sqrt[3]{\frac{1}{3}\cdot \frac{1}{3} \cdot \frac{1}{3}})$$ $$=-\log_2(10) + \log_2(3)\approx -1.737$$ And their conclusion (p. 477): as on average the calculation results in negative active information, <span style="font-weight:700">the search performance is rendered worse than random search.</span> This is obviously false.</p><p>What went wrong? To calculate the overall performance, it is appropriate to use the arithmetic mean, as done for $AM_1$ and $AM_2$. By calculating the <i>active entropy</i>, you are <i>de facto</i> comparing the geometric means of your method and of the random method. The geometric mean favors equidistribution, thereby preferring the inferior random method over your more successful method, which tends to fail in only one case (pea under the third shell). This is a common scenario: you often find algorithms which do well in most cases, but fail on some - often specially manufactured - instances.</p><p><span style="font-weight:700;background:lightgrey">Conclusion: </span><span style="background:lightgrey">The average of the active information is not a good method to describe the performance of a search. 
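All the numbers above are easy to reproduce. A short Python sketch of my own (exact fractions for the two averages, floats only for the logarithms):

```python
from fractions import Fraction
from math import log2

# P(finding the pea | pea under shell), in the order left, middle, right.
p_find = [Fraction(1), Fraction(1), Fraction(1, 1000)]
p_shell = Fraction(1, 3)   # pea placed under each shell with probability 1/3
p_blind = Fraction(1, 3)   # success probability of a blind random guess

# Arithmetic means: the actual chance of winning with each method.
AM1 = sum(p_shell * p for p in p_find)   # informed method: 2001/3000, about 2/3
AM2 = p_blind                            # random method:   1/3

# DM's averaged active information, the "active entropy".
H1 = sum(float(p_shell) * log2(float(p) / float(p_blind)) for p in p_find)

print(f"AM1 = {AM1} ~ {float(AM1):.3f}, AM2 = {AM2}, H1 = {H1:.3f}")
```

Method $M_1$ wins about twice as often as $M_2$, yet $H_1$ is negative - the geometric-mean effect described above, in a nutshell.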
If you think otherwise, maybe I can interest you in buying a bridge?</span></p>(edited to clarify the search process)DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com6tag:blogger.com,1999:blog-1689592451067041352.post-22336636268104051452014-09-28T13:06:00.000-07:002014-09-28T13:12:43.968-07:00Conservation of Information in Evolutionary Search - Talk by William Dembski - part 5<i>For an introduction to this post, take a look <a href="http://dieben.blogspot.de/2014/09/william-dembskis-talk-at-university-of.html">here</a>. As I ended <a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in_28.html">part 4</a> quite abruptly, this section starts in the middle of things....</i> <h3>Part 5: 45' 00" - 52' 50"</h3> <h4>Topics: What is Conservation of Information? Example continued.</h4> <p style="background-color:lightgrey"><b>William Dembski:</b> These tickets have probability 1/2, 1/2, 1/2, 1/2, and this one ticket has probability 1. If I happen to get this ticket, I have probability 1/2 of choosing curtain 1, but it is also probability 1/9 of getting that ticket. When you run the numbers, at the end of the day, by using these tickets, I'm not better off than I was originally. It is still only a probability of 1/3 of finding curtain 1, of finding the prize there. Once one factors in how I limited myself to these tickets in the first place: going from this whole space to this, that is information intensive. I have ruled out certain possibilities, that incurs an information cost. As I said, the cost is 5/9. It is really just an accounting thing. That is what conservation of information is. Once you factor in the information that it takes to get the search, get a search which has improved the probability for finding your original target, we haven't gained anything. It is called Conservation of Information, as the problem can even get worse. 
In this case, we have broken even, we are back to 1/3 for the probability of getting the prize, but let's say, you really want to improve the probability, you want to guarantee that you get that prize with these tickets. Well, then you have got only one ticket that will work for you.</p><a href="http://3.bp.blogspot.com/-wau_sZh7xDo/VChTriQgMNI/AAAAAAAACkg/6MhnCN9AEHU/s1600/vlcsnap-2014-09-28-20h29m59s60.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-wau_sZh7xDo/VChTriQgMNI/AAAAAAAACkg/6MhnCN9AEHU/s320/vlcsnap-2014-09-28-20h29m59s60.png" /></a><a name='more'></a><p style="background-color:lightgrey"><b>William Dembski:</b> This one. If you get this ticket, you are guaranteed to say "Curtain 1", and you get the prize behind it, but this is one of nine possible tickets. Once you have factored that in, your probability of doing a search for this ticket, and then, with this ticket, finding the prize, ends up being 1/9, so you are actually going down. The reason that it is called conservation of information is that conservation is the best you can do, that you can break even. Oftentimes, with these search-for-a-search spaces, they grow exponentially, and your probability of finding the target by going to the search-for-a-search ends up being worse than doing a blind search on the original space. Let me give you one last example, and then we can open this up for some questions. That example: </p><a href="http://1.bp.blogspot.com/-pFp8ZgRmZJI/VChWQPJh-PI/AAAAAAAACks/D6-7B0QIZ7M/s1600/vlcsnap-2014-09-28-20h39m39s221.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-pFp8ZgRmZJI/VChWQPJh-PI/AAAAAAAACks/D6-7B0QIZ7M/s320/vlcsnap-2014-09-28-20h39m39s221.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Find some buried treasure. You have this huge island which is very, very big, so that exhaustive search is impossible. The query limit is very small, there are only a few places that you can check. 
How do you find the treasure which is hidden inside? You go to a map room. </p><a href="http://2.bp.blogspot.com/-M2JHLyEStJ4/VChXp3zTQAI/AAAAAAAACk4/8LDJS7QziDk/s1600/vlcsnap-2014-09-28-20h45m36s208.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-M2JHLyEStJ4/VChXp3zTQAI/AAAAAAAACk4/8LDJS7QziDk/s320/vlcsnap-2014-09-28-20h45m36s208.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> This map room is actually a bar in Cleveland, but let's imagine that it is a room with maps. What you are going to do is to find a map that has got an X marking where that treasure is. You have displaced the problem of finding the treasure to finding the map in the map room which will take you to the treasure. But how do you know that the map is the right map?</p><a href="http://1.bp.blogspot.com/-NmPaAXvZphY/VChYu2NxUbI/AAAAAAAAClA/IEJ9T6bqJgI/s1600/vlcsnap-2014-09-28-20h45m36s208.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-NmPaAXvZphY/VChYu2NxUbI/AAAAAAAAClA/IEJ9T6bqJgI/s320/vlcsnap-2014-09-28-20h45m36s208.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> What if there are a lot of maps? And you are Rand McNally. For every place with an X mark, there will be another map with another X marked. The problem of finding the treasure on the island now becomes displaced to finding the search-for-the-search, finding the right map. And the problem is, when you try to represent this mathematically, the search-for-the-search is much less tractable than the original search problem, because - I skipped over a few slides, but these are actually theorems which we have proven on conservation of information - you represent the search-for-a-search, and you find that the information problem has actually intensified. 
</p><a href="http://4.bp.blogspot.com/-ML-g1fghe2s/VChafVEawdI/AAAAAAAAClM/3IeQ8l2Q164/s1600/vlcsnap-2014-09-28-20h59m03s87.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-ML-g1fghe2s/VChafVEawdI/AAAAAAAAClM/3IeQ8l2Q164/s320/vlcsnap-2014-09-28-20h59m03s87.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> With the search-for-a-search, searches are as real as the things being searched. I think that is what the Darwinists like Richard Dawkins fail to recognize. By handing us a Darwinian search, when it works, it works because it has been carefully crafted, fine-tuned to work. That is what he bets on. </p><a href="http://3.bp.blogspot.com/-Wtqkm-lN-KA/VChbqq3M_fI/AAAAAAAAClU/9bUNOvIjwQQ/s1600/vlcsnap-2014-09-28-21h03m41s192.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-Wtqkm-lN-KA/VChbqq3M_fI/AAAAAAAAClU/9bUNOvIjwQQ/s320/vlcsnap-2014-09-28-21h03m41s192.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Let me just finally speak to the question that came up about targets. I had a correspondence with Dawkins. This goes back fourteen years, we have been playing with these ideas for a long time. I was challenging him on his METHINKS IT IS LIKE A WEASEL example, and he wrote: "In real life of course, the criterion for optimisation is not an arbitrarily chosen distant target but SURVIVAL. It's as simple as that. This is non-arbitrary." What is survival? In which context does survival happen? Let us say biology does have targets. </p><a href="http://1.bp.blogspot.com/-J-5A1yJjD3k/VChdG_MfiSI/AAAAAAAAClg/_koYLH2ChYw/s1600/vlcsnap-2014-09-28-21h09m56s120.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-J-5A1yJjD3k/VChdG_MfiSI/AAAAAAAAClg/_koYLH2ChYw/s320/vlcsnap-2014-09-28-21h09m56s120.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Actually, it is not that simple. The targets that biology presents us with are teleological systems/agents. 
If you will, the teleology of evolutionary search is to produce teleology. (James Shapiro might refer to these as systems that do their own "natural genetic engineering.") I'd say that even Dawkins makes a tacit admission of targets in biological evolution. </p><a href="http://3.bp.blogspot.com/-2TY5Izq6BbM/VCheGFoX7cI/AAAAAAAAClo/O3VTwa_cjuE/s1600/vlcsnap-2014-09-28-21h14m22s4.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-2TY5Izq6BbM/VCheGFoX7cI/AAAAAAAAClo/O3VTwa_cjuE/s320/vlcsnap-2014-09-28-21h14m22s4.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> This is also from his "Blind Watchmaker": "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is ... the ability to propagate genes in reproduction." That is the [???], but it is still specified in advance, that is the teleology he even admits to. Let me give you one other statement of the conservation of information:</p><a href="http://4.bp.blogspot.com/-_yp1J6rhAtQ/VChfU-Vw1WI/AAAAAAAAClw/Js7wMx-cCLo/s1600/vlcsnap-2014-09-28-21h19m19s202.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-_yp1J6rhAtQ/VChfU-Vw1WI/AAAAAAAAClw/Js7wMx-cCLo/s320/vlcsnap-2014-09-28-21h19m19s202.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> To increase the probability of success of a search from p to q requires a search for a search, where the higher level search incurs an information cost of at least p/q. This means that the probability of finding a search with probability of success q is no more than p/q, which in turn means that the probability of finding the original target by first finding the successful search and then applying that search is less than or equal to p.<br /> The search-for-a-search requires that there is an information cost [???] This implies a regress. 
I can do a search for the search for the search and so on. At every point you have not [???] the probabilities. When you work everything out, when you do all the commutative operations [?], I have this search-for-a-search, I get this search, and with this search I get a certain probability to find the target. When you do that, and you can regress back as far as you want, the probabilities never get any better. If anything, the information cost either stays constant or becomes worse. Which then raises a question: if evolutionary processes, evolutionary search, is not able to create new information but only redistributes already existing information - that is what the conservation of information shows - what then is the ultimate source of that information? I just leave it with that. So, thank you, and we have got a few minutes for questions. </p> Here endeth the lesson. The following Q&A section was harder to understand, but I'll try my best - and I will think of some questions which should have been asked. But back to the example. Obviously, <br>$\frac{1}{9}=P("Choosing\,Curtain\,1"|"using\,the\,first\,ticket")\cdot P("using\,the\,first\,ticket")$ $\le P("Choosing\,Curtain\,1"|"using\,the\,first\,ticket")\cdot P("using\,the\,first\,ticket")$ + $P("Choosing\,Curtain\,1"|"using\,another\,ticket")\cdot P("using\,another\,ticket")$ = $P("Choosing\,Curtain\,1")=\frac{1}{3}$ <br>Again, just a thought: imagine nine guys, each one holding one of the tickets. Will those who win the prize without holding ticket 1 be any less winners? Repeat the game a couple of times, and each time someone wins with his ticket, another contestant with the same ticket enters the game. 
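That thought experiment is easy to run in expectation. A little sketch of my own (assumptions: Dembski's nine tickets - one always naming curtain 1, four naming it with probability 1/2, four never naming it - and one new holder per win):

```python
GENERATIONS = 10

def expected_holders(p_win, rounds=GENERATIONS, start=1):
    """Expected number of holders of a ticket after `rounds` rounds:
    each holder spawns one extra copy of his ticket with probability p_win,
    so the count grows by a factor (1 + p_win) per round on average."""
    return start * (1 + p_win) ** rounds

sure   = expected_holders(1.0)       # the always-winning ticket: 2**10 = 1024
halves = 4 * expected_holders(0.5)   # four half-chance tickets:  4 * 1.5**10
losers = 4 * expected_holders(0.0)   # four hopeless tickets: unchanged

print(f"ticket 1: {sure:.0f}, other curtain-1 tickets: {halves:.1f}, losers: {losers:.0f}")
```

So selection simply multiplies whatever works: the winning tickets crowd out the losers without anybody ever visiting a map room.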
After ten generations, you have more than 1000 holding ticket 1, more than 150 having another ticket mentioning curtain 1, and only 4 guys still having a ticket which always loses....<br> DiEb, 2014-09-28: Conservation of Information in Evolutionary Search - Talk by William Dembski - part 4<i>For an introduction to this post, take a look <a href="http://dieben.blogspot.de/2014/09/william-dembskis-talk-at-university-of.html">here</a>. </i> <h3>Part 4: 31' 25" - 45' 00"</h3> (<i> I had to pause at 45', there is such an elementary mistake in Dembski's math, it was just too funny...</i>) <h4>Topics: What is Conservation of Information?</h4> <p style="background-color:lightgrey"><b>William Dembski:</b> Now let us get to the heart of things, "Conservation of Information". What is that conservation? Let me put it on the next slide. </p><a href="http://1.bp.blogspot.com/-IxBmm2RavZo/VCeuiOC1vHI/AAAAAAAACis/6KsnRNIsspw/s1600/vlcsnap-2014-09-28-08h43m35s202.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-IxBmm2RavZo/VCeuiOC1vHI/AAAAAAAACis/6KsnRNIsspw/s320/vlcsnap-2014-09-28-08h43m35s202.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> This is probably the most gem-packed slide in this talk. I want to make a distinction between - what I call - probable and improbable events, and probable and improbable searches. An improbable event is just something that is high in improbability: flip a coin a thousand times, get a thousand heads in a row. Highly improbable. It happens: if you believe in a multi-universe, then there is a universe where this is happening, where someone like me is speaking, my doppelgänger flips a coin over the next hour and sees 1000 heads in a row.
Probable and improbable search - that is about the probability that a search is successful. It is not so much asking whether it actually succeeds, it is not concerned with the result. It is concerned with the probability distribution associated with the search. This is an important distinction because so many intelligent design arguments look for a discontinuity in the evolutionary process. We look for highly improbable events. Such as the intelligent design people: you get, for instance, Thomas Nagel's "Mind and Cosmos". He is basically looking at probabilistic miracles. Think how the origin of life undercuts a materialistic understanding of biology. So he is looking into improbable events. That is what we do when we try to find evidence for a discontinuity. What I'm doing in this talk is saying, look, I'm going to give you evolution, give you common ancestry, all of that. That is no problem. What I'm interested in, though, is the probability of success for a search.</p><p style="background-color:lightblue"><b>member of the audience:</b> What are we searching for?</p><p style="background-color:lightgrey"><b>William Dembski:</b> It is whatever the target happens to be. </p><a name='more'></a><p style="background-color:lightblue"><b>member of the audience:</b> [???] Can you give an example? [???]</p><p style="background-color:lightgrey"><b>William Dembski:</b> I think that is what I would challenge you on. Actually, you are jumping ahead. I will address this a little bit later. Someone like Richard Dawkins will say that the problem with this METHINKS IT IS LIKE A WEASEL example is that it introduces a target, while real biology does not give us targets - and then he takes that back. I will give you a quote from that later. But I would say that the target in biology is teleology. Biological systems are teleological systems, teleological agents, that is what they produce, that is what needs to be explained.
If you want to put it in terms of philosophy: there is a natural kind that becomes the target, and that is teleological agents. In fact, one of my good friends and colleagues also is here, James Barham [?], if you want to talk with him, that would be good. Give me a moment, because I want to speak to that, it will really come up. <br />In the computational context, it is never a problem, you are trying to solve something. Even the people who are writing these AVIDA and ev programs: for instance, in AVIDA, if you saw the article in "Nature" back in 2003 where they were arguing that this program was evolving irreducibly complex systems, they were specifically trying to get Boolean operators of a certain complexity. That was what they were rewarding. That was their target. What I describe to you now is Conservation of Information in a theoretical [???]. What we then do is we go and we look at these actually evolving systems - usually <i>in silico</i> - and then show where the information was put in. We have a theory, and then we show how the theory applies to these specific cases. Give me a moment - I know what you are asking. This is commonly how evolution is billed, that there is supposed to be absent teleology. In fact, what I think that they do is, they are slipping it in. <br />Improbable search. Think of it this way: You have got a disease and two procedures you can take to get well. One has a higher probability, but maybe is more expensive. Which procedure do you want to use? You want to use the high-probability one. The actual outcomes may vary: someone who takes the low-probability procedure, it may be successful, he may get lucky. And the high-probability one, he may be unlucky. But the concern is: how likely is the search to find the target. That is what we are interested in in science. Getting lucky is not a good scientific explanation. If you are doing a needle-in-the-haystack problem, try to find that needle, what are you going to do?
You try to find a better search which does not make it a needle-in-the-haystack, that provides you with a high probability. That is what Dawkins does. <br />In METHINKS IT IS LIKE A WEASEL, he does not solve it with randomly shaking out scrabble pieces - that would be $27^{28}$, that would be $10^{40}$, that would be your waiting time on average to get to that target sequence. That becomes your waiting time, waiting time and probability are interchangeable. That would be your average waiting time to get there. Because he substitutes for blind search his Darwinian search, he gets there much faster. But the question then is: what justifies him substituting that search?... The sense that I'm getting of my presentation is that time is running out, and I think, this is a good place to come in with this. <br />But what Dawkins does is essentially, he says: Look, there is this blind search that is hopeless, it is needle-in-the-haystack, a highly improbable search. What I'm going to do - and that is why Darwin is so great - I give you a high probability search that is going to get you there. Then he says, see, Darwin has solved all our problems. Now I think we have somebody on faculty here who has a blog "Why Evolution is True". I'd say it should probably be renamed "How Evolution is True", because the question "why evolution is true", why does this work so well, what did Dawkins do to give us this search, this Darwinian search which is supposed to work, why does it work? Because he infused it with information. That is why it works. That is where I'm going with it... <br />So, the distinction between probable and improbable search. We can think then of a p-search as a search that has probability p of finding the target. Next, consider that a search can itself be an object of search. What did Dawkins do with METHINKS IT IS LIKE A WEASEL? He did a search for the search. He gave us a search which then with high probability found the target sequence he is after.
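The weasel search Dembski keeps returning to is easy to reproduce. A minimal sketch in Python - the parameters (50 offspring per generation, a 4% per-character mutation rate) are my own choices, not Dawkins' published ones, so the exact generation counts are only illustrative:

```python
import random
import string

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "  # 27 characters

def fitness(s):
    # Hamming-style score: number of positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.04):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# random initialization over the 27**28-element search space
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while parent != TARGET:  # stop criterion: target sequence hit
    generations += 1
    # update rule: generate 50 offspring, keep the fittest
    parent = max((mutate(parent) for _ in range(50)), key=fitness)

print(generations)  # converges in a matter of hundreds of generations
```

Because an unmutated copy of the parent almost always appears among the 50 offspring, the best score effectively never decreases, and the run ends after a few hundred generations - against an average waiting time on the order of $27^{28} \approx 10^{40}$ draws for blind sampling.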
<br/>This is something people in optimization do, one name for it is "hyper heuristics". You are looking at heuristics, searches, and then it is about how you choose among your heuristics. Or, if you are choosing among heuristics, you are doing a search for a search. We abbreviate that as S4S. Conservation of Information - usually abbreviated as CoI - this is probably the purpose of this talk, it is as clear a statement as you can get: If you have $p < q$ and you want to improve a p-search to a q-search - the p in Dawkins' weasel is about $1:10^{40}$ - now you are going to improve this to, well, if you allow yourself 50 or 60 queries, then q is close to 1, that improvement requires a $p/q$-search for a q-search. What you have done is: the search for the search has become difficult. If p is very small, and q is large, then $p/q$ becomes pretty small. The search for a search becomes difficult, the search for a good search becomes difficult. If you think of Dawkins' weasel, the unimodal distribution is one of many other unimodal distributions. </p><a href="http://4.bp.blogspot.com/-alkGc6nWXfQ/VCf6dOLuJ4I/AAAAAAAACi8/3mb6JqENTFM/s1600/vlcsnap-2014-09-28-14h06m06s52.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-alkGc6nWXfQ/VCf6dOLuJ4I/AAAAAAAACi8/3mb6JqENTFM/s320/vlcsnap-2014-09-28-14h06m06s52.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Let me give you an example. You have got an Easter Egg hunt. Standard Easter Egg hunt. An Easter Egg that is well hidden, but it is hidden in a huge field. Blind search is highly unlikely to find that Easter Egg. What you are going to want is a directed search, a search which is [assisted?]. Blind search would be a lot of sampling, you may try to do an exhaustive search but you are not able to exhaust things, because your query limit does not allow you to exhaust the search space.
</p><a href="http://1.bp.blogspot.com/-KbAzw93PIY8/VCf8MjKulUI/AAAAAAAACjI/Qu0HQF-O1Dw/s1600/vlcsnap-2014-09-28-14h16m16s85.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-KbAzw93PIY8/VCf8MjKulUI/AAAAAAAACjI/Qu0HQF-O1Dw/s320/vlcsnap-2014-09-28-14h16m16s85.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> So you are going to do a directed search. What does a directed search look like? You are walking along the field, and somebody is telling you "warm", "warmer", "cold", "warmer", "warmer", "hot", "you are burning up" - and there it is. That sort of direction - "warm", "warmer", "hot" - what is that? That is information. It is information that is going into the search. Here is the question: Where is this information source? Does the information source know where it is? Is it a search for the information search? Perhaps not a search for the information search. The information source knows the answer, but the process - in this case me meandering about - is getting information. I am doing a search. Let me give you another angle on conservation of information, because I have described information as something that increases the probability [???]. Usually, you are doing a negative-logarithmic transformation and then you turn information into something that becomes additive and looks more like money, which is convenient. But let us think of it probabilistically. We do pay to increase probabilities all the time.
</p><a href="http://1.bp.blogspot.com/-kOHmkRUMLRs/VCgBsC-BBcI/AAAAAAAACjY/9eIhqd_XFpI/s1600/vlcsnap-2014-09-28-14h40m03s161.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-kOHmkRUMLRs/VCgBsC-BBcI/AAAAAAAACjY/9eIhqd_XFpI/s320/vlcsnap-2014-09-28-14h40m03s161.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> If I'm playing a lottery, the more lottery tickets I buy, the more likely I am to win.</p><a href="http://3.bp.blogspot.com/-uA5ywWToDnk/VCgCfGAX8BI/AAAAAAAACjg/gISODrd3uYw/s1600/vlcsnap-2014-09-28-14h43m13s15.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-uA5ywWToDnk/VCgCfGAX8BI/AAAAAAAACjg/gISODrd3uYw/s320/vlcsnap-2014-09-28-14h43m13s15.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> But, this is in the case of a fair lottery (unlike the lotteries that the state runs), where everything that was paid in gets paid out under proper probabilistic principles: by buying more tickets, I will increase my probability of winning the lottery. But have I increased my expected gain? No. I can pay more to increase the probability of winning, but in the end, I did not gain anything. Conservation of information works like that. Let me give you perhaps the simplest example, and actually do the numbers for you. <p><a href="http://4.bp.blogspot.com/-O_FNTSH7fLQ/VCgETLBaySI/AAAAAAAACjs/mgGe1mOalQM/s1600/vlcsnap-2014-09-28-14h50m56s160.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-O_FNTSH7fLQ/VCgETLBaySI/AAAAAAAACjs/mgGe1mOalQM/s320/vlcsnap-2014-09-28-14h50m56s160.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> We all remember "Let's Make a Deal" with Monty Hall.
</p><a href="http://4.bp.blogspot.com/-BJwYJTdM3uQ/VCgF6jppmcI/AAAAAAAACj4/HJITGY43cGQ/s1600/vlcsnap-2014-09-28-14h58m06s79.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-BJwYJTdM3uQ/VCgF6jppmcI/AAAAAAAACj4/HJITGY43cGQ/s320/vlcsnap-2014-09-28-14h58m06s79.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> There are three curtains with a prize behind one of the curtains. Let us say the prize is behind curtain 1. What is the probability of winning? I'm going to do this search. I have got one opportunity. That is my query limit. One opportunity, so I have got a probability of 1/3 to win this thing. But now let's say that someone comes to me and gives me a ticket:</p> <a href="http://3.bp.blogspot.com/-m4AR1Hg4bhQ/VCgHSpp1A1I/AAAAAAAACkE/jZ7eGFoouss/s1600/vlcsnap-2014-09-28-15h03m39s220.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-m4AR1Hg4bhQ/VCgHSpp1A1I/AAAAAAAACkE/jZ7eGFoouss/s320/vlcsnap-2014-09-28-15h03m39s220.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> It is one of these tickets. This ticket (1,1) will say "it is behind curtain 1", this one (1,2) will say "it is behind curtain 1 or curtain 2 with equal probability". Of the nine possible tickets, these five will increase my probability of getting to curtain 1 and thus winning the prize. But the thing is: only five of these tickets! p is 1/3, that is the original probability, I'm now trying to bump it up to 1/2, that is q, but the actual probability of finding one of these tickets is less than that, it is 5/9, the probability is going down, it is less than p/q. This is typical for these search-for-a-search situations: </p><a href="http://3.bp.blogspot.com/-9psR3ptNfFE/VCgKgSaskHI/AAAAAAAACkQ/aFn64Mn4lMY/s1600/vlcsnap-2014-09-28-15h17m26s145.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-9psR3ptNfFE/VCgKgSaskHI/AAAAAAAACkQ/aFn64Mn4lMY/s320/vlcsnap-2014-09-28-15h17m26s145.png" /></a> <p>So, Dr. Dr.
William Dembski does the numbers for us, for this, the simplest of all examples. $p = 1/3$ and $q= 1/2$. What? Wasn't q the probability of finding the prize while using our search strategy, i.e., $P(Choosing\,curtain\,1|Using\,one\,of\,the\,five\,tickets)$? But that is not $1/2$ as he says, it is actually $\frac{4}{5}\cdot \frac{1}{2} + \frac{1}{5} \cdot 1 = \frac{3}{5}$! And therefore, $p/q$ = $\frac{1}{3} / \frac{3}{5} = \frac{5}{9}$, exactly the probability of finding a circled ticket. No surprise here, that is how conditional probabilities work:</p>$p = \frac{1}{3}$=$P(Choosing\,curtain\,1)$ = $P(Choosing\,curtain\,1|Using\,one\,of\,the\,five\,tickets) \cdot P(Using\,one\,of\,the\,five\,tickets)$ + $P(Choosing\,curtain\,1|Using\,one\,of\,the\,other\,tickets) \cdot P(Using\,one\,of\,the\,other\,tickets)$ = $ \frac{3}{5} \cdot \frac{5}{9} + 0 \cdot \frac{4}{9}$=$q \cdot \frac{5}{9}$ <p>This error is so elementary that the audience wasn't able to spot it...</p><p>I have to agree with Dembski, though: <i>This is typical for these search-for-a-search situations</i></p> DiEb, 2014-09-27: Conservation of Information in Evolutionary Search - Talk by William Dembski - part 3<i>For an introduction to this post, take a look <a href="http://dieben.blogspot.de/2014/09/william-dembskis-talk-at-university-of.html">here</a>. There is some interaction with the audience (15'30" - 18'00") which I wasn't able to understand fully. Any help is appreciated! </i> <h3>Part 3: 12' 45" - 31' 25"</h3> <h4>Topics: What is an <i>evolutionary</i> search?</h4> <p style="background-color:lightgrey"><b>William Dembski:</b> Now let's add this next term <i>evolutionary</i>. What does evolutionary - when we put it in front of search - add to the discussion? I think it changes one key aspect here.
Whereas we were looking at some query feedback, now this query feedback takes the form of fitness: how good is it? Query feedback can be quite general. Maybe the query feedback is nothing, when we examine it. Or maybe the query feedback may just say "I'm in the target" or "I'm not in the target". That would be very simple. Fitness is going to give some sort of range of values that ideally identify how close I am to the target. </p><a href="http://1.bp.blogspot.com/-ZW9NJUrFikw/VCamL1L2awI/AAAAAAAACf0/8JJ3l9nMziM/s1600/vlcsnap-2014-09-27-13h55m25s27.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-ZW9NJUrFikw/VCamL1L2awI/AAAAAAAACf0/8JJ3l9nMziM/s320/vlcsnap-2014-09-27-13h55m25s27.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> There are examples of evolutionary search. There is Dawkins' weasel example from his book "The Blind Watchmaker", that is the one I'm going to focus on here. Then there are various - what I would regard as - embellishments of that, because I don't think that there is anything fundamentally new about them. There is MSU's Avida program, Tom Ray's Tierra, Schneider's ev. What is at the heart of these programs is that these are computer programs which mimic - try to mimic - Darwinian evolutionary processes. What are they supposed to show? That is interesting. Look at the history of this field of evolutionary computing and there is a reason why people wanted to do evolution in the computer. That is because the computer would allow evolution to be done in real time, because we cannot really see it in real time in the wild.
</p><a name='more'></a><a href="http://2.bp.blogspot.com/-Pv0WT-IKpOA/VCamvm2aPFI/AAAAAAAACf8/Aq847k0fqpQ/s1600/vlcsnap-2014-09-27-13h59m41s23.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-Pv0WT-IKpOA/VCamvm2aPFI/AAAAAAAACf8/Aq847k0fqpQ/s320/vlcsnap-2014-09-27-13h59m41s23.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Nils Barricelli in 1962: "The Darwinian idea that evolution takes place by random hereditary changes and selection has from the beginning been handicapped by the fact that no proper test has been found to decide whether such evolution was possible and how it would develop under controlled conditions". </p><a href="http://1.bp.blogspot.com/-g54X20MijDA/VCanMmBfBQI/AAAAAAAACgE/BdiPvtfqdV8/s1600/vlcsnap-2014-09-27-14h01m32s110.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-g54X20MijDA/VCanMmBfBQI/AAAAAAAACgE/BdiPvtfqdV8/s320/vlcsnap-2014-09-27-14h01m32s110.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> J. L. Crosby says substantially the same thing in '67. </p><a href="http://1.bp.blogspot.com/-3MRATPxCbXI/VCancHGrvvI/AAAAAAAACgM/-cfgd6dOTwY/s1600/vlcsnap-2014-09-27-14h02m38s8.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-3MRATPxCbXI/VCancHGrvvI/AAAAAAAACgM/-cfgd6dOTwY/s320/vlcsnap-2014-09-27-14h02m38s8.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Heinz Pagels in a popular book in 1989 wrote "The only way to see evolution in action is to make computer models because in real time these changes take aeons, and experiment is impossible". <br />Now, there is Richard Lenski at Michigan who - I think - has run 30 - 40,000 generations of E. coli, which probably corresponds to a million or so years of primate evolution. But I'd say that he has not seen a whole lot of changes, at the end of the game E. coli is still E. coli. 
So if you want to see some massive saltations, I think what Heinz Pagels says does still apply. </p><p style="background-color:lightblue"><b>member of the audience:</b> Can I ask you something?</p><p style="background-color:lightgrey"><b>William Dembski:</b> Yes.</p><p style="background-color:lightblue"><b>member of the audience:</b> The Times had a very interesting article, very recently, exactly about this point. It was about a book that was written by Peter and Rosemary Grant. They looked at finches. And the claim is that they actually did observe evolution in forty years of time. They were basically looking at the evolution of finches in the Galapagos Islands. So, can you speak to it?</p> <p style="background-color:lightgrey"><b>William Dembski:</b> Finch beak variation, yes, in this case it was [???], they saw some. There were some changes which Richard Lenski saw in E. coli, but I think what is supposed to make evolution interesting is not how finches' beaks vary, but how you get beaks in the first place, how you get birds in the first place. That is the sort of evolution that I think these people who are talking about evolution <i>in silico</i> are thinking about: that we can really speed it up, so that we can see some of these big, impressive evolutionary changes.</p><p style="background-color:lightblue"><b>member of the audience:</b> So small evolutionary changes don't bother you.</p><p style="background-color:lightgrey"><b>William Dembski:</b> It's not a question of bothering me. They are there. I mean, the evidence for them is clear. I think there is even evidence for large-scale evolutionary changes. The question is: what is driving them? For Darwinians, it is natural selection. For non-Darwinians, those mechanisms seem to be insufficient. </p><p style="background-color:lightgrey"><b>William Dembski:</b> You are standing up [???]</p><p style="background-color:lightblue"><b>member of the audience:</b> [???]
About two plants growing together and their cells fusing. [???] You get new species. And that's how we make new species in real-time. So, evolution can occur. There is a 1954 [???] Scientific American [???] cataclysmic variation [???]</p><p style="background-color:lightgrey"><b>William Dembski:</b> That happens only in plants. I don't know of any case like that in animals.</p><p style="background-color:steelblue"><b>Leo Kadanoff:</b> Okay, you made your point. [???] Go ahead. I'd build my argument for example [???] two plants [???].</p><p style="background-color:lightgrey"><b>William Dembski:</b> You might argue better by using other plants. Let's look at this example. I don't know how many of you have read the book "The Blind Watchmaker". This is an example that [???] worked countlessly, even in literature trying to justify the power of Darwinian processes to create information. Underscore that word "create", because that is what it is about: is it creating or is it shuffling about already existing information?</p><a href="http://4.bp.blogspot.com/-Y3mFnmadjUY/VCatpo7btFI/AAAAAAAACgw/dlz_L-oeJoY/s1600/vlcsnap-2014-09-27-14h26m34s241.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-Y3mFnmadjUY/VCatpo7btFI/AAAAAAAACgw/dlz_L-oeJoY/s320/vlcsnap-2014-09-27-14h26m34s241.png" /></a> <p style="background-color:lightgrey"><b>William Dembski:</b> Let's look at this example from the vantage of search - I had these seven key components. What is that example? What you are trying to do, you take a random string of 28 letters and spaces - that's the reference class, that's the search space: letters and spaces. So there are $27^{28}$ possibilities. Start out with a random sequence - that is the initialization. Your target is "METHINKS_IT_IS_LIKE_A_WEASEL", this is a line from Shakespeare's Hamlet. You have a fitness that is going to measure how many letters correspond in a given sequence to the target sequence, so, that is basically a Hamming measure.
You are going to have an update rule which is going to say "take an existing sequence and then - one possibility would be - generate 50 offspring by some sort of random mutation process and then take the one that is closest, and that becomes the next one", so that becomes the update rule. Stop criterion is "you stop when you hit the target sequence". And then the query limit is going to be whatever your computational resources allow. The thing is, with this setup, you are going to evolve to this final target sequence very, very quickly.</p><p style="background-color:lightgrey"><b>William Dembski:</b> I'm just trying to give you a sense that all the components are there in this example. The fitness function in this case is a unimodal fitness where basically you are counting the distance letter by letter from the target sequence. For instance, here we have a score of 27, because you have a "J" where there should be a space. So, when the "J" disappears, then we are actually there. That's the example. I will talk about it a bit more. </p><a href="http://4.bp.blogspot.com/-dnN0oH00s2Q/VCazHWt_bZI/AAAAAAAAChA/1ZB_hQavdv0/s1600/vlcsnap-2014-09-27-14h52m00s229.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-dnN0oH00s2Q/VCazHWt_bZI/AAAAAAAAChA/1ZB_hQavdv0/s320/vlcsnap-2014-09-27-14h52m00s229.png" /></a> <p style="background-color:lightgrey"><b>William Dembski:</b> I'm throwing that in as a type of digression. There is a kind of lunatic vitality to this example. I keep seeing it in places, and people keep challenging me on the Internet because I come back to this example as though this somehow misses something fundamental or that it is too simplified. But in fact this example just keeps getting reworked. Most recently - I thank a member of the audience for pointing that out to me - Michael Yarus in his 2010 book [???]. The target phrase for him is NOTHING IN BIOLOGY MAKES SENSE EXCEPT IN THE LIGHT OF EVOLUTION.
There is a popular book by Jeffrey Satinover, "The Quantum Brain", with MONKEYS WROTE SHAKESPEARE. Bernd-Olaf Küppers in the 1990s, his target phrase was EVOLUTION THEORY. This type of example, where you are evolving symbol strings to some target, keeps getting used in the evolutionary literature to justify biological evolution. That is where we want to go with this. The question is: evolutionary search as I've described it to you, this is widely done, in some ways it is part of computational intelligence, in the sense of evolutionary computing, genetic algorithms, it even falls under operations research as some kind of optimization procedure. How does this compare to real life evolution? Now, there are people who think that actually the computational case does provide justification for real life. </p><a href="http://1.bp.blogspot.com/-hdIkYShjbNg/VCa3j_TJL6I/AAAAAAAAChM/ADxbDaGoaJA/s1600/vlcsnap-2014-09-27-15h11m03s247.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-hdIkYShjbNg/VCa3j_TJL6I/AAAAAAAAChM/ADxbDaGoaJA/s320/vlcsnap-2014-09-27-15h11m03s247.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Robert Pennock for instance, who worked on this AVIDA program, he says: "I do scientific research on experimental evolution and evolutionary design using evolving computer organisms, including work showing how evolutionary mechanism can produce the kinds of complex features creationists say is impossible... My colleagues and I have demonstrated experimentally that a Darwinian mechanism can discover irreducibly complex systems." I think he is overstating his case, there are some details he leaves behind. The thing to get from this is that he is using what is happening in computational evolutionary searches to justify biological evolution.
</p><a href="http://3.bp.blogspot.com/-CDfZefQM0ls/VCa4zO0D5dI/AAAAAAAAChU/p4S6mHtG-Gc/s1600/vlcsnap-2014-09-27-15h16m27s245.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-CDfZefQM0ls/VCa4zO0D5dI/AAAAAAAAChU/p4S6mHtG-Gc/s320/vlcsnap-2014-09-27-15h16m27s245.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Ken Miller in his 2008 book "Only a Theory" - he is a biologist at Brown University - says what is needed to drive biological evolution (that is the question he poses): "Just three things: selection, replication, and mutation... Where the information 'comes from' is, in fact, from the selective process itself." I would say that this is actually the received view, that the Darwinian mechanism is able to produce all these nifty things that you see, that all this biological information can be handed over to Darwinian mechanisms, and there we go. I want to address this from the vantage of what I call the "Conservation of Information", but before I do this, I want to create some doubts for you that this can be the whole story. Not by invoking anything like "Conservation of Information", but by actually going back to somebody at the time of Darwin who was looking at the logic of induction, and raised a method of induction, that actually - I think - undercuts the ability of this kind of Darwinian mechanism to produce, to create biological information.</p><a href="http://1.bp.blogspot.com/-9kdbEd_bUd0/VCcJjHNEGbI/AAAAAAAAChk/n6JfgvhXKos/s1600/vlcsnap-2014-09-27-20h59m03s55.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-9kdbEd_bUd0/VCcJjHNEGbI/AAAAAAAAChk/n6JfgvhXKos/s320/vlcsnap-2014-09-27-20h59m03s55.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> This is Mill's method of difference. He formulated this in his "System of Logic" in 1843. It ran to eight editions, the last edition was 1882, so he is a contemporary of Darwin.
Mill's method of difference shows that the Darwinian mechanism by itself cannot generate biological information. How does that work?</p><a href="http://2.bp.blogspot.com/-2N1Vfe59oVc/VCcKoUP4KlI/AAAAAAAAChs/uncAY0vSFog/s1600/vlcsnap-2014-09-27-21h05m31s162.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-2N1Vfe59oVc/VCcKoUP4KlI/AAAAAAAAChs/uncAY0vSFog/s320/vlcsnap-2014-09-27-21h05m31s162.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> The method of difference says: "To explain a difference in effects, one must identify a difference in causes." What does that mean? </p><a href="http://1.bp.blogspot.com/-dwAU-UtQK_Y/VCcLf-cawhI/AAAAAAAACh0/51s7rXlHxN0/s1600/vlcsnap-2014-09-27-21h09m02s104.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-dwAU-UtQK_Y/VCcLf-cawhI/AAAAAAAACh0/51s7rXlHxN0/s320/vlcsnap-2014-09-27-21h09m02s104.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Common causes cannot explain differences in effects. Imagine, here is a difference in effect: Slowed reflexes versus ordinary reflexes.</p><a href="http://3.bp.blogspot.com/-AT7ewg28sM8/VCcM-BVKC8I/AAAAAAAACiA/5rC4Gdb0-ms/s1600/vlcsnap-2014-09-27-21h13m15s85.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-AT7ewg28sM8/VCcM-BVKC8I/AAAAAAAACiA/5rC4Gdb0-ms/s320/vlcsnap-2014-09-27-21h13m15s85.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Watching television, combing hair, o-oh, consuming alcohol. Alcohol is the difference maker. One person consumed it, the other person didn't. You have people watching television or not watching television, that is not making any difference. The difference maker which accounts for the slowed reflexes versus the ordinary reflexes is consuming the alcohol. 
Now let's look at the Darwinian mechanism.</p><a href="http://4.bp.blogspot.com/-6wMkpxPTZwo/VCcPp_QukNI/AAAAAAAACiM/sGjSwTAQR7k/s1600/vlcsnap-2014-09-27-21h26m54s62.png" imageanchor="1" ><img border="0" src="http://4.bp.blogspot.com/-6wMkpxPTZwo/VCcPp_QukNI/AAAAAAAACiM/sGjSwTAQR7k/s320/vlcsnap-2014-09-27-21h26m54s62.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> We have replication, heritability, random variation, natural selection, all these basic components of the Darwinian mechanism. When you run a Darwinian mechanism, if you are a Darwinist, then you would say that in a cellular context it is going to produce - we are going to see - a lot of interesting evolution. But there are cases - for instance, Sol Spiegelman had an experiment back in the sixties in which he looked at polynucleotide synthesis and found that instead of these evolving polynucleotides becoming more and more complex and more interesting, they in fact tended towards simplicity, where the replicators would replicate as quickly as possible. What is supposed to make evolution interesting is that we go from monad to man, right? It is not that we go from cave-fish or cave-fishes that have working eyes to cave-fishes with eye-knobs, because in a case of <i>use it or lose it</i>, in this dark environment they have lost it and now they have eye-knobs. That is evolution, but that is not interesting evolution. It is how you get these eyes in the first place, how you get the beaks in the first place, how you get the birds.<br />Cellular automata: You can have cellular automata that follow Darwinian principles and never go anywhere. And artificial life, [???] the same thing. You can have cases of interesting evolution and evolution that goes in a simplifying direction, that goes nowhere, with all these features.
If this is the case, if the Darwinian mechanism is common to cases where you have interesting evolution and evolution that is not going anywhere, then something besides the Darwinian mechanism must be involved. That is the logic. It seems to me that this should be uncontroversial. </p><a href="http://3.bp.blogspot.com/-ybPPec2D5Kc/VCcWaMtwwKI/AAAAAAAACic/YSBW794eiLE/s1600/vlcsnap-2014-09-27-21h55m26s2.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-ybPPec2D5Kc/VCcWaMtwwKI/AAAAAAAACic/YSBW794eiLE/s320/vlcsnap-2014-09-27-21h55m26s2.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> But Stuart Kauffman, a complexity theorist who is not a Darwinian, and not an Intelligent Design guy like me, has seen this problem. I think he puts it very well in his book "Investigations". He says: "In the absence of any knowledge, or constraint, on the fitness landscape, on average, any search procedure is as good as any other." <br />This is a no-free-lunch theorem, which actually really upset people - John Holland and the evolutionary [???] community back in the nineties; I have a colleague who was there at one of their meetings when this happened. <br />"But life uses mutation, recombination, and selection. These search procedures seem to be working quite well. Your typical bat or butterfly has managed to get itself evolved and seems a rather impressive entity.... If mutation, recombination, and selection only work well on certain kinds of fitness landscapes, yet most organisms are sexual, and hence use recombination, and all organisms use mutation as a search mechanism, where did these well-wrought fitness landscapes come from, such that evolution manages to produce the fancy stuff around us?... No one knows."<br />When I pose this to Darwinians, they often say: "Well, it is just the environment. That is where we get the fitness." I will revisit that. 
I think Kauffman asked the right question here; it is a question that many people do not even see as a question. Let's go back: there are seven key components of our evolutionary search. The question is: where is the information coming from? We do this in a computational context, and this is usually where it is: it is put there in the fitness, it is put in the update rule. My friend Bob Marks had a colleague at Boeing who called himself a "penalty function artist". If you had the right penalty function, the optimization problem was solved. What is a penalty function? That is basically the inverse of a fitness. [???] That is usually where it comes in. Where does the information come in in this METHINKS IT IS LIKE A WEASEL? It came in obviously in setting up the fitness. You have a unimodal fitness function which measures how close you are to this METHINKS IT IS LIKE A WEASEL target phrase. You could have set up a fitness for any other phrase, for gibberish, and it would have evolved there. It was by choosing that fitness that you got it to evolve where it did. By the way, there are about $10^{40}$ ($27^{28}$) sequences of length 28 having 27 possible characters. Any idea how many unimodal Hamming-distance fitness landscapes there are over that space? It is the same: $10^{40}$. For every possible element there you got a unimodal fitness landscape. What he has done there is to say "I evolve this thing to the target sequence", but what he has not told you is "In doing that, I had a fitness landscape which I have carefully adapted". The search for the target phrase became the search for the right unimodal fitness landscape. This is an expression Paul Nelson - a good friend and colleague of mine - gave to me, which I use over and over again: "Filling one hole by digging another". </p> A longer excerpt this time, and one which includes a few gems, though I apologize for not getting everything which was said. My thoughts: <ul><li>"At the end of the game E. 
coli is still E. coli." Yes, William Dembski really did say this.</li><li>The audience seems to have expected to be confronted with a creationist like Ken Ham, but Dembski has no problem with evolution, neither on the small scale nor on the large scale.</li><li>However, he uses the term "interesting evolution" as a kind of straw-man: things have to get more complex and considerably diverse. Is the creation of a tiny bit of information by a Darwinian process unproblematic for him? I doubt it...</li><li>I'm not totally convinced by Dembski's application of the method of difference: he seems to ignore the influence of chance altogether. Neglecting the influence of chance, two guys playing Russian Roulette should both end up dead - or both alive...</li><li>Flogging the WEASEL takes an awful lot of time. Why does he not talk about the Traveling Salesman Problem? Because the information "smuggled in" cannot be detected? Who searched for the fitness landscape?</li><li>BTW, at 30'15'', there is an <i>impressive</i> animation illustrating how big the number $10^{40}$ is....</li></ul> <ul><li><a href="http://dieben.blogspot.de/2014/09/william-dembskis-talk-at-university-of.html">Overview</a></li><li><a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in.html">Part 1: Introduction, What is information?</a></li><li><a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in_26.html">Part 2: What is a search?</a></li></ul> DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com4tag:blogger.com,1999:blog-1689592451067041352.post-73815751830770811622014-09-26T13:48:00.000-07:002014-09-27T04:46:57.877-07:00Conservation of Information in Evolutionary Search - Talk by William Dembski - part 2<i>For an introduction to this post, take a look <a href="http://dieben.blogspot.de/2014/09/william-dembskis-talk-at-university-of.html">here</a>. 
This is quite a short section, with some annotations from me.</i> <h3>Part 2: 09' 40" - 12' 45''</h3> <h4>Topics: What is a search?</h4> <p style="background-color:lightgrey"><b>William Dembski:</b> We talked about information. Let's now look at that second key term "Search". What is a search? There are seven key components in a search.</p><a href="http://1.bp.blogspot.com/-O3sYo1_ExOo/VCW9s7xnTtI/AAAAAAAACfc/zjfYbJtw-3s/s1600/vlcsnap-2014-09-25-23h20m25s132.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-O3sYo1_ExOo/VCW9s7xnTtI/AAAAAAAACfc/zjfYbJtw-3s/s320/vlcsnap-2014-09-25-23h20m25s132.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> You have a search space, you have a target - we are looking for something in the search space. There is initialization - where do we start off? There is a query limit - how many things in the search space can we check out? There is query feedback - when we have checked out, when we have located some item - what is it telling us about itself in terms of how it relates to the target? There is an update rule - once we have queried something, what do we query next? And then finally a stop criterion - when do we stop? How do we know that we have done enough? This is very general.</p><a href="http://2.bp.blogspot.com/-NqtPH7ee674/VCW-ZCVnZsI/AAAAAAAACfk/2ZDRWjI2zww/s1600/vlcsnap-2014-09-25-23h30m01s3.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-NqtPH7ee674/VCW-ZCVnZsI/AAAAAAAACfk/2ZDRWjI2zww/s320/vlcsnap-2014-09-25-23h30m01s3.png" /></a><a name='more'></a><p style="background-color:lightgrey"><b>William Dembski:</b> Let me say something about the query limit, because that will always be involved. Fact is, even though there may be multiple universes, our own universe is very small, there is not a whole lot of computational power in it. 
The best supercomputers now are operating in petaflops, $10^{15}$ to $10^{16}$, there are less than $10^{18}$ seconds in the history of the universe, no research group that I know has ever operated for more than $10^2$ or one hundred years. The number of researchers seems to be bounded by $10^{10}$. Actually, those numbers I gave you add up to $10^{45}$. So, <i>m</i> for all practical purposes is always to be bounded by $10^{40}$, I think that is safe to say. If you are unhappy with that, if you are a really theory-based person thinking what is the absolute limit, Seth Lloyd, a quantum computational theorist at MIT, sets the absolute computational limit of the universe to $10^{120}$. That is the most computations that can ever be done. A computation is going to be involved in search, that is the assumption that I make. Especially if you are representing search <i>in silico</i> [???] about the limit [???] anything that we are looking at in our lifetime, even with Moore's law.</p><p style="background-color:lightgrey"><b>William Dembski:</b> These are the seven key components of search. There is a connection with information, obviously: in finding a target, a search produces information. It gets to the target and rules out things that are not in the target, and thereby realizes one possibility to the exclusion of others. So searches produce information in the sense I have just described. </p> <ul><li>William Dembski's definition of a search differs crucially from the definitions of virtually everybody else - for whom searches just form a subset of optimization problems. Dembski and his collaborators separate the target and the feedback. 
While everybody else is trying to find the optimum of a function (e.g., the characteristic function of a subset $T$ of the search space $\Omega$), and will say that elements in the inverse image of the optimum are in the target, this kind of feedback isn't enough for Dembski: you may have found an element of $\Omega$ with an optimal feedback, but this may or may not lie in the target. In a game of Hangman with Dembski, after guessing the letters F and O for a three-letter word and writing them down as a solution, you may <i>think</i> that FOO is the solution, but even after writing the word out, Dembski would inform you that the real target was BAR. Or in evolutionary terms: some Darwin finch may have quite a good beak for its purpose, and its species may flourish, but in <i>reality</i>, its niche should be occupied by a unicorn.</li><li>More interestingly, the given seven elements of a search are quite different from the description of a search in their paper "A General Theory of Information Cost Incurred by Successful Search", which Dembski announced as one of the three key theoretical publications on CoI! At least, the new elements of a search don't sound as pompous as the former arrangement of the initiator, the terminator, the inspector, the navigator, the nominator, and the discriminator. Now, the search space $\Omega$ and the target $T$ made the list, the initialization is the former initiator, the query limit $m$ and the stop criterion are the terminator, query feedback is the inspector, and the update rule seems to supplant navigator and nominator. Most importantly, the discriminator is gone. <br> I had an interesting exchange with Winston Ewert - one of the authors of the paper - at my blog and in a thread at Uncommon Descent: <a href="http://www.uncommondescent.com/intelligent-design/questioning-information-cost/">Questioning Information Cost</a>. 
In fact, I think that was one of the most fruitful discussions I have had with a proponent of Intelligent Design for quite a while. <br>Winston Ewert was able to clear up some of my misconceptions about their concept, and replace them with new objections. One of my main problems was that in their model even exhaustive searches do not necessarily find the target; in fact, on average, <a href="http://dieben.blogspot.de/2013/07/dembskis-ewerts-and-markss-concept-of.html">all exhaustive searches perform only as well as a single random guess.</a> <br>I can only assume that Dembski, Marks, and Ewert finally recognized that this <i>is</i> indeed a problem for their framework, and perhaps have dropped the poor discriminator unceremoniously. At least, that would answer my question <i>I’d like to know whether this “general framework” is still in use</i> in my exchange with Ewert with a <i>no</i>.</li><li>I don't think much of those calculations of computational limits of the universe. Combinatorics leads to big numbers without great fuss: There are $52! \approx 8.07 \times 10^{62}$ ways to arrange a single deck of cards, many more than can be computed using Dembski's limit of $10^{40}$. With two identical decks, I can find $\frac{104!}{2^{52}} \approx 2.29 \times 10^{150}$ ways to arrange them, more than Seth Lloyd's limit of $10^{120}$. 
And still, card games are played - even solitaire...</li></ul> Previous: <a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in.html">Part 1 - Introduction, What is information?</a>DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-62072911821947630282014-09-25T13:47:00.000-07:002014-09-27T04:47:34.428-07:00Conservation of Information in Evolutionary Search - Talk by William Dembski - part 1<i>For an introduction to this post, take a look <a href="http://dieben.blogspot.de/2014/09/william-dembskis-talk-at-university-of.html">here</a>.</i> <h3>Part 1: 00' 00" - 09' 40''</h3> <h4>Topics: Introduction, What is information?</h4> <p style="background-color:steelblue"><b>Leo Kadanoff:</b> [???] He went on to broader interests in subjects including information theory, philosophy and parts of biology. The best write-up I could find about him was the Discovery Institute's write-up on the web: "mathematician philosopher William A. Dembski is senior fellow with the Discovery Institute. He has taught at the Northwestern University, the University of Notre Dame, and the University of Dallas. He has done postdoctoral work in mathematics at MIT, in physics in Chicago, and in computer science at Princeton. He is a graduate of the University of Illinois, of the University of Chicago, and of Princeton.<br>His fields include mathematics, physics and philosophy, as well as theology. 
We probably hear only a fraction of those interests today in his talk about the "Creation of Information in Evolutionary Search".</p><a href="http://2.bp.blogspot.com/-H2DZFUuE3Mc/VCRy4_nzvqI/AAAAAAAACd8/AFE13maGoKM/s1600/vlcsnap-2014-09-25-21h49m48s32.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-H2DZFUuE3Mc/VCRy4_nzvqI/AAAAAAAACd8/AFE13maGoKM/s320/vlcsnap-2014-09-25-21h49m48s32.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Okay, well, Leo, it is a pleasure to be back here. Leo was my adviser back in 87/88, along with Patrick Billingsley and [???]. The topic is actually "Conservation of Information in Evolutionary Search". I want to speak about that.</p><p style="background-color:steelblue"><b>Leo Kadanoff:</b> I said creation! [???]</p><p style="background-color:lightgrey"><b>William Dembski:</b> I'm called a creationist enough, so I make that distinction when I can. What I will describe is the work that I have done with the Evolutionary Informatics Lab - this is their website.</p><a name='more'></a><a href="http://2.bp.blogspot.com/-MXcqy50lNM4/VCRzZv6EVtI/AAAAAAAACeE/67l8NvNmUyQ/s1600/vlcsnap-2014-09-25-21h50m05s205.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-MXcqy50lNM4/VCRzZv6EVtI/AAAAAAAACeE/67l8NvNmUyQ/s320/vlcsnap-2014-09-25-21h50m05s205.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> The key person there who runs the lab is Robert Marks. He was for twenty-five years on the faculty of the University of Washington. His field was computational intelligence, he is one of the creators of that field which includes evolutionary computing, neural networks, and fuzzy logic. So, he has been at Baylor for about ten years and we started collaborating about a decade ago but it really came to a head about 2007 and we have been publishing since about 2009 in this area. 
So, what I will describe in this talk is really the theoretical work which came out of these three papers.</p><a href="http://2.bp.blogspot.com/-oAqAKhrGMYs/VCR0WD9ln4I/AAAAAAAACeQ/c6sQLTSyTek/s1600/vlcsnap-2014-09-25-21h50m19s83.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-oAqAKhrGMYs/VCR0WD9ln4I/AAAAAAAACeQ/c6sQLTSyTek/s320/vlcsnap-2014-09-25-21h50m19s83.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> "Conservation of Information in Search: Measuring the Cost of Success", that was an IEEE publication, then the next paper "The Search for a Search", that was in a Japanese journal on computational intelligence, and the last, that is [???], that was a conference proceeding. So, anyway, what I would like to do is talk about, just go through the keywords in the titles. Let's start with information.</p><a href="http://2.bp.blogspot.com/-ey0tVwiMbTA/VCR1GKxPZZI/AAAAAAAACeY/7xo3XjpbWJw/s1600/vlcsnap-2014-09-25-21h50m35s247.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-ey0tVwiMbTA/VCR1GKxPZZI/AAAAAAAACeY/7xo3XjpbWJw/s320/vlcsnap-2014-09-25-21h50m35s247.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> What is information? We live in the information age, right? </p><a href="http://1.bp.blogspot.com/-zuF1J1d6Kgc/VCR1XSLmBdI/AAAAAAAACeg/T5dZHFNiOWc/s1600/vlcsnap-2014-09-25-21h50m42s62.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-zuF1J1d6Kgc/VCR1XSLmBdI/AAAAAAAACeg/T5dZHFNiOWc/s320/vlcsnap-2014-09-25-21h50m42s62.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> But the statement that I came across years ago - actually in a philosophy course - which to me really puts it best is the following quote from a philosopher at MIT, Robert Stalnaker; it is in his book "Inquiry", 1984: "To learn something, to acquire information, is to rule out possibilities. 
To understand the information conveyed in a communication is to know what possibilities would be excluded by its truth." This for me has captured what is most crucial about information. So, if you want a definition, here is how I would define it: "Information is the realization of one possibility to the exclusion of others within a reference class of possibilities" [???] I want to round this up. </p><a href="http://3.bp.blogspot.com/-PW7aGQb7g7o/VCR2Fa-Ap0I/AAAAAAAACeo/6SO1iKwTPJM/s1600/vlcsnap-2014-09-25-21h50m51s144.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-PW7aGQb7g7o/VCR2Fa-Ap0I/AAAAAAAACeo/6SO1iKwTPJM/s320/vlcsnap-2014-09-25-21h50m51s144.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> I just want to add: it is one thing to say, "okay, this is what information is", but if you want to do science, especially if you want to do exact science, you have got to measure information. And how do you measure information? Well, you measure it by probabilities. The smaller the probability, the greater the information. Now, information theory adds to that: it takes the log, it usually does a logarithmic transformation of probabilities, it takes averages, that is very common in communication theory, [???] it does other transformations as well, integrals, powers and things like that. 
But at its core, information is measured in probabilities, so let me say something about that: but before I elaborate on the definition of some measurements, I want to give you another way of thinking about information as a decision.</p><a href="http://2.bp.blogspot.com/-SUK4V619C0E/VCR2luquwsI/AAAAAAAACew/LnlPCFD5JRg/s1600/vlcsnap-2014-09-25-21h51m01s245.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-SUK4V619C0E/VCR2luquwsI/AAAAAAAACew/LnlPCFD5JRg/s320/vlcsnap-2014-09-25-21h51m01s245.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Decision and homicide come from the same Latin word, they come from "caedere", <i>to kill, to slay or to cut off</i>. Just as a homicide kills somebody, a decision withdraws options, rules out possibilities. The reason I give this is, I'm trying to massage your intuitions, but a decision is something active. Often, when we think of information we point to something, we say there is an item of information. There is a sense in which items of information have validity, but information fundamentally, I think, is more of a verb than a noun. I show this in my next slide. If we think of information as a decision, then information becomes in the first instance [???] an act rather than an item. So when we speak about an item of information, we keep in mind the act that produced it. Let me give you some examples...</p><a href="http://2.bp.blogspot.com/-w_5_G07Ctz0/VCR3EKUJQfI/AAAAAAAACe4/riwkJ1H89DU/s1600/vlcsnap-2014-09-25-21h51m16s138.png" imageanchor="1" ><img border="0" src="http://2.bp.blogspot.com/-w_5_G07Ctz0/VCR3EKUJQfI/AAAAAAAACe4/riwkJ1H89DU/s320/vlcsnap-2014-09-25-21h51m16s138.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> Let's say I tell you <i>it is raining outside</i>. What have I done? Well, I've excluded that it is not raining outside. So I have actually given you some information. 
If I say <i>it is raining outside or it is not raining outside</i>, have I given you any information? Well, I haven't ruled anything out. But what is the reference class there? It is the weather, it is the weather that is outside. Now, what if I put that in quotes: <i>"it is raining outside"</i>. Now it is a symbol string that is being communicated across a communication channel. In that case the reference class is going to be other symbol strings that might be competing with it. In that case <i>"It is raining outside or it is not raining outside"</i> - now with the quotes - becomes another symbol string that could be put across a communication channel.</p><p style="background-color:lightgrey"><b>William Dembski:</b> It would actually contain more information, because it is longer, it is more improbable, it is harder to reproduce the same symbol string. So what constitutes information is going to be in a sense context [???], context is the reference class in which you are considering it. If I say <i>"it is raining outside"</i>, what about measuring that probability? If I say that in Chicago - it rains here some, maybe with a certain probability. If I tell you in the Sahara desert <i>"it is raining outside"</i>, that is going to be much more improbable, there will be much more information conveyed in that. In terms of the measurement of information, this is how information theorists do it: think of - for instance - a poker hand. If I tell you "this is a hand which has a pair", or "two pairs", there are a lot of different poker hands, about 2.5 million poker hands. But if I tell you "Royal flush", that narrows it down quite a bit. The range of possibilities becomes more constricted, it is more improbable and there is more information. We are doing some basics here, but this is at a more general level than you would be getting in an information theory book, which tends to look at symbol strings and trying to get them [???] across a communication channel. 
Now, what is communication in that case? </p> <a href="http://1.bp.blogspot.com/-bv910xjMwuM/VCR4AW36h3I/AAAAAAAACfE/2ahKDmqikho/s1600/vlcsnap-2014-09-25-21h51m38s107.png" imageanchor="1" ><img border="0" src="http://1.bp.blogspot.com/-bv910xjMwuM/VCR4AW36h3I/AAAAAAAACfE/2ahKDmqikho/s320/vlcsnap-2014-09-25-21h51m38s107.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> I would define communication as the coincidence or correlation of two acts or items of information. Look at Shannon's original diagram in his "Mathematical Theory of Communication" from 1949, you have basically a source and a receiver, and then you have some act of information here which will be mirrored in some way over there. We do this all the time: we see this sort of set-up when I am sending an email communication, there will be some simple strings from my keyboard, that are getting coded in a certain way, and there will be some transport protocols, and there will be use of error correction, and it will be moved until it ends up on your computer. This process is happening several times, there will be multiple - if you will - acts of information that are going to happen. </p><a href="http://3.bp.blogspot.com/-ZmxWOCVgzcA/VCR4hi50NBI/AAAAAAAACfM/5ruPo5RGw3o/s1600/vlcsnap-2014-09-25-21h51m48s202.png" imageanchor="1" ><img border="0" src="http://3.bp.blogspot.com/-ZmxWOCVgzcA/VCR4hi50NBI/AAAAAAAACfM/5ruPo5RGw3o/s320/vlcsnap-2014-09-25-21h51m48s202.png" /></a><p style="background-color:lightgrey"><b>William Dembski:</b> It is interesting to look at the history: Shannon's original concern in coming up with the communication of information was the transmission of intelligence. That is an exact quote, released [???]. I think that was even in his undergraduate papers. </p> In my opinion, there are some problems already in this part of the talk. 
Some can only be spotted with some knowledge of William Dembski's publications, others should be spotted by an audience just generally interested in information theory, e.g.: <ul><li>William Dembski is talking only about information of Shannon's type. This seems to be a very narrow approach.</li><li>William Dembski is well aware of the problems with his paper "The Search for a Search: Measuring the Information Cost of Higher Level Search", see for example Tom English's <a href="http://boundedtheoretics.blogspot.de/2012/05/theorem-that-never-was-diversionary.html">The theorem that never was: Diversionary “erratum” from Dembski and Marks</a>. Dembski knows that there is no valid proof for one of the main theorems in this paper (his grandiosely named <a href="http://dieben.blogspot.de/2009/10/horizontal-no-free-lunch-theorem.html">Horizontal No Free Lunch Theorem</a>), but he chose to ignore this fact, and even deleted an erratum without further comment. And then he presents this paper to a less informed audience as one of the three "Key Publications on CoI"!</li><li>And one amusing thought: "It is raining outside". Who creates this information? The intelligent observer William Dembski or the unintelligent weather in Chicago, which realized the possibility of rain?</li></ul> Next: <a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in_26.html">Part 2 - What is a search?</a>DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com8tag:blogger.com,1999:blog-1689592451067041352.post-16290983073481602902014-09-25T06:30:00.000-07:002014-10-02T02:32:06.889-07:00William Dembski's talk at the University of ChicagoInvited by <a href="http://en.wikipedia.org/wiki/Leo_Kadanoff">Leo Kadanoff</a>, William Dembski spoke on Aug 15, 2014 at the University of Chicago's "Computations in Science" seminar. Jerry A. 
Coyne - a professor in the department of ecology and evolution at the same university - <a href="http://whyevolutionistrue.wordpress.com/2014/08/10/why-dembski-is-speaking-at-the-university-of-chicago/">questioned the judgement of the seminar's organizers</a>. Afterwards, the Discovery Institute <a href="http://www.evolutionnews.org/2014/08/dembski_speaks088961.html">was very pleased</a> with its paladin William Dembski:<blockquote style="background-color:lightgrey">"The talk itself and the Q&A afterward, which were at a pretty high level, went very well."</blockquote>And they loved a concluding remark by Leo Kadanoff:<blockquote style="background-color:lightgrey">I think the ball is in the court of people who believe in evolution. They have to deal with these questions. ...Bill has made his case and we should all go home and think.</blockquote> At William Dembski's former blog <a href="http://www.uncommondescent.com">Uncommon Descent</a>, a video of the talk-cum-questions was posted on <a href="http://www.uncommondescent.com/intelligent-design/bill-dembskis-university-of-chicago-talk-august-15-2014/">Sep 14, 2014</a>: <p><iframe src="//www.youtube.com/embed/MN74Vn-R5fg?rel=0" allowfullscreen="allowfullscreen" frameborder="0" height="315" width="560"></iframe></p> This video has attracted very little attention. To make it easier to access, I have created a transcript, which I will publish on this blog in a short series of posts. Obviously, the usual caveats apply: I'm not a native speaker, but I tried my best to understand and reproduce the talk as truthfully as possible. I apologize in advance for the errors which have inevitably occurred, and I'm grateful for any correction. <h3>How "official" is the video?</h3>The question arose: who actually taped the talk? Some student, who then put it up on youtube? 
I think that it is the work of members of the Discovery Institute: <ol><li>The <a href="https://www.youtube.com/channel/UCsykkjQAZYuZqbGQ-5XdKsw">youtube channel <i>MissIngaNiball</i></a> on which the video is presented seems to belong to Robert Marks (<a href="http://en.wikipedia.org/wiki/Robert_J._Marks_II">wikipedia</a>, <a href="http://americanloons.blogspot.de/2011/10/255-robert-j-marks-ii.html">American Loons</a>), or at least to a member of his family (in which case a predilection for feeble puns would be hereditary). </li><li>Two stills taken from the video are credited to Paul Nelson (<a href="http://en.wikipedia.org/wiki/Paul_Nelson_%28creationist%29">wikipedia</a>, <a href="http://americanloons.blogspot.de/2012/02/293-paul-nelson.html">American Loons</a>) in the Discovery Institute's <a href="http://www.evolutionnews.org/2014/08/dembski_speaks088961.html">article.</a></li></ol><h3>Dembski's talk: Part 1 - 5</h3><ul><li><a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in.html">Part 1: Introduction, What is information?</a></li><li><a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in_26.html">Part 2: What is a search?</a></li><li><a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in_27.html">Part 3: What is an <i>evolutionary</i> search?</a></li><li><a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in_28.html">Part 4: What is Conservation of Information?</a></li><li><a href="http://dieben.blogspot.de/2014/09/conservation-of-information-in_58.html">Part 5: What is Conservation of Information? 
Example continued</a></li></ul> DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-51217034073517005842013-07-14T01:25:00.000-07:002013-07-14T02:52:24.262-07:00Dembski's, Ewert's and Marks's Concept of a Search Applied to Exhaustive SearchesAt <a href="http://www.uncommondescent.com/intelligent-design/questioning-information-cost/#comment-46337">Uncommon Descent</a>, Winston Ewert, co-author of the paper <a href="http://www.worldscientific.com/doi/abs/10.1142/9789814508728_0002">A General Theory of Information Cost Incurred by Successful Search</a>, writes:<blockquote><i>"The search is defined to be a six-tuple consisting of the initiator, terminator, inspector, navigator, nominator, and discriminator. The paper studies the question of picking a search at random, and that would imply picking each of the six components at random. We did not consider it necessary to specifically state that each individual component was also selected at random. That would seem to be implied."</i></blockquote>So, let $\Omega = \{\omega_1, \omega_2, \dots, \omega_N\}$ be our finite search space with $N$ elements. We are looking for a single element $\omega_k$, so we try to maximize the fitness function $f = \chi_{\omega_k}$. To keep everything finite, we don't allow repetitions, i.e., in our search each place can only be visited once. This is - as Macready and Wolpert observed - always possible by keeping a look-up table and thus doesn't change the set-up. Therefore, our search is completed in at most $N$ steps.<br>(BTW: The claim that <i>"each of the six components [is picked] at random"</i> seems not to apply to the <i>inspector</i>: this is a fixed function for a search - in our case, the <i>inspector</i> returns the value of the fitness function. 
Of course, you can say that we pick the <i>inspector</i> at random out of the one-element set of possible <i>inspectors</i>.)<br> Let's take a look at all the searches which are ended by their <i>terminator</i> only after the $N$-th step, i.e., the subset of all exhaustive searches. The prize question: <b>What is the probability of finding the target in such an exhaustive search?</b> Until now, everyone looking at such problems would have thought that this probability is <b>one</b>: we certainly visited $\omega_k$ and spotted that the function $f$ takes its maximum there. But in the world of Dembski, Ewert, and Marks it is not, as a random <i>discriminator</i> takes its toll - and <i>discriminators</i> aren't obliged to return the target if it was found and identified...<br>Counterintuitive? That is a flattering description: the <i>discriminator</i>'s purpose seems to be to turn even a search which is successful by all human standards into a guess to fit the <i>idée fixe</i> that each search can be "represented" by a measure on the search space.<br><b>Addendum:</b> We can drop the condition of not having repetitions in our searches and just look at those searches which are terminated only after the whole search space has been visited: <i>terminators</i> with this property exist. Such searches may have length $N$, but can be much longer. The result is the same: the probability of finding the target during a complete enumeration of the search space is (much) less than one. 
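The effect can be illustrated with a quick simulation - my own sketch, not code from the paper - in which the "random discriminator" simply returns one of the visited elements uniformly at random:

```python
import random

def exhaustive_search(n, target, random_discriminator):
    """One exhaustive search on Omega = {0, ..., n-1}: every element is
    visited exactly once, then a discriminator picks the result."""
    query = random.sample(range(n), n)  # a random enumeration of the whole space
    if random_discriminator:
        return random.choice(query)     # not obliged to return the target...
    # a sensible discriminator spots where f takes its maximum
    return target if target in query else query[-1]

def success_rate(n, target, random_discriminator, trials=20_000):
    hits = sum(exhaustive_search(n, target, random_discriminator) == target
               for _ in range(trials))
    return hits / trials
```

With a sensible discriminator, `success_rate(10, 3, False)` is exactly $1$; with the random discriminator, `success_rate(10, 3, True)` comes out near $1/N = 0.1$ - an exhaustive search reduced to a single guess.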
I have to ask: <b>What good is a model in which an exhaustive search doesn't fare much better than a single guess?</b> DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com4tag:blogger.com,1999:blog-1689592451067041352.post-10100160132199269422013-07-13T03:34:00.000-07:002013-07-13T04:03:13.954-07:00Questioning Information Cost - A reply to Winston EwertOver at <a href="http://www.uncommondescent.com/">Uncommon Descent</a>, Winston Ewert (one of the three authors of the paper <a href="http://www.worldscientific.com/doi/abs/10.1142/9789814508728_0002">A General Theory of Information Cost Incurred by Successful Search</a>) replies in the article "<a href="http://www.uncommondescent.com/intelligent-design/questioning-information-cost/">Questioning Information Cost</a>" to "a number of questions and objections to the paper" I raised. He states five points, which I will address in this post. Obviously, I'll give my reply at <i>Uncommon Descent</i>, too, but their format doesn't allow for mathematical formulas, so it is easier to make a first draft here. I thank Winston Ewert for his answers, but I'd appreciate some further clarifications. <blockquote><i>Firstly, Dieb objects that the quasi-Bayesian calculation on Page 56 is incorrect, although it obtains the correct result. However, the calculation is called a quasi-Bayesian calculation because it engages in hand-waving rather than presenting a rigorous proof. The text in question is shortly after a theorem and is intended to explicate the consequences of that theorem rather than rigorously prove its result. The calculation is not incorrect, but rather deliberately oversimplified.</i></blockquote>Fair enough. So it's not a <i>quasi-Bayesian calculation</i>, but a <i>Bayesian quasi-calculation</i>. I will amend my post (<a href="http://dieben.blogspot.de/2013/07/please-show-all-your-work-for-full.html">Please show all your work for full credit...</a>) with Winston Ewert's explanation. 
<blockquote><i>Secondly, Dieb objects that many quite different searches can be constructed which are represented by the same probability measure. However, if searches were represented as a mapping from the previously visited points to a new point (as in Wolpert and Macready’s original formulation), algorithms which derive the same queries in different ways will be represented the same way. Giving multiple searches the same representation is neither avoidable nor inherently problematic.</i></blockquote>The problem is that Dembski's, Ewert's and Marks's construction of the representation depends not only on the discriminator (see the next point), but on the target, too. Take $\Omega = \{1,2,3,4\}$ and two searches with two steps: <ul><li>The first search consists just of two random guesses, i.e., at each step, each of the numbers is guessed with probability $1/4$.</li><li>The second search has two guesses, too. But at the first step, $1$ is taken with probability $7/16$ and each other number with $3/16$, while at the second step, the number $1$ is omitted from the guess and each other number is guessed with a probability of $1/3$.</li></ul>These two searches are quite different: the first may produce a query $(1,1)$ with probability $1/16$, while the second never will. Now take a discriminator $\Delta$ which returns the target if it is in the query and otherwise a random element of the query. Such a discriminator seems to be quite natural and it is certainly within the range of the definition on pages 35--36. 
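The distributions which $\Delta$ infers can be estimated with a quick Monte Carlo simulation - again my own sketch, just encoding the two searches and the discriminator described above:

```python
import random
from collections import Counter

def search1():
    # two independent uniform guesses on {1, 2, 3, 4}
    return [random.randint(1, 4), random.randint(1, 4)]

def search2():
    # first step: 1 with probability 7/16, each other number with 3/16;
    # second step: 1 is omitted, each other number guessed with probability 1/3
    first = random.choices([1, 2, 3, 4], weights=[7, 3, 3, 3])[0]
    return [first, random.choice([2, 3, 4])]

def delta(query, target):
    # the discriminator: return the target if it is in the query,
    # otherwise a random element of the query
    return target if target in query else random.choice(query)

def induced_measure(search, target, trials=200_000):
    # estimate the measure that delta infers on Omega for a given target
    counts = Counter(delta(search(), target) for _ in range(trials))
    return {w: counts[w] / trials for w in (1, 2, 3, 4)}
```

Running `induced_measure(search1, 1)` and `induced_measure(search2, 1)` gives (up to sampling noise) the same distribution, while the estimates for target $\{2\}$ differ between the two searches - matching the exact values computed next.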
<br>Now, the distribution which $\Delta$ infers on $\Omega$ depends on the target: if we are looking for $\{1\}$, we get:<ul><li>First search: $\mu_{\{1\}}^1$ given by $\mu_{\{1\}}^1(\{1\}) = 7/16$, $\mu_{\{1\}}^1(\{2\})= \mu_{\{1\}}^1(\{3\})= \mu_{\{1\}}^1(\{4\})=3/16$</li><li>Second search: $\mu_{\{1\}}^2 = \mu_{\{1\}}^1$</li></ul>So here are two algorithms which do not derive the same queries - not even in different ways - but are nonetheless represented the same way!<br>In fact, if our target is $\{2\}$, we get other distributions:<ul><li>First search: $\mu_{\{2\}}^1$ given by $\mu_{\{2\}}^1(\{2\}) = 7/16$, $\mu_{\{2\}}^1(\{1\})= \mu_{\{2\}}^1(\{3\})= \mu_{\{2\}}^1(\{4\})=3/16$</li><li>Second search: $\mu_{\{2\}}^2(\{1\}) = 14/96$, $\mu_{\{2\}}^2(\{2\}) = 44/96$, $\mu_{\{2\}}^2(\{3\}) = \mu_{\{2\}}^2(\{4\})= 19/96$.</li></ul> Frankly, this seems to be "inherently problematic". <blockquote><i>Thirdly, Dieb objects that a search will be biased by the discriminator towards selecting elements in the target, not a uniform distribution. However, Dieb’s logic depends on assuming that we have a good discriminator. As the paper states, we do not assume this to be the case. If choosing a random search, we cannot assume that we have a good discriminator (or any other component). The search for the search assumes that we have no prior information, not even the ability to identify points in the target.</i></blockquote>This seems to be a little absurd. Shouldn't your representation work for any discriminator - even a good one? If we are following Wolpert's and Macready's formulation, a blind search means that we try to maximize a characteristic function. So, the natural discriminator should return the element attaining this maximum if it is found in a query. If it doesn't, we build a discriminator which does: we have the output of the <i>inspector</i>, so why not use it? 
If you are telling us that the output of the inspector may be false, then I'd use another inspector, one which gives us the output of the fitness function. If you say now that the output of the fitness function may be dubious, I'd say "tough luck: I maximize this function whether the function is right or wrong - what else is there to do?". These added layers of entities which have hidden knowledge about the target which isn't inherent to the fitness function seem to be superfluous. <blockquote><i>Fourthly, Dieb doesn’t see the point in the navigator’s output as it can be seen as just the next element of the search path. However, the navigator produces information like a distance to the target. The distance will be helpful in determining where to query, but it does not determine the next element of the search path. So it cannot be seen as just the next element of the search path.</i></blockquote>So, what is the difference between the <i>inspector</i> and the <i>navigator</i>? The navigator may take the output of the inspector into account, but nonetheless one could conflate both into a single pair of values - especially as you allow "different forms" for the inspector. So you could get rid of the third row of the search matrix. <blockquote><i>Fifthly, Dieb objects that the inspector is treated inconsistently. However, the output of the inspector is not inconsistent but rather general. The information extracted by the inspector is the information relevant to whether or not a point is in the target. 
That information will take different forms depending on the search, it may be a fitness value, a probability, a yes/no answer, etc.</i></blockquote>Sorry, I may have been confused by the phrase <i>"The inspector $O_{\alpha}$ is an oracle that, in querying a search-space entry, extracts information bearing on its probability of belonging to the target $T$"</i>: if we look at Dawkins's Weasel and take the Hamming distance as the fitness function, each returned value other than $0$ tells us that the <i>probability of belonging to the target $T$</i> for an element is zero itself, whether it is "METHINKS IT IS LIKE A WEASER" or "AAAAAAAAAAAAAAAAAAAAAAAAAAAA". I understand that you want to avoid the notion of proximity to a target, but your phrasing is misleading, too. Do you have any example of a problem where the inspector returns a probability other than 0 or 1? In your examples, it always seems to be the output of a fitness function. <blockquote><i>The authors of the paper conclude that Dieb’s objections derive from misunderstanding our paper. Despite five blog posts related to this paper, we find that Dieb has failed to raise any useful or interesting questions. Should Dieb be inclined to disagree with our assessment, we suggest that he organize his ideas and publish them as a journal article or in a similar venue.</i></blockquote>It's always possible that I've misunderstood certain aspects of the paper. I would be grateful if you helped to clear up such misunderstandings. I hope that my comments above count as useful and at least a little bit interesting. I'm preparing an article, as I've promised earlier, but the work is quite tedious, and any clarification of the matters above would speed it up. Furthermore, I'd like to know whether this "general framework" is still in use, or whether you have tried another way of representing searches as measures. Again, thank you Winston Ewert! 
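To make the point about the Weasel concrete, here is a minimal sketch (mine, not the paper's): with the Hamming distance as fitness function, the only "probability of belonging to the target" the inspector's output ever conveys is $0$ or $1$:

```python
def hamming(a, b):
    # number of positions in which two equal-length strings differ
    return sum(x != y for x, y in zip(a, b))

TARGET = "METHINKS IT IS LIKE A WEASEL"

def membership_probability(candidate):
    # all the fitness value tells us about target membership:
    # distance 0 means "in the target", any other value means "not in the target"
    return 1.0 if hamming(candidate, TARGET) == 0 else 0.0
```

Both "METHINKS IT IS LIKE A WEASER" (Hamming distance $1$) and a string of 28 letters "A" get a membership probability of $0.0$, although their fitness values differ considerably.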
DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com2tag:blogger.com,1999:blog-1689592451067041352.post-68653165373813834612013-07-04T02:53:00.000-07:002013-07-04T02:53:23.155-07:00BI:NP - A General Theory of Information Cost Incurred by Successful Search<div style="font-size:85%; color:darkgreen">(This is an email I wrote to William Dembski, Winston Ewert and Robert Marks II)</div><p> Hi,<p> it's nice to be able to read the proceedings of the conference on <i>Biological Information – New Perspectives</i> for free. However, I have a few questions regarding your contribution <i>"A General Theory of Information Cost Incurred by Successful Search"</i>:<p> 1) Your quasi-Bayesian calculation on p. 56 gets the right result, but IMO it isn't correct: Please see http://dieben.blogspot.de/2013/07/please-show-all-your-work-for-full.html for details.<p> 2) You claim that you have found a <i>representation</i> for searches as measures on the original space. Again, this works for guesses, but seems to be quite problematic when it comes to searches: here, many quite different searches can be constructed which are "<i>represented</i>" by the same $\mu$ in $M(\Omega)$!<p> 3) You are using the uniform measure on $M(\Omega)$. Again, fine with guesses - but when it comes to searches, this becomes questionable: if $\mu_{(X_1, X_2, \dots, X_n)}$ are measures representing searches $S(X_1, X_2, \dots, X_n)$, where at each step an element of $\Omega$ is chosen according to a (uniformly randomly) chosen measure $\theta_k$, then the measures induced by a "<i>discriminator</i>" (which returns an element of $T$ if it was found, otherwise a random element of the first line of the search matrix) aren't again uniformly distributed on $M(\Omega)$. 
In fact, we will get that for $n$ tending to infinity, the measures approach $\delta_T$!<p> 4) For me, your description of a search is quite convoluted: I don't see the point of the "<i>navigator</i>"'s output, as this can be seen just as the next element of your search path. And then there is the output of the "<i>inspector</i>": you are treating it quite inconsistently - once it is the probability that an element is a member of the target, the next time it is the output of a fitness function...<p> I'd like to see you address these issues. Denyse O'Leary promised a series of posts at Uncommon Descent, each one dedicated to an article of the proceedings. If you don't wish to answer via mail - or comment on my blog - perhaps we can discuss these questions there?<p> Yours<p>Di…Eb…DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-73920729503634132852013-07-03T15:38:00.000-07:002013-07-13T03:57:39.026-07:00Please show all your work for full credit...<div style="font-size:85%">(I promised a chapter-by-chapter critique of <i>"A General Theory of Information Cost incurred by Successful Search"</i>. This is quite tedious work, so I wanted to make this little point up front - for full reading pleasure you have to be acquainted with (some of) the definitions used in the paper.)</div> (<i><b>Nota Bene</b>: In a reply to this post, Winston Ewert wrote on <a href="http://www.uncommondescent.com/intelligent-design/questioning-information-cost/">Uncommon Descent</a>: "Dieb objects that the quasi-Bayesian calculation on Page 56 is incorrect, although it obtains the correct result. However, the calculation is called a quasi-Bayesian calculation because it engages in hand-waving rather than presenting a rigorous proof. The text in question is shortly after a theorem and is intended to explicate the consequences of that theorem rather than rigorously prove its result. 
The calculation is not incorrect, but rather deliberately oversimplified."</i>)<p> On page 55 of their article <i>"A General Theory of Information Cost incurred by Successful Search"</i> (<a href="http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0002">free download as pdf</a>), the authors W. A. Dembski, W. Ewert and R. J. Marks II (in future I'll refer to them as DEM) write:<blockquote>To see how the probability costs associated with null and alternative searches relate, it is instructive to consider the following two quasi-Bayesian ways of reckoning these costs:<br>$\mathbf{P}$(locating $T$ via null search)=$\mathbf{P}$(null search locates T & null search is available)<br><div style="text-indent:25%">=$\mathbf{P}$(null search locates T|null search is avail.) × $\mathbf{P}$(null search is avail.)<br></div><div style="text-indent:25%">=$\mathbf{U}(T) \times 1$ [because the availability of null search is taken for granted]<br></div><div style="text-indent:25%">=$p$.</div><br>$\mathbf{P}$(locating $T$ via alt. search)=$\mathbf{P}$(alt. search locates T & alt. search is available)<br><div style="text-indent:25%">=$\mathbf{P}$(alt. search locates T|alt. search is avail.) × $\mathbf{P}$(alt search is avail.)<br></div><div style="text-indent:25%">=$\mu(T) \times \overline{\mathbf{U}}(\overline{T}_q)$<br></div><div style="text-indent:25%">$\le q\,\times\,p/q$</div><div style="text-indent:25%">=$p$.</div></blockquote> I have no problems with the results - at least if we can assume that the uniform measure is apt to be used on $\mathbf{M}(\Omega)$. But the equations seem to be a little bit fishy. Let me explain what I mean, using the simplest setting possible: Let $\Omega = \{0,1\}$ be a set with two elements, and let $T=\{1\}$ be our target. Then $\mathbf{M}(\Omega)$ can be represented by the interval $[0;1]$: for $x \in [0;1]$, $\mu_x = (1-x) \delta_0 + x \delta_1$ is the measure with $\mu_x(\{1\}) = x$. 
We can even introduce an associated search $S_x := S_{\mu_x}$, which is in fact just a single guess on $\Omega$ distributed according to $\mu_x$. Ergo $\mathbf{E}(S_x) = x$. Now we can perform an experiment in two steps:<ol><li>Choose a measure $\mu_x \in \mathbf{M}(\Omega)$ at random.</li><li>Try to locate $T$ using $S_x$.</li></ol>This experiment can be represented by choosing $(X,Y)$ on $[0;1] \times [0;1]$ according to the uniform distribution on the unit square: We look up a number $x$, which represents our measure, then a number $y$: if $y \le x$, we have located our target using $S_x$, otherwise not. <div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-aaaE-FpWRgw/UdSflM4WeNI/AAAAAAAAARs/QS-Zw_ekkhI/s500/dem-001.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" width="200" src="http://2.bp.blogspot.com/-aaaE-FpWRgw/UdSflM4WeNI/AAAAAAAAARs/QS-Zw_ekkhI/s500/dem-001.png" /></a></div>The picture displays the situation and allows us to answer some questions easily: <ul><li>What's the probability to locate our target using the process above? Well, it's $p = 1/2$, represented by the whole green area.</li><li>For a fixed $q$, what is the probability to choose exactly the measure $\mu_q$ and find our target? That's <i>zero</i> (or <i>nil</i>): the red line symbolizes this event, which is a null-set.</li></ul>Dembski, Ewert, and Marks (DEM) obviously don't want to have this, that's why they don't look at $\{\theta \in \mathbf{M}(\Omega) | \theta(T) = q \}$, but at $\overline{T}_q = \{\theta \in \mathbf{M}(\Omega) | \theta(T) \ge q \}$. (pp. 53-54) <ul><li>What's the probability to choose a measure for which the associated guess finds the target with a probability of at least $q$, i.e., $\overline{\mathbf{U}}(\overline{T}_q) $? That would be $(1-q)$, easy to see in our case, but much more difficult to calculate for more complicated arrangements. 
DEM give an upper bound of $p/q$ for this probability.</li><li>What is the probability of finding our target when we have chosen a measure in $\overline{T}_q$? That depends on the measure, but it is at least $q$. On average, it is $\frac{1+q}{2}$: we get this by examining the darker green area...</li> <li>What is the probability of choosing a measure in $\overline{T}_q$ and finding the target? This is given by the darker green area, ergo $(1-q)\frac{1+q}{2}=\frac{1-q^2}{2}$.</li></ul>Now, the darker green area will always be smaller than the whole green area, not only in this simple example, but for all others, too. Therefore the statement: $$\mathbf{P}\text{(locating }T\text{ via alt. search}) \le p$$ is absolutely (and trivially) correct, as $\mathbf{P}\text{(locating }T\text{ via alt. search})$ is the probability of choosing an element of $\overline{T}_q$ and finding the target using that element. But there is a problem in the inequality $$\mu(T) \times \overline{\mathbf{U}}(\overline{T}_q) \le q\,\times\,p/q$$ While $\overline{\mathbf{U}}(\overline{T}_q) \le p/q$, we find that $$\mu(T) \ge q:$$ A measure taken from $\overline{T}_q$ will result in a search which finds the target with a probability of at least $q$. Above, we have seen that the probability is on average $\frac{1+q}{2} > q$. So <b>we cannot say anything about the size of $\mu(T) \times \overline{\mathbf{U}}(\overline{T}_q)$!</b> The shaded area in the picture shows $q\,\times\,p/q$: it has nothing to do with the probabilities which one can see so neatly in the graphic, it just happens to have the right area of $p$... <br> BTW: I don't think that we can split $\mathbf{P}$(alt. search locates T & alt. search is available) neatly into the product used by DEM; a little integration would be necessary... <br>So, I wouldn't give full marks for this exercise, but perhaps I'm wrong? 
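The areas in the picture can also be checked numerically. Here is a small Monte Carlo sketch of the two-step experiment (my own code, for the toy setting $\Omega=\{0,1\}$, $T=\{1\}$ above):

```python
import random

def unit_square_experiment(q, trials=400_000):
    """Choose (X, Y) uniformly on the unit square; X encodes the measure
    mu_X with mu_X({1}) = X, and the guess locates T iff Y <= X."""
    success = 0          # the whole green area: P(locating T)
    above_q = 0          # U-bar(T-bar_q): P(mu_X(T) >= q)
    success_above_q = 0  # the darker green area: P(mu_X(T) >= q and T located)
    for _ in range(trials):
        x, y = random.random(), random.random()
        above_q += (x >= q)
        if y <= x:
            success += 1
            success_above_q += (x >= q)
    return success / trials, above_q / trials, success_above_q / trials
```

For $q=0.4$ the three estimates come out near $p = 1/2$, $1-q = 0.6$ and $\frac{1-q^2}{2} = 0.42$ - and the last value is what the darker green area actually is, regardless of the shaded rectangle of area $q \times p/q = p$.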
DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com0tag:blogger.com,1999:blog-1689592451067041352.post-70937556699765093062013-06-23T23:26:00.003-07:002013-06-23T23:28:19.440-07:00Review of "A General Theory of Information Cost Incurred by Successful Search" - Introduction(For some background information, go <a href="http://dieben.blogspot.de/2013/06/the-ithaca-papers.html">here</a>) <p>There are two main ways to apply mathematics: the first is to shed light on a subject and look for a deeper understanding; the second just wants to create the impression that something important is happening somehow. After looking into the article "A General Theory of Information Cost Incurred by Successful Search" (<a href="http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0002">free download as pdf</a>) I became convinced that the authors are following the second path.<p> The abstract states:<blockquote>This paper provides a general framework for understanding targeted search. It begins by defining the search matrix, which makes explicit the sources of information that can affect search progress. The search matrix enables a search to be represented as a probability measure on the original search space. This representation facilitates tracking the information cost incurred by successful search (success being defined as finding the target). To categorize such costs, various information and efficiency measures are defined, notably, active information. Conservation of information characterizes these costs and is precisely formulated via two theorems, one restricted (proved in previous work of ours), the other general (proved for the first time here). The restricted version assumes a uniform probability search baseline, the general, an arbitrary probability search baseline. 
When a search with probability q of success displaces a baseline search with probability p of success where q > p, conservation of information states that raising the probability of successful search by a factor of q/p(>1) incurs an information cost of at least log(q/p). Conservation of information shows that information, like money, obeys strict accounting principles. </blockquote> The <i>general framework</i> is introduced on pp. 26–38. In my next post, I'll try to relate it to the usual definitions, but I fail to see how this new framework improves significantly on, e.g., the ideas of David Wolpert and William G. Macready (<a href="http://en.wikipedia.org/wiki/NFLT">NFLT at wikipedia</a>). Pp. 38–45 provide examples, interestingly without applying the new framework to them. Then follow a couple of pages with sound math (pp. 45–61); it is just not clear what they have to do with the claims the authors are making. For their mathematics to work, they have to show that searches can be represented as measures. Indeed, the authors write:<blockquote><i>"This representation will be essential throughout the sequel."</i> (p. 37)</blockquote> I will elaborate on why I think that the authors failed to do so, and that the "representation" is at least a misnomer... Another point will be the subject of "Information Cost": this term isn't defined in the paper... 
Top of the list is <blockquote><i>Biological Information: New Perspectives</i> (co-edited with Robert J. Marks II, John Sanford, Michael Behe, and Bruce Gordon). Under contract with Springer Verlag.</blockquote>Well, rejoice, the <a href="http://www.worldscientific.com/worldscibooks/10.1142/8818">electronic version</a> of this book has been published (and is free for download!), and the hard copy is announced for August 2013. Although the publisher switched from <a href="http://www.springer.com">Springer</a> to <a href="http://www.worldscientific.com/">World Scientific</a>, the announcement hasn't changed:<blockquote>In the spring of 2011, a diverse group of scientists gathered at Cornell University to discuss their research into the nature and origin of biological information. This symposium brought together experts in information theory, computer science, numerical simulation, thermodynamics, evolutionary theory, whole organism biology, developmental biology, molecular biology, genetics, physics, biophysics, mathematics, and linguistics. This volume presents new research by those invited to speak at the conference.</blockquote> While the publication of <a href="http://en.wikipedia.org/wiki/Stephen_C._Meyer">Stephen C. Meyer</a>'s new book <a href="http://www.amazon.com/Darwins-Doubt-Explosive-Origin-Intelligent/dp/0062071475/">Darwin's Doubt</a> is hailed with great fanfare at the <a href="http://www.discovery.org/">Discovery Institute</a>'s news-outlet <a href="http://www.evolutionnews.org/">Evolution News</a>, the appearance of this volume hasn't made their news yet - though Dembski and Meyer are both fellows of the Discovery Institute's <a href="http://www.discovery.org/csc/fellows.php">Center for Science and Culture</a> (granted, Meyer is its director). 
Only at Dembski's (former) blog, <a href="http://www.uncommondescent.com/">Uncommon Descent</a>, are there two posts about the book:<ul><li><a href="http://www.uncommondescent.com/intelligent-design/download-the-cornell-papers-free-here/">Download the Cornell papers free here</a></li><li><a href="http://www.uncommondescent.com/intelligent-design/download-cornell-papers-on-origin-of-biological-information-free/">Download Cornell papers on origin of biological information free</a></li></ul> Instantly, there arose a discussion about Denyse O'Leary's (commenting under the <i>nom de guerre</i> "News") choice of title, where the usual combatants switched sides: the evolutionists claimed the title was designed to mislead the average reader into thinking that Cornell University was somehow involved in the conference, while the apologists of Intelligent Design argued that this was just chance. Unfortunately, no one replied to <a href="http://www.uncommondescent.com/intelligent-design/download-cornell-papers-on-origin-of-biological-information-free/#comment-457837">my comment</a>:<blockquote> In the interest of discussing the data and the evidence, could we have posts on various articles of the book? I’d be quite interested in a thread on Chapter 1.1.2 “A General Theory of Information Cost Incurred by Successful Search” by William A. Dembski, Winston Ewert and Robert J. Marks II.<br>I hope that the authors are still reading this blog: this way, we could have a productive discussion, and perhaps some questions could be answered by the people involved!<br>And for the sake of a swift exchange of ideas: could someone please release me from the moderation queue? </blockquote> Maybe there is no interest in such a discussion at Uncommon Descent. Maybe no one read the comment - it was held in the <i>moderation queue</i> for five days, and when it appeared, the article was no longer on the front page. 
Therefore I'll start a number of posts on “A General Theory of Information Cost Incurred by Successful Search” here at my blog: I just can't believe that this <i>peer-edited</i> article would have been successfully <i>peer-reviewed</i> by Springer.... DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com2tag:blogger.com,1999:blog-1689592451067041352.post-49085067448667182922013-06-07T12:16:00.000-07:002013-06-07T12:16:00.662-07:00The initiator, the terminator, the inspector, the navigator, the nominator, and the discriminator...Why, oh why, can't they use standard notations? More about this later...DiEbhttp://www.blogger.com/profile/02099109109735165335noreply@blogger.com4