New paper showing how a fundamental law of universal computation also applies to weaker forms of computation, and how this can be used to evaluate the effectiveness of measures of complexity

Previously referred to as ‘miraculous’ in the scientific literature because of its powerful properties and its wide applicability as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the Coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We study empirical distributions of computing models across the Chomsky hierarchy and introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously, thoroughly investigated distributions produced from the output distribution of Turing machines. This approach allows numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity estimates at each level of the computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between the computational models. In the context of natural processes that operate below the Turing-universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in the distributions produced by even the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution.

Figure: Coding-theorem-like behaviour and emergence of the Universal Distribution. Correlation in rank (distributions were sorted in terms of each other) of empirical output distributions compared against the output distribution of TM(5,2). A progression towards greater correlation is observed as a function of increasing computational power. Bold black labels are placed at their Chomsky level and gray labels are placed within the most highly correlated level. Shannon entropy and lossless compression (Compress) distribute their values below or at about the first two Chomsky types, as expected. It is not surprising that the LBA with runtime 107 deviates further in ranking, because the LBA after 27 steps already produced the highest-frequency strings, which are expected to converge faster. Eventually LBA 107 (which is none other than TM(4,2)) will converge to TM(5,2). An empirical bound for non-halting models appears to be low LBAs, even when increasing the number of states (or symbols, for CA).
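To make the resource-bounded approach concrete, here is a minimal, illustrative sketch (not the authors' actual pipeline): it enumerates all 2-state, 2-symbol Turing machines under two runtime bounds, builds the empirical output distribution of the halting machines at each bound, converts frequencies into Coding-theorem-style complexity estimates K(s) ≈ −log2 P(s), and compares the two distributions by rank correlation, in the spirit of the figure above. The (2,2) rule space, the chosen step bounds (27 and 107, echoing the runtimes mentioned in the caption), the output convention, and all function names are assumptions made for illustration only.

```python
import math
from itertools import product
from collections import Counter

def run_tm(rule, max_steps):
    """Run a 2-state, 2-symbol Turing machine (busy-beaver-style formalism) on a
    blank tape for at most max_steps steps. Returns the visited portion of the
    tape as a string if the machine halts within the bound, else None."""
    tape = {}          # sparse tape; blank symbol is 0
    pos, state = 0, 1  # state 0 is the halting state
    visited = {0}
    for _ in range(max_steps):
        write, move, state = rule[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        visited.add(pos)
        if state == 0:  # halted: output is the tape region the head visited
            lo, hi = min(visited), max(visited)
            return ''.join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
    return None        # did not halt within the resource bound

def output_distribution(max_steps):
    """Empirical output distribution of all (2,2) machines under a runtime bound."""
    counts = Counter()
    entries = list(product([0, 1], [-1, 1], [0, 1, 2]))  # (write, move, next_state)
    keys = [(s, r) for s in (1, 2) for r in (0, 1)]      # (state, read symbol)
    for choice in product(entries, repeat=len(keys)):
        out = run_tm(dict(zip(keys, choice)), max_steps)
        if out is not None:
            counts[out] += 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def ctm_complexity(dist):
    """Coding-theorem estimate: K(s) ~ -log2 P(s) from the empirical distribution."""
    return {s: -math.log2(p) for s, p in dist.items()}

def spearman(xs, ys):
    """Plain Spearman rank correlation (no tie correction) between two sequences."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

if __name__ == '__main__':
    weak = output_distribution(max_steps=27)     # tighter resource bound
    strong = output_distribution(max_steps=107)  # generous bound for (2,2) machines
    common = sorted(set(weak) & set(strong))
    k_weak, k_strong = ctm_complexity(weak), ctm_complexity(strong)
    rho = spearman([k_weak[s] for s in common], [k_strong[s] for s in common])
    print(f'strings compared: {len(common)}, rank correlation: {rho:.3f}')
```

In this toy setting one would expect the two bounds to agree closely, since short, frequent (simple) strings are produced early; the paper's point is that the same rank agreement persists, to a quantifiable degree, across genuinely different models of the Chomsky hierarchy, not just across runtime cutoffs.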

Source: www.tandfonline.com