Explanations of human technology often point to its cumulative and combinatorial character. Using a novel computational framework, in which individual agents attempt to solve problems by modifying, combining and transmitting technologies in an open-ended search space, this paper re-evaluates two prominent explanations for the cultural evolution of technology: that humans are equipped with (i) social learning mechanisms that minimize information loss during transmission, and (ii) creative mechanisms that generate novel technologies via combinatorial innovation. Both information loss and combinatorial innovation are introduced as parameters in the model and then manipulated to approximate situations where technological evolution is either more cumulative or more combinatorial. Compared with existing models, which tend to marginalize the role of purposeful problem-solving, this approach allows for indefinite growth in complexity while directly simulating constraints from history and computation. The findings show that minimizing information loss is required only when the dynamics are strongly cumulative and characterised by incremental innovation. Contrary to previous findings, when agents are equipped with a capacity for combinatorial innovation, low levels of information loss are neither necessary nor sufficient for populations to solve increasingly complex problems. Instead, higher levels of information loss are advantageous because they unmask the potential for combinatorial innovation. This points to a parsimonious explanation for the cultural evolution of technology that does not invoke separate mechanisms of stability and creativity.
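The two manipulated parameters can be caricatured in a few lines. The sketch below is my own toy rendering, not the paper's actual framework: technologies are bare integers standing in for complexity, `loss` is the probability that transmission degrades a technology, and `p_combine` sets how often innovation combines two technologies rather than refining one incrementally. All names and mechanics here are hypothetical.

```python
import random

def transmit(tech, loss, rng):
    """Transmission step: with probability `loss`, part of the technology's
    complexity fails to transfer (toy stand-in for information loss)."""
    return max(1, tech - 1) if rng.random() < loss else tech

def innovate(repertoire, p_combine, rng):
    """Innovation step: with probability `p_combine`, combine two known
    technologies (combinatorial innovation); otherwise refine one by a
    single increment (incremental innovation)."""
    if len(repertoire) >= 2 and rng.random() < p_combine:
        a, b = rng.sample(repertoire, 2)
        return a + b                       # combination can leap in complexity
    return rng.choice(repertoire) + 1      # incremental step

def generation(repertoire, loss, p_combine, rng):
    """One generation: transmit the current repertoire, then innovate once."""
    inherited = [transmit(t, loss, rng) for t in repertoire]
    return inherited + [innovate(inherited, p_combine, rng)]

rng = random.Random(0)
rep = [1, 1]
for _ in range(50):
    rep = generation(rep, loss=0.3, p_combine=0.5, rng=rng)
print(max(rep))  # highest complexity reached in this toy run
```

In this caricature, incremental steps gain at most one level per generation and so are easily eroded by loss, whereas a combination adds two whole technologies at once, loosely echoing the abstract's point that information loss matters most when dynamics are strongly cumulative and incremental.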
Valiant (2009) proposed treating Darwinian evolution as a special kind of computational learning from statistical queries. The statistical queries represent a genotype’s fitness over a distribution of challenges, and this distribution of challenges, together with the best response to it, specifies a given abiotic environment or static fitness landscape. Valiant’s model distinguished families of environments that are “adaptable-to” from those that are not. But this model of evolution omits the vital ecological interactions between different evolving agents: it neglects the rich biotic environment that is central to the struggle for existence.
In this article, I extend algorithmic Darwinism to include the ecological dynamics of frequency-dependent selection as a population-dependent bias to the distribution of challenges that specifies an environment. This extended algorithmic Darwinism replaces the simple invasion of a wild-type by a mutant-type of higher scalar fitness with an evolutionary game between wild-type and mutant-type based on their frequency-dependent fitness functions. To analyze this model, I develop a game landscape view of evolution, as a generalization of the classic fitness landscape approach that is popular in biology.
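The contrast between scalar invasion and game invasion can be made concrete with a small sketch (my own notation for a standard 2x2 matrix game; this is illustrative, not the paper's formal model):

```python
def freq_dependent_fitness(game, p):
    """Payoffs for a 2x2 game [[a, b], [c, d]]: a = W vs W, b = W vs M,
    c = M vs W, d = M vs M. `p` is the mutant frequency in the population."""
    (a, b), (c, d) = game
    f_w = (1 - p) * a + p * b
    f_m = (1 - p) * c + p * d
    return f_w, f_m

def mutant_invades(game):
    """A rare mutant (p near 0) invades iff it out-earns the resident against
    the resident (c > a), with ties broken by payoff against the mutant."""
    (a, b), (c, d) = game
    return c > a or (c == a and d > b)

# Constant rows recover the scalar-fitness comparison of strict
# algorithmic Darwinism: fitness 1.2 displaces fitness 1.0.
scalar = [[1.0, 1.0], [1.2, 1.2]]
print(mutant_invades(scalar))        # True

# A Hawk-Dove-like game where neither type simply displaces the other:
hawk_dove = [[0.0, 3.0], [1.0, 2.0]]
print(mutant_invades(hawk_dove))     # True: the mutant invades when rare...
f_w, f_m = freq_dependent_fitness(hawk_dove, 0.9)
print(f_m > f_w)                     # False: ...but not when common
```

The last example shows what scalar fitness cannot express: which type is fitter depends on the population's composition, so invasion need not end in fixation.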
I show that this model of eco-evo dynamics on game landscapes can provide an exponential speed-up over the purely evolutionary dynamics of the strict algorithmic Darwinism proposed by Valiant. In particular, I prove that the noisy-Parity environment – which is known to be not adaptable-to under strict algorithmic Darwinism (and conjectured to be not PAC-learnable) – is adaptable-to by eco-evo dynamics. Thus, the ecology of frequency-dependent selection does not just increase the tempo of evolution, but fundamentally transforms its mode.
The eco-evo dynamic for adapting to the noisy-Parity environment proceeds in two stages: (1) a quick stage of point-mutations that moves the population to one of exponentially many local fitness peaks; followed by (2) a slower stage in which each ‘step’ consists of a double-mutation followed by a point-mutation. This second stage allows the population to hop between local fitness peaks and reach the unique global fitness peak in polynomial time. The evolutionary game dynamics of finite populations are essential for finding a short adaptive path to the global fitness peak during this second stage. This highlights the rich interface between computational learning theory, evolutionary games, and long-term evolution.
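The two-stage structure can be caricatured on a much simpler landscape than noisy-Parity. The block landscape below is my own illustrative stand-in, not the paper's construction: triples of bits score 1 when all-zero and 2 when all-one, so point mutations quickly reach one of exponentially many mixed local peaks, and only a double-mutation followed by a compensating point-mutation can convert a block and hop toward the unique all-ones global peak.

```python
import random
from itertools import combinations

def fitness(g):
    """Toy block landscape: each triple scores 1 if (0,0,0), 2 if (1,1,1),
    else 0. Any mix of all-zero and all-one blocks is a local peak under
    point mutations; the unique global peak is the all-ones string."""
    return sum(2 if g[i:i + 3] == [1, 1, 1] else
               1 if g[i:i + 3] == [0, 0, 0] else 0
               for i in range(0, len(g), 3))

def flipped(g, idxs):
    h = list(g)
    for i in idxs:
        h[i] = 1 - h[i]
    return h

def adapt(n=21, seed=1):
    rng = random.Random(seed)
    g = [rng.randint(0, 1) for _ in range(n)]
    # Stage 1: greedy point mutations climb to one of the
    # exponentially many local peaks.
    while True:
        better = [flipped(g, [i]) for i in range(n)
                  if fitness(flipped(g, [i])) > fitness(g)]
        if not better:
            break
        g = better[0]
    # Stage 2: composite steps -- a (possibly deleterious) double mutation
    # followed by a compensating point mutation -- hop between peaks.
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(n), 2):
            h = flipped(g, [i, j])
            for k in range(n):
                h2 = flipped(h, [k])
                if fitness(h2) > fitness(g):
                    g, improved = h2, True
                    break
            if improved:
                break
    return g

print(adapt() == [1] * 21)  # True: reaches the unique global peak
```

Here the valley-crossing composite step is found by exhaustive search over mutation triples; in the abstract's model, the evolutionary game dynamics of finite populations are what make such a step findable along a short adaptive path.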