Category: Papers

Lyfe: learning to learn better

Stuart Bartlett, Michael L. Wong
Interface Focus (2025) 15(6): 20250019.

Learning—in addition to thermodynamic dissipation, autocatalysis and homeostasis—has been hypothesized to be a key pillar of all living systems. Here, we examine the myriad ways in which organisms on Earth learn over various time and length scales—from Darwinian evolution to protein computation to the scientific method—in order to draw abstractions about the process of learning in general. Be it in life on Earth or lyfe elsewhere in the universe, we propose that learning can be characterized by a combination of mechanisms that favour functional fitness and those that favour novelty search. We also propose that feedbacks related to learning and dissipation, learning and environmental complexity and learning and self-modelling may be general features that guide how the information-processing and predictive abilities of learning systems evolve with time, perhaps even at the scale of planetary biospheres.

Read the full article at: royalsocietypublishing.org

The Evolutionary Ecology of Software: Constraints, Innovation, and the AI Disruption

Sergi Valverde, Blai Vidiella, Salva Duran-Nebreda

This chapter investigates the evolutionary ecology of software, focusing on the symbiotic relationship between software and innovation. An interplay between constraints, tinkering, and frequency-dependent selection drives the complex evolutionary trajectories of these socio-technological systems. Our approach integrates agent-based modeling and case studies, drawing on complex network analysis and evolutionary theory to explore how software evolves under the competing forces of novelty generation and imitation. By examining the evolution of programming languages and their impact on developer practices, we illustrate how technological artifacts co-evolve with and shape societal norms, cultural dynamics, and human interactions. This ecological perspective also informs our analysis of the emerging role of AI-driven development tools in software evolution. While large language models (LLMs) provide unprecedented access to information, their widespread adoption introduces new evolutionary pressures that may contribute to cultural stagnation, much like the decline of diversity in past software ecosystems. Understanding the evolutionary pressures introduced by AI-mediated software production is critical for anticipating broader patterns of cultural change, technological adaptation, and the future of software innovation.

Read the full article at: arxiv.org

Leveraging network motifs to improve artificial neural networks

Haoling Zhang, Chao-Han Huck Yang, Hector Zenil, Pin-Yu Chen, Yue Shen, Narsis A. Kiani & Jesper N. Tegnér
Nature Communications (2025)

As the scale of artificial neural networks continues to expand to tackle increasingly complex tasks or improve the prediction accuracy of specific tasks, the challenges associated with computational demand, hyper-parameter tuning, model interpretability, and deployment costs intensify. Addressing these challenges requires a deeper understanding of how network structures influence network performance. Here, we analyse 882,000 motifs to reveal the functional roles of incoherent and coherent three-node motifs in shaping overall network performance. Our findings reveal that incoherent loops exhibit superior representational capacity and numerical stability, whereas coherent loops show a distinct preference for high-gradient regions within the output landscape. By avoiding such gradient pursuit, incoherent loops sustain more stable adaptation and consequently greater robustness. This mechanism is evident in 97,240 fixed-network training experiments, where coherent-loop networks consistently prioritized high-gradient regions during learning, and is further supported by noise-resilience analyses – from classical reinforcement learning tasks to biological, chemical, and medical applications – which demonstrate that incoherent-loop networks maintain stronger resistance to training noise and environmental perturbations. This work shows the functional impact of structural motif differences on the performance of artificial neural networks, offering foundational insights for designing more resilient and accurate networks.
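
The coherent/incoherent distinction for three-node motifs comes from the network-motif literature: in a feed-forward loop X→Y→Z with a direct shortcut X→Z, the loop is coherent when the direct link's sign agrees with the product of signs along the indirect path, and incoherent otherwise. A minimal classifier, assuming the paper uses this standard definition (the function name is ours):

```python
def loop_coherence(sign_direct, sign_xy, sign_yz):
    """Classify a three-node feed-forward loop X->Y->Z with shortcut X->Z.

    Signs are +1 (excitatory) or -1 (inhibitory). The loop is coherent
    when the direct X->Z sign matches the sign of the indirect X->Y->Z
    path, and incoherent when the two paths disagree.
    """
    indirect = sign_xy * sign_yz
    return "coherent" if sign_direct == indirect else "incoherent"
```

For example, an excitatory shortcut over a doubly excitatory path (`loop_coherence(1, 1, 1)`) is coherent, while the same shortcut over a path with one inhibitory link (`loop_coherence(1, 1, -1)`) is incoherent.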

Read the full article at: www.nature.com

Evolution by natural induction

Richard A. Watson, Michael Levin, Tim Lewens

Interface Focus (2025) 15(6): 20250025.

It is conventionally assumed that all evolutionary adaptation is produced, and could only possibly be produced, by natural selection. Natural induction is a different mechanism of adaptation. It occurs in dynamical systems described by a network of interactions, where connections give way slightly under stress and the system is subject to occasional perturbations. This differential adjustment of connections causes reorganization of the system’s internal structure in a manner equivalent to associative learning familiar in neural networks. This is sufficient for storage and recall of multiple patterns, learning with generalization and solving difficult constraint problems (without any natural selection involved). Various biological systems (from gene-regulation networks to metabolic networks to ecosystems) meet these basic conditions and therefore have potential to exhibit adaptation by natural induction. Here (and in a follow-on paper), we consider various ways that natural induction and natural selection might interact in biological evolution. For example, in some cases, natural selection may act not as a source of adaptations but as a memory of adaptations discovered by natural induction. We conclude that evolution by natural induction is a viable process that expands our understanding of evolutionary adaptation.
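
The core mechanism (connections that give way slightly under stress, plus occasional perturbations, yielding Hebbian-style associative learning) can be illustrated with a toy two-timescale model. This is our own sketch, not the authors' implementation: states relax Hopfield-style on a fast timescale while every connection drifts slightly toward agreeing with the states it joins:

```python
import random

def natural_induction(n=20, steps=2000, rate=0.001, shake_every=100):
    """Toy dynamical system whose connections 'give way' under stress.

    States relax toward locally consistent configurations, while each
    connection drifts slightly toward agreeing with the two states it
    joins -- a Hebbian change that, over many perturbations, builds an
    associative memory of the attractors the system visits.
    """
    # Random symmetric weighted network, no self-connections.
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = random.uniform(-1.0, 1.0)
    s = [random.choice([-1, 1]) for _ in range(n)]

    for t in range(steps):
        if t % shake_every == 0:
            # Occasional perturbation: kick the state somewhere random.
            s = [random.choice([-1, 1]) for _ in range(n)]
        # Fast timescale: one asynchronous state update (stress relief).
        i = random.randrange(n)
        field = sum(w[i][j] * s[j] for j in range(n))
        s[i] = 1 if field >= 0 else -1
        # Slow timescale: every connection yields slightly toward the
        # current configuration (the differential adjustment).
        for a in range(n):
            for b in range(a + 1, n):
                w[a][b] += rate * s[a] * s[b]
                w[b][a] = w[a][b]
    return w, s
```

No fitness function or selection appears anywhere in the loop; any adaptation the weights accumulate comes purely from the interaction of relaxation, perturbation, and yielding connections.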

Read the full article at: royalsocietypublishing.org

Characterizing Open-Ended Evolution Through Undecidability Mechanisms in Random Boolean Networks

Amahury J. López-Díaz, Pedro Juan Rivera Torres, Gerardo L. Febres, Carlos Gershenson

Discrete dynamical models underpin systems biology, but we still lack substrate-agnostic diagnostics for when such models can sustain genuinely open-ended evolution (OEE): the continual production of novel phenotypes rather than eventual settling. We introduce a simple, model-independent metric, Ω, that quantifies OEE as the residence-time-weighted contribution of each attractor's cycle length across the sequence of attractors realized over time. Ω is zero for single-attractor dynamics and grows with the number and persistence of distinct cyclic phenotypes, separating enduring innovation from transient noise. Using Random Boolean Networks (RBNs) as a unifying testbed, we compare classical Boolean dynamics with biologically motivated non-classical mechanisms (probabilistic context switching, annealed rule mutation, paraconsistent logic, modal necessary/possible gating, and quantum-inspired superposition/entanglement) under homogeneous and heterogeneous updating schemes. Our results support the view that undecidability-adjacent, state-dependent mechanisms (implemented as contextual switching, conditional necessity/possibility, controlled contradictions, or correlated branching) are enabling conditions for sustained novelty. At the end of our manuscript, we outline a practical extension of Ω to continuous/hybrid state spaces, positioning Ω as a portable benchmark for OEE in discrete biological modeling and a guide for engineering evolvable synthetic circuits.
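
The abstract's verbal definition of Ω admits a simple reading that we sketch below as an assumption, not the authors' exact formula: each newly visited attractor contributes its cycle length weighted by the fraction of total time the dynamics resided in it, with the starting attractor earning no novelty credit (so single-attractor dynamics score exactly zero, as stated):

```python
def omega(episodes):
    """Residence-time-weighted OEE score (one plausible formalization).

    episodes: ordered list of (attractor_id, cycle_length, residence_time)
    giving the sequence of attractors realized over time.
    """
    if not episodes:
        return 0.0
    total = sum(tau for _, _, tau in episodes)
    if total == 0:
        return 0.0
    seen = {episodes[0][0]}  # baseline attractor: no novelty credit
    score = 0.0
    for aid, cycle_length, tau in episodes[1:]:
        if aid not in seen:  # only enduring *new* phenotypes count
            seen.add(aid)
            score += (tau / total) * cycle_length
    return score
```

Under this reading, a trajectory that settles into a single fixed point scores 0, while one that spends half its time in a newly discovered length-4 cycle scores 2.0; fleeting visits to new attractors contribute little because their residence fraction is small.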

Read the full article at: arxiv.org