Leveraging network motifs to improve artificial neural networks

Haoling Zhang, Chao-Han Huck Yang, Hector Zenil, Pin-Yu Chen, Yue Shen, Narsis A. Kiani & Jesper N. Tegnér
Nature Communications (2025)

As the scale of artificial neural networks continues to expand to tackle increasingly complex tasks or improve the prediction accuracy of specific tasks, the challenges associated with computational demand, hyper-parameter tuning, model interpretability, and deployment costs intensify. Addressing these challenges requires a deeper understanding of how network structures influence network performance. Here, we analyse 882,000 motifs to reveal the functional roles of incoherent and coherent three-node motifs in shaping overall network performance. Our findings reveal that incoherent loops exhibit superior representational capacity and numerical stability, whereas coherent loops show a distinct preference for high-gradient regions within the output landscape. By avoiding such gradient pursuit, incoherent loops sustain more stable adaptation and consequently greater robustness. This mechanism is evident in 97,240 fixed-network training experiments, where coherent-loop networks consistently prioritized high-gradient regions during learning, and is further supported by noise-resilience analyses – from classical reinforcement learning tasks to biological, chemical, and medical applications – which demonstrate that incoherent-loop networks maintain stronger resistance to training noise and environmental perturbations. This work shows the functional impact of structural motif differences on the performance of artificial neural networks, offering foundational insights for designing more resilient and accurate networks.
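In network-motif terms, a three-node feed-forward loop is coherent when the direct X→Z path and the indirect X→Y→Z path agree in net sign, and incoherent when they oppose. A minimal numerical sketch (our construction — the unit weights and tanh activations are illustrative assumptions, not the paper's setup) shows the flattening effect the abstract attributes to incoherent loops:

```python
import numpy as np

def motif_output(x, w_xz, w_xy, w_yz):
    """Three-node feed-forward motif: X feeds Z directly (w_xz) and
    indirectly through Y (w_xy, then w_yz), with tanh activations."""
    y = np.tanh(w_xy * x)
    return np.tanh(w_xz * x + w_yz * y)

x = np.linspace(-2.0, 2.0, 201)
# Coherent loop: direct and indirect paths push Z the same way.
coherent = motif_output(x, w_xz=1.0, w_xy=1.0, w_yz=1.0)
# Incoherent loop: the indirect path opposes the direct one.
incoherent = motif_output(x, w_xz=1.0, w_xy=1.0, w_yz=-1.0)

# The opposing paths partially cancel, so the incoherent motif's output
# landscape has a smaller maximum slope -- one intuition for the greater
# numerical stability the study reports for incoherent loops.
print(np.abs(np.gradient(coherent, x)).max())
print(np.abs(np.gradient(incoherent, x)).max())
```

In this toy landscape the incoherent motif's steepest slope is well below the coherent one's, consistent with the abstract's claim that incoherent loops avoid high-gradient regions.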

Read the full article at: www.nature.com

Evolution by natural induction

Richard A. Watson, Michael Levin, Tim Lewens

Interface Focus (2025) 15(6): 20250025.

It is conventionally assumed that all evolutionary adaptation is produced, and could only possibly be produced, by natural selection. Natural induction is a different mechanism of adaptation. It occurs in dynamical systems described by a network of interactions, where connections give way slightly under stress and the system is subject to occasional perturbations. This differential adjustment of connections causes reorganization of the system’s internal structure in a manner equivalent to associative learning familiar in neural networks. This is sufficient for storage and recall of multiple patterns, learning with generalization and solving difficult constraint problems (without any natural selection involved). Various biological systems (from gene-regulation networks to metabolic networks to ecosystems) meet these basic conditions and therefore have potential to exhibit adaptation by natural induction. Here (and in a follow-on paper), we consider various ways that natural induction and natural selection might interact in biological evolution. For example, in some cases, natural selection may act not as a source of adaptations but as a memory of adaptations discovered by natural induction. We conclude that evolution by natural induction is a viable process that expands our understanding of evolutionary adaptation.
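The core mechanism — stressed connections giving way slightly between occasional perturbations — is formally equivalent to Hebbian associative learning. A toy Hopfield-style sketch (our construction, not the paper's model; the network size, learning rate, and the simplifying assumption that perturbations repeatedly hold the system in one stressed configuration are all illustrative) shows how such passive adjustment alone yields storage and recall:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = rng.normal(0.0, 0.1, (n, n))
W = (W + W.T) / 2            # symmetric network of interactions
np.fill_diagonal(W, 0.0)

def settle(s, W, steps=30):
    """Relax the states under the current couplings (sign dynamics)."""
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# A configuration that occasional perturbations repeatedly impose.
pattern = rng.choice([-1, 1], size=n)

eta = 0.02
for _ in range(200):
    # While the system is held under stress in this configuration, each
    # connection "gives way" slightly in the direction that relieves its
    # tension -- formally a Hebbian update, as in associative learning.
    W += eta * np.outer(pattern, pattern)
    np.fill_diagonal(W, 0.0)

# The reorganized network now recalls the pattern from a corrupted cue,
# with no natural selection involved.
cue = pattern.copy()
cue[:3] *= -1
recalled = settle(cue, W)
```

After training, `recalled` matches `pattern` despite the three flipped bits in the cue — the "recall" behaviour the abstract describes.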

Read the full article at: royalsocietypublishing.org

Characterizing Open-Ended Evolution Through Undecidability Mechanisms in Random Boolean Networks

Amahury J. López-Díaz, Pedro Juan Rivera Torres, Gerardo L. Febres, Carlos Gershenson

Discrete dynamical models underpin systems biology, but we still lack substrate-agnostic diagnostics for when such models can sustain genuinely open-ended evolution (OEE): the continual production of novel phenotypes rather than eventual settling. We introduce a simple, model-independent metric, Ω, that quantifies OEE as the residence-time-weighted contribution of each attractor’s cycle length across the sequence of attractors realized over time. Ω is zero for single-attractor dynamics and grows with the number and persistence of distinct cyclic phenotypes, separating enduring innovation from transient noise. Using Random Boolean Networks (RBNs) as a unifying testbed, we compare classical Boolean dynamics with biologically motivated non-classical mechanisms (probabilistic context switching, annealed rule mutation, paraconsistent logic, modal necessary/possible gating, and quantum-inspired superposition/entanglement) under homogeneous and heterogeneous updating schemes. Our results support the view that undecidability-adjacent, state-dependent mechanisms (implemented as contextual switching, conditional necessity/possibility, controlled contradictions, or correlated branching) are enabling conditions for sustained novelty. At the end of our manuscript we outline a practical extension of Ω to continuous/hybrid state spaces, positioning Ω as a portable benchmark for OEE in discrete biological modeling and a guide for engineering evolvable synthetic circuits.
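The abstract names Ω's ingredients but not a closed formula. One plausible reading (hypothetical — this is our guess at the definition, not the authors' code) that reproduces the two stated properties, zero for single-attractor dynamics and growth with the number and persistence of distinct cycles, is:

```python
def omega(episodes):
    """Illustrative Ω: `episodes` is the time-ordered list of
    (attractor_id, cycle_length, residence_time) a trajectory realizes.

    Reading of the abstract (our assumption): every distinct attractor
    beyond the first contributes its cycle length weighted by its share
    of total residence time, so single-attractor dynamics score zero.
    """
    total = sum(t for _, _, t in episodes)
    if total == 0:
        return 0.0
    first_id = episodes[0][0]
    residence, cycle = {}, {}
    for aid, length, t in episodes:
        residence[aid] = residence.get(aid, 0) + t
        cycle[aid] = length
    return float(sum(residence[a] / total * cycle[a]
                     for a in residence if a != first_id))

# A system that settles into one limit cycle: no open-endedness.
settling = [("A", 3, 100)]
# A system that keeps discovering new, persistent cyclic phenotypes.
innovating = [("A", 1, 10), ("B", 4, 30), ("C", 6, 60)]
print(omega(settling))   # 0.0
print(omega(innovating))
```

Under this reading, long residence in many distinct long-cycle attractors drives Ω up, while brief, noisy excursions contribute little — matching the abstract's "separating enduring innovation from transient noise."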

Read the full article at: arxiv.org

NERCCS 2026: Ninth Northeast Regional Conference on Complex Systems: March 11–13, 2026, Rochester, NY (& Online)

NERCCS 2026: The Ninth Northeast Regional Conference on Complex Systems will build on the success of the previous NERCCS conferences, offering an interdisciplinary venue for complex systems researchers in the Northeast U.S. region (and beyond) to share their research outcomes through presentations and online publications, network with their peers, and foster interdisciplinary collaboration and the growth of the research community.

NERCCS will particularly focus on facilitating the professional growth of early career faculty, postdocs, and students in the region who will likely play a leading role in the field of complex systems science and engineering in the coming years.

The 2026 conference will be held primarily in person at the University of Rochester, with an online participation option via Zoom.

More at: nerccs2026.github.io

Workshop on Complex Network Analysis with Applications in Brain Network Science and Complex Systems

19–23 December 2025 (Hybrid)

Network Science Research Lab, IIIT Kottayam, India

The Workshop on Complex Network Analysis with Applications in Brain Network Science and Complex Systems aims to bring together academicians, researchers, industrial experts, Ph.D. scholars, and postdoctoral fellows to explore recent advancements and foundational concepts in graph theory and its applications in network analysis. Graph theory, a cornerstone of discrete mathematics, offers a robust framework for modeling and analyzing complex networks across various domains, from biological systems and brain connectivity to social, technological, and infrastructural networks.

The primary aim of this five-day workshop is to provide a comprehensive introduction to the mathematical foundations and computational techniques of complex network analysis, with particular emphasis on applications in brain network science and biomedical data analysis. The program will cover a range of contemporary topics, including simplicial analysis of fMRI data to study human brain dynamics during functional cognitive tasks, analysis of complex networks and prediction using deep learning models, and graph algorithms together with their computational complexity. Participants will also gain exposure to advanced methodologies such as the application of complex networks in machine learning, the characterization of resting-state fMRI for brain connectivity analysis, and diffusion MRI analysis for clinical applications. In addition, the workshop will introduce recurrence network analysis, which is used in predicting climate change and analysing other dynamical systems. To complement these theoretical discussions, hands-on sessions will be conducted on complex network analysis using NetworkX and on nonlinear dynamics and recurrence analysis.

Free registration for Complex Systems Society Members.

More at: sites.google.com