Month: April 2025

Darwin in the machine: addressing algorithmic individuation through evolutionary narratives in computing

Selin E. Nugen

AI & SOCIETY

This paper examines the application of evolutionary analogies in AI (artificial intelligence) research, focussing on narratives that perpetuate individuated and autonomous imaginaries of AI systems through biological diction. AI research has long drawn inspiration from evolution to design and predict algorithmic change. Occasionally, these narratives extend that inspiration to reimagine AI as a non-human species subject to the same evolutionary pressures as biological organisms. As AI technologies become more pervasively embedded in public life and demand critical perspectives on their social impacts, these comparisons in AI discourse raise questions about the limits of and responsibility in employing such analogies, and about their potential impact on how broader audiences consume and perceive AI systems. This paper examines the diverse ways in which, and the intentions with which, evolution is invoked in AI research narratives by analysing the adaptation of individuating evolutionary language and concepts across three fields of AI-related research: evolutionary computing, Artificial Life, and existential risk. It scrutinises the challenge of accurate scientific communication when drawing inspiration from biological evolution and assigning organismal attributes to digital technologies whilst decontextualising wider evolutionary scholarly discourses. I argue that the intertwined history of evolutionary theory and technological change, paired with the potential risks to wider perceptions of AI and biological evolution, requires (1) strategic consideration of the limits of evolutionary analogies in categorising AI in relation to biological organisms, balancing creative inspiration with scientific caution, and (2) active, collaborative multidisciplinary engagement with addressing potential misinformation, recognising that biological narratives have sociopolitical implications that influence human interaction with machines.

Read the full article at: link.springer.com

Graph coloring framework to mitigate cascading failure in complex networks

Karan Singh, V. K. Chandrasekar, Wei Zou, Jürgen Kurths & D. V. Senthilkumar 

Communications Physics volume 8, Article number: 170 (2025)

Cascading failures pose a significant threat to the stability and functionality of complex systems, making their mitigation a crucial area of research. While existing strategies aim to enhance network robustness, identifying an optimal set of critical nodes that mediates the cascade for protection remains a challenging task. Here, we present a robust and pragmatic framework that effectively mitigates cascading failures by strategically identifying and securing critical nodes within the network. Our approach leverages a graph coloring technique to identify the critical nodes using the local network topology, and yields a minimal set of critical nodes to protect that is nevertheless maximally effective in mitigating the cascade, thereby keeping a large fraction of the network intact. Our method outperforms existing mitigation strategies across diverse network configurations and failure scenarios. An extensive empirical validation using real-world networks highlights the practical utility of our framework, offering a promising tool for enhancing network robustness in complex systems.
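
The details of the framework are in the paper itself; as a loose, illustrative sketch of the general ingredients (a greedy graph coloring plus a simple threshold cascade), here is a toy Python example. The node-selection heuristic (one high-degree node per color class) and the cascade rule are assumptions made for illustration, not the authors' algorithm; only networkx's standard greedy coloring is used.

```python
# Toy illustration only: greedy coloring + a simple threshold cascade.
# The selection heuristic below is an assumption, not the paper's method.
import networkx as nx

def protected_set(G, budget=10):
    """Pick up to `budget` nodes, taking the highest-degree node
    from each color class of a greedy coloring (illustrative heuristic)."""
    coloring = nx.greedy_color(G, strategy="largest_first")
    classes = {}
    for node, color in coloring.items():
        classes.setdefault(color, []).append(node)
    picks = [max(nodes, key=G.degree) for nodes in classes.values()]
    picks.sort(key=G.degree, reverse=True)
    return set(picks[:budget])

def cascade(G, seed_failures, protected=frozenset(), frac=0.5):
    """Threshold cascade: an unprotected node fails once more than
    `frac` of its neighbours have failed. Returns the failed set."""
    failed = set(seed_failures) - set(protected)
    changed = True
    while changed:
        changed = False
        for v in G.nodes:
            if v in failed or v in protected:
                continue
            nbrs = list(G.neighbors(v))
            if nbrs and sum(n in failed for n in nbrs) / len(nbrs) > frac:
                failed.add(v)
                changed = True
    return failed

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(500, 3, seed=1)
    seeds = sorted(G.nodes, key=G.degree, reverse=True)[:5]  # attack the hubs
    print("failed, no protection      :", len(cascade(G, seeds)))
    print("failed, coloring-protected :", len(cascade(G, seeds, protected_set(G))))
```

Running the toy script compares the cascade size with and without the protected set; the paper's framework differs in how it exploits local topology and in the failure dynamics it considers.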

Read the full article at: www.nature.com

Large AI models are cultural and social technologies

Debates about artificial intelligence (AI) tend to revolve around whether large models are intelligent, autonomous agents. Some AI researchers and commentators speculate that we are on the cusp of creating agents with artificial general intelligence (AGI), a prospect anticipated with both elation and anxiety. There have also been extensive conversations about cultural and social consequences of large models, orbiting around two foci: immediate effects of these systems as they are currently used, and hypothetical futures when these systems turn into AGI agents—perhaps even superintelligent AGI agents. But this discourse about large models as intelligent agents is fundamentally misconceived. Combining ideas from social and behavioral sciences with computer science can help us to understand AI systems more accurately. Large models should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated.

Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans
Science, 13 Mar 2025, Vol. 387, Issue 6739

Read the full article at: www.science.org

Optimal flock formation induced by agent heterogeneity

Arthur N. Montanari, Ana Elisa D. Barioni, Chao Duan, Adilson E. Motter

The study of flocking in biological systems has identified conditions for self-organized collective behavior, inspiring the development of decentralized strategies to coordinate the dynamics of swarms of drones and other autonomous vehicles. Previous research has focused primarily on the role of the time-varying interaction network among agents while assuming that the agents themselves are identical or nearly identical. Here, we depart from this conventional assumption to investigate how inter-individual differences between agents affect the stability and convergence in flocking dynamics. We show that flocks of agents with optimally assigned heterogeneous parameters significantly outperform their homogeneous counterparts, achieving 20-40% faster convergence to desired formations across various control tasks. These tasks include target tracking, flock formation, and obstacle maneuvering. In systems with communication delays, heterogeneity can enable convergence even when flocking is unstable for identical agents. Our results challenge existing paradigms in multi-agent control and establish system disorder as an adaptive, distributed mechanism to promote collective behavior in flocking dynamics.
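
As a minimal, hedged illustration of the kind of comparison the abstract describes, the Python sketch below simulates a first-order velocity-consensus "flock" in which each agent has its own control gain, and measures the time for velocities to align. The dynamics, gain values, and network are assumptions for illustration; the paper studies richer dynamics (formation, target tracking, delays) and optimally assigns the heterogeneous parameters rather than drawing them at random.

```python
# Toy velocity-consensus flock: dv_i/dt = k_i * sum_j A_ij (v_j - v_i).
# Illustration only; whether a heterogeneous gain vector helps depends on
# the assignment -- the paper's contribution is choosing it optimally.
import numpy as np

def settling_time(gains, A, v0, dt=0.01, tol=1e-3, t_max=50.0):
    """Euler-integrate the consensus dynamics; return the first time at
    which all velocities lie within `tol` of their mean (inf if never)."""
    v = v0.copy()
    t = 0.0
    while t < t_max:
        if np.max(np.abs(v - v.mean())) < tol:
            return t
        coupling = A @ v - A.sum(axis=1) * v   # sum_j A_ij (v_j - v_i)
        v = v + dt * gains * coupling
        t += dt
    return np.inf

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                 # undirected interaction graph
v0 = rng.normal(size=n)                        # initial 1-D velocities

homogeneous   = np.full(n, 1.0)
heterogeneous = rng.uniform(0.5, 1.5, size=n)  # one (unoptimized) assignment

print("homogeneous  :", settling_time(homogeneous, A, v0))
print("heterogeneous:", settling_time(heterogeneous, A, v0))
```

Swapping in different gain assignments (or adding communication delays) is the natural next experiment, and is where the paper's optimal-assignment results apply.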

Read the full article at: arxiv.org