Month: October 2025

Emergent Coordination in Multi-Agent Language Models

Christoph Riedl

When are multi-agent LLM systems merely a collection of individual agents versus an integrated collective with higher-order structure? We introduce an information-theoretic framework to test — in a purely data-driven way — whether multi-agent systems show signs of higher-order structure. This information decomposition lets us measure whether dynamical emergence is present in multi-agent LLM systems, localize it, and distinguish spurious temporal coupling from performance-relevant cross-agent synergy. We implement both a practical criterion and an emergence capacity criterion, operationalized as partial information decomposition of time-delayed mutual information (TDMI). We apply our framework to experiments using a simple guessing game with no direct agent communication, only minimal group-level feedback, and three randomized interventions. Groups in the control condition exhibit strong temporal synergy but little coordinated alignment across agents. Assigning a persona to each agent introduces stable identity-linked differentiation. Combining personas with an instruction to “think about what other agents might do” yields both identity-linked differentiation and goal-directed complementarity across agents. Taken together, our framework establishes that multi-agent LLM systems can be steered by prompt design from mere aggregates to higher-order collectives. Our results are robust across emergence measures and entropy estimators, and are not explained by coordination-free baselines or temporal dynamics alone. Without attributing human-like cognition to the agents, the patterns of interaction we observe mirror well-established principles of collective intelligence in human groups: effective performance requires both alignment on shared objectives and complementary contributions across members.

https://arxiv.org/abs/2510.05174 
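
As a toy illustration of the TDMI quantity the framework builds on (the paper's actual estimators and the partial information decomposition on top of it are more involved), a plug-in estimate of time-delayed mutual information for a discrete symbol sequence might look like the sketch below. The function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def tdmi(x, lag):
    """Plug-in estimate of time-delayed mutual information
    I(X_t ; X_{t+lag}) in bits, for a discrete symbol sequence x.
    lag must be a positive integer."""
    x = np.asarray(x)
    a, b = x[:-lag], x[lag:]          # pairs (X_t, X_{t+lag})
    symbols = np.unique(x)
    # joint plug-in probability estimate p(a, b)
    p_ab = np.array([[np.mean((a == s) & (b == t)) for t in symbols]
                     for s in symbols])
    # marginals by summing out the other variable
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    mask = p_ab > 0                   # avoid log(0) on unseen pairs
    return float(np.sum(p_ab[mask] *
                        np.log2(p_ab[mask] /
                                (p_a[:, None] * p_b[None, :])[mask])))
```

A perfectly alternating sequence gives roughly 1 bit of TDMI at lag 1 (each symbol fully determines the next), while a constant sequence gives 0.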

Complex Contagion in Social Networks: Causal Evidence from a Country-Scale Field Experiment

Jaemin Lee, David Lazer, Christoph Riedl

Sociological Science

Complex contagion rests on the idea that individuals are more likely to adopt a behavior if they experience social reinforcement from multiple sources. We develop a test for complex contagion, conceptualized as social reinforcement, and then use it to examine whether empirical data from a country-scale randomized controlled viral marketing field experiment show evidence of complex contagion. The experiment uses a peer encouragement design in which individuals were randomly exposed to either one or two friends who were encouraged to share a coupon for a mobile data product. Using three different analytical methods to address the empirical challenges of causal identification, we provide strong support for complex contagion: the contagion process cannot be understood as independent cascades but rather as a process in which signals from multiple sources amplify each other through synergistic interdependence. We also find that social network embeddedness is an important structural moderator that shapes the effectiveness of social reinforcement.

https://sociologicalscience.com/articles-v12-28-685/ 
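
One way to read the null model behind the test: if exposures acted as independent cascades, the adoption probability under two encouraged friends would follow directly from the single-friend rate. A minimal sketch of that baseline follows; the 0.10 rate is a made-up number for illustration, not a figure from the experiment.

```python
def independent_baseline(p_one: float) -> float:
    """Adoption probability under two independent exposures,
    given the single-exposure adoption probability p_one.
    Adoption occurs unless both exposures independently fail."""
    return 1 - (1 - p_one) ** 2

# Hypothetical single-friend adoption rate (illustrative only).
p_one = 0.10
p_two_null = independent_baseline(p_one)

# Observed two-friend adoption above this null is evidence of
# social reinforcement, i.e., complex contagion.
print(f"{p_two_null:.2f}")
```

Complex contagion shows up as the observed two-friend adoption rate exceeding this independent-cascade baseline.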

AI and jobs. A review of theory, estimates, and evidence

R. Maria del Rio-Chanona, Ekkehard Ernst, Rossana Merola, Daniel Samaan, Ole Teutloff

Generative AI is altering work processes, task composition, and organizational design, yet its effects on employment and the macroeconomy remain unresolved. In this review, we synthesize theory and empirical evidence at three levels. First, we trace the evolution from aggregate production frameworks to task- and expertise-based models. Second, we quantitatively review and compare (ex-ante) AI exposure measures of occupations from multiple studies and find convergence towards high-wage jobs. Third, we assemble ex-post evidence of AI’s impact on employment from randomized controlled trials (RCTs), field experiments, and digital trace data (e.g., online labor platforms, software repositories), complemented by partial coverage of surveys. Across the reviewed studies, productivity gains are sizable but context-dependent: on the order of 20 to 60 percent in controlled RCTs, and 15 to 30 percent in field experiments. Novice workers tend to benefit more from LLMs in simple tasks. Across complex tasks, evidence is mixed on whether low- or high-skilled workers benefit more. Digital trace data show substitution between humans and machines in writing and translation alongside rising demand for AI skills, with mild evidence of declining demand for novice workers. A more substantial decrease in demand for novice jobs in AI-complementary work emerges from recent studies using surveys, platform payment records, or administrative data. Research gaps include the focus on simple tasks in experiments, the limited diversity of LLMs studied, and technology-centric AI exposure measures that overlook adoption dynamics and whether exposure translates into substitution, productivity gains, or the erosion or deepening of expertise.

Read the full article at: arxiv.org

Indeterminism in Large Language Models: An Unintentional Step Toward Open-Ended Intelligence

Georgii Karelin, Kohei Nakajima, Enrique F. Soto-Astorga, Earnest Carr, Mark James, Tom Froese

Synergy between stochastic noise and deterministic chaos is a canonical route to unpredictable behavior in nonlinear systems. This letter analyzes the origins and consequences of indeterminism that has recently appeared in leading Large Language Models (LLMs), drawing connections to open-endedness, precariousness, artificial life, and the problem of meaning. Computational indeterminism arises in LLMs from a combination of the non-associative nature of floating-point arithmetic and the arbitrary order of execution in large-scale parallel software-hardware systems. This low-level numerical noise is then amplified by the chaotic dynamics of deep neural networks, producing unpredictable macroscopic behavior. We propose that irrepeatable dynamics in computational processes lend them a mortal nature. Irrepeatability might be recognized as a potential basis for genuinely novel behavior and agentive artificial intelligence and could be explicitly incorporated into system designs. The presence of beneficial intrinsic unpredictability can then be used to evaluate when artificial computational systems exhibit lifelike autonomy.

Read the full article at: philsci-archive.pitt.edu
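
The floating-point non-associativity the authors start from is easy to reproduce: summing the same numbers in a different order, as parallel reductions do nondeterministically, can change the result.

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one reduction order
right = a + (b + c)   # another reduction order

# IEEE-754 addition is not associative: these differ in the last bit.
print(left == right)
print(left, right)
```

In a large parallel system the reduction order varies run to run, so this last-bit discrepancy becomes a source of run-to-run indeterminism even before the network's chaotic dynamics amplify it.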

How Much Math Is Knowable?

Scott Aaronson

Theoretical computer science has over the years sought more and more refined answers to the question of which mathematical truths are knowable by finite beings like ourselves, bounded in time and space and subject to physical laws. I’ll tell a story that starts with Gödel’s Incompleteness Theorem and Turing’s discovery of uncomputability. I’ll then introduce the spectacular Busy Beaver function, which grows faster than any computable function. Work by me and Yedidia, along with recent improvements by O’Rear, Riebel, and others, has shown that the value of BB(549) is independent of the axioms of set theory; on the other end, an international collaboration proved last year that BB(5) = 47,176,870. I’ll speculate on whether BB(6) will ever be known, by us or our AI successors. I’ll next discuss the P ≠ NP conjecture and what it does and doesn’t mean for the limits of machine intelligence. As my own specialty is quantum computing, I’ll summarize what we know about how scalable quantum computers, assuming we get them, will expand the boundary of what’s mathematically knowable. I’ll end by talking about hypothetical models even beyond quantum computers, which might expand the boundary of knowability still further, if one is able (for example) to jump into a black hole, create a closed timelike curve, or project oneself onto the holographic boundary of the universe.

Watch at: www.youtube.com
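
The Busy Beaver values the talk mentions can be checked directly for tiny machines. Below is a minimal Turing-machine simulator running the known 2-state champion, which halts after 6 steps with 4 ones on the tape; BB(5) = 47,176,870 is the same kind of computation, many orders of magnitude longer.

```python
from collections import defaultdict

# 2-state busy beaver champion:
# (state, read symbol) -> (write symbol, head move, next state)
PROGRAM = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),  # H = halt
}

def run(program, start="A", halt="H"):
    tape = defaultdict(int)  # two-way infinite tape, all 0s initially
    head, state, steps = 0, start, 0
    while state != halt:
        write, move, state = program[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(PROGRAM)
print(steps, ones)  # 6 4
```

For 5 states the same loop runs for 47,176,870 steps before halting; for 6 states, as the talk discusses, no one knows whether the answer is humanly (or mechanically) knowable at all.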