Engineering Emergence

Abel Jansma, Erik Hoel

One of the reasons complex systems are complex is that they have multiscale structure. How does this multiscale structure come about? We argue that it reflects an emergent hierarchy of scales that contribute to the system’s causal workings. An example is how a computer can be described at the level of its hardware circuitry but also at the level of its software. But we show that many systems, even simple ones, have such an emergent hierarchy, built from a small subset of all their possible scales of description. Formally, we extend the theory of causal emergence (2.0) to analyze the causal contributions across the full multiscale structure of a system rather than just over a single path that traverses the system’s scales. Our methods reveal that systems can be classified as causally top-heavy or bottom-heavy, or their emergent hierarchies can be highly complex. We argue that this provides a more specific notion of scale-freeness (here, when causation is spread equally across the scales of a system) than the standard network-science terminology. More broadly, we provide the mathematical tools to quantify this complexity and give diverse examples of the taxonomy of emergent hierarchies. Finally, we demonstrate the ability to engineer not just the degree of emergence in a system, but also how that emergence is distributed across its multiscale structure.
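
For readers unfamiliar with the underlying quantity, the sketch below illustrates the effective-information comparison on which causal emergence is built, using the classic toy example of a noisy four-state Markov chain that becomes deterministic under a two-state coarse-graining. The transition matrix, the coarse-graining, and the uniform-intervention formulation of effective information are standard textbook illustrations, not the paper's own multiscale machinery, which generalizes this single micro-to-macro comparison to the full hierarchy of scales.

```python
import numpy as np

def effective_information(tpm):
    """Effective information (in bits) of a transition probability matrix:
    the mutual information between a uniform intervention on the current state
    and the resulting next-state distribution."""
    n = tpm.shape[0]
    p_next = tpm.mean(axis=0)          # next-state distribution under the uniform intervention
    ei = 0.0
    for row in tpm:                    # average KL divergence of each row from p_next
        mask = row > 0
        ei += np.sum(row[mask] * np.log2(row[mask] / p_next[mask]))
    return ei / n

def coarse_grain(tpm, groups):
    """Aggregate a TPM over a partition of the state space, weighting micro states uniformly within each group."""
    k = len(groups)
    macro = np.zeros((k, k))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            macro[i, j] = tpm[np.ix_(gi, gj)].sum() / len(gi)
    return macro

# Classic toy example: states 0-2 transition uniformly among themselves, state 3 is a fixed point.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
macro = coarse_grain(micro, [[0, 1, 2], [3]])   # group {0, 1, 2} into a single macro state

print("EI at the micro scale:", effective_information(micro))   # ~0.81 bits
print("EI at the macro scale:", effective_information(macro))   # 1.0 bit: the macro description is more causally informative
```

In this toy case the macro scale carries about 1 bit of effective information against roughly 0.81 bits at the micro scale; the paper's contribution is to ask how such causal contributions are distributed across all scales of a system at once, rather than along one micro-to-macro path.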

Read the full article at: arxiv.org

Artificially intelligent agents in the social and behavioral sciences: A history and outlook

Petter Holme, Milena Tsvetkova

We review the historical development and current trends of artificially intelligent agents (agentic AI) in the social and behavioral sciences: from the first programmable computers, and social simulations soon thereafter, to today’s experiments with large language models. This overview emphasizes the role of AI in the scientific process and the changes it has brought about, both through technological advancements and through the broader evolution of science from around 1950 to the present. Some of the specific points we cover include: the challenges of presenting the first social simulation studies to a world unaware of computers, the rise of social systems science, intelligent game-theoretic agents, the age of big data and the epistemic upheaval in its wake, and the current enthusiasm around applications of generative AI, among many other topics. A pervasive theme is how deeply entwined we are with the technologies we use to understand ourselves.

Read the full article at: arxiv.org

Emergent Coordination in Multi-Agent Language Models

Christoph Riedl

When are multi-agent LLM systems merely a collection of individual agents versus an integrated collective with higher-order structure? We introduce an information-theoretic framework to test — in a purely data-driven way — whether multi-agent systems show signs of higher-order structure. This information decomposition lets us measure whether dynamical emergence is present in multi-agent LLM systems, localize it, and distinguish spurious temporal coupling from performance-relevant cross-agent synergy. We implement both a practical criterion and an emergence capacity criterion, operationalized as partial information decomposition of time-delayed mutual information (TDMI). We apply our framework to experiments using a simple guessing game without direct agent communication and only minimal group-level feedback, with three randomized interventions. Groups in the control condition exhibit strong temporal synergy but little coordinated alignment across agents. Assigning a persona to each agent introduces stable identity-linked differentiation. Combining personas with an instruction to “think about what other agents might do” yields identity-linked differentiation and goal-directed complementarity across agents. Taken together, our framework establishes that multi-agent LLM systems can be steered with prompt design from mere aggregates to higher-order collectives. Our results are robust across emergence measures and entropy estimators, and are not explained by coordination-free baselines or temporal dynamics alone. Without attributing human-like cognition to the agents, the patterns of interaction we observe mirror well-established principles of collective intelligence in human groups: effective performance requires both alignment on shared objectives and complementary contributions across members.
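
As a rough illustration of the bookkeeping behind these measures (not the paper's partial information decomposition itself), the sketch below computes plug-in time-delayed mutual information for a toy group of agents emitting discrete guesses and compares the whole-group TDMI with the sum of single-agent TDMIs. The synthetic data, the entropy estimator, and the whole-versus-parts comparison are assumptions made for illustration only.

```python
import numpy as np
from collections import Counter

def plugin_entropy(symbols):
    """Plug-in (maximum-likelihood) Shannon entropy, in bits, of a sequence of hashable symbols."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def tdmi(series, lag=1):
    """Time-delayed mutual information I(S_t ; S_{t+lag}) for a symbolic series."""
    past, future = series[:-lag], series[lag:]
    joint = list(zip(past, future))
    return plugin_entropy(past) + plugin_entropy(future) - plugin_entropy(joint)

# Toy data: discrete guesses from 3 agents tracking a slowly switching latent target
# (a hypothetical stand-in for LLM agent outputs in a guessing game).
rng = np.random.default_rng(0)
T, n_agents = 2000, 3
hidden = np.zeros(T, dtype=int)
for t in range(1, T):
    hidden[t] = hidden[t - 1] if rng.random() < 0.9 else 1 - hidden[t - 1]
flips = rng.choice([0, 1], size=(T, n_agents), p=[0.85, 0.15])   # per-agent observation noise
guesses = hidden[:, None] ^ flips

# Crude whole-vs-parts comparison: a positive gap loosely suggests cross-agent temporal synergy,
# a negative gap redundancy. The paper localizes these contributions properly with a partial
# information decomposition of the TDMI rather than this simple subtraction.
whole = tdmi([tuple(row) for row in guesses])
parts = sum(tdmi(list(guesses[:, i])) for i in range(n_agents))
print(f"TDMI(whole) = {whole:.3f} bits, sum of single-agent TDMIs = {parts:.3f} bits")
```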

https://arxiv.org/abs/2510.05174 

Complex Contagion in Social Networks: Causal Evidence from a Country-Scale Field Experiment

Jaemin Lee, David Lazer, Christoph Riedl

Sociological Science

Complex contagion rests on the idea that individuals are more likely to adopt a behavior if they experience social reinforcement from multiple sources. We develop a test for complex contagion, conceptualized as social reinforcement, and then use it to examine whether empirical data from a country-scale randomized controlled viral marketing field experiment show evidence of complex contagion. The experiment uses a peer encouragement design in which individuals were randomly exposed to either one or two friends who were encouraged to share a coupon for a mobile data product. Using three different analytical methods to address the empirical challenges of causal identification, we provide strong support for complex contagion: the contagion process cannot be understood as independent cascades but rather as a process in which signals from multiple sources amplify each other through synergistic interdependence. We also find that social network embeddedness is an important structural moderator that shapes the effectiveness of social reinforcement.
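
The core of such a test is whether adoption after exposure from two encouraged friends exceeds what independent cascades would predict from the one-friend effect. The toy simulation below, with made-up adoption probabilities and none of the paper's causal-identification methods, is only meant to make that benchmark concrete.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(adopt_prob, n=200_000):
    """Adoption rates for individuals exposed to a shared coupon by one vs. two encouraged friends.
    adopt_prob(k) is the assumed adoption probability after k reinforcing exposures (toy assumption)."""
    one = (rng.random(n) < adopt_prob(1)).mean()
    two = (rng.random(n) < adopt_prob(2)).mean()
    return one, two

p = 0.05  # hypothetical per-exposure adoption probability

# Simple contagion: exposures act as independent cascades, P(adopt | k) = 1 - (1 - p)^k.
simple_contagion = lambda k: 1 - (1 - p) ** k
# Complex contagion: the second, reinforcing exposure is superadditive.
complex_contagion = lambda k: p if k == 1 else 0.18

for name, model in [("simple", simple_contagion), ("complex", complex_contagion)]:
    r1, r2 = simulate(model)
    benchmark = 1 - (1 - r1) ** 2  # two-friend rate implied by independent cascades
    print(f"{name:8s}  one-friend {r1:.3f}  two-friend {r2:.3f}  independent-cascade benchmark {benchmark:.3f}")
```

Only the complex-contagion model pushes the two-friend adoption rate above the independent-cascade benchmark; that excess is the signature of social reinforcement the paper tests for with its three identification strategies.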

https://sociologicalscience.com/articles-v12-28-685/ 

AI and jobs. A review of theory, estimates, and evidence

R. Maria del Rio-Chanona, Ekkehard Ernst, Rossana Merola, Daniel Samaan, Ole Teutloff

Generative AI is altering work processes, task composition, and organizational design, yet its effects on employment and the macroeconomy remain unresolved. In this review, we synthesize theory and empirical evidence at three levels. First, we trace the evolution from aggregate production frameworks to task- and expertise-based models. Second, we quantitatively review and compare (ex-ante) AI exposure measures of occupations from multiple studies and find convergence towards high-wage jobs. Third, we assemble ex-post evidence of AI’s impact on employment from randomized controlled trials (RCTs), field experiments, and digital trace data (e.g., online labor platforms, software repositories), complemented by partial coverage of surveys. Across the reviewed studies, productivity gains are sizable but context-dependent: on the order of 20 to 60 percent in controlled RCTs, and 15 to 30 percent in field experiments. Novice workers tend to benefit more from LLMs on simple tasks; for complex tasks, evidence is mixed on whether low- or high-skilled workers benefit more. Digital trace data show substitution between humans and machines in writing and translation alongside rising demand for AI, with mild evidence of declining demand for novice workers. A more substantial decrease in demand for novice jobs in AI-complementary work emerges from recent studies using surveys, platform payment records, or administrative data. Research gaps include the focus on simple tasks in experiments, the limited diversity of LLMs studied, and technology-centric AI exposure measures that overlook adoption dynamics and whether exposure translates into substitution, productivity gains, or the erosion or enhancement of expertise.

Read the full article at: arxiv.org