Month: March 2025

Agent-Based Modeling in Economics and Finance: Past, Present, and Future

Robert L. Axtell, J. Doyne Farmer
JOURNAL OF ECONOMIC LITERATURE VOL. 63, NO. 1, MARCH 2025 (pp. 197–287)

Agent-based modeling (ABM) is a novel computational methodology for representing the behavior of individuals in order to study social phenomena. Its use is rapidly growing in many fields. We review ABM in economics and finance and highlight how it can be used to relax conventional assumptions in standard economic models. ABM has enriched our understanding of markets, industrial organization, labor, macro, development, public policy, and environmental economics. In financial markets, substantial accomplishments include understanding clustered volatility, market impact, systemic risk, and housing markets. We present a vision for how ABMs might be used in the future to build more realistic models of the economy and review some of the hurdles that must be overcome to achieve this.
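To give a flavor of the modeling style the review surveys, here is a minimal, purely illustrative agent-based market in Python: heterogeneous agents (fundamentalists and chartists, a classic setup in ABM finance) each generate demand, and the price moves with the aggregate. All rules and parameters below are hypothetical sketches, not taken from the article.

```python
import random

def simulate_market(n_agents=100, steps=200, seed=42):
    """Toy agent-based market. Fundamentalists push the price toward a
    fixed fundamental value; chartists extrapolate the latest price
    change. The price then moves with aggregate excess demand."""
    rng = random.Random(seed)
    fundamental = 100.0
    prices = [100.0, 100.0]
    # Randomly assign a fixed strategy to each agent.
    strategies = [rng.choice(["fundamentalist", "chartist"])
                  for _ in range(n_agents)]
    for _ in range(steps):
        p, p_prev = prices[-1], prices[-2]
        demand = 0.0
        for s in strategies:
            if s == "fundamentalist":
                demand += 0.01 * (fundamental - p)   # buy if undervalued
            else:
                demand += 0.5 * (p - p_prev)         # follow the trend
            demand += rng.gauss(0.0, 0.05)           # idiosyncratic noise
        prices.append(p + demand / n_agents)         # net demand moves price
    return prices

prices = simulate_market()
```

Even this toy version shows the ABM workflow: specify individual behavioral rules, let them interact, and study the aggregate time series that emerges rather than assuming an equilibrium.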

Read the full article at: www.aeaweb.org

Network renormalization

Andrea Gabrielli, Diego Garlaschelli, Subodh P. Patil & M. Ángeles Serrano 

Nature Reviews Physics (2025)

The renormalization group (RG) is a powerful theoretical framework. It is used on systems with many degrees of freedom to transform the description of their configurations, along with the associated model parameters and coupling constants, across different levels of resolution. The RG also provides a way to identify critical points of phase transitions and study the system’s behaviour around them. In traditional physical applications, the RG largely builds on the notions of homogeneity, symmetry, geometry and locality to define metric distances, scale transformations and self-similar coarse-graining schemes. More recently, efforts have been made to extend RG concepts to complex networks. However, in such systems, explicit geometric coordinates do not necessarily exist, different nodes and subgraphs can have different statistical properties, and homogeneous lattice-like symmetries are absent — all features that make it complicated to define consistent renormalization procedures. In this Technical Review, we discuss the main approaches, important advances, and the remaining open challenges for network renormalization.
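To make the idea of a coarse-graining step concrete, here is a stdlib-only toy sketch: nodes are merged into supernodes block by block, and two supernodes are linked whenever any pair of their members was linked. Note the block partition is supplied by hand here; the schemes discussed in the Review derive it from geometric, metric, or statistical structure, which is exactly what is hard on heterogeneous networks.

```python
def coarse_grain(edges, blocks):
    """One renormalization step on an undirected graph: merge each
    block of nodes into a supernode; connect two supernodes iff any
    pair of their members was connected in the original graph."""
    node_to_block = {}
    for b, members in enumerate(blocks):
        for node in members:
            node_to_block[node] = b
    super_edges = set()
    for u, v in edges:
        bu, bv = node_to_block[u], node_to_block[v]
        if bu != bv:  # intra-block edges are absorbed into the supernode
            super_edges.add((min(bu, bv), max(bu, bv)))
    return sorted(super_edges)

# A 6-node path 0-1-2-3-4-5, grouped into three blocks of adjacent nodes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
blocks = [[0, 1], [2, 3], [4, 5]]
print(coarse_grain(edges, blocks))  # [(0, 1), (1, 2)] -- a shorter path
```

The path renormalizes to a path, a simple instance of the self-similarity that RG schemes try to detect and exploit.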

Read the full article at: www.nature.com

Network Reconstruction via the Minimum Description Length Principle

Tiago P. Peixoto

Phys. Rev. X 15, 011065

A fundamental problem associated with the task of network reconstruction from dynamical or behavioral data consists in determining the most appropriate model complexity in a manner that prevents overfitting and produces an inferred network with a statistically justifiable number of edges and their weight distribution. The status quo in this context is based on L1 regularization combined with cross-validation. However, besides its high computational cost, this commonplace approach unnecessarily ties the promotion of sparsity, i.e., abundance of zero weights, with weight “shrinkage.” This combination forces a trade-off between the bias introduced by shrinkage and the network sparsity, which often results in substantial overfitting even after cross-validation. In this work, we propose an alternative nonparametric regularization scheme based on hierarchical Bayesian inference and weight quantization, which does not rely on weight shrinkage to promote sparsity. Our approach follows the minimum description length principle, and uncovers the weight distribution that allows for the most compression of the data, thus avoiding overfitting without requiring cross-validation. The latter property renders our approach substantially faster and simpler to employ, as it requires a single fit to the complete data, instead of many fits for multiple data splits and choice of regularization parameter. As a result, we have a principled and efficient inference scheme that can be used with a large variety of generative models, without requiring the number of reconstructed edges and their weight distribution to be known in advance. In a series of examples, we also demonstrate that our scheme yields systematically increased accuracy in the reconstruction of both artificial and empirical networks. We highlight the use of our method with the reconstruction of interaction networks between microbial communities from large-scale abundance samples involving on the order of 10⁴–10⁵ species and demonstrate how the inferred model can be used to predict the outcome of potential interventions and tipping points in the system.
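The minimum description length logic behind the paper can be sketched with a toy example (this illustrates the general two-part-code principle only, not the paper's actual hierarchical Bayesian scheme): among candidate quantizations of a set of edge weights, prefer the one minimizing the bits needed to state the quantization levels plus the bits to encode each weight under the empirical distribution of assigned levels. The 32-bit cost per level is an arbitrary illustrative choice.

```python
import math
from collections import Counter

def description_length(weights, levels):
    """Two-part MDL code length: bits to state the quantization levels
    (fixed precision each) plus bits to encode every weight as its
    nearest level under the empirical level distribution."""
    bits_per_level = 32.0  # illustrative cost of stating one level value
    assigned = [min(levels, key=lambda c: abs(c - w)) for w in weights]
    counts = Counter(assigned)
    n = len(weights)
    data_bits = -sum(c * math.log2(c / n) for c in counts.values())
    return bits_per_level * len(levels) + data_bits

# Weights that are really just two values plus noise: two levels should
# compress the data better than four.
weights = [0.0, 0.01, -0.02, 1.0, 0.99, 1.02, 0.0, 1.01]
dl2 = description_length(weights, [0.0, 1.0])
dl4 = description_length(weights, [0.0, 0.33, 0.66, 1.0])
print(dl2 < dl4)  # True: the simpler quantization wins
```

Picking the quantization with the shortest total code length is what lets the method select model complexity from a single fit, with no cross-validation split or tuned regularization parameter.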

Read the full article at: link.aps.org

Self-Organizing Graph Reasoning Evolves into a Critical State for Continuous Discovery Through Structural-Semantic Dynamics

Markus J. Buehler

We report fundamental insights into how agentic graph reasoning systems spontaneously evolve toward a critical state that sustains continuous semantic discovery. By rigorously analyzing structural (Von Neumann graph entropy) and semantic (embedding) entropy, we identify a subtle yet robust regime in which semantic entropy persistently dominates over structural entropy. This interplay is quantified by a dimensionless Critical Discovery Parameter that stabilizes at a small negative value, indicating a consistent excess of semantic entropy. Empirically, we observe a stable fraction (12%) of “surprising” edges (links between semantically distant concepts), providing evidence of long-range or cross-domain connections that drive continuous innovation. Concomitantly, the system exhibits scale-free and small-world topological features, alongside a negative cross-correlation between structural and semantic measures, reinforcing the analogy to self-organized criticality. These results establish clear parallels with critical phenomena in physical, biological, and cognitive complex systems, revealing an entropy-based principle governing adaptability and continuous innovation. Crucially, semantic richness emerges as the underlying driver of sustained exploration, despite not being explicitly used by the reasoning process. Our findings provide interdisciplinary insights and practical strategies for engineering intelligent systems with intrinsic capacities for long-term discovery and adaptation, and suggest how model training strategies can be developed to reinforce critical discovery.
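The structural measure named here, Von Neumann graph entropy, treats the rescaled graph Laplacian ρ = L / Tr L as a density matrix and computes S = −Σᵢ λᵢ ln λᵢ over its eigenvalues (in nats here; the paper's conventions may differ). A stdlib-only sketch, using a generic Jacobi eigensolver that is not specific to the paper:

```python
import math

def laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    return L

def sym_eigenvalues(A, sweeps=100):
    """Eigenvalues of a small symmetric matrix via cyclic Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-12:
                    continue
                # Rotation angle that zeroes the off-diagonal entry (p, q).
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):  # update rows p and q
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k] = c * apk - s * aqk
                    A[q][k] = s * apk + c * aqk
                for k in range(n):  # update columns p and q
                    akp, akq = A[k][p], A[k][q]
                    A[k][p] = c * akp - s * akq
                    A[k][q] = s * akp + c * akq
    return [A[i][i] for i in range(n)]

def von_neumann_entropy(n, edges):
    """S = -sum(lam * ln(lam)) over the spectrum of rho = L / Tr(L)."""
    L = laplacian(n, edges)
    trace = sum(L[i][i] for i in range(n))
    lams = [ev / trace for ev in sym_eigenvalues(L)]
    return -sum(l * math.log(l) for l in lams if l > 1e-12)

# Triangle graph: Laplacian spectrum {0, 3, 3}, so rho has {0, 1/2, 1/2}
# and the entropy is ln 2.
S = von_neumann_entropy(3, [(0, 1), (1, 2), (0, 2)])
```

Tracking this quantity as a reasoning graph grows, alongside an embedding-based semantic entropy, is the kind of structural–semantic comparison the paper builds its Critical Discovery Parameter from.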

Read the full article at: arxiv.org