Author: cxdig

Disentangling Boltzmann Brains, the Time-Asymmetry of Memory, and the Second Law

David Wolpert, Carlo Rovelli, and Jordan Scharnhorst

Entropy 2025, 27(12), 1227

Are your perceptions, memories and observations merely a statistical fluctuation arising from the thermal equilibrium of the universe, bearing no correlation to the actual past state of the universe? Arguments are given in the literature for and against this “Boltzmann brain” hypothesis. Complicating these arguments have been the many subtle—and very often implicit—joint dependencies among these arguments and others that have been given for the past hypothesis, the second law, and even for Bayesian inference of the reliability of experimental data. These dependencies can easily lead to circular reasoning. To avoid this problem, since all of these arguments involve the stochastic properties of the dynamics of the universe’s entropy, we begin by formalizing that dynamics as a time-symmetric, time-translation invariant Markov process, which we call the entropy conjecture. Crucially, like all stochastic processes, the entropy conjecture does not specify any time(s) on which it should be conditioned in order to infer the stochastic dynamics of our universe’s entropy. Any such choice of conditioning times and associated entropy values must be introduced as an independent assumption. This observation allows us to disentangle the standard Boltzmann brain hypothesis, its “1000CE” variant, the past hypothesis, the second law, and the reliability of our experimental data, all in a fully formal manner. In particular, we show that these all make an arbitrary assumption that the dynamics of the universe’s entropy should be conditioned on a single event at a single moment in time, differing only in the details of their assumptions. In this aspect, the Boltzmann brain hypothesis and the second law are equally legitimate (or not).
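
The abstract's central point can be made concrete with a minimal sketch (our own toy construction, not the paper's formalism): a reversible, time-homogeneous Markov chain whose stationary distribution favors high "entropy." Conditioning on a single low-entropy value at one time t0 makes entropy rise on both sides of t0, so the apparent arrow follows from the conditioning choice, not from the dynamics.

```python
import random

N = 50  # "entropy" takes integer values 0..N

def step(s):
    # Metropolis move targeting pi(s) proportional to 2**s. Detailed balance
    # holds, so the stationary process is statistically time-symmetric.
    proposal = s + random.choice((-1, 1))
    if not 0 <= proposal <= N:
        return s                  # reject moves off the state space
    if proposal > s:
        return proposal           # uphill in entropy: always accepted
    return proposal if random.random() < 0.5 else s  # downhill: pi-ratio = 1/2

def run(s0, steps=200):
    path = [s0]
    for _ in range(steps):
        path.append(step(path[-1]))
    return path

# Condition on a single low-entropy event at one time t0: S(t0) = 5. For a
# reversible stationary chain, the law of the past given S(t0) equals the law
# of the future given S(t0), so an independent forward run stands in for the
# backward branch.
forward, backward = run(5), run(5)
print("mean S well after t0: ", sum(forward[-50:]) / 50)
print("mean S well before t0:", sum(backward[-50:]) / 50)
# Both means approach N: entropy "increases" away from t0 in both time directions.
```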

Read the full article at: www.mdpi.com

Representation in science and trust in scientists in the USA 

James N. Druckman, Katherine Ognyanova, Alauna Safarpour, Jonathan Schulman, Kristin Lunz Trujillo, Ata Aydin Uslu, Jon Green, Matthew A. Baum, Alexi Quintana-Mathé, Hong Qu, Roy H. Perlis & David M. J. Lazer
Nature Human Behaviour (2025)

Scientists provide important information to the public. Whether that information influences decision-making depends on trust. In the USA, gaps in trust in scientists have been stable for 50 years: women, Black people, rural residents, religious people, less educated people and people with lower economic status express less trust than their counterparts (who are more represented among scientists). Here we probe the factors that influence trust. We find that members of the less trusting groups exhibit greater trust in scientists who share their characteristics (for example, women trust women scientists more than men scientists). They view such scientists as having more benevolence and, in most cases, more integrity. In contrast, those from high-trusting groups appear mostly indifferent about scientists’ characteristics. Our results highlight how increasing the presence of underrepresented groups among scientists can increase trust. This means expanding representation across several divides—not just gender and race/ethnicity but also rurality and economic status.

Read the full article at: www.nature.com

Early warning signals for loss of control

Jasper J. van Beers, Marten Scheffer, Prashant Solanki, Ingrid A. van de Leemput, Egbert H. van Nes, Coen C. de Visser

Maintaining stability in feedback systems, from aircraft and autonomous robots to biological and physiological systems, relies on monitoring their behavior and continuously adjusting their inputs. Incremental damage can make such control fragile. This tends to go unnoticed until a small perturbation induces instability (i.e., loss of control). Traditional methods in the field of engineering rely on accurate system models to compute a safe set of operating instructions, which become invalid when the (possibly damaged) system diverges from its model. Here we demonstrate that the approach of such a feedback system towards instability can nonetheless be monitored through dynamical indicators of resilience. This holistic system safety monitor does not rely on a system model and is based on the generic phenomenon of critical slowing down, shown to occur in the climate, biology and other complex nonlinear systems approaching criticality. Our findings for engineered devices open up a wide range of applications involving real-time early warning systems as well as empirical guidance for resilient system design exploration, or “tinkering”. While we demonstrate the validity using drones, the generic nature of the underlying principles suggests that these indicators could apply across a wider class of controlled systems including reactors, aircraft, and self-driving cars.
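
The generic indicators behind such a monitor can be illustrated with a short sketch (a toy example of ours, not the authors' drone experiments): in a system whose feedback weakens over time, rolling variance and lag-1 autocorrelation of the measured output both rise as instability approaches, the signature of critical slowing down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy AR(1) system x[t+1] = a(t)*x[t] + noise. The recovery rate weakens as
# a(t) -> 1, mimicking incremental damage eroding the margin of stability.
T = 4000
a = np.linspace(0.5, 0.99, T)   # assumed gradual degradation profile
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a[t] * x[t] + 0.1 * rng.standard_normal()

def rolling_indicators(x, window=400):
    """Rolling variance and lag-1 autocorrelation over sliding windows."""
    var, ac1 = [], []
    for i in range(len(x) - window):
        w = x[i:i + window] - x[i:i + window].mean()
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

var, ac1 = rolling_indicators(x)
print(f"variance: early {var[:200].mean():.4f} -> late {var[-200:].mean():.4f}")
print(f"lag-1 AC: early {ac1[:200].mean():.3f} -> late {ac1[-200:].mean():.3f}")
# Both indicators rise well before a(t) reaches 1: a model-free early warning.
```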

Read the full article at: arxiv.org

Hierarchical analysis of spreading dynamics in complex systems

Aparimit Kasliwal, Abdullah Alhadlaq, Ariel Salgado, Auroop R. Ganguly, Marta C. González

Computer-Aided Civil and Infrastructure Engineering

Volume 40, Issue 31, 29 December 2025, Pages 6223-6241

Modeling spreading dynamics on spatial networks is crucial to addressing challenges related to traffic congestion, epidemic outbreaks, efficient information dissemination, and technology adoption. Existing approaches include domain-specific agent-based simulations, which offer detailed dynamics but often involve extensive parameterization, and simplified differential equation models, which provide analytical tractability but may abstract away spatial heterogeneity in propagation patterns. As a step toward addressing this trade-off, this work presents a hierarchical multiscale framework that approximates spreading dynamics across different spatial scales under certain simplifying assumptions. Applied to the Susceptible-Infected-Recovered (SIR) model, the approach ensures consistency in dynamics across scales through multiscale regularization, linking parameters at finer scales to those obtained at coarser scales. This approach constrains the parameter search space, and enables faster convergence of the model fitting process compared to the non-regularized model. Using hierarchical modeling, the spatial dependencies critical for understanding system-level behavior are captured while mitigating the computational challenges posed by parameter proliferation at finer scales. Considering traffic congestion and COVID-19 spread as case studies, the calibrated fine-scale model is employed to analyze the effects of perturbations and to identify critical regions and connections that disproportionately influence system dynamics. This facilitates targeted intervention strategies and provides a tool for studying and managing spreading processes in spatially distributed sociotechnical systems.
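
A minimal sketch of the multiscale-regularization idea (an illustrative loss of our own devising, not the paper's exact formulation): per-region SIR transmission rates are fitted to local data while a penalty ties them to the rate beta_coarse already estimated at the coarser scale, which constrains the fine-scale parameter search.

```python
import numpy as np

def simulate_incidence(beta, gamma=0.1, steps=60, S0=0.99, I0=0.01, dt=1.0):
    """Forward-Euler SIR for one region; returns the infected fraction I(t)."""
    S, I, R, out = S0, I0, 0.0, []
    for _ in range(steps):
        new_inf, new_rec = beta * S * I, gamma * I
        S, I, R = S - dt * new_inf, I + dt * (new_inf - new_rec), R + dt * new_rec
        out.append(I)
    return np.array(out)

# Hypothetical setup: a coarse-scale fit gave beta_coarse = 0.30; three regions
# have their own (here synthetic) incidence curves.
beta_coarse, lam = 0.30, 5.0
data = [simulate_incidence(b) for b in (0.25, 0.30, 0.35)]

# The regularized objective is separable across regions, so each fine-scale
# beta_i can be fitted independently; a crude grid search suffices here.
grid = np.linspace(0.05, 0.60, 112)
betas = []
for d in data:
    loss = [np.mean((simulate_incidence(b) - d) ** 2) + lam * (b - beta_coarse) ** 2
            for b in grid]
    betas.append(round(float(grid[int(np.argmin(loss))]), 3))
print("fine-scale betas, shrunk toward the coarse fit:", betas)
```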

Read the full article at: onlinelibrary.wiley.com

The Physics of Causation

Leroy Cronin, Sara I. Walker

Assembly theory (AT) introduces a concept of causation as a material property, constitutive of a metrology of evolution and selection. The physical scale for causation is quantified with the assembly index, defined as the minimum number of steps necessary for a distinguishable object to exist, where steps are assembled recursively. Observing countable copies of high assembly index objects indicates that a mechanism to produce them is persistent, such that the object’s environment builds a memory that traps causation within a contingent chain. Copy number and assembly index underlie the standardized metrology for detecting causation (assembly index) and evidence of contingency (copy number). Together, these allow the precise definition of a selective threshold in assembly space, understood as the set of all causal possibilities. This threshold demarcates life (and its derivative agential, intelligent and technological forms) as structures with persistent copies beyond the threshold. In introducing a fundamental concept of material causation to explain and measure life, AT represents a departure from prior theories of causation, such as interventional ones, which have so far proven incompatible with fundamental physics. We discuss how AT’s concept of causation provides the foundation for a theory of physics where novelty, contingency and the potential for open-endedness are fundamental, and determinism is emergent along assembled lineages.
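
The central quantity can be made concrete with a toy brute-force computation (our illustration; in AT the index is defined for physical objects such as molecules): for short strings, the assembly index is the minimum number of join steps needed to build the target, where anything already built can be reused.

```python
from itertools import product

def assembly_index(target):
    """Breadth-first search for the shortest assembly pathway of a string.

    Each pool holds everything built so far; each search depth spends one join.
    We prune to intermediates that occur inside the target, since for string
    concatenation an optimal pathway never builds anything else.
    """
    frontier = {frozenset(target)}   # depth 0: the basic building blocks
    depth = 0
    while frontier:
        nxt = set()
        for pool in frontier:
            if target in pool:
                return depth
            for x, y in product(pool, repeat=2):
                joined = x + y
                if joined in target and joined not in pool:
                    nxt.add(pool | {joined})
        frontier, depth = nxt, depth + 1

# Reuse is what keeps the index low: "abab" is built once, then reused.
print(assembly_index("ababab"))    # 3 joins: ab, abab, ababab (naive: 5)
print(assembly_index("aaaaaaaa"))  # 3 joins by doubling: aa, aaaa, aaaaaaaa (naive: 7)
```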

Read the full article at: arxiv.org