Month: October 2025

Egosyntonicity and emotion regulation: a probabilistic model of valence dynamics

Eleonora Vitanza, Chiara Mocenni and Pietro De Lellis

In this paper, we introduce a novel Markovian model that describes the impact of egosyntonicity on emotion dynamics. We focus on the dominant current emotion and describe the time evolution of its valence, modelled as a binary variable, where 0 and 1 correspond to negative and positive valences, respectively. In particular, the one-step transition probabilities depend on the external events happening in daily life, the attention the individual devotes to such events, and egosyntonicity, modelled as the agreement between the current valence and the internal mood of the individual. A steady-state analysis shows that, depending on the model parameters, four classes of individuals can be identified. Two classes are somewhat expected: individuals who spend more (less) time in egosyntonicity also experience positive valences for longer (shorter) times. Surprisingly, two further classes emerge: the self-deluded individuals, where egosyntonicity is associated with a prevalence of negative valences, and the troubled happy individuals, where egodystonicity is associated with positive valences. These findings align with the literature showing that, even if egosyntonicity typically has a positive impact in the short term, it may not always be beneficial in the long run.

Read the full article at: royalsocietypublishing.org
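
As a rough illustration of the kind of model the abstract describes, the following is a minimal sketch of a two-state valence Markov chain with a steady-state computation. The parameterisation of the transition probabilities, and the names p_event_pos, attention, mood and ego_weight, are hypothetical simplifications introduced here for illustration; they are not the transition probabilities defined in the paper.

    # A minimal sketch, assuming a hypothetical parameterisation of the
    # two-state valence chain; not the model defined in the article.
    import numpy as np

    def transition_matrix(p_event_pos, attention, mood, ego_weight):
        # Rows/columns index the valence states 0 (negative) and 1 (positive).
        P = np.zeros((2, 2))
        for v in (0, 1):
            # Baseline inertia: attended external events reinforce the current
            # valence with probability p_event_pos (positive) or 1 - p_event_pos.
            p_stay = attention * (p_event_pos if v == 1 else 1 - p_event_pos) \
                     + (1 - attention) * 0.5
            if v == mood:
                # Egosyntonic state: current valence agrees with the internal
                # mood, so add an assumed extra pull towards staying.
                p_stay = (1 - ego_weight) * p_stay + ego_weight
            P[v, v] = p_stay
            P[v, 1 - v] = 1 - p_stay
        return P

    def stationary_distribution(P):
        # Left eigenvector of P for eigenvalue 1, normalised to sum to one.
        eigvals, eigvecs = np.linalg.eig(P.T)
        pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
        return pi / pi.sum()

    P = transition_matrix(p_event_pos=0.6, attention=0.7, mood=1, ego_weight=0.3)
    print("long-run share of positive valence:", stationary_distribution(P)[1])

In this toy version, the steady-state analysis reduces to computing the stationary distribution of a 2x2 stochastic matrix; the classes of individuals described in the abstract would correspond to different regions of the parameter space.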

Quantifying Human-AI Synergy

Christoph Riedl, Ben Weidmann

We introduce a novel Bayesian Item Response Theory framework to quantify human-AI synergy, separating individual and collaborative ability while controlling for task difficulty in interactive settings. Unlike standard static benchmarks, our approach models human-AI performance as a joint process, capturing both user-specific factors and moment-to-moment fluctuations. We validate the framework by applying it to human-AI benchmark data (n=667) and find significant synergy. We demonstrate that collaboration ability is distinct from individual problem-solving ability: users better able to infer and adapt to others' perspectives achieve superior collaborative performance with AI, but not when working alone. Moreover, moment-to-moment fluctuations in perspective taking influence AI response quality, highlighting the role of dynamic user factors in collaboration. By introducing a principled framework to analyze data from human-AI collaboration, we enable interactive benchmarks to better complement current single-task benchmarks and crowd-assessment methods. This work informs the design and training of language models that transcend static prompt benchmarks to achieve adaptive, socially aware collaboration with diverse and dynamic human partners.

https://osf.io/preprints/psyarxiv/vbkmt_v1
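
For readers less familiar with Item Response Theory, the sketch below shows an assumed Rasch-style simplification of the idea: individual ability and a separate collaborative term enter a logistic success probability alongside item difficulty. The names (theta, gamma, p_correct) and the simulated data are illustrative only; the paper's Bayesian framework, with its hierarchical structure and dynamic moment-to-moment user factors, is not reproduced here.

    # Illustrative sketch: separating individual ability from a collaborative
    # term in a Rasch-like logistic model; an assumed simplification, not the
    # framework from the preprint.
    import numpy as np

    def p_correct(theta, b, gamma=0.0, with_ai=False):
        # theta : individual problem-solving ability
        # gamma : collaborative ability, active only when working with the AI
        # b     : item (task) difficulty
        logit = theta - b + (gamma if with_ai else 0.0)
        return 1.0 / (1.0 + np.exp(-logit))

    rng = np.random.default_rng(0)
    n_users, n_items = 200, 20
    theta = rng.normal(0.0, 1.0, n_users)   # individual ability
    gamma = rng.normal(0.5, 0.5, n_users)   # collaborative ability (synergy term)
    b = rng.normal(0.0, 1.0, n_items)       # item difficulties

    # Simulate solo vs human-AI outcomes and compare average accuracy.
    solo = rng.random((n_users, n_items)) < p_correct(theta[:, None], b[None, :])
    team = rng.random((n_users, n_items)) < p_correct(theta[:, None], b[None, :],
                                                      gamma[:, None], with_ai=True)
    print("solo accuracy:", solo.mean(), "with AI:", team.mean())

The point of the separation is that two users with the same theta can differ in gamma, so fitting both terms (rather than a single ability) is what lets such a model distinguish collaboration skill from individual problem-solving skill.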