February 2017

Improving election prediction internationally

This study reports the results of a multiyear program to predict direct executive elections in a variety of countries from globally pooled data. We developed prediction models using an election data set covering 86 countries and more than 500 elections, and a separate data set with extensive polling data from 146 election rounds. We also participated in two live forecasting experiments. Our models correctly predicted 80 to 90% of elections in out-of-sample tests. The results suggest that global elections can be successfully modeled and that they are likely to become more predictable as more information becomes available in future elections. The results provide strong evidence for the impact of political institutions and incumbent advantage, and they support contentions about the importance of international linkage and aid. Direct evidence for economic indicators as predictors of election outcomes is relatively weak. The results suggest that, with some adjustments, global polling is a robust predictor of election outcomes, even in developing states. Implications of these findings in light of the latest U.S. presidential election are discussed.
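The abstract's headline figure comes from out-of-sample tests, in which a model is scored only on elections it was not fitted to. Below is a minimal sketch of that kind of evaluation, assuming a generic logistic-regression classifier; the feature set (incumbency, polling lead, GDP growth) and the synthetic data are hypothetical stand-ins, not the authors' actual variables, data, or model.

```python
# Hedged sketch: out-of-sample evaluation of an election-outcome classifier.
# Features and data below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # roughly the scale of the pooled election data set

# Hypothetical predictors: incumbent running (0/1), incumbent polling lead
# (percentage points), election-year GDP growth (%).
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.normal(0.0, 10.0, n),
    rng.normal(2.0, 3.0, n),
])
# Hypothetical outcome: the incumbent (or incumbent-party) candidate wins.
latent = 0.8 * X[:, 0] + 0.15 * X[:, 1] + 0.1 * X[:, 2] + rng.logistic(0, 1, n)
y = (latent > 0).astype(int)

# Fit on one subset of elections, then score on elections the model never saw:
# this held-out accuracy is what "out-of-sample" refers to in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
print("out-of-sample accuracy:", accuracy_score(y_test, model.predict(X_test)))
```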


Improving election prediction internationally
Ryan Kennedy, Stefan Wojcik, David Lazer

Science, 03 Feb 2017: Vol. 355, Issue 6324, pp. 515-520. DOI: 10.1126/science.aal2887

Source: science.sciencemag.org

Sequence Memory Constraints Give Rise to Language-Like Structure through Iterated Learning

Human language is composed of sequences of reusable elements. The origins of the sequential structure of language are a hotly debated topic in evolutionary linguistics. In this paper, we show that sets of sequences with language-like statistical properties can emerge from a process of cultural evolution under pressure from chunk-based memory constraints. We employ a novel experimental task, non-linguistic and non-communicative in nature, in which participants are trained on and later asked to recall a set of sequences one by one. Recalled sequences from one participant become the training data for the next participant. In this way, we simulate cultural evolution in the laboratory. Our results show a cumulative increase in structure, and by comparing this structure to data from existing linguistic corpora, we demonstrate a close parallel between the sets of sequences that emerge in our experiment and those seen in natural language.
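The transmission-chain design described above, in which one participant's recalled sequences become the next participant's training set, can be summarized as a simple loop. The sketch below simulates such a chain under a toy chunk-based recall rule; the alphabet, sequence lengths, inventory size, and recall model are illustrative assumptions, not the paper's experimental procedure or analysis.

```python
# Hedged sketch: iterated learning in a transmission chain, with a toy
# chunk-based memory constraint. All parameters here are illustrative.
import random
from collections import Counter

ALPHABET = "abcd"
random.seed(1)

def noisy_recall(sequences, chunk_size=2, inventory_size=6):
    """Recall each sequence by reassembling it from a small inventory of the
    most frequent chunks, substituting a familiar chunk when memory fails."""
    chunk_counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - chunk_size + 1):
            chunk_counts[seq[i:i + chunk_size]] += 1
    inventory = [chunk for chunk, _ in chunk_counts.most_common(inventory_size)]
    recalled = []
    for seq in sequences:
        out = ""
        while len(out) < len(seq):
            target = seq[len(out):len(out) + chunk_size]
            # Veridical recall if the chunk is in memory, otherwise substitute
            # a well-known chunk (the source of cumulative regularization).
            out += target if target in inventory else random.choice(inventory)
        recalled.append(out[:len(seq)])
    return recalled

# Generation 0: random sequences. Each generation's recalled output becomes
# the next "participant's" training data, simulating cultural transmission.
data = ["".join(random.choices(ALPHABET, k=8)) for _ in range(10)]
for generation in range(10):
    data = noisy_recall(data)

distinct_bigrams = {s[i:i + 2] for s in data for i in range(len(s) - 1)}
print("final-generation sequences:", data)
print("distinct bigrams in use:", len(distinct_bigrams))
```

Over successive generations the recall inventory converges on a small set of chunks, so later sequences are increasingly built from reused subsequences, a toy analogue of the cumulative increase in structure reported in the paper.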


Cornish, H., Dale, R., Kirby, S. & Christiansen, M.H. (2017). Sequence memory constraints give rise to language-like structure through iterated learning. PLoS ONE 12(1): e0168532.

Source: journals.plos.org