The booming field of artificial intelligence (AI) is grappling with a replication crisis, much like the ones that have afflicted psychology, medicine, and other fields over the past decade. Just because algorithms are based on code doesn’t mean experiments are easily replicated. Far from it. Unpublished code and sensitivity to training conditions have made it difficult for AI researchers to reproduce many key results. That is leading to a new conscientiousness about research methods and publication protocols. Last week, at a meeting of the Association for the Advancement of Artificial Intelligence in New Orleans, Louisiana, reproducibility was on the agenda, with some teams diagnosing the problem—and one laying out tools to mitigate it.
Artificial intelligence faces reproducibility crisis
Science 16 Feb 2018:
Vol. 359, Issue 6377, pp. 725-726
In this episode, Angie interviews Jean Boulton, author of Embracing Complexity: Strategic Perspectives for an Age of Turbulence, who is also an academic and management consultant specializing in complexity theory. Boulton talks with us about many different concepts, including how complexity thinking compares to systems thinking, change management, organizational strategy, complexity as a worldview, and even how this field is shining a light on climate change.
We introduce the Xpuck swarm, a research platform with an aggregate raw processing power in excess of two teraflops. The swarm uses 16 e-puck robots augmented with custom hardware that uses the substantial CPU and GPU processing power available from modern mobile system-on-chip devices. The augmented robots, called Xpucks, have at least an order of magnitude greater performance than previous swarm robotics platforms. The platform enables new experiments that require high individual robot computation and multiple robots. Uses include online evolution or learning of swarm controllers, simulation for answering what-if questions about possible actions, distributed super-computing for mobile platforms, and real-world applications of swarm robotics that require image processing or SLAM. The teraflop swarm could also be used to explore swarming in nature by providing platforms with computational power similar to that of simple insects. We demonstrate the computational capability of the swarm by implementing a fast physics-based robot simulator and using this within a distributed island model evolutionary system, all hosted on the Xpucks.
A Two Teraflop Swarm
Simon Jones, Matthew Studley, Sabine Hauert, and Alan Winfield
Front. Robot. AI, 19 February 2018 | https://doi.org/10.3389/frobt.2018.00011
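The abstract above mentions a distributed island-model evolutionary system hosted on the Xpucks. The paper's actual system evolves swarm controllers using an on-board physics simulator; as a hedged illustration of the island-model idea only, here is a minimal generic sketch (the function names, the toy fitness objective, and all parameter values are illustrative assumptions, not taken from the paper):

```python
import random

def evolve_islands(n_islands=4, pop_size=20, genome_len=8,
                   generations=100, migrate_every=10, seed=0):
    """Toy island-model EA: each island evolves independently and
    periodically passes its champion to the next island in a ring."""
    rng = random.Random(seed)

    def fitness(g):
        # Illustrative objective: maximise -sum(x^2); optimum is the zero vector.
        return -sum(x * x for x in g)

    def mutate(g):
        return [x + rng.gauss(0, 0.1) for x in g]

    islands = [[[rng.uniform(-5, 5) for _ in range(genome_len)]
                for _ in range(pop_size)]
               for _ in range(n_islands)]

    for gen in range(1, generations + 1):
        for pop in islands:
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 2]  # truncation selection: keep the best half
            pop[:] = elite + [mutate(rng.choice(elite))
                              for _ in range(pop_size - len(elite))]
        if gen % migrate_every == 0:
            # Ring migration: each island injects its neighbour's champion.
            champs = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop[-1] = champs[i - 1]

    return max((max(pop, key=fitness) for pop in islands), key=fitness)

best = evolve_islands()
```

In a physical deployment like the Xpuck swarm, each island would run on a separate robot and migration would happen over the wireless network; here everything runs in one process for clarity.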
Most models of product adoption predict S-shaped adoption curves. Here we report results from two country-scale experiments in which we find linear adoption curves. We show evidence that the observed linear pattern is the result of active information-seeking behaviour: individuals actively pulling information from several central sources facilitated by modern Internet searches. Thus, a constant baseline rate of interest sustains product diffusion, resulting in a linear diffusion process instead of the S-shaped curve of adoption predicted by many diffusion models. The main experiment seeded 70 000 (48 000 in Experiment 2) unique voucher codes for the same product with randomly sampled nodes in a social network of approximately 43 million individuals with about 567 million ties. We find that the experiment reached over 800 000 individuals, with 80% of adopters adopting the same product—a winner-take-all dynamic consistent with search-engine-driven rankings that would not have emerged had the products spread only through a network of social contacts. We provide evidence for (and characterization of) this diffusion process driven by active information-seeking behaviour through analyses investigating (a) patterns of geographical spreading; (b) the branching process; and (c) diffusion heterogeneity. Using data on adopters’ geolocation, we show that social spreading is highly localized, while on-demand diffusion is geographically independent. We also show that cascades started by individuals who actively pull information from central sources are more effective at spreading the product among their peers.
Product diffusion through on-demand information-seeking behaviour
Christoph Riedl, Johannes Bjelland, Geoffrey Canright, Asif Iqbal, Kenth Engø-Monsen, Taimur Qureshi, Pål Roe Sundsøy, David Lazer
Published 21 February 2018. DOI: 10.1098/rsif.2017.0751
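The contrast the abstract draws between contact-driven diffusion (S-shaped) and a constant baseline rate of active information-seeking (linear) can be illustrated with a toy deterministic simulation. This is a sketch of the two mechanisms only; the model structure and all parameter values are illustrative assumptions, not taken from the study:

```python
def simulate(n=10_000, steps=60, mode="social",
             p_pull=0.002, q_contact=0.3):
    """Toy diffusion model. 'social': adopters convert susceptible
    contacts, giving logistic (S-shaped) growth. 'on_demand': a constant
    fraction of the population independently pulls information from
    central sources each step, giving linear growth."""
    adopted = 1.0  # one seeded adopter
    curve = [adopted]
    for _ in range(steps):
        if mode == "social":
            # New adopters proportional to adopter-susceptible contacts.
            new = q_contact * adopted * (n - adopted) / n
        else:
            # Constant baseline rate of interest, independent of adopters.
            new = p_pull * n
        adopted = min(float(n), adopted + new)
        curve.append(adopted)
    return curve

social = simulate(mode="social")
on_demand = simulate(mode="on_demand")
```

Plotting the two curves shows the familiar S-shape for the contact model (per-step adoption rises, peaks at half-saturation, then falls) versus a straight line for the on-demand model, matching the qualitative distinction the experiments test.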
The workshop Complexity72h is an interdisciplinary event whose aim is to bring together young researchers from different fields of complex systems.
Inspired by the 72 Hours of Science, participants will form working groups to carry out a project within three days, i.e. 72 hours. Each group’s goal is to upload a report of its work to the arXiv by the end of the event.
A team of tutors will propose the projects and will assist and guide each group in developing its project.
Alongside the teamwork, participants will attend lectures by scientists from different fields of complex systems, as well as applied workshops.