Month: December 2019

The unmapped chemical complexity of our diet

Albert-László Barabási, Giulia Menichetti & Joseph Loscalzo 
Nature Food (2019)


Our understanding of how diet affects health is limited to 150 key nutritional components that are tracked and catalogued by the United States Department of Agriculture and other national databases. Although this knowledge has been transformative for health sciences, helping unveil the role of calories, sugar, fat, vitamins and other nutritional factors in the emergence of common diseases, these nutritional components represent only a small fraction of the more than 26,000 distinct, definable biochemicals present in our food—many of which have documented effects on health but remain unquantified in any systematic fashion across different individual foods. Using new advances such as machine learning, a high-resolution library of these biochemicals could enable the systematic study of the full biochemical spectrum of our diets, opening new avenues for understanding the composition of what we eat, and how it affects health and disease.


Editorial: Novel Technological and Methodological Tools for the Understanding of Collective Behaviors

Elio Tuci, Vito Trianni, Andrew King and Simon Garnier

Front. Robot. AI, 10 December 2019


The social processes that give rise to coordinated actions of a group of agents and the emergence of global structures—referred to as collective behaviors—are observed in a range of biological and artificial systems. Collective behavior research, therefore, focuses upon a range of different phenomena with the common goal of understanding the dynamics of emergent group level responses, and has resulted in a burgeoning, diverse, and interdisciplinary research community.

Studying collective behaviors in biological and artificial systems is particularly challenging because of their intrinsic complexity, requiring novel approaches that can help unravel these systems in order to explain how and why certain patterns are produced and maintained. This Research Topic brings together a collection of studies that focus on technological and methodological tools that can support the understanding of collective behaviors. The contributions included within the Research Topic can be broadly categorized as: (i) Review Articles, (ii) Tools and Technologies, and (iii) Empirical Studies.

Our goal is to facilitate the dissemination of ideas, theories, and methods among scientists who share an interest in the study of collective behavior in all its diverse manifestations. It is our hope that, together, this Research Topic and its contributions may afford a more complete understanding of the nature of proximate and ultimate causes of collective behaviors in biological systems, and provide the opportunity to generate a theoretical framework to engineer robust, resilient, and effective technologies, such as multi-robot systems, smart grids, and sensor networks.


On cycling risk and discomfort: urban safety mapping and bike route recommendations

David Castells-Graells, Christopher Salahub, Evangelos Pournaras



Bike usage in Smart Cities is paramount for sustainable urban development: cycling promotes healthier lifestyles, lowers energy consumption, lowers carbon emissions, and reduces urban traffic. However, the expansion and increased use of bike infrastructure has been accompanied by a surge in bike accidents, a trend jeopardizing the urban bike movement. This paper leverages data from a diverse spectrum of sources to characterise geolocated bike accident severity and, ultimately, study cycling risk and discomfort. Kernel density estimation generates a continuous, empirical, spatial risk estimate, which is mapped in a case study of the city of Zürich. The roles of weather, time, accident type, and severity are illustrated. A predominance of self-caused accidents motivates an open-source software artifact for personalized route recommendations. This software is used to collect open baseline route data that are compared with alternative routes minimizing risk and discomfort. These contributions have the potential to provide invaluable infrastructure improvement insights to urban planners, and may also improve the awareness of risk in the urban environment among experienced and novice cyclists alike.
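The core mapping step, turning discrete geolocated accident records into a continuous spatial risk surface via kernel density estimation, can be sketched as follows. This is a minimal illustration using synthetic coordinates near Zürich and SciPy's Gaussian KDE; it is not the paper's actual pipeline, data, or bandwidth choice.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic accident locations (lon, lat), clustered near the Zürich
# city centre -- purely illustrative, not real accident data.
rng = np.random.default_rng(0)
accidents = rng.normal(loc=[[8.54], [47.37]], scale=0.01, size=(2, 200))

# Fit a 2-D Gaussian kernel density estimate over the accident points,
# yielding a continuous, empirical spatial risk estimate.
kde = gaussian_kde(accidents)

# Evaluate the risk surface on a grid for mapping.
lon = np.linspace(8.51, 8.57, 50)
lat = np.linspace(47.34, 47.40, 50)
grid = np.vstack([g.ravel() for g in np.meshgrid(lon, lat)])
risk = kde(grid).reshape(50, 50)
```

The resulting `risk` array can be rendered as a heat map over the street network; denser accident clusters produce higher estimated risk.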


Helping machines to perceive laws of physics by themselves

ADEPT, an artificial intelligence model developed by MIT researchers, demonstrates an understanding of some basic “intuitive physics” by registering a surprise signal when objects in a scene violate assumed reality, similarly to how human infants and adults would register surprise.


We often think of artificial intelligence as a tool for automating certain tasks. But it turns out that the technology could also help give us a better understanding of ourselves. At least that’s what a team of researchers at the Massachusetts Institute of Technology (MIT) think they’ll be able to do with their new AI model.


Dubbed ADEPT, the system is able to, like a human being, understand some laws of physics intuitively. It can look at an object in a video, predict how it should act based on what it knows of the laws of physics, and then register surprise if what it was looking at subsequently vanishes or teleports. The team behind ADEPT say their model will allow other researchers to create smarter AIs in the future, as well as give us a better understanding of how infants understand the world around them.


"By the time infants are three months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport," said Kevin A. Smith, one of the researchers that created ADEPT. "We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes."


ADEPT depends on two modules to do what it does. The first examines an object, determining its shape, pose and velocity. What’s interesting about this module is that it doesn’t get caught up in details. It only looks at the approximate geometry of something, rather than analyzing every facet of it, before it moves on to the next step. This was by design, according to the ADEPT team; it allows the system to predict the movement of a variety of different objects, not just ones it was trained to understand. Moreover, it’s an aspect of the system’s design that makes it similar to infants. It turns out that, like ADEPT, children don’t care much about the specific physical properties of something when they’re thinking about how it may move.


The second module is a physics system. It shares similarities with the software video game developers employ to replicate real-world physics in their games. It takes the data captured by the graphics module and simulates how an object should act based on the laws of physics. Once it has a couple of predicted outcomes, it compares those against the next frames of the video. If it notices a discrepancy between what it thought would happen and what actually occurred, it sends out a signal. The stronger the signal, the more surprised it was by what just happened. What’s interesting about ADEPT is that its level of surprise matched that of humans who were shown the same set of videos.
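The comparison step described above can be sketched in a toy form: treat surprise as how unlikely the observed object position is under a Gaussian centred on the physics engine's prediction. This is an illustrative simplification of the idea, not ADEPT's actual (probabilistic, particle-based) implementation; the function name and the fixed noise scale `sigma` are assumptions.

```python
import math

def surprise(predicted, observed, sigma=0.1):
    """Toy surprise signal: negative log-likelihood of the observed
    position under a Gaussian centred on the predicted position.
    The larger the mismatch, the stronger the signal."""
    sq_err = sum((p - o) ** 2 for p, o in zip(predicted, observed))
    return sq_err / (2 * sigma ** 2) + len(predicted) * math.log(
        sigma * math.sqrt(2 * math.pi)
    )

# An object that moves roughly as predicted yields a weak signal ...
low = surprise(predicted=(1.0, 2.0), observed=(1.02, 1.98))
# ... while one that appears to "teleport" yields a strong one.
high = surprise(predicted=(1.0, 2.0), observed=(4.0, -1.0))
```

Frames where this signal spikes are exactly the "implausible" scenes, such as an object vanishing or passing through another, that the article says surprise both the model and human viewers.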


Moving forward, the team says they want to further explore how young children see the world, and incorporate those findings into their model. "We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents," Smith said.