Tag: AI

A Generic Encapsulation to Unravel Social Spreading of a Pandemic: An Underlying Architecture

Saad Alqithami
Computers 2021, 10(1), 12

Cases of the emergent infectious disease caused by a novel coronavirus, COVID-19, have spiked recently, affecting millions of people, and the outbreak has been classified as a global pandemic owing to the wide spread of the virus. Epidemiologically, humans are the target hosts of COVID-19, and its direct and indirect transmission pathways are mitigated by social and spatial distancing. People naturally exist in dynamically cascading networks of social and spatial interactions, and their actions and interactions carry considerable uncertainty with respect to common social contagions that proliferate through these networks daily. Several factors help reduce this uncertainty by shaping how such contagions are understood, including cultures, beliefs, norms, values, and ethics. This work is therefore directed toward investigating and predicting the viral spread of the current wave of COVID-19 based on human socio-behavioral analyses in various community settings with unknown structural patterns. We examine spreading and social contagion in unstructured networks by proposing a model that is able to (1) reorganize and synthesize infected clusters of any networked agents, (2) identify noteworthy members of the population through a series of analyses of their behavioral and cognitive capabilities, (3) predict where the spread is heading and its possible outcomes, and (4) propose applicable intervention tactics that can help in creating strategies to mitigate the spread. Such properties are essential in managing the rate at which viral infections spread. Furthermore, a novel spectra-based methodology that leverages configuration models as reference networks is proposed to quantify spreading in a given candidate network. We derive mathematical formulations to demonstrate viral spread in these network structures.
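
The summary above does not reproduce the paper's formulations, but the general idea of benchmarking a candidate network's spectrum against degree-preserving configuration-model references can be sketched roughly as follows. This is a networkx-based toy, not the authors' method: the function name, the spectral-radius ratio, and the choice of 20 reference graphs are all illustrative assumptions.

```python
import networkx as nx
import numpy as np

def spectral_spreading_score(G, n_reference=20, seed=0):
    """Ratio of the candidate network's spectral radius to the average
    spectral radius of configuration-model references with the same
    degree sequence. The largest adjacency eigenvalue is a standard
    proxy for spreading capacity (epidemic threshold ~ 1/lambda_max)."""
    rng = np.random.default_rng(seed)
    degrees = [d for _, d in G.degree()]

    lam_candidate = max(abs(np.linalg.eigvals(nx.to_numpy_array(G))))

    lam_refs = []
    for _ in range(n_reference):
        R = nx.configuration_model(degrees, seed=int(rng.integers(1_000_000)))
        R = nx.Graph(R)                                   # collapse parallel edges
        R.remove_edges_from(list(nx.selfloop_edges(R)))   # drop self-loops
        lam_refs.append(max(abs(np.linalg.eigvals(nx.to_numpy_array(R)))))

    # A score above 1 suggests the observed structure amplifies spreading
    # relative to a degree-preserving random baseline.
    return lam_candidate / np.mean(lam_refs)

# Example on a synthetic small-world contact network.
G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)
print(round(spectral_spreading_score(G), 3))
```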

Read the full article at: www.mdpi.com

Towards Social Capital in a Network Organization: A Conceptual Model and an Empirical Approach

Saad Alqithami, Rahmat Budiarto, Musaad Alzahrani and Henry Hexmoor

Entropy 2020, 22(5), 519

Due to the complexity of an open multi-agent system, agents' interactions are instantiated spontaneously, resulting in beneficial collaborations that enable joint actions beyond any one agent's current capabilities. Repeated patterns of interaction shape their organizational structure as those agents self-organize toward a long-term objective. This paper therefore aims to provide an understanding of social capital in organizations that are open-membership multi-agent systems, with an emphasis in our formulation on the dynamic network of social interactions that, in part, elucidates evolving structures and impromptu network topologies. We model an open source project as an organizational network and provide definitions and formulations that correlate the proposed mechanism of social capital with the achievement of an organizational charter, for example, optimized productivity. To empirically evaluate our model, we conducted a case study of an open source software project to demonstrate how social capital can be created and measured within this type of organization. The results indicate that higher values of social capital are positively associated with agents' productivity and the successful completion of the project.
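
The abstract does not spell out the measurements here, but as a rough illustration of how structural proxies for social capital might be computed on a project's interaction network, a sketch could look like the following. The graph construction and the choice of metrics are generic networkx measures assumed for illustration, not the paper's formulation.

```python
import networkx as nx

def social_capital_proxies(G):
    """Generic structural proxies sometimes used to approximate social
    capital in an interaction network (illustrative only, not the
    paper's specific formulation)."""
    return {
        "density": nx.density(G),                    # overall connectedness of members
        "avg_clustering": nx.average_clustering(G),  # local cohesion (closed triads)
        "reciprocity": nx.reciprocity(G),            # mutual exchanges between members
    }

# Hypothetical example: a directed "who comments on whose contributions"
# graph for an open source project, here just a random placeholder.
G = nx.gnp_random_graph(50, 0.08, directed=True, seed=42)
print(social_capital_proxies(G))
```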

Source: www.mdpi.com

Helping machines to perceive laws of physics by themselves

ADEPT, an artificial intelligence model developed by MIT researchers, demonstrates an understanding of some basic “intuitive physics” by registering a surprise signal when objects in a scene violate assumed reality, similarly to how human infants and adults would register surprise.

We often think of artificial intelligence as a tool for automating certain tasks. But it turns out that the technology could also help give us a better understanding of ourselves. At least that’s what a team of researchers at the Massachusetts Institute of Technology (MIT) think they’ll be able to do with their new AI model.

Dubbed ADEPT, the system is able, like a human being, to understand some laws of physics intuitively. It can look at an object in a video, predict how the object should behave based on what it knows of the laws of physics, and then register surprise if what it was looking at subsequently vanishes or teleports. The team behind ADEPT says their model will allow other researchers to create smarter AIs in the future, as well as give us a better understanding of how infants make sense of the world around them.

"By the time infants are three months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport," said Kevin A. Smith, one of the researchers that created ADEPT. "We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes."

ADEPT depends on two modules to do what it does. The first examines an object, determining its shape, pose, and velocity. What's interesting about this module is that it doesn't get caught up in details: it only looks at an object's approximate geometry, rather than analyzing every facet of it, before moving on to the next step. This was by design, according to the ADEPT team; it allows the system to predict the movement of a variety of objects, not just the ones it was trained to understand. It's also an aspect of the design that makes the system similar to infants, who, it turns out, don't pay much attention to the specific physical properties of an object when thinking about how it may move.

The second module is a physics engine, similar to the software video game developers use to replicate real-world physics in their games. It takes the object data captured by the first module and simulates how each object should behave according to the laws of physics. Once it has a set of predicted outcomes, it compares them against the next frames of the video. If it notices a discrepancy between what it expected to happen and what actually occurred, it sends out a signal: the stronger the signal, the more surprised the system was by what just happened. Notably, ADEPT's levels of surprise matched those of humans who were shown the same set of videos.

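The article describes the mechanism only at a high level. A toy sketch of that predict-then-compare loop, using a coarse object state, a constant-velocity rollout, and a surprise score based on how far the next frame deviates from the prediction, might look like the following; the class, the Gaussian-style score, and all numbers are hypothetical stand-ins, not ADEPT's actual probabilistic model.

```python
import numpy as np

class CoarseObject:
    """Approximate geometry only: a position and a velocity, no fine detail."""
    def __init__(self, position, velocity):
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.asarray(velocity, dtype=float)

def predict(obj, dt=1.0):
    # Core expectation: objects persist and keep moving along their trajectory.
    return CoarseObject(obj.position + dt * obj.velocity, obj.velocity)

def surprise(predicted, observed_position, sigma=0.5):
    """Higher when the next frame deviates from the prediction; an object
    that vanishes (observed_position is None) is maximally surprising."""
    if observed_position is None:
        return float("inf")
    error = np.linalg.norm(predicted.position - np.asarray(observed_position, dtype=float))
    return 0.5 * (error / sigma) ** 2   # Gaussian-style negative log-likelihood

ball = CoarseObject(position=[0.0, 0.0], velocity=[1.0, 0.0])
pred = predict(ball)
print(surprise(pred, [1.05, 0.0]))   # plausible motion: low surprise
print(surprise(pred, [5.00, 0.0]))   # sudden teleport: high surprise
print(surprise(pred, None))          # object vanished: infinite surprise
```
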
Moving forward, the team says they want to further explore how young children see the world, and incorporate those findings into their model. "We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents," Smith said.

Source: news.mit.edu

My Text in Your Handwriting

There are many scenarios where we wish to imitate a specific author’s pen-on-paper handwriting style. Rendering new text in someone’s handwriting is difficult because natural handwriting is highly variable, yet follows both intentional and involuntary structure that makes a person’s style self-consistent.
We present an algorithm that renders a desired input string in an author’s handwriting. An annotated sample of the author’s handwriting is required; the system is flexible enough that historical documents can usually be used with only a little extra effort. Experiments show that our glyph-centric approach, with learned parameters for spacing, line thickness, and pressure, produces novel images of handwriting that look hand-made to casual observers, even when printed on paper.
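
As a very loose illustration of the glyph-centric idea, pasting previously extracted glyph images with per-character spacing and slight baseline jitter, one might sketch something like this; the paper's full pipeline also models ligatures, line thickness, pressure and texture, none of which is reproduced in this toy.

```python
import numpy as np

def compose_line(glyphs, text, spacing, jitter=1.0, seed=0):
    """Paste pre-extracted glyph images (0 = ink, 1 = paper) side by side
    with per-character spacing and a little vertical baseline wobble."""
    rng = np.random.default_rng(seed)
    height = max(g.shape[0] for g in glyphs.values()) + 10
    width = sum(glyphs[c].shape[1] + spacing.get(c, 2) for c in text) + 10
    canvas = np.ones((height, width))   # white page
    x = 5
    for c in text:
        g = glyphs[c]
        y = 5 + int(rng.normal(0, jitter))          # involuntary variation
        y = max(0, min(y, height - g.shape[0]))
        patch = canvas[y:y + g.shape[0], x:x + g.shape[1]]
        canvas[y:y + g.shape[0], x:x + g.shape[1]] = np.minimum(patch, g)
        x += g.shape[1] + spacing.get(c, 2)
    return canvas

# Toy 12x8 "glyphs"; in practice these come from an annotated handwriting sample.
rng = np.random.default_rng(1)
glyphs = {c: (rng.random((12, 8)) > 0.5).astype(float) for c in "abc"}
line = compose_line(glyphs, "abcab", spacing={"a": 3, "b": 2, "c": 4})
print(line.shape)
```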

My Text in Your Handwriting
Tom S.F. Haines, Oisin Mac Aodha, and Gabriel J. Brostow
University College London
ACM Transactions on Graphics, 2016

Source: visual.cs.ucl.ac.uk