
Towards Social Capital in a Network Organization: A Conceptual Model and an Empirical Approach

 Saad Alqithami, Rahmat Budiarto, Musaad Alzahrani and Henry Hexmoor

Entropy 2020, 22(5), 519

 

Due to the complexity of an open multi-agent system, agents’ interactions are instantiated spontaneously, resulting in beneficial collaborations on joint actions that are beyond any single agent’s capabilities. Repeated patterns of interaction shape a feature of their organizational structure when those agents self-organize toward a long-term objective. This paper, therefore, aims to provide an understanding of social capital in organizations that are open-membership multi-agent systems, with an emphasis in our formulation on the dynamic network of social interactions that, in part, elucidates evolving structures and impromptu topologies of networks. We model an open source project as an organizational network and provide definitions and formulations to correlate the proposed mechanism of social capital with the achievement of an organizational charter, for example, optimized productivity. To empirically evaluate our model, we conducted a case study of an open source software project to demonstrate how social capital can be created and measured within this type of organization. The results indicate that the values of social capital are positively correlated with agents’ productivity and with the successful completion of the project.

Source: www.mdpi.com

Helping machines to perceive laws of physics by themselves

ADEPT, an artificial intelligence model developed by MIT researchers, demonstrates an understanding of some basic “intuitive physics” by registering a surprise signal when objects in a scene violate assumed reality, similarly to how human infants and adults would register surprise.

 

We often think of artificial intelligence as a tool for automating certain tasks. But it turns out that the technology could also help give us a better understanding of ourselves. At least that’s what a team of researchers at the Massachusetts Institute of Technology (MIT) think they’ll be able to do with their new AI model.

 

Dubbed ADEPT, the system is able to, like a human being, understand some laws of physics intuitively. It can look at an object in a video, predict how it should act based on what it knows of the laws of physics and then register surprise if what it was looking at subsequently vanishes or teleports. The team behind ADEPT say their model will allow other researchers to create smarter AIs in the future, as well as give us a better understanding of how infants understand the world around them.

 

"By the time infants are three months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport," said Kevin A. Smith, one of the researchers that created ADEPT. "We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes."

 

ADEPT depends on two modules to do what it does. The first examines an object, determining its shape, pose and velocity. What’s interesting about this module is that it doesn’t get caught up in details. It only looks at the approximate geometry of something, rather than analyzing every facet of it, before it moves on to the next step. This was by design, according to the ADEPT team; it allows the system to predict the movement of a variety of different objects, not just ones it was trained to understand. This aspect of the design also makes the system similar to infants: children, it turns out, don’t care much about the specific physical properties of an object when they’re thinking about how it may move.
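The coarse representation described above can be sketched in a few lines. This is an illustrative toy, not the published model: the field names and the constant-velocity update are assumptions chosen to show the idea of tracking only approximate geometry (a bounding box plus a velocity estimate) instead of a detailed mesh.

```python
from dataclasses import dataclass

@dataclass
class CoarseObject:
    # Approximate geometry only: a bounding box, not a full 3-D mesh.
    x: float   # center position (horizontal)
    y: float   # center position (vertical)
    w: float   # box width
    h: float   # box height
    vx: float  # velocity estimate (horizontal)
    vy: float  # velocity estimate (vertical)

    def step(self, dt: float) -> "CoarseObject":
        # Constant-velocity extrapolation of the coarse state:
        # the simplest physical prediction a downstream module can make.
        return CoarseObject(self.x + self.vx * dt, self.y + self.vy * dt,
                            self.w, self.h, self.vx, self.vy)
```

Because the state carries so little detail, the same update rule applies to any object the perception module can box in, which mirrors the article’s point about generalizing beyond the training set.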

 

The second module is a physics system. It shares similarities with the software video game developers employ to replicate real-world physics in their games. It takes the data captured by the first module and simulates how an object should act based on the laws of physics. Once it has a couple of predicted outcomes, it compares those against the next frames of the video. If it notices a discrepancy between what it thought would happen and what actually occurred, it sends out a signal. The stronger the signal, the more surprised it was by what just happened. What’s interesting about ADEPT is that its level of surprise matched that of humans who were shown the same set of videos.
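The comparison step above can be sketched as a simple discrepancy score. The sketch below is a hedged stand-in for ADEPT’s probabilistic comparison: it assumes object states are reduced to 2-D positions and takes the distance from the observation to the *nearest* of several candidate predictions, so the model is only surprised when the observation is far from all of its predicted outcomes.

```python
import math

def surprise(candidates, observed):
    """Toy surprise signal: distance from the observed position to the
    closest predicted position. `candidates` is a list of (x, y) predictions;
    `observed` is the (x, y) actually seen in the next frame. Zero means a
    prediction matched exactly; larger values mean stronger surprise."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # Surprised only if the observation is far from *every* candidate future.
    return min(dist(c, observed) for c in candidates)
```

An object that vanishes or teleports would show up as an observed position far from every physically simulated candidate, producing a large signal, the analogue of the strong surprise response the article describes.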

 

Moving forward, the team says it wants to further explore how young children see the world and incorporate those findings into the model. "We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents," Smith said.

Source: news.mit.edu

My Text in Your Handwriting

There are many scenarios where we wish to imitate a specific author’s pen-on-paper handwriting style. Rendering new text in someone’s handwriting is difficult because natural handwriting is highly variable, yet follows both intentional and involuntary structure that makes a person’s style self-consistent.
We present an algorithm that renders a desired input string in an author’s handwriting. An annotated sample of the author’s handwriting is required; the system is flexible enough that historical documents can usually be used with only a little extra effort. Experiments show that our glyph-centric approach, with learned parameters for spacing, line thickness, and pressure, produces novel images of handwriting that look hand-made to casual observers, even when printed on paper.
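The glyph-centric idea with learned spacing can be illustrated with a minimal layout sketch. Everything here is an assumption for illustration (the function name, the per-character width table, and a single scalar spacing parameter); the paper’s actual system learns richer per-author parameters, including line thickness and pressure.

```python
def layout_glyphs(text, glyph_width, spacing):
    """Return the x-offset at which each glyph of `text` should be drawn.
    `glyph_width` maps a character to its rendered width (e.g. measured from
    an annotated handwriting sample); `spacing` is a learned inter-glyph gap.
    Unknown characters fall back to a default width. Illustrative only."""
    offsets = []
    x = 0.0
    for ch in text:
        offsets.append(x)
        # Advance by this glyph's width plus the author-specific spacing.
        x += glyph_width.get(ch, 10.0) + spacing
    return offsets
```

In a full system, the offsets would position glyph images sampled from the author’s annotated handwriting, and the spacing parameter would be fit per author so that the rendered line reproduces that author’s characteristic letter gaps.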

 

My Text in Your Handwriting
Tom S.F. Haines, Oisin Mac Aodha, and Gabriel J. Brostow
University College London
Transactions on Graphics 2016

Source: visual.cs.ucl.ac.uk