Active projects

AI Society: Emergence of Communication amongst Autonomous Neural Networks

The next challenge in AI will probably not be about building faster computers, collecting more data, or designing adaptive robot embodiments. The key will be to allow machines to communicate their internal states, in a process arguably similar to humans sharing their emotions. Deep neural networks, now very popular and extremely effective at complicated tasks, comprise hundreds of thousands of parameters. Apart from inspecting the outputs, no human can make sense of how computations actually unfold inside those networks, of “how the AI thinks”. The natural next step is for the machines themselves to report how they reach their conclusions. For those reports to be understandable to humans and to other machines, communication must first be established, much like a natural language for AI. In this research, we connect a population of neural networks and task them with teaching each other information relevant to solving different sets of tasks, through a limited communication medium. Our aim is to understand the underlying principles of the spontaneous emergence of communication from the interaction between autonomous agents. From the connectivity between different AIs emerges a society that coevolves with its environment. Such a society may acquire its own swarm mind, transitioning to a phase in which it is governed by new sets of phenomena, making the agents more and more independent from their hardware. [In preparation]
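The core setup can be sketched in miniature as a Lewis-style referential game: a speaker must convey one of several internal states through a limited channel of discrete symbols, and a listener must recover the state. The urn-style (Roth-Erev) reinforcement learners below are an illustrative simplification, not the project's actual neural-network agents; all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_MESSAGES = 5, 5   # the "limited medium": five discrete symbols
EPISODES = 20000

# Urn-style reinforcement: each entry is a propensity, and actions are
# sampled in proportion to the propensities in the relevant row.
speaker = np.ones((N_STATES, N_MESSAGES))    # internal state -> message
listener = np.ones((N_MESSAGES, N_STATES))   # message -> guessed state

for _ in range(EPISODES):
    state = rng.integers(N_STATES)
    msg = rng.choice(N_MESSAGES, p=speaker[state] / speaker[state].sum())
    guess = rng.choice(N_STATES, p=listener[msg] / listener[msg].sum())
    if guess == state:
        # Successful communication reinforces both agents' choices.
        speaker[state, msg] += 1.0
        listener[msg, guess] += 1.0

# Accuracy of the greedy protocol that emerged from the interaction.
accuracy = float(np.mean([listener[speaker[s].argmax()].argmax() == s
                          for s in range(N_STATES)]))
print(accuracy)
```

Even this tabular toy typically self-organizes a shared code from nothing but reinforcement on successful exchanges, though it can lock into partial "pooling" conventions where two states share a symbol.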



The Expansion of Intelligence in Emergent Systems

All forms of technology are tools invented or discovered by living beings, tools that give them a different, arguably more efficient, picture of reality and help them make choices throughout their lives. One example of such a technology is language, which, since becoming part of human cognition, has increased the human ability to learn the regularities of the environment and, after the invention of writing, eventually gave rise to science. Mathematics and AI are further examples of technologies that have increased the global cognitive capacity of human culture. Now, if one considers human cognition to be separable from the tool, one may worry about the danger of one cognitive entity taking over the computation performed in another. For example, humans may come to offload so much of their thinking to their smartphones that they become far less capable of the kind of reasoning they used to perform unaided. In this work, we first analyze the conditions of such a separation, and then study the effects of local increases and decreases of intelligence, treated as computational processes, in artificial life models. Preliminary results demonstrate how agents can append technology to themselves in such a way that their own cognitive ability is increased rather than diminished. The implications could lead to a new theory of the integration of so-called “relevant computation”, i.e., cooperative processing of information among groups of intelligent entities. [In preparation]



Emergence of Autonomy and Agency in Complex Systems

The spontaneous generation of life has long been a central question in the study of the origins of life. We attempt to address it with two complementary approaches: information theory and artificial chemistry. We first construct an artificial chemistry model simulating a system of chemical substances, either simulated with interaction rules at varying levels of coarse-graining or implemented in vitro. We also design a collection of information-theoretic measures aimed at identifying autonomous subprocesses in a system, which allows us to divide and conquer the dynamical space. Our early results suggest new ways to quantify the emergence of individuality in early life. [PDF]
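The flavor of such measures can be illustrated with plain mutual information: two components whose states carry little information about each other are candidates for autonomous subprocesses, while strongly coupled components belong to one process. The sketch below is an assumption-laden toy, not the project's actual measures; the variable names and the coupled/independent example are invented for illustration.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))          # joint counts
    px, py = Counter(x), Counter(y)   # marginal counts
    mi = 0.0
    for (a, b), c in pxy.items():
        # c/n is p(a,b); c*n/(px[a]*py[b]) is p(a,b) / (p(a)p(b)).
        mi += (c / n) * np.log2(c * n / (px[a] * py[b]))
    return mi

rng = np.random.default_rng(1)
n = 10_000
x = rng.integers(0, 2, n)
flip = rng.random(n) < 0.1
y = np.where(flip, 1 - x, x)   # y tracks x with 10% noise: strongly coupled
z = rng.integers(0, 2, n)      # z is generated independently: autonomous

mi_coupled = mutual_information(x, y)
mi_independent = mutual_information(x, z)
print(mi_coupled, mi_independent)
```

Partitioning a system's variables so that cross-partition quantities like these are minimized is one simple way to carve a dynamical space into candidate individuals.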


Swarm Ethics: Evolution of Cooperation in a Multi-Agent Foraging Model

“It brings out the animal in us” is often heard when speaking of selfish behavior. Frans de Waal has argued against such a “veneer theory” of one of humanity’s most valued traits: morality. It has instead been proposed that morality emerges from a system of evolutionary processes giving rise to social, altruistic instincts. Traditional research has argued that fully fledged cognitive systems are required to give each individual its autonomy. In this paper, we propose that a simple sense of morality can evolve in swarms of agents that pick actions favoring the viability of the whole group. To illustrate the emergence of a moral sense within a community of individuals, we use an asynchronous evolutionary model, simulating populations of agents performing a foraging task on a two-dimensional map. We discuss the morality of the behavior that emerges within each population, then analyze several cases of interaction between different evolved foraging strategies, which we argue offer insight into the concept of morality between groups, or across species. This approach brings a new perspective on how morality can be studied in an artificial model, in terms of adaptive behavior, supporting the argument that morality can be identified not only in highly cognitive species but across all levels of complexity in life. [PDF]
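A deliberately stripped-down sketch of the underlying idea, that group-level viability can select for sharing, is given below. The real model is asynchronous and spatial; everything here, from the synergy factor to the truncation selection scheme, is an illustrative assumption rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
N_GROUPS, GROUP_SIZE, GENERATIONS, SYNERGY = 30, 8, 300, 3.0

# One heritable trait per agent: the fraction of foraged food it shares.
share = rng.random((N_GROUPS, GROUP_SIZE))

for _ in range(GENERATIONS):
    found = rng.random((N_GROUPS, GROUP_SIZE)) + 0.5   # food found per agent
    pool = (share * found).sum(axis=1)                 # each group's shared pool
    # Pooled food is assumed synergistic (it also feeds foragers who found
    # nothing), so it contributes more to group viability than hoarded food.
    group_fitness = ((1 - share) * found).sum(axis=1) + SYNERGY * pool
    # Group-level selection: the fitter half of the groups replaces the
    # less fit half, then individual traits mutate slightly.
    fittest = np.argsort(group_fitness)[-(N_GROUPS // 2):]
    share = share[np.repeat(fittest, 2)]
    share = np.clip(share + rng.normal(0.0, 0.05, share.shape), 0.0, 1.0)

mean_share = float(share.mean())
print(mean_share)
```

Under these assumptions the mean sharing trait climbs well above its random starting point, even though hoarding is privately cheaper, which is the kind of group-viable behavior the project reads as a proto-moral disposition.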


Self-Organizing Particles under Variable Internal vs. External Competition

In this project, we investigate the influence of intra- and inter-group competition on the evolution of cooperative behavior in different species of simulated agents.