AI Society: Emergence of Communication amongst Autonomous Neural Networks
The next challenge in AI will likely not be building faster computers, collecting more data, or designing adaptive robot bodies. The key will be to allow machines to communicate their internal states, in a process arguably similar to humans sharing their emotions. Deep neural networks, now immensely popular, are extremely effective at complicated tasks, yet comprise hundreds of thousands of parameters. Beyond inspecting the outputs, no human can make sense of how computations actually unfold inside these networks, or “how the AI thinks”. The natural next step is for the machines themselves to report how they reach their conclusions. For those reports to be understandable to humans and other machines, communication must first be established, much like a natural language for AI. In this research, we connect a population of neural networks and task them with teaching each other the information needed to solve different sets of tasks, using a limited communication medium. Our research aims to understand the underlying principles of the spontaneous emergence of communication from the interaction between autonomous agents. From the connectivity between different AIs emerges a society that coevolves with its environment. This society may acquire its own swarm mind, transitioning to a phase in which it is governed by new sets of phenomena, making it increasingly independent of its hardware.
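The simplest setting in which communication over a limited medium can emerge between two agents is a Lewis signaling game. The sketch below is a minimal illustration of that idea, not the networks used in this project: a tabular "speaker" and "listener" (hypothetical names) are rewarded only when the listener reconstructs the speaker's hidden state from a discrete symbol, and a shared code emerges from reward alone.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_SYMBOLS = 4, 4  # the limited communication medium

# value tables: speaker maps state -> symbol, listener maps symbol -> guess
speaker_q = np.zeros((N_STATES, N_SYMBOLS))
listener_q = np.zeros((N_SYMBOLS, N_STATES))

def choose(q_row, eps):
    """Epsilon-greedy choice over one row of a value table."""
    if rng.random() < eps:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

alpha = 0.1
for step in range(5000):
    eps = max(0.01, 1.0 - step / 2500)        # decaying exploration
    state = int(rng.integers(N_STATES))       # speaker's private state
    symbol = choose(speaker_q[state], eps)    # emitted message
    guess = choose(listener_q[symbol], eps)   # listener's reconstruction
    reward = 1.0 if guess == state else 0.0   # shared reward, no supervision
    speaker_q[state, symbol] += alpha * (reward - speaker_q[state, symbol])
    listener_q[symbol, guess] += alpha * (reward - listener_q[symbol, guess])

# evaluate the emerged code: greedy round-trip accuracy over all states
accuracy = float(np.mean([
    np.argmax(listener_q[np.argmax(speaker_q[s])]) == s
    for s in range(N_STATES)
]))
print(accuracy)
```

With neural agents the tables become networks and the update becomes a policy gradient, but the structure of the game — a private state, a constrained channel, a shared reward — is the same.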
Emergence of Autonomy and Agency in Complex Systems
The spontaneous generation of life has long been a central question in the study of the origins of life. We address this question with two complementary approaches: information theory and artificial chemistry. We first construct an artificial chemistry model, simulating a system of chemical substances that are either simulated with explicit interaction rules at varying levels of coarse-graining, or implemented in vitro. We also design a collection of information-theoretic measures for identifying autonomous subprocesses in a system, which allows us to divide and conquer the dynamical space. Our early results suggest new ways to quantify the emergence of individuality in early life. [PDF]
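A minimal artificial chemistry, in the spirit of the model described above but not its actual implementation, can be written as a well-mixed multiset of molecules transformed by collision rules. The toy rule below (an assumption for illustration) is a single autocatalytic reaction, A + B → B + B, whose product takes over the pool:

```python
import random

random.seed(1)

# well-mixed reactor: a multiset of molecule species
pool = ["A"] * 95 + ["B"] * 5

def step(pool):
    """Pick two molecules at random (a collision) and apply the rule."""
    i, j = random.sample(range(len(pool)), 2)
    if {pool[i], pool[j]} == {"A", "B"}:
        # autocatalysis: the reaction consumes A and copies B
        idx = i if pool[i] == "A" else j
        pool[idx] = "B"

for _ in range(2000):
    step(pool)

b_fraction = pool.count("B") / len(pool)
print(b_fraction)
```

Even this two-species system shows the logistic takeover characteristic of autocatalysis; richer rule sets and coarse-grained structures build on the same collision loop.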
Swarm Ethics: Evolution of Cooperation in a Multi-Agent Foraging Model
“It brings out the animal in us” is often heard when speaking of selfish behavior. Frans de Waal has argued against a “veneer theory” of one of humanity’s most valued traits: morality. Instead, it has been proposed that morality emerges from evolutionary processes that give rise to social, altruistic instincts. Traditional research has argued that fully fledged cognitive systems are required to give each individual its autonomy. In this paper, we propose that a simple sense of morality can evolve in swarms of agents that pick actions supporting the survival of the whole group. To illustrate the emergence of a moral sense within a community of individuals, we use an asynchronous evolutionary model, simulating populations of agents performing a foraging task on a two-dimensional map. We discuss the morality of the emergent behavior within each population, then analyze several cases of interactions between different evolved foraging strategies, which we argue offer insight into the concept of morality beyond a single group, or across species. This proposed approach brings a new perspective on how morality can be studied in an artificial model, in terms of adaptive behavior, supporting the argument that morality can be defined not only in highly cognitive species, but across all levels of complexity in life. [PDF]
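The core mechanism — actions selected because they keep the whole group viable — can be sketched with a much simpler model than the paper's foraging simulation. In this illustrative sketch (all names and parameters are assumptions, not the actual model), each agent carries a single gene for its probability of sharing food, and whole groups reproduce in proportion to their foraging yield, so group-level viability selects for cooperation:

```python
import random

random.seed(2)

GROUPS, SIZE, GENS = 20, 5, 100

# each agent's gene: probability of cooperating (sharing foraged food)
population = [[random.random() for _ in range(SIZE)] for _ in range(GROUPS)]

def group_yield(group):
    """Sharing makes group foraging more efficient: yield grows with
    the number of agents that cooperate this generation."""
    cooperators = sum(1 for gene in group if random.random() < gene)
    return 1 + cooperators

for _ in range(GENS):
    yields = [group_yield(g) for g in population]
    # group-level selection: whole groups reproduce in proportion to yield
    new_population = []
    for _ in range(GROUPS):
        parent = random.choices(population, weights=yields)[0]
        child = [min(1.0, max(0.0, gene + random.gauss(0, 0.05)))
                 for gene in parent]  # inherit genes with mutation
        new_population.append(child)
    population = new_population

mean_gene = sum(g for grp in population for g in grp) / (GROUPS * SIZE)
print(mean_gene)
```

Because selection here acts on groups rather than individuals, the mean cooperation gene rises well above its initial value of roughly 0.5; adding an individual cost of cooperating recovers the familiar tension between selfish and group-viable behavior.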
Self-Organizing Particles under Variable Internal vs. External Competition
In this project, we investigate the influence of intra- and inter-group competition on the evolution of cooperative behavior in different species of simulated agents.