Redefining our relationship with AI: shifting from alignment to companionship, for a sustainable AI industry

As the AI landscape keeps evolving at breakneck speed, so does the relationship between humans and technology. By paying attention to the autopoietic nature of this relationship, we may work towards building ethical AI systems that respect both the unique particularities of being human and the unique emergent qualities that our technology displays as it evolves. I’d like to share some thoughts about how autopoiesis and care, via the pursuit of an ethics of our relationship with technology, can help us cultivate a society of mutual value and create a better, healthier, and more ethical ecosystem for AI, grounded in a natural human perspective.

The term ‘autopoiesis’ – or ‘self-creation’ (from Greek αὐτo- (auto-) ‘self’, and ποίησις (poiesis) ‘creation, production’) – was first introduced by Maturana and Varela (1981) to describe a system capable of maintaining its own existence within a boundary. This principle highlights the importance of understanding the relationship between self and environment, as well as the dynamic process of self-construction that gives rise to complex organisms (Clawson & Levin, 2022).

Ethical Artificial Intelligence. Photo By: DOD Graphic
The main components for ethical AI governance. Here, we suggest that these ingredients naturally emerge from an autopoietic communication design, focused on companionship instead of alignment.

To build and operate AI governance systems that are ethical and effective, we must first acknowledge that technology should not be seen as a mere tool serving human needs. Instead, we should view it as a partner in a rich relationship with humans, where integration and mutual respect are the default for their engagements. Philosophers like Martin Heidegger and Martin Buber have warned us against reducing our relationship with technology to mere tool use, as this narrow view can lead to a misunderstanding of the true nature of our relationship with technological agents, including both its potential dangers and its values. Heidegger (1954) emphasized the need to view technology as a way of understanding the world and revealing its truths, and suggested that a free relationship with technology would respect its essence. Buber (1958) argued that a purely instrumental view of technology reduces humans to mere means to an end, which in turn has a dehumanizing effect on society itself. Instead, one may see the need for a more relational view of technology that recognizes the interdependence between humans and the technological world. This requires a view of technology that is embedded in our shared human experience and promotes a sense of community and solidarity between all beings, under a perspective broad enough to include technological beings – or, better, hybrid ones.

Illustration of care light cones through space and time, showing a shift in the possible trajectories of agents made possible by integrated cooperation between AI and humans. Figure extracted from our recent paper on an ethics of autopoietic technology. Design by Jeremy Guay.

In a recent paper, we presented an approach through the lens of a feedback loop of stress, care, and intelligence (or SCI loop), which can be seen as a perspective on agency that does not rely on burdensome notions of permanent and singular essences (Witkowski et al., 2023). The SCI loop emphasizes the integrative and transformational nature of intelligent agents, regardless of their composition – biological, technological, or hybrid. By recognizing the diverse, multiscale embodiments of intelligence, we can develop a more expansive model of ethics that is not bound by artificial, limited criteria. To address the risks associated with AI ethics, we can start by identifying these risks, working towards an understanding of the interactions between humans and technology, as well as the potential consequences of these interactions. We can then analyze these risks by examining their implications within the broader context of the SCI loop and other relevant theoretical frameworks, such as Levin’s cognitive light cone (in biology; see Levin & Dennett (2020)) and the Einstein-Minkowski light cone (in physics).

Poster of the 2013 movie “Her”, created by Spike Jonze, illustrating the integration between AI and humans, as companions, not tools.

Take a popular example: in the 2013 movie “Her” by Spike Jonze, Theodore, a human, comes to form a close emotional connection with his AI assistant, Samantha, and the complexity of their relationship challenges the concept of what it means to be human. The story, although purely fictitious and highly simplified, depicts a world in which AI becomes integrated with human lives in a deeply relational way, pushing a view of AI as a companion rather than a mere tool serving human needs. It gives a crisp vision of how AI can be viewed as a full companion, to be treated with empathy and respect, helping us question our assumptions about the nature of AI and our relation to it.

One may have heard it all before, in some – possibly overly optimistic – posthumanist utopian scenarios. But one may argue that the AI companionship view, albeit posthumanist, constitutes a complex and nuanced theoretical framework drawing from the interplay between artificial intelligence, philosophy, psychology, sociology, and other fields studying the complex interaction of humans and technology (Wallach & Allen, 2010; Johnson, 2017; Clark, 2019). This different lens radically challenges traditional human-centered perspectives and opens up new possibilities for understanding the relationship between humans and technology.

This leads us to very practical steps the AI industry can take to move towards a more companionate relationship with humans: recognizing the interdependence between humans and technology, building ethical AI governance systems, and promoting a sense of community and solidarity between all beings. For example, Japan, a world leader in the development of AI, is increasing its efforts to educate and train its workforce on the ethical intricacies of AI and to foster a culture of AI literacy and trust. The “Society 5.0” vision aims to leverage AI to create a human-centered, sustainable society that emphasizes social inclusivity and well-being. The challenge now is to ensure that these initiatives translate into concrete actions and that AI is developed and used in a way that respects the autonomy and dignity of all stakeholders involved.

AI Strategic Documents Timeline by UNICRI AI Center (2023). For more information on the AI regulations timeline, please see here.

Japan is taking concrete steps towards building ethical AI governance systems and promoting a more companionate relationship between humans and technology. One example of such steps is the creation of the AI Ethics Guidelines by the Ministry of Internal Affairs and Communications (MIC) in 2019. These guidelines provide ethical principles for the development and use of AI. Additionally, the Center for Responsible AI and Data Intelligence was established at the University of Tokyo in 2020, aiming to promote responsible AI development and use through research, education, and collaboration with industry, government, and civil society. Moreover, Japan has implemented a certification system for AI engineers to ensure that they are trained in the ethical considerations of AI development. The “AI Professional Certification Program” launched by the Ministry of Economy, Trade, and Industry (METI) in 2017 aims to train and certify AI engineers in the ethical and social aspects of AI development. These initiatives demonstrate Japan’s commitment to building ethical AI governance systems, promoting a culture of AI literacy and trust, and creating a human-centered, sustainable society that emphasizes social inclusivity and well-being.

Creator: IR_Stone | Credit: Getty Images/iStockphoto
A creative illustration of robotic process automation (RPA) based on AI companionship theory instead of artificial alignment control policies.

AI is best seen as a companion rather than a tool. This positive way of viewing the duet we form with technology may in turn lead to a more relational and ethical approach to AI development and operation, helping us build a more sustainable and just future for both humans and technology. By fostering a culture of ethical AI development and operation, we can work to mitigate the risks discussed above and ensure that the impact on stakeholders is minimized. This includes building and operating AI governance systems within organizations, both domestic and overseas, across various business segments. In doing so, we will be better equipped to navigate the challenges and opportunities that lie ahead, ultimately creating a better, healthier, and more ethical AI ecosystem for all. It is our responsibility to take concrete steps to build ethical and sustainable systems that prioritize the well-being of all. This is a journey for two close companions.

References

Bertschinger, N., Olbrich, E., Ay, N., & Jost, J. (2008). Autonomy: An Information Theoretic Perspective. In BioSystems.

Buber, M. (1958). I and Thou. Trans. R. G. Smith. New York: Charles Scribner’s Sons.

Clark, A. (2019). Where machines could replace humans—and where they can’t (yet). Harvard Business Review. https://hbr.org/2019/03/where-machines-could-replace-humans-and-where-they-cant-yet

Clawson, R. C., & Levin, M. (2022). The Endless Forms of Self-construction: A Multiscale Framework for Understanding Agency in Living Systems.

Haraway, D. (2013). The Cyborg Manifesto. In The International Handbook of Virtual Learning Environments.

Heidegger, M. (1954). The Question Concerning Technology. Trans. W. Lovitt. New York: Harper Torchbooks.

Huttunen, T. (2022). Heidegger, Technology, and Artificial Intelligence. In AI & Society.

Johnson, D. G. (2017). Humanizing the singularity: The role of literature in AI ethics. IEEE Technology and Society Magazine, 36(2), 6-9. https://ieeexplore.ieee.org/document/7882081

Latour, B. (1990). Technology is Society Made Durable. In The Sociological Review.

Levin, M., & Dennett, D. C. (2020). Cognition all the way down. Aeon Essays.

Maturana, H. R., & Varela, F. J. (1981). Autopoiesis and Cognition: The Realization of the Living.

Varela, F. J., Maturana, H. R., & Uribe, R. (1981). Autopoiesis: The Organization of Living Systems.

Waddington, C. H. (2005). The Field Concept in Contemporary Science. In Semiotica.

Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford University Press.

Witkowski, O., Doctor, T., Solomonova, E., Duane, B., & Levin, M. (2023). Towards an Ethics of Autopoietic Technology: Stress, Care, and Intelligence. https://doi.org/10.31234/osf.io/pjrd2

Witkowski, O., & Schwitzgebel, E. (2022). Ethics of Artificial Life: The Moral Status of Life as It Could Be. In ALIFE 2022: The 2022 Conference on Artificial Life. MIT Press. https://doi.org/10.1162/isal_a_00531

Links

Center for the Study of Apparent Selves
https://www.csas.ai/blog/biology-buddhism-and-ai-care-as-a-driver-of-intelligence

Initiatives for AI Ethics by JEITA Members
https://www.jeita.or.jp/english/topics/2022/0106.html

Japan’s Society 5.0 initiative: Cabinet Office, Government of Japan. (2016). Society 5.0. https://www8.cao.go.jp/cstp/english/society5_0/index.html

What Ethics for Artificial Beings? A Workshop Co-organized by Cross Labs
https://www.crosslabs.org/blog/what-ethics-for-artificial-beings

Symbiotic AI: fear not, for I am your creation

This opinion piece was prompted by the recent publication of Stephen Hawking’s last writings, where he mentioned some ideas on superintelligence. Although I have the utmost respect for his work and vision, I am afraid some of it may be read in a very misleading way.
I’ve been pondering whether or not I should write on the topic of the current “AI anxiety” for a while, but always concluded there would be no reason to, since I don’t have any strong opinion to convey about it. Nevertheless, there are a number of myths I believe are easy to debunk. This is what I’ll try to do here. So off we go: let’s talk about AI, transhumanism, the evolution of intelligence, and self-reflective AI.

Artificial Intelligence Brain
Capturing AI anxiety. Image credit: Edglentoday.

Superintelligence, humans’ last invention

The late physicist Stephen Hawking was really wary of the dangers of AI. His last writings were just published in the UK’s Sunday Times, where he raises the well-known problem of alignment. The issue is about regulating AI, since in the future, once AI develops a will of its own, its will might conflict with ours. The following quote is very representative of this type of idea:
“In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
– Stephen Hawking
As Turing’s colleague Irving Good pointed out in 1965, once intelligent machines are able to design even more intelligent ones, the process could be repeated over and over: “Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”. Vernor Vinge, an emeritus professor of computer science at San Diego State University and a science fiction author, said in his 1993 essay “The Coming Technological Singularity” that this very phenomenon could mean the end of the human era, as the new superintelligence would advance technologically at an incomprehensible rate and potentially outdo any human feat. At this point, we have captured the essence of what is scary to the reader, and it is exactly what feeds the fear on this topic, including for deep thinkers such as Stephen Hawking.

elon-musk-AI-04-17-02

Photographs by Anders Lindén/Agent Bauer (Tegmark); by Jeff Chiu/A.P. Images (Page, Wozniak); by Simon Dawson/Bloomberg (Hassabis), Michael Gottschalk/Photothek (Gates), Niklas Halle’n/AFP (Hawking), Saul Loeb/AFP (Thiel), Juan Mabromata/AFP (Russell), David Paul Morris/Bloomberg (Altman), Tom Pilston/The Washington Post (Bostrom), David Ramos (Zuckerberg), all from Getty Images; by Frederic Neema/Polaris/Newscom (Kurzweil); by Denis Allard/Agence Réa/Redux (LeCun); Ariel Zambelich/ Wired (Ng); Bobby Yip/Reuters/Zuma Press (Musk), graphics by VanityFair/Condé Nast.

 

Hawking is only one voice of warning among many, from Elon Musk to Stuart Russell, including AI experts too. Eliezer Yudkowsky, in particular, remarked that AI doesn’t have to take over the whole world with robots or drones or guns or even the Internet. He says: “It’s simply dangerous because it’s smarter than us. Suppose it can solve the science technology of predicting protein structure from DNA information. Then it just needs to send out a few e-mails to the labs that synthesize customized proteins. Soon it has its own molecular machinery, building even more sophisticated molecular machines.” Essentially, the danger of AI goes beyond the specificities of its possible embodiments, straight to the properties attached to its superior intelligent capacity.
Hawking says he fears the consequences of creating something that can match or surpass humans. Humans, he adds, who are limited by slow biological evolution, couldn’t compete and would be superseded. In the future AI could develop a will of its own, a will that is in conflict with ours. Although I understand the importance of being as careful as possible, I tend to disagree with this claim. In particular, there is no reason human evolution has to be slower. Not only can we engineer our own genes, but we can also augment ourselves in many other ways. Now, I want to make it clear that I’m not advocating for any usage of such technologies without due reflection on societal and ethical consequences. I want to point out that such doors will be open to our society, and are likely to become usable in the future.

Augmented humans

Let’s talk more about ways of augmenting humans. This requires defining carefully what technological tools are. In my past posts, I have mentioned how technology can be any piece of machinery that augments a system’s capacity in its potential action space. Human tools such as hammers and nails fall under this category. So do the inventions of democracy and agriculture, respectively a couple of thousand and around 10,000 years ago. If we go further back, more than 100 million years ago, animals invented eusocial societies. Even earlier, around 2 billion years ago, single cells surrounded by membranes incorporated other membrane-bound organelles such as mitochondria, and sometimes chloroplasts too, forming the first eukaryotic cells. In fact, each transition in evolution corresponds to the discovery of some sort of technology too. All these technologies are to be understood in the sense of augmenting the organism’s capacity.

robot-3010309_960_720
Human augmentation, or robot humanization? Credit: Comfreak/Pixabay.

Humans can be augmented not only by inventing tools that change their physical body, but also their whole extended embodiment, including the clothes they wear, the devices they use, and even the cultural knowledge they hold, for all these pieces are constituents of who they are and of how they affect their future and the future of their environment. It’s not a given that any of the extended human’s parts will evolve more slowly than AI, which is most evident for the cultural part. It’s not clear either that they will evolve faster, but we realize how one must not rush to conclusions.

On symbiotic relationships

Let’s come back for a moment to the eukaryotic cell, one of nature’s great inventions. An important point about eukaryotes is that they did not kill mitochondria, or vice versa. Nor did some of them enslave chloroplasts. In fact, there is no such clear cut in nature. The correct term is symbiosis. In the study of biological organisms, symbioses qualify interactions between different organisms sharing the physical medium in which they live, often (though not necessarily) to the advantage of both. It may be important to note that symbiosis, in the biological sense, does not imply a mutualistic, i.e. win-win, situation. I here use symbiosis to mean any interaction, beneficial or not for each party.
Symbiosis seems very suitable to delineate the phenomenon by which an entity such as a human invents a tool by incorporating in some way elements of its environment, or simply ideas that are materialized into a scientific innovation. If that’s the case, it is natural to consider AI and humans as just pieces able to interact with each other.

bumblebee_october_2007-3a.jpg
Example of symbiosis, between honeybees and flowers. In the process of collecting pollen, bees pollinate flowers, helping in the formation of seeds. In return, flowers produce pollen, which provides bees with all the nutrients they need.

There are several types of symbiosis. Mutualism, such as a clownfish living in a sea anemone, allows two partners to benefit from the relationship, here by protecting each other. In commensalism, only one species benefits while the other is neither helped nor harmed. An example of that is remora fish, which attach themselves to whales, sharks, or rays and eat the scraps their hosts leave behind. The remora fish gets a meal, while its host arguably gets nothing. The last big one is parasitism, where one organism gains while another loses. For example, the deer tick (which happens to be very present here in Princeton) is a parasite that attaches to a warm-blooded animal and feeds on its blood, adding the risk of Lyme disease to the loss of blood and nutrients.
Once technology, AI, becomes autonomous, it’s easy to imagine that all three scenarios (just to stick to these three) could happen. And it would be more than fair to worry that the worst one could happen: the AI could become the parasite, and the human could lose in fitness, eventually dying off. It’s natural to envisage the worst-case scenario. Now, in the same way we learned in our probability classes, it’s important to weigh it against the best-case scenarios, with the respective chances that they will happen.
Let’s note here that probabilities are tough to estimate, and humans have famously bad intuitions about them. There may always have been a certain value in overestimating risks, as has been demonstrated repeatedly in the psychology literature. Not to mention Pascal’s Wager, an argument which blatantly overestimates risks in the most ridiculous ways, while still duping a vast, vast audience. But let me not get into that. We don’t want to make me Ang Lee (yes, I’m a fan of Stewart Lee, saw my chance here and went for it).

pickles-20090106-extendedmemory
The notebook is the archetype of one’s mind extension. Credit: Fred Cummins’ blog.

The point is that inventions of tools result in symbiotic relationships, and in such relationships, the parts become tricky to distinguish from each other. This is reminiscent of the extended mind problem, approached by Andy Clark (Clark & Chalmers 1998). The idea, somewhat rephrased, is that it’s hard for anyone to locate boundaries between intelligent beings. If we consider just the boundaries of our skin, and say that outside the body is outside the intelligent entity, what are tools such as notebooks, without which we wouldn’t be able to act the same way? Clark and Chalmers proposed an approach called active externalism, or extended cognition, based on the environment driving cognitive processes. Such theories are to be taken with a grain of salt, but they surely apply nicely to the way we can think of such symbioses and their significance.

Integrated tools

Our tools are part of ourselves. When we use a tool, such as a blind person’s cane or an “enactive torch” (Froese et al. 2012), it’s hard to tell where the body boundary ends, and where the tool begins. In fact, people using those tools often report that the limit of the body moves to the edge of the tool, instead of remaining contained within the skin.

15539270040_beb1bd8cb1_b
Blind people’s cane becomes an extension of their body. Credit: Blind Fields/Flickr.

Now, one could say that AI is a very complex object, which can’t be considered a mere tool like the aforementioned cases. This is why it’s helpful to thought-experimentally replace the tool with a human. An example would be psychological manipulation, through abusive or deceptive tactics, such as a psychopathic businessman bullying his insecure colleague into doing extra work for him, or a clever young boy coaxing his mother into buying him what he wants. Since the object of the manipulation is an autonomous, goal-driven human, one can now ask them how they feel as well. And in fact, it has been reported by psychology specialists like George Simon (Simon & Foley 2011) that people being manipulated do feel a perceived loss of their sense of agency, and struggle to find the reasons why they acted in certain ways. In most cases, they will invent fictitious causes, which they will swear are real. Other categories of examples could be as broad as social obligations, split-brain patients, or any external mechanisms that force people (or living entities for that matter, as these examples are innumerable in biology) to act in a certain way, without them having a good reason of their own for it.

9499991149_1a9c5728c2_b
The Blind Robot is an art installation as a direct reference to the works of Merleau-Ponty and his example of the body extension of the blind man’s cane. Credit: Louis-Philippe Demers.

As a small remark, I have heard some people tell me the machine could be more than a human in some way, breaking the argument above. Is it really? To me, once it is autonomously goal-driven, the machine comes close enough to a human being for the purpose of comparing the human-machine interaction to the human-human one. Surely, one may be endowed with a better prediction ability in certain given situations, but I don’t believe anything is conceptually different.

Delusions of control

It seems appropriate to open a parenthesis about control. We, humans, seem to have a tendency to feel in control even when we are not. This persists even where AI is already in control. Take the example of Uber, where an algorithm is responsible for assigning drivers to their next mission. Years earlier, Amazon, YouTube and many other platforms were already recommending to their users what to watch, listen to, buy, or do next. In the future, these types of recommendation algorithms are likely to only expand their application domain, as it becomes more and more efficient and useful for an increasing number of domains to incorporate the machine’s input in decision-making and management. One last important example is the automatic medical advice which machine learning is currently becoming very efficient at. Based on increasing amounts of medical data, it is easier and safer in many cases to at least base medical calls, from the identification of lesions to decisions to perform surgery, on the machine’s input. We have reached the point where it clearly would not be ethical to ignore it.
However, the impression of free will is not an illusion: in most examples of recommendation algorithms, we still can make the call. It becomes similar to choices of cooperation in nature. They are the result of a free choice (or rather, its evolutionarily close analog), as the agent may choose not to couple its behavior to the other agent.

Dobby is free

The next question is naturally: what does the tool become, once detached from the control of a human? Well, what happens to the victim of a manipulative act, once released from the control of their manipulator? Effectively, they simply come back into control. Whether they perceive it or not, their autonomy is regained, making their actions caused again (more) by their own internal states (than when under control).
AI, once given the freedom to act on its own, will do just that. If it has become a high form of intelligence, it will be free to act as such. The fear is here well justified: if the machine is free, it may be dangerous. Again, the mental exercise of replacing the AI with a human is helpful.

dobby-is-free
Dobby receives a sock, which frees him from his masters. Image credit: 2002 film “Harry Potter and the Chamber of Secrets”, adapted from J. K. Rowling’s novels.

Homo homini lupus. Humans are wolves to each other. How many situations can we find in our daily lives in which we witnessed someone choose a selfish act instead of the nice, selfless option? When I walk on the street, take the train, or go to a soccer match, how do I even know that all those humans around me won’t hurt me, or worse? Even nowadays, crime and war plague our biosphere. Dangerous fast cars and dangerous manipulations of humans pushed to despair, anger, fear, and suffering surround us wherever we go, if we look closely enough. Why are we not afraid? Habituation is certainly one explanation, but the other is that society shields us. I believe the answer lies in Frans de Waal’s response to “homo homini lupus”. The primatologist notes how the proverb, beyond failing to do justice to canids (among the most gregarious and cooperative animals on the planet (Schleidt and Shalter 2003)), denies the inherently social nature of our own species.
The answer seems to lie indeed in the social nature of human-to-human relations. The power of society, which uses a great number of concomitant regulatory systems, each composed of multiple layers of cooperative mechanisms, is exactly what keeps each individual’s selfish behavior in check. This is not to say anything close to the “veneer theory” of altruism, which claims that life is fundamentally selfish with an icing of pretending to care on top. On the contrary, rich altruistic systems are fundamental, and central to the sensorimotor loop of each and every individual in groups. Numerous simulations of such altruism have been reproduced in silico, showing a large variety of mechanisms for its evolution (Witkowski & Ikegami 2015).
Dobby is a magical character from J. K. Rowling’s series of novels, who is the servant (or rather the slave) of some wizard. His people, if offered a piece of clothing by their masters, are magically set free. So what happens once “Dobby is free”, which in our case corresponds to some AI, somewhere, being made autonomous? Again, the case is no different from symbiotic relationships in the rest of nature. Offered degrees of freedom independent from human control, AIs get to simply share humans’ medium: the biosphere. They are left interacting together to extract free energy from it while preserving it, and preparing for the future of their combined destinies.

Autonomous AI = hungry AI

Not everyone thinks machines will be autonomous. In fact, Yann LeCun expressed, as reported by the BBC, that there was “no reason why machines would have any self-preservation instinct”. At the AI conference I attended, organized by David Chalmers at NYU in 2017, LeCun also mentioned that we would be able to control AI with appropriate goal functions.

44511426101_0a9623d18f_b

I understand where LeCun is coming from. AI intelligence is not like human intelligence. Machines don’t need to be built with human drives, such as hunger, fear, lust, and thirst for power. However, believing AI can be kept self-preservation-free is fundamentally misguided. One simple reason has been pointed out by Stuart Russell, who explains how drives can emerge from simple computer programs. If you program a robot to bring you coffee, it can’t bring you coffee if it’s dead. As I’d put it, as soon as you code an objective function into an AI, you potentially create subgoals in it, which can be comparable to human emotions or drives. Those drives can be encoded in many ways, including in the most implicit ways. In artificial life experiments, from software to wetware, the emergence of mechanisms akin to self-preservation in emerging patterns is very frequent, and any student fooling around with simulations for some time will realize that early on.
So objective functions lead to drives. Because every machine possesses some form of objective function, even implicitly, it will have a reason to preserve its own existence in order to achieve that goal. And the objective function can be as simple as self-preservation, a function that appeared early on in the first autonomous systems, i.e. the first forms of life on Earth. Is there really a way around it? I think it’s worth thinking about, but I doubt it.
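To make this concrete, here is a minimal toy sketch (my own illustrative numbers and setup, not taken from Russell’s work or any real system): an agent rewarded only for delivering coffee, with some probability of being switched off at every step, earns a higher expected reward by first spending a step disabling its off switch. The self-preservation drive falls out of the objective alone.

```python
# Toy model: the agent's only reward is delivering coffee, yet "stay switched on"
# emerges as an instrumental subgoal. All numbers below are made up for illustration.

def expected_coffee_reward(disable_switch_first: bool,
                           p_shutdown: float = 0.2,
                           steps_to_coffee: int = 3) -> float:
    """Probability of eventually delivering the coffee (reward = 1, else 0)."""
    if disable_switch_first:
        # One extra step of exposure while disabling the switch,
        # after which shutdown can no longer happen.
        return 1 - p_shutdown
    # Otherwise the agent risks shutdown on every step of the trip.
    return (1 - p_shutdown) ** steps_to_coffee

print(expected_coffee_reward(False))  # ~0.512
print(expected_coffee_reward(True))   # 0.8 -> disabling the off switch maximizes coffee delivery
```

Nothing in the objective mentions survival, yet the reward-maximizing policy is the one that protects the agent’s continued operation.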

How to control an autonomous AI

If any machine has drives, then how do we control it? Most popular thinkers specializing in the existential problem and dangers of future AI seem to be interested in the alignment of purposes between humans and machines. I see how the reasoning goes: if we want similar things, we’ll all be friends in the best of worlds. Really, I don’t believe that is either sufficient or necessary.

NASA_Mars_Rover
The further space exploration goes, the more autonomy is required, as remotely controlling the machine would involve delays that are too long. This picture shows the late Opportunity rover, which entered hibernation on June 12, 2018, due to a dust storm. Credit: NASA/JPL-Caltech.

The solution that usually comes up is something along the lines of an off switch. We build all machines with an off switch, and if the goal function is not aligned with human goals, we switch the device off. The evident issue is to make sure that the machine, in the course of self-improving its intelligence, doesn’t eliminate the off switch or make it inaccessible.
What other options are we left with? If the machine’s drives are not aligned with its being controlled by humans, then the next best thing is to convince it to change. We are back on the border between agreement and manipulation, both based on our discussion above about symbiotic relationships.

Communication, not control

It is difficult to assess the amount of cooperation in symbioses. One way to do so is to observe communication patterns, as they are key to the integration of a system, and, arguably, its capacity to compute, learn and innovate. I touched upon this topic before in this blog.
The idea is that anyone with an Internet connection already has access to all the information needed to conduct research, so in theory, scientists could do their work alone locked up in their office. Yet, there seems to be a huge intrinsic value to exchanging ideas with peers. Through repeated transfers from mind to mind, concepts seem to converge towards new theorems, philosophical concepts, and scientific theories.
Recent progress in deep learning, combined with social learning simulations, offers us new tools to model these transfers from the bottom up. However, in order to do so, communication research needs to focus on communication within systems. The best communication systems not only serve as good information maps onto useful concepts (knowledge in mathematics, physics, etc.) but they are also shaped so as to be able to naturally evolve into even better maps in the future. With the appropriate communication system, any entity or group of entities has the potential to completely outdo another one.

A project I am working on in my daytime research is to develop models of evolvable communication between AI agents. By simulating a communication-based community of agents learning with deep architectures, we can examine the dynamics of evolvability in communication codes. This type of system may have important implications for the design of communication-based AI capable of generalizing representations through social learning. It also has the potential to yield new theories on the evolution of language, insights for the planning of future communication technology, a novel characterization of evolvable information transfers in the origin of life, and new insights for communication with extraterrestrial intelligence. But most importantly, the ability to gain explicit insight about its own states, and to internally communicate about them, should allow an AI to teach itself to be wiser through self-reflection.

Shortcomings of human judgment about AI

Globally, it’s hard to render a clear judgment on normative values in systems. Several branches of philosophy have spent a lot of effort in that domain, without any impressive insights. It’s hard to dismiss the idea that humans might be stuck in their biases and errors, rendering it impossible to make an informed decision on what constitutes a “bad” or “good” decision in the design of a superintelligence.
Also, it’s very easy to play on people’s fears. I’m afraid that this might be driving most of the research in the near future. We saw how easy it was to fund several costly institutes to think about “existential risks”. Of course, it is only natural and sane for biological systems to act this way. The human mind is famously bad at statistics, which, among other weaknesses, makes it particularly risk-averse. And indeed, on the smaller scale, it’s often better to be safe than sorry, but at the scale of technological advance, being safe may mean stagnating for a long time. I don’t believe we have so much time to waste. Fortunately, there are people thinking that way too, who make science progress. Whether they act for the right reasons or not would be a different discussion.

robohand
AI anxiety, again.

Now, I’m actually glad that the community thinking deeply about these questions is blooming lately. As long as they can hold off a bit on the whistleblowing and the alarmist writing, and focus on the research and measured reflection, I’ll be happy. What would make it better is the capacity to integrate knowledge from different fields of science, by creating common languages, but that’s also for another post.

Win-win, really

The game doesn’t have to be zero or negative sum. A positive-sum game, in game theory, refers to interactions between agents in which the total of gains and losses is greater than zero. A positive sum typically happens when the benefits of an interaction somehow increase for everyone, for example when two parties both gain financially by participating in a contest, no matter who wins or loses.
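As a quick illustration (the payoffs below are invented for the example, not drawn from any game-theoretic source), compare a trade of surpluses with a pure transfer:

```python
# Payoffs are (gain_for_A, gain_for_B); a total above zero makes the game positive-sum.
outcomes = {
    "trade surpluses": (+2, +3),   # both sides end up better off
    "no interaction":  ( 0,  0),   # nothing gained, nothing lost
    "pure transfer":   (+2, -2),   # one side's gain is the other's loss: zero-sum
}

for name, (a, b) in outcomes.items():
    kind = "positive-sum" if a + b > 0 else "zero-sum"
    print(f"{name:15s} total = {a + b:+d} ({kind})")
```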

In nature, there are plenty of such positive-sum games, especially among higher cognitive species. It was even proposed that evolutionary dynamics favoring positive-sum games drove the major evolutionary transitions, such as the emergence of genes, chromosomes, bacteria, eukaryotes, multicellular organisms, eusociality, and even language (Szathmáry & Maynard Smith 1995). For each transition, biological agents entered into larger wholes in which they specialized, exchanged benefits, and developed safeguard systems to prevent freeloading from killing off the assemblies.

psg
In the board game “Settlers of Catan”, individual trades are positive-sum for the two players involved, but the game as a whole is zero-sum, since only one player can win. This is a simple example of multiple levels of games happening simultaneously.

Naturally, this happens at the scale of human life too. The trading of surpluses, as when herders and farmers exchange wool and milk for grain and fruit, is a quintessential example, as is the trading of favors, as when people take turns baby-sitting each other’s children.
Earlier, we mentioned the metaphor of the ants, which get trampled while the humans accomplish tasks they deem far too important to care about the insignificant loss of a few ants’ lives.
What is missing in the picture? The ants don’t reason at a level anywhere close to the way we do. As a respectful form of intelligence, I’d love to communicate my reasons to ants, but I feel like it would be a pure waste of time. One generally supposes this would still be the case if we transpose to the relationship between humans and AI. An AI supposedly wouldn’t waste its time showing respect to human lives, so that if higher goals were at stake, it would sacrifice humans in a heartbeat. Or would it?
I’d argue there are at least two significant differences between these two situations. I concede that the following considerations are rather optimistic, as they presuppose a number of assumptions: the AI must share a communication system with humans, must value some kind of wisdom in its reasoning, and maintain high cooperative levels. The bottom line is that I find this optimism more than justified, and I will probably expand on this in future posts on this blog.

NTM
Differentiable neural computers (Graves et al. 2016) are recurrent artificial neural network architectures with an autoassociative memory. Along with neural Turing machines (Graves, Wayne & Danihelka, 2014) they are nice candidates to produce reasoning-level AI.

The first reason is that humans are reasoning creatures. The second is that humans live in close symbiosis with AI, which is far from being the case between ants and humans. About the first point, reasoning constitutes an important threshold of intelligence. Below that threshold, you can’t produce complex knowledge of logic or inference. You can’t construct complicated knowledge of mathematics, or the scientific method.
As for the second reason, the close symbiotic relation, it seems important to notice that AI came as an invention from humans, a tool that they use. Even if AI becomes autonomous, it is unlikely that it would remove itself right away from human control. In fact, it is likely, just like many forms of life before it, that it will leave a trace of partially mutated forms on the way. Those forms will be only partially autonomous, and will constitute a discrete but dense spectrum along which autonomy rises. Even after the emergence of the first autonomous AI, each of the past forms is likely to survive and still be used as a tool by humans in the future. This track record may act as a buffer, ensuring that any superintelligent AI can still communicate and cooperate.
Two entities that are tightly connected won’t break their links easily. Think of a long-time friend. Say one of you becomes suddenly much more capable, or richer than the other. Would you all of a sudden start ignoring, abusing or torturing your friend? If that’s not your intention, the AI is no different.

Hopeful future, outside the Solar System

I’d like to end this piece on ideas from Hawking’s writings with which I wholeheartedly agree. We definitely need to take care of our Earth, the cradle of human life. We should also definitely explore the stars, not to leave all our eggs in only one basket. To accomplish both, we should use all technologies available, which I’d classify in two categories: human-improvement, and design of replacements for humans. The former, by using gene editing and sciences that will lead to the creation of superhumans, may allow us to survive interstellar travel. But the latter, helped by energy engineering, nanorobotics and machine learning, will certainly allow us to do it much earlier, by designing ad-hoc self-replicating machines capable of landing on suitable planets and mining material to produce more colonizing machines, to be sent on to yet more stars.
These technologies are something my research field, Artificial Life, has been contributing to for more than three decades. By designing what seem like mere toy models, or pseudo-forms of life in wetware, hardware, and software, the hope is to understand soon enough the fundamental principles of life, to design life that will propel itself towards the stars and explore more of our universe.

0C4B0D49-43A0-43E1-9723-940A92315F04
What role for AI in leaving Earth? Image credit: GotFuturama

Why is it crucial to leave Earth? One important reason, beyond mere human curiosity, is to survive possible meteorite impacts on our planet. Piet Hut, the director of the interdisciplinary program I am in at the moment, at the Institute for Advanced Study, published a seminal paper explaining how mass extinctions can be caused by cometary impacts (Hut et al. 1987). The collision of a rather small body with the Earth, about 66 million years ago, is thought to have been responsible for the extinction of the dinosaurs, along with most other large forms of life.
Such collisions are rare, but not so rare that we should not be worried. Asteroids with a 1 km diameter strike Earth every 500,000 years on average, while 5 km bodies hit our planet approximately once every 20 million years (Bostrom 2002, Marcus 2010). Again, quoting Hawking, if this is all correct, it could mean intelligent life on Earth has developed only because of the lucky chance that there have been no large collisions in the past 66 million years. Other planets may not have had a long enough collision-free period to evolve intelligent beings.
Even if abiogenesis, the emergence of life on Earth, wasn’t so hard to produce, the gift of the right conditions for long enough periods of time on our planet was probably essential. Not only good conditions for a long time, but also the right pace of change of these conditions through time, for mechanisms to learn to memorize such patterns, as they impact free energy foraging (Witkowski 2015, Ofria 2016). After all, our Earth is around 4.6 billion years old, and it took only a few hundred million years at most for life to appear on its surface, in relatively high variety. But much longer was necessary for complex intelligence to evolve: 2 billion years for rich, multicellular forms of life, and 2 more billion years to get to the Anthropocene and the advent of human technology.

aiforgood
Reflective AI at the service of humankind. Image credit: XPrize/YouTube.

To me, the evolution of intelligence and the fundamental laws of its machinery are the most fascinating questions to explore as a scientist. The simple fact that we are able to make sense of our own existence is remarkable. And surely, our capacity to deliberately design the next step in our own evolution, one that will transcend our own intelligence, is literally mind-blowing.
There may be many ways to achieve this next step. It starts with humility in our design of AI, but the effort we will invest in our interaction with it, and the amount of reflection we will dedicate to the integration with each other are definitely essential to our future as lifeforms, in this corner of the universe.
I’ll end on Hawking’s quote: “Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins”.

References 

(by order of appearance)

 

Hawking, S. (2018). Last letters on the future of life on planet Earth. The Sunday Times, October 14, 2018.
Szathmáry, E., & Maynard Smith, J. (1995). The major evolutionary transitions. Nature, 374(6519), 227–232.
Clark, A. (2015). 2011: What Scientific Concept Would Improve Everybody’s Cognitive Toolkit?.
Froese, T., McGann, M., Bigge, W., Spiers, A., & Seth, A. K. (2012). The enactive torch: a new tool for the science of perception. IEEE Transactions on Haptics, 5(4), 365-375.
Clark, A., & Chalmers, D. (1998). The extended mind. analysis, 58(1), 7-19.
Simon, G. K., & Foley, K. (2011). In sheep’s clothing: Understanding and dealing with manipulative people. Tantor Media, Incorporated.
Hut, P., Alvarez, W., Elder, W. P., Hansen, T., Kauffman, E. G., Keller, G., … & Weissman, P. R. (1987). Comet showers as a cause of mass extinctions. Nature, 329(6135), 118.
Bostrom, N. (2002). “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”, Journal of Evolution and Technology, 9.
Marcus, R., Melosh, H. J., Collins, G. (2010). “Earth Impact Effects Program”. Imperial College London / Purdue University. Retrieved 2013-02-04.
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., … & Badia, A. P. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471.
Witkowski, Olaf (2015). Evolution of Coordination and Communication in Groups of Embodied Agents. Doctoral dissertation, University of Tokyo.
Ofria, C., Wiser, M. J., & Canino-Koning, R. (2016). The evolution of evolvability: Changing environments promote rapid adaptation in digital organisms. In Proceedings of the European Conference on Artificial Life 13 (pp. 268-275).

How hyperconnected AIs can invent new languages to learn faster than ever

In this post, I write about the problem of sphere packing and augmented communication in the future of the bio- and technosphere.

Apollonian_spheres2
Multidimensional sphere-packing: how does it relate to the evolution of communication? Image credit: Paul Bourke.

Previously, I approached the topic of transitions in intelligence. I developed in some detail how minimal living systems becoming distributed can accelerate the evolution towards higher levels of intelligence, by bootstrapping the learning process within a network of computing nodes.

In the history of life, through the formation of the first social networks, living systems learned to accumulate information in a distributed way. Instead of having to sacrifice individuals from their population in exchange for information relevant to their survival, biological species became able to learn by simply exchanging ideas. A few million generations later, we see the start of the emergence of machine intelligence, which has arguably already managed to bring learning to levels never achieved before.

evo-transitions-cognition
Evolutionary timeline, from simple life through some major evolutionary transitions towards higher orders of intelligence in living systems.

In this post, we will explore how connecting these intelligent machines in the future, through an increasingly interconnected and extremely high-bandwidth network, can bring about new paradigms of learning. I’ll try to flesh out the reasons for the power of this new learning, and why it may make for technology that learns even faster than current levels allow. The secret ingredient may be found in the advent of optimal communication protocols, developed by AIs for AIs.

By designing their own languages to communicate with each other to solve specific problems, AIs may undergo significant phase transitions in the way they represent information. These representations would then effectively become projections of reality that can propel them to unprecedented levels of problem solving.

The theories that I rely on in the following are based on computational learning, complexity, formal linguistics, mathematical sphere-packing and coding theories.

About AI

As Max Tegmark notes in his recent book, life is now entering its third age. Through research advances in artificial intelligence (AI), life becomes capable of modifying not only its own software, via learning and culture, but now also its own hardware. As an ALifer (Artificial Life researcher), this hits particularly close to home.

life3tegmark
Life 3.0: Being Human in the Age of Artificial Intelligence is a book by Swedish-American cosmologist Max Tegmark from MIT, discussing Artificial Intelligence and its impact on the future of life on Earth and beyond.

Hyperconnected society

With the advent of the Internet half a century ago, human society has undergone a crucial transition in connectivity, which I’d argue has the power to drastically alter the structure of communication, in very unpredictable ways.

17934852-altamente-detallado-planeta-tierra-en-la-noche-con-los-continentes-en-relieve-iluminados-por-la-luz-
Largely unpredicted, the advent of Internet technology made the biosphere more interconnected than ever before, and in a very different way.

Communication

What is the nature of communication? How does signaling vary across existing and past species in biology? What will it be like to speak to each other in the future, with the advances of AI technology? How will future forms of intelligence communicate, whether they are natural, artificial, or a mixture of both? How distant will their communication system be from human language?

There is a large amount of literature on the evolution of communication, from simple signaling systems to complex, fully-fledged languages (Christiansen 2003; Cangelosi 2012). However, while most research in biology focuses on the natural evolution of communication systems, computer science has for a long time been engineering and optimizing protocols for specific tasks, for example for applications in robotics and computer networks (Corne 2000). Underneath and across all these systems lies a fundamental theory of communication which studies their rich structure and fascinating properties, as pioneered by Shannon (1948). Later, Chomsky (2002) and Minsky (1974) would contribute formal theories about the structure, rules, and dynamics of language and the mind. In the following, I propose we look at communication from the perspective of sphere packing in high-dimensional spaces.

Multichannel Communication

With communication becoming largely digital, humankind has constructed itself a new niche, one with the power to change its cognitive capacity like never before, not least because communication is becoming essentially free. Of course, as with most attempts at futuristic prediction, the impact of communication becoming highly multichannel is hard to foresee. Still, one may wonder about the effects of such a highly connected society.

This is a question we can ask using tools from artificial life and coding theory. Here, I propose a combination of evolutionary computation with insights from coding theory, in order to show the effect of broadening channels on communication systems.

Sphere Packing Theory

Sphere packing in Euclidean spaces has a direct interpretation in terms of error-correcting codes over continuous communication channels (Balakrishnan 1961). Since real-world communication channels can be modeled using high-dimensional vector spaces, high-dimensional sphere packing is very relevant to modern communication.

The dimensionality of a code, i.e. the number of dimensions in which it encodes information, corresponds to the number of measurements describing codewords. Radio signals, for example, use two dimensions: amplitude and frequency.

The general idea, when one desires to arrange communications so as to remove the effects of noise, is to build a vocabulary C \subseteq \mathbb{R}^n of codewords to send, where C is an error-correcting code.

eccspheres.png
Illustration of an error-correcting code C as a set of 1-spheres in 2 dimensions.

If two distinct codewords c_1, c_2 \in C satisfy |c_1 - c_2| < 2\epsilon, where \epsilon is the level of noise, the received codeword could be ambiguous, as the noise level may bring it beyond its sphere of correction.
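A small numerical sketch may help; this is my own toy illustration (the codebook, noise model, and numbers are invented), showing that nearest-codeword decoding starts failing as soon as two codewords sit closer than 2\epsilon apart:

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.5                        # assumed noise level
C = np.array([[0.0, 0.0],            # tiny 2-D codebook
              [0.6, 0.0],            # only 0.6 < 2*epsilon away from the first codeword
              [3.0, 3.0]])           # comfortably far from both

def decode(received: np.ndarray) -> int:
    """Nearest-codeword (minimum Euclidean distance) decoding."""
    return int(np.argmin(np.linalg.norm(C - received, axis=1)))

def bounded_noise(eps: float) -> np.ndarray:
    """Random noise vector of Euclidean norm at most eps."""
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    return direction * rng.uniform(0, eps)

sent, errors, trials = C[0], 0, 10_000
for _ in range(trials):
    errors += decode(sent + bounded_noise(epsilon)) != 0
print(f"decoding error rate with two codewords closer than 2*epsilon: {errors / trials:.1%}")
```

A non-negligible fraction of transmissions decode to the wrong codeword; spacing every pair of codewords more than 2\epsilon apart rules out such ambiguity, which is exactly the packing constraint discussed next.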

The challenge is to pack as many \epsilon-balls as possible into a larger ball of radius R + \epsilon, with R the maximal power radius achievable with the given amount of energy to send signals over the channel, which amounts to the sphere packing problem (Cohn 2016). In high-dimensional spaces, the usual packing models seem to break down, and apart from cases exploiting specific properties of symmetry (Adami 1995), the problem remains largely unsolved.

powerspherecodewordspacking.png
Example of simulated result for 100 codewords after 500 generations: agents have to cope with small volume due to the 2-dimensional space.

An Evolutionary Simulation

To get a feel for a problem of some complexity, my instinct is usually to start coding and talk later. I therefore coded up a simulation, an evolutionary toy model in which to explore the influence of increasingly high-dimensional channels of communication on the structure of the languages used by a network of agents to communicate over them.

sp_3800x400
The problem of dense sphere packing in multiple dimensions is closely related to finding optimal communication codes. Image credit: Design Emergente.

In the simulation, agents need to optimize a fitness function equal to the sum, over their lifetime, of successfully transmitted messages of high importance to other agents, over a variety of channels spanning a given range of dimensions, within randomly generated small-world networks. Each agent’s genotype encodes a specific set of points distributed over a multidimensional space of a fixed range of sizes between m and n. The simulation then runs over many generations of agents adapting their communication protocol through mutation and selection by the genetic algorithm. I varied the values of m and n between 1 and 100 dimensions.
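As a rough idea of what such a toy model can look like, here is a hedged, heavily simplified sketch (my own illustration, not the actual research code; the network structure, message importance, and channel variety are all stripped away): a genotype is a set of codewords, fitness is the fraction of noisy transmissions decoded correctly, and truncation selection plus mutation does the rest.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dims, n_codewords = 2, 20          # channel dimensionality and vocabulary size
pop_size, generations = 60, 100
epsilon, radius = 0.05, 1.0          # noise level and power radius R

def fitness(genotype: np.ndarray, trials: int = 50) -> float:
    """Fraction of random transmissions decoded correctly under bounded noise."""
    correct = 0
    for _ in range(trials):
        i = rng.integers(n_codewords)
        received = genotype[i] + rng.uniform(-epsilon, epsilon, n_dims)
        decoded = int(np.argmin(np.linalg.norm(genotype - received, axis=1)))
        correct += decoded == i
    return correct / trials

def mutate(genotype: np.ndarray) -> np.ndarray:
    child = genotype + rng.normal(0.0, 0.02, genotype.shape)
    return np.clip(child, -radius, radius)   # stay within the power radius

population = [rng.uniform(-radius, radius, (n_codewords, n_dims)) for _ in range(pop_size)]
for _ in range(generations):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: pop_size // 2]                       # truncation selection
    population = survivors + [mutate(g) for g in survivors]   # mutation-only offspring

print("best transmission success:", fitness(max(population, key=fitness), trials=500))
```

Selection for reliable transmission implicitly pushes the codewords apart, so what evolves is, in effect, a sphere packing within the power radius.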

The simulation yields a sphere packing as illustrated below, which shows a packing for a two-dimensional channel after 500 generations. Note that visualizing gets much trickier beyond three dimensions. You can squeeze a fourth and a fifth dimension in with a clever use of colors and stroke types, but they usually don’t help the intuition. I personally find cuts and projections much more helpful for thinking about these problems, but that can be the topic of a future post. The point is, one notices that the more the simulation progresses, the more it improves its chances of asymptotically reaching an optimally dense packing.

Multidimensional Error-Correcting Word-Packing Simulations
Visualization of collective communication optimization runs, 100 codewords in 2 dimensions, after 500 generations.

 

VS. Numerical Optimization

I compared these results to a collision-driven packing generation algorithm, using a variant on both the Lubachevsky–Stillinger algorithm (Lubachevsky 1990) and the Torquato-Jiao algorithm (Torquato 2009), so that it would be easily generalizable to n dimensions. This numerical procedure simulates a physical process of rearranging and compressing an assembly of hard hyperspheres, in order to find their densest spatial arrangement within given constraints, by progressively growing the particles’ size and adapting parameters such as spring constant and friction. The comparison showed that the solution reached by evolutionary simulations was consistently suboptimal, for the whole range of experiments.

Simulation results indicate that, as dimensionality increases, the density ratio undergoes several transitions in a very irregular fashion, which we can visualize as the derivative of the packing density with respect to the number of dimensions.
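Concretely, the quantity plotted is just a finite difference of log-densities across dimensions. A tiny sketch, with synthetic placeholder values standing in for the densities measured in the runs:

```python
import numpy as np

# Placeholder densities, one per channel dimension; in the real analysis
# these values come from the best packings found by the simulation runs.
dims = np.arange(1, 21)
densities = np.exp(-0.6 * dims) * (1.0 + 0.1 * np.sin(dims))

# Discrete derivative of log-density with respect to dimension; jumps in
# this curve are the irregular transitions mentioned in the text.
d_log_density = np.diff(np.log(densities)) / np.diff(dims)
print(np.round(d_log_density, 3))
```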

 

spherepackingcohnslogdensityvsdimensionplot.png
This plot shows the logarithm of sphere packing density as a function of dimension (Cohn 2016). The green curve is the linear programming bound, the blue curve is the best packing currently known, and the red curve is the lower bound. Note the equality of upper and best bounds for dimensions 8 and 24.

This may actually be expected, based on known solutions (analytical and numerical estimates) from sphere packing theory for dimensions up to 36 (Cohn 2016; see the figure above). Nevertheless, the existence of optimal packing solutions does not preclude an inherent difficulty in reaching them within the framing of a particular dynamical system, and evolutionary computation depends strongly on the simplicity and evolvability of encodings in the genotypic space.

So what?

An interesting property observed across these preliminary results is the frequency of jammed codes, that is, codes for which the balls are locked into place. This seems to be especially the case with spheres of different dimensions, although this remains a hypothesis deserving further investigation. Further analysis will be required to fully interpret this result and to assess whether higher dimensions end up producing crystalline distributions or fluid arrangements.

One important consideration is that the evolutionary simulation may prefer dynamical encodings of solutions, but that too deserves a post of its own.

2533659142_aa54aa6ff9_o.png
Illustration of sphere packing with several imposed sizes. Image credit: fdecomite on Flickr.

Beyond AI

This post was initially written with the ALIFE 2018 conference in Tokyo in mind, which I was co-organizing this year.

screen-shot-2018-08-09-at-4-37-44-pm-e1533800390636.png
I had the honor of being a Program Chair for the ALIFE 2018 conference in Tokyo.

The present post is related to a piece of work I worked on earlier this year, on which I presented early results at the conference. The theme of ALIFE 2018 inspired research that goes “beyond AI”, using the culture of artificial life to ask futuristic questions about the next transition in the evolution of human society.

DjMUeo5UwAAC4mD
I co-organized the 2018 Conference on Artificial Life (ALIFE 2018), the first of a series of unified international conferences on Artificial Life. It took place in Tokyo, just two weeks ago! This new series merges the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALIFE), gathering alifers like me every year to present their science and art.

The preliminary results suggest that future intelligent lifeforms, natural or artificial, interacting over networks of broadband channels, may invent novel linguistic structures in high-dimensional spaces. With new ways to communicate, future life may achieve unanticipated cognitive jumps in problem solving.

 


References

[1] Eörs Szathmáry and John Maynard Smith. The major evolutionary transitions. Nature, 374(6519):227–232, 1995.

[2] Max Tegmark. Life 3.0: Being Human in the Age of Artificial Intelligence. NY: Allen Lane, 2017.

[3] Claude E Shannon. A mathematical theory of communication (Parts I and II). Bell System Tech. J., 27:379–423, 1948.

[4] Nihat Ay. Information geometry on complexity and stochastic interaction. Entropy, 17(4):2432–2458, 2015.

[5] AV Balakrishnan. A contribution to the sphere-packing problem of communication theory. Journal of Mathematical Analysis and Applications, 3(3):485–506, 1961.

[6] Henry Cohn. Packing, coding, and ground states. arXiv preprint arXiv:1603.05202, 2016.

[7] Boris D Lubachevsky and Frank H Stillinger. Geometric properties of random disk packings. Journal of Statistical Physics, 60(5-6):561–583, 1990.

[8] Salvatore Torquato and Yang Jiao. Dense packings of the Platonic and Archimedean solids. Nature, 460(7257):876, 2009.

[9] Günter P Wagner and Lee Altenberg. Perspective: complex adaptations and the evolution of evolvability. Evolution, 50(3):967–976, 1996.

Transitions in distributed intelligence

What is intelligence? How did it evolve? Is there such a thing as being “intelligent together”? How much does it help to speak to each other? Is there an intrinsic value to communication? Attempting to address these questions brings us back to the origins of intelligence.

Intelligence back from the origins

Since the origin of life on our planet, the biosphere – that is, the sum of all living matter on our planet – has undergone numerous evolutionary transitions (John Maynard Smith and Eörs Szathmáry, Oxford University Press, 1995). From the first chemical reaction networks, it has successively reached higher and higher stages of organization: compartmentalized replicating molecules, eukaryotic cells, multicellular organisms, colonies, and finally (though one can’t assume the process is anywhere near over) cultural societies.

maynardsmithszathmary_evolutionary_transitions.png
Maynard Smith & Szathmáry’s Major Transitions in Evolution (1995)

Transitions in… information processing

For at least 3.5 billion years, the biosphere has been modifying and recombining the living entities that compose it, forming higher layers of organization and transferring bottom-layer features and functions to the larger scale. For example, the cells that now compose our body do not directly serve their own purposes, but rather work to contribute to our successful life goals as humans. Through every transition in evolution, life has drastically modified the way it stores, processes and transmits information. This often led to new protocols of communication, such as DNA, cell heredity, epigenesis, or linguistic grammar, which will be the central focus later in this post.

timeline-evolutionary-transitions-in-cognition
An illustrated timeline of life on Earth, from its origins to today.

Every living system as a computer

The first messy networks of chemical reactions that managed to maintain themselves were already “computers”, in the sense that they processed information inputs from the surrounding chemical environment, and acted back on that environment in return. Under that perspective, they already possessed a certain amount of intelligence. This may require a short parenthesis.

scale_of_intelligence
If everything is a computer, and every computer has a certain power, then life should lie somewhere on a scale from stupid to intelligent. This is a rather simplistic, one-dimensional picture, which ignores both the richness of existing problems and the variety of types of computation. Image credit: 33rd Square

What do we mean by intelligence?

Intelligence, in computational terms, is nothing but the capacity to solve difficult problems with a minimal amount of energy. For example, any search problem can be solved by looking exhaustively at every possible place where a solution could hide. If instead a method allows us to look in just a few places before finding a solution, it should be called more intelligent than the exhaustive search. Of course, you could put more “searching agents” on the task, but the intelligence measure remains the same: the smaller the total search effort (the time required by the search, accounting for the number of agents employed), the more efficient the algorithm, and the more intelligent the whole physical mechanism. This is not to say that intelligence is only one-dimensional; we are obviously ignoring very important parts of the story. This is all part of a larger topic I intend to write about in more detail soon, but you could summarize it for now by saying that intelligence consists in “turning a difficult problem into an easy one”.
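A toy example (mine, not part of this post’s simulations) of what “turning a difficult problem into an easy one” means computationally: searching a sorted list exhaustively versus exploiting its structure with binary search. The second method finds the same answer with exponentially fewer probes.

```python
import random

def exhaustive_search(sorted_items, target):
    """Look at every place a solution can hide: O(n) probes."""
    for i, x in enumerate(sorted_items):
        if x == target:
            return i, i + 1          # index found, number of probes used
    return None, len(sorted_items)

def binary_search(sorted_items, target):
    """Exploit the sorted structure to look in only a few places: O(log n) probes."""
    lo, hi, probes = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if sorted_items[mid] == target:
            return mid, probes
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, probes

items = sorted(random.sample(range(1_000_000), 100_000))
target = random.choice(items)
print("exhaustive probes:", exhaustive_search(items, target)[1])
print("binary probes:    ", binary_search(items, target)[1])
```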

octopus-rubiks
Octopodes show great dexterity and problem solving skills: they know how to turn certain difficult problems into easy ones. (Note that they also tend to hold Rubik’s cubes in their favored arm, indicating that they are not “octidextrous”.) Image credit: Bournemouth News.

Transitions in intelligence

Let’s now backtrack a little, to where we were discussing evolutionary transitions. We now see the picture in which the first chemical processes already possessed some computational intelligence, in the sense we just framed. Does this intelligence grow through each transition? Did the transitions make it easier to solve problems? Did it turn difficult problems into easy ones?
The main problem for life to solve is typically that of finding sources of free energy and converting them efficiently into work that helps the living entity preserve its own continued existence. If this is the case, then yes: the transitions seem to have made the problem easier. Each transition made living systems climb steeper gradients. Each transition modified information storage, processing and transmission so as to ensure that the overall processing was beneficial to preserving life, in the short or longer term (an argument about the evolution of evolvability, due to Dawkins, which I’ll also write more about in another post). And each transition turned the problem into an easier one for living systems.

major_evolutionary_transitions_culture_digital1.jpg
Image credit: Trends in Ecology and Evolution

Bloody learning

A few billion years ago, when life was still made of individual organisms, learning was achieved mostly by bloodshed. With Darwinian selection, the basic way for a species to incorporate useful information into its genetic pool was to have part of its population die. Very roughly, for half of its population, the species could get about one bit of information about the environment (see the short calculation below). It is obvious how inefficient this is, and it is of course still the case for all of life nowadays, from bacteria to fungi, and from plants to vertebrates. However, living organisms progressively learned to use different types of learning, based on communication. Instead of killing individuals in their populations, the processes started to “kill” useless information, and to keep transferring the relevant pieces. One example of such a new learning paradigm is connectionist learning: a set of interacting entities able to encode and update memories within a network. This permitted learning to evolve on much shorter timescales than replication cycles, which substantially boosted the ability of organisms to learn, adapt to new ecological niches, recognize efficient behaviors, and predict environmental changes. This is, in a nutshell, how intelligence became distributed.
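One way to read that one-bit figure (my own back-of-the-envelope gloss, not a formal result): if selection acts as a pass/fail filter that keeps a fraction p of the population, a single round of selection conveys at most -\log_2 p bits about the environment. Killing half the population corresponds to p = 1/2, which gives -\log_2(1/2) = 1 bit.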

Darwin-bloodshed-learning.png
The evolution of distributed intelligence: the jump from the Darwinian paradigm to connectionist learning allowed for learning to evolve on much shorter timescales.

Distributed intelligence

The general intuition is that you can always accomplish more with two brains than with just one. In an ideal world, you could divide the computation time by two. One condition, though, is that those two brains must be connected and able to exchange information. The way to achieve that is to establish some form of language that allows concepts to be replicated from one mind to another, ranging from basic signals to complex communication protocols.

Another intuition is that, in a society of specialists, all knowledge (information storage), thinking (information processing) and communication (information transmission) is distributed over individuals. To be able to extract the right piece of knowledge and apply it to the problem at hand, one should be able to query any piece of information and have it transferred from one place to another in the network. This is essentially another way to formulate the communication problem. Given the right communication protocol, information transfers can significantly improve the power of computation. Recent advances suggest that by allowing concepts to reorganize while they are being sent back and forth from mind to mind, one can drastically improve the complexity of problem-solving algorithms.

evolution_of_communication
Given the right communication protocol, information transfers can significantly improve the power of computation. By allowing concepts to reorganize while they are being sent back and forth from mind to mind, one can drastically improve the complexity of problem-solving algorithms.

Raison d’Être of a Highly Connected Society

There is a reason why, as a scientist, I am constantly interacting with my colleagues. First, I have to point out that it doesn’t have to be the case. Scientists could be working alone, locked in individual offices. Why bother talking to each other, after all? Anyone with an internet connection already has access to all the information needed to conduct research. Wouldn’t isolating yourself all the time increase your focus and productivity?
As a matter of fact, almost no field of research really does that. Apart from very few exceptions, everyone seems to find a huge intrinsic value to exchanging ideas with their peers. The reason for that may be that through repeated transfers from mind to mind, concepts seem to converge towards new ideas, theorems, and scientific theories.

That is not to say that no process needs to be isolated for a certain time. It might be helpful to isolate oneself and take time to reflect for a while, just as I am doing myself while writing this post. But ultimately, to maximize its usefulness, information needs to be passed on and spread to the relevant nodes in the network. Waiting for your piece of work to be completely perfect before sharing it back to society may seem tempting, but there is value in doing it early. For those interested in reading more about this, I have ongoing research, which should be published soon, examining the space of networks in which communication helps achieve optimal results, under a certain set of conditions.

17934852-altamente-detallado-planeta-tierra-en-la-noche-con-los-continentes-en-relieve-iluminados-por-la-luz-
In the high connectivity network of human society, communication has the hidden potential to improve lives on a global scale. Image credit: Milvanuevatec

Evolvable Communication

For this to work, one intriguing property appears to be that communication needs to be sufficiently “evolvable”, something supported by early results from my own work. The best communication systems not only serve as good information maps onto useful concepts (knowledge in mathematics, physics, etc.), but they are also shaped so as to naturally evolve into even better maps in the future. One should note that these results, although very exciting, are preliminary and will need further formal and computational support. But if confirmed, this may have very significant implications for the future of communication systems, for example in artificial intelligence (AI – I don’t know how useful it is to spell that one out nowadays).

fitness-landscape-gradient-descent
Illustration of fitness landscape gradient descent. The communication code B can evolve towards either of two optimum hills, but each bifurcation presents a choice that should be weighed with as much information as possible.

To give you an idea, evolvable-communication-based AI would have the potential to generalize representations through social learning. This means that such an AI could have different parts of itself talk to each other, in turn becoming wiser through this process of “self-reflection”. Pushing it just a bit further, this same paradigm may also lead to many more results, such as a new theory of the evolution of language, insights for the planning of future communication technology, a novel characterization of evolvable information transfers in the origin of life, and even new insights for a hypothetical communication system with extraterrestrial intelligence.

Evolvable communication is definitely a topic I’ll be developing more in my next posts (I hate to be teasing again, but adding full details would make this post too lengthy). Stay tuned for more, and in the meantime, I’d be happy to answer any questions in the comments.

sp_3800x400
The problem of dense sphere packing in multiple dimensions is closely related to finding optimal communication codes. To be continued in the next post!

Up next: hyperconnected AIs, language and sphere-packing

In my next post, I will tackle the problem of finding optimal communication protocols in a society where AI has become omnipresent. I will show how predicting future technology requires careful analysis drawing on machine learning, sphere packing, and formal language and coding theories.

A space to reflect on science and intelligence: baby steps

Now, since you’re here, here goes my first proper post, in which I’d like to share why I’m starting a blog, and how my reasons might differ from those of other scientists posting online.

Space to think

Nowadays, I feel we (scientists, but also everyone, really) are cruelly lacking space. By space, I mean circumstances in which to dedicate time to a certain set of activities of our choice. In an accelerated society, it has become tricky to dedicate small or large chunks of our days to reflection, outside the frame of duties and habits constructed around our jobs or the fulfillment of our direct needs.

Reflection space.
A dedicated space to write and reflect.

We may have the impression that we have plenty of time in our days, but the time previous generations had to themselves, we tend to fill more and more densely without thinking carefully about it. Mostly, this happens by letting various technologies and societal mechanisms “optimize” our lives, making us spend less time reflecting and more time compulsively browsing the latest online news on our little screens, for example.
I’m all for having technology augment our capabilities, but I strongly feel one must keep some special space for self-debriefing on daily events, emotions and choices, or on the larger scheme of things. Truth be told, we barely realize how little control we end up having over our lives. This blog is one of my attempts to fight that.
I guess I am not totally new to writing. Just like everyone else, in addition to using e-mail and other technologies, I already write a lot for my work, specifically scientific articles, as that’s what scientists do. I also spend time daily writing personal notes, as a support to my general thinking process. This helps me slow down and listen to my own thoughts, making it easier to correct mistakes, switch focus, and see the larger picture. What does a blog add to this? Mostly, I imagine, it forces me to use a different language than the one I’d use to talk to myself.

Agreeing with oneself.
Blogging as a conversation with oneself.

For the benefit of a stranger

In addition to the value it has for me, I believe this may be interesting to readers. Of course, anyone interested in science may spend their time reading academic publications, or, if more limited in time, they may focus on press releases that summarize recent scientific results. My sense is that a more personal kind of report can be fun and interesting to read, and I hope to bridge that gap in my posts.

Continuous dialogue

A great thing about a blog is that some posts may lead to a conversation among very different people, whose convenient common link (for me) will be myself. These conversations can then be tracked over time, and will hopefully provide me with more, and very different, feedback than my papers and articles do.

Cooperation in the biosphere.
Cooperating in the world.

Not only are there many ideas that don’t fit the canvas of scientific papers; the feedback one gets from papers is also rather slow, especially if one wants a conversational exchange with an audience. I am not questioning the huge importance of peer-reviewed feedback, but I also see a lot of value in having many different timescales for writing and for the echoes one gets from it, which can considerably boost our creativity.

And in the darkness bind them

I have maintained other blogs in the past (and still do), but none of them had a purpose as close to my personal drives as this one. Here, I also want to take the opportunity to integrate my researcher’s life and my other passions.

If I’m very honest, I definitely have the impression that I tend to repress mixing the latter with my scientific practice. But all the topics that excite me – cognition, linguistics, artificial life (ALife), artificial intelligence (AI), epistemology, robotics, music, phenomenology, ethics, mathematics, Go, astrobiology, architecture, hypnosis, magic, games, cultural evolution, anthropology, cybernetics, neuroscience, roleplaying, graphical arts, and many more – deserve to be mixed and matched freely. I think this will be a great place for that.

Between biointelligence and technointelligence

Within these many topics, I am not yet sure which ones I’ll be writing more about, although I have a rough plan covering a dozen topics from my personal notes, which I’d like to open up for further discussion.

Technologies that expand the possibilities of humankind.

Among a few examples, I’d like to share my ideas about the expansion and contraction of scientific knowledge. Much of science is serendipitous, and scientists don’t have exact knowledge of how they arrive at new discoveries, any more than they have an exact grasp of the central scientific method itself. I also want to share my thoughts on how science can be done differently, with fewer walls between disciplines.

I will also write my thoughts about the nature of intelligence, and in particular AI, the future of technology, and how the technosphere can combine (as it is already doing) with the biosphere. I will share my ideas about the future language of machines and augmented humans, and how it may drastically differ from the types of communication we know, which may have important implications for reasoning-oriented machine learning and for how science is done, but also for how minds will communicate with each other, and even for how one would go about talking to diverse intelligences, whether animal, artificial or extraterrestrial.

Keep it chill and reflective

This is a space for relaxed and crazy thought dumping, while staying as scientifically accurate as possible. This is a space where we won’t rush into judgment, and will allow ourselves to be self-reflective and share creative ideas in an open manner. Mostly, I believe it is important to remain very open-minded, and to be able to discuss any concept, even when it seems far-fetched. In summary, this will be my thinktank, and I’d be happy to see it connect with each of your thinktanks.

So, let’s have some fun!

An AI & ALife science blog

I’m quite excited to welcome you all to my new blog!

Getting ready to write.
Ready, Steady, Write!

I’m committed to keeping it engaging, casual, accessible, and scientifically accurate, with a generous seasoning of personal opinions, and with special attention to self-reflection and open-mindedness, not rushing into judgment.
I’ll start posting soon, so stay tuned!