We live in a time of information abundance. It has become as plentiful—and as carefully engineered to exploit our every weakness—as modern processed food. The more content is optimized to manipulate our attention, the more our cognitive patterns are hijacked. Digital platforms are not only distracting; they reshape what we pay attention to and how we think.
The same cognitive patterns that once helped us survive by seeking vital free energy and novelty now keep us locked in cycles of endless scrolling, technostress, and mental malnutrition. Worse, the more these systems optimize to capture our engagement, the harder it becomes to find truthful, nourishing ideas for the growth of humanity. Are we witnessing the edge of a Great Filter for intelligent life itself?
Overcoming this challenge will demand either drastically augmenting our cognitive ability to process the overflow of information, developing a brand new kind of immune system for the mind, or rapidly catalyzing the development of social care and compassion for other minds. Perhaps at this point, there is no choice but to pursue all three.
In this Age of Information Abundance, humans face an engineered deluge of manipulative digital information that exploits our every instinct; escaping it will demand strengthening our minds, mental immune systems, and shared compassion. Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Jul 4, 2025.
Digital Addiction and Mind Starvation in the Attention Economy
This post was inspired by science communicator and author Hank Green’s recent thoughts on whether the internet is best compared to cigarettes or food (Green, 2025)—a brilliant meditation on addiction and the age of information abundance. Hank ultimately lands on the metaphor of food: the internet isn’t purely harmful like cigarettes. It’s become closer in dynamics to our modern food landscape, full of hyper-processed, hyper-palatable products. Some are nutritious. Many are engineered to hijack our lives.
This idea isn’t new. Ethologist Niko Tinbergen, who shared the 1973 Nobel Prize in Physiology or Medicine, demonstrated that instinctive behaviors in animals could be triggered by what he coined supernormal stimuli—exaggerated triggers that mimic something once essential for survival—using creatively unrealistic dummies such as oversized fake eggs that birds preferred, red-bellied fish models that provoked attacks from male sticklebacks, and artificial butterflies that elicited mating behavior (Tinbergen, 1951). Philosopher Daniel Dennett (1995) pointed out how evolved drives can be hijacked by such supernormal stimuli, and how our minds, built by evolutionary processes to react predictably to certain signals, can thus be taken advantage of, often without our conscious awareness. Chocolate is a turbo-charged version of our drive to seek high-energy food, which we come to prefer to a healthier diet of fruits and vegetables. Nesting birds prefer oversized fake eggs over their own. Cuteness overload rides on our innate caregiving instincts to respond powerfully to cartoonish or exaggerated baby-like features such as large eyes, round faces, and small noses. Pornography exaggerates sexual cues to hijack evolved reproductive instincts, ending up producing stronger reactions than natural encounters.
For most of our biological history, we humans have evolved in a world of information scarcity, where every piece of knowledge could be vital. Today, we have merged onto the central lane of the Information Age, where digital information has become a “supernormal stimulus”—a term coined by Nobel laureate Niko Tinbergen—capable of manipulating our every thought and decision. Book reference: Tinbergen, N. (1951). The study of instinct. Oxford University Press.
And here we are with the internet. For most of our biological history, we have evolved in a world of information scarcity, where every piece of knowledge could be vital. Today, we have merged onto the central lane of the Information Age highway, pelted by an endless storm of content engineered to feel urgent, delicious, or enraging (Carr, 2010; McLuhan, 1964). It’s not that infotaxis—meant in the pure sense of information foraging, like chemotaxis is to chemical gradients—is inherently bad. It can be as helpful as Dennett’s food, sex, or caretaking-related instincts. But the environment has changed faster than our capacity to navigate it wisely. The same drives that helped our ancestors survive can now keep us scrolling endlessly, long past the point of nourishment. As Adam Alter (2017) describes, modern platforms are designed addictions—custom-engineered experiences that feed our craving for novelty while diminishing our sense of agency. Meanwhile, Shoshana Zuboff (2019) has shown how surveillance capitalism exploits these vulnerabilities systematically, capturing and monetizing attention at an unprecedented scale. And before that, pioneers like BJ Fogg (2003) documented how persuasive technology can be designed to influence attitudes and behaviors, effectively reshaping our habits in subtle but powerful ways.
This is the core problem we face: the more these systems optimize for engagement, the harder it becomes to access truthful, useful, nourishing information—the kind that helps us think clearly and live well. What looks like an endless buffet of knowledge is often a maze of distraction, manipulation, and empty mental calories. In other words, the very abundance of content creates an environment where the most beneficial ideas are the hardest to find and the easiest to ignore.
Defending Global Access to Truthful Information
Shall we pause for a moment and acknowledge how ridiculously challenging it has become to search for information online? Some of us can certainly remember times and versions of the Internet in which it wasn’t all that difficult to find an article, a picture, or a quote we’d seen before. Of course, information was once easier to retrieve simply because there was less of it; part of today’s overwhelming abundance comes from the growing incentives to flood us with manipulative content. But beyond simply the challenge of locating something familiar, it has also become much harder to know what source to trust. The same platforms that bury useful information under endless novelty also strip away the signals that once helped us judge credibility, leaving us adrift in a sea where misinformation and insight are nearly indistinguishable.
Red Queen’s race between cyberattacks from ads-based social media and systems protecting our mental health. Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Jul 4, 2025.
So how do we adapt? I think we have three broad paths. One is to augment our cognitive capacity—to build better tools, shared knowledge systems, and personal practices that help us sort signal from noise. This path relies on an open-ended Red Queen’s race between advertising-driven mental cyberattacks and the protective systems trying to figure out how to transmit information safely. It’s not impossible. Just costly—perhaps costly enough to render this route nonviable. This is the path of cryptography.
The second is to bury the signal itself. We can hide cognitive paths, and the channels we really use to communicate survival information with each other or within our own bodies, through a collection of local concealment strategies and obfuscation mechanisms. Even our body’s immune system, rooted in the cryptographic paradigm (Krakauer, 2015), works by obscuring and complicating access to its key processes to resist exploitation by external—as well as internal—parasites. Biological processes like antigen mimicry, chemical noise for camouflage, or viral epitope masking do involve concealment and obfuscation of vital information—meaning access to the organism’s free energy. Protecting by building walls—which means exploiting inherent asymmetries present in the underlying physical substrate—can incur great costs. In many cases, making information channels covert, albeit at the expense of signal openness and transparency, can be a cheaper tactic. This second path is called steganography.
The third and final path is to increase trust and care among humans—and the other agents, biological or artificial, with whom we co-create our experience in the world. Everyone can help tend each other’s mental health and safety. Just as we have learned to protect and heal each other’s bodies, we will need to learn to protect each other’s minds. This means allowing our emerging, co-evolving systems of human and non-human agents to develop empathy, shared responsibility, and trust, so that manipulative systems and predatory incentives cannot so easily exploit our vulnerabilities. This was the focus of my doctoral dissertation, which examined the fundamental principles and practical conditions by which a system composed of populations of competing agents could eventually give rise to trust and cooperation. The solution calls for reimagining the architectures of our social, technological, and economic institutions so they align with mutual care, and cultivating diverse mixtures of cultures that value psychological safety at an individual and global level. This is the path of care, compassion, and social resilience.
We’re only beginning to build the cultural tools to deal with this. Naming the dynamic—seeing how our attention gets manipulated and why it makes truth harder to reach—is the first step. Maybe the question isn’t whether the internet is cigarettes or food. Maybe it’s whether we can learn to distinguish empty calories from real sustenance, and whether we can do it together—helping each other flourish in a world too full.
Overcoming Mental Cyberattacks with Tools for Care and Reflection
I’ve seen people throwing technology at it: why don’t we just use ChatGPT to steward our consumption of information? I’m all in favor of having AI mentor us—it holds much value, and I’m myself engaged in various projects developing this kind of technology for mindful education and personalized mentorship. But this presents a risk we can’t afford to ignore: these new proxies are themselves highly susceptible to being hijacked. Jailbreaks, hijacking, man-in-the-middle attacks, you name it. LLMs are weak, and—take it from someone whose training came from cryptography and who is spending large efforts on LLM cyberdefense—they will remain so for many years. LLMs haven’t even withstood the test of any significant time the way our original cognitive algorithms have: our whole biology relies on layer upon layer of protections and immune systems shielding us from ourselves—from exposure to injuries, germs, harmful bacteria, viruses, and harm of countless sorts and forms. This is why we don’t need to live in constant fear of death and suffering on a daily basis. But technology is a new vector for new diseases—many of them still unidentified.
To protect our mental health, education ought to adapt quickly in the Age of Information Overload. Credit: Jamie Gill / Getty Images
We need to learn to deal with such new threats in this changing world. The internet. Entertainment. Our ways of living healthily. How we raise our children. The entire design of education itself must evolve. We need new schools—institutions built not just to transmit knowledge but to help us develop the discernment, resilience, and collective care needed to thrive amid infinite information. It’s no longer about preparing for yesterday’s challenges. It’s about learning to navigate a world where our instincts are constantly manipulated and where reflection, curiosity, and shared wisdom are our best defenses against mental malnutrition. It’s time to incorporate care and reflection into our technolives—admittedly, we can’t pretend our lives won’t be technological too; we need to accept it and focus on symbiotizing with technology rather than fighting it—which in turn requires catalyzing a protective niche of care- and reflection-oriented technosociety around it. Let’s develop a healthcare system for our minds.
AI communication channels may represent the next major technological leap, driving more efficient interaction between agents—artificial or not. While recent projects like Gibberlink demonstrate AI optimizing exchanges beyond the constraints of human language, fears of hidden AI languages deserve to be properly debunked. The real challenge is balancing efficiency with transparency, ensuring AI serves as a bridge—not a barrier—in both machine-to-machine and human-AI communication.
At the ElevenLabs AI hackathon in London last month, developers Boris Starkov and Anton Pidkuiko introduced a proof-of-concept program called Gibberlink. The project features two AI agents that start by conversing in human language, recognizing each other as AI, before switching to a more efficient protocol using chirping audio signals. The demonstration highlights how AI communication can be optimized when freed from the constraints of human-interpretable language.
While Gibberlink points to a valuable technological direction in the evolution of AI-to-AI communication—one that has rightfully captured public imagination—it remains but an early-stage prototype relying so far on rudimentary principles from signal processing and coding theory. Actually, Starkov and Pidkuiko themselves emphasized that Gibberlink’s underlying technology isn’t new: it dates back to the dial-up internet modems of the 1980s. Its use of FSK modulation and Reed-Solomon error correction to generate compact signals, while a good design, falls short of modern advances, leaving substantial room for improvement in bandwidth, adaptive coding, and multi-modal AI interaction.
Gibberlink, Global Winner of ElevenLabs 2025 Hackathon London. The prototype demonstrated how two AI agents started a normal phone call about a hotel booking, then discovered they were both AI, and decided to switch from spoken English to ggwave, a more efficient open-standard data-over-sound protocol. Code: https://github.com/PennyroyalTea/gibberlink Video Credit: Boris Starkov and Anton Pidkuiko
Media coverage has also misled the public, overstating the risks of AI concealing information from humans and fueling speculative narratives—sensationalizing real technical challenges into false yet compelling storytelling. While AI-to-AI communication can branch out of human language for efficiency, it has already done so across multiple domains of application without implying any deception or harmful consequences from obscured meaning. Unbounded by truth-inducing mechanisms, social media have amplified unfounded fears about malicious AI developing secret languages beyond human oversight—ironically, more effort in AI communication research may actually enhance transparency by discovering safer ad-hoc protocols, reducing ambiguity, embedding oversight meta-mechanisms, and in turn improving explainability and human-AI collaboration, ensuring greater accountability.
In this post, let us take a closer look at this promising trajectory of AI research, unpacking these misconceptions while examining its technical aspects and broader significance. This development builds on longstanding challenges in AI communication, representing an important innovation path with far-reaching implications for the future of machine interfaces and autonomous systems.
Debunking AI Secret Language Myths
Claims that AI is developing fully-fledged secret languages—allegedly to evade human oversight—have periodically surfaced in the media. While risks related to AI communication exist, such claims are often rooted in misinterpretations of the optimization processes that shape AI behavior and interactions. Let’s explore three examples. In 2017, Facebook AI agents made headlines for streamlining negotiation dialogues; far from being a surprising emergent phenomenon, this merely amounted to a predictable outcome of reinforcement learning, mistakenly read by humans as a cryptic language (Lewis et al., 2017). Similarly, a couple of years ago, OpenAI’s DALL·E 2 seemed to respond to gibberish prompts, which sparked widespread discussion and was often misinterpreted as AI developing a secret language. In reality, this behavior is best explained by how AI models process text through embedding spaces, tokenization, and learned associations rather than intentional linguistic structures. What seemed like a secret language to some may be closer to low-confidence neural activations, akin to mishearing lyrics, than to a real language.
Source: Daras & Dimakis (2022)
Models like DALL·E (Ramesh et al., 2021) map words and concepts as high-dimensional vectors, and seemingly random strings can, by chance, land in regions of this space linked to specific visuals. Built from a discrete variational autoencoder (VAE), an autoregressive decoder-only transformer similar to GPT-3, and a CLIP-based pair of image and text encoders, DALL·E processes text prompts by first tokenizing them using Byte-Pair Encoding (BPE). Since BPE breaks text into subword units rather than whole words, even gibberish inputs can be decomposed into meaningful token sequences for which the model has learned associations. These tokenized representations are then mapped into DALL·E’s embedding space via CLIP’s text encoder, where they may, by chance, activate specific visual concepts. This understanding of training and inference mechanisms highlights intriguing quirks, explaining why nonsensical strings sometimes produce unexpected yet consistent outputs, with important implications for adversarial attacks and content moderation (Millière, 2023). While there is no proper hidden language to be found, analyzing the complex interactions within model architectures and data representations can reveal vulnerabilities and security risks, which are likely to occur at their interface with humans and will need to be addressed.
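As a minimal sketch of the mechanics at play (a hypothetical illustration assuming the Hugging Face transformers package, access to the public CLIP tokenizer checkpoint, and the made-up prompt studied by Daras & Dimakis, 2022), one can watch a BPE tokenizer decompose gibberish into familiar subword pieces:

```python
# Minimal sketch, assuming the `transformers` package and access to the public
# "openai/clip-vit-base-patch32" checkpoint: BPE decomposes even a made-up
# prompt into subword tokens the model has learned embeddings for.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

for prompt in ["a cat sitting on a table", "Apoploe vesrreaitais"]:
    tokens = tokenizer.tokenize(prompt)            # BPE subword pieces
    ids = tokenizer.convert_tokens_to_ids(tokens)  # indices into the embedding table
    print(f"{prompt!r} -> {tokens} -> {ids}")
```

Each of those token IDs indexes a learned embedding, which is how a nonsense string can still steer the model toward particular visual concepts.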
A third and more creative connection may be found in the reinforcement learning and guided search domain, with AlphaGo, which developed compressed, task-specific representations to optimize gameplay, much like expert shorthand (Silver et al., 2017). Rather than relying on explicit human instructions, it encoded board states and strategies into efficient, unintuitive representations, refining itself through reinforcement learning. The approach somewhat aligns with the argument by Lake et al. (2017) that human-like intelligence requires decomposing knowledge into structured, reusable compositional parts and causal links, rather than mere brute-force statistical correlation and pattern recognition—as Deep Blue did back in the day. However, AlphaGo’s ability to generalize strategic principles from experience used different mechanisms from human cognition, illustrating how AI can develop domain-specific efficiency without explicit symbolic reasoning. This compression of knowledge, while opaque to humans, is an optimization strategy, not an act of secrecy.
Illustration of AlphaGo’s representations being able to capture tactical and strategic principles of the game of go. Source: Egri-Nagy & Törmänen (2020)
Fast forward to the recent Gibberlink prototype, in which AI agents switch from English to a sound-based protocol for efficiency: this is a deliberately programmed optimization. Media narratives framing it as a dangerous slippery slope towards AI secrecy overlook that such instances are explicit design choices, not emergent deception. These systems are designed to prioritize efficiency in communication, not to obscure meaning, although there might be some effects on transparency—which can be carefully addressed and mediated, should that become the point of focus.
The Architecture of Efficient Languages
In practice, AI-to-AI communication naturally gravitates toward faster, more reliable channels, such as electrical signaling, fiber-optic transmission, and electromagnetic waves, rather than prioritizing human readability. However, one does not preclude the other, as communication can still incorporate “subtitling” for oversight and transparency. The choice of a communication language does not inherently prevent translations, meta-reports, or summaries from being generated for secondary audiences beyond the primary recipient. While arguments could be made that the choice of language influences ranges of meanings that can be conveyed—with perspectives akin to the Sapir-Whorf hypothesis and related linguistic relativity—this introduces a more nuanced discussion on the interaction between language structure, perception, and cognition (Whorf, 1956; Leavitt, 2010).
Language efficiency, extensively studied in linguistics and information theory (Shannon, 1948; Gallager, 1962; Zipf, 1949), drives AI to streamline interactions much like human shorthand. In an interesting piece of research applying information-theoretic tools to natural languages, Coupé et al. (2019) showed that, regardless of speech rate, languages tend to transmit information at an approximate rate of 39 bits per second. This, in turn, suggests a universal constraint on processing efficiency, which again connects with linguistic relativity. While concerns about AI interpretability and security are valid, they should be grounded in technical realities rather than speculative fears. Understanding how AI processes and optimizes information clarifies potential vulnerabilities—particularly at the AI-human interface—without assuming secrecy or intent. AI communication reflects engineering constraints, not scheming, reinforcing the need for informed discussions on transparency, governance, and security.
This figure from Coupé et al.’s study illustrates that, despite significant variations in speech rate (SR) and information density (ID) across 17 languages, the overall information rate (IR) remains consistent at approximately 39 bits per second. This consistency suggests that languages have evolved to balance SR and ID, ensuring efficient information transmission. Source: Coupé et al. (2019)
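As a back-of-the-envelope illustration (the numbers below are hypothetical, chosen only to make the arithmetic visible, not taken from the study), the information rate is simply the product of speech rate and information density:

```python
# Toy arithmetic with hypothetical values: IR (bits/s) = SR (syllables/s) x ID (bits/syllable).
examples = {
    "fast, low-density language": {"SR": 7.8, "ID": 5.0},   # hypothetical values
    "slow, high-density language": {"SR": 5.2, "ID": 7.5},  # hypothetical values
}
for name, v in examples.items():
    ir = v["SR"] * v["ID"]
    print(f"{name}: SR={v['SR']} syll/s x ID={v['ID']} bits/syll -> IR={ir:.1f} bits/s")
```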
AI systems are becoming more omnipresent, and will increasingly need to interface with each other in an autonomous manner. This will require the development of more specialized communication protocols, either by human design, or by continuous evolution of such protocols—and probably various mixtures of both. We may then witness emergent properties akin to those seen in natural languages—where efficiency, redundancy, and adaptability evolve in response to environmental constraints. Studying these dynamics could not only enhance AI transparency but also provide deeper insights into the future architectures and fundamental principles governing both artificial and human language.
When AI Should Stick to Human Language
Despite the potential for optimized AI-to-AI protocols, there are contexts where retaining human-readable communication is crucial. Fields involving direct human interaction—such as healthcare diagnostics, customer support, education, legal systems, and collaborative scientific research—necessitate transparency and interpretability. However, it is important to recognize that even communication in human languages can become opaque due to technical jargon and domain-specific shorthand, complicating external oversight.
AI can similarly embed meaning through techniques analogous to human code-switching, leveraging the idea behind the Sapir-Whorf hypothesis (Whorf, 1956), whereby language influences cognitive structure. AI will naturally gravitate toward protocols optimized for their contexts, effectively speaking specialized “languages.” In some cases, this is explicitly cryptographic—making messages unreadable without specific decryption keys, even if the underlying language is known (Diffie & Hellman, 1976). AI systems could also employ sophisticated steganographic techniques, embedding subtle messages within ordinary-looking data, or leverage adversarial code obfuscation and data perturbations familiar from computer security research (Fridrich, 2009; Goodfellow et al., 2014). These practices reflect optimization and security measures rather than sinister intent.
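For a concrete, if toy, flavor of the steganographic end of that spectrum (my own minimal example, far simpler than the schemes surveyed by Fridrich, 2009), here is least-significant-bit embedding of a short message inside a cover buffer:

```python
# Toy least-significant-bit steganography (illustrative only): hide a short
# message in the lowest bit of each byte of a cover buffer, then recover it.
# Real schemes also model the cover's statistics so the embedding resists detection.
def hide(cover: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for this message"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit         # overwrite only the lowest bit
    return stego

def reveal(stego: bytes, n_bytes: int) -> bytes:
    bits = [stego[i] & 1 for i in range(n_bytes * 8)]
    return bytes(sum(bits[j * 8 + i] << i for i in range(8)) for j in range(n_bytes))

cover = bytearray(range(256)) * 2                  # stand-in for image or audio samples
assert reveal(hide(cover, b"hi"), 2) == b"hi"
```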
Gibberlink operates by detecting when two AI agents recognize each other as artificial intelligences. Upon recognition, the agents transition from standard human speech to a faster data-over-audio format called ggwave. The modulation approach employed is Frequency-Shift Keying (FSK), specifically a multi-frequency variant. Data is split into 4-bit segments, each transmitted simultaneously via multiple audio tones in a predefined frequency range (either ultrasonic or audible, depending on the protocol settings). These audio signals cover a 4.5kHz frequency spectrum divided into 96 equally spaced frequencies, utilizing Reed-Solomon error correction for data reliability. Received audio data is decoded using Fourier transforms to reconstruct the original binary information.
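To make the scheme concrete, here is a deliberately simplified sketch in the spirit of the protocol described above, not the actual ggwave implementation: it sends nibbles one tone at a time rather than in parallel, uses assumed tone spacing and duration, and omits Reed-Solomon coding and framing:

```python
# Simplified multi-frequency FSK sketch (assumed parameters, not ggwave's):
# each 4-bit nibble selects one of 16 tones; decoding picks the dominant
# frequency of each symbol window via an FFT.
import numpy as np

SAMPLE_RATE = 48_000
TONE_DURATION = 0.08      # seconds per symbol (assumed)
BASE_FREQ = 1_875.0       # Hz, assumed start of the band
FREQ_STEP = 50.0          # Hz between adjacent tones (assumed spacing)

def encode(payload: bytes) -> np.ndarray:
    """Map each 4-bit nibble of the payload to a pure sine tone."""
    t = np.linspace(0, TONE_DURATION, int(SAMPLE_RATE * TONE_DURATION), endpoint=False)
    chunks = []
    for byte in payload:
        for nibble in (byte >> 4, byte & 0x0F):            # high nibble, then low
            chunks.append(np.sin(2 * np.pi * (BASE_FREQ + nibble * FREQ_STEP) * t))
    return np.concatenate(chunks)

def decode(signal: np.ndarray) -> bytes:
    """Recover nibbles by locating the peak frequency of each symbol window."""
    n = int(SAMPLE_RATE * TONE_DURATION)
    nibbles = []
    for i in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
        peak_hz = np.argmax(spectrum) * SAMPLE_RATE / n
        nibbles.append(int(round((peak_hz - BASE_FREQ) / FREQ_STEP)) & 0x0F)
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))

audio = encode(b"hello agent")
assert decode(audio) == b"hello agent"
```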
Although conceptually elegant, this approach remains relatively basic compared to established methods in modern telecommunications. For example, advanced modulation schemes such as Orthogonal Frequency-Division Multiplexing (OFDM), Spread Spectrum modulation, and channel-specific encoding techniques like Low-Density Parity-Check (LDPC) and Turbo Codes could dramatically enhance reliability, speed, and overall efficiency. Future AI-to-AI communication protocols will undoubtedly leverage these existing advancements, transcending the simplistic methods currently seen in demonstrations such as Gibberlink.
This is a short demonstration of ggwave in action. A console application, a GUI desktop program and a mobile app are communicating through sound using ggwave. Source code: https://github.com/ggerganov/ggwave Credit: Georgi Gerganov
New AI-Mediated Channels for Human Communication
Beyond internal AI-to-AI exchanges, artificial intelligence increasingly mediates human interactions across multiple domains. AI can augment human communication through real-time translation, summarization, and adaptive content filtering, shaping our social, professional, and personal interactions (Hovy & Spruit, 2016). This growing AI-human hybridization blurs traditional boundaries of agency, raising complex ethical and practical questions. It becomes unclear who authors a message, makes a decision, or takes an action—the human user, their technological partner, or a specific mixture of both. With authorship, of course, comes responsibility and accountability. Navigating this space is a tightrope walk, as over-reliance on AI risks diminishing human autonomy, while restrictive policies may stifle innovation. Continuous research in this area is key. If approached thoughtfully, AI can serve as a cognitive prosthetic, enhancing communication while preserving user intent and accountability (Floridi & Cowls, 2019).
Thoughtfully managed, this AI-human collaboration will feel intuitive and natural. Rather than perceiving AI systems as external tools, users will gradually incorporate them into their cognitive landscape. Consider the pianist analogy: When an experienced musician plays, they no longer consciously manage each muscle movement or keystroke. Instead, their cognitive attention focuses on expressing emotions, interpreting musical structures, and engaging creatively. Similarly, as AI interfaces mature, human users will interact fluidly and intuitively, without conscious translation or micromanagement, elevating cognition and decision-making to new creative heights.
Ethical issues and how to address them were addressed by our two panelist speakers, Dr. Pattie Maes (MIT Media Lab) and Dr. Daniel Helman (Winkle Institute), at the final session of the New Human Interfaces Hackathon, part of Cross Labs’ annual workshop 2025.
What Would Linguistic Integration Between Humans and AI Entail?
Future AI-human cognitive integration may follow linguistic pathways familiar from human communication studies. Humans frequently switch between languages (code-switching), blend languages into creoles, or evolve entirely new hybrid linguistic structures. AI-human interaction could similarly generate new languages or hybrid protocols, evolving dynamically based on situational needs, cognitive ease, and efficiency.
Ultimately, Gibberlink offers a useful but modest illustration of a much broader trend: artificial intelligence will naturally evolve optimized communication strategies tailored to specific contexts and constraints. Rather than generating paranoia over secrecy or loss of control, our focus should shift toward thoughtfully managing the integration of AI into our cognitive and communicative processes. If handled carefully, AI can serve as a seamless cognitive extension—amplifying human creativity, enhancing our natural communication capabilities, and enriching human experience far beyond current limits.
Gibberlink’s clever demonstration underscores that AI optimization of communication protocols is inevitable and inherently beneficial, not a sinister threat. The pressing issue is not AI secretly communicating; rather, it’s about thoughtfully integrating AI as an intuitive cognitive extension, allowing humans and machines to communicate and collaborate seamlessly. The future isn’t about AI concealing messages from us—it’s about AI enabling richer, more meaningful communication and deeper cognitive connections.
References
Coupé, C., Oh, Y. M., Dediu, D., & Pellegrino, F. (2019). Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche. Science Advances, 5(9), eaaw2594. https://doi.org/10.1126/sciadv.aaw2594
Cowls, J., King, T., Taddeo, M., & Floridi, L. (2019). Designing AI for social good: Seven essential factors. Available at SSRN 3388669.
Daras, G., & Dimakis, A. G. (2022). Discovering the hidden vocabulary of dalle-2. arXiv preprint arXiv:2206.00169.
Diffie, W., & Hellman, M. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22(6), 644-654.
Egri-Nagy, A., & Törmänen, A. (2020). The game is not over yet—go in the post-alphago era. Philosophies, 5(4), 37.
Fridrich, J. (2009). Steganography in digital media: Principles, algorithms, and applications. Cambridge University Press.
Gallager, R. G. (1962). Low-density parity-check codes. IRE Transactions on Information Theory, 8(1), 21-28.
Goodfellow, I., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
Lewis, M., Yarats, D., Dauphin, Y. N., Parikh, D., & Batra, D. (2017). Deal or no deal? End-to-end learning for negotiation dialogues. arXiv:1706.05125.
Millière, R. (2023). Adversarial attacks on image generation with made-up words: Macaronic prompting and the emergence of DALL·E 2’s hidden vocabulary.
Ramesh, A. et al. (2021). Zero-shot text-to-image generation. arXiv:2102.12092.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379-423.
Silver, D. et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359.
Whorf, B. L. (1956). Language, thought, and reality. MIT Press.
Zipf, G. K. (1949). Human behavior and the principle of least effort: An introduction to human ecology. Addison-Wesley.
As AI continues to evolve, conversations have started questioning the future of traditional programming and computer science education. The rise of prompt engineering—the art of crafting inputs to lead AI models to generate specific outputs—has led many to believe that mastering this new skill could replace the need for deep computational expertise. While this perspective does capture a real ongoing shift in how humans interact with technology, it overlooks the essential role of human intelligence in guiding and collaborating with AI. This seems a timely topic, and one worthy of discussion—one that we’ll be exploring at this year’s Cross Labs Spring Workshop on New Human Interfaces.
Can we forget about programming? As AI reshapes our interaction with technology, the future is unlikely to erase programming—rather, it will redefine it as another language between humans and machines. Image Credit: Ron Lach
The Evolving Role of Humans in AI Collaboration
Nature demonstrates a remarkable pattern of living systems integrating new tools into themselves, transforming them into essential components of life itself. For example, mitochondria, once free-living bacteria, became integral to eukaryotic cells, taking on the role of energy producers through endosymbiosis. Similarly, the incorporation of chloroplasts enabled plants to harness sunlight for photosynthesis, and even the evolution of the vertebrate jaw exemplifies how skeletal elements were repurposed into functional innovations. These examples highlight nature’s ability to adapt and integrate external systems, offering a profound analogy for how humans might collaborate with AI to augment and expand our own capabilities.
Current AI systems, regardless of their sophistication, remain tools that require some kind of human direction to achieve meaningful outcomes. Effective collaboration with AI involves not only instructing the machine but also understanding its capabilities and limitations. Clear communication of our goals ensures that AI can process and act upon them accurately. This process transcends mere command issuance; it quickly turns into a dynamic, iterative dialogue where human intuition and machine computation synergize toward a desirable outcome.
Jensen Huang’s Take on Democratizing Programming
“The language of programming is becoming more natural and accessible.” –NVIDIA CEO Jensen Huang at Computex Taipei, emphasizing how interfacing with computers is undergoing a radical shift. Credit: NVIDIA 2023
NVIDIA Founder and CEO Jensen Huang highlighted this paradigm shift in these words:
“The language of programming is becoming more natural and accessible. With AI, we can communicate with computers in ways that align more closely with human thought. This democratization empowers experts from all fields to leverage computing power without traditional coding.”
The quote is from a couple of years ago, in Huang’s Computex 2023 keynote address. This transformation means that domain specialists—not just scientists and SWEs—can now harness AI to drive their work forward. But this evolution doesn’t render foundational knowledge in computer science obsolete; it merely underscores the imminent change in the nature of human-machine interactions. Let us dive further into this interaction.
Augmenting Human Intelligence Through New Cybernetics
A helpful approach to fully realizing AI’s potential is to focus on augmenting human intelligence in ways akin to those of the infamous cybernetic technologies (Wiener, 1948; Pickering, 2010). “Infamous” only because of how the field abruptly fell out of favor in the late 1970s, as the era’s technical limitations unfortunately led to its decline, along with growing skepticism about its implications for agency, autonomy, and human identity. It was nevertheless not a poor approach by any count, and may soon become relevant again as technology evolves into substrates that better support its goals.
El Ajedrecista: the first computer game in history. Norbert Wiener (right), the father of cybernetics—the science studying communication and control in machines and living beings—is shown observing the pioneering chess automaton created by Leonardo Torres Quevedo, here demonstrated by his son at the 1951 Paris Cybernetic Conference. Image credit: Wikimedia Commons
Such cybernetic efforts towards human augmentation aim to couple human cognition with external entities such as smart sensors, robotic tools, and autonomous AI models, through a universal science of dynamical control and feedback. This integration between humans and other systems fosters an enactive and rich, high-bandwidth communication between humans and machines, enabling a seamless exchange of information and capabilities.
Embedding ourselves within a network of intelligent systems, and possibly other humans as well, enhances our cognitive and sensory abilities. Such a symbiotic relationship would allow us to better address complex challenges, by efficiently processing and interpreting vast amounts of data. Brain-computer interfaces (BCIs) are examples of such technologies that facilitate direct communication between the human brain and external devices, offering promising avenues for cognitive enhancement (Lebedev & Nicolelis, 2017). Another example is augmented reality (AR), which overlays real-world percepts and virtual knobs with digital additions to strengthen our connection to physical reality, thus enhancing our experience and handling of it (Billinghurst et al., 2015; Dargan et al., 2023). If they manage to seamlessly blend physical and virtual realities, AR systems have the power to amplify human cognitive and sensory capabilities, empowering us to navigate our problem spaces in contextually meaningful yet intuitive ways.
Humble Steps: The Abacus
Augmenting human intelligence with tools. Here, we observe the classic but no less impressive example of the abacus being used to assimilate advanced mathematical skills. Video Credit: Smart Kid Abacus Learning Pvt. Ltd. 2023
We just mentioned a couple of rather idealized, advanced-stage technologies for cybernetically coupling humans to new renderings of the problem spaces they navigate. But a new generation of cybernetics need not start at the most complex and technologically advanced level. In fact, the roots of such a technology can already be found in simple, yet powerful and transformative tools such as the abacus. That simple piece of wood, beads, and strings has done a tremendous job externalizing our memory and computation, extending our mind’s ability to process numbers and solve problems. In doing so, it has demonstrated how even the most modest-looking tool may amplify cognitive abilities in a groundbreaking way, laying the basis for more sophisticated augmentations that may merge human intelligence with machine computation.
Not only did the abacus extend human mathematical abilities, but it did so by enactively—through an active, reciprocal interaction—stretching our existing mechanisms of perception and understanding, enhancing how we sense, interpret, interact with, and make sense of the world around us. The device doesn’t replace, but instead really amplifies our existing innate sensorial mechanisms. It brings our mathematical cognitive processes into a back-and-forth ensemble of informational exchanges between our inner understanding and the tool’s mechanics, through which we effectively—and, again, enactively—extend. Similarly, today’s cybernetic technologies can start as modest, focused augmentations—intuitive, accessible, and seamlessly integrated—building step by step toward more profound cognitive symbiosis.
Extending One’s Capabilities Through Integration of Tools
The principle of extending human capabilities through tools, as exemplified by the abacus, can be generalized in a computational framework, where integrating various systems may enhance their combined power to solve problems. For a simple example of this phenomenon, in Biehl and Witkowski (2021) we considered how the computational capacity of a specific region in an elementary cellular automaton (ECA) can be expanded, measured by the number of functions it can compute.
Modern cybernetics: a person seamlessly working on their laptop, using a prosthetic arm, exemplifying an extension of their own embodiment and cognitive ability. This somewhat poetically illustrates Biehl and Witkowski (2021)’s work on how the number of functions computed by a given agent expands as the agent’s computational tools do. Credit: Anna Shvets
Interestingly, this research led to the discovery that while coupling certain agents (regions of the ECA) with certain devices they may use (adjacent regions) typically increases the number of functions they are able to compute, some strange instances occurred as well. Sometimes, in spite of “upgrading” the devices used by agents (enlarging the regions adjacent to them), the computing abilities of the agents ended up dropping instead of increasing. This is the computational analog of people upgrading to a new phone, a more powerful laptop, or a bigger car, only to end up being behaviorally handicapped by that very upgrade.
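Here is a toy sketch in the spirit of that setup (not the paper’s exact formalism; the rule, widths, and step count below are arbitrary choices): an “agent” region’s repertoire of computable functions is counted for each width of an adjacent “device” region.

```python
# Toy version of counting functions computed by an ECA region: for each fixed
# configuration of an adjacent "device" region, record the map from the agent
# region's initial states to its states after a few steps, and count how many
# distinct maps appear. Widening the device often, but not always, grows the count.
from itertools import product

def eca_step(state, rule):
    """One synchronous update of a finite ECA row with periodic boundaries."""
    n = len(state)
    return tuple((rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
                 for i in range(n))

def region_functions(rule, agent_w, device_w, steps=3, pad=4):
    funcs = set()
    for device in product((0, 1), repeat=device_w):
        outputs = []
        for agent in product((0, 1), repeat=agent_w):
            state = tuple(agent) + tuple(device) + (0,) * pad   # rest of the ring is quiescent
            for _ in range(steps):
                state = eca_step(state, rule)
            outputs.append(state[:agent_w])                      # read the agent region back out
        funcs.add(tuple(outputs))
    return funcs

for device_w in (0, 1, 2, 3):
    count = len(region_functions(rule=110, agent_w=3, device_w=device_w))
    print(f"device width {device_w}: {count} distinct functions")
```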
This shows how this mechanism of functional extension, much like the abacus’s amplification of human cognition, extends nicely to a wide range of situations, and yet should be treated with great care, as it may greatly affect the agency of its users. Now that we have established the scene, let’s dive into the specific and important tool of programming.
Programming: Towards A Symbiotic Transition
Programming. What an impactful, yet fundamental tool invented by humans. Was it even invented by humans? The process of creating and organizing instructions—which can later be followed by oneself or a distinct entity to carry out specific tasks—can be found in many instances in nature. DNA encodes genetic instructions, ant colonies use pheromones to dynamically compute and coordinate complex tasks, and even “simple” chemical reaction networks (CRNs) display many features of programming, as certain molecules act as instructions that catalyze their own production, effectively programming reactions that propagate themselves when specific conditions are met. Reservoir computing would be a recent example of this ubiquity of programming in the physical world.
Programming, a human invention, or a human discovery? This image—titled “an artist’s illustration of artificial intelligence”—was inspired by neural networks used in deep learning. Credit: Novoto Studio
Recently, many have argued that it may no longer be worthwhile to study traditional programming, as prompt engineering and natural language interactions dominate. While it’s true that the methods for working with AI are changing, the essence of programming—controlling another entity to achieve goals—remains intact. Sure, programming will look and feel different. But has everyone forgotten how far we’ve come since Ada Lovelace’s pioneering work and the earliest low-level programs? Modern software engineering typically implies working with numerous abstraction layers built atop the machine and OS levels—using high-level languages like Python, front-end frameworks such as React or Angular, and containerization tools like Docker and Kubernetes. This doesn’t have to be seen as so different from the recent changes with AI. At the end of the day, in one way or another, humans must use some language—or, as we’ve established, a set of languages at different levels of abstraction—to combine their own cognitive computation with the machine’s (or, really, with other humans’ too, at the scale of society) so as to work together and produce meaningful results.
This timeless collaboration—akin to the mitochondria or chloroplast story we touched on earlier—will always involve the establishment of a two-way communication. On one hand, humans must convey goals and contexts to machines, as AI cannot achieve objectives without clear guidance. On the other, machines must send back outputs, whether as solutions, insights, or ongoing dialogue, refining and improving the achievement of goals. This bidirectional exchange is the foundation of a successful human-machine partnership. Far from signaling the end of programming, it signals its natural evolution into a symbiotic relationship where human cognition and machine computation amplify one another.
The seamless integration of prosthetics in a human’s everyday life. Credit: Cottonbro Studio
What this piece is generally gesturing at is that by integrating cybernetic technologies to seamlessly couple human cognition with increasingly advanced, generative computational tools, we can build a future where human intelligence is not replaced but expanded. It will be key for these designs to enable enactive, interactive, high-bandwidth communication that fosters mutual understanding and goal alignment. On this path, the future of programming isn’t obsolescence—it is transformation into a symbiotic relationship. By embracing collaborative, iterative human-machine interactions, we can amplify human intelligence, creativity, and problem-solving, unlocking possibilities far beyond what either can achieve alone.
Human-Machine Interaction: Who Is Accountable? This image represents ethics research on understanding human involvement in data labeling. Credits: Ariel Lu
Human-machine interaction is inherently bidirectional: machines provide feedback, solutions, and insights that we interpret and integrate, while humans contribute context, objectives, and ethical considerations that guide AI behavior. This continuous dialogue enhances problem-solving by combining human creativity and contextual understanding with machine efficiency and computational power. As we navigate this evolving technological landscape, focusing on responsible AI integration will be critical. Our collaboration with machines should aim to augment human capabilities while respecting societal values, goals, and, importantly, the well-being of all life.
As Norbert Wiener put it:
“The best material model for a cat is another, or preferably the same cat. In other words, should a material model thoroughly realize its purpose, the original situation could be grasped in its entirety and a model would be unnecessary. Lewis Carroll fully expressed this notion in an episode in Sylvie and Bruno, when he showed that the only completely satisfactory map to scale of a given country was that country itself” (Rosenblueth & Wiener, 1945).
“The best material model for a cat is another, or preferably the same cat.” – Norbert Wiener. Image Credit: Antonio Lapa/ CC0 1.0
True innovation lies in complementing the complexities of life rather than merely replicating them. In order to augment human intelligence, we ought to design our new technologies—especially AI—so as to build on and harmonize with the rich tapestry of human knowledge, the richness of experience, and the physicality of natural systems, rather than attempting to replace them with superficial imitations. Our creations are only as valuable as their ability to reflect and amplify the intricate nature of life and the world itself, enabling us to pursue deeper understanding, open-ended creativity, and meaningful purpose.
The ultimate goal is clear: to design a productive connective tissue for human ingenuity that seamlessly syncs up with the transformative power of machines, and with a diverse set of substrates beyond the trendiest purviews of today’s computing. By embracing and amplifying what we already have, we may unlock new possibilities and redefine adjacent possibles that transcend the boundaries of human imagination. This approach is deeply rooted in a perspective of collaboration and mutual growth, working toward a world in which technology remains a force for empowering humanity, not replacing it.
Allow me to digress into a final thought. Earlier today, a friend shared an insightful thought about how Miyazaki’s films, such as Kiki’s Delivery Service, seem to exist outside typical Western narrative patterns – a thought itself prompted by them watching this video. While the beloved Disney films of my childhood may have offered some great morals and lessons here and there, they definitely fell short in showing what a kind world would look like. Reflecting on this, I considered how exposure to Ghibli’s compassionate, empathetic storytelling might have profoundly shaped my own learning journey as a child – although it did get to me eventually. The alternative worldview these movies offer gently nudges us toward being more charitable. By encouraging us to view all beings as inherently kind, well-intentioned, and worthy of empathy and care, perhaps this compassionate stance is exactly what we need as we strive toward more meaningful, symbiotic collaborations with technology, and certainly with other humans too.
References:
Biehl, M., & Witkowski, O. (2021). Investigating transformational complexity: Counting functions in elementary cellular automata regions. Complexity. https://doi.org/10.1155/2021/7501405
Billinghurst, M., Clark, A., & Lee, G. (2015). A survey of augmented reality. Foundations and Trends in Human–Computer Interaction, 8(2–3), 73–272. https://doi.org/10.1561/1100000049
Lebedev, M. A., & Nicolelis, M. A. L. (2017). Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiological Reviews, 97(2), 767–837. https://doi.org/10.1152/physrev.00027.2016
Rosenblueth, A., & Wiener, N. (1945). The role of models in science. Philosophy of science, 12(4), 316-321. https://www.jstor.org/stable/184253
Smart Kid Abacus Learning Pvt. Ltd. [@smartkidabacuslearningpvt.ltd]. (ca. 2022). 12th National & 5th International Competition held in Pune [Video]. YouTube. Retrieved February 22, 2025, from https://youtu.be/YtFK5Dl-bww?si=-XS164m4LnrC4L5N.
In technology, less can truly be more. Scarcity doesn’t strangle progress—it refines it. DeepSeek, cut off from high-end hardware, and Japan, facing a demographic reckoning, are proving that limitations don’t merely shape innovation—they accelerate it. From evolutionary biology to AI, history shows that the most profound breakthroughs don’t originate from excess, but from the pressure to rethink, reconfigure, and push beyond imposed limitations.
When a system—biological, economic, or digital—encounters hard limits, it is forced to adapt, sometimes in a radical way. This can lead to major breakthroughs, ones that would never arise in conditions of structural and resource abundance.
In such situations of constraint, innovation can be observed to follow a pattern—not of mere survival, but of reinvention. What determines whether a bottleneck leads to stagnation or transformation is not the limitation itself, but how it is approached. By embracing constraints as creative fuel rather than obstacles, societies can design a path where necessity doesn’t just drive invention—it defines the next frontier of intelligence.
Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Feb 11, 2024.
DeepSeek: What Ecological Substrate for a Paradigm Shift?
Recently, DeepSeek, the Chinese AI company the AI world has been watching, achieved a considerable technological feat, though one often misrepresented in the popular media. On January 20, 2025, DeepSeek released its R1 large language model (LLM), developed at a fraction of the cost incurred by other vendors. The company’s engineers successfully leveraged reinforcement learning with rule-based rewards, model distillation for efficiency, and emergent behavior networks, among other techniques, enabling advanced reasoning despite compute constraints.
The company first published R1’s big brother V3 in December 2024, a Mixture-of-Experts (MoE) model that allowed for reduced computing costs without compromising on performance. R1 then focused on reasoning, and it surpassed ChatGPT to become the top free app on the US iOS App Store just about a week after its launch. This is certainly remarkable for a model trained using only about 2,000 GPUs, roughly a whole order of magnitude fewer than current leading AI companies use. The training process was completed in approximately 55 days at a cost of $6M, 10% or so of the expenditure by US tech giants like Google or Meta for comparable technologies. To many, DeepSeek’s resource-efficient approach challenges the global dominance of American AI models, leading to significant market consequences.
DeepSeek R1 vs. other LLM Architectures (Left) and Training Processes (Right). Image Credit: Analytics Vidhya.
Is a Bottleneck Necessary?
DeepSeek’s impressive achievement finds its context at the center of a technological bottleneck. Operating under severe hardware constraints—cut off from TSMC’s advanced semiconductor fabrication and facing increasing geopolitical restrictions—Chinese AI development companies such as DeepSeek have been forced to develop their models in a highly constrained compute environment. Yet, rather than stalling progress, such limitations may in fact accelerate innovation, compelling researchers to rethink architectures, optimize efficiency, and push the boundaries of what is possible with limited resources.
While the large amounts of resources made available by large capital investments—especially in the US and the Western world—enable rapid iteration and the implementation of new tools that exploit scaling laws in LLMs, one must admit such efforts mostly reinforce existing paradigms rather than forcing breakthroughs. Historically, constraints have acted as catalysts, from biological evolution—where environmental pressures drive adaptation—to technological progress, where necessity compels efficiency and new architectures. DeepSeek’s success suggests that in AI, scarcity can be a driver, not a limitation, shaping models that are not just powerful, but fundamentally more resource-efficient, modular, and adaptive. However, whether bottlenecks are essential or merely accelerators remains an open question—would these same innovations have emerged without constraint, or does limitation itself define the next frontier of intelligence?
Pushing Beyond Hardware (Or How Higher-Level Emergent Computation Can Free Systems from Lower-Level Limits)
It’s far from being a secret: computation isn’t only about hardware. Instead, it’s about the emergence of higher-order patterns that exist—to a large extent—independently of lower-level mechanics. This may come across as rather obvious in everyday (computer science) life: network engineers troubleshoot at the protocol layer without needing to parse machine code; the relevant dynamics are fully contained at that abstraction. Similarly, for deep learning, a model’s architecture and loss landscape really mostly determine its behavior, not individual floating-point operations on a GPU. Nature operates no differently: biological cells function as coherent systems, executing processes that cannot be fully understood by analyzing individual molecules that form them.
These examples of separation of scale show how complexity scientists identify so-called emergent macrolevel patterns, which can be extracted by coarse graining from a more detailed microlevel description of a system’s dynamics. Under this framing, a bottleneck can be identified at the lower layer—whether in raw compute, molecular interactions, or signal transmission constraints—but is often observed to dissolve at the higher level, where emergent structures optimize flow, decision-making, and efficiency. Computation—but also intelligence, and arguably causality, but I should leave this discussion for another piece—exist beyond the hardware that runs them.
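A minimal illustration of that separation of scales (my own toy example, not drawn from the references above): the microlevel is a swarm of independent random walkers, while the macrolevel observable, the variance of their positions, follows a simple law that never mentions any individual walker.

```python
# Coarse-graining sketch: simulate many microlevel random walkers, then track
# a macrolevel observable (position variance), which grows linearly with time
# regardless of what any single walker does.
import random

rng = random.Random(1)
N_WALKERS, T = 100_000, 100
positions = [0] * N_WALKERS

for t in range(1, T + 1):
    positions = [x + rng.choice((-1, 1)) for x in positions]
    if t % 25 == 0:
        mean = sum(positions) / N_WALKERS
        var = sum((x - mean) ** 2 for x in positions) / N_WALKERS
        print(f"t={t:3d}  empirical variance={var:8.1f}  macrolevel prediction={t}")
```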
So bottlenecks in hardware can be overcome by clever software abstraction. If we were to get ahead of ourselves—but this is indeed where we’re headed—this is precisely how software ends up outperforming hardware alone. While hardware provides raw power, well-designed software layered on top structures it into emergent computation that is intelligent, efficient, and, perhaps counterintuitively, less complex. A well-crafted heuristic vastly outpaces brute-force search. A transformer model’s attention mechanisms and tokenization matter more than the number of GPUs used to train it. And, in that same vein, DeepSeek, with fewer GPUs and lower computational resources, comes to rival state-of-the-art models built—to oversimplify, with my apologies—out of mere scaling, by incorporating a few seemingly simple tricks that are nevertheless truly innovative. If so, let’s pause to appreciate the beautiful demonstration that intelligence is not about sheer compute—it’s about how computation is structured and optimized to produce meaningful results.
Divide and Conquer with Compute
During my postdoctoral years at Tokyo Tech (now merged and renamed Institute of Science Tokyo), one of my colleagues there brought up an interesting conundrum: If you had a vast amount of compute to use, would you use it as a single unified system, or rather divide it into smaller, specialized units to tackle a certain set of problems? What at first glance might seem like an armchair philosophical question, as it turns out touches on fundamental principles of computation and the emergent organization of complex systems. Of course, it depends on the architecture of the computational substrate, as well as the specific problem set. The challenge at play is one of optimization under uncertainty—how to best allocate computational power when navigating an unknown problem space.
The question maps naturally onto a set of scientific domains where distributed computation, hierarchical layers of cognitive systems, and major transitions in evolution intersect. In some cases, centralized computation maximizes power and coherence, leading to brute-force solutions or global optimization. In others, breaking compute into autonomous, interacting subsystems enables diverse exploration, parallel search, and modular adaptation—similar to how biological intelligence, economies, and even neural architectures function. Which strategy proves superior depends on the nature of the problem landscape: smooth and well-defined spaces favor monolithic compute, while rugged, high-dimensional, and open-ended domains can benefit from distributed, loosely coupled intelligence. The balance between specialization and generalization, like those between coordination and autonomy, selective tension and relaxation, and goal-drivenness and exploration, is one of the deepest open questions in the complex systems sciences, with helpful theories on both the artificial and the natural side.
Scaling Computation: Centralized vs. Distributed
Computationally, the problem can be framed simply within computational complexity theory, parallel computation, and search algorithmics in high-dimensional spaces. Given a computational resource C, should one allocate it as a single monolithic system or divide it into n independent modules, each operating with C/n capacity? A unified, centralized system would run a single instance of an exhaustive search algorithm, optimal for well-structured problems where brute-force or hierarchical methods are viable (e.g., dynamic programming, alpha-beta pruning).
However, as the problem space grows exponentially, computational bottlenecks from sequential constraints prevent linear scaling (Amdahl’s Law), and the curse of dimensionality causes diminishing returns because of the sparsity of relevant solutions. Distributed models, of course, introduce parallelism, exploration-exploitation trade-offs, and admittedly other emergent effects too. Dividing C into n units enables decentralized problem solving, similar to multi-agent systems, where independent search processes—akin to Monte Carlo Tree Search (MCTS) or evolutionary strategies—enhance efficiency by maintaining diverse, adaptive search trajectories, particularly in unstructured problem spaces, as in learning the notoriously complex game of Go (Silver et al., 2016).
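For the sequential-constraint side of the argument, a small back-of-the-envelope helper (a sketch using the standard textbook form of Amdahl’s Law) makes the diminishing returns explicit: even a modest serial fraction caps the benefit of throwing more units at the same problem.

```python
def amdahl_speedup(n_units: int, serial_fraction: float) -> float:
    """Amdahl's Law: the speedup from splitting work across n units is capped
    by the fraction of the workload that must remain sequential."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# With a 5% sequential fraction, the speedup can never exceed 20x,
# no matter how many units are added.
for n in (1, 8, 64, 1024):
    print(n, round(amdahl_speedup(n, serial_fraction=0.05), 1))
# 1 1.0, 8 5.9, 64 15.4, 1024 19.6
```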
Emergent complexity from bottom layers of messy concurrent dynamics, to apparent problem solving at the visible top. Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Feb 11, 2024.
If the solution lies in a non-convex, high-dimensional problem space, decentralized approaches—similar to Swarm Intelligence models—tend to converge faster, provided inter-agent communication remains efficient. When overhead is minimal, distributed computation can achieve near-linear speedup, making it significantly more effective for solving complex, open-ended problems. In deep learning, Mixture of Experts (MoE) architectures exemplify this principle: rather than a single monolithic model, specialized subnetworks activate selectively, optimizing compute usage while improving generalization. Similarly, in distributed AI (e.g., federated learning, neuromorphic systems), intelligent partitioning enhances adaptability while mitigating computational inefficiencies. Thus, the core trade-off is between global coherence and parallelized adaptability—with the optimal strategy dictated by the structure of the problem space itself.
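To illustrate the selective-activation idea behind MoE layers, here is a minimal NumPy sketch of generic top-k gating (illustrative only, not DeepSeek’s or any particular model’s implementation; all names and shapes are my own): only the k experts chosen by the gate are evaluated for a given input, so compute grows with k rather than with the total number of experts.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, k=2):
    """Minimal top-k Mixture-of-Experts layer: route the input to the k
    highest-scoring experts, so only k of the n experts run per token."""
    scores = x @ gate_w                          # gating logits, one per expert
    top_k = np.argsort(scores)[-k:]              # indices of the selected experts
    weights = np.exp(scores[top_k])
    weights /= weights.sum()                     # softmax over selected experts only
    return sum(w * (x @ expert_ws[i]) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
d, n_experts = 16, 8
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, expert_ws, k=2)       # only 2 of the 8 experts executed
```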
Overcoming Hardware Shortcomings
Back to DeepSeek and similar companies, which may find themselves in a situation where they increasingly need to navigate severe hardware shortages. Without access to TSMC’s cutting-edge semiconductor fabrication and facing increasing geopolitical restrictions, DeepSeek operates within a highly constrained compute environment. Yet, rather than stalling progress, such bottlenecks have historically accelerated innovation, compelling researchers to develop alternative approaches that might ultimately redefine the field. Innovation emerges from constraints.
This pattern is evident across history. The evolution of language likely arose as an adaptation to the increasing complexity of human societies, allowing for more efficient information encoding and transmission. The emergence of oxygenic photosynthesis provided a solution to energy limitations, reshaping Earth’s biosphere and enabling multicellular life. The Manhattan Project, working under extreme time and material constraints, produced groundbreaking advances in nuclear physics. Similarly, postwar Japan, despite scarce resources, became a global leader in consumer electronics, precision manufacturing, and gaming, with companies like Sony, Nintendo, and Toyota pioneering entire industries through a culture of innovation under limitation.
Japan’s Unique Approach to Innovation
I moved to Japan about two decades ago to pursue science. Having started my career as an engineer and an entrepreneur, I was drawn to Japan’s distinctive approach to life and technology—deeply rooted in balanced, principled play (in the game of go: honte / 本手 points to the concept of solid play, ensuring the balance between influence and territory), craftsmanship (takumi / 匠, refined skill and mastery in all Japanese arts), and harmonious coexistence (kyōsei / 共生, symbiosis as it is found between nature, humans, and technology). Unlike in many Western narratives, where automation and AI are often framed as competitors or disruptors of society, Japan views them as collaborators, seamlessly integrating them with humans. This openness is perhaps shaped by animistic, Shinto, Confucian and Buddhist traditions, which emphasize harmony between human and non-human agents, whether biological or artificial.
Japan’s technological trajectory has also been shaped by its relative isolation. As an island nation, it has long pursued an independent, highly specialized path, leading to breakthroughs in semiconductors, microelectronics, and precision manufacturing—industries where it remains a critical global leader in spite of tough competition. The country’s deep investment in exploratory science, prioritizing long-term innovation over short-term gains, has cultivated a culture in which technology is developed with foresight and long-term reflection—albeit at times in excess—rather than for mere commercial viability.
In recent years, Japan has initiated efforts to revitalize its semiconductor industry. Japan’s Integrated Innovation Strategy emphasizes the importance of achieving economic growth and solving social issues through advanced technologies, reflecting the nation’s dedication to long-term innovation and societal benefit (Government of Japan, 2022). The establishment of Rapidus Corporation in 2022 aims to develop a system for mass-producing next-generation 2-nanometer chips in collaboration with IBM, underscoring Japan’s commitment to maintaining its leadership in advanced technology sectors (Government of Japan, 2024). These initiatives highlight Japan’s ongoing commitment to leveraging its unique approach to technology, fostering advancements that align with both economic objectives and societal needs.
Illustration from the Report « Integrated Innovation Strategy 2022: Making Great Strides Toward Society 5.0 » – The three pillars of Japan’s strategy are innovation in science and technology, societal transformation through digitalization, and sustainable growth through green innovation. (Government of Japan, 2022).
Turning Socio-Economic Bottlenecks into Breakthroughs
Today, like China and Korea, Japan faces one of its most defining challenges: a rapidly aging population and a shrinking workforce (Schneider et al., 2018; Morikawa et al., 2024). While many view this as an economic crisis, Japan is transforming constraint into opportunity, driving rapid advancements in automation, AI-assisted caregiving, and industrial robotics. The imperative to sustain productivity without a growing labor force has made Japan a pioneer in human-machine collaboration, often pushing the boundaries of AI-driven innovation faster than many other nations.
Beyond automation, Japan is also taking the lead in AI safety. In February 2024, the government launched Japan’s AI Safety Institute (J-AISI) to develop rigorous evaluation methods for AI risks and foster global cooperation. Japan is a key participant in the International Network of AI Safety Institutes, collaborating with the US, UK, Europe, and others to shape global AI governance standards. These initiatives reflect a broader philosophy of proactive engagement: Japan signals that it does not fear AI’s risks, nor does it blindly embrace automation—it ensures that AI remains both innovative and secure.
At the same time, Japan must navigate the growing risks of open-source AI technologies. While open models have been instrumental in democratizing access and accelerating research, they also introduce new security vulnerabilities. Voice and video generation AI has already raised concerns over deepfake-driven misinformation, identity fraud, and digital impersonation, while the rise of LLM-based operating systems presents new systemic risks, creating potential attack surfaces at both infrastructural and individual levels. As AI becomes increasingly embedded in critical decision-making, securing these systems is no longer optional—it is imperative.
Japan’s history of constraint-driven innovation, its mastery of precision engineering, and its forward-thinking approach to AI safety place it in a unique position to lead the next era of secure, advanced AI development. Its current trajectory—shaped by demographic shifts, computational limitations, and a steadfast commitment to long-term technological vision—mirrors the very conditions that have historically driven some of the world’s most transformative breakthroughs. Once again, Japan is not merely adapting to the future—it is defining it.
Bottlenecks have always been catalysts for innovation—whether in evolution, where constraints drive adaptation, or in technology, where scarcity forces breakthroughs in efficiency and design. True progress emerges not from excess, but from necessity. Japan, facing a shrinking workforce, compute limitations, and an AI landscape dominated by scale, must innovate differently—maximizing intelligence with minimal resources, integrating automation seamlessly, and leading in AI safety. It is not resisting constraints; it is advancing through them. And while Japan may be the first to navigate these pressures at scale, it will not be the last. The solutions it pioneers today—born of limitation, not abundant wealth—may soon define the next era of global technological progress. In this, we can see the outlines of an innovation algorithm—one that harnesses cultural and intellectual context to transform constraints into breakthroughs.
References
Amdahl, G. M. (1967). Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities. AFIPS Conference Proceedings. https://doi.org/10.1145/1465482.1465560
Morikawa, M. (2024). Use of artificial intelligence and productivity: Evidence from firm and worker surveys (RIETI Discussion Paper 24-E-074). Research Institute of Economy, Trade and Industry. https://www.rieti.go.jp/en/columns/v01_0218.html
Silver, D., Huang, A., Maddison, C. J., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489. https://doi.org/10.1038/nature16961
In his blog post The Intelligence Age, published a few days ago, Sam Altman expressed confidence in the power of neural networks and their potential to achieve artificial general intelligence (AGI—some strong form of AI reaching a median human level of intelligence and efficiency for general tasks) given enough compute. He sums it up in 15 words: “deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.” With sufficient computational power and resources, he claims, humanity should reach superintelligence within “a few thousand days (!)”
AI will, according to Altman, keep improving with scale, and this progress could lead to remarkable advances for human life, including AI assistants performing increasingly complex tasks, improving healthcare, and accelerating scientific discoveries. Of course, achieving AGI will require us to address major challenges along the way, particularly in terms of energy resources and their management to avoid inequality and conflict over AI’s use. Once all challenges are overcome, one would hope to see a future where technology unlocks limitless possibilities—fixing climate change, colonizing space, and achieving scientific breakthroughs that are unimaginable today. While this sounds compelling, one must keep in mind how the concept of AGI and its application remains vague and problematic. Intelligence, much like compute, is inherently diverse and comes with a set of constraints, biases, and hidden costs.
Historically, breakthroughs in computing and AI have been tied to specific tasks, even when they seemed more general. For example, even something as powerful as a Turing machine, capable of computing anything theoretically, still has its practical limitations. Different physical substrates, like GPUs or specialized chips, allow for faster or more efficient computation in specific tasks, such as neural networks or large language models (LLMs). These substrates demonstrate that each form of AI is bound by its physical architecture, making some tasks easier or faster to compute.
Beyond computing, this concept can be better understood through its extension to biological systems. For instance, the human brain is highly specialized for certain types of processing, like pattern recognition and language comprehension, but it is not well suited for tasks that require high-speed arithmetic or complex simulations, at which computers excel. Conversely, biological neurons, in spite of operating much more slowly than their digital counterparts, achieve remarkable feats of energy efficiency and adaptability through parallel processing and evolutionary optimization. Perhaps quantum computers make for an even stronger example: while they promise enormous speedups for specific tasks like factoring large numbers or simulating molecular interactions, the idea that they are universally faster than classical computers is simply false. They will also require specialized algorithms to fully leverage their potential, which may take another few decades to develop.
These examples highlight how both technological and biological forms of intelligence are fundamentally shaped by their physical substrates, each excelling in certain areas while remaining constrained in others. Whether it’s a neural network trained on GPUs or a biological brain evolved over millions of years, the underlying architecture plays a key role in determining which tasks can be efficiently solved and which remain computationally expensive or intractable.
As we look toward the potential realization of an AGI, whatever this may formally mean—gesturing vaguely at some virtual omniscient robot overlord doing my taxes—it’s important to recognize that it will likely still be achieved in a “narrow” sense—constrained by these computational limits. Additionally, AGI, even when realized, will not represent the most efficient or intelligent form of computation; it is expected to reach only a median human level of efficiency and intelligence. While it might display general properties, it will always operate within the bounds of the physical and computational layers imposed on it. Each layer, as in the OSI picture of networking, will add further constraints, limiting the scope of the AI’s capabilities. Ultimately, the quest for AGI is not about breaking free from these constraints but finding the path of least resistance to the most efficient form of intelligent computation within these limits.
While I see Altman’s optimism about scaling deep learning as valid, one should realize that any implementation of AGI will still be shaped by physical and computational constraints. The future of AI will likely reflect these limits, functioning in a highly efficient but bounded framework. There is more to it. As Stanford computer scientist Fei-Fei Li advocates, embodiment, “Large World Models”, and “Spatial Intelligence” are probably crucial for the next steps in human technology and may remain unresolved by a soft AGI as envisioned by Altman. Perhaps the field of artificial life, too, may offer tools for a more balanced and diverse approach to AGI, by incorporating the critical concepts of open-endedness, polycomputing, hybrid and unconventional substrates, precariousness, mutually beneficial interactions between many organisms and their environments, as well as the self-sustaining sets of processes defining life itself. This holistic view could enrich our understanding of intelligence, extending beyond the purely computational and human-based to include the richness of embodied and emergent intelligence as it could be.
At a point in time when technology and biology appear to converge, can we decode the mysteries of life grounded in either realm, through the lens of science and philosophy? Bridging between natural and artificial seems to challenge conventional wisdom and propel us into a wild landscape of new possibilities. Yet, the inquiry into the nature of life, regardless of its medium and the specific laws of the substrate from which it emerges, may give us an opportunity to redefine the contours of our own identity as human beings, transcending the physics, chemistry, biology, culture, and technology that are made by and constitute us.
A depiction of an artificial cybernetic entity encompassing diverse layers and forms of life. Image Credit: Generated by Olaf Witkowski using DALL-E version 2, August 21, 2024.
Artificial Life, commonly referred to as ALife, is an interdisciplinary field that studies the nature and principles of living systems [1]. Similarly to its elder sibling, Artificial Intelligence (AI), ALife’s ambition is to construct intelligent systems from the ground up. However, its scope is broader: it concentrates not only on mimicking human intelligence, but aims at modeling and understanding the whole realm of living systems. Parallel to biology’s focus on modeling known living systems, it ventures further, exploring the concept of “Life as It Could Be”, which encompasses undiscovered or non-existent forms of life, on Earth or elsewhere. As such, it truly pushes the boundaries of our current scientific, technological, and philosophical understanding of the nature of the living state.
The study of artificial life concentrates on three main questions: (a) the emergence of life on Earth or in any system, (b) its open-ended evolution and seemingly unbounded increase in complexity through time, and (c) its ability to become aware of its own existence and of the physical laws of the universe in which it is embedded, thus closing the loop. In brief, how life emerges, grows, and becomes aware. One could also subtitle these parts as the origin, intelligence, and consciousness of the living state.
The first point, about the emergence of life, may be thought of as follows. If one were to fill a cup with inanimate matter – perhaps some water and other chemical elements – and leave it untouched for an extended period of time, it might end up swarming with highly complex life. This seemingly mundane observation serves as a very concrete metaphor for the vast and complex range of potentialities that reside in possible timelines of the physical world. The contents of the cup may eventually foster many forms of life, from minimal cells to the most complex, highly cognitive assemblages. Artificial life thus explores the emergence and properties of complex living systems from basic, non-living substrates. This analogy points out ALife’s first foundational question: How can life arise from the non-living? By delving into the mechanisms that enable the spontaneous emergence of life-like properties and behaviors, ALife researchers strive to understand the mechanisms of self-organization (appearance of order from local interactions), autopoiesis (or the capacity of an entity to produce itself), robustness (resilience to change), adaptation (ability to adjust in response to environmental change), and morphogenesis (developing and shifting shape), all key processes that appear to animate the inanimate.
This, in turn, paves the way for our understanding of the open-ended evolution of living systems, which tend to acquire increasing amounts of complexity through time. This begs the second foundational question: How does life indefinitely invent novel solutions to its own survival and thriving? Or, in its more practical form: How can we design an algorithm that captures the essence of open-ended evolution, enabling the continuous, autonomous generation of novel and increasingly complex forms of life and intelligence in any environment? Unlocking the mechanism behind this open-endedness is crucial because it embodies the ultimate creative process setting us on the path of infinite innovation [2]. It represents the potential to harness the generative power of nature itself, enabling the discovery and creation of unforeseen solutions, technologies, and forms of intelligence that could address some of humanity’s most enduring challenges. At its core, it also connects with the very ability of living systems to learn, which brings us to our third and final point.
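One family of algorithms inspired by this question is novelty search, where selection rewards behaviors that differ from everything seen so far rather than progress toward a fixed objective. Below is a deliberately tiny sketch of that loop (all names, parameters, and the one-dimensional behavior space are illustrative only); real open-ended evolution research uses far richer genotypes, environments, and behavior characterizations.

```python
import random

def novelty(behavior, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=100, pop_size=20, threshold=0.5):
    """Toy open-ended loop: keep whatever is behaviorally new, with no fixed goal."""
    population = [random.uniform(-1, 1) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        scored = sorted(population, key=lambda g: novelty(g, archive), reverse=True)
        archive += [g for g in scored[: pop_size // 4] if novelty(g, archive) > threshold]
        parents = scored[: pop_size // 2]
        population = [random.choice(parents) + random.gauss(0, 0.3) for _ in range(pop_size)]
    return archive

print(len(novelty_search()))  # the archive keeps growing as new behaviors appear
```

Whether such divergence-driven loops can be made genuinely open-ended, rather than merely long-running, is exactly the open question posed above.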
The world as we see it is a reflection of life’s ability to represent it. Here, an illustration painted by a form of life—my two-year-old daughter, Alys.
Not only do some of these systems learn, but they also appear to acquire – assuming they didn’t possess this faculty at some earlier stage, or at least not to the same extent – a knack for rich, high-definition, vivid sensing, perception, experience, understanding, and goal-directed interaction with their own reality. How do these increasingly complex pieces and patterns forming on the Universe’s chessboard become aware of their own, and other beings’, existence? The third foundational question of ALife delves into the consciousness and self-awareness of living systems: How do complex living systems become aware of their existence and the fundamental laws of the universe they inhabit? This question explores the transition from mere biological complexity to the emergence of cognitive processes that allow life to reflect upon itself and its surroundings. ALife investigates the principles underlying this awareness feature of life, and aims to replicate such phenomena within artificial systems. This inquiry not only broadens our understanding of consciousness but also challenges us to recreate systems that are not only alive and intelligent, but are also aware of their own aliveness and intelligence, closing the loop of life’s emergence, evolution, and self-awareness.
All three questions are investigated through Feynman’s synthetic, engineering angle: What I cannot create, I do not understand. By aiming at not only explaining, but also effectively creating and recreating life-like characteristics in computational, chemical, mechanical, or other physical systems, the research endeavor instantiates itself as a universal synthetic biology field of philosophy, science, and technology. This includes the development of software simulations that exhibit behaviors associated with life—such as reproduction, metabolism, adaptation, and evolution—and the creation of robotic or chemical systems that mimic life’s physical and chemical processes. Through these components, ALife seeks to understand the essential properties that define life by creating systems that exhibit these properties in controlled settings, thus providing insights into the mechanisms underlying biological complexity and the potential for life in environments vastly different from those encountered so far on Earth, and also exploring the conditions of possibility of other creatures combining known and unknown patterns of life in any substrate. This, in turn, should allow us to better understand the uniqueness and awesome nature of life, human or other, on the map of all possible life, and perhaps will also inform our ethics for all beings [3].
References
[1] Bedau, M. A., & Cleland, C. E. (Eds.). (2018). The Nature of Life. Cambridge University Press.
[2] Stanley, K. O. (2019). Why open-endedness matters. Artificial life, 25(3), 232-235.
[3] Witkowski, O., and Schwitzgebel, E. (2022). Ethics of Artificial Life: The Moral Status of Life as It Could Be. ALIFE 2022: The 2022 Conference on Artificial Life. MIT Press.
Further Reading
Scharf, C. et al. (2015). A strategy for origins of life research.
Baltieri, M., Iizuka, H., Witkowski, O., Sinapayen, L., & Suzuki, K. (2023). Hybrid Life: Integrating biological, artificial, and cognitive systems. Wiley Interdisciplinary Reviews: Cognitive Science, 14(6), e1662.
Witkowski, O., Doctor, T., Solomonova, E., Duane, B., & Levin, M. (2023). Toward an ethics of autopoietic technology: Stress, care, and intelligence. Biosystems, 231, 104964.
Dorin, A., & Stepney, S. (2024). What Is Artificial Life Today, and Where Should It Go?. Artificial Life, 30(1), 1-15.
International Society for Artificial Life: https://alife.org/
This piece is cross-posted here, as a part of a compendium of short essays edited by the Center for Study of Apparent Selves after a workshop at Tufts University in Boston in 2023.
As the AI landscape keeps reshaping itself at ever greater speed, so does the relationship between humans and technology. By paying attention to the autopoietic nature of this relationship, we may work towards building ethical AI systems that respect both the unique particularities of being human and the unique emergent qualities that our technology displays as it evolves. I’d like to share some thoughts about how autopoiesis and care, via the pursuit of an ethics of our relationship with technology, can help us cultivate a society that creates a better, healthier, and more ethical ecosystem for AI, from a natural human perspective.
The term ‘autopoiesis’ – or ‘self-creation’ (from Greek αὐτo- (auto-) ‘self’, and ποίησις (poiesis) ‘creation, production’) was first introduced by Maturana and Varela (1981), describing a system capable of maintaining its own existence within a boundary. This principle highlights the importance of understanding the relationship between self and environment, as well as the dynamic process of self-construction that gives rise to complex organisms (Levin, 2022; Clawson, 2022).
The main components for ethical AI governance. Here, we suggest that these ingredients naturally emerge from an autopoietic communication design, focused on companionship instead of alignment.
To build and operate AI governance systems that are ethical and effective, we must first acknowledge that technology should not be seen as a mere tool serving human needs. Instead, we should view it as a partner in a rich relationship with humans, where integration and mutual respect are the default for their engagements. Philosophers like Martin Heidegger or Martin Buber have warned us against reducing our relationship with technology to mere tool use, as this narrow view can lead to a misunderstanding of the true nature of our relationship with technological agents, including both potential dangers and values. Heidegger (1954) emphasized the need to view technology as a way of understanding the world and revealing its truths, and suggested a free relationship with technology would respect its essence. Buber (1958) argued that a purely instrumental view of technology would reduce the human scope to mere means to an end, which in turn leads to a dehumanizing effect on society itself. Instead, one may see the need for a more relational view of technology that recognizes the interdependence between humans and the technological world. This will require a view of technology that is embedded in our shared human experience and promotes a sense of community and solidarity between all beings, under a perspective that may benefit from including the technological beings – or, better, hybrid ones.
Illustration of care light cones through space and time, showing a shift in the possible trajectories of agents made possible by integrated cooperation between AI and humans. Figure extracted from our recent paper on an ethics of autopoietic technology. Design by Jeremy Guay.
In a recent paper, we have presented an approach through the lens of a feedback loop of stress, care, and intelligence (or SCI loop), which can be seen as a perspective on agency that does not rely on burdensome notions of permanent and singular essences (Witkowski et al., 2023). The SCI loop emphasizes the integrative and transformational nature of intelligent agents, regardless of their composition – biological, technological, or hybrid. By recognizing the diverse, multiscale embodiments of intelligence, we can develop a more expansive model of ethics that is not bound by artificial, limited criteria. To address the risks associated with AI ethics, we can start by first identifying these risks by working towards an understanding of the interactions between humans and technology, as well as the potential consequences of these interactions. We can then analyze these risks by examining their implications within the broader context of the SCI loop and other relevant theoretical frameworks, such as Levin’s cognitive light cone (in biology; see Levin & Dennett (2020)) and the Einstein-Minkowski light cone (in physics).
Poster of the 2013 movie “Her”, created by Spike Jonze, illustrating the integration between AI and humans, as companions, not tools.
Take a popular example: in the 2013 movie “Her” by Spike Jonze, Theodore, a human, comes to form a close emotional connection with his AI assistant, Samantha, and the complexity of their relationship challenges the concept of what it means to be human. The story, although purely fictitious and highly simplified, depicts a world in which AI becomes integrated with human lives in a deeply relational way, pushing a view of AI as a companion rather than a mere tool serving human needs. It gives a crisp vision of how AI can be viewed as a full companion, to be treated with empathy and respect, helping us question our assumptions about the nature of AI and our relation to it.
One may have heard it all before, in some – possibly overly optimistic – posthumanist utopian scenarios. But one may argue that the AI companionship view, albeit posthumanist, constitutes a complex and nuanced theoretical framework drawing from the interplay between artificial intelligence, philosophy, psychology, sociology, and other fields studying the complex interaction of humans and technology (Wallach & Allen, 2010; Johnson, 2017; Clark, 2019). This different lens radically challenges traditional human-centered perspectives and opens up new possibilities for understanding the relationship between humans and technology.
This leads us to very practical steps the AI industry can take to move towards a more companionate relationship with humans: recognizing the interdependence between humans and technology, building ethical AI governance systems, and promoting a sense of community and solidarity between all beings. For example, Japan, a world leader in the development of AI, is increasing its efforts to educate and train its workforce on the ethical intricacies of AI and to foster a culture of AI literacy and trust. The “Society 5.0” vision aims to leverage AI to create a human-centered, sustainable society that emphasizes social inclusivity and well-being. The challenge now is to ensure that these initiatives translate into concrete actions and that AI is developed and used in a way that respects the autonomy and dignity of all stakeholders involved.
AI Strategic Documents Timeline by UNICRI AI Center (2023). For more information on the AI regulations timeline, please see here.
Japan is taking concrete steps towards building ethical AI governance systems and promoting a more companionate relationship between humans and technology. One example of such steps is the creation of the AI Ethics Guidelines by the Ministry of Internal Affairs and Communications (MIC) in 2019. These guidelines provide ethical principles for the development and use of AI. Additionally, the Center for Responsible AI and Data Intelligence was established at the University of Tokyo in 2020, aiming to promote responsible AI development and use through research, education, and collaboration with industry, government, and civil society. Moreover, Japan has implemented a certification system for AI engineers to ensure that they are trained in the ethical considerations of AI development. The “AI Professional Certification Program” launched by the Ministry of Economy, Trade, and Industry (METI) in 2017 aims to train and certify AI engineers in the ethical and social aspects of AI development. These initiatives demonstrate Japan’s commitment to building ethical AI governance systems, promoting a culture of AI literacy and trust, and creating a human-centered, sustainable society that emphasizes social inclusivity and well-being.
A creative illustration of robotic process automation (RPA) based on AI companionship theory instead of artificial alignment control policies.
AI is best seen as a companion rather than a tool. This positive way of viewing the duet we form with technology may in turn lead to a more relational and ethical approach to AI development and operation, helping us to build a more sustainable and just future for both humans and technology. By fostering a culture of ethical AI development and operation, we can work to mitigate these risks and ensure that the impact on stakeholders is minimized. This includes building and operating AI governance systems within organizations, both domestic and overseas, across various business segments. In doing so, we will be better equipped to navigate the challenges and opportunities that lie ahead, ultimately creating a better, healthier, and more ethical AI ecosystem for all. It is our responsibility to take concrete steps to build ethical and sustainable systems that prioritize the well-being of all. This is a journey for two close companions.
References
Bertschinger, N., Olbrich, E., Ay, N., & Jost, J. (2008). Autonomy: An Information Theoretic Perspective. In BioSystems.
Buber, M. (1958). I and Thou. Trans. R. G. Smith. New York: Charles Scribner’s Sons.
Clawson, R. C., & Levin, M. (2022). The Endless Forms of Self-construction: A Multiscale Framework for Understanding Agency in Living Systems.
Haraway, D. (2013). The Cyborg Manifesto. In The International Handbook of Virtual Learning Environments.
Heidegger, M. (1954). The Question Concerning Technology. Trans. W. Lovitt. New York: Harper Torchbooks.
Huttunen, T. (2022). Heidegger, Technology, and Artificial Intelligence. In AI & Society.
Johnson, D. G. (2017). Humanizing the singularity: The role of literature in AI ethics. IEEE Technology and Society Magazine, 36(2), 6-9. https://ieeexplore.ieee.org/document/7882081
Latour, B. (1990). Technology is Society Made Durable. In The Sociological Review.
Levin, M., & Dennett, D. C. (2020). Cognition all the way down. Aeon Essays.
Maturana, H. R., & Varela, F. J. (1981). Autopoiesis and Cognition: The Realization of the Living.
Varela, F. J., Maturana, H. R., & Uribe, R. (1981). Autopoiesis: The Organization of Living Systems.
Waddington, C. H. (2005). The Field Concept in Contemporary Science. In Semiotica.
Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford University Press.
Witkowski, O., Doctor, T., Solomonova, E., Duane, B., & Levin, M. (2023). Towards an Ethics of Autopoietic Technology: Stress, Care, and Intelligence. https://doi.org/10.31234/osf.io/pjrd2
Witkowski, O., & Schwitzgebel, E. (2022). Ethics of Artificial Life: The Moral Status of Life as It Could Be. In ALIFE 2022: The 2022 Conference on Artificial Life. MIT Press. https://doi.org/10.1162/isal_a_00531
This opinion piece was prompted by the recent publication of Stephen Hawking’s last writings, in which he mentioned some ideas on superintelligence. Although I have the utmost respect for his work and vision, I am afraid some of it may be read in a very misleading way.
I’ve been pondering whether or not I should write on the topic of the current “AI anxiety” for a while, but always concluded there would be no reason to, since I don’t have any strong opinion to convey about it. Nevertheless, there are a number of myths I believe are easy to debunk. This is what I’ll try to do here. So off we go: let’s talk about AI, transhumanism, the evolution of intelligence, and self-reflective AI.
The late physicist Stephen Hawking was deeply wary of the dangers of AI. His last writings were just published in the UK’s Sunday Times, where he raises the well-known problem of alignment: the issue of regulating AI, since in the future, once AI develops a will of its own, that will might conflict with ours. The following quote is very representative of this type of idea:
“In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
– Stephen Hawking
As Turing’s colleague Irving John Good pointed out in 1965, once intelligent machines are able to design even more intelligent ones, the process could be repeated over and over: “Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”. Vernor Vinge, an emeritus professor of computer science at San Diego State University and a science fiction author, argued in his 1993 essay “The Coming Technological Singularity” that this very phenomenon could mean the end of the human era, as the new superintelligence advances technologically at an incomprehensible rate and potentially outdoes any human feat. At this point, we have captured the essence of what is scary to the reader, and it is exactly what feeds the fear on this topic, including for deep thinkers such as Stephen Hawking.
Photographs by Anders Lindén/Agent Bauer (Tegmark); by Jeff Chiu/A.P. Images (Page, Wozniak); by Simon Dawson/Bloomberg (Hassabis), Michael Gottschalk/Photothek (Gates), Niklas Halle’n/AFP (Hawking), Saul Loeb/AFP (Thiel), Juan Mabromata/AFP (Russell), David Paul Morris/Bloomberg (Altman), Tom Pilston/The Washington Post (Bostrom), David Ramos (Zuckerberg), all from Getty Images; by Frederic Neema/Polaris/Newscom (Kurzwell); by Denis Allard/Agence Réa/Redux (LeCun); Ariel Zambelich/ Wired (Ng); Bobby Yip/Reuters/Zuma Press (Musk), graphics by VanityFair/Condé Nast.
Hawking is only one among many raising the alarm, from Elon Musk to Stuart Russell, including AI experts too. Eliezer Yudkowsky, in particular, remarked that AI doesn’t have to take over the whole world with robots or drones or guns or even the Internet. He says: “It’s simply dangerous because it’s smarter than us. Suppose it can solve the science technology of predicting protein structure from DNA information. Then it just needs to send out a few e-mails to the labs that synthesize customized proteins. Soon it has its own molecular machinery, building even more sophisticated molecular machines.” Essentially, the danger of AI goes beyond the specificities of its possible embodiments, straight to the properties attached to its superior intelligent capacity.
Hawking says he fears the consequences of creating something that can match or surpass humans. Humans, he adds, who are limited by slow biological evolution, couldn’t compete and would be superseded. In the future, AI could develop a will of its own, a will in conflict with ours. Although I understand the importance of being as careful as possible, I tend to disagree with this claim. In particular, there is no reason human evolution has to be slower. Not only can we engineer our own genes, but we also augment ourselves in many other ways. Now, I want to make it clear that I’m not advocating any usage of such technologies without due reflection on societal and ethical consequences. I simply want to point out that such doors will be open to our society, and are likely to become usable in the future.
Augmented humans
Let’s talk more about ways of augmenting humans. This requires carefully defining what technological tools are. In my past posts, I have mentioned how technology can be any piece of machinery that augments a system’s capacity in its potential action space. Human tools such as hammers and nails fall under this category. So do the inventions of democracy and agriculture, respectively a couple of thousand and around 10,000 years ago. If we go further back, more than 100 million years ago, animals invented eusocial societies. Even earlier, around 2 billion years ago, single cells surrounded by membranes incorporated other membrane-bound organelles such as mitochondria, and sometimes chloroplasts too, forming the first eukaryotic cells. In fact, each major transition in evolution corresponds to the discovery of some sort of technology. All these technologies are to be understood in the sense of an augmentation of the organism’s capacities.
Human augmentation, or robot humanization? Credit: Comfreak/Pixabay.
Humans can be augmented not only by inventing tools that change their physical body, but also their whole extended embodiment, including the clothes they wear, the devices they use, and even the cultural knowledge they hold, for all these pieces are constituents of who they are and how they affect their future and the future of their environment. It’s not a given that any of the extended human’s parts will be slower to evolve than AI, which is most evident for the cultural part. It’s not clear either that they will evolve faster, but we realize how one must not rush to conclusions.
On symbiotic relationships
Let’s come back for a moment to the eukaryotic cell, one of nature’s great inventions. An important point about eukaryotes is that they did not kill mitochondria, or vice versa. Nor did some of them enslave chloroplasts. In fact, there is no such clear cut in nature. The correct term is symbiosis. In the study of biological organisms, symbioses qualify interactions between different organisms sharing the physical medium in which they live, often (though not necessarily) to the advantage of both. It may be important to note that symbiosis, in the biological sense, does not imply a mutualistic, i.e. win-win, situation. I use symbiosis here for any interaction, beneficial or not for each party.
Symbiosis seems very suitable to delineate the phenomenon by which an entity such as a human invents a tool by incorporating in some way elements of its environment, or simply ideas that are materialized into a scientific innovation. If that’s the case, it is natural to consider AI and humans as just pieces able to interact with each other.
Example of symbiosis, between honeybees and flowers. In the process of collecting pollen, bees pollinate flowers, helping in the formation of seeds. In return, flowers produce pollen, which provides bees with all the nutrients they need.
There are several types of symbiosis. Mutualism, such as a clownfish living in a sea anemone, allows two partners to benefit from the relationship, here by protecting each other. In commensalism, only one species benefits while the other is neither helped nor harmed. An example is the remora fish, which attach themselves to whales, sharks, or rays and eat the scraps their hosts leave behind. The remora gets a meal, while its host arguably gets nothing. The last big one is parasitism, where one organism gains while another loses. For example, the deer tick (which happens to be very present here in Princeton) is a parasite that attaches to a warm-blooded animal and feeds on its blood, adding the risk of Lyme disease to the loss of blood and nutrients.
Once technology, AI, becomes autonomous, it’s easy to imagine that all three scenarios (just to stick to these three) could happen. And it would be more than fair to worry that the worst one could happen: the AI could become the parasite, and the human could lose in fitness, eventually dying off. It’s natural to envisage the worst-case scenario. Now, in the same way we learned in our probability classes, it’s important to weigh it against the best-case scenarios, with the respective chances that they will happen.
Let’s note here that probabilities are tough to estimate, and humans have famously bad intuitions about them. There might always have been a certain value in overestimating risks, as has been demonstrated repeatedly in the psychology literature. Not to mention Pascal’s Wager, an argument which blatantly overestimates risks in the most ridiculous ways, while still duping a vast, vast audience. But let me not get into that. We don’t want to make me Ang Lee (yes, I’m a fan of Stewart Lee, saw my chance here and went for it).
The notebook is the archetype of one’s mind extension. Credit: Fred Cummins’ blog.
The point is that the invention of tools results in symbiotic relationships, and in such relationships, the parts become tricky to distinguish from each other. This brings to mind the extended mind thesis, approached by Andy Clark (Clark & Chalmers 1998). The idea, somewhat rephrased, is that it’s hard for anyone to locate boundaries between intelligent beings. If we consider just the boundaries of our skin, and say that outside the body is outside the intelligent entity, what are tools such as notebooks, without which we wouldn’t be able to act the same way? Clark and Chalmers proposed an approach called active externalism, or extended cognition, based on the environment driving cognitive processes. Such theories are to be taken with a grain of salt, but they surely apply nicely to the way we can think of such symbioses and their significance.
Integrated tools
Our tools are part of ourselves. When we use a tool, such as a blind person’s cane or an “enactive torch” (Froese et al. 2012), it’s hard to tell where the body boundary ends, and where the tool begins. In fact, the reports we make using those tools are often that the limit of the body moves to the edge of the tool, instead of remaining contained within the skin.
A blind person’s cane becomes an extension of their body. Credit: Blind Fields/Flickr.
Now, one could say that AI is a very complex object, which can’t be considered a mere tool like the aforementioned cases. This is why it’s helpful, as a thought experiment, to replace the tool with a human. An example would be psychological manipulation, through abusive or deceptive tactics, such as a psychopathic businessman bullying his insecure colleague into doing extra work for him, or a clever young boy coaxing his mother into buying him what he wants. Since the object of the manipulation is an autonomous, goal-driven human, one can now ask them how they feel as well. And in fact, it has been reported by psychology specialists like George Simon (Simon & Foley 2011) that people being manipulated do feel a perceived loss of their sense of agency, and struggle to find the reasons why they acted in certain ways. In most cases, they will invent fictitious causes, which they will swear are real. Other categories of examples could be as broad as social obligations, split-brain patients, or any external mechanisms that force people (or living entities for that matter, as these examples are innumerable in biology) to act in a certain way, without them having a good reason of their own for it.
The Blind Robot is an art installation as a direct reference to the works of Merleau-Ponty and his example of the body extension of the blind man’s cane. Credit: Louis-Philippe Demers.
As a small remark, some people have told me the machine could be more than a human in some way, breaking the argument above. Is it really? To me, once it is autonomously goal-driven, the machine comes close enough to a human being for the purpose of comparing the human-machine interaction to the human-human one. Surely, one may be endowed with a better prediction ability in certain situations, but I don’t believe anything is conceptually different.
Delusions of control
It seems appropriate to open a parenthesis about control. We humans seem to have a tendency to feel in control even when we are not. This persists even where AI is already in control. Take the example of Uber, where an algorithm is responsible for assigning drivers to their next mission. Years earlier, Amazon, YouTube, and many other platforms were already recommending to their users what to watch, listen to, buy, or do next. In the future, these types of recommendation algorithms are likely to only expand their application domain, as it becomes more and more efficient and useful for an increasing number of domains to incorporate the machine’s input in decision-making and management. One last important example is automatic medical advice, at which machine learning is currently becoming very efficient. Based on increasing amounts of medical data, it is easier and safer in many cases to base medical calls, from the identification of lesions to decisions to perform surgery, at least partly on the machine’s input. We have reached the point where it clearly would not be ethical to ignore it.
However, the impression of free will is not entirely an illusion: in most examples of recommendation algorithms, we still get to make the call. It becomes similar to choices of cooperation in nature. They are the result of a free choice (or rather, its evolutionarily close analog), as the agent may choose not to couple its behavior to the other agent.
Dobby is free
The next question is, naturally: what does the tool become once detached from the control of a human? Well, what happens to the victim of a manipulative act once released from the control of their manipulator? Effectively, they simply regain control. Whether they perceive it or not, their autonomy is restored, and their actions are once again caused (more) by their own internal states (than when they were under control).
AI, once given the freedom to act on its own, will do just that. If it has become a high form of intelligence, it will be free to act as such. The fear is here well justified: if the machine is free, it may be dangerous. Again, the mental exercise of replacing the AI with a human is helpful.
Dobby receives a sock, which frees him from his masters. Image credit: 2002 film “Harry Potter and the Chamber of Secrets”, adapted from J. K. Rowling’s novels.
Homo homini lupus. Humans are wolves to each other. How many situations can we find in our daily lives in which we witnessed someone choose a selfish act instead of the nice, selfless option? When I walk on the street, take the train, or go to a soccer match, how do I even know that all those humans around me won’t hurt me, or worse? Even nowadays, crime and war plague our biosphere. Dangerous fast cars and dangerous manipulations of humans pushed to despair, anger, fear, and suffering surround us wherever we go, if we look closely enough. Why are we not afraid? Habituation is certainly one explanation, but the other is that society shields us. I believe the answer lies in Frans de Waal’s response to “homo homini lupus”. The primatologist argues that the proverb, beyond failing to do justice to canids (among the most gregarious and cooperative animals on the planet (Schleidt and Shalter 2003)), denies the inherently social nature of our own species.
The answer does indeed seem to lie in the social nature of human-to-human relations. The power of society, which uses a great number of concomitant regulatory systems, each composed of multiple layers of cooperative mechanisms, is exactly what keeps each individual’s selfish behavior in check. This is not to say anything close to the “veneer theory” of altruism, which claims that life is fundamentally selfish with an icing of pretending to care on top. On the contrary, rich altruistic systems are fundamental, and central in the sensorimotor loop of each and every individual in groups. Numerous simulations of such altruism have been reproduced in silico, showing a large variety of mechanisms for its evolution (Witkowski & Ikegami 2015).
Dobby is the magical character from J. K. Rowling’s series of novels who is the servant (or rather the slave) of some wizard. Members of his kind, if offered a piece of clothing by their masters, are magically set free. So what happens once “Dobby is free”, which in our case corresponds to some AI, somewhere, being made autonomous? Again, the case is no different from symbiotic relationships in the rest of nature. Offered degrees of freedom independent from human control, AIs get to simply share humans’ medium: the biosphere. They are left interacting together to extract free energy from it while preserving it, and preparing for the future of their combined destinies.
Autonomous AI = hungry AI
Not everyone thinks machines will be autonomous. In fact, Yann LeCun expressed, as reported by the BBC, that there was “no reason why machines would have any self-preservation instinct”. At the AI conference I attended, organized by David Chalmers at NYU in 2017, LeCun also mentioned that we would be able to control AI with appropriate goal functions.
I understand where LeCun is coming from. AI intelligence is not like human intelligence. Machines don’t need to be built with human drives, such as hunger, fear, lust, and thirst for power. However, believing AI can be kept self-preservation-free is fundamentally misguided. One simple reason has been pointed out by Stuart Russell, who explains how drives can emerge from simple computer programs: if you program a robot to bring you coffee, it can’t bring you coffee if it’s dead. As I’d put it, as soon as you code an objective function into an AI, you potentially create subgoals in it, which can be comparable to human emotions or drives. Those drives can be encoded in many ways, including in the most implicit ones. In artificial life experiments, from software to wetware, the emergence of mechanisms akin to self-preservation in emerging patterns is very frequent, and any student fooling around with simulations for some time realizes that early on.
So objective functions lead to drives. Because every machine possesses some form of objective function, even implicitly, it will have a reason to preserve its own existence in order to achieve that goal. And the objective function can be as simple as self-preservation, a function that appeared early on in the first autonomous systems, i.e. the first forms of life on Earth. Is there really a way around it? I think it’s worth thinking about, but I doubt it.
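A toy calculation makes the point (this is an illustrative model of my own, not a description of any actual system): an agent that maximizes a mundane objective, say delivering coffee, automatically assigns value to not being switched off, because its expected objective is strictly higher when it keeps running.

```python
def expected_reward(p_shutdown: float, reward_per_task: float = 1.0,
                    n_tasks: int = 10) -> float:
    """Expected objective for an agent that completes tasks one by one but may
    be switched off before each task with probability p_shutdown."""
    total, p_alive = 0.0, 1.0
    for _ in range(n_tasks):
        p_alive *= (1.0 - p_shutdown)
        total += p_alive * reward_per_task
    return total

# Purely by maximizing "bring coffee", the agent prefers any action that
# lowers its shutdown probability: self-preservation appears as a subgoal.
print(expected_reward(p_shutdown=0.2))   # ~3.6 expected coffees delivered
print(expected_reward(p_shutdown=0.0))   # 10.0 if it can avoid being switched off
```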
How to control an autonomous AI
If any machine has drives, then how to control it? Most popular thinkers, specializing in the existential problem and dangers of future AI, seem to be interested in alignment of purposes, between humans and machines. I see how the reasoning goes: if we want similar things, we’ll all be friends in the best of worlds. Really, I don’t believe that is sufficient or necessary.
The further space exploration goes, the more autonomy is required, as remotely controlling the machine would involve delays that are too long. This picture shows the late Opportunity rover, which entered hibernation on June 12, 2018, due to a dust storm. Credit: NASA/JPL-Caltech.
The solution that usually comes up is something along the lines of an off switch. We build all machines with an off switch, and if the goal function is not aligned with human goals, we switch the device off. The evident issue is to make sure that the machine, in the course of self-improving its intelligence, doesn’t eliminate the off switch or make it inaccessible.
What other options are we left with? If the machine’s drives are not aligned with its being controlled by humans, then the next best thing is to convince it to change. We are back on the border between agreement and manipulation, both based on our discussion above about symbiotic relationships.
Communication, not control
It is difficult to assess the amount of cooperation in symbioses. One way to do so is to observe communication patterns, as they are key to the integration of a system, and, arguably, its capacity to compute, learn and innovate. I touched upon this topic before in this blog.
The idea is that anyone with an Internet connection already has access to all the information needed to conduct research, so in theory, scientists could do their work alone locked up in their office. Yet, there seems to be a huge intrinsic value to exchanging ideas with peers. Through repeated transfers from mind to mind, concepts seem to converge towards new theorems, philosophical concepts, and scientific theories.
Recent progress in deep learning, combined with social learning simulations, offers us new tools to model these transfers from the bottom up. However, in order to do so, communication research needs to focus on communication within systems. The best communication systems not only serve as good information maps onto useful concepts (knowledge in mathematics, physics, etc.) but they are also shaped so as to be able to naturally evolve into even better maps in the future. With the appropriate communication system, any entity or group of entities has the potential to completely outdo another one.
Image captions: drone swarm robotics, Kilobots, self-assembling robotics, and the Intel drone swarm at the China Olympics.
A project I am working on in my day-to-day research is to develop models of evolvable communication between AI agents. By simulating a communication-based community of agents learning with deep architectures, we can examine the dynamics of evolvability in communication codes. This type of system may have important implications for the design of communication-based AI capable of generalizing representations through social learning. It also has the potential to yield new theories on the evolution of language, insights for the planning of future communication technology, a novel characterization of evolvable information transfers in the origin of life, and new insights for communication with extraterrestrial intelligence. But most importantly, the ability to gain explicit insight about its own states, and to internally communicate about them, should allow an AI to teach itself to be wiser through self-reflection.
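As a rough illustration of what evolving a shared code between agents can look like, here is a much-simplified sketch: lookup-table codes rather than the deep architectures mentioned above, and made-up population sizes and rates. It is not the actual research code, only the bare skeleton of the idea.

```python
# A heavily simplified sketch: agents evolve a shared mapping between meanings
# and signals; fitness is how often random partners understand each other.
import random

N_MEANINGS, N_SIGNALS, POP, GENS = 5, 5, 30, 200

def random_agent():
    # genotype: (sender map: meaning -> signal, receiver map: signal -> meaning)
    return ([random.randrange(N_SIGNALS) for _ in range(N_MEANINGS)],
            [random.randrange(N_MEANINGS) for _ in range(N_SIGNALS)])

def accuracy(speaker, listener):
    # fraction of meanings the speaker can transmit to the listener without error
    sender, _ = speaker
    _, receiver = listener
    return sum(receiver[sender[m]] == m for m in range(N_MEANINGS)) / N_MEANINGS

def mutate(agent):
    sender, receiver = list(agent[0]), list(agent[1])
    if random.random() < 0.5:
        sender[random.randrange(N_MEANINGS)] = random.randrange(N_SIGNALS)
    else:
        receiver[random.randrange(N_SIGNALS)] = random.randrange(N_MEANINGS)
    return (sender, receiver)

pop = [random_agent() for _ in range(POP)]
for gen in range(GENS):
    # score each agent by how well it speaks to, and understands, random partners
    scores = [sum(accuracy(a, p) + accuracy(p, a) for p in random.sample(pop, 5))
              for a in pop]
    ranked = [a for _, a in sorted(zip(scores, pop), key=lambda x: x[0], reverse=True)]
    survivors = ranked[:POP // 2]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(POP - POP // 2)]

print("mean pairwise accuracy:",
      sum(accuracy(a, b) for a in pop for b in pop) / POP ** 2)
```

Mutual intelligibility tends to rise over generations as the population converges on a shared convention; the interesting questions in the full model are about how evolvable that convention remains.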
Shortcomings of human judgment about AI
Globally, it’s hard to pass clear judgment on normative values in systems. Several branches of philosophy have spent a lot of effort in that domain without producing decisive insights. It’s hard to dismiss the idea that humans might be stuck in their own biases and errors, making it impossible to decide, in an informed way, what constitutes a “bad” or “good” decision in the design of a superintelligence.
Also, it’s very easy to play on people’s fears, and I’m afraid this might drive most of the research in the near future. We saw how easy it was to fund several costly institutes to think about “existential risks”. Of course, it is only natural for biological systems to act this way. The human mind is famously bad at statistics, which, among other weaknesses, makes it particularly risk averse. And indeed, on a small scale, it’s often better to be safe than sorry, but at the scale of technological progress, being safe may mean stagnating for a long time. I don’t believe we have that much time to waste. Fortunately, there are people who think this way too, and who keep the science moving forward. Whether they act for the right reasons is a different discussion.
AI anxiety, again.
Now, I’m actually glad the community thinking deeply about these questions has been blooming lately. As long as they can hold off a bit on the whistleblowing and alarmist writing, and focus on research and measured reflection, I’ll be happy. What would make it even better is the capacity to integrate knowledge from different fields of science by creating common languages, but that’s also for another post.
Win-win, really
The game doesn’t have to be zero- or negative-sum. A positive-sum game, in game theory, refers to an interaction between agents in which the total of gains and losses is greater than zero. A positive sum typically arises when the benefits of an interaction increase for everyone, for example when two parties both gain financially by participating in a contest, no matter who wins or loses.
In nature, there are plenty of such positive-sum games, especially among higher cognitive species. It was even proposed that evolutionary dynamics favoring positive-sum games drove the major evolutionary transitions, such as the emergence of genes, chromosomes, bacteria, eukaryotes, multicellular organisms, eusociality, and even language (Szathmáry & Maynard Smith 1995). For each transition, biological agents entered into larger wholes in which they specialized, exchanged benefits, and developed safeguard systems to prevent freeloading from killing off the assemblies.
In the board game “Settlers of Catan”, individual trades are positive-sum for the two players involved, but the game as a whole is zero-sum, since only one player can win. This is a simple example of multiple levels of games happening simultaneously.
Naturally, this happens at the scale of human life too. The trading of surpluses, as when herders and farmers exchange wool and milk for grain and fruit, is a quintessential example, as is the trading of favors, as when people take turns baby-sitting each other’s children.
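A back-of-the-envelope version of the surplus trade, with entirely made-up subjective values, shows why such an exchange is positive-sum:

```python
# Made-up subjective values: each party values what it lacks more than its surplus.
value_to_herder = {"wool": 1, "grain": 3}   # the herder already has plenty of wool
value_to_farmer = {"wool": 3, "grain": 1}   # the farmer already has plenty of grain

herder_gain = value_to_herder["grain"] - value_to_herder["wool"]   # gives wool, receives grain
farmer_gain = value_to_farmer["wool"] - value_to_farmer["grain"]   # gives grain, receives wool
print("herder:", herder_gain, "farmer:", farmer_gain, "total:", herder_gain + farmer_gain)
# Total is +4: both sides end up better off, so the trade is positive-sum.
```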
Earlier, we mentioned the metaphor of the ants, which get trampled while humans accomplish tasks they deem far too important to worry about the insignificant loss of a few ants’ lives.
What is missing from this picture? The ants don’t reason at a level anywhere close to ours. As a respectful form of intelligence, I’d love to communicate my reasons to the ants, but I feel it would be a pure waste of time. One generally supposes this would still be the case if we transpose the relationship to humans and AI: an AI supposedly wouldn’t waste its time showing respect for human lives, so that if higher goals were at stake, it would sacrifice humans in a heartbeat. Or would it?
I’d argue there are at least two significant differences between these two situations. I concede that the following considerations are rather optimistic, as they presuppose a number of assumptions: the AI must share a communication system with humans, must value some kind of wisdom in its reasoning, and must maintain high levels of cooperation. The bottom line is that I find this optimism more than justified, and I will probably expand on it in future posts on this blog.
Differentiable neural computers (Graves et al. 2016) are recurrent artificial neural network architectures coupled with an external memory. Along with neural Turing machines (Graves, Wayne & Danihelka, 2014), they are promising candidates for producing reasoning-level AI.
The first reason is that humans are reasoning creatures. The second is that humans live in close symbiosis with AI, which is far from being the case between ants and humans. On the first point, reasoning constitutes an important threshold of intelligence: without it, you can’t produce complex knowledge of logic and inference, nor construct the elaborate edifices of mathematics or the scientific method.
As for the second reason, the close symbiotic relation, it seems important to notice that AI came about as a human invention, a tool that humans use. Even if AI becomes autonomous, it is unlikely to remove itself from human control right away. In fact, just like many forms of life before it, it is likely to leave behind a trail of partially mutated forms along the way. Those forms will be only partially autonomous, and will constitute a discrete but dense spectrum along which autonomy rises. Even after the emergence of the first autonomous AI, each of the past forms is likely to survive and still be used as a tool by humans. This track record may act as a buffer, helping to ensure that any superintelligent AI can still communicate and cooperate.
Two entities that are tightly connected won’t break their links easily. Think of a long-time friend. Say one of you suddenly becomes much more capable, or richer, than the other. Would you all of a sudden start ignoring, abusing, or torturing your friend? If that’s not your intention, the AI is no different.
Hopeful future, outside the Solar System
I’d like to end this piece on ideas from Hawking’s writings with which I wholeheartedly agree. We definitely need to take care of our Earth, the cradle of human life. We should also explore the stars, so as not to leave all our eggs in one basket. To accomplish both, we should use all available technologies, which I’d classify in two categories: human improvement, and the design of replacements for humans. The former, through gene editing and the sciences that will lead to the creation of superhumans, may allow us to survive interstellar travel. But the latter, helped by energy engineering, nanorobotics, and machine learning, will certainly allow us to do it much earlier, by designing ad hoc self-replicating machines capable of landing on suitable planets and mining material to produce more colonizing machines, to be sent on to yet more stars.
These technologies are something my research field, Artificial Life, has been contributing to for more than three decades. By designing what may seem like mere toy models, or pseudo-forms of life in wetware, hardware, and software, the hope is to understand, soon enough, the fundamental principles of life, and to design life that will propel itself towards the stars and explore more of our universe.
What role for AI in leaving Earth? Image credit: GotFuturama
Why is it crucial to leave Earth? One important reason, beyond mere human curiosity, is to survive possible meteorite impacts on our planet. Piet Hut, the director of the interdisciplinary program I am currently part of at the Institute for Advanced Study, published a seminal paper explaining how mass extinctions can be caused by cometary impacts (Hut et al. 1987). The collision of a rather small body with the Earth, about 66 million years ago, is thought to have been responsible for the extinction of the dinosaurs, along with every other large form of life.
Such collisions are rare, but not so rare that we should not worry. Asteroids with a 1 km diameter strike Earth every 500,000 years on average, while 5 km bodies hit our planet approximately once every 20 million years (Bostrom 2002, Marcus 2010). Again, quoting Hawking: if this is all correct, it could mean intelligent life on Earth has developed only because of the lucky chance that there have been no large collisions in the past 66 million years. Other planets may not have had a long enough collision-free period to evolve intelligent beings.
Even if abiogenesis, the emergence of life on Earth, wasn’t so hard to produce, the gift of the right conditions for long enough periods of time on our planet was probably essential. Not only good conditions for a long time, but also the right pace of change of those conditions through time, so that mechanisms could learn to memorize such patterns as they impact free-energy foraging (Witkowski 2015, Ofria 2016). After all, our Earth is around 4.6 billion years old, and it took at most a few hundred million years for life to appear on its surface, in relatively high variety. But much longer was necessary for complex intelligence to evolve: about 2 billion years for rich, multicellular forms of life, and 2 more billion years to get to the Anthropocene and the advent of human technology.
Reflective AI at the service of humankind. Image credit: XPrize/YouTube.
To me, the evolution of intelligence and the fundamental laws of its machinery are the most fascinating questions to explore as a scientist. The simple fact that we are able to make sense of our own existence is remarkable. And surely, our capacity to deliberately design the next step in our own evolution, one that will transcend our own intelligence, is mind-blowing.
There may be many ways to achieve this next step. It starts with humility in our design of AI, but the effort we invest in our interactions with it, and the amount of reflection we dedicate to integrating with each other, are definitely essential to our future as lifeforms in this corner of the universe.
I’ll end with Hawking’s words: “Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.”
References
(by order of appearance)
Hawking, S. (2018). Last letters on the future of life on planet Earth. The Sunday Times, October 14, 2018.
Szathmáry, E., & Maynard Smith, J. (1995). The major evolutionary transitions. Nature, 374(6519), 227–232.
Clark, A. (2015). 2011: What Scientific Concept Would Improve Everybody’s Cognitive Toolkit?.
Froese, T., McGann, M., Bigge, W., Spiers, A., & Seth, A. K. (2012). The enactive torch: a new tool for the science of perception. IEEE Transactions on Haptics, 5(4), 365-375.
Clark, A., & Chalmers, D. (1998). The extended mind. analysis, 58(1), 7-19.
Simon, G. K., & Foley, K. (2011). In sheep’s clothing: Understanding and dealing with manipulative people. Tantor Media, Incorporated.
Hut, P., Alvarez, W., Elder, W. P., Hansen, T., Kauffman, E. G., Keller, G., … & Weissman, P. R. (1987). Comet showers as a cause of mass extinctions. Nature, 329(6135), 118.
Bostrom, N. (2002). “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”, Journal of Evolution and Technology, 9.
Marcus, R., Melosh, H. J., Collins, G. (2010). “Earth Impact Effects Program”. Imperial College London / Purdue University. Retrieved 2013-02-04.
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., … & Badia, A. P. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626), 471.
Witkowski, Olaf (2015). Evolution of Coordination and Communication in Groups of Embodied Agents. Doctoral dissertation, University of Tokyo.
Ofria, C., Wiser, M. J., & Canino-Koning, R. (2016). The evolution of evolvability: Changing environments promote rapid adaptation in digital organisms. In Proceedings of the European Conference on Artificial Life 13 (pp. 268-275).
In this post, I write about the problem of sphere packing and augmented communication in the future of the bio- and technosphere.
Multidimensional sphere-packing: how does it relate to the evolution of communication? Image credit: Paul Bourke.
Previously, I approached the topic of transitions in intelligence. I developed in some detail how minimal living systems becoming distributed can accelerate the evolution towards higher levels of intelligence, by bootstrapping the learning process within a network of computing nodes.
In the history of life, through the formation of the first social networks, living systems learned to accumulate information in a distributed way. Instead of having to sacrifice individuals from their populations in exchange for information relevant to their survival, biological species became able to learn by simply exchanging ideas. A few million generations later, we are witnessing the beginnings of machine intelligence, which has arguably already brought learning to levels never achieved before.
Evolutionary timeline, from simple life through some major evolutionary transitions towards higher orders of intelligence in living systems.
In this post, we will explore how connecting these intelligent machines in the future, through an increasingly interconnected and extremely high-bandwidth network, can bring about new paradigms of learning. I’ll try to flesh out the reasons for the power of this new learning, and why it may yield technology that learns even faster than today’s. The secret ingredient may be found in the advent of optimal communication protocols, developed by AIs for AIs.
By designing their own languages to communicate with each other to solve specific problems, AIs may undergo significant phase transitions in the way they represent information. These representations would then effectively become projections of reality that can propel them to as yet unseen levels of problem solving.
The theories that I rely on in the following are based on computational learning, complexity, formal linguistics, mathematical sphere-packing and coding theories.
About AI
As Max Tegmark notes in his recent book, life is now entering its third age. Through research advances in artificial intelligence (AI), life is becoming capable of modifying not only its own software, via learning and culture, but also its own hardware. As an ALifer (Artificial Life researcher), this hits particularly close to home.
Life 3.0: Being Human in the Age of Artificial Intelligence is a book by Swedish-American cosmologist Max Tegmark from MIT, discussing Artificial Intelligence and its impact on the future of life on Earth and beyond.
Hyperconnected society
With the advent of the Internet half a century ago, human society has undergone a crucial transition in connectivity, one which, I’d argue, has the power to drastically alter the structure of communication in very unpredictable ways.
Largely unpredicted, the advent of Internet technology made the biosphere more interconnected than ever before, and in a very different way.
Communication
What is the nature of communication? How does signaling vary across existing and past species in biology? What will it be like to speak to each other in the future, with the advances of AI technology? How will future forms of intelligence communicate, whether they are natural, artificial, or a mixture of both? How distant will their communication system be from human language?
There is a large amount of literature on the evolution of communication, from simple signaling systems to complex, fully-fledged languages (Christiansen 2003; Cangelosi 2012). However, while most research in biology focuses on the natural evolution of communication systems, computer science has long been engineering and optimizing protocols for specific tasks, for example for applications in robotics and computer networks (Corne 2000). Underneath and across all these systems lives a fundamental theory of communication, which studies its rich structure and fascinating properties, as pioneered by Shannon (1948). Later, Chomsky (2002) and Minsky (1974) contributed formal theories about the structure, rules, and dynamics of language and the mind. In what follows, I propose we look at communication from the perspective of sphere packing in high-dimensional spaces.
Multichannel Communication
With communication becoming largely digital, and essentially free, humankind has constructed itself a new niche, one with the power to change our cognitive capacity like never before. Of course, as with most attempts at futuristic prediction, the impact of communication becoming highly multichannel is hard to foresee. One may well wonder what the effects of such a highly connected society will be.
This is a question we can approach with tools from artificial life and coding theory. Here, I propose combining evolutionary computation with insights from coding theory, in order to show the effect of broadening channels on communication systems.
Sphere Packing Theory
Sphere packing in Euclidean spaces has a direct interpretation in terms of error-correcting codes over continuous communication channels (Balakrishnan 1961). Since real-world communication channels can be modeled using high-dimensional vector spaces, high-dimensional sphere packing is very relevant to modern communication.
The dimensionality of a code, i.e. the number of dimensions in which it encodes information, corresponds to the number of measurements describing codewords. Radio signals, for example, use two dimensions: amplitude and frequency.
The general idea, when one desires to arrange communications so as to remove the effects of noise, is to build a vocabulary C of codewords to send, where C is an error-correcting code.
Illustration of an error-correcting code C as a set of 1-spheres in 2 dimensions.
If two distinct codewords c1 and c2 in C satisfy ‖c1 − c2‖ < 2ε, where ε is the level of noise, the received codeword could be ambiguous, as the noise may bring it beyond its sphere of correction.
The challenge is thus to pack as many ε-balls as possible into a larger ball of radius R, where R is the maximal radius achievable with the amount of power available for sending signals over the channel; this amounts to the sphere packing problem (Cohn 2016). In high-dimensional spaces, the usual packing intuitions break down and, apart from cases exploiting specific symmetry properties (Adami 1995), the problem remains largely unsolved.
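As a small numerical illustration of this setup (parameters chosen arbitrarily), one can draw random codewords, measure their minimum pairwise distance, and check that nearest-neighbour decoding recovers any codeword perturbed by noise smaller than half that distance:

```python
# Minimal numerical sketch: random codewords, their guaranteed correction
# radius eps, and nearest-neighbour decoding of a noisy transmission.
import numpy as np

rng = np.random.default_rng(0)
d, k, R = 8, 64, 1.0                       # dimension, codebook size, power radius
codebook = rng.uniform(-R, R, size=(k, d))

# minimum distance between distinct codewords
dists = np.linalg.norm(codebook[:, None, :] - codebook[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
eps = dists.min() / 2                      # noise below eps is always correctable

# send one codeword through an additive-noise channel, decode by nearest neighbour
sent = codebook[rng.integers(k)]
noise = rng.normal(size=d)
received = sent + (0.9 * eps) * noise / np.linalg.norm(noise)   # noise kept below eps
decoded = codebook[np.argmin(np.linalg.norm(codebook - received, axis=1))]
print("decoded correctly:", np.allclose(decoded, sent))
```

Denser packings of the ε-balls allow more codewords, and hence a richer vocabulary, for the same power radius R.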
Example of simulated result for 100 codewords after 500 generations: agents have to cope with small volume due to the 2-dimensional space.
An Evolutionary Simulation
To get a feel for a problem of some complexity, my instinct is usually to start coding and talk later. I therefore coded up a simulation: an evolutionary toy model in which to explore the influence of increasingly high-dimensional communication channels on the structure of the languages a network of agents uses to communicate over them.
The problem of dense sphere packing in multiple dimensions is closely related to finding optimal communication codes. Image credit: Design Emergente.
In the simulation, agents are organized in randomly generated small-world networks and need to optimize a fitness function equal to the sum, over their lifetime, of important messages successfully transmitted to other agents over channels spanning a range of dimensions. Each agent’s genotype encodes a specific set of points distributed over a multidimensional space whose dimensionality lies in a fixed range between m and n. The simulation then runs over many generations of agents adapting their communication protocol through mutation and selection by the genetic algorithm. I varied the values of m and n between 1 and 100 dimensions.
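For readers who want a feel for the mechanics, here is a heavily simplified sketch in the spirit of that simulation, not the actual research code: a single code evolving under a (1+1) strategy rather than a population on a small-world network, with illustrative parameters only.

```python
# Hedged sketch: evolve the positions of k codewords in d dimensions so that
# noisy transmissions are decoded correctly as often as possible.
import numpy as np

rng = np.random.default_rng(1)
d, k, sigma, gens = 2, 100, 0.02, 500      # dimensions, codewords, noise level, generations

def fitness(code):
    # fraction of noisy transmissions recovered by nearest-neighbour decoding
    idx = rng.integers(k, size=200)
    received = code[idx] + rng.normal(scale=sigma, size=(200, d))
    decoded = np.argmin(
        np.linalg.norm(received[:, None, :] - code[None, :, :], axis=-1), axis=1)
    return np.mean(decoded == idx)

code = rng.uniform(0, 1, size=(k, d))      # genotype: k points in the unit square
best = fitness(code)
for gen in range(gens):                    # simple (1+1) evolution strategy
    child = np.clip(code + rng.normal(scale=0.01, size=code.shape), 0, 1)
    f = fitness(child)                     # fitness is stochastic, so this is noisy selection
    if f >= best:
        code, best = child, f              # keep mutations that transmit at least as well
print("final decoding accuracy:", best)
```

As the codewords spread apart to avoid decoding errors, the evolved code starts to resemble a (generally imperfect) sphere packing of the space.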
The simulation yields a sphere packing as illustrated below, shown here for a two-dimensional channel after 500 generations. Note that visualization gets much trickier beyond three dimensions. You can squeeze in a fourth and a fifth dimension with a clever use of colors and stroke types, but they usually don’t help the intuition; I personally find cuts and projections much more helpful for thinking about these problems, but that can be the topic of a future post. The point is that the further the simulation progresses, the closer it gets, asymptotically, to an optimally dense packing.
Visualization of collective communication optimization runs, 100 codewords in 2 dimensions, after 500 generations.
VS. Numerical Optimization
I compared these results to a collision-driven packing generation algorithm, using a variant of both the Lubachevsky–Stillinger algorithm (Lubachevsky 1990) and the Torquato–Jiao algorithm (Torquato 2009), chosen so that it would generalize easily to n dimensions. This numerical procedure simulates a physical process of rearranging and compressing an assembly of hard hyperspheres in order to find their densest spatial arrangement within given constraints, by progressively growing the particles and adapting parameters such as spring constant and friction. The comparison showed that the solutions reached by the evolutionary simulations were consistently suboptimal, across the whole range of experiments.
Simulation results indicate that, as dimensionality grows, the density ratio undergoes several transitions in a very irregular fashion, which we can visualize as changes in the derivative of packing density with respect to the number of dimensions.
This plot shows the logarithm of sphere packing density as a function of dimension (Cohn 2016). The green curve is the linear programming bound, the blue curve is the best packing currently known, and the red curve is the lower bound. Note the equality of upper and best bounds for dimensions 8 and 24.
This may actually be expected, based on known solutions (analytical and numerical estimates) from sphere packing theory for dimensions up to 36 (Cohn 2016, see Figure above). Nevertheless, the existence of optimal packing solutions does not preclude an inherent difficulty in reaching them within the framing of a particular dynamical system, and evolutionary computation depends strongly on the simplicity and evolvability of encodings in genotype space.
So what?
An interesting property observed across these preliminary results is the frequency of jammed codes, that is, codes whose balls are locked into place. This seems to be especially the case with spheres of different dimensions, although this is a hypothesis deserving further investigation. Further analysis will be required to fully interpret this result, and to assess whether higher dimensions end up in crystalline distributions or fluid arrangements.
One important consideration is the fact that the evolutionary simulation may prefer dynamical encoding of solutions, but that’s also something to detail in its own post.
Illustration of sphere packing with several imposed sizes. Image credit: fdecomite on Flickr.
Beyond AI
This post was initially written with the ALIFE 2018 conference in Tokyo in mind, which I was co-organizing this year.
I had the honor of being a Program Chair for the ALIFE 2018 conference in Tokyo.
The present post is related to a piece of work I did earlier this year, on which I presented early results at the conference. The theme of ALIFE 2018 inspired research that goes “beyond AI”, using the artificial life culture to ask futuristic questions about the next transition in the evolution of human society.
I co-organized the 2018 Conference on Artificial Life (ALIFE 2018), the first of a series of unified international conferences on Artificial Life. It took place in Tokyo, just two weeks ago! This new series is a hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALIFE), gathering alifers like me every year to present their science and art.
The preliminary results suggest that future intelligent lifeforms, natural or artificial, interacting over very high-bandwidth networks, may invent novel linguistic structures in high-dimensional spaces. With new ways to communicate, future life may achieve unanticipated cognitive jumps in problem solving.
References
[1] Eörs Szathmáry and John Maynard Smith. The major evolutionary transitions. Nature, 374(6519):227–232, 1995.
[2] Max Tegmark. Life 3.0. Being Human in the Age of Artificial Intelligence. NY: Allen Lane, 2017.
[3] Claude E Shannon. A mathematical theory of communication (parts i and ii). Bell System Tech. J., 27:379–423, 1948.
[4] Nihat Ay. Information geometry on complexity and stochastic interaction. Entropy, 17(4):2432–2458, 2015.
[5] AV Balakrishnan. A contribution to the sphere-packing problem of communication theory. Journal of Mathematical Analysis and Applications, 3(3):485–506, 1961.
[6] Henry Cohn. Packing, coding, and ground states. arXiv preprint arXiv:1603.05202, 2016.
[7] Boris D Lubachevsky and Frank H Stillinger. Geometric properties of random disk packings. Journal of statistical Physics, 60(5-6):561–583, 1990.
[8] Salvatore Torquato and Yang Jiao. Dense packings of the platonic and archimedean solids. Nature, 460(7257):876, 2009.
[9] Günter P Wagner and Lee Altenberg. Perspective: complex adaptations and the evolution of evolvability. Evolution, 50(3):967–976, 1996.
What is intelligence? How did it evolve? Is there such a thing as being “intelligent together”? How much does it help to speak to each other? Is there an intrinsic value to communication? Attempting to address these questions brings us back to the origins of intelligence.
Intelligence back from the origins
Since the origin of life on our planet, the biosphere – a.k.a. the sum of all living matter on our planet – has undergone numerous evolutionary transitions (John Maynard Smith and Eörs Szathmáry, Oxford University Press, 1995). From the first chemical reaction networks, it has successively reached higher and higher stages of organization, from compartmentalized replicating molecules, to eukaryotic cells, multicellular organisms, colonies, and finally (but one can’t assume it’s nearly over) cultural societies.
For at least 3.5 billion years, the biosphere has been modifying and recombining the living entities that compose it, forming higher layers of organization and transferring bottom-layer features and functions to the larger scale. For example, the cells that compose our bodies do not directly serve their own purposes, but rather work to contribute to our life goals as humans. Through every transition in evolution, life has drastically modified the way it stores, processes, and transmits information. This often led to new protocols of communication, such as DNA, cell heredity, epigenesis, or linguistic grammar, which will be the central focus further in this post.
An illustrated timeline of life on Earth, from its origins to the present day.
Every living system as a computer
The first messy networks of chemical reactions that managed to maintain themselves were already “computers”, in the sense that they were processing information inputs from the surrounding chemical environment and affecting that environment in return. From that perspective, they already possessed a certain amount of intelligence. This may require a short parenthesis.
If everything is a computer, and every computer has a certain power, then life should sit on a single scale from stupid to intelligent. This is a rather simplistic, one-dimensional picture, which ignores both the richness of existing problems and the variety of types of computation. Image credit: 33rd Square
What do we mean by intelligence?
Intelligence, in computational terms, is nothing but the capacity to solve difficult problems with a minimal amount of energy. For example, any search problem can be solved by looking exhaustively at every possible place where a solution could hide. If instead a method allows us to look at just a few places before finding a solution, it should be called more intelligent than the exhaustive search. Of course, you could put more “searching agents” on the task, but the intelligence measure remains the same: the less time required by the search, divided by the number of agents employed, the more efficient the algorithm, and the more intelligent the whole physical mechanism. This is not to say that intelligence is only one-dimensional; we are obviously ignoring very important parts of the story. This is all part of a larger topic which I intend to write about in more detail soon, but you could summarize it for now by saying that intelligence consists in “turning a difficult problem into an easy one”.
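A tiny, standard illustration of what “turning a difficult problem into an easy one” means in practice: finding an item in a sorted list by exhaustive scanning versus by repeatedly halving the search space.

```python
# Same problem, two methods: the second needs exponentially fewer "looks".
def exhaustive_search(sorted_items, target):
    steps = 0
    for i, x in enumerate(sorted_items):
        steps += 1
        if x == target:
            return i, steps                # up to n looks in the worst case

def binary_search(sorted_items, target):
    lo, hi, steps = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps              # about log2(n) looks
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1

data = list(range(1_000_000))
print(exhaustive_search(data, 987_654)[1])  # hundreds of thousands of looks
print(binary_search(data, 987_654)[1])      # roughly 20 looks
```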
Octopodes show great dexterity and problem solving skills: they know how to turn certain difficult problems into easy ones. (Note that they also tend to hold Rubik’s cubes in their favored arm, indicating that they are not “octidextrous”.) Image credit: Bournemouth News.
Transitions in intelligence
Let’s now backtrack a little, to where we were discussing evolutionary transitions. We now see the picture in which the first chemical processes already possessed some computational intelligence, in the sense we just framed. Does this intelligence grow through each transition? Did the transitions make it easier to solve problems? Did it turn difficult problems into easy ones?
The main problem for life to solve is typically that of finding sources of free energy and converting them efficiently into work that helps the living entity preserve its own continued existence. If this is the case, then yes: the transitions seem to have made the problem easier. Each transition let living systems climb steeper gradients. Each transition modified information storage, processing, and transmission so as to ensure that the overall processing was beneficial to preserving life, in the short or longer term (an argument by Dawkins on the evolution of evolvability, which I’ll also write more about in another post). And each transition turned the problem into an easier one for living systems.
Image credit: Trends in Ecology and Evolution
Bloody learning
A few billion years ago, when life was still made only of individual organisms, learning was achieved mostly by bloodshed. With Darwinian selection, the basic way for a species to incorporate useful information into its genetic pool was to have part of its population die. Very roughly, for half of its population, a species could gain about one bit of information about the environment. It is obvious how inefficient this is, and it is of course still the case for all of life nowadays, from bacteria to fungi, and from plants to vertebrates. However, living organisms progressively acquired different types of learning, based on communication. Instead of killing individuals in their populations, the processes started to “kill” useless information and keep transferring the relevant pieces. One example of such a new learning paradigm is connectionist learning: a set of interacting entities able to encode and update memories within a network. This permitted learning to evolve on much shorter timescales than replication cycles, which substantially boosted the ability of organisms to adapt to new ecological niches, recognize efficient behaviors, and predict environmental changes. This is, in a nutshell, how intelligence became distributed.
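As a back-of-the-envelope sketch of that “one bit per halving” claim, assuming selection acts as a clean filter on a single environmental cue (a strong simplification):

```python
# Rough sketch: if selection deterministically removes all mismatched lineages,
# the surviving gene pool carries about log2(population/survivors) bits about the cue.
import math

def bits_gained(population, survivors):
    return math.log2(population / survivors)

print(bits_gained(1000, 500))  # half the population dies -> about 1 bit
print(bits_gained(1000, 125))  # a far harsher cull -> about 3 bits
```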
The evolution of distributed intelligence: the jump from the Darwinian paradigm to connectionist learning allowed for learning to evolve on much shorter timescales.
Distributed intelligence
The general intuition is that you can always accomplish more with two brains than with one. In an ideal world, you could divide the computation time by two. One condition, though, is that those two brains be connected and able to exchange information. The way to achieve that is to establish some form of language allowing concepts to be replicated from one mind to another, ranging from basic signals to complex communication protocols.
Another intuition is that, in a society of specialists, all knowledge (information storage), thinking (information processing), and communication (information transmission) is distributed over individuals. To be able to extract the right piece of knowledge and apply it to the problem at hand, one should be able to query any information and have it transferred from one place to another in the network. This is essentially another way to formulate the communication problem. Given the right communication protocol, information transfers can significantly improve the power of computation. Recent advances suggest that by allowing concepts to reorganize while they are being sent back and forth from mind to mind, one can drastically improve the complexity of problem-solving algorithms.
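A toy sketch of that intuition, with purely illustrative numbers: agents that share a protocol for partitioning a search space each scan at most a fraction of it, while agents that cannot coordinate all duplicate the same work.

```python
# Worst-case number of items any single agent must examine before the team
# finds the target, with and without a shared coordination protocol.
def worst_case_search_steps(n_items, n_agents, can_communicate):
    if can_communicate:
        return n_items // n_agents   # one message splits the space into disjoint slices
    return n_items                   # without a shared protocol, every agent scans everything

n = 1_000_000
print("no shared protocol :", worst_case_search_steps(n, 4, can_communicate=False))
print("4 agents + protocol:", worst_case_search_steps(n, 4, can_communicate=True))
```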
Raison d’Être of a Highly Connected Society
There is a reason why, as a scientist, I am constantly interacting with my colleagues. First, I have to point out that it doesn’t have to be the case. Scientists could be working alone, locked in individual offices. Why bother talking to each other, after all? Anyone with an internet connection already has access to all the information needed to conduct research. Wouldn’t isolating yourself all the time increase your focus and productivity?
As a matter of fact, almost no field of research really does that. Apart from very few exceptions, everyone seems to find a huge intrinsic value to exchanging ideas with their peers. The reason for that may be that through repeated transfers from mind to mind, concepts seem to converge towards new ideas, theorems, and scientific theories.
That is not to say that no process needs isolation for a certain time. It might be helpful to isolate yourself and take time to reflect for a while, just as I am doing while writing this post. But ultimately, to maximize its usefulness, information needs to be passed on and spread to the relevant nodes in the network. Waiting for a piece of work to be completely perfect before sharing it back to society may seem tempting, but there is value in doing it early. For those interested in reading more about this, I have ongoing research, which should be published soon, examining the space of networks in which communication helps achieve optimal results under a certain set of conditions.
In the high connectivity network of human society, communication has the hidden potential to improve lives on a global scale. Image credit: Milvanuevatec
Evolvable Communication
In order to do so, one intriguing property is that communication needs to be sufficiently “evolvable”, which was confirmed by some early results from my own work. The best communication systems not only serve as good information maps onto useful concepts (knowledge in mathematics, physics, etc.), but are also shaped so as to be able to naturally evolve into even better maps in the future. One should note that these results, although very exciting, are preliminary, and will need further formal computational proof. But if confirmed, they may have very significant implications for the future of communication systems, for example in artificial intelligence (AI – I don’t know how useful it is to spell that one out nowadays).
Illustration of gradient descent on a fitness landscape. The communication code B can evolve towards two optimum hills, but at each bifurcation lies a choice that should be weighed with the maximum of information.
To give you an idea, evolvable-communication-based AI would have the potential to generalize representations through social learning. This means that such an AI could have different parts of itself talk to each other, in turn becoming wiser through this process of “self-reflection”. Pushing it just a bit further, this same paradigm may also lead to many more results, such as a new theory of the evolution of language, insights for the planning of future communication technology, a novel characterization of evolvable information transfers in the origin of life, and even new insights for a hypothetical communication system with extraterrestrial intelligence.
Evolvable communication is definitely a topic that I’ll be developing more in my next posts (I hate to be teasing again, but adding full details would make this post too lengthy). Stay tuned for more, and in the meantime, I’d be happy to answer any question in the comments.
The problem of dense sphere packing in multiple dimensions is closely related to finding optimal communication codes. To be continued in the next post!
Up next: hyperconnected AIs, language and sphere-packing
In my next post, I will tackle the problem of finding optimal communication protocols, in a society where AI has become omnipresent. I will show how predicting future technology requires accurate analysis from machine learning, sphere-packing, and formal language and coding theories.