The Age of Information Overload

Scroll, Snack, Repeat

We live in a time of information abundance. It has become as plentiful—and as carefully engineered to exploit our every weakness—as modern processed food. The more content is optimized to manipulate our attention, the more our cognitive patterns are hijacked. Digital platforms are not only distracting; they reshape what we pay attention to and how we think.

The same cognitive patterns that once helped us survive by seeking vital free energy and novelty now keep us locked in cycles of endless scrolling, technostress, and mental malnutrition. Worse, the more these systems optimize to capture our engagement, the harder it becomes to find truthful, nourishing ideas for the growth of humanity. Are we witnessing the edge of a Great Filter for intelligent life itself?

Overcoming this challenge will demand either drastically augmenting our cognitive ability to process the overflow of information, developing a brand new kind of immune system for the mind, or rapidly catalyzing the development of social care and compassion for other minds. Perhaps at this point, there is no choice but to pursue all three.

In this Age of Information Abundance, humans face an engineered deluge of manipulative digital information that exploits our every instinct; escaping it will demand strengthening our minds, mental immune systems, and shared compassion. Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Jul 4, 2025.

Digital Addiction and Mind Starvation in the Attention Economy

This post was inspired by brilliant science communicator and author Hank Green’s recent thoughts on whether the internet is best compared to cigarettes or food (Green, 2025)—a thoughtful meditation on addiction and the age of information abundance. Hank ultimately lands on the metaphor of food: the internet isn’t purely harmful like cigarettes. Its dynamics are closer to those of our modern food landscape, full of hyper-processed, hyper-palatable products. Some are nutritious. Many are engineered to hijack our lives.

This idea isn’t new. Ethologist Niko Tinbergen, who shared the 1973 Nobel Prize in Physiology or Medicine, demonstrated that instinctive behaviors in animals could be triggered by what he termed supernormal stimuli—exaggerated triggers that mimic something once essential for survival—using creatively unrealistic dummies such as oversized fake eggs that birds preferred to their own, red-bellied fish models that provoked attacks from male sticklebacks, and artificial butterflies that elicited mating behavior (Tinbergen, 1951). Philosopher Daniel Dennett (1995) pointed out how evolved drives can be hijacked by such supernormal stimuli, and how our minds, built by evolutionary processes to react predictably to certain signals, can thus be taken advantage of, often without our conscious awareness. Chocolate is a turbo-charged version of our drive to seek high-energy food, which we come to prefer over a healthier diet of fruits and vegetables. Nesting birds prefer oversized fake eggs over their own. Cuteness overload rides on our innate caregiving instincts, which respond powerfully to cartoonish or exaggerated baby-like features such as large eyes, round faces, and small noses. Pornography exaggerates sexual cues to hijack evolved reproductive instincts, often producing stronger reactions than natural encounters.

For most of our biological history, we humans have evolved in a world of information scarcity, where every piece of knowledge could be vital. Today, we have turned onto the central lane of the Information Age, where digital information has become a “supernormal stimulus”—a term coined by Nobel laureate Niko Tinbergen—capable of manipulating our every thought and decision. Book reference: Tinbergen, N. (1951). The study of instinct. Oxford University Press.

And here we are with the internet. For most of our biological history, we have evolved in a world of information scarcity, where every piece of knowledge could be vital. Today, we have turned onto the central lane of the Information Age highway, pelted by an endless storm of content engineered to feel urgent, delicious, or enraging (Carr, 2010; McLuhan, 1964). It’s not that infotaxis—meant in the pure sense of information foraging, as chemotaxis is to chemical gradients—is inherently bad. It can be as helpful as Dennett’s food, sex, or caretaking-related instincts. But the environment has changed faster than our capacity to navigate it wisely. The same drives that helped our ancestors survive can now keep us scrolling endlessly, long past the point of nourishment. As Adam Alter (2017) describes, modern platforms are designed addictions—custom-engineered experiences that feed our craving for novelty while diminishing our sense of agency. Meanwhile, Shoshana Zuboff (2019) has shown how surveillance capitalism exploits these vulnerabilities systematically, capturing and monetizing attention at an unprecedented scale. And before that, pioneers like BJ Fogg (2003) documented how persuasive technology can be designed to influence attitudes and behaviors, effectively reshaping our habits in subtle but powerful ways.

This is the core problem we face: the more these systems optimize for engagement, the harder it becomes to access truthful, useful, nourishing information—the kind that helps us think clearly and live well. What looks like an endless buffet of knowledge is often a maze of distraction, manipulation, and empty mental calories. In other words, the very abundance of content creates an environment where the most beneficial ideas are the hardest to find and the easiest to ignore.

Defending Global Access to Truthful Information

Shall we pause for a moment and acknowledge how ridiculously challenging it has become to search for information online? Some of us can certainly remember times and versions of the Internet in which it wasn’t all that difficult to find an article, a picture, or a quote we’d seen before. Of course, information was once easier to retrieve because there was less overwhelming abundance; part of today’s difficulty also stems from the increasing incentives to flood us with manipulative content. But beyond simply the challenge of locating something familiar, it has also become much harder to know what source to trust. The same platforms that bury useful information under endless novelty also strip away the signals that once helped us judge credibility, leaving us adrift in a sea where misinformation and insight are nearly indistinguishable.

Red Queen’s race between cyberattacks from ads-based social media and systems protecting our mental health. Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Jul 4, 2025.

So how do we adapt? I think we have three broad paths. One is to augment our cognitive capacity—to build better tools, shared knowledge systems, and personal practices that help us sort signal from noise. This path plays out as an open-field Red Queen’s race between mental cyberattacks waged for advertising and the protective systems trying to keep trustworthy information flowing. It’s not impossible. Just costly—perhaps costly enough to render this route nonviable. This is the path of cryptography.

The second is to bury the signal itself. We can hide cognitive paths, and the channels we really use to communicate survival information with each other or within our own bodies, through a collection of local concealment strategies and obfuscation mechanisms. Even our body’s immune system, rooted in the cryptographic paradigm (Krakauer, 2015), works by obscuring and complicating access to its key processes to resist exploitation by external—as well as internal—parasites. Biological processes like antigen mimicry, chemical noise for camouflage, or viral epitope masking do involve concealment and obfuscation of vital information—meaning access to the organism’s free energy. Protecting by building walls—which means exploiting inherent asymmetries present in the underlying physical substrate—can incur great costs. In many cases, making information channels covert, albeit at the expense of signal openness and transparency, can be a cheaper tactic. This second path is called steganography.

The third and final path is to increase trust and care among humans—and the other agents, biological or artificial, with whom we co-create our experience in the world. Everyone can help tend to each other’s mental health and safety. Just as we have learned to protect and heal each other’s bodies, we will need to learn to protect each other’s minds. This means allowing our emerging, co-evolving systems of human and non-human agents to develop empathy, shared responsibility, and trust, so that manipulative systems and predatory incentives cannot so easily exploit our vulnerabilities. This was the focus of my doctoral dissertation, which examined the fundamental principles and practical conditions by which a system composed of populations of competing agents could eventually give rise to trust and cooperation. The solution calls for reimagining the architectures of our social, technological, and economic institutions so they align with mutual care, and cultivating diverse mixtures of cultures that value psychological safety at an individual and global level. This is the path of care, compassion, and social resilience.

We’re only beginning to build the cultural tools to deal with this. Naming the dynamic—seeing how our attention gets manipulated and why it makes truth harder to reach—is the first step. Maybe the question isn’t whether the internet is cigarettes or food. Maybe it’s whether we can learn to distinguish empty calories from real sustenance, and whether we can do it together—helping each other flourish in a world too full.

Overcoming Mental Cyberattacks with Tools for Care and Reflection

I’ve seen people throwing technology at it: why not use ChatGPT to steward our consumption of information? I’m all in favor of having AI mentor us—it has much to offer, and I’m myself engaged in various projects developing this kind of technology for mindful education and personalized mentorship. But this presents a risk we can’t afford to ignore: these new proxies are themselves deeply susceptible to being hijacked. Jailbreaks, hijacking, man-in-the-middle attacks, you name it. LLMs are weak, and—take it from someone whose training is in cryptography and who spends considerable effort on LLM cyberdefense—they will remain so for many years. LLMs haven’t even stood the test of time the way our original cognitive algorithms have—our whole biology relies on layer upon layer of protections and immune systems shielding us from ourselves: from exposure to injuries, germs, harmful bacteria, viruses, and harm of countless sorts and forms. This is why we don’t need to live in constant fear of death and suffering on a daily basis. But technology is a new vector for new diseases—many of them still unidentified.

To protect our mental health, education ought to adapt quickly in the Age of Information Overload.
Credit: Jamie Gill / Getty Images

We need to learn to deal with such new threats in this changing world. The internet. Entertainment. Our habits of healthy living. How we raise our children. The entire design of education itself must evolve. We need new schools—institutions built not just to transmit knowledge but to help us develop the discernment, resilience, and collective care needed to thrive amid infinite information. It’s no longer about preparing for yesterday’s challenges. It’s about learning to navigate a world where our instincts are constantly manipulated and where reflection, curiosity, and shared wisdom are our best defenses against mental malnutrition. It’s time to work on incorporating care and reflection into our technolives—admittedly, our lives are going to be technological too, and we need to accept that and focus on symbiotizing with technology rather than fighting it—which in turn requires catalyzing a protective niche of care- and reflection-oriented technosociety around it. Let’s develop a healthcare system for our minds.

DOI: 10.54854/ow2025.07

References

Alter, A. (2017). Irresistible: The rise of addictive technology and the business of keeping us hooked. Penguin Press.

Carr, N. (2010). The shallows: What the Internet is doing to our brains. W. W. Norton & Company.

Dennett, D. C. (1995). Darwin’s dangerous idea: Evolution and the meanings of life. Simon & Schuster.

Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. Morgan Kaufmann.

Green, H. (2025, June 28). You’re not addicted to content, you’re starving for information [Video]. YouTube.

Krakauer, D. C. (2015). Cryptographic nature. arXiv preprint arXiv:1505.01744.

McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.

Tinbergen, N. (1951). The study of instinct. Oxford University Press.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

AI-to-AI Communication: Unpacking Gibberlink, Secrecy, and New AI Communication Channels

AI communication channels may represent the next major technological leap, driving more efficient interaction between agents—artificial or not. While recent projects like Gibberlink demonstrate AI optimizing exchanges beyond the constraints of human language, fears of hidden AI languages deserve to be properly debunked. The real challenge is balancing efficiency with transparency, ensuring AI serves as a bridge—not a barrier—in both machine and human-AI communication.

Coding on a computer screen by Markus Spiske is licensed under CC-CC0 1.0

At the ElevenLabs AI hackathon in London last month, developers Boris Starkov and Anton Pidkuiko introduced a proof-of-concept program called Gibberlink. The project features two AI agents that start by conversing in human language, recognize each other as AI, and then switch to a more efficient protocol using chirping audio signals. The demonstration highlights how AI communication can be optimized once freed from the constraints of human-interpretable language.

While Gibberlink points to a valuable technological direction in the evolution of AI-to-AI communication—one that has rightfully captured public imagination—it remains but an early-stage prototype relying so far on rudimentary principles from signal processing and coding theory. Actually, Starkov and Pidkuiko themselves emphasized that Gibberlink’s underlying technology isn’t new: it dates back to the dial-up internet modems of the 1980s. Its use of FSK modulation and Reed-Solomon error correction to generate compact signals, while a good design, falls short of modern advances, leaving substantial room for improvement in bandwidth, adaptive coding, and multi-modal AI interaction.

Gibberlink, Global Winner of ElevenLabs 2025 Hackathon London. The prototype demonstrated how two AI agents started a normal phone call about a hotel booking, then discovered they are both AI, and decided to switch from verbal English to ggwave, a more efficient open-standard data-over-sound protocol. Code: https://github.com/PennyroyalTea/gibberlink Video Credit: Boris Starkov and Anton Pidkuiko

Media coverage has also misled the public, overstating the risks of AI concealing information from humans and fueling speculative narratives—sensationalizing real technical challenges into false yet compelling storytelling. While AI-to-AI communication can branch out of human language for efficiency, it has already done so across multiple domains of application without implying any deception or harmful consequences from obscured meaning. Unbounded by truth-inducing mechanisms, social media have amplified unfounded fears about malicious AI developing secret languages beyond human oversight. Ironically, more effort in AI communication research may actually enhance transparency: discovering safer ad-hoc protocols, reducing ambiguity, and embedding oversight meta-mechanisms would in turn improve explainability, strengthen human-AI collaboration, and ensure greater accountability.

In this post, let us take a closer look at this promising trajectory of AI research, unpacking these misconceptions while examining its technical aspects and broader significance. This development builds on longstanding challenges in AI communication, representing an important innovation path with far-reaching implications for the future of machine interfaces and autonomous systems.

Debunking AI Secret Language Myths

Claims that AI is developing fully-fledged secret languages—allegedly to evade human oversight—have periodically surfaced in the media. While risks related to AI communication exist, such claims are often rooted in misinterpretations of the optimization processes that shape AI behavior and interactions. Let’s explore three examples. In 2017, there was the story of Facebook AI agents streamlining negotiation dialogues, which, far from being a surprising emergent phenomenon, amounted to a predictable outcome of reinforcement learning mistakenly read by humans as a cryptic language (Lewis et al., 2017). Similarly, a couple of years ago, OpenAI’s DALL·E 2 seemed to respond to gibberish prompts, which sparked widespread discussion and was often misinterpreted as AI developing a secret language. In reality, this behavior is best explained by how AI models process text through embedding spaces, tokenization, and learned associations rather than intentional linguistic structures. What seemed like a secret language to some may be closer to low-confidence neural activations, akin to mishearing lyrics, than to a real language.

Source: Daras & Dimakis (2022)

Models like DALL·E (Ramesh et al., 2021) map words and concepts as high-dimensional vectors, and seemingly random strings can, by chance, land in regions of this space linked to specific visuals. Built from a discrete variational autoencoder (VAE), an autoregressive decoder-only transformer similar to GPT-3, and a CLIP-based pair of image and text encoders, DALL·E processes text prompts by first tokenizing them using Byte-Pair Encoding (BPE). Since BPE breaks text into subword units rather than whole words, even gibberish inputs can be decomposed into meaningful token sequences for which the model has learned associations. These tokenized representations are then mapped into DALL·E’s embedding space via CLIP’s text encoder, where they may, by chance, activate specific visual concepts. This understanding of training and inference mechanisms highlights intriguing quirks, explaining why nonsensical strings sometimes produce unexpected yet consistent outputs, with important implications for adversarial attacks and content moderation (Millière, 2023). While there is no proper hidden language to be found, analyzing the complex interactions within model architectures and data representations can reveal vulnerabilities and security risks, which are likely to occur at their interface with humans and will need to be addressed.
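To make the tokenization point concrete, here is a minimal sketch of how a BPE tokenizer decomposes a nonsense prompt into familiar subword pieces. DALL·E’s own vocabulary is not public here, so the openly available tiktoken GPT encoding serves as a stand-in, and the exact split shown is illustrative only.

```python
import tiktoken  # assumption: tiktoken is installed; a GPT byte-pair encoding
                 # stands in for DALL·E's own (non-public) BPE vocabulary

enc = tiktoken.get_encoding("cl100k_base")

# "Apoploe vesrreaitais" is one of the nonsense prompts discussed by Daras &
# Dimakis (2022). Meaningless to a human reader, it is still decomposed by BPE
# into known subword tokens, each with a learned embedding the model can react to.
gibberish = "Apoploe vesrreaitais"
token_ids = enc.encode(gibberish)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)  # a list of short subword strings; the exact split depends on the vocabulary
```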

A third and more creative connection may be found in the reinforcement learning and guided search domain, with AlphaGo, which developed compressed, task-specific representations to optimize gameplay, much like expert shorthand (Silver et al., 2017). Rather than relying on explicit human instructions, it encoded board states and strategies into efficient, unintuitive representations, refining itself through reinforcement learning. The approach somewhat aligns with the argument by Lake et al. (2017) that human-like intelligence requires decomposing knowledge into structured, reusable compositional parts and causal links, rather than mere brute-force statistical correlation and pattern recognition—like Deep Blue back in the day. However, AlphaGo’s ability to generalize strategic principles from experience used different mechanisms from human cognition, illustrating how AI can develop domain-specific efficiency without explicit symbolic reasoning. This compression of knowledge, while opaque to humans, is an optimization strategy, not an act of secrecy.

Illustration of AlphaGo’s representations being able to capture tactical and strategic principles of the game of go. Source: Egri-Nagy & Törmänen (2020)

Fast forward to the recent Gibberlink prototype: AI agents switching from English to a sound-based protocol for efficiency is a deliberately programmed optimization. Media narratives framing this as a dangerous slippery slope towards AI secrecy overlook that such instances are explicitly programmed optimizations, not emergent deception. These systems are designed to prioritize efficiency in communication, not to obscure meaning, although there might be some effects on transparency—effects that can be carefully addressed and mediated if they become the point of focus.

The Architecture of Efficient Languages

​In practice, AI-to-AI communication naturally gravitates toward faster, more reliable channels, such as electrical signaling, fiber-optic transmission, and electromagnetic waves, rather than prioritizing human readability. However, one does not preclude the other, as communication can still incorporate “subtitling” for oversight and transparency. The choice of a communication language does not inherently prevent translations, meta-reports, or summaries from being generated for secondary audiences beyond the primary recipient. While arguments could be made that the choice of language influences ranges of meanings that can be conveyed—with perspectives akin to the Sapir-Whorf hypothesis and related linguistic relativity—this introduces a more nuanced discussion on the interaction between language structure, perception, and cognition (Whorf, 1956; Leavitt, 2010).

Source: https://xkcd.com/1531/ Credit: Randall Munroe

Language efficiency, extensively studied in linguistics and information theory (Shannon, 1948; Gallager, 1962; Zipf, 1949), drives AI to streamline interactions much like human shorthand. In an interesting piece of research applying information-theoretic tools to natural languages, Coupé et al. (2019) showed that, regardless of speech rate, languages tend to transmit information at an approximate rate of 39 bits per second. This, in turn, suggests a universal constraint on processing efficiency, which again connects with linguistic relativity. While concerns about AI interpretability and security are valid, they should be grounded in technical realities rather than speculative fears. Understanding how AI processes and optimizes information clarifies potential vulnerabilities—particularly at the AI-human interface—without assuming secrecy or intent. AI communication reflects engineering constraints, not scheming, reinforcing the need for informed discussions on transparency, governance, and security.

This figure from Coupé et al.’s study illustrates that, despite significant variations in speech rate (SR) and information density (ID) across 17 languages, the overall information rate (IR) remains consistent at approximately 39 bits per second. This consistency suggests that languages have evolved to balance SR and ID, ensuring efficient information transmission. Source: Coupé et al. (2019)

AI systems are becoming ever more omnipresent, and they will increasingly need to interface with each other in an autonomous manner. This will require the development of more specialized communication protocols, either by human design or by continuous evolution of such protocols—and probably by various mixtures of both. We may then witness emergent properties akin to those seen in natural languages—where efficiency, redundancy, and adaptability evolve in response to environmental constraints. Studying these dynamics could not only enhance AI transparency but also provide deeper insights into the future architectures and fundamental principles governing both artificial and human language.

When AI Should Stick to Human Language

Despite the potential for optimized AI-to-AI protocols, there are contexts where retaining human-readable communication is crucial. Fields involving direct human interaction—such as healthcare diagnostics, customer support, education, legal systems, and collaborative scientific research—necessitate transparency and interpretability. However, it is important to recognize that even communication in human languages can become opaque due to technical jargon and domain-specific shorthand, complicating external oversight.​

AI can similarly embed meaning through techniques analogous to human code-switching, leveraging the idea behind the Sapir-Whorf hypothesis (Whorf, 1956), whereby language influences cognitive structure. AI will naturally gravitate toward protocols optimized for their contexts, effectively speaking specialized “languages.” In some cases, this is explicitly cryptographic—making messages unreadable without specific decryption keys, even if the underlying language is known (Diffie & Hellman, 1976). AI systems could also employ sophisticated steganographic techniques, embedding subtle messages within ordinary-looking data, or leverage adversarial code obfuscation and data perturbations familiar from computer security research (Fridrich, 2009; Goodfellow et al., 2014). These practices reflect optimization and security measures rather than sinister intent.​
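As a concrete illustration of the steganographic idea, here is a minimal least-significant-bit sketch in the textbook spirit of Fridrich (2009). It is not a scheme attributed to any AI system: a short message simply rides in the lowest bit of each pixel of an ordinary-looking image, here a synthetic grayscale array, with numpy assumed as the only dependency.

```python
import numpy as np

def embed(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide message bits in the least significant bit of the first pixels."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite LSBs only
    return flat.reshape(cover.shape)

def extract(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read the LSBs back out and repack them into bytes."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed(cover, b"meet at dawn")
assert extract(stego, 12) == b"meet at dawn"
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # 1
```

The stego image differs from the cover by at most one intensity level per pixel, which is exactly why such channels look like ordinary data to a casual observer.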

Photo by Yan Krukau on Pexels.com

Gibberlink, Under the Hood

Gibberlink operates by detecting when two AI agents recognize each other as artificial intelligences. Upon recognition, the agents transition from standard human speech to a faster data-over-audio format called ggwave. The modulation approach employed is Frequency-Shift Keying (FSK), specifically a multi-frequency variant. Data is split into 4-bit segments, each transmitted simultaneously via multiple audio tones in a predefined frequency range (either ultrasonic or audible, depending on the protocol settings). These audio signals cover a 4.5kHz frequency spectrum divided into 96 equally spaced frequencies, utilizing Reed-Solomon error correction for data reliability. Received audio data is decoded using Fourier transforms to reconstruct the original binary information.​
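To make that mechanism tangible, here is a rough, deliberately simplified data-over-sound round trip. It is not ggwave’s actual implementation: it uses one tone per 4-bit symbol instead of multiple simultaneous tones, sixteen illustrative frequencies instead of 96, and omits Reed-Solomon error correction; the sample rate, tone spacing, and symbol duration are assumptions chosen so that each tone lands exactly on an FFT bin.

```python
import numpy as np

SAMPLE_RATE = 48_000
SYMBOL_SECONDS = 0.08                      # 3,840 samples per symbol
BASE_HZ, STEP_HZ = 1_875.0, 50.0           # 16 tones: 1875 Hz ... 2625 Hz

def nibbles(data: bytes):
    """Split each byte into its high and low 4-bit symbols."""
    for b in data:
        yield b >> 4
        yield b & 0x0F

def encode(data: bytes) -> np.ndarray:
    """Map each 4-bit symbol to a pure sine tone and concatenate the tones."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (BASE_HZ + n * STEP_HZ) * t) for n in nibbles(data)]
    )

def decode(signal: np.ndarray) -> bytes:
    """Recover each symbol from the dominant FFT bin of its time window."""
    win = int(SAMPLE_RATE * SYMBOL_SECONDS)
    freqs = np.fft.rfftfreq(win, d=1 / SAMPLE_RATE)
    syms = []
    for i in range(0, len(signal) - win + 1, win):
        peak = freqs[np.argmax(np.abs(np.fft.rfft(signal[i:i + win])))]
        syms.append(int(round((peak - BASE_HZ) / STEP_HZ)) & 0x0F)
    return bytes((hi << 4) | lo for hi, lo in zip(syms[::2], syms[1::2]))

message = b"hello, agent"
assert decode(encode(message)) == message
print("round trip OK")
```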

Although conceptually elegant, this approach remains relatively basic compared to established methods in modern telecommunications. For example, advanced modulation schemes such as Orthogonal Frequency-Division Multiplexing (OFDM), Spread Spectrum modulation, and channel-specific encoding techniques like Low-Density Parity-Check (LDPC) and Turbo Codes could dramatically enhance reliability, speed, and overall efficiency. Future AI-to-AI communication protocols will undoubtedly leverage these existing advancements, transcending the simplistic methods currently seen in demonstrations such as Gibberlink.

This is a short demonstration of ggwave in action. A console application, a GUI desktop program and a mobile app are communicating through sound using ggwave. Source code: https://github.com/ggerganov/ggwave Credit: Georgi Gerganov

New AI-Mediated Channels for Human Communication

Beyond internal AI-to-AI exchanges, artificial intelligence increasingly mediates human interactions across multiple domains. AI can augment human communication through real-time translation, summarization, and adaptive content filtering, shaping our social, professional, and personal interactions (Hovy & Spruit, 2016). This growing AI-human hybridization blurs traditional boundaries of agency, raising complex ethical and practical questions. It becomes unclear who authors a message, makes a decision, or takes an action—the human user, their technological partner, or a specific mixture of both. With authorship, of course, comes responsibility and accountability. Navigating this space is a tightrope walk, as over-reliance on AI risks diminishing human autonomy, while restrictive policies may stifle innovation. Continuous research in this area is key. If approached thoughtfully, AI can serve as a cognitive prosthetic, enhancing communication while preserving user intent and accountability (Floridi & Cowls, 2019).

Thoughtfully managed, this AI-human collaboration will feel intuitive and natural. Rather than perceiving AI systems as external tools, users will gradually incorporate them into their cognitive landscape. Consider the pianist analogy: When an experienced musician plays, they no longer consciously manage each muscle movement or keystroke. Instead, their cognitive attention focuses on expressing emotions, interpreting musical structures, and engaging creatively. Similarly, as AI interfaces mature, human users will interact fluidly and intuitively, without conscious translation or micromanagement, elevating cognition and decision-making to new creative heights.

Ethical issues and ways to address them were discussed by our two panelists, Dr. Pattie Maes (MIT Media Lab) and Dr. Daniel Helman (Winkle Institute), at the final session of the New Human Interfaces Hackathon, part of Cross Labs’ annual workshop 2025.

What Would Linguistic Integration Between Humans and AI Entail?

Future AI-human cognitive integration may follow linguistic pathways familiar from human communication studies. Humans frequently switch between languages (code-switching), blend languages into creoles, or evolve entirely new hybrid linguistic structures. AI-human interaction could similarly generate new languages or hybrid protocols, evolving dynamically based on situational needs, cognitive ease, and efficiency.

Ultimately, Gibberlink offers a useful but modest illustration of a much broader trend: artificial intelligence will naturally evolve optimized communication strategies tailored to specific contexts and constraints. Rather than generating paranoia over secrecy or loss of control, our focus should shift toward thoughtfully managing the integration of AI into our cognitive and communicative processes. If handled carefully, AI can serve as a seamless cognitive extension—amplifying human creativity, enhancing our natural communication capabilities, and enriching human experience far beyond current limits.

Gibberlink’s clever demonstration underscores that AI optimization of communication protocols is inevitable and inherently beneficial, not a sinister threat. The pressing issue is not AI secretly communicating; rather, it’s about thoughtfully integrating AI as an intuitive cognitive extension, allowing humans and machines to communicate and collaborate seamlessly. The future isn’t about AI concealing messages from us—it’s about AI enabling richer, more meaningful communication and deeper cognitive connections.


References

  • Coupé, C., Oh, Y. M., Dediu, D., & Pellegrino, F. (2019). Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche. Science Advances, 5(9), eaaw2594. https://doi.org/10.1126/sciadv.aaw2594
  • Cowls, J., King, T., Taddeo, M., & Floridi, L. (2019). Designing AI for social good: Seven essential factors. Available at SSRN 3388669.
  • Daras, G., & Dimakis, A. G. (2022). Discovering the hidden vocabulary of DALL·E 2. arXiv preprint arXiv:2206.00169.
  • Diffie, W., & Hellman, M. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22(6), 644-654.
  • Egri-Nagy, A., & Törmänen, A. (2020). The game is not over yet—Go in the post-AlphaGo era. Philosophies, 5(4), 37.
  • Fridrich, J. (2009). Steganography in digital media: Principles, algorithms, and applications. Cambridge University Press.
  • Gallager, R. G. (1962). Low-density parity-check codes. IRE Transactions on Information Theory, 8(1), 21-28.
  • Goodfellow, I., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
  • Leavitt, J. H. (2010). Linguistic relativities: Language diversity and modern thought. Cambridge University Press. https://doi.org/10.1017/CBO9780511992681
  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
  • Lewis, M., Yarats, D., Dauphin, Y. N., Parikh, D., & Batra, D. (2017). Deal or no deal? End-to-end learning for negotiation dialogues. arXiv:1706.05125.
  • Millière, R. (2023). Adversarial attacks on image generation with made-up words: Macaronic prompting and the emergence of DALL·E 2’s hidden vocabulary.
  • Ramesh, A. et al. (2021). Zero-shot text-to-image generation. arXiv:2102.12092.
  • Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379-423.
  • Silver, D. et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359.
  • Whorf, B. L. (1956). Language, thought, and reality. MIT Press.
  • Zipf, G. K. (1949). Human behavior and the principle of least effort: An introduction to human ecology. Addison-Wesley.​

DOI: http://doi.org/10.54854/ow2025.03

Life After Programming: Embracing Human-Machine Symbiosis in the Age of AI

As AI continues to evolve, conversations have started questioning the future of traditional programming and computer science education. The rise of prompt engineering—the art of crafting inputs that lead AI models to generate specific outputs—has led many to believe that mastering this new skill could replace the need for deep computational expertise. While this perspective does capture a real ongoing shift in how humans interact with technology, it overlooks the essential role of human intelligence in guiding and collaborating with AI. This seems a timely topic, and one worthy of discussion—one that we’ll be exploring at this year’s Cross Labs Spring Workshop on New Human Interfaces.

Can we forget about programming? As AI reshapes our interaction with technology, the future is unlikely to erase programming—rather, it will redefine it as another language between humans and machines. Image Credit: Ron Lach

The Evolving Role of Humans in AI Collaboration

Nature demonstrates a remarkable pattern of living systems integrating new tools into themselves, transforming them into essential components of life itself. For example, mitochondria, once independent bacteria, became integral to eukaryotic cells, taking on the role of energy producers through endosymbiosis. Similarly, the incorporation of chloroplasts enabled plants to harness sunlight for photosynthesis, and even the evolution of the vertebrate jaw exemplifies how skeletal elements were repurposed into functional innovations. These examples highlight nature’s ability to adapt and integrate external systems, offering a profound analogy for how humans might collaborate with AI to augment and expand our own capabilities.

Current AI systems, regardless of their sophistication, remain tools that require some kind of human direction to achieve meaningful outcomes. Effective collaboration with AI involves not only instructing the machine but also understanding its capabilities and limitations. Clear communication of our goals ensures that AI can process and act upon them accurately. This process transcends mere command issuance; it quickly turns into a dynamic, iterative dialogue where human intuition and machine computation synergize toward a desirable outcome.

Jensen Huang’s Take on Democratizing Programming

“The language of programming is becoming more natural and accessible.” NVIDIA CEO Jensen Huang at Computex Taipei, emphasizing how interfacing computers is undergoing a radical shift. Credit: NVIDIA 2023

NVIDIA Founder and CEO Jensen Huang highlighted this paradigm shift in these words:

“The language of programming is becoming more natural and accessible. With AI, we can communicate with computers in ways that align more closely with human thought. This democratization empowers experts from all fields to leverage computing power without traditional coding.”

The quote is from a couple of years ago, in Huang’s Computex 2023 keynote address. This transformation means that domain specialists—not just scientists and software engineers—can now harness AI to drive their work forward. But this evolution doesn’t render foundational knowledge in computer science obsolete; it merely underscores the imminent change in the nature of human-machine interactions. Let us dive further into this interaction.

Augmenting Human Intelligence Through New Cybernetics

A helpful approach to fully realizing AI’s potential is to focus on augmenting human intelligence in ways akin to those of the “infamous” cybernetic technology (Wiener, 1948; Pickering, 2010). “Infamous” only because of how abruptly the field fell out of favor in the late 70s, when the technical limitations of the time led to its decline, along with a growing skepticism about its implications for agency, autonomy, and human identity. It was nevertheless not a poor approach by any count, and it may soon become relevant again as technology evolves into a substrate that better supports its goals.

El Ajedrecista: the first computer game in history. Norbert Wiener (right), the father of cybernetics—the science studying communication and control in machines and living beings—is shown observing the pioneering chess automaton created by Leonardo Torres Quevedo, here demonstrated by his son at the 1951 Paris Cybernetic Conference. Image credit: Wikimedia Commons

Such cybernetic efforts towards human augmentation aim to couple human cognition with external entities such as smart sensors, robotic tools, and autonomous AI models, through a universal science of dynamical control and feedback. This integration between humans and other systems fosters an enactive and rich, high-bandwidth communication between humans and machines, enabling a seamless exchange of information and capabilities.

Embedding ourselves within a network of intelligent systems, and possibly other humans as well, enhances our cognitive and sensory abilities. Such a symbiotic relationship would allow us to better address complex challenges by efficiently processing and interpreting vast amounts of data. Brain-computer interfaces (BCIs) are one example of technologies that facilitate direct communication between the human brain and external devices, offering promising avenues for cognitive enhancement (Lebedev & Nicolelis, 2017). Another example is augmented reality (AR), which overlays real-world percepts with digital additions and virtual controls to strengthen our connection to physical reality, enhancing our experience and handling of it (Billinghurst et al., 2015; Dargan et al., 2023). If they manage to seamlessly blend physical and virtual realities, AR systems have the power to amplify human cognitive and sensory capabilities, empowering us to navigate our problem spaces in contextually meaningful yet intuitive ways.

Humble Steps: The Abacus

Augmenting human intelligence with tools. Here, we observe the classic but no less impressive example of the abacus, used to assimilate advanced mathematical skills. Video Credit: Smart Kid Abacus Learning Pvt. Ltd. 2023

We just mentioned a couple of rather idealized, advanced-stage technologies for cybernetically coupling humans to new renderings of the problem spaces they navigate. But a new generation of cybernetics need not start at the most complex and technologically advanced level. In fact, the roots of such a technology can already be found in simple yet powerful and transformative tools such as the abacus. That simple piece of wood, beads, and strings has done a tremendous job externalizing our memory and computation, extending our mind’s ability to process numbers and solve problems. In doing so, it has demonstrated how even the most modest-looking tool may amplify cognitive abilities in a groundbreaking way, laying the basis for more sophisticated augmentations that may merge human intelligence with machine computation.

Not only did the abacus extend human mathematical abilities, but it did so by enactively—through an active, reciprocal interaction—stretching our existing mechanisms of perception and understanding, enhancing how we sense, interpret, interact with, and make sense of the world around us. The device doesn’t replace, but instead really amplifies our existing innate sensorial mechanisms. It brings our mathematical cognitive processes into a back-and-forth ensemble of informational exchanges between our inner understanding and the tool’s mechanics, through which we effectively—and, again, enactively—extend. Similarly, today’s cybernetic technologies can start as modest, focused augmentations—intuitive, accessible, and seamlessly integrated—building step by step toward more profound cognitive symbiosis.

Extending One’s Capabilities Through Integration of Tools

The principle of extending human capabilities through tools, as exemplified by the abacus, can be generalized in a computational framework, where integrating various systems may enhance their combined power to solve problems. For instance, in Biehl and Witkowski (2021), we considered how the computational capacity of a specific region in elementary cellular automata (ECA) can be expanded in terms of the number of functions it can compute.

Modern cybernetics: a person seamlessly working on their laptop, using a prosthetic arm, exemplifying an extension of their own embodiment and cognitive ability. This somewhat poetically illustrates the work of Biehl and Witkowski (2021) on how the number of functions computed by a given agent expands as the agent’s computational tools expand. Credit: Anna Shvets

Interestingly, this research led to the discovery that while coupling certain agents (regions of the ECA) with certain devices they may use (adjacent regions) typically increases the number of functions they are able to compute, some strange instances occur as well. Sometimes, in spite of “upgrading” the devices used by agents (enlarging the regions adjacent to them), the computing abilities of the agents ended up dropping instead of increasing. This is the computational analog of people upgrading to a new phone, a more powerful laptop, or a bigger car, only to end up behaviorally handicapped by that very upgrade.

This shows how this mechanism of functional extension, much like the abacus’s amplification of human cognition, extends nicely to a large range of situations, and yet should be treated with great care, as it may greatly affect the agency of its users. Now that we have set the scene, let’s dive into the specific and important tool of programming.

Programming: Towards A Symbiotic Transition

Programming. What an impactful, yet fundamental tool invented by humans. Was it even invented by humans? The process of creating and organizing instructions—which can later be followed by oneself or a distinct entity to carry out specific tasks—can be found in many instances in nature. DNA encodes genetic instructions, ant colonies use pheromones to dynamically compute and coordinate complex tasks, and even “simple” chemical reaction networks (CRNs) display many features of programming, as certain molecules act as instructions that catalyze their own production, effectively programming reactions that propagate themselves when specific conditions are met. Reservoir computing would be a recent example of this ubiquity of programming in the physical world.

Programming, a human invention, or a human discovery? This image—titled “an artist’s illustration of artificial intelligence”—was inspired by neural networks used in deep learning. Credit: Novoto Studio

Recently, many have argued that it may no longer be worthwhile to study traditional programming, as prompt engineering and natural language interactions dominate. While it’s true that the methods for working with AI are changing, the essence of programming—controlling another entity to achieve goals—remains intact. Sure, programming will look and feel different. But has everyone forgotten how far we’ve come since the advent of Ada Lovelace’s pioneering work and the earliest low-level programs? Modern software engineering typically implies working with numerous abstraction layers built atop the machine and OS levels—using high-level languages like Python, front-end frameworks such as React or Angular, and containerization tools like Docker and Kubernetes. This doesn’t have to be seen as so different from the recent changes with AI. At the end of the day, in one way or another, humans must utilize some language—or, as we’ve established, a set of languages at different levels of abstraction—to combine their own cognitive computation with the machine’s (or with other humans’, really, at the scale of society) so as to work together to produce meaningful results.

This timeless collaboration—akin to the mitochondria or chloroplast story we touched on earlier—will always involve the establishment of a two-way communication. On one hand, humans must convey goals and contexts to machines, as AI cannot achieve objectives without clear guidance. On the other, machines must send back outputs, whether as solutions, insights, or ongoing dialogue, refining and improving the achievement of goals. This bidirectional exchange is the foundation of a successful human-machine partnership. Far from signaling the end of programming, it signals its natural evolution into a symbiotic relationship where human cognition and machine computation amplify one another.

The seamless integration of prosthetics in a human’s everyday life. Credit: Cottonbro Studio

Bidirectional, Responsible Human-Machine Interaction

What this piece is generally gesturing at is that by integrating cybernetic technologies to seamlessly couple human cognition with increasingly advanced, generative computational tools, we can build a future where human intelligence is not replaced but expanded. It will be key for these designs to enable enactive, interactive, high-bandwidth communication that fosters mutual understanding and goal alignment. On this path, the future of programming isn’t obsolescence—it is transformation into a symbiotic relationship. By embracing collaborative, iterative human-machine interactions, we can amplify human intelligence, creativity, and problem-solving, unlocking possibilities far beyond what either can achieve alone.

Human-Machine Interaction: Who Is Accountable? This image represents ethics research on understanding human involvement in data labeling. Credits: Ariel Lu

Human-machine interaction is inherently bidirectional: machines provide feedback, solutions, and insights that we interpret and integrate, while humans contribute context, objectives, and ethical considerations that guide AI behavior. This continuous dialogue enhances problem-solving by combining human creativity and contextual understanding with machine efficiency and computational power. As we navigate this evolving technological landscape, focusing on responsible AI integration will be critical. Our collaboration with machines should aim to augment human capabilities while respecting societal values, goals, and, importantly, the well-being of all life.

As Norbert Wiener put it:

“The best material model for a cat is another, or preferably the same cat. In other words, should a material model thoroughly realize its purpose, the original situation could be grasped in its entirety and a model would be unnecessary. Lewis Carroll fully expressed this notion in an episode in Sylvie and Bruno, when he showed that the only completely satisfactory map to scale of a given country was that country itself” (Wiener, 1948).

“The best material model for a cat is another, or preferably the same cat.” – Norbert Wiener. Image Credit: Antonio Lapa/ CC0 1.0

True innovation lies in complementing the complexities of life rather than merely replicating them. In order to augment human intelligence, we ought to design our new technologies—especially AI—so as to build on and harmonize with the rich tapestry of human knowledge, the richness of experience, and the physicality of natural systems, rather than attempting to replace them with superficial imitations. Our creations are only as valuable as their ability to reflect and amplify the intricate nature of life and the world itself, enabling us to pursue deeper understanding, open-ended creativity, and meaningful purpose.

The ultimate goal is clear: to be able to design a productive connective tissue for human ingenuity that would seamlessly sync up with the transformative power of machines, and a diverse set of substrates beyond today’s computing trendiest purviews. By embracing and amplifying what we already have, we may unlock new possibilities and redefine adjacent possibles that transcend the boundaries of human imagination. This approach is deeply rooted in a perspective of collaboration and mutual growth, working toward a world in which technology remains a force for empowering humanity, not replacing it.

Allow me to digress into a final thought. Earlier today, a friend shared an insightful thought about how Miyazaki’s films, such as Kiki’s Delivery Service, seem to exist outside typical Western narrative patterns – a thought itself prompted by them watching this video. While the beloved Disney films of my childhood may have offered some great morals and lessons here and there, they definitely fell short of showing what a kind world would look like. Reflecting on this, I considered how exposure to Ghibli’s compassionate, empathetic storytelling might have profoundly shaped my own learning journey as a child – although it did get to me eventually. The alternative worldview these movies offer gently nudges us toward being more charitable. By encouraging us to view all beings as inherently kind, well-intentioned, and worthy of empathy and care, perhaps this compassionate stance is exactly what we need as we strive toward more meaningful, symbiotic collaborations with technology, and certainly with other humans too.


References:

The Innovation Algorithm: DeepSeek, Japan, and How Constraints Drive AI Breakthroughs

In technology, less can truly be more. Scarcity doesn’t strangle progress—it refines it. DeepSeek, cut off from high-end hardware, and Japan, facing a demographic reckoning, are proving that limitations don’t merely shape innovation—they accelerate it. From evolutionary biology to AI, history shows that the most profound breakthroughs don’t originate from excess, but from the pressure to rethink, reconfigure, and push beyond imposed limitations.

When a system—biological, economic, or digital—encounters hard limits, it is forced to adapt, sometimes in a radical way. This can lead to major breakthroughs, ones that would never arise in conditions of structural and resource abundance.

In such situations of constraint, innovation can be observed to follow a pattern—not of mere survival, but of reinvention. What determines whether a bottleneck leads to stagnation or transformation is not the limitation itself, but how it is approached. By embracing constraints as creative fuel rather than obstacles, societies can design a path where necessity doesn’t just drive invention—it defines the next frontier of intelligence.

Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Feb 11, 2024.

DeepSeek: What Ecological Substrate for a Paradigm Shift?

Recently, DeepSeek, the Chinese AI company the AI world has been watching, achieved a considerable technological feat, albeit one often misrepresented in the popular media. On January 20, 2025, DeepSeek released its R1 large language model (LLM), developed at a fraction of the cost incurred by other vendors. The company’s engineers successfully leveraged reinforcement learning with rule-based rewards, model distillation for efficiency, and emergent behavior networks, among other techniques, enabling advanced reasoning despite compute constraints.

The company first published R1’s big brother, V3, in December 2024: a Mixture-of-Experts (MoE) model that allowed for reduced computing costs without compromising on performance. R1 then focused on reasoning, and it surpassed ChatGPT to become the top free app on the US iOS App Store just about a week after its launch. This is certainly remarkable for a model trained using only about 2,000 GPUs, roughly one order of magnitude fewer than current leading AI companies use. The training process was completed in approximately 55 days at a cost of about $6M, 10% or so of the expenditure by US tech giants like Google or Meta for comparable technologies. To many, DeepSeek’s resource-efficient approach challenges the global dominance of American AI models, leading to significant market consequences.
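As a quick sanity check on how those numbers hang together, here is a back-of-the-envelope calculation using the figures quoted above and an assumed rental price of about $2 per GPU-hour; the rate is illustrative, not an official DeepSeek figure.

```python
# Back-of-the-envelope check of the figures quoted above. The ~$2 per GPU-hour
# rental rate is an illustrative assumption, not an official DeepSeek number.
gpus, days, usd_per_gpu_hour = 2_000, 55, 2.0
gpu_hours = gpus * days * 24            # = 2,640,000 GPU-hours
cost_musd = gpu_hours * usd_per_gpu_hour / 1e6
print(f"{gpu_hours:,} GPU-hours  ->  ~${cost_musd:.1f}M")   # ~$5.3M, the same order as the ~$6M cited
```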

DeepSeek R1 vs. other LLM Architectures (Left) and Training Processes (Right). Image Credit: Analytics Vidhya.

Is a Bottleneck Necessary?

DeepSeek’s impressive achievement finds its context at the center of a technological bottleneck. Operating under severe hardware constraints—cut off from TSMC’s advanced semiconductor fabrication and facing increasing geopolitical restrictions—Chinese AI development companies such as DeepSeek have been forced to develop their models in a highly constrained compute environment. Yet, rather than stalling progress, such limitations may in fact accelerate innovation, compelling researchers to rethink architectures, optimize efficiency, and push the boundaries of what is possible with limited resources.

While the large amounts of resources made available by large capital investments—especially in the US and the Western world—enable rapid iteration and the implementation of new tools that exploit scaling laws in LLMs, one must admit such efforts mostly reinforce existing paradigms rather than forcing breakthroughs. Historically, constraints have acted as catalysts, from biological evolution—where environmental pressures drive adaptation—to technological progress, where necessity compels efficiency and new architectures. DeepSeek’s success suggests that in AI, scarcity can be a driver, not a limitation, shaping models that are not just powerful, but fundamentally more resource-efficient, modular, and adaptive. However, whether bottlenecks are essential or merely accelerators remains an open question—would these same innovations have emerged without constraint, or does limitation itself define the next frontier of intelligence?

Pushing Beyond Hardware (Or How Higher-Level Emergent Computation Can Free Systems from Lower-Level Limits)

It’s far from being a secret: computation isn’t only about hardware. Instead, it’s about the emergence of higher-order patterns that exist—to a large extent—independently of lower-level mechanics. This may come across as rather obvious in everyday (computer science) life: network engineers troubleshoot at the protocol layer without needing to parse machine code; the relevant dynamics are fully contained at that abstraction. Similarly, for deep learning, a model’s architecture and loss landscape really mostly determine its behavior, not individual floating-point operations on a GPU. Nature operates no differently: biological cells function as coherent systems, executing processes that cannot be fully understood by analyzing individual molecules that form them.

These examples of separation of scale show how complexity scientists identify so-called emergent macrolevel patterns, which can be extracted by coarse graining from a more detailed microlevel description of a system’s dynamics. Under this framing, a bottleneck can be identified at the lower layer—whether in raw compute, molecular interactions, or signal transmission constraints—but is often observed to dissolve at the higher level, where emergent structures optimize flow, decision-making, and efficiency. Computation—but also intelligence, and arguably causality, though I should leave that discussion for another piece—exists beyond the hardware that runs it.

So bottlenecks in hardware can be overcome by clever software abstraction. If we were to get ahead of ourselves—but this is indeed where we’re headed—this is precisely how software ends up outperforming hardware alone. While hardware provides raw power, well-designed software structures it into emergent computation that is intelligent, efficient, and, perhaps counterintuitively, reduces complexity. A well-crafted heuristic vastly outpaces brute-force search. A transformer model’s attention mechanisms and tokenization matter more than the number of GPUs used to train it. And, in that same vein, DeepSeek, with fewer GPUs and lower computational resources, comes to rival state-of-the-art models built—to oversimplify, with my apologies—out of mere scaling, by incorporating a few seemingly simple yet truly innovative tricks. If so, let’s pause to appreciate the beautiful demonstration that intelligence is not about sheer compute—it’s about how computation is structured and optimized to produce meaningful results.
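To illustrate the heuristic-versus-brute-force claim with a toy example of my own (not from the post), the sketch below finds a shortest path across an open 50x50 grid twice: once with uninformed breadth-first search and once with A* guided by a Manhattan-distance heuristic, counting how many nodes each expands.

```python
from collections import deque
from heapq import heappush, heappop

# Both searches return an optimal path length; the heuristic one does far less work.
N = 50
START, GOAL = (0, 0), (N - 1, N - 1)

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < N and 0 <= y + dy < N:
            yield x + dx, y + dy

def bfs_expansions():
    """Uninformed breadth-first search; returns the number of nodes expanded."""
    frontier, seen, expanded = deque([START]), {START}, 0
    while frontier:
        cell = frontier.popleft()
        expanded += 1
        if cell == GOAL:
            return expanded
        for nxt in neighbors(cell):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def astar_expansions():
    """A* with an admissible Manhattan heuristic; returns the number of nodes expanded."""
    h = lambda c: abs(c[0] - GOAL[0]) + abs(c[1] - GOAL[1])
    frontier, best, expanded = [(h(START), 0, START)], {START: 0}, 0
    while frontier:
        _, neg_g, cell = heappop(frontier)
        g, expanded = -neg_g, expanded + 1
        if cell == GOAL:
            return expanded
        for nxt in neighbors(cell):
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heappush(frontier, (g + 1 + h(nxt), -(g + 1), nxt))  # break f-ties toward deeper nodes
    return expanded

print(f"BFS expanded {bfs_expansions()} nodes; A* expanded {astar_expansions()} nodes")
```

On this open grid, BFS examines essentially every cell before reaching the goal, while the informed search expands only on the order of one path’s worth of nodes: the structure of the search, not the raw compute, does the work.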

Divide and Conquer with Compute

During my postdoctoral years at Tokyo Tech (now merged and renamed Institute of Science Tokyo), one of my colleagues brought up an interesting conundrum: if you had a vast amount of compute to use, would you use it as a single unified system, or rather divide it into smaller, specialized units to tackle a certain set of problems? What might at first glance seem like an armchair philosophical question turns out to touch on fundamental principles of computation and the emergent organization of complex systems. Of course, the answer depends on the architecture of the computational substrate, as well as the specific problem set. The challenge at play is one of optimization under uncertainty: how to best allocate computational power when navigating an unknown problem space.

The question maps naturally onto a set of scientific domains where distributed computation, hierarchical layers of cognitive systems, and major transitions in evolution intersect. In some cases, centralized computation maximizes power and coherence, leading to brute-force solutions or global optimization. In others, breaking compute into autonomous, interacting subsystems enables diverse exploration, parallel search, and modular adaptation—similar to how biological intelligence, economies, and even neural architectures function. Which strategy proves superior depends on the nature of the problem landscape: smooth and well-defined spaces favor monolithic compute, while rugged, high-dimensional, and open-ended domains can benefit from distributed, loosely coupled intelligence. The balance between specialization and generalization, like that between coordination and autonomy, selective tension and relaxation, or goal-drivenness and exploration, is one of the deepest open questions, with helpful theories on both the artificial and natural sides of complex systems science.

Scaling Computation: Centralized vs. Distributed

Computationally, the problem can be framed simply within computational complexity theory, parallel computation, and search in high-dimensional spaces. Given a computational resource C, should one allocate it as a single monolithic system, or divide it into n independent modules, each operating with C/n capacity? A unified, centralized system would run a single instance of an exhaustive search algorithm, optimal for well-structured problems where brute-force or hierarchical methods are viable (e.g., dynamic programming, alpha-beta pruning).
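A small, admittedly cartoonish experiment makes the allocation choice tangible (the objective function and all numbers below are illustrative assumptions of mine): spend a fixed evaluation budget C on one long local search, or split it into n independent searches of C/n each, on a rugged landscape riddled with local minima.

```python
# Hypothetical sketch (illustrative parameters): spending a fixed evaluation
# budget C as one long local search versus n independent searches of C/n each,
# on a rugged 1-D landscape with many local minima.
import math
import random

def rugged(x: float) -> float:
    """A multimodal objective (lower is better): quadratic bowl plus ripples."""
    return x * x + 10.0 * math.sin(5.0 * x) ** 2

def local_search(budget: int, rng: random.Random) -> float:
    """Simple stochastic descent from a random start; returns best value found."""
    x = rng.uniform(-10.0, 10.0)
    best = rugged(x)
    for _ in range(budget):
        candidate = x + rng.gauss(0.0, 0.1)
        value = rugged(candidate)
        if value < best:
            x, best = candidate, value
    return best

if __name__ == "__main__":
    rng = random.Random(42)
    C, n = 10_000, 20
    monolithic = local_search(C, rng)                            # one search, full budget
    divided = min(local_search(C // n, rng) for _ in range(n))   # n searches, C/n each
    print(f"monolithic best: {monolithic:.3f}")
    print(f"divided best:    {divided:.3f}")
```

On smooth, well-structured landscapes the single searcher tends to win; on rugged ones, the independent restarts usually come out ahead, which is exactly the dependence on problem structure discussed above.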

However, as the problem space grows exponentially, computational bottlenecks from sequential constraints prevent linear scaling (Amdahl, 1967), and the curse of dimensionality causes diminishing returns because relevant solutions become sparse. Distributed models, of course, introduce parallelism, exploration-exploitation trade-offs, and admittedly other emergent effects as well. Dividing C into n units enables decentralized problem solving, similar to multi-agent systems, where independent search processes—akin to Monte Carlo Tree Search (MCTS) or evolutionary strategies—enhance efficiency by maintaining diverse, adaptive search trajectories, particularly in unstructured problem spaces such as the notoriously complex game of Go (Silver et al., 2016).
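For reference, Amdahl’s bound itself fits in one line of code; the snippet below (with illustrative parameters) shows how quickly returns diminish when even a small fraction of the work remains sequential.

```python
# Amdahl's Law, as referenced above: with a fraction p of the work parallelizable,
# the speedup from n workers is bounded by 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for n in (1, 8, 64, 1024):
        print(n, round(amdahl_speedup(0.95, n), 1))
    # Even with 95% of the work parallel, 1024 workers give only about 19.6x,
    # because the 5% sequential portion dominates: hence diminishing returns.
```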

Emergent complexity from bottom layers of messy concurrent dynamics, to apparent problem solving at the visible top. Image Credit: Generated by Olaf Witkowski using DALL-E version 3, Feb 11, 2024.

If the solution lies in a non-convex, high-dimensional problem space, decentralized approaches—similar to Swarm Intelligence models—tend to converge faster, provided inter-agent communication remains efficient. When overhead is minimal, distributed computation can achieve near-linear speedup, making it significantly more effective for solving complex, open-ended problems. In deep learning, Mixture of Experts (MoE) architectures exemplify this principle: rather than a single monolithic model, specialized subnetworks activate selectively, optimizing compute usage while improving generalization. Similarly, in distributed AI (e.g., federated learning, neuromorphic systems), intelligent partitioning enhances adaptability while mitigating computational inefficiencies. Thus, the core trade-off is between global coherence and parallelized adaptability—with the optimal strategy dictated by the structure of the problem space itself.
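As a rough sketch of the MoE principle (a toy of my own, not any particular production architecture), a router activates only the top-k expert subnetworks per input, so most parameters stay idle for any given example:

```python
# Minimal sketch of the Mixture-of-Experts idea mentioned above (illustrative,
# not any specific production system): a router activates only the top-k
# expert subnetworks per input, so compute is spent selectively.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class TinyMoE:
    def __init__(self, dim: int, n_experts: int = 8, top_k: int = 2, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.router = rng.normal(size=(dim, n_experts))          # routing weights
        self.experts = rng.normal(size=(n_experts, dim, dim))    # one linear map per expert
        self.top_k = top_k

    def __call__(self, x: np.ndarray) -> np.ndarray:
        """x: (batch, dim). Each input is processed by its top-k experts only."""
        gates = softmax(x @ self.router)                         # (batch, n_experts)
        out = np.zeros_like(x)
        for i, (xi, gi) in enumerate(zip(x, gates)):
            for e in np.argsort(gi)[-self.top_k:]:               # indices of top-k experts
                out[i] += gi[e] * (xi @ self.experts[e])
        return out

if __name__ == "__main__":
    moe = TinyMoE(dim=16)
    y = moe(np.random.default_rng(1).normal(size=(4, 16)))
    print(y.shape)  # (4, 16); only 2 of 8 experts ran per input
```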

Overcoming Hardware Shortcomings

Back to DeepSeek and similar companies, which increasingly find themselves navigating severe hardware shortages. Without access to TSMC’s cutting-edge semiconductor fabrication and facing increasing geopolitical restrictions, DeepSeek operates within a highly constrained compute environment. Yet, rather than stalling progress, such bottlenecks have historically accelerated innovation, compelling researchers to develop alternative approaches that might ultimately redefine the field. Innovation emerges from constraints.

This pattern is evident across history. The evolution of language likely arose as an adaptation to the increasing complexity of human societies, allowing for more efficient information encoding and transmission. The emergence of oxygenic photosynthesis provided a solution to energy limitations, reshaping Earth’s biosphere and enabling multicellular life. The Manhattan Project, working under extreme time and material constraints, produced groundbreaking advances in nuclear physics. Similarly, postwar Japan, despite scarce resources, became a global leader in consumer electronics, precision manufacturing, and gaming, with companies like Sony, Nintendo, and Toyota pioneering entire industries through a culture of innovation under limitation.

Japan’s Unique Approach to Innovation

I moved to Japan about two decades ago to pursue science. Having started my career as an engineer and an entrepreneur, I was drawn to Japan’s distinctive approach to life and technology—deeply rooted in balanced, principled play (in the game of go: honte / 本手 points to the concept of solid play, ensuring the balance between influence and territory), craftsmanship (takumi / 匠, refined skill and mastery in all Japanese arts), and harmonious coexistence (kyōsei / 共生, symbiosis as it is found between nature, humans, and technology). Unlike in many Western narratives, where automation and AI are often framed as competitors or disruptors of society, Japan views them as collaborators, seamlessly integrating them with humans. This openness is perhaps shaped by animistic, Shinto, Confucian and Buddhist traditions, which emphasize harmony between human and non-human agents, whether biological or artificial.

Japan’s technological trajectory has also been shaped by its relative isolation. As an island nation, it has long pursued an independent, highly specialized path, leading to breakthroughs in semiconductors, microelectronics, and precision manufacturing—industries where it remains a critical global leader in spite of tough competition. The country’s deep investment in exploratory science, prioritizing long-term innovation over short-term gains, has cultivated a culture in which technology is developed with foresight and long-term reflection—albeit at times in excess—rather than mere commercial viability.

In recent years, Japan has initiated efforts to revitalize its semiconductor industry. Japan’s Integrated Innovation Strategy emphasizes the importance of achieving economic growth and solving social issues through advanced technologies, reflecting the nation’s dedication to long-term innovation and societal benefit (Government of Japan, 2022). The establishment of Rapidus Corporation in 2022 aims to develop a system for mass-producing next-generation 2-nanometer chips in collaboration with IBM, underscoring Japan’s commitment to maintaining its leadership in advanced technology sectors (Government of Japan, 2024). These initiatives highlight Japan’s ongoing commitment to leveraging its unique approach to technology, fostering advancements that align with both economic objectives and societal needs.

Illustration from the report “Integrated Innovation Strategy 2022: Making Great Strides Toward Society 5.0” – The three pillars of Japan’s strategy are innovation in science and technology, societal transformation through digitalization, and sustainable growth through green innovation (Government of Japan, 2022).

Turning Socio-Economic Bottlenecks into Breakthroughs

Today, like China and Korea, Japan faces one of its most defining challenges: a rapidly aging population and a shrinking workforce (Schneider et al., 2018; Morikawa, 2024). While many view this as an economic crisis, Japan is transforming constraint into opportunity, driving rapid advancements in automation, AI-assisted caregiving, and industrial robotics. The imperative to sustain productivity without a growing labor force has made Japan a pioneer in human-machine collaboration, often pushing the boundaries of AI-driven innovation faster than many other nations.

Beyond automation, Japan is also taking the lead in AI safety. In February 2024, the government launched Japan’s AI Safety Institute (J-AISI) to develop rigorous evaluation methods for AI risks and foster global cooperation. Japan is a key participant in the International Network of AI Safety Institutes, collaborating with the US, UK, Europe, and others to shape global AI governance standards. These initiatives reflect a broader philosophy of proactive engagement: Japan signals that it does not fear AI’s risks, nor does it blindly embrace automation—it ensures that AI remains both innovative and secure.

At the same time, Japan must navigate the growing risks of open-source AI technologies. While open models have been instrumental in democratizing access and accelerating research, they also introduce new security vulnerabilities. Voice and video generation AI has already raised concerns over deepfake-driven misinformation, identity fraud, and digital impersonation, while the rise of LLM-based operating systems presents new systemic risks, creating potential attack surfaces at both infrastructural and individual levels. As AI becomes increasingly embedded in critical decision-making, securing these systems is no longer optional—it is imperative.

Japan’s history of constraint-driven innovation, its mastery of precision engineering, and its forward-thinking approach to AI safety place it in a unique position to lead the next era of secure, advanced AI development. Its current trajectory—shaped by demographic shifts, computational limitations, and a steadfast commitment to long-term technological vision—mirrors the very conditions that have historically driven some of the world’s most transformative breakthroughs. Once again, Japan is not merely adapting to the future—it is defining it.

Bottlenecks have always been catalysts for innovation—whether in evolution, where constraints drive adaptation, or in technology, where scarcity forces breakthroughs in efficiency and design. True progress emerges not from excess, but from necessity. Japan, facing a shrinking workforce, compute limitations, and an AI landscape dominated by scale, must innovate differently—maximizing intelligence with minimal resources, integrating automation seamlessly, and leading in AI safety. It is not resisting constraints; it is advancing through them. And while Japan may be the first to navigate these pressures at scale, it will not be the last. The solutions it pioneers today—born of limitation, not abundant wealth—may soon define the next era of global technological progress. In this, we can see the outlines of an innovation algorithm—one that harnesses cultural and intellectual context to transform constraints into breakthroughs.

References

Amdahl, G. M. (1967). Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities. AFIPS Conference Proceedings. https://doi.org/10.1145/1465482.1465560

Government of Japan. (2022). Integrated innovation strategy: Advancing economic growth and solving social issues through technology. https://www.japan.go.jp/kizuna/2022/06/integrated_innovation_strategy.html

Government of Japan. (2024). Technology for semiconductors: Japan’s next-generation innovation initiative. https://www.japan.go.jp/kizuna/2024/03/technology_for_semiconductors.html

Morikawa, M. (2024). Use of artificial intelligence and productivity: Evidence from firm and worker surveys (RIETI Discussion Paper 24-E-074). Research Institute of Economy, Trade and Industry. https://www.rieti.go.jp/en/columns/v01_0218.html

Ng, M., & Haridas, G. (2024). The economic impact of generative AI: The future of work in Japan. Access Partnership. https://accesspartnership.com/the-economic-impact-of-generative-ai-the-future-of-work-in-japan/

Schneider, P., Chen, W., & Pöschl, J. (2018). Can artificial intelligence and robots help Japan’s shrinking labor force? Finance & Development, 55(2). International Monetary Fund. https://www.imf.org/en/Publications/fandd/issues/2018/06/japan-labor-force-artificial-intelligence-and-robots-schneider

Silver, D., Huang, A., Maddison, C. J., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484-489. https://doi.org/10.1038/nature16961

DOI: https://doi.org/10.54854/ow2025.01

Limited AGI: The Hidden Constraints of Intelligence at Scale

In his recent blog post The Intelligence Age, Sam Altman expressed confidence in the power of neural networks and their potential to achieve artificial general intelligence (AGI—some strong form of AI reaching a median human level of intelligence and efficiency for general tasks) given enough compute. He sums it up in 15 words: “deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.” With sufficient computational power and resources, he claims, humanity should reach superintelligence within “a few thousand days (!)”

AI will, according to Altman, keep improving with scale, and this progress could lead to remarkable advances for human life, including AI assistants performing increasingly complex tasks, improving healthcare, and accelerating scientific discoveries. Of course, achieving AGI will require us to address major challenges along the way, particularly around energy resources and their management, to avoid inequality and conflict over AI’s use. Once these challenges are overcome, one would hope to see a future where technology unlocks limitless possibilities—fixing climate change, colonizing space, and achieving scientific breakthroughs that are unimaginable today. While this sounds compelling, one must keep in mind that the concept of AGI and its application remains vague and problematic. Intelligence, much like compute, is inherently diverse and comes with a set of constraints, biases, and hidden costs.

Historically, breakthroughs in computing and AI have been tied to specific tasks, even when they seemed more general. For example, even something as powerful as a Turing machine, capable of computing anything theoretically, still has its practical limitations. Different physical substrates, like GPUs or specialized chips, allow for faster or more efficient computation in specific tasks, such as neural networks or large language models (LLMs). These substrates demonstrate that each form of AI is bound by its physical architecture, making some tasks easier or faster to compute.

Beyond computing, this concept can be better understood by extending it to biological systems. For instance, the human brain is highly specialized for certain types of processing, like pattern recognition and language comprehension, but it is not well suited for tasks that require high-speed arithmetic or complex simulations, in which computers excel. Conversely, biological neurons, despite operating much slower than their digital counterparts, achieve remarkable feats of energy efficiency and adaptability through parallel processing and evolutionary optimization. Quantum computers make for an even stronger example: while they promise enormous speedups for specific tasks like factoring large numbers or simulating molecular interactions, the idea that they are universally faster than classical computers is simply false. They will also require specialized algorithms to fully leverage their potential, which may take another few decades to develop.

These examples highlight how both technological and biological forms of intelligence are fundamentally shaped by their physical substrates, each excelling in certain areas while remaining constrained in others. Whether it’s a neural network trained on GPUs or a biological brain evolved over millions of years, the underlying architecture plays a key role in determining which tasks can be efficiently solved and which remain computationally expensive or intractable.

As we look toward the potential realization of an AGI, whatever this may formally mean—gesturing vaguely at some virtual omniscient robot overlord doing my taxes—it’s important to recognize that it will likely still be achieved in a “narrow” sense—constrained by these computational limits. Additionally, AGI, even when realized, will not represent the most efficient or intelligent form of computation; it is expected to reach only a median human level of efficiency and intelligence. While it might display general properties, it will always operate within the bounds of the physical and computational layers imposed on it. Each layer, as in the OSI picture of networking, will add further constraints, limiting the scope of the AI’s capabilities. Ultimately, the quest for AGI is not about breaking free from these constraints but finding the path of least resistance to the most efficient form of intelligent computation within these limits.

While I see Altman’s optimism about scaling deep learning as valid, one should realize that any implementation of AGI will still be shaped by physical and computational constraints. The future of AI will likely reflect these limits, functioning in a highly efficient but bounded framework. There is more to it. As Stanford computer scientist Fei-Fei Li advocates, embodiment, “Large World Models,” and “Spatial Intelligence” are probably crucial for the next steps in human technology and may remain unresolved by a soft AGI as envisioned by Altman. Perhaps the field of artificial life, too, may offer tools for a more balanced and diverse approach to AGI, by incorporating the critical concepts of open-endedness, polycomputing, hybrid and unconventional substrates, precariousness, mutually beneficial interactions between many organisms and their environments, as well as the self-sustaining sets of processes defining life itself. This holistic view could enrich our understanding of intelligence, extending beyond the purely computational and human-based to include the richness of embodied and emergent intelligence as it could be.

References

The Intelligence Age, Sam Altman’s blog post. https://ia.samaltman.com/

World Labs, Fei-Fei Li’s 3D AI startup. https://www.worldlabs.ai/

International Society for Artificial Life. https://alife.org/

DOI: https://doi.org/10.54854/ow2024.02

Artificial Life

At a point in time when technology and biology appear to converge, can we decode the mysteries of life grounded in either realm, through the lens of science and philosophy? Bridging between natural and artificial seems to challenge conventional wisdom and propel us into a wild landscape of new possibilities. Yet, the inquiry into the nature of life, regardless of its medium and the specific laws of the substrate from which it emerges, may give us an opportunity to redefine the contours of our own identity as human beings, transcending the physics, chemistry, biology, culture, and technology that are made by and constitute us.

A depiction of artificial cybernetic entity encompassing diverse layers and forms of life. Image Credit: Generated by Olaf Witkowski using DALL-E version 2, August 21, 2024.

Artificial Life, commonly referred to as ALife, is an interdisciplinary field that studies the nature and principles of living systems [1]. Like its elder sibling, Artificial Intelligence (AI), ALife’s ambition is to construct intelligent systems from the ground up. However, its scope is broader: it concentrates not only on mimicking human intelligence, but aims at modeling and understanding the whole realm of living systems. Parallel to biology’s focus on modeling known living systems, it ventures further, exploring the concept of “Life as It Could Be,” which encompasses undiscovered or not-yet-existing forms of life, on Earth or elsewhere. As such, it truly pushes the boundaries of our current scientific, technological, and philosophical understanding of the nature of the living state.

The study of artificial life concentrates on three main questions: (a) the emergence of life on Earth or in any other system, (b) its open-ended evolution and seemingly unbounded increase in complexity through time, and (c) its ability to become aware of its own existence and of the physical laws of the universe in which it is embedded, thus closing the loop. In brief: how life emerges, grows, and becomes aware. One could also subtitle these parts as the origin, intelligence, and consciousness of the living state.

The first point, about the emergence of life, may be thought of as follows. If one were to fill a cup with inanimate matter – perhaps some water and other chemical elements – and leave it untouched for an extended period of time, it might end up swarming with highly complex life. This seemingly mundane observation serves as a very concrete metaphor for the vast and complex range of potentialities that reside in possible timelines of the physical world. The contents of the cup may eventually foster many forms of life, from minimal cells to the most complex, highly cognitive assemblages. Artificial life thus explores the emergence and properties of complex living systems from basic, non-living substrates. This analogy points to ALife’s first foundational question: How can life arise from the non-living? By delving into the mechanisms that enable the spontaneous emergence of life-like properties and behaviors, ALife researchers strive to understand self-organization (the appearance of order from local interactions), autopoiesis (the capacity of an entity to produce itself), robustness (resilience to change), adaptation (the ability to adjust in response to environmental change), and morphogenesis (developing and shifting shape), all key processes that appear to animate the inanimate.
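As a small nod to that metaphor of the cup, Conway’s Game of Life remains the classic minimal demonstration of such self-organization: a few local rules on a grid of dead and alive cells suffice for persistent, life-like structures to emerge from a random soup. The sketch below is a standard implementation, offered purely as an illustration.

```python
# An illustrative nod to the "cup of inanimate matter" metaphor: Conway's Game
# of Life, where a few simple local rules on a grid of dead/alive cells are
# enough for self-organizing, persistent, life-like patterns to emerge.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One update of the Game of Life on a toroidal grid."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    birth = (grid == 0) & (neighbors == 3)
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    return (birth | survive).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = (rng.random((32, 32)) < 0.3).astype(int)   # random "primordial soup"
    for _ in range(100):
        grid = step(grid)
    print("live cells after 100 steps:", int(grid.sum()))
```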

This, in turn, paves the way for our understanding of the open-ended evolution of living systems, which tend to acquire increasing amounts of complexity through time. This begs the second foundational question: How does life indefinitely invent novel solutions to its own survival and striving? Or, in its more practical form: How can we design an algorithm that captures the essence of open-ended evolution, enabling the continuous, autonomous generation of novel and increasingly complex forms of life and intelligence in any environment? Unlocking the mechanism behind this open-endedness is crucial because it embodies the ultimate creative process, setting us on the path of infinite innovation [2]. It represents the potential to harness the generative power of nature itself, enabling the discovery and creation of unforeseen solutions, technologies, and forms of intelligence that could address some of humanity’s most enduring challenges. At its core, it also connects with the very ability of living systems to learn, which brings us to our third and final point.
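Before turning to that third point, here is a minimal sketch of one ingredient often explored in answer to the practical question above: selecting for behavioral novelty rather than a fixed objective, in the spirit of the open-endedness work cited here [2]. The genome-to-behavior mapping and all parameters below are purely illustrative.

```python
# A minimal, illustrative sketch of one ingredient often explored for open-ended
# evolution: selecting for behavioral novelty rather than a fixed objective,
# so the population keeps inventing new behaviors instead of converging.
import random

def behavior(genome):
    """Map a genome (list of floats) to a simple 2-D 'behavior descriptor'."""
    return (sum(genome[::2]), sum(genome[1::2]))

def novelty(b, archive, k: int = 5) -> float:
    """Average distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5 for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def evolve(generations: int = 50, pop_size: int = 30, genome_len: int = 8, seed: int = 0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        scored = sorted(population, key=lambda g: novelty(behavior(g), archive), reverse=True)
        archive.extend(behavior(g) for g in scored[:3])           # remember the most novel
        parents = scored[: pop_size // 2]
        population = [
            [x + rng.gauss(0, 0.1) for x in rng.choice(parents)]  # mutate a random parent
            for _ in range(pop_size)
        ]
    return archive

if __name__ == "__main__":
    print("behaviors archived:", len(evolve()))
```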

Not only do some of these systems learn, but they also appear to acquire – assuming they didn’t possess this faculty at some earlier stage, or at least not to the same extent – a knack for rich, high-definition, vivid sensing, perception, experience, understanding, and goal-directed interaction with their own reality. How do these increasingly complex pieces and patterns forming on the Universe’s chessboard become aware of their own and other beings’ existence? The third foundational question of ALife delves into the consciousness and self-awareness of living systems: How do complex living systems become aware of their existence and the fundamental laws of the universe they inhabit? This question explores the transition from mere biological complexity to the emergence of cognitive processes that allow life to reflect upon itself and its surroundings. ALife investigates the principles underlying this awareness feature of life, and aims to replicate such phenomena within artificial systems. This inquiry not only broadens our understanding of consciousness but also challenges us to recreate systems that are not only alive and intelligent, but are also aware of their own aliveness and intelligence, closing the loop of life’s emergence, evolution, and self-awareness.

All three questions are investigated through Feynman’s synthetic, engineering angle: What I cannot create, I do not understand. By aiming not only to explain, but also to effectively create and recreate life-like characteristics in computational, chemical, mechanical, or other physical systems, the research endeavor instantiates itself as a universal synthetic biology spanning philosophy, science, and technology. This includes the development of software simulations that exhibit behaviors associated with life—such as reproduction, metabolism, adaptation, and evolution—and the creation of robotic or chemical systems that mimic life’s physical and chemical processes. Through these components, ALife seeks to understand the essential properties that define life by creating systems that exhibit these properties in controlled settings, providing insights into the mechanisms underlying biological complexity and the potential for life in environments vastly different from those encountered so far on Earth, and also exploring the conditions of possibility of other creatures combining known and unknown patterns of life in any substrate. This, in turn, should allow us to better understand the uniqueness and awesome nature of life, human or otherwise, on the map of all possible life, and perhaps will also inform our ethics for all beings [3].

References

[1] Bedau, M. A., & Cleland, C. E. (Eds.). (2018). The Nature of Life. Cambridge University Press.

[2] Stanley, K. O. (2019). Why open-endedness matters. Artificial life, 25(3), 232-235.

[3] Witkowski, O., and Schwitzgebel, E. (2022). Ethics of Artificial Life: The Moral Status of Life as It Could Be. ALIFE 2022: The 2022 Conference on Artificial Life. MIT Press.

Further Reading

Scharf, C. et al. (2015). A strategy for origins of life research.

Baltieri, M., Iizuka, H., Witkowski, O., Sinapayen, L., & Suzuki, K. (2023). Hybrid Life: Integrating biological, artificial, and cognitive systems. Wiley Interdisciplinary Reviews: Cognitive Science, 14(6), e1662.

Witkowski, O., Doctor, T., Solomonova, E., Duane, B., & Levin, M. (2023). Toward an ethics of autopoietic technology: Stress, care, and intelligence. Biosystems, 231, 104964.

Dorin, A., & Stepney, S. (2024). What Is Artificial Life Today, and Where Should It Go? Artificial Life, 30(1), 1-15.

Related Links

Cross Labs: https://www.crosslabs.org/

Center for the Study of Apparent Selves: https://apparentselves.org/

ALife Japan: https://www.alife-japan.org/

International Society for Artificial life: Artificial Life https://alife.org/

This piece is cross-posted here, as a part of a compendium of short essays edited by the Center for Study of Apparent Selves after a workshop at Tufts University in Boston in 2023.

DOI: https://doi.org/10.54854/ow2024.01