Life After Programming: Embracing Human-Machine Symbiosis in the Age of AI

As AI continues to evolve, conversations have started questioning the future of traditional programming and computer science education. The rise of prompt engineering, the art of crafting inputs that steer AI models toward specific outputs, has led many to believe that mastering this new skill could replace the need for deep computational expertise. While this perspective captures a real, ongoing shift in how humans interact with technology, it overlooks the essential role of human intelligence in guiding and collaborating with AI. It is a timely topic, and one worthy of discussion, which we'll be exploring at this year's Cross Labs Spring Workshop on New Human Interfaces.

Can we forget about programming? As AI reshapes our interaction with technology, the future is unlikely to erase programming; rather, it will redefine it as another language between humans and machines. Image Credit: Ron Lach

The Evolving Role of Humans in AI Collaboration

Nature demonstrates a remarkable pattern of living systems integrating new tools into themselves, transforming them into essential components of life itself. For example, mitochondria, once independent bacteria, became integral to eukaryotic cells through endosymbiosis, taking on the role of energy producers. Similarly, the incorporation of chloroplasts enabled plants to harness sunlight for photosynthesis, and even the evolution of the vertebrate jaw exemplifies how existing skeletal elements were repurposed into functional innovations. These examples highlight nature's ability to adapt and integrate external systems, offering a profound analogy for how humans might collaborate with AI to augment and expand our own capabilities.

Current AI systems, regardless of their sophistication, remain tools that require some kind of human direction to achieve meaningful outcomes. Effective collaboration with AI involves not only instructing the machine but also understanding its capabilities and limitations. Clear communication of our goals ensures that AI can process and act upon them accurately. This process transcends mere command issuance; it quickly becomes a dynamic, iterative dialogue in which human intuition and machine computation synergize toward a desirable outcome.

Jensen Huang’s Take on Democratizing Programming

“The language of programming is becoming more natural and accessible.” NVIDIA CEO Jensen Huang at Computex Taipei, emphasizing how interfacing with computers is undergoing a radical shift. Credit: NVIDIA 2023

NVIDIA Founder and CEO Jensen Huang highlighted this paradigm shift in these words:

“The language of programming is becoming more natural and accessible. With AI, we can communicate with computers in ways that align more closely with human thought. This democratization empowers experts from all fields to leverage computing power without traditional coding.”

The quote is from Huang's Computex 2023 keynote address, a couple of years ago. This transformation means that domain specialists, not just scientists and software engineers, can now harness AI to drive their work forward. But this evolution doesn't render foundational knowledge in computer science obsolete; it merely underscores an imminent change in the nature of human-machine interactions. Let us dive further into this interaction.

Augmenting Human Intelligence Through New Cybernetics

A helpful approach to fully realizing AI's potential is to focus on augmenting human intelligence in ways akin to those of the infamous cybernetic tradition (Wiener, 1948; Pickering, 2010). « Infamous » only because of how abruptly the field fell out of favor in the late '70s, when the era's technical limitations led to its decline, along with growing skepticism about its implications for agency, autonomy, and human identity. Cybernetics was nevertheless not a poor approach by any count, and may soon become relevant again as technology evolves into substrates that better support its goals.

El Ajedrecista: the first computer game in history. Norbert Wiener (right), the father of cybernetics—the science studying communication and control in machines and living beings—is shown observing the pioneering chess automaton created by Leonardo Torres Quevedo, demonstrated here by his son at the 1951 Paris Cybernetic Conference. Image credit: Wikimedia Commons

Such cybernetic efforts towards human augmentation aim to couple human cognition with external entities such as smart sensors, robotic tools, and autonomous AI models, through a universal science of dynamical control and feedback. This integration fosters enactive, rich, high-bandwidth communication between humans and machines, enabling a seamless exchange of information and capabilities.
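To make that cybernetic motif concrete, here is a minimal sketch in Python of a negative-feedback loop, the sense-compare-act cycle at the core of control and feedback; the gain and the system's dynamics are invented for illustration, not drawn from any particular device.

```python
# A minimal negative-feedback loop: the sense-compare-act cycle at the
# heart of cybernetic control. Gain and dynamics are illustrative only.
goal, state, gain = 1.0, 0.0, 0.3

for _ in range(20):
    error = goal - state   # sense: compare observation with intention
    state += gain * error  # act: correct in proportion to the error

print(round(state, 3))     # the state has converged near the goal (~0.999)
```

The same loop structure, with richer sensors and actuators on both sides, is what couples a human and a machine into one regulated system.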

Embedding ourselves within a network of intelligent systems, and possibly other humans as well, enhances our cognitive and sensory abilities. Such a symbiotic relationship would allow us to better address complex challenges by efficiently processing and interpreting vast amounts of data. Brain-computer interfaces (BCIs), which facilitate direct communication between the human brain and external devices, offer promising avenues for cognitive enhancement (Lebedev & Nicolelis, 2017). Another example is augmented reality (AR), which overlays digital information and virtual controls onto real-world percepts, strengthening our connection to physical reality and our ability to act on it (Billinghurst et al., 2015; Dargan et al., 2023). If they manage to seamlessly blend physical and virtual realities, AR systems could amplify human cognitive and sensory capabilities, empowering us to navigate our problem spaces in contextually meaningful yet intuitive ways.

Humble Steps: The Abacus

Augmenting human intelligence with tools. Here, we observe the classical but no less impressive example of the abacus being used to assimilate advanced mathematical skills. Video Credit: Smart Kid Abacus Learning Pvt. Ltd. 2023

We just mentioned a couple of rather idealized, advanced-stage technologies for cybernetically coupling humans to new renderings of the problem spaces they navigate. But a new generation of cybernetics need not start at the most complex and technologically advanced level. In fact, the roots of such technology can already be found in simple yet powerful and transformative tools such as the abacus. That simple assembly of wood, beads, and strings has done a tremendous job of externalizing our memory and computation, extending our mind's ability to process numbers and solve problems. In doing so, it has demonstrated how even the most modest-looking tool can amplify cognitive abilities in a groundbreaking way, laying the basis for more sophisticated augmentations that may merge human intelligence with machine computation.

Not only did the abacus extend human mathematical abilities, but it did so enactively, through an active, reciprocal interaction that stretches our existing mechanisms of perception and understanding, enhancing how we sense, interpret, interact with, and make sense of the world around us. The device doesn't replace our innate sensory mechanisms; it amplifies them. It draws our mathematical cognition into a back-and-forth exchange of information between our inner understanding and the tool's mechanics, through which we effectively (and, again, enactively) extend ourselves. Similarly, today's cybernetic technologies can start as modest, focused augmentations (intuitive, accessible, and seamlessly integrated), building step by step toward more profound cognitive symbiosis.
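To make the externalization concrete, here is a toy sketch of abacus-style addition in Python; the `abacus_add` helper and the rod representation are inventions for illustration. Each rod holds one digit, and carries are propagated bead by bead by the device, so the frame, not the user's head, holds the working memory.

```python
# A toy soroban: each rod holds a digit (0-9), least significant first.
# Addition proceeds rod by rod with explicit carries, externalizing the
# working memory a mental calculation would otherwise need.
def abacus_add(rods, number):
    digits = list(map(int, str(number)))[::-1]
    carry = 0
    for i in range(max(len(rods), len(digits)) + 1):
        if i >= len(rods):
            rods.append(0)          # extend the frame with a fresh rod
        total = rods[i] + (digits[i] if i < len(digits) else 0) + carry
        rods[i], carry = total % 10, total // 10
    while len(rods) > 1 and rods[-1] == 0:
        rods.pop()                  # trim unused leading rods
    return rods

rods = [0]
for n in (472, 389, 58):
    abacus_add(rods, n)
print(int("".join(map(str, rods[::-1]))))  # prints 919
```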

Extending One’s Capabilities Through Integration of Tools

The principle of extending human capabilities through tools, as exemplified by the abacus, can be generalized in a computational framework, where integrating various systems may enhance their combined power to solve problems. As a simple example of this phenomenon, in Biehl and Witkowski (2021) we considered how the computational capacity of a specific region of an elementary cellular automaton (ECA) can be expanded, in terms of the number of functions it can compute.

Modern cybernetics: a person seamlessly working on their laptop using a prosthetic arm, exemplifying an extension of their own embodiment and cognitive ability. This somewhat poetically illustrates Biehl and Witkowski (2021)'s work on how the number of functions computed by a given agent expands as the agent's computational tools expand. Credit: Anna Shvets

Interestingly, this research led to the discovery that while coupling certain agents (regions of the ECA) with certain devices they may use (adjacent regions) typically increases the number of functions they are able to compute, some counterintuitive cases emerged as well. Sometimes, despite « upgrading » the devices used by agents (enlarging the regions adjacent to them), the computing abilities of the agents ended up dropping instead of increasing. This is the computational analog of people upgrading to a new phone, a more powerful laptop, or a bigger car, only to end up behaviorally handicapped by that very upgrade.
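For readers who want to experiment, the sketch below gives a rough, simplified flavor of the setup; it is not the formal construction of Biehl and Witkowski (2021), and the rule, region sizes, number of steps, and readout are arbitrary choices. It fixes an ECA rule, treats a few cells as the « agent » and an adjacent block as its « device », and counts how many distinct input-to-output maps the agent realizes across device configurations.

```python
from itertools import product

RULE = 110  # arbitrary ECA rule, chosen for illustration

def eca_step(cells, rule=RULE):
    """One synchronous update of a finite ECA row with zero boundaries."""
    n = len(cells)
    new = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        new.append((rule >> pattern) & 1)
    return new

def functions_computed(agent_size, device_size, steps=3):
    """Count distinct input->output maps the agent region realizes,
    one candidate map per configuration of the adjacent 'device' region."""
    maps = set()
    for device in product([0, 1], repeat=device_size):
        table = []
        for inputs in product([0, 1], repeat=agent_size):
            row = list(inputs) + list(device)  # agent cells, then device cells
            for _ in range(steps):
                row = eca_step(row)
            table.append(tuple(row[:agent_size]))  # read output off the agent
        maps.add(tuple(table))
    return len(maps)

# A larger device region usually unlocks more functions, but not always:
for d in range(5):
    print(f"device size {d}: {functions_computed(agent_size=3, device_size=d)} functions")
```

Enumerating device sizes this way is one simple way to observe both the typical growth in computable functions and the occasional drops described above.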

This shows how this mechanism of functional extension, much like the abacus's amplification of human cognition, extends nicely to a large range of situations, and yet should be treated with great care, as it may greatly affect the agency of users. Now that we have set the scene, let's dive into the specific and important tool of programming.

Programming: Towards A Symbiotic Transition

Programming. What an impactful yet fundamental tool invented by humans. Was it even invented by humans? The process of creating and organizing instructions (which can later be followed by oneself or by a distinct entity to carry out specific tasks) can be found in many instances in nature. DNA encodes genetic instructions, ant colonies use pheromones to dynamically compute and coordinate complex tasks, and even « simple » chemical reaction networks (CRNs) display many features of programming: certain molecules act as instructions that catalyze their own production, effectively programming reactions that propagate themselves when specific conditions are met. Reservoir computing is a recent example of this ubiquity of programming in the physical world.
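As a toy version of that last point, the following sketch numerically integrates the autocatalytic reaction A + X → 2X; the rates and initial amounts are illustrative, not taken from any published CRN. The product X acts as its own instruction: production takes off once a seed of X is present.

```python
# Toy Euler integration of the autocatalytic reaction A + X -> 2X.
# Rates, step size, and initial amounts are illustrative only.
k, dt = 0.5, 0.01
a, x = 10.0, 0.01       # A: substrate, X: self-catalyzing "instruction"
for step in range(3001):
    rate = k * a * x    # production speeds up as X accumulates
    a -= rate * dt
    x += rate * dt
    if step % 600 == 0:
        print(f"t={step * dt:4.1f}  A={a:6.3f}  X={x:6.3f}")
```

Running it shows the characteristic sigmoidal takeover: X stays near its seed value, then consumes A almost entirely once the reaction self-amplifies.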

Programming: a human invention, or a human discovery? This image, titled “an artist's illustration of artificial intelligence”, was inspired by neural networks used in deep learning. Credit: Novoto Studio

Recently, many have argued that it may no longer be worthwhile to study traditional programming, as prompt engineering and natural language interactions come to dominate. While it's true that the methods for working with AI are changing, the essence of programming—controlling another entity to achieve goals—remains intact. Sure, programming will look and feel different. But has everyone forgotten how far we've come since Ada Lovelace's pioneering work and the earliest low-level programs? Modern software engineering typically involves working with numerous abstraction layers built atop the machine and OS levels: high-level languages like Python, front-end frameworks such as React or Angular, and containerization tools like Docker and Kubernetes. The recent changes brought by AI need not be seen as so different. At the end of the day, humans must use some language (or, as we've established, a set of languages at different levels of abstraction) to combine their own cognitive computation with the machine's, and with that of other humans at the scale of society, so as to work together and produce meaningful results.
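To make the layering concrete, here is one and the same computation (averaging a list) expressed at two levels of programming abstraction, followed by the natural-language level that AI assistants now accept; the snippets are purely illustrative.

```python
# One computation, three levels of abstraction.
from statistics import mean

data = [3, 1, 4, 1, 5]

# Level 1 - low-level style: explicit memory and control flow.
total, i = 0, 0
while i < len(data):
    total += data[i]
    i += 1
print(total / len(data))

# Level 2 - high-level style: the same intent, one abstraction up.
print(mean(data))

# Level 3 - natural language, the layer AI assistants now accept:
# "Compute the average of these numbers: 3, 1, 4, 1, 5."
```

Prompting, in this view, is simply the newest rung on a ladder of languages we have been climbing since the first machine codes.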

This timeless collaboration—akin to the mitochondria or chloroplast story we touched on earlier—will always involve the establishment of a two-way communication. On one hand, humans must convey goals and contexts to machines, as AI cannot achieve objectives without clear guidance. On the other, machines must send back outputs, whether as solutions, insights, or ongoing dialogue, refining and improving the achievement of goals. This bidirectional exchange is the foundation of a successful human-machine partnership. Far from signaling the end of programming, it signals its natural evolution into a symbiotic relationship where human cognition and machine computation amplify one another.

The seamless integration of prosthetics in a human’s everyday life. Credit: Cottonbro Studio

Bidirectional, Responsible Human-Machine Interaction

What this piece is generally gesturing at is that by integrating cybernetic technologies to seamlessly couple human cognition with increasingly advanced, generative computational tools, we can build a future where human intelligence is not replaced but expanded. It will be key for these designs to enable enactive, interactive, high-bandwidth communication that fosters mutual understanding and goal alignment. On this path, the future of programming isn't obsolescence; it's transformation into a symbiotic relationship. By embracing collaborative, iterative human-machine interactions, we can amplify human intelligence, creativity, and problem-solving, unlocking possibilities far beyond what either humans or machines can achieve alone.

Human-Machine Interaction: Who Is Accountable? This image represents ethics research on human involvement in data labeling. Credit: Ariel Lu

Human-machine interaction is inherently bidirectional: machines provide feedback, solutions, and insights that we interpret and integrate, while humans contribute context, objectives, and ethical considerations that guide AI behavior. This continuous dialogue enhances problem-solving by combining human creativity and contextual understanding with machine efficiency and computational power. As we navigate this evolving technological landscape, focusing on responsible AI integration will be critical. Our collaboration with machines should aim to augment human capabilities while respecting societal values, goals, and, importantly, the well-being of all life.

As Norbert Wiener put it:

“The best material model for a cat is another, or preferably the same cat. In other words, should a material model thoroughly realize its purpose, the original situation could be grasped in its entirety and a model would be unnecessary. Lewis Carroll fully expressed this notion in an episode in Sylvie and Bruno, when he showed that the only completely satisfactory map to scale of a given country was that country itself” (Wiener, 1948).

“The best material model for a cat is another, or preferably the same cat.” – Norbert Wiener. Image Credit: Antonio Lapa/ CC0 1.0

True innovation lies in complementing the complexities of life rather than merely replicating them. To augment human intelligence, we ought to design our new technologies—especially AI—to build on and harmonize with the rich tapestry of human knowledge and experience and the physicality of natural systems, rather than attempting to replace them with superficial imitations. Our creations are only as valuable as their ability to reflect and amplify the intricate nature of life and the world itself, enabling us to pursue deeper understanding, open-ended creativity, and meaningful purpose.

The ultimate goal is clear: to design a productive connective tissue for human ingenuity that seamlessly syncs up with the transformative power of machines, and with a diverse set of substrates beyond the purview of today's trendiest computing. By embracing and amplifying what we already have, we may unlock new possibilities and redefine adjacent possibles that transcend the boundaries of human imagination. This approach is deeply rooted in a perspective of collaboration and mutual growth, working toward a world in which technology remains a force for empowering humanity, not replacing it.

Allow me to digress into a final thought. Earlier today, a friend shared an insightful observation about how Miyazaki's films, such as Kiki's Delivery Service, seem to exist outside typical Western narrative patterns, a thought itself prompted by their watching this video. While the beloved Disney films of my childhood may have offered some great morals and lessons here and there, they fell short of showing what a kind world would look like. Reflecting on this, I considered how exposure to Ghibli's compassionate, empathetic storytelling might have profoundly shaped my own learning journey as a child (although it did get to me eventually). The alternative worldview these movies offer gently nudges us toward being more charitable. By encouraging us to view all beings as inherently kind, well-intentioned, and worthy of empathy and care, this compassionate stance may be exactly what we need as we strive toward more meaningful, symbiotic collaborations with technology, and certainly with other humans too.



Redefining our relationship with AI: shifting from alignment to companionship

As the AI landscape keeps reshaping itself at breakneck speed, so does the relationship between humans and technology. By paying attention to the autopoietic nature of this relationship, we may work towards building ethical AI systems that respect both the unique particularities of being human and the unique emergent qualities that our technology displays as it evolves. I'd like to share some thoughts about how autopoiesis and care, via the pursuit of an ethics of our relationship with technology, can help us cultivate a society that creates a better, healthier, and more ethical ecosystem for AI, from a naturally human perspective.

The term ‘autopoiesis’, or ‘self-creation’ (from Greek αὐτο- (auto-) ‘self’ and ποίησις (poiesis) ‘creation, production’), was first introduced by Maturana and Varela (1981) to describe a system capable of maintaining its own existence within a boundary. This principle highlights the importance of understanding the relationship between self and environment, as well as the dynamic process of self-construction that gives rise to complex organisms (Clawson & Levin, 2022).

Ethical Artificial Intelligence. The main components of ethical AI governance. Here, we suggest that these ingredients naturally emerge from an autopoietic communication design focused on companionship instead of alignment. Photo by: DOD Graphic

To build and operate AI governance systems that are ethical and effective, we must first acknowledge that technology should not be seen as a mere tool serving human needs. Instead, we should view it as a partner in a rich relationship with humans, where integration and mutual respect are the default mode of engagement. Philosophers like Martin Heidegger and Martin Buber have warned us against reducing our relationship with technology to mere tool use, as this narrow view can lead to a misunderstanding of the true nature of our relationship with technological agents, including both its potential dangers and its values. Heidegger (1954) emphasized the need to view technology as a way of understanding the world and revealing its truths, and suggested that a free relationship with technology would respect its essence. Buber (1958) argued that a purely instrumental view of technology reduces humans to mere means to an end, which in turn has a dehumanizing effect on society itself. Instead, one may see the need for a more relational view of technology, one that recognizes the interdependence between humans and the technological world. This requires a view of technology that is embedded in our shared human experience and promotes a sense of community and solidarity among all beings, under a perspective that may benefit from including technological beings, or, better, hybrid ones.

Illustration of care light cones through space and time, showing a shift in the possible trajectories of agents made possible by integrated cooperation between AI and humans. Figure extracted from our recent paper on an ethics of autopoietic technology. Design by Jeremy Guay.

In a recent paper, we presented an approach through the lens of a feedback loop of stress, care, and intelligence (the SCI loop), which can be seen as a perspective on agency that does not rely on burdensome notions of permanent and singular essences (Witkowski et al., 2023). The SCI loop emphasizes the integrative and transformational nature of intelligent agents, regardless of their composition: biological, technological, or hybrid. By recognizing the diverse, multiscale embodiments of intelligence, we can develop a more expansive model of ethics that is not bound by artificial, limited criteria. To address the risks associated with AI ethics, we can start by identifying those risks, working towards an understanding of the interactions between humans and technology and of the potential consequences of these interactions. We can then analyze the risks by examining their implications within the broader context of the SCI loop and other relevant theoretical frameworks, such as Levin's cognitive light cone (in biology; see Levin & Dennett (2020)) and the Einstein-Minkowski light cone (in physics).

Poster of the 2013 movie “Her”, directed by Spike Jonze, illustrating the integration between AI and humans as companions, not tools.

Take a popular example: the 2013 movie “Her” by Spike Jonze, in which Theodore, a human, comes to form a close emotional connection with his AI assistant, Samantha, the complexity of their relationship challenging the concept of what it means to be human. The story, although purely fictional and highly simplified, depicts a world in which AI becomes integrated with human lives in a deeply relational way, pushing a view of AI as a companion rather than a mere tool serving human needs. It gives a crisp vision of how AI can be viewed as a full companion, to be treated with empathy and respect, helping us question our assumptions about the nature of AI and our relation to it.

One may have heard it all before, in some (possibly overly optimistic) posthumanist utopian scenarios. But one may argue that the AI companionship view, albeit posthumanist, constitutes a complex and nuanced theoretical framework drawing on the interplay between artificial intelligence, philosophy, psychology, sociology, and other fields studying the complex interaction of humans and technology (Wallach & Allen, 2010; Johnson, 2017; Clark, 2019). This different lens radically challenges traditional human-centered perspectives and opens up new possibilities for understanding the relationship between humans and technology.

This leads us to very practical steps the AI industry can take towards a more companionate relationship with humans: recognizing the interdependence between humans and technology, building ethical AI governance systems, and promoting a sense of community and solidarity among all beings. For example, Japan, a world leader in the development of AI, is increasing its efforts to educate and train its workforce on the ethical intricacies of AI and to foster a culture of AI literacy and trust. Its “Society 5.0” vision aims to leverage AI to create a human-centered, sustainable society that emphasizes social inclusivity and well-being. The challenge now is to ensure that these initiatives translate into concrete actions, and that AI is developed and used in a way that respects the autonomy and dignity of all stakeholders involved.

AI Strategic Documents Timeline by UNICRI AI Center (2023). For more information on the AI regulations timeline, please see here.

Japan is taking concrete steps towards building ethical AI governance systems and promoting a more companionate relationship between humans and technology. One example of such steps is the creation of the AI Ethics Guidelines by the Ministry of Internal Affairs and Communications (MIC) in 2019. These guidelines provide ethical principles for the development and use of AI. Additionally, the Center for Responsible AI and Data Intelligence was established at the University of Tokyo in 2020, aiming to promote responsible AI development and use through research, education, and collaboration with industry, government, and civil society. Moreover, Japan has implemented a certification system for AI engineers to ensure that they are trained in the ethical considerations of AI development. The “AI Professional Certification Program” launched by the Ministry of Economy, Trade, and Industry (METI) in 2017 aims to train and certify AI engineers in the ethical and social aspects of AI development. These initiatives demonstrate Japan’s commitment to building ethical AI governance systems, promoting a culture of AI literacy and trust, and creating a human-centered, sustainable society that emphasizes social inclusivity and well-being.

A creative illustration of robotic process automation (RPA) based on AI companionship theory instead of artificial alignment control policies. Creator: IR_Stone | Credit: Getty Images/iStockphoto

AI is best seen as a companion rather than a tool. This positive way of viewing the duet we form with technology may in turn lead to a more relational and ethical approach to AI development and operation, helping us build a more sustainable and just future for both humans and technology. By fostering a culture of ethical AI development and operation, we can work to mitigate the risks identified above and ensure that the impact on stakeholders is minimized. This includes building and operating AI governance systems within organizations, both domestic and overseas, across various business segments. In doing so, we will be better equipped to navigate the challenges and opportunities that lie ahead, ultimately creating a better, healthier, and more ethical AI ecosystem for all. It is our responsibility to take concrete steps to build ethical and sustainable systems that prioritize the well-being of all. This is a journey for two close companions.

References

Bertschinger, N., Olbrich, E., Ay, N., & Jost, J. (2008). Autonomy: An Information Theoretic Perspective. BioSystems.

Buber, M. (1958). I and Thou. Trans. R. G. Smith. New York: Charles Scribner’s Sons.

Clark, A. (2019). Where machines could replace humans—and where they can’t (yet). Harvard Business Review. https://hbr.org/2019/03/where-machines-could-replace-humans-and-where-they-cant-yet

Clawson, R. C., & Levin, M. (2022). The Endless Forms of Self-construction: A Multiscale Framework for Understanding Agency in Living Systems.

Haraway, D. (2013). The Cyborg Manifesto. In The International Handbook of Virtual Learning Environments.

Heidegger, M. (1954). The Question Concerning Technology. Trans. W. Lovitt. New York: Harper Torchbooks.

Huttunen, T. (2022). Heidegger, Technology, and Artificial Intelligence. AI & Society.

Johnson, D. G. (2017). Humanizing the singularity: The role of literature in AI ethics. IEEE Technology and Society Magazine, 36(2), 6-9. https://ieeexplore.ieee.org/document/7882081

Latour, B. (1990). Technology is Society Made Durable. The Sociological Review.

Levin, M., & Dennett, D. C. (2020). Cognition all the way down. Aeon Essays.

Maturana, H. R., & Varela, F. J. (1981). Autopoiesis and Cognition: The Realization of the Living.

Varela, F. J., Maturana, H. R., & Uribe, R. (1974). Autopoiesis: The Organization of Living Systems, its Characterization and a Model. BioSystems, 5(4), 187–196.

Waddington, C. H. (2005). The Field Concept in Contemporary Science. Semiotica.

Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford University Press.

Witkowski, O., Doctor, T., Solomonova, E., Duane, B., & Levin, M. (2023). Towards an Ethics of Autopoietic Technology: Stress, Care, and Intelligence. https://doi.org/10.31234/osf.io/pjrd2

Witkowski, O., & Schwitzgebel, E. (2022). Ethics of Artificial Life: The Moral Status of Life as It Could Be. In ALIFE 2022: The 2022 Conference on Artificial Life. MIT Press. https://doi.org/10.1162/isal_a_00531

Links

Center for the Study of Apparent Selves
https://www.csas.ai/blog/biology-buddhism-and-ai-care-as-a-driver-of-intelligence

Initiatives for AI Ethics by JEITA Members
https://www.jeita.or.jp/english/topics/2022/0106.html

Japan’s Society 5.0 initiative: Cabinet Office, Government of Japan. (2016). Society 5.0. https://www8.cao.go.jp/cstp/english/society5_0/index.html

What Ethics for Artificial Beings? A Workshop Co-organized by Cross Labs
https://www.crosslabs.org/blog/what-ethics-for-artificial-beings

DOI: https://doi.org/10.54854/ow2023.01