One could be forgiven for feeling that AI has been all but shoved down our collective throats over the past couple of years. In an age where AI is discussed with a kind of breathless inevitability, the cold truth is that this new world of ours is not only brave but also filled with shame—less a utopia of progress than a mirror to our metaphysical insecurities. As each headline or LinkedIn post outshouts the last in forecasting either civilisational salvation or collapse at the hands of AI, it is worth pausing to consider whether all this quietly exposes little more than our ongoing, crumbling faith in being human.

Indeed, we are not merely living through an era of stunning technological breakthroughs, but also of stunning psychological unravelling—one that the philosopher Günther Anders anticipated nearly three-quarters of a century ago when he described the phenomenon of Promethean shame: that peculiar modern emotion of embarrassment, even inferiority, before our own creations.

Instead of looking at our creations and marvelling at the ingenuity it took to devise them in the first place, we measure ourselves against them and, unsurprisingly, often find ourselves lacking.

As Anders poetically put it, we are ‘inverse utopians’—unable to imagine the worlds we create (as opposed to traditional utopians, who are unable to create the worlds they imagine) and, as such, finding ourselves devoid of meaning.

This, however, is nothing new or unique to AI—it echoes earlier techno-fetishes, from steam engines to cars to the internet to, in essence, any technology at all, where the very things meant to assist and augment human practice instead came to dictate it. In each case, the tool quickly transcended its status as instrument and became a standard of legitimacy, an ontological mirror to reflect our glaring obsolescence back at us—if you are not using that latest thing, it is YOU who must be the problem. The machines we claim to design as extensions of our capacities instead appear to eclipse us entirely, their fluency outshining our stammering, their precision mocking our ambiguity, their tirelessness a mirror to our fatigue.

This disjunction is not just about scale, but about tempo. As Anders noted, our moral, political, and psychological apparatus develops far more slowly than our technical capabilities. We manufacture artefacts whose implications we fail to metabolise emotionally; as a result, we neither resist, reflect on, nor comprehend them. In this reversal of admiration, awe curdles into inadequacy and, eventually, into shame. If machines can write books, solve problems, pass difficult exams, compose images, speech and music—often in seconds—what, then, becomes of thinking, effort, struggle and, perhaps most critically, of doubt? When we are told that AI can not only do what we do, but do it better, faster, and more reliably, the question becomes not whether AI is replacing us, but whether there is any point to us at all in the long run.

This cultural mood—part foaming at the mouth with excitement, part panic at the impending apocalypse—has less to do with the technology itself and more to do with a metaphysical retreat into the universe of objectivity. It reflects the internalisation of a long-standing but deeply flawed materialist metaphysics: one that equates experience with cognition, postulates that consciousness is computation, that thought is reducible to information, and that meaning is just a matter of output efficiency. In this sense, materialism is not merely a metaphysical claim but a creed in which more is not only better but essential. An ethos of more text, more content, more speed, more anything and everything reshapes not only what we do, but why we do anything at all. Within this framework, value is measured by volume, speed, and objectivity rather than depth, ambiguity, or subjectivity.

If any of this sounds familiar, it is because this is precisely the ethos that animates the current celebration of LLMs. They do not help us think better; they allow us to produce more, faster. Like Huxley’s assembly-line reproduction of culture in Brave New World, today’s AI hype applies a hyper-materialist machine logic to the very essence of what it means to be human. And nowhere is this more evident than in our response to generative language models. LLMs, trained not to understand but to simulate, are readily granted epistemic authority simply because they appear to resemble us. For in a culture already primed to equate function with value, resemblance alone is good enough.

Günther Anders foresaw this trajectory with chilling clarity: as our tools became more refined, we would increasingly model our own behaviour on theirs—not because they are human, but because we fear that we are not as good as they are. We create machines whose workings exceed our understanding and then conclude that it must be we who are inadequate. Rather than interrogating the design, we interrogate ourselves. This inversion of agency—in which humans come to see themselves as the weak link in the systems they created—is what Anders termed “the outdatedness of the human.” It marks the shift from self-affirmation to self-abandonment, from mastery to mimicry, from invention to inferiority. The tradition Anders draws from—phenomenology—reminds us that being-in-the-world is a complex, historically contextual and thoroughly embodied experience. Our existence, our growth, and our derivation of meaning are the result of our physical participation in the world—of trials, errors, discoveries and, most critically, efforts.

Take writing—the bread and butter of LLMs—as an example. The common trope that LLMs such as ChatGPT will make ‘writing better’ is a misconception. To be sure, they will probably make reading better and spare us all a great deal of barely comprehensible gibberish but, as far as writing is concerned, anyone who has ever taken the care to extract even a semi-decent thought from their mind onto paper will attest that writing is not merely a means of content creation but a deeply existential struggle. In this respect, Joan Didion hit the nail on the head with “I don’t know what I think until I write it down.” The observation is perceptive because knowledge is not only an embodied experience but also one that emerges out of action and participation. Sorry, Descartes: your existence is not predicated on your thinking alone—it is predicated on your action and agency. LLMs cannot do that for us, for they do not experience their writing and, hence, they do not know.

However, we seem hell-bent on reducing that key aspect of existence—call it hard work, struggle, suffering, whatever—as much as possible. Huxley, following Nietzsche and many others, described this as the disappearance of tragedy: a cultural shift in which suffering, contradiction, and ambiguity are not simply avoided but engineered out of our epistemic practices. Why were ancient cultures addicted to tragedy? Why does the heroine in an opera always die? And why do the most compelling stories we have always revolve, in one way or another, around death and rebirth? Because these are the stories that resonate with our existential wiring. Martin Heidegger, one of Anders’ professors, famously held that we only become aware of living when we become aware of death—not because we discover finality, but because that is when the struggle to live begins. LLMs do not struggle, suffer, or work hard; they generate. To declare this superior to the struggles that validate existence is to surrender our epistemic dignity—the right to struggle for meaning rather than receive it pre-digested.

None of this, however, is entirely our individual fault. The rhetoric pushing the adoption of AI tools across workplaces, universities, public discourse, and even politics is not just a technological trend—it is a symptom of a materialist worldview put on steroids by the breakneck speed at which LLMs develop capabilities that would have seemed like science fiction a mere decade ago. As a result, we have become so comfortable in the language of inputs and outputs, of productivity and scale, that we scarcely remember how (or why) to defend the values of method, uncertainty, or complex thought. More concerning still, the romantic humanist tradition—rooted in inwardness and irreducibility—has been so thoroughly marginalised that invoking it now feels not merely quaint but reactionary.

The sense of shame that gives rise to the myth that our creations render us obsolete exposes a deeper metaphysical crisis of the 20th century, one that most of us would be all too happy to brush under the carpet: that, in Nietzschean terms, God is dead and we are the ones who killed him on the altar of materialism. The price? Existential sloth. Even a cursory look across the world’s major theological traditions reveals what we instinctively know to be true—that hard work is the crucible of growth, the signature of consciousness, the path to wisdom. In Christianity, it redeems; in Judaism, it binds us to history and responsibility; in Islam, it humbles and tests us; in Buddhism, it awakens insight. To suffer, in this sense, is not to fail—it is to live meaningfully and attentively. No pain, no gain, as it were. How, then, did we come to sell out thousands of years of wisdom for AI agents that can tell us what to wear based on the contents of our calendars and the day’s weather forecast?

Anders warned that our deepest shame would come not from what machines can do, but from our growing belief that we ought to be like them. Both the capabilities and the availability of AI are nothing short of incredible, and these technologies are fascinating, to be sure. But the shiny new things we marvel at are also a mirror to our humanity—an opportunity to see how the erosion of our willingness to endure ambiguity, to think purposefully, to act morally, and to suffer meaningfully is taking us towards a Huxleyan dystopia: a brave, new, but also existentially shameful world.