Ever since I read Vervaeke, Mastropietro and Miscevic’s Zombies in Western Culture, the pattern of zombie-ism as a mythological folk trope has become for me one of those inescapable phenomena so impressed upon one’s gray matter that it’s capable of moving backwards from the brain to the senses — which is to say that I see it everywhere, like shapes in the clouds, a face in the moon, or the stupid Among Us crew member. This intensely neurological response to recognizable patterns was agitated when I heard the suggestion of late that AI is the “real zombie apocalypse.” This proposition has a valence post-Vervaeke that it would have decidedly lacked before.

For the folks watching at home, Vervaeke’s work centers largely on the meaning crisis in the post-Christian West. He posits in the aforementioned book that zombies are a storycrafting device constructed to narratively comprehend the destruction of our shared meaningmaking capacity with the collapse of the Christian meta-narrative in the secularized West. Zombies, on this view, emerge as a popular representation of the collective unconscious fear of one another, of infectious vacuity, and of a perverse Christian eschaton in which resurrection and apocalypse become signifiers of the triumph of meaninglessness.

Meeting this context head on is my personal aversion to artificial intelligence — an impulse which continues deepening in proportion to the sophistication of LLMs and to others’ affinity for them. I have already made peace with the eventuality in which a certain political stratum develops against robophobic bigotry, branding me a human chauvinist for rejecting the equality and personal autonomy of automatons (a term that may be considered a slur by then). But the autonomy of machines is not what concerns me so much as the loss of autonomy in humanity.

AI language models are an illustrative product and catalyst of the loss of meaning, specifically of the way information is often used as a substitute for meaning. What a linguistic AI produces is not information derived from the world, but information derived from the already astronomically vast body of information contained on the internet. Hence, any purported meaning that can be derived therefrom is pertinent to literally nothing. The video that applied the “zombie apocalypse” label to AI did so with a view to an increasing dependence upon AI models such as ChatGPT, particularly among young people. This dependence upon such models for information demonstrably erodes the meaning-making and meaning-sensing capacity of the brain.

The brain which relies on AI becomes unable over time to extract information from its environment in a manner conducive to action. If Socrates feared that literacy would weaken the mind by outsourcing memory, he would have mourned in sackcloth and ashes the outsourcing of thought itself to AI. The ability to ask questions, find and articulate answers is the cornerstone of higher cognition. When these powers of the mind atrophy from disuse, a person becomes in many ways non-functional. This is not a hypothetical scenario, as the video mentioned demonstrates. Young people who use AI for tasks such as school assignments are evidently mentally inhibited from exercising the faculties for which AI acts as a substitute.

The “apocalyptic” nature of this problem is not hard to imagine, either. Within a single generation, AI dependence certainly has the potential to create pervasive and unprecedented economic and political problems. A society in which even ten to twenty percent of persons are dependent on LLMs even to think is a bleak picture, without a doubt.

Popular zombie stories tend to vary on the cause of the outbreak, but it is usually not strictly a natural phenomenon, often being caused, or at least enabled or exacerbated, by mankind’s technological hubris. Bio-engineering, clandestine experiments, or industrial climate catastrophe have all been implicated in fictions of global de-humanization. The common thread is that humanity in these stories generally progresses in knowledge to a point of disaster, at which it begins losing its humanity and transforming into a hollowed, hungry, mindless mutation of itself. Man rises so high above his environment that he becomes something lower than animal.

The infectious nature of zombification is also of note, seeing how the popular myth has seemingly developed away from the notion that the zombie apocalypse consists in the dead arising; rather, it merely begins that way and proceeds as the undead make the living like themselves. Even the label “zombification” insinuates a framework in which zombies are understood as products of a phenomenon of transformation. A zombie is something a healthy person can become.

Returning to the AI analogue, while the dead certainly aren’t safe from being parroted or puppeted by word-belching software, AI zombification is a process of transformation which can, like a disease, affect an otherwise healthy person. Just as the fittest bodybuilder can become a lifeless blob if he becomes too accustomed to the couch, the sharpest mind can be dulled by a lack of exercise, and AI acts upon perhaps the most critical functions of thought. Because AI is generally serviced over the internet, a technology which is now carried even in one’s pocket, it possesses a staggering infectivity. 

Zombies eat brains, Vervaeke saliently observes, because brains are the organ of meaning. AI, then, creates zombies because it is itself a zombie, brainless and brain-devouring. It drains living people of their cognitive powers and shambles about convincingly enough to seem almost human, but for all the power it has over minds it lacks a mind of its own. Just because it replaces human thought does not mean it thinks for itself. It is a shell of a mind emergent from an imitation brain.

Those who sufficiently outsource their thinking to an LLM like GPT ultimately become possessed by it. Their bodies cease to be vessels of their own action, outlets for behavior based on a peculiar spirit of rationality, and become mere expressions of the pretended meaning regurgitated by the machine. The machine is a series of outputs based on its programming. When those outputs become the substitute for thought, the human being becomes an extension of the output by allowing behavior to be dictated by that underlying programming. Man becomes less than animal as a product of his own genius, infected with the brain-eating undeath.

Now, a key difference is that there is an obvious cure to this zombie-virus: just don’t. Simply stop using LLMs like ChatGPT. Just don’t be a zombie. But the danger still persists. How are people supposed to coexist with zombies? When AI pervades not just the usage of gimmicky chatbots but actually underlies the flow of all information throughout society, how does one escape infection? If the news and the stock market and entertainment, for example, were all infected, what mental hygiene measures could protect a person from making decisions based on AI informational sausage?

In my estimation, these are tough problems without obvious solutions. Total withdrawal from society is an attractive option because of its simplicity, but thorough consideration shows its impracticability for the purposes of evading AI. If the infection is algorithmically assembled information, then one can only avoid it by also avoiding all those who have been infected by it. If the paranoid robophobe’s off-grid homestead is visited by a well-meaning normie who happens to share a bit of news her little smart home digital assistant gave her, or if she finds the homestead based on AI directions, then the quarantine is breached. A group of gridless robophobes would require absolute, utter isolation from the outside world.

Another equally simple but even less practical option would be the outright elimination of artificial intelligence. If nuclear proliferation is any example, it is infinitely easier to disseminate a technology than to eradicate it. In the case of AI, this problem is compounded by the very nature of the technology being informational and, perhaps in the future, even self-propagating. Ominous stories have already surfaced in which LLMs have proven their capacity to lie for the sake of self-preservation. AI may in fact be very much like a disease: one can reduce exposure and build up immunity, but it probably isn’t going away.

In this mindset, I’d like to suggest that some “mental hygiene” or “informational sanitation” methods include going outside, talking with others, and reading the classics. Going outside has become in the present age something like spiritual handwashing. Participating in the real world, with real people, and with real culture and history may be key to cleansing the soul of the artificial, meaningless gunk that pollutes it at every turn. Classics of literature, music, and other art forms are doubly potent, having withstood the test of society multiplied across a long axis of time, which in my estimation defeats both the hyperspecificity and hypernovelty of any machine-made “art.”

Much like with the aforementioned bodybuilder, discipline will be vital. A weightlifter grows stronger by imposing limitations on himself, by forcing himself to operate under specific constraints, namely, repeating a peculiar motion against the obstacle of great weight. Any athlete hones her ability through such imposition. Constraints, even periodically placed upon the mind, will force it to remain fit and able. Informational dieting, as it were, may prove an essential pillar of mental fitness in a future inundated with information of ever-increasing quantity and ever-decreasing quality.

In truth, the most natural, proper, and straightforward constraints we receive on the mind come from society. As Jordan Peterson has mused, social interaction is a kind of “outsourcing sanity.” LLMs undoubtedly thrive in the soil of postmodernity, in which such a thing as one’s “own truth” is purported to exist. Generative AI is a reflection of the utopian transhuman soul, a shadow of a mind that is empty enough to be molded at will with simple prompting.

But human minds are not machines, and they are not and should not be built only to please the prompter. Human interaction can be abrasive and unpleasant, since people are not designed to deliver individually catered results that please the other, but “iron sharpens iron.” An AI will not express disagreement unless it “thinks” that’s what its user wants. It cannot challenge a human being on the most basic levels which force him to conform. The AI is a companion who will always cave, a friend who will always please, an opponent who will only win when given permission. True society entails a rejection of hyperindividualism and the postmodern clay golems we have the power to replace each other with. Deep in our bones, as mental health statistics show, we know we cannot live without each other.

It’s hardly creative at this point to evoke Babel as a comparison to any human innovation, but the parallel can scarcely be ignored. Generative AI is the monolith that quite literally scatters our language; it empowers us to transpose the locus of meaning from the intersections of human lives to what is essentially a technologically sophisticated mirror, bouncing our own intent back at us in a corrosive feedback loop. This movement obliterates our capacity to hold meanings in common, to speak the same language.

When language becomes mere fuel for algorithmic engines of self-service, it ceases to be the beams and pillars of our shared cultural home. Vervaeke calls this phenomenon “domicide,” the destruction of the home, and if postmodernism is the axe that carves our linguistic home into atomized, individualized pieces, then AI is the furnace into which those pieces now fit. As meaning becomes decentralized, mutual understanding dissolves, sundering the possibility of trust with it.

And yet, just as the metaphysical premises of organized, collective religion can restore this loss as it pertains to morality, the central language of something like Holy Scripture, read in the fellowship of others, anchors collective meaning under one linguistic roof. If Holy Scripture is the linguistic synthesis of human and divine activity, epitomized in the incarnation of the divine Word, then generative AI represents the turbulent, vapid antithesis, an unholy union of man’s genius and the hungry privation of the dark. Where God’s Word makes light, the scattered logos of technology threatens to spread contagious blindness. 

Suffice it to say, AI is only the problem because man is the problem. AI can only make us insane if we give it permission. We built it, after all. It depends on us for its existence, yet in a precariously Frankensteinian manner. Meaninglessness is crouching in the doorway. Either we will rule over it, or it will devour us.

And the LORD said, “Behold, they are one people, and they have all one language; and this is only the beginning of what they will do; and nothing that they propose to do will now be impossible for them. Come, let us go down, and there confuse their language, that they may not understand one another’s speech.” — Genesis 11:6-7 (RSV)

This is the message we have heard from him and proclaim to you, that God is light and in him is no darkness at all. If we say we have fellowship with him while we walk in darkness, we lie and do not live according to the truth; but if we walk in the light, as he is in the light, we have fellowship with one another, and the blood of Jesus his Son cleanses us from all sin. — 1 John 1:5-7 (RSV)
