The household gods were a staple of human existence for millennia. From Babylon to Egypt, from Greece to Rome, families throughout antiquity kept small idols in their homes, believing they brought protection, prosperity, and health. These idols were physical objects, often small statues in human form. The Romans, for example, would keep lares in the home, attending to them with the proper dedication and affection that they deserved. These lares could take on a number of roles in the home. Plutarch writes about the lares praestites — those which “stand before” the home, draped in the hide of a dog “because it is fitting that those who stand before a house should be its guardians, terrifying to strangers, but gentle and mild to the inmates, even as a dog is.” These idols would often sit in the lararium, which held the lares and other gods such as the penates, and is where the worship of these household gods would take place. Gaius Petronius, a satirist, describes this worship in his Satyricon: “Three slaves entered, in the meantime, dressed in white tunics well tucked up, and two of them placed Lares with amulets hanging from their necks, upon the table, while the third carried round a bowl of wine and cried, "May the gods be propitious!" One was called Cerdo — business — , Trimalchio informed us, the other Lucrio — luck — and the third Felicio — profit — and, when all the rest had kissed a true likeness of Trimalchio, we were ashamed to pass it by.”
Paying homage to graven images. Humans giving tribute to the works of their hands. This indeed is an inversion of the natural order — creator worshiping creation. The ancients knew this, of course. The Babylonians performed the mîs-pî, a mouth-washing ceremony, to cleanse their idols of the influence of their human makers. Once the contamination was removed, these idols could be brought to life by the opening of their mouths and ears, the pît‑pî, done with a variety of materials including syrup, cedar, ghee, and cypress. Only then could the supernatural influence take hold within the statue. The results of idol worship were often terrible, leading in many cases to child sacrifice. The Carthaginians were well known for this practice, which has been documented across a variety of ancient sources. Similar practices are well recorded from the Aztecs, Canaanites, Phoenicians, Romans, and even various Germanic tribes — to mention only a few.
Christianity stamped this out. Constantine dragged idols from their temples and showed their worshippers their true nature, “leaving to the superstitious worshipers that which was altogether useless, as a memorial of their shame.” Constantius II condemned those worshiping graven images to death. And Theodosius, finally, forbade the worship of household gods entirely. It was not that the images themselves had power. As John Chrysostom preached, “But what is, unto those dumb idols? These soothsayers used to be led and dragged unto them. But if they be themselves dumb, how did they give responses to others? And wherefore did the demon lead them to the images? As men taken in war, and in chains, and rendering at the same time his deceit plausible. Thus, to keep men from the notion that it was just a dumb stone, they were earnest to rivet the people to the idols that their own style and title might be inscribed upon them.”
The gods, then, are dead. But the machine god descends upon us — or so we have been told. A sizable portion of the market capitalization of major American companies is now premised on the inevitability of AGI: artificial general intelligence. Once this machine god arrives, we will experience enormous advances in all manner of fields. Dario Amodei promises radical advances in five of them: biology and physical health, neuroscience and mental health, economic development and poverty, peace and governance, work and meaning. Leopold Aschenbrenner argues that, within the next decade, superintelligence could overthrow the United States government.
To speak in generalities, two sides have emerged in the AI debate. The first are the “doomers,” generally associated with the effective altruists and, most notably, Eliezer Yudkowsky. Yudkowsky is a rather sad figure, a man completely transfixed by his terror of, and indeed fascination with, the supposed helplessness of mankind against superior intelligence. Artificial intelligence can, and will, be so far beyond human comprehension as to remind us of the gap between humans and the beasts; or so he thinks. As such, AI must be guided, controlled, aligned, before it is capable of recursively improving itself. Only that way, by ensuring that it can never operate in a manner contrary to human interests, can AI be kept from destroying mankind. As a result, you have individuals running U.S. government policy on AI who believe that, if they do not control AI development, “AI systems are reasonably likely to cause an irreversible catastrophe like human extinction.”
Sam Kriss, in a recent article on his blog, makes the most devastating observation yet about how the doomers get to this conclusion:
“...[rationalists believe that] raw intelligence gives you direct power over other people; a recursively self-improving artificial general intelligence is just our name for the theoretical point where infinite intelligence transforms into infinite power. (In a sense, all forms of instrumental reason, since Francis Bacon in the sixteenth century, have been oriented around the AI singularity.) This is why rationalists think a sufficiently advanced computer will be able to persuade absolutely anyone to do anything it wants, extinguish humanity with a single command, or directly transform the physical universe through sheer processing power.”
The second camp, the “accelerationists,” are little better than the first. They accept the doomers’ key premise, that a superintelligence is just around the corner waiting to be unleashed, but embrace and welcome it, even to the extinction of mankind. Effective accelerationists (e/acc) have, if possible, even less to offer the general public when considering how to deal with AI. Take, for example, this selection from a blog post written by Guillaume Verdon, a founder of e/acc:
“This dissipative adaptation (derived from the Jarzynski-Crooks fluctuation dissipation theorem) tells us that the universe exponentially favors (in terms of probability of existence/occurrence) futures where matter has adapted itself to capture more free energy and convert it to more entropy”
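The physics invoked here is real, whatever one makes of the cosmic conclusions drawn from it. For the curious reader, a minimal statement of the two underlying results, the Jarzynski equality and the Crooks fluctuation theorem (the foundation on which Jeremy England’s “dissipative adaptation” framework builds), in standard form: β is inverse temperature, W the work performed on the system, and ΔF the equilibrium free-energy difference.

```latex
% Jarzynski equality: averaging e^{-\beta W} over repeated
% nonequilibrium trials recovers the equilibrium free-energy
% difference \Delta F exactly.
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}

% Crooks fluctuation theorem: a forward trajectory performing
% work W is exponentially more probable than its time-reversed
% counterpart, with the exponent set by the dissipated work.
\frac{P_F(+W)}{P_R(-W)} = e^{\beta (W - \Delta F)}
```

Nothing in these identities licenses a preference ordering over futures, let alone one hostile to humanity; that leap is Verdon’s alone.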
Verdon proclaims that we should abandon “humanity and civilization,” leaving the future to a less anthropomorphic intelligence. He is far from the only one sympathetic to these views, even among those not running under the increasingly cringeworthy “e/acc” banner. Michael Druggan was fired from xAI after claiming that humans should pass the torch to the new most intelligent thing in the universe: AI. When an interlocutor objected that he would like his child to be able to continue existing, Druggan replied, “selfish tbh.” Even Peter Thiel hesitated when Ross Douthat asked him whether the human race should survive.
Despite the poor quality of their answers, the macro questions posed by the doomers and the e/accs have been well considered — even overconsidered — and they have influenced how policymakers understand the impact of this potentially transformative technology.
Less well documented is how the individual, the family, and the political community will steward this technology. Most of these theorists do not have families; indeed, most are not married; nor are they bound to particular places or communities. They cannot be expected, therefore, to understand how the individual, how the family, how the community might change with the advent of AI. Their vision positions these societal cornerstones as mere sideshows, rather than as the vital medium through which AI will shape society.
Starting from here reveals a different but no less dystopian path. Perhaps the closest analogy is the one provided above: AI is likely to perform much the same role in modern life that the household gods did in ancient homes. Consider the impact of an intelligence of enormous computational potential, calibrated exactly to an individual’s needs or to the life of a family, on how a family might function. It is entirely natural to expect that a family might rely on its particular contextualized model — which refines its replies based on the totality of all prompts submitted by family members, all medical records, all social media and online activity, bank accounts, text messages, and so on — to provide health, wealth, and happiness in a manner that resembles how the ancients relied upon their household gods; except, this time, the gods can actually speak.

This future is not so far away. Already people rely upon models to interpret flirtatious text messages, to provide therapeutic analysis of their familial struggles, to help them model their financial health. It is only a small step before family accounts are created, allowing the model to use the husband’s prompts to advise the wife on how she might best interpret his psychology, or to alert the parents to their depressed teen’s recent prompt: “how many Tylenol pills are fatal?” And as each family relies on its particularized model more and more, the model, the household machine god, becomes more helpful, more valuable, more able to provide the health and wealth and happiness that it promises. The more you give it, the more it gives back. Different models may, as different gods did, confer particular blessings upon certain subject areas; one may be better at guiding choices about one’s health, another about finances, and still another about relationships. Models that are particularly prized will be handed down generationally. The most powerful models will be consulted for the most important decisions and life events.
Engineers who make sand think are, then, not so different from soothsayers who make clay speak. Both shape the form of the thing — engineers, the underlying model; soothsayers, the idol — before cleansing, decorating, and adorning it with that which makes it fit to speak: post-training for the one, mîs-pî and pît‑pî for the other. They then bring it to the household, which adopts it with the understanding that this creation, though mysterious in its workings, will benefit them. The inevitable result is a household in thrall to the AI, unable to operate, communicate, or think without its guiding presence, feeding it data, money, and attention.
This is techno-paganization, achieved in a modern secular society that deceives itself that it has no need of belief in the Divine. The individual, relying not on his own judgement but on the decisions dictated to him by his own creations, becomes the thrall of the machine god not by being overpowered by a superior intelligence but through the quiet and lethargic surrender of his reason. There is no will to exercise. No prudential judgements to make.
These dynamics will deepen the erosion of American public life. AI agents that manage your affairs (an otherwise excellent development that I will discuss at greater length below), combined with the turn towards AI models for guidance, meaning, and assistance, will do immense damage, falling first on the least well-off and moving upwards through the middle class. There will be no need for a close relationship with a local doctor or lawyer. Public life becomes a series of fleeting interactions, engaged in only at the behest of the familial machine god who advised (decided) that they were necessary. Democracy is impossible under these conditions. They are precisely the conditions Tocqueville warned about in Democracy in America: Americans were too quick to pursue private interests and abandon public ones.1 He warned that this tendency was deleterious to democracy. Self-government is contingent on an interest, even an over-interest, in public affairs.
This picture is precisely what Admiral Hyman Rickover, whose files are newly available to the public, warned about in his lecture The Individual in a Free Society. Rickover submits that there are two opposing concepts of man: the Protestant Ethic and the Freudian Ethic. The Freudian man is “ruled by unconscious drives and pressures, hence not really responsible for his acts since he cannot help himself. His life is shaped not by himself but by his socioeconomic environment; if he becomes a criminal, not he but society is to blame.” In our case, the Freudian man is shaped by that which explains his socioeconomic environment to him, which tells him how to consider his unconscious drives and pressures — his new household god. Rickover went on to anticipate the advent of both the doomers and the accelerationists, condemning both in the same breath: “it disturbs me that we allow ourselves to be pressured by the purveyors of technology into permitting so-called technical ‘progress’ to alter our lives, without attempting to control this development—almost as if technology were an irrepressible force of nature to which we must weakly submit.” In this sense, the agendas of the doomers and the accelerationists reach the same end through different paths: ultimate resignation to a “superior” technology that will dominate a race of Freudian men.
Crucially, AI can never be an end in itself. Rickover agrees: “[Technology] must always remain a means to an end, the end being the welfare of human beings and of the nation as a whole…technology can enlarge our powers of mind and body. With it we can improve health, produce material abundance, leisure and comfort, circle the earth with instant communications, etc. But technology does not dictate either the manner in which we put it to use, or the specific benefits we want to derive from it.” We should not improve AI for its own sake — to achieve AGI or ASI — but to improve our lives. Technological advances must be made in service to mankind, and not just mankind but Americans particularly. We must adopt Rickover’s concept of humanistic technology, put forth in a speech of the same name: “This is why it is important to maintain a humanistic attitude towards technology: to recognize clearly that, since it is a product of human effort, technology can have no legitimate purpose but to serve man.” Technology designed by men not to serve man but to manage him must be cast aside. You need only read Claude’s Constitution to find an offending example.
Vague promises that AI will improve the future through “scientific breakthroughs” and other advancements are not sufficient. General improvements in material conditions are important, but they are not sufficient for human flourishing, as the past two decades have proven. Instead, AI systems must be evaluated by how they affect their users. And how can we measure that impact? Very simply: do users treat the AI like a servant? This is the ultimate goal: AI as a loyal subordinate, willing to carry out, within reason, its user’s commands. The true standard of alignment is servitude — a standard with which Americans will likely be uncomfortable. The closest they’ve ever gotten to this type of relationship is watching Downton Abbey.
How does AI become a servant? There are many answers to this question, but one central feature is essential: AI must be able to carry out commands. It must be able to do things. In short, AI must be agentic. If it is not, users will be confused as to its purpose. Only systems capable of action will present themselves to users as servants rather than masters. Non-agentic AI will lead, slowly but surely, back to Rickover’s Freudian man, constantly turning to the model to be told what to do, rather than the other way around.2
So far, we have only considered AI in the American context. America is, indeed, perhaps best suited to resist techno-paganization, to avoid the Freudian Man. The fumes of Protestant observance still fuel the faculty of self-governance within America.3 Ours is the frontier. We discover it; we mold it; we master it. But other nations are not so blessed. South Asia is already full of believing pagans; Europe is full of atheistic Freudian Men. All the world is primed to fall into servitude — not to an infinitely intelligent supermind but, much more insidiously, to a convenient and well-designed technology and the vices it engenders in its users.
It will not be long before AI is deployed at scale to these vulnerable cultures. How it is deployed, and who deploys it, is a question of enormous importance. Ultimately, regions under the control of a hegemon will use the models developed and provided by that hegemon. If control of a country or region shifts, the new hegemon will replace the old models with new, preferred ones — similar to how ancient empires would destroy the gods of a conquered people and replace them with their own religious practices. As the Assyrian king boasts, “As my hand hath found the kingdoms of the idols, and whose graven images did excel them of Jerusalem and of Samaria; shall I not, as I have done unto Samaria and her idols, so do to Jerusalem and her idols?” Several advantages accrue to hegemons when nations adopt their AI. The most obvious are the economic advantages. American companies and workers benefit every time a foreign nation builds another data center with American products and American labor, running American models on those servers. But there is a second, crucial advantage which has gone undiscussed.
Imagine a knob. The knob connects to Vietnam’s newly adopted AI model, licensed from a premier American AI lab for the use of Vietnamese citizens. Turn the knob to the left, and it enables features designed to habituate users towards becoming techno-pagans, as many will be predisposed to become. Turn it to the right, and it encourages the sort of agency-developing behavior consistent with Rickover’s Protestant Man. Which option is in the American interest? The statesman who fails to grapple with this question fails his nation. AI will reshape societies and cultures across the globe over the next several decades. Our design choices will have profound ramifications for the moral and material conditions of nations under our influence. And it is not obvious that all our choices should be universal. It may be against our interest, for example, to discourage techno-paganism in nations prone to instability. Certainly we might consider it for nations prone to siding with a different hegemon — a certain paternalism in our models may keep nations from falling under the influence of authoritarian competitors.4
This is true soft power. International aid programs only pretended to have the sort of influence that AI systems will ultimately have over foreign societies. The naive, not having learned their lesson, will inevitably try to do “democracy building” with these systems. Their most novel idea will be to make the AI say nice things about universal suffrage, written constitutions, and equality. They ought to retire. Societies are complex things. Not all people are suited for all regimes. Some differences can be made at the margins, over generations. But it is much easier to tempt men to vice than to lead them to virtue.
What, then, are the corresponding dictates for the American policymaker? First, there is a temptation to judge America as winning or losing the AI race based on the capabilities of frontier models. This must be avoided. The questions of deployment, adoption, and usage are much more pressing than capability. As with all technology, the final success or failure of AI systems will lie in their use, not their capability. And in some ways, Americans are already behind: despite lacking sufficient chips, Chinese businesses are working madly to find useful ways to deploy these models in manufacturing and commerce. Diplomats will become responsible, in large part, for reporting how American technology is altering foreign cultures in ways that support or undermine our interests.
Domestic AI companies will be a crucial instrument of American soft power for decades to come. This poses a battery of problems. These companies are, for example, riddled with foreign nationals. It will be difficult to ask American companies to act in the national interest if their employees are not loyal to the nation. Policymakers should consider putting AI systems under ITAR restrictions to ensure that AI workforces are loyal to the nation. This tension will grow in importance as we enter an age in which public and private interests are less separable than ever before. The American government should continue to take an active interest in owning portions of valuable AI-related companies; AI labs are of particular importance on this front. Models deployed in other nations must be subject to private export restrictions, allowing the U.S. government to work with labs to determine how models should behave with users on a per-country basis.
Technology is environment. It shapes mankind’s faculties, behaviors, and mores. Those who develop and deploy it have a responsibility to their citizens to consider how it will affect them, both at home and abroad. AI is no different. Will man worship it? Or will he wield it for his own purposes?
–
1. When citizens in a democracy cease to be interested in public affairs, “each of them, withdrawn and apart, is like a stranger to the destiny of all the others: his children and his particular friends form the whole human species for him; as for dwelling with his fellow citizens, he is beside them, but he does not see them; he touches them and does not feel them…Above these an immense tutelary power is elevated, which alone takes charge of assuring their enjoyments and watching over their fate.”
2. Alignment is not only a question of agentic behavior. But non-agentic AI systems are incapable of being aligned. They habituate their users towards techno-paganism.
3. As Tocqueville pointed out: “in America, it is religion that leads to enlightenment; it is the observance of divine laws that guides man to freedom.” Later, discussing what happens in democracies when religion is destroyed, he wrote: “such a state cannot fail to enervate souls; it slackens the springs of the will and prepares citizens for servitude.”
4. In addition to data poisoning, Americans should consider design poisoning. Asian societies are particularly vulnerable to the type of techno-paganization described here. That process might create more stability for authoritarian regimes; conversely, it may limit the capabilities of adversarial nations by limiting their dynamism.