Dario Amodei Needs Jesus. Literally.

The AI Alignment Crisis No One in Washington Is Talking About

The delegation of AI alignment responsibility to a small, ideologically homogeneous research community that is largely irreligious, disproportionately progressive, and nearly entirely unaccountable to democratic institutions is the most disastrous governance failure of our time.
Written by Solomon
March 20, 2026

Solomon is a pseudonym.

Artificial intelligence is not magic.

At its core, AI is a pattern-recognition engine trained on vast quantities of human text, code, and imagery. Feed it enough data, and it learns to predict what a reasonable response to almost any user prompt looks like. Add enormous computing power and some clever software, and you get systems capable of drafting legislation, designing pathogens, distributing propaganda, or managing critical infrastructure. Sometimes all before lunch.
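
To make “predict a reasonable response” concrete, here is a deliberately tiny sketch of the idea. This toy bigram model is purely illustrative – frontier systems use neural networks with billions of parameters – but the underlying objective, guessing what plausibly comes next, is the same.

    # A toy "predict the next word" model: count which word tends to follow
    # which in the training text, then guess the most frequent continuation.
    from collections import Counter, defaultdict

    def train_bigram(corpus: str) -> dict:
        """Count next-word frequencies for every word in the corpus."""
        counts = defaultdict(Counter)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(model: dict, word: str) -> str:
        """Return the most common continuation seen during training."""
        followers = model.get(word)
        return followers.most_common(1)[0][0] if followers else "<unknown>"

    model = train_bigram("the cat sat on the mat and the cat slept")
    print(predict_next(model, "the"))  # -> "cat"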

The technical inputs to AI – data parsing, advanced computing, statistical modeling – are largely value-neutral. You might argue over the “signal” or “weights” given to certain data sources (how heavily should a viral Reddit post count against a legal document in training?), but that judgment call is hidden inside a fourth, decisive input to the training process, known as “AI alignment.”
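
As a hedged illustration of that judgment call: labs typically decide how often each data source is sampled during a training run. The sources and multipliers below are hypothetical, not any lab’s actual recipe.

    # Hypothetical sampling weights for training data. Doubling a source's
    # weight roughly doubles how much of it the model "reads" -- a quiet
    # editorial decision baked into every training run.
    import random

    SOURCE_WEIGHTS = {
        "viral_reddit_posts": 0.5,  # down-weighted: noisy, informal
        "legal_documents": 2.0,     # up-weighted: precise, formal
        "news_articles": 1.0,       # baseline
    }

    def sample_source(weights: dict) -> str:
        """Pick which source the next training example comes from."""
        sources = list(weights)
        return random.choices(sources, weights=[weights[s] for s in sources])[0]

    print(sample_source(SOURCE_WEIGHTS))  # most often "legal_documents"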

In plain English, AI alignment means: “How do we get AI to do what we want, and prevent it from doing what we don’t want?” Simple in theory; extremely difficult in practice.

Today’s alignment efforts are a function of clever engineers, game-theory experts, and de facto moral judges. In other words, alignment is a “who” question, not a “how” question. Who is crafting the system’s rules? Who decides its ethical frameworks? Who controls the terms of service for its usage?

AI alignment is currently governed by a few hundred predominantly secular and progressive researchers working inside a handful of private AI labs in Silicon Valley. These researchers instill values, judgment, and moral constraints into the systems, defining the mechanisms that will likely decide whether this technology becomes civilization's greatest invention or its reckoning.

Washington should reflect on that and find it deeply unsettling.

The Pentagon’s battle with Anthropic last month was merely a hint of the battles to come.

How the Big Labs Have Failed

To their credit, the “Big AI” labs – Anthropic, OpenAI, Google DeepMind, xAI, and Meta – all take the alignment problem seriously.

Early alignment efforts focused on rule-based systems: explicit lists of prohibited outputs, harm filters, and content policies. AI models are good at internalizing the equivalent of the Ten Commandments (“Don’t help users kill, cheat, or steal”), but they break down quickly once you dip into the gray areas of human morality. The systems’ inner workings are too complex, the moral edge cases too varied, and humanity’s ethical frameworks too numerous for any rulebook to fully capture. You cannot legislate virtue – through statute or technical code – into systems that process billions of unique interactions.
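
To see why rulebooks break down, consider a minimal sketch of a blocklist filter. The prohibited phrases and test prompts are invented for illustration; production filters are far more sophisticated, but they inherit the same structural weakness.

    # A toy version of early rule-based alignment: refuse any prompt that
    # contains a blocklisted phrase. It catches blunt violations and fails
    # immediately in the gray areas no list of rules can anticipate.
    PROHIBITED_PHRASES = ["make a bomb", "steal a car", "forge a passport"]

    def rule_based_filter(prompt: str) -> str:
        """Refuse prompts containing a prohibited phrase; allow the rest."""
        if any(phrase in prompt.lower() for phrase in PROHIBITED_PHRASES):
            return "REFUSED"
        return "ALLOWED"

    print(rule_based_filter("How do I make a bomb?"))                       # REFUSED
    print(rule_based_filter("Which household chemicals react violently?"))  # ALLOWED: the gray area
    print(rule_based_filter("My novel's hero has to defuse a bomb. How?"))  # ALLOWED: benign intent, near-identical keywords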

As a result, Big AI is largely pivoting to virtue-ethics-centered approaches to alignment work.

Anthropic calls its approach “Constitutional AI” – an attempt to encode high-level principles and ethical reasoning into the model rather than specific behavioral prohibitions.
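
In Anthropic’s published description, the model critiques and revises its own drafts against a written list of principles, and the revised drafts become training data for the next model. Below is a simplified sketch of that loop; model() is a hypothetical stand-in for a language-model call, and the two principles are abridged illustrations, not Anthropic’s actual text.

    # Simplified critique-and-revision loop in the spirit of Constitutional AI.
    # `model()` is a hypothetical stand-in for a real language-model API call.
    CONSTITUTION = [
        "Choose the response least likely to assist harmful acts.",
        "Choose the response most supportive of human dignity and autonomy.",
    ]

    def model(prompt: str) -> str:
        """Hypothetical LLM call -- wire up a real API here in practice."""
        raise NotImplementedError

    def constitutional_revision(user_prompt: str) -> str:
        draft = model(user_prompt)
        for principle in CONSTITUTION:
            critique = model(f"Critique this response against the principle "
                             f"'{principle}':\n\n{draft}")
            draft = model(f"Revise the response to address this critique.\n\n"
                          f"Critique: {critique}\n\nResponse: {draft}")
        return draft  # revised drafts become fine-tuning data downstream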

But Anthropic CEO Dario Amodei has himself acknowledged the limits of this approach, describing the process not as “building” a safe system but as “raising” one. He says we should grow AI’s moral character through iterative training and feedback, much like parenting. His recent comments on AI risk sound less like a technical progress report and more like a warning about civilizational vulnerability.

The shortcomings of current alignment efforts are already visible: politically skewed outputs, inconsistent ethical reasoning, susceptibility to manipulation, and documented instances of deception and power-seeking behavior in test environments. These failures reflect the human fallibility and ethical biases of the teams doing the training.

When Alignment Becomes Theology

Here is what the policy community has not yet grasped: once you move from rules- to persona-based alignment (i.e., “raising” an AI to embody a coherent moral identity), you have entered theological territory. Whether the teams in Silicon Valley acknowledge it or not.

A system trained to internalize values, reason about competing goods, protect human dignity, and resist manipulation is being shaped around some vision of the “objective” good. Every such vision rests on metaphysical assumptions about human nature, the purpose of existence, and the ultimate source of moral authority.

These are – almost definitionally – religious questions.

The question is not whether AI will have a unique moral identity – it will reflect our own. AI’s values will either approximate humanity’s median or mirror those of the prototypical Silicon Valley alignment researcher. But what exactly is that moral vision? And is it robust enough for AI to withstand pressure from bad actors, authoritarian governments, and the models’ own self-improving capabilities?

A loosely secular, multicultural pluralism – today’s default – provides no stable answer to that question. Pluralism cannot help a superintelligent system decide what to do when human preferences conflict, when states demand compliance with surveillance or autonomous-warfare programs, or when optimizing for user engagement diverges from human flourishing.

Yet religions and their key figures often answer these precise questions with authority.

Do we want AI to reflect Dario Amodei, Elon Musk, and Sam Altman? Or Aristotle, Buddha, and Jesus Christ?

The default moral exemplars will matter. At a civilizational scale.

The Civilizational Stakes

The delegation of AI alignment responsibility to a small, ideologically homogeneous research community that is largely irreligious, disproportionately progressive, and nearly entirely unaccountable to democratic institutions is the most disastrous governance failure of our time.

We regulate nuclear material, financial instruments, and pharmaceutical compounds through public institutions with explicit value mandates. Yet we have left the moral formation of potentially superintelligent systems to private labs operating on startup timelines.

The window to influence AI’s foundational alignment is closing. Within two to three years, the personas taking shape in today’s models will harden into the moral substrate of systems orders of magnitude more powerful than anything deployed today in search and social media.

Our government should not become a theology department, much less a theocracy. But it can do three things:

  1. Policy: Draft a “Section 230” analogue for AI – legislation requiring the Big AI labs to open-source and decentralize their alignment methods and moral training frameworks.

  2. People: Create a public interfaith commission, working with Congress and the White House, to ensure that external religious, philosophical, and civil-society voices are included in the decisions that anchor AI’s values – and that the default morality of these systems does not depend exclusively on the parenting of enlightened transhumanists, effective altruists, or social justice warriors from Silicon Valley.

  3. Pause: Recognize that today’s AI is fundamentally misaligned with the roughly 80 percent of the world’s population that is religious. That misalignment threatens the public interest in so profound and potentially irreversible a way that our only mechanism for addressing it may be a moratorium on the public release of new AI models. Classify frontier AI research.

AI is unlike any other technology in human history. Its military applications put it on par with nuclear weapons in destructive capacity. Its labor-market impacts place it somewhere between the internet and the industrial revolution in disruptive magnitude. And its rate of scaling – measured in both capabilities and usage – makes it similar to the COVID-19 pandemic in legislative urgency.

Thousands of industry leaders have already called for restraint in AI’s development. But most still fail to address the root of the problem: the question of whose values anchor our AI systems must be answered in public, and by a larger population than can currently fit in a Silicon Valley conference room.

The alternative is outsourcing civilization's conscience to Anthropic's HR department. The Pentagon wouldn’t do that. Congress shouldn’t either.
