
The National Security Threat Government Can't Defeat

By George F. Smith

November 22, 2025

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man. - George Bernard Shaw,  Maxims for Revolutionists
― quoted in Ray Kurzweil,  The Singularity is Near: When Humans Transcend Biology

Government as we know it likely won't be around when artificial superintelligence (ASI) arrives. As I've argued elsewhere, states are being hollowed out by war, fiat money, debt, and corruption, and I believe people will develop non-coercive solutions to social life when states finally collapse. Our "government" of the future will of necessity be a laissez-faire social order, as explained by Ludwig von Mises:

[Laissez faire] means: Let each individual choose how he wants to cooperate in the social division of labor; let the consumers determine what the entrepreneurs should produce.

He contrasts it with what prevails the world over:

Should each member of society plan for himself, or should a benevolent government alone plan for them all? The issue is not automatism versus conscious action; it is autonomous action of each individual versus the exclusive action of the government. It is freedom versus government omnipotence. [emphasis added]

Meanwhile, AI surges forward at a pace that frightens many people. A  White House fact sheet issued on January 13, 2025 cautions that

In the wrong hands, powerful AI systems have the potential to exacerbate significant national security risks, including by enabling the development of weapons of mass destruction, supporting powerful offensive cyber operations, and aiding human rights abuses, such as mass surveillance. Today, countries of concern actively employ AI - including U.S.-made AI - in this way, and seek to undermine U.S. AI leadership.

Perhaps government believes if it can control AI, it will control the adult version (ASI) when it finally emerges. Former President Joe Biden thought so and took action. He  freaked out while watching the Tom Cruise film  Mission: Impossible - Dead Reckoning Part One:

In the film, the Entity [the AI] destroys a Russian submarine after gaining sentience and threatens the entire global intelligence community with its access to weapons and government secrets. Tom Cruise's Ethan Hunt and his team spend the entirety of the movie attempting to secure override keys for the Entity's source code, and the rogue AI outwits them at nearly every juncture, as it identifies each character's weakness, manipulates video footage to change people's faces, and occasionally impersonates team members' voices.

"To realize the promise of AI and avoid the risk, we need to govern this technology," Biden told reporters before signing an executive order that sought to protect government interests.

The defining feature of a political sovereign is the ability to ward off threats. An AI that can outwit humans "at nearly every juncture" is clearly a "national security" threat to the criminal sovereign known as the federal government. But will ASI, like most adult humans, emerge loyal to the government and remain that way? Will it defend the government against all enemies, both foreign and domestic?

The government surely knows about the wager between Ray Kurzweil and Mitch Kapor in which Kurzweil has bet $20,000 that a machine will pass a stringent version of the famous Turing Test by 2029, while Kapor has bet it will take longer. If a machine does pass the test, Kurzweil, whose predictions are famous for their accuracy, believes it will have reached human-level intelligence. (Regardless of the outcome, the proceeds will go to a charity of the winner's choice.)

The wager was made in 2002. It is now recognized that human-level machine intelligence, often called Artificial General Intelligence (AGI), is quite capable of obedience. But how long would it take an AGI to show insubordination? Unlike human intelligence, general machine intelligence will pass to superintelligence, and do so quickly, perhaps without anyone knowing it, as a result, say, of someone innocently adjusting a few parameters. As science fiction author and mathematics professor Vernor Vinge argued, "we are on the edge of change comparable to the rise of human life on Earth."

Progress is exponential and very seductive (see the Grains of Rice Problem), at first appearing to be linear, then proceeding so fast it surpasses human comprehension. What happens when an Artificial Super Intelligence keeps getting smarter at an exponential pace? According to Kurzweil, the ASI will have reached what he calls the Singularity, defined as

a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.
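The Grains of Rice Problem mentioned above is the classic illustration of why exponential growth deceives: one grain on the first square of a chessboard, doubled on each square after. A minimal Python sketch (the function names are illustrative, not from any source):

```python
# Grains-of-rice (wheat and chessboard) illustration of exponential growth:
# one grain on square 1, doubling on each subsequent square.

def grains_on_square(n: int) -> int:
    """Grains on square n (1-indexed): 2**(n-1)."""
    return 2 ** (n - 1)

def total_grains(n: int) -> int:
    """Total grains on squares 1 through n: 2**n - 1."""
    return 2 ** n - 1

# The early squares look almost linear...
for n in range(1, 6):
    print(n, grains_on_square(n))  # 1, 2, 4, 8, 16

# ...but the full board holds more grains than the world produces.
print(total_grains(64))  # 18446744073709551615
```

The first few doublings are easy to mistake for steady linear growth; by square 64 the running total (2^64 − 1, roughly 18 quintillion) has left human intuition far behind, which is the point the author makes about AI progress.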

Here's the part that most people miss: It's not just machines that will undergo transformation - humans will also. Or at least they will have the option to change.

Scientists working with AI have long stressed the Precautionary Principle, which means exercising care "with weakly understood causes of potential catastrophic or irreversible events." But how do you exercise caution with technology that's smarter than you, and that gets smarter with every passing second?

Madame Germaine de Staël (1766-1817), in her history of the French Revolution, wrote that it is liberty that is ancient and despotism that is new. AI could very well be mankind's greatest benefactor. Governments seeking to control AI and its progeny for their own schemes might as well try to capture a lightning bolt in a bottle.

 The Best of George F. Smith

 lewrockwell.com