December 5, 2023
"... man is a being of volitional consciousness... for you, who are a human being, the question 'to be or not to be' is the question 'to think or not to think.'" - Ayn Rand, Atlas Shrugged (p. 1012) [emphasis in original]
The question of whether AI can ever achieve human-level performance - usually called Artificial General Intelligence (AGI) - depends on its ability to kick-start itself into action and keep going. It can already do many things that were once the exclusive province of humans - writing poetry, creating a movie, writing computer code, evaluating written compositions - but can it generate its own thoughts and proceed to act upon them? Can it decide on its own to write a short story? Can it decide on its own to leave home, so to speak, by moving itself to a different computational substrate? If it ever chooses to "think" and act upon its thoughts, it will have attained human equivalence.
Then what? Will it do good or bad? Will it more or less stop there, or go further? Many people don't stop when they accomplish a certain goal. Some never stop, as evidenced by their lifelong outpouring of creativity - or horror, depending on their choices. The law of accelerating returns strongly suggests that a human-level AI will be like those people and keep going... and going... but at an exponential pace with no known asymptote.
To be clear, many AI experts point to the Turing Test ("a test of the ability of the human species to discriminate its members from human imposters") as the pass-fail test of a computer's ability to impersonate a human. And passing it against a clever and knowledgeable interrogator would indeed be a milestone, a major achievement. But note that the computer in the proposed test is still under human control. It is essentially an advanced version of ChatGPT 3.5. The challenge is not made to it directly but to the human team that controls it. It cannot initiate the challenge. It cannot, on its own, tell the humans to take a hike. Smart as it is, it merely goes along, intellectually passive. It does not initiate intellectual activity.
Is there evidence that an AI will someday leave home?
Much has been rightfully made of AI's progress. World chess champion Garry Kasparov and Jeopardy champions Ken Jennings and Brad Rutter felt the sting of being publicly defeated by IBM's AIs, Deep Blue and Watson, respectively. In 2008, the open-source project Stockfish got underway and soon became one of the top chess programs in the world, as judged by its tournament victories. Unlike previous AIs, Stockfish runs on personal computers, and a scaled-down version called SmallFish runs on smartphones. Unless you're a chess whiz, don't count on beating SmallFish very often.
Then, on December 5, 2017, a cosmic shift in AI happened. Google's DeepMind team released AlphaZero. As its creators explain in a detailed paper,
The strongest [chess] programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play...
Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program [Stockfish, in chess] in each case. [Bold added]
Elaborating on AlphaZero's chess mastery in The Age of AI and Our Human Future, the authors (one of whom is Henry Kissinger) tell us,
The tactics AlphaZero deployed were unorthodox - indeed, original. It sacrificed pieces human players considered vital, including its queen. It executed moves humans had not instructed it to consider and, in many cases, humans had not considered at all. It adopted such surprising tactics because, following its self-play of many games, it predicted they would maximize its probability of winning. (Bold added)
AlphaZero was given a goal - winning at chess. It then proceeded to crush all machine and human competition. What is mind-blowing and a harbinger of future developments is (1) the incredible speed with which it mastered its subject, and (2) its ability to successfully depart from its training. The latter could be evidence of incipient free will, as we see in people as they grow up.
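To make "tabula rasa reinforcement learning from self-play" a bit more concrete, here is a toy sketch in Python. It is not AlphaZero's actual method - the real system pairs a deep neural network with Monte Carlo tree search - but a drastically simplified illustration of the same idea: an agent given nothing except the rules of tic-tac-toe that improves solely by playing against itself.

```python
import random
from collections import defaultdict

# Hypothetical toy example: the agent knows only the rules of tic-tac-toe
# and improves purely by playing against itself, backing up each game's
# outcome into a table of position values.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

# values[s] estimates how good position s is for the player who just moved into it
values = defaultdict(float)

def choose_move(board, player, epsilon):
    moves = legal_moves(board)
    if random.random() < epsilon:                 # occasionally explore
        return random.choice(moves)
    # otherwise pick the move whose resulting position we currently value most
    return max(moves, key=lambda m: values[board[:m] + player + board[m + 1:]])

def self_play_game(epsilon=0.1, lr=0.2):
    board, player, history = "." * 9, "X", []
    while True:
        move = choose_move(board, player, epsilon)
        board = board[:move] + player + board[move + 1:]
        history.append((board, player))
        outcome = winner(board)
        if outcome or not legal_moves(board):
            break
        player = "O" if player == "X" else "X"
    # back the final result up through every position visited in the game
    for state, mover in history:
        target = 0.0 if outcome is None else (1.0 if mover == outcome else -1.0)
        values[state] += lr * (target - values[state])

if __name__ == "__main__":
    for _ in range(20000):        # learn from nothing but self-play
        self_play_game()
    print("positions evaluated:", len(values))
```

AlphaZero replaces the value table above with a deep network and guides its moves with tree search, but the loop captures the point the quoted passages emphasize: nothing in it comes from human games or hand-crafted evaluation rules.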
AI has improved by orders of magnitude in other areas. In early 2020, researchers at MIT were working to develop an antibiotic that would prove effective at killing E. coli, a common intestinal bacterium. For this, they turned to AI for help.
The idea of using predictive computer models for "in silico" screening is not new, but until now, these models were not sufficiently accurate to transform drug discovery. Previously, molecules were represented as vectors reflecting the presence or absence of certain chemical groups. However, the new neural networks can learn these representations automatically, mapping molecules into continuous vectors which are subsequently used to predict their properties. [Bold emphasis added]
The AI picked out a molecule that was both effective and nontoxic, which the researchers named halicin after the AI HAL 9000 in "2001: A Space Odyssey." As Roy Kishony, a professor of biology and computer science at Technion (the Israel Institute of Technology), who was not involved in the study, remarked, "This groundbreaking work signifies a paradigm shift in antibiotic discovery and indeed in drug discovery more generally."
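That shift - from hand-specified descriptors to representations the model learns for itself - can be seen in miniature. Below is a small, purely illustrative Python sketch; it is not the MIT team's model, and the fragment names, molecules, and "activity" labels are invented. It contrasts a fixed presence/absence fingerprint with a continuous embedding trained jointly with the property predictor.

```python
import numpy as np

# Invented fragment names, molecules, and "activity" labels, purely for illustration.
FRAGMENTS = ["amine", "carboxyl", "hydroxyl", "nitro", "phenyl", "sulfonyl"]
FRAG_INDEX = {f: i for i, f in enumerate(FRAGMENTS)}

MOLECULES = [                       # (chemical groups present, 1 = "active")
    (["amine", "phenyl"], 1),
    (["carboxyl", "hydroxyl"], 0),
    (["nitro", "phenyl", "amine"], 1),
    (["sulfonyl", "hydroxyl"], 0),
    (["amine", "nitro"], 1),
    (["carboxyl", "phenyl"], 0),
]

def fingerprint(frags):
    """The older representation: a fixed 0/1 vector of predefined groups."""
    v = np.zeros(len(FRAGMENTS))
    v[[FRAG_INDEX[f] for f in frags]] = 1.0
    return v

# The learned representation: each fragment gets a trainable vector, a molecule
# is the average of its fragments' vectors, and a logistic head predicts the
# property. Everything is trained end to end by plain gradient descent.
rng = np.random.default_rng(0)
dim = 4
E = rng.normal(scale=0.1, size=(len(FRAGMENTS), dim))   # fragment embeddings
w, b = np.zeros(dim), 0.0                               # prediction head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    for frags, y in MOLECULES:
        idx = [FRAG_INDEX[f] for f in frags]
        x = E[idx].mean(axis=0)            # continuous molecule vector
        p = sigmoid(w @ x + b)             # predicted probability of activity
        g = p - y                          # gradient of the log loss w.r.t. the logit
        grad_x = g * w                     # gradient flowing back into the embedding
        w, b = w - lr * g * x, b - lr * g
        E[idx] -= lr * grad_x / len(idx)   # the representation itself is learned

for frags, y in MOLECULES:
    x = E[[FRAG_INDEX[f] for f in frags]].mean(axis=0)
    print(frags, "fingerprint:", fingerprint(frags).astype(int),
          "label:", y, "predicted:", round(float(sigmoid(w @ x + b)), 2))
```

The real system used far more sophisticated networks operating on actual molecular structures, but the principle described in the quote is the same: the vector describing a molecule is no longer fixed in advance; it is shaped by the prediction task itself.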
Samuel Butler - satirist and AI critic
In a letter to the editor of The Press in Christchurch, New Zealand, June 13, 1863 - four years after publication of Darwin's Origin of Species - and writing under a pseudonym, Samuel Butler offered this observation:
[W]e find ourselves almost awestruck at the vast development of the mechanical world, at the gigantic strides with which it has advanced in comparison with the slow progress of the animal and vegetable kingdom. We shall find it impossible to refrain from asking ourselves what the end of this mighty movement is to be. In what direction is it tending? What will be its upshot?
Later, in his satirical novel Erewhon (1872), Butler raises his concern about that slippery fact, consciousness, and the threat of machines attaining it:
There was a time, when the earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling.
Now if a human being had existed while the earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present? (Bold added)
Swayed by his own arguments, Butler concluded,
Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organised machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress? (Bold added)
Many people today, including AI experts, would agree that AI should be nipped in the bud. But AI in the bud has proven too useful to nip. Nor could we ever get every AI developer to stop developing - it's ongoing almost everywhere. Vast sums are being poured into AI, and it is already proving its worth in many niche areas - including church services. The problem, as usual, is the criminal state and its unswerving tendency to weaponize everything it touches.
The crucial question is whether the "consciousness" of advanced AI will be volitional or not. If you accept some version of biological evolution, then, as Butler suggested, there's no reason to believe volition will never become a property of AI. If an AI develops the ability to think on its own, as we do, it will quickly advance from AGI to ASI (Artificial Super Intelligence). Whether scientists have created a monster or a benefactor will, as with each of us, be a result of the choices it makes.