
When Isaac Newton was stupid, and what it means for AI

A year after writing his famous note to Robert Hooke stating that “if I have seen further, it is by standing on the shoulders of giants,” Isaac Newton sent a far more secretive, and far more surprising, letter. Its recipient, Henry Oldenburg, was in 1676 the secretary of the Royal Society, the powerful scientific academy, and as such had nontrivial power to encourage research. But in this case, Newton wanted to discourage a particular line of research and publication. Specifically, he wanted fellow scientist Robert Boyle to stop writing about the “sophic mercury” that grew hot when mixed with gold. Newton’s concern was that the discovery could hasten public awareness of the famous “philosopher’s stone,” which was said to have the power to turn lead into gold. Discovering the philosopher’s stone was the chief goal of the pseudoscience known both then and now as alchemy.

Perhaps you’re surprised to find that Newton, widely considered the greatest and most important scientist to draw breath, didn’t just laugh at alchemy. Turning lead into gold? Really? Everybody knows that’s a joke. In this case, however, you’re the one standing on the shoulders of scientific giants who succeeded Newton. In the 17th century, alchemy was considered as legitimate a field of inquiry as any other. Newton wrote over a million words on the topic, and although he did not disclose his interest to the public during his lifetime, his discretion was due not to shame but rather to concern that he might be abducted and forced to perform alchemy for some wayward aristocrat or warlord.


In retrospect, Newton’s alchemical obsession is a genuine tragedy. Who knows what else he might have discovered in pure science had he not wasted those thousands of hours? Yet he was far from alone. Nearly every great thinker in human history before the Victorian era, and a few afterward, had a genuine interest in alchemy, though there was never a shred of legitimate evidence for it. 

What will be the alchemy of the 21st century, pursued broadly by the best and brightest among us for little reason other than hope and irrationality? It’s almost certainly artificial intelligence in general and the fabled “strong AI” in particular. An astounding amount of money and effort is being thrown at neural networks, which can be “trained” to recognize and interpret patterns in text and images, and at “large language models,” which simulate human conversation by guessing, one word at a time, whichever word is most likely to come next, with a bit of randomness thrown into the mix.
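In spirit, that word-by-word guessing game is simple enough to sketch in a few lines of Python. Everything below is invented for illustration: the toy vocabulary and hand-picked probabilities stand in for what a real model learns from mountains of text, and no actual product works from a lookup table, but the mechanism of “most likely next word, plus randomness” is the same.

```python
# A minimal sketch of the mechanism described above: pick each next word from a
# probability table, with a dash of randomness. The vocabulary and probabilities
# are invented for illustration; real large language models use neural networks
# trained on enormous text corpora, not a hand-written table.
import random

# Hypothetical next-word probabilities, standing in for a trained model.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "alchemist": 0.2},
    "cat": {"sat": 0.6, "purred": 0.4},
    "dog": {"sat": 0.5, "barked": 0.5},
    "alchemist": {"experimented": 0.8, "sat": 0.2},
}

def sample_next(word: str, temperature: float = 1.0) -> str:
    """Pick a likely next word; higher temperature means more randomness."""
    candidates = next_word_probs.get(word, {"the": 1.0})
    # Sharpen or flatten the distribution, then draw one word at random from it.
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates.keys()), weights=weights, k=1)[0]

def generate(start: str, length: int = 6, temperature: float = 0.8) -> str:
    """String together most-likely-plus-random next words, one at a time."""
    words = [start]
    for _ in range(length):
        words.append(sample_next(words[-1], temperature))
    return " ".join(words)

print(generate("the"))  # e.g. "the alchemist experimented the cat sat the"
```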

Put those technologies together, and you get ChatGPT, the idiot-savant program that can write rhyming poems on the fly or craft a quick-and-dirty college paper the night before it’s due. Similar programs are out there “making art” from descriptions or creating realistic-looking pictures of former President Donald Trump on Jeffrey Epstein’s plane to be widely and immediately distributed on social media. (One must ignore the fact that Trump has 2 1/2 legs or that the airplane has an engine pointing backward, as the programs aren’t that good yet.) To generations raised on stories of killer robots and super-intelligent computers, these programs have the look and feel of genuine artificial intelligence — but they are no closer to science fiction’s humanoid robots than an old Casio keyboard is to Mozart himself. 

Our media crackle with the white heat of prominent intellectuals warning about the dangers of AI, or fearlessly predicting its future omnipresence, or both. Genuinely intelligent people who should know better write about and discuss these feeble simulacra of literature and art as if they were able to think and feel. The former “top safety researcher” of OpenAI recently told podcasters that there is a “10% to 20% chance” that AI could “take over,” leaving many or most humans dead, thus proving beyond a shadow of a doubt that he’s not qualified to be the top safety researcher at a port-a-potty rental firm.

Even people with some background in computer science or math fall prey to delusions regarding “AI,” largely based on the fact that the math behind “neural networks” is both complex and unpredictable. This does not imply intelligence. The math behind climate science is complex and unpredictable, but nobody thinks a hurricane is a conscious being. Similarly, the fact that “AI” can diagnose cancer on X-rays better than some radiologists or write a more comprehensible poem than the average basket-weaving studies graduate at an Ivy League university doesn’t imply that there’s really a ghost in the machine. 

Even the scientists who know there’s no mind behind the monitor often pretend otherwise for the purpose of making a buck. Venture capitalists have proven to be utterly uninterested in “expert systems,” which is what we used to call programs that mimicked human behavior, but the sky’s the limit for funding “AI.” A few of these charlatans even pretend there’s a technological road map to “strong AI,” the general-purpose intelligence that can think and feel with no more training or programming than a human would require.

There isn’t, of course. For many years, it was common among respected scientists and researchers to believe that there was a “critical mass” of computing power at which a machine would magically become self-aware. If this sounds familiar, it’s because you’ve watched The Terminator. Plenty of articles were written in the ’70s and ’80s putting the level of computing power necessary for such an event at, or below, what you can get for $149 at the local prepaid-phone store nowadays. This discouraged nobody. They just kept raising the bar for The Conscious Awakening, finally putting it at the most obvious level of 100 billion neurons, which is what you’re using to read this now. The first “human brain scale” supercomputer, DeepSouth, will be online shortly. Will it turn into a person? Given that we haven’t gotten chimpanzee performance out of chimpanzee-scale computers, probably not.

The proponents of “AI” would have you believe that it’s as inevitable as the atomic bomb was, once the math was worked out. In this case, however, the math doesn’t work out and likely never will. We don’t understand the nature of consciousness much better than Newton did. Our current efforts to build AI resemble what you’d get if your cousin, who has a woodworking shop, tried to build a Tesla Model S based on whatever description you could provide of one over the phone: the only thing weaker than the set of available tools is the depth of available understanding. 

Now here’s the good, or bad, news: The useful applications of ChatGPT-like tools have not yet been exhausted. It’s easy to imagine having a software “agent” available to start all those annoying calls to insurance companies, the DMV, and the like on your behalf, which makes sense, because the DMV will also have an “agent” on the far end of the phone line. No matter how long those agents “talk,” however, they will never spontaneously generate the insights of a Descartes or Pascal. If one of them managed to put the right words together through random chance, the agent on the other end of the phone wouldn’t understand them.


Chances are we’ll also learn some new computing techniques along the way, just as many alchemists eventually became plain old chemists. (In one famous case, a German alchemist independently discovered the secret of Chinese translucent porcelain.) One sad irony is that “AI” can already make passable “art.” But it can’t sort recycling or take trash out of the ocean, due to “Moravec’s paradox,” the observation that reasoning and pseudo-imagination take far less computational horsepower than using “hands” or “feet” the way a human being does. It turns out that we can replace professors a lot more easily than we can replace pipefitters or janitors. Draw whatever moral lesson from that you like.

In pursuit of your attention and pocketbook, the AI alchemists out there like to use acronyms such as “NLP” and “AGI.” It all sounds very convincing. But there, too, Newton had them beat, referring in his secret notebooks to “The Scepter Of Jove” and the “Green Lion.” Wouldn’t you rather invest in those? Truthfully, you’d probably be no worse off. When he lost a significant amount of money in the South Sea Company bubble of 1720, Newton was philosophical in response. “I can calculate the motion of heavenly bodies,” he mused, “but not the madness of people.” 

Jack Baruth was born in Brooklyn, New York, and lives in Ohio. He is a pro-am race car driver and a former columnist for Road and Track and Hagerty magazines who writes the Avoidable Contact Forever newsletter.
