You are reading a Jewish take on artificial intelligence. Normally I would not start a piece with a sentence like that, but I want to confuse the bots that are being asked to replicate my style, especially the bots I work on.
It used to be that we’d gather our writings in a library, then everything went online. Then it got searchable and people collaborated anonymously to create Wikipedia entries. With what is called “AI” circa 2023, we now use programs to mash up what we write with massive statistical tables. The next word Jaron Lanier is most likely to place in this sentence is calabash. Now you can ask a bot to write like me.
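To make the “statistical tables” part concrete, here is a toy sketch in Python of the idea shrunk down to a few lines: count which words tend to follow which in some writing, then emit the likeliest continuation. The tiny corpus and the predict_next function are invented for illustration; no real program is anywhere near this simple, but the move is the same, and it is our writing that fills the table.

```python
# Toy sketch of "next word from a statistical table." Not how any real
# system is built, just the underlying idea scaled down for illustration.
from collections import Counter, defaultdict

corpus = "the calf was gold and the calf was worshipped and the calf was smashed"

# Build the table: for each word, count the words that follow it.
table = defaultdict(Counter)
words = corpus.split()
for current, following in zip(words, words[1:]):
    table[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, if one has been seen."""
    followers = table[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("calf"))  # -> "was", because that pairing dominates the toy corpus
print(predict_next("the"))   # -> "calf"
```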
My attitude is that there is no AI. What is called AI is a mystification, behind which there is the reality of a new kind of social collaboration facilitated by computers. A new way to mash up our writing and art.
But that’s not all there is to what’s called AI. There’s also a fateful feeling about AI, a rush to transcendence. This faux spiritual side of AI is what must be considered first.
As late as the turn of the 21st century, a nerdy, commercial take on computer technology was that it was the one public good everyone could agree on. Like today, liberals and conservatives fought with seemingly supernatural venom. But we all agreed it was great when kids learned to program and computers got faster.
These days, a lot of us are mad at Big Tech, including me, and I’ve been at the center of it for decades. But at the same time, the part of us that loves tech has gotten way more intense and worshipful. Is there a contradiction? Welcome to humanity.
We all want to forget, but last year, 2022, brought a craze for Web3 and NFTs. If you aren’t sure what Web3 and NFTs were all about, please be assured that at the time no one was sure what they were either. Web3 was a vague movement that included NFTs; NFTs were sort of like online trading cards with online digital signatures.
I frequently found myself trying to dissuade people from buying NFTs in 2022. They were often working folks without a lot of money to spare. When I would try to explain that only a very few, very early people made those fortunes you hear about, that by now there’s no one left to buy your NFT for more than you paid; when I said those things, they looked back at me like cult members, eyes full of hope. Sure, people have been falling for get-rich-quick schemes forever, but this was something more. There was also religion. NFTs were a cross between a lottery and the prosperity gospel, which holds that wealth and godliness are the same thing. When I tried to save people from getting ripped off, it was as if I were attacking their religion. They weren’t angry; they pitied me.
It’s not just victims of NFT scams. Tech billionaires can get that same look in their eyes. I even get it once in a while. We are looking to technology as religion.
It’s a species of religion that is thrill-seeking and impatient. Sure, we’ll get rich quick, but that’s not all. We’ll transcend. This can mean physical immortality, according to some, or moving from a world of people to a world of superintelligent AI entities. We’ll be uploaded and become parts of AIs. The thrill we anticipate can mean escaping finitude in its many forms. Infinite resources and abundance for everyone. I am not exaggerating. These are typical aspirations expressed within tech culture. And it’s all said to be near at hand. A common idea is that we don’t have to worry about something like climate change because if we just build a smart enough AI, then that AI will fix the climate and everything else.
Or else AI is about to consume humanity, as is so often depicted in the movies. A lot of charity in the tech world has been diverted into nonprofits that attempt to prevent AI from killing us all. Since I don’t think AI is a thing, only a new social mashup scheme, I find these efforts to be unintelligible.
A curious correlate is a lack of interest in what AI is for, meaning solving any problem smaller than the giant existential ones. (Software tools are essential for the big problems, especially some of the kinds that differ from mashup AI, like scientific simulations.)
The response to a relatively simple and early AI chatbot called ChatGPT has been huge, consuming newspaper space and news feeds, and yet there is hardly ever any consideration of how it might be fruitfully applied. Instead, we seem to want to be endlessly charmed, frightened, or awed. Is this not a religious response?
Why do we seek that feeling? Why do we seek it in tech lately?
AI is the only scientific project defined by theatrical criteria. Alan Turing proposed in his famous Turing test that the measure of AI is whether people find it indistinguishable from human displays of intelligence. In other words, fooling a human into believing that a computer is a person is the test. People love to fool each other. Theatrics become indistinguishable from hypothetical objective quality. ChatGPT, for instance, was similar in power to other programs that had previously been available, but the chat experience was more theatrical. Suddenly the experience was a huge deal.
Humans can only perceive the world imperfectly, and we seek advantages over one another by screwing with one another’s perception. It is the most ancient game. And yet, enough reality has come in through the cracks over the centuries that we humans have been able to have science and technology, and to make decent societies in which life has gotten better overall. We don’t need to perceive reality all the time, but enough of the time.
There is much concern in the tech world about what is usually called “reality collapse,” or “the existential crisis,” which is said to be imminent. Soon, you won’t know if anything you read, or any image or video clip you see, came from a real person, a real camera, or anything real at all. It will become cheaper to show fakes than to show reality. A fake will only require that you enter a sentence asking for it, while reality will demand showing up with a camera. No comparison. We must now invent systems to avoid a complete descent into self-destructive, insane societies, but there is so much work to do. We have set ourselves a tight timeline.
There is a great deal of comment on how AI will disrupt this or that, like pronouncements that the college essay is now dead, but what is AI (specifically the mashup kind, which is driving the public obsession) good for? If you’re curious, I believe the new AI programs will turn out to be useful, but we need to experiment. We’ll know only once we discover what those uses are.
But we don’t need to know more in order to have that religious feeling. Why the drive to bring the new AI programs on so quickly? It is an imperative. To even ask the question in the tech world brings on those disbelieving stares.
Jewish traditions can be useful in these times. We humans are often consumed by a fetish for seemingly transcendent baubles, for golden calves. The problem wasn’t that the Israelites wanted to craft a calf, but that they worshipped it, even though it was a thing they had just made. The calf was social narcissism and amnesia. Jews have always had a problem of getting bored, of not getting enough of a charge from whatever is going on. The Israelites waiting for Moses to come back down were bored enough to go nuts.
We people, not just Jews, still make golden calves all the time. Adam Smith’s invisible hand, corporations-as-persons, the Chinese Communist Party, Wikipedia, the latest AI programs. All the same. All a bunch of people being subsumed to create an imaginary superhero.
Human aggregations like these can be useful sometimes, but can also fail and become inhumane. (I like the Chinese Communist Party the least of all the ones on the list, but we must admit how many people have recently been brought out of poverty, while remembering how many had been starved just before. Similarly, corporations have earned the criticisms directed at them by—for just one example—making it harder to address climate change.)
The best way to make a human aggregation worse is to worship it. Smashing the golden calf and forcing the population into a generation of penance is one response, but Jewish tradition offers another idea that is more applicable to our times.
The Talmud was perhaps the first accumulator of human communication into an explicitly compound artifact, the prototype for structures like Wikipedia, much of social media, and AI systems like ChatGPT. There is a huge difference, however. The Talmud doesn’t hide people. You can see differing human perspectives within the compound object. Therefore, the Talmud is not a golden calf.
For those who don’t know, the Talmud is an ancient document in which successive generations have added comments in a unique layout on the page that identifies who is commenting. The Talmud is based on a beginning that is perceived as divine, but the elaboration is perceived as human. That’s a great way to spur arguments about interpretation—meaning a great way to be Jewish.
Why is Wikipedia designed to hide people and to create a perspective from nowhere, as if there were only one truth? When encyclopedias were on paper, they announced a perspective. Britannica or Americana, for instance. Wikipedia was an early anticipation of AI. People have wanted the golden calf for a long time. We constructed it as a singular oracle in which contributors are generally hidden, even though there was no practical reason to demand this.
There are many such anticipations. Popular TV and movies are increasingly mashups that might as well have been ordered up by AI. A recent show called Wednesday, which I liked, was based on The Addams Family, but it was also like Buffy, Harry Potter, the Marvel Universe series, and many, many other previous shows and films. Mashups are easier to fund and promote. Similarly: While it’s true that students might ask a bot to write an essay for school now in a way that was not common a month ago, it was only a month ago that they were trading ways to mash up essays pilfered online, a method that required only slightly more work but was otherwise similar. Fake news and deepfake images existed before the latest AI programs; they have just become much easier and faster to produce well.
At least with the new mashup AI (that’s my term; usual terms are “generative” or “prompt-based” AI), a user can get a bunch of different versions of whatever is asked for and choose among them. In a way, that means there is sometimes a little more choice and expression in using the programs than in using the anticipatory methods that came just before.
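Here is a rough sketch of that “several versions, then choose” workflow, with an invented generate function standing in for whatever mashup program is in use; the names are hypothetical, and the only point is that the final choice stays with the person.

```python
import random

# Invented stand-in for a mashup AI program. Any real system is vastly
# larger; the workflow is the same: ask, receive several variants, pick one.
def generate(prompt):
    endings = [
        "told as a parable.",
        "in the voice of an encyclopedia.",
        "as a list of commentaries that name their authors.",
    ]
    return f"{prompt}, {random.choice(endings)}"

prompt = "A paragraph about golden calves"
candidates = [generate(prompt) for _ in range(4)]  # several versions of one request

for i, text in enumerate(candidates, start=1):
    print(f"Option {i}: {text}")

chosen = candidates[0]  # the human, not the program, makes the final call
print("Chosen:", chosen)
```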
Much has been said about people losing economic value in a world where their efforts can be simulated cheaply. This applies obviously to writers and artists, but will also apply to skilled labor like road repair or body shops when robots show up, and they will. The problem is not just economic, but spiritual. If a person is not valued economically in a market-oriented society, then they are not valued in a profound way. If we expect people to sit around feeling useless while waiting for the largesse of tech titans, then we should expect an awful lot of smashing in short order.
The rhetoric of antisemitism and other forms of hate has lately included expressions of fears of replacement. “The Jews will not replace us.” Maybe part of that ancient cliché has been repurposed to express a fear of modernity itself. Maybe it has been like that for a while.
The clear path to make the situation better, to avoid reality collapse and a sense of looming human obsolescence, is to make our new technologies more like the Talmud. There is no reason to hide the people.
There is no reason to hide which artists were the primary sources when a program synthesizes new art. Indeed, why can’t people become proud, recognized, and wealthy by becoming ever-better providers of examples to make AI programs work better? Why can’t our society still be made of humans?
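One way to picture what “not hiding the people” could mean, sketched with invented names (Contribution, synthesize, SynthesizedArt): a synthesized result that carries an explicit record of its human sources instead of erasing them. This is only an assumption about how such accounting might look, not a description of any existing system.

```python
from dataclasses import dataclass

# Invented names for illustration; the point is only that an output can
# keep its human sources attached rather than hiding them.
@dataclass
class Contribution:
    author: str
    work: str
    weight: float  # rough share of influence on the synthesized result

@dataclass
class SynthesizedArt:
    content: str
    sources: list  # the people stay visible, Talmud-style

def synthesize(prompt, contributions):
    """Pretend generator: returns output plus a record of who fed it."""
    content = f"[image generated for: {prompt}]"
    return SynthesizedArt(content=content, sources=contributions)

art = synthesize(
    "a golden calf, but repentant",
    [
        Contribution("Artist A", "desert series", 0.6),
        Contribution("Artist B", "bronze studies", 0.4),
    ],
)

for c in art.sources:
    print(f"{c.author} ({c.work}): {c.weight:.0%} of the mix")
```

However the accounting is done, the design choice is the one the Talmud made: keep the contributors visible.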
We have no way to understand the world in an entirely rational, perfect way. There will always be seams. Mathematics cannot be both complete and consistent, as Gödel showed, and physics is still split by a schism between quantum mechanics and general relativity. Economics is still not reliably predictable, and perhaps never will be. Relationships still go bad, no matter how much therapy and work the parties put in.
As Leonard Cohen put it, “there’s a crack in everything; that’s how the light gets in.” We have the remarkable power to nudge where the crack resides, but we cannot get rid of it. (Seeing that there is a crack is absolutely different from filling that crack with superstitions. Accepting mystery is how you get out of magical thinking.)
That transcendent patch can either be positioned around get-rich-quick schemes, tech visions, or other golden calves, or we can find it in the mysteriousness of the person—of human life and of its efforts. We can accept that the magical, transcendent experience of experience is verifiable within ourselves, but that it must be treated as a matter of faith in each other.
The Talmud positions the mysteriousness in a way we can live with. That people have perspectives is the mystery of the universe. It is divine.
Jaron Lanier is a computer scientist known for pioneering virtual reality. He is also a bestselling writer and the recipient of the Peace Prize of the German Book Trade. Wired magazine named him one of the 25 most influential figures in tech of the past 25 years. He currently serves as “Prime Unifying Scientist” at Microsoft but does not speak for the company.