
The Cyborgs Go to Washington

Who really benefits from the fearmongering calls for AI regulation?

by Hirsh Chitkara
June 13, 2023
Sam Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023. The committee held an oversight hearing to examine AI, focusing on rules for artificial intelligence.
Win McNamee/Getty Images

Sam Altman is not exactly what you’d call “a humanist.” The CEO of OpenAI (the company behind ChatGPT) believes we are designing our own evolutionary superiors through artificial intelligence. Humans, he says, can either embrace a cyborgian future or resign themselves to extinction. To that end, Altman plans on uploading his brain to the cloud. He even put down a $10,000 deposit for an embalming procedure that, while “100 percent fatal,” will supposedly preserve his brain so that he can be resurrected via computer simulation. Only artificial intelligence will allow us to “fully understand the universe,” Altman wrote last year. “Our unaided faculties are not good enough.”

This is all to say that Altman approaches AI regulation from a rather unsavory vantage point, assuming you still have a soft spot for those bumbling, pre-cyborgian apes we call humans. And yet, appearing before the Senate Judiciary Committee in May, Altman played the role of benevolent technologist, concerned above all with helping the government promote human flourishing in the midst of technological upheaval. He reminded the senators on multiple occasions that OpenAI is a nonprofit, leaving out the part where investors can still receive a hundredfold return before the money makes its way back to the nonprofit arm. He also called for proactive regulations: It would be a “great idea” to put scorecards on AI systems, “great for the world” to set up international AI safety standards, and “very important” for governments to align AI systems with social values. The senators all either fell for Altman’s schtick or, more likely, recognized his hustle as mutually beneficial. It was a chummy affair—the cherry on top of an enormously successful lobbying campaign that saw Altman meet with more than 100 members of Congress.

OpenAI achieved something undeniably remarkable in training silicon wafers to mimic human writing and thought. At the same time, nobody knows what comes next. Will it eliminate white-collar jobs? Upend journalism? Enchant incels with girlfriend simulators? Or, as Nassim Nicholas Taleb puts it, is ChatGPT just “a powerful cliché parroting engine”? Perhaps it can be all these things at once. After all, most desk jobs are tedious, most writing on the internet is clichéd, and, to the lonely, even some small talk can be sustaining. The New Yorker may never use AI to write articles, but Axios and ESPN might, and BuzzFeed and Insider have already started. As for white-collar jobs, guesses range from labor market boom to fully automated corporate capitalism. IBM paused hiring for around 7,800 roles that the company said could be replaced by AI, but the publicity around the move suggests IBM was also trying to advertise its own AI automation services. Meanwhile, LinkedIn abounds with self-styled AI influencers preying on fears of professional obsolescence.

While it isn’t clear what Congress should do, everyone seems to agree it needs to do something. Yet our legislators have shown time and again they can’t meaningfully regulate the tech industry, which has spent over $277 million on lobbying since 2020. Silicon Valley has only grown more powerful amid perpetual promises from Democrats and Republicans alike to rein in Big Tech. Every time something gets off the ground, the Big Tech lobbying apparatus ends up either killing the bill, neutering it, or appropriating it to serve its own interests. Last year, the industry managed to do a little of all three, banding together to pass a $76 billion corporate subsidy package, block a bill targeting Big Tech’s anti-competitive practices, and promote a wolf-in-sheep’s-clothing federal privacy bill that would have preempted more comprehensive legislation in the states.

The hype cycle around AI is making matters worse. With Altman and his allies leading the dance, Silicon Valley can help write the rules of its own regulation in a way that lets both Washington and the tech industry claim they are “doing something” while further entrenching their own power. The national security state has already used the specter of AI disinformation to justify an escalation in its surveillance and censorship campaigns. Silicon Valley will be happy enough to play along, so long as it doesn’t impact bottom lines. More likely, the resultant regulations will effectively designate a handful of “responsible” purveyors of AI, guaranteeing an oligopoly for Microsoft (which owns a sizable portion of OpenAI), Amazon, and Google.

The OpenAI CEO whose company created ChatGPT believes we are designing our own evolutionary superiors through artificial intelligence.

To the disinformation brigade, generative AI represents an unprecedented threat warranting an unprecedented response. The CEO of NewsGuard called ChatGPT “the most powerful tool for spreading misinformation that has ever been on the internet.” Obama outlined the “frightening and profound” implications of AI-generated disinformation for elections, democracy, and the legal system. Name-dropping on her podcast, Kara Swisher let slip that she had dinner with “Tony” Blinken, who was “very interested” in the State Department being part of an international effort to regulate AI. The imagined scenarios coming out of this sphere read like MSNBC fan fiction circa 2017: In the hands of Russia, China, and Iran, generative AI will be used to flood social media with authoritative-sounding disinformation, manipulating Americans at an unprecedented scale on issues such as vaccine efficacy and election procedures.

Because AI makes generating disinformation ubiquitous and near-instantaneous, content moderation systems will need to be made ubiquitous and instantaneous, too—or so the thinking goes. Meta says it has been working on “new AI systems to automatically detect new variations of content that independent fact-checkers have already debunked.” Advances in AI will make for more powerful and unintelligible content moderation systems, capable of picking up on the subtleties of speech and suppressing select viewpoints in near real time. Once these systems are in place, AI disinformation campaigns can serve as boogeymen for clamping down on ideological dissent. Researchers at Georgetown, for example, found that “after seeing five short messages written by GPT-3 and selected by humans, the percentage of survey respondents opposed to sanctions on China doubled.” Their conclusion that GPT-3 is therefore a dangerous manipulation machine betrays a fundamental lack of faith in the critical thinking abilities of the general public. It also shows how an “AI manipulation campaign” can easily be conflated with espousing the wrong views.

AI makes for great cover in part because these scenarios are plausible, and to some extent already happening. In late March, the S&P 500 momentarily plunged after several prominent news aggregators fell for an AI-generated image of an explosion outside the Pentagon. Anyone can go to ChatGPT today and tell it to write an article in the style of The New York Times about Russia nuking the United States. The result is somewhat convincing, if not exactly Pulitzer-worthy. An example of the output: “The scale of devastation and loss of life is yet to be fully assessed, but early reports indicate a catastrophic impact on both infrastructure and human lives.” OpenAI has attempted to train ChatGPT to refuse nefarious prompts, but the guardrails are still surprisingly permissive: It won’t write fake articles referring to Xi Jinping or Joe Biden, but it can make up news about malfunctioning election machines and summarize the key points of vaccine skeptics (with a long preamble about “the overwhelming scientific consensus”).

AI disinformation campaigns can serve as boogeymen for clamping down on ideological dissent.

The focus on AI guardrails in the national security context belies a fundamental (and likely willful) misunderstanding of the technology. Training large language models such as ChatGPT is incredibly expensive and resource-intensive, but once the models are created, they are relatively easy to download and run on a personal computer without any of the usual restrictions. An advanced generative AI model developed by Meta has already leaked online. It’s laughable to think Russia or Iran would be hindered by guardrails on commercial chatbots. This is something the technology maximalists actually get right: The cat is out of the bag, and licensing requirements would do little to prevent AI systems from getting in the wrong hands.

The more likely outcome is that licensing will increase existing market power, allowing the most powerful tech companies to lobby for safety standards only they can pass. In a recent interview, Microsoft President Brad Smith called for a licensing system that would ensure models are only developed in “large data centers where they can be protected from cybersecurity, physical security and national security threats.” The lack of subtlety here is almost comical: Microsoft, Amazon, and Google control virtually the entire U.S. public cloud market and hold highly coveted security clearances needed to bid on Department of Defense cloud contracts.

The Pentagon can also help tech giants take out their primary foreign AI competitors in Alibaba and Baidu. It’s therefore very much in their interest to hype up the apocalyptic potential of AI, whether they believe it or not. A recent open letter signed by the likes of Altman, Bill Gates, and Microsoft CTO Kevin Scott said, in its entirety, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The RAND Corporation, a military-funded think tank, has already developed plans for preventing “bad actors” (read: China) from developing advanced AI systems. Ideas include embedding microelectronic safeguards in advanced chips to prevent anyone from developing models without U.S. permission. And while it has a horrible record on censorship and surveillance, China has already passed a set of AI safety regulations with key privacy protections we may never enjoy in the U.S.

All of this ends up feeling tiresome. We’ve seen this playbook before: Censorship disguised as safety, economic warfare in the name of national security, profiteering beneath a veneer of social welfare. What’s particularly concerning about the AI hype cycle is that—as with the best lies—it contains more than a kernel of truth. Sam Altman and his ilk are true believers, even if they are also opportunists. They know fear is a powerful tool for coercion, but they also know there is something to fear. The hype cycle around AI makes it seem as though everything will become radically different, and while that may eventually prove true, for now it just distracts us from all that will stay the same.

Hirsh Chitkara is a writer living in New York.