Jon Gingerich

We knew this was coming. On October 30, President Biden issued an executive order to tame the beast that is artificial intelligence.

The landmark order, the first of its kind in U.S. history, builds on previous commitments made by more than a dozen tech companies to uphold security and safety standards when developing AI systems. It effectively amounts to the most ambitious effort by any country to forestall AI’s ability to misinform or defraud citizens, as well as its potential to disrupt the labor market.

The sweeping regulations create new standards for AI safety, security and trustworthiness, including a mandate that AI developers share their safety test results with the government and follow Department of Commerce guidance on watermarking AI-generated content. The order also protects consumer privacy with new guidelines for federal agencies to evaluate AI systems’ data-collection and sharing techniques, establishes equity guardrails to prevent AI-based civil-rights violations, directs a review of AI’s use in healthcare applications, monitors potential AI-induced labor-market disruptions and promotes innovation and competition by expanding grants for AI research.

The White House isn’t alone in its quest to rein in AI—the EU is currently finalizing its own AI regulatory rulebook. This makes sense. In the year since ChatGPT debuted, lightning-fast developments in this technology have resulted in a veritable AI gold rush that has transformed industries and elevated AI to an unforeseen role in our lives. The technology behind generative AI has improved to the point where we’re encountering freakishly real deepfakes as well as the birth of incredible text-to-image models like DALL·E, which can render visual art from simple text prompts. Given the potential impacts this heralds for society—provoking debates on everything from workforce disruptions and our inability to discern misinformation to the ownership of content and the safety of AI’s role in military operations—a regulatory framework seems to be in order.

While much of the alarmism surrounding generative AI is unearned, and our subsequent policy debates have unearthed a lot of erroneous assumptions, the fact remains that this rapidly evolving technology poses very real existential threats. We’re effectively creating a replacement for human intelligence; a sizable share of the online content we’ll see in the coming years will be generated entirely by AI. Think the news is untrustworthy now? Just wait until news site copy is supplied entirely by AI instead of human editors. Think misinformation and online radicalization are problems? Imagine what they’ll be like when algorithms curate an entire a la carte reality for each Internet user. Few of us seem to consider that we’re living in the last days of an information ecosystem where most of our content is still made by people.

I’m just not convinced that Biden’s executive order, while a step in the right direction, will solve these problems.

First, I’m curious how any government could quantify how much AI has harmed a person, institution or industry. It’s one thing if AI threatens to render an entire industry extinct, but if a chatbot gives someone bad information on, say, the capital of Illinois, how much harm is really caused? Now, consider the harm if millions are led to believe it (or worse, a more spurious claim, like one related to a pandemic). How and when do we decide AI is harming society and isn’t just creating algorithmic noise? Where do we draw the line?

I also can’t help but wonder how the government can mandate AI systems’ “trustworthiness” without at least occasionally wading into territory protected under the First Amendment. I’m not a lawyer, but I do know that the dividing line between speech and conduct is a notoriously tricky area of law to navigate, and even though I’m guessing AI falls under the rubric of commercial speech—which receives less First Amendment protection than non-commercial speech like, say, political speech—everything I’ve read suggests that the First Amendment’s application to AI-generated speech remains a murky, unsettled area of law.

Finally, while Biden’s executive order carries the force of law, these regulations could be instantly erased if he isn’t re-elected next year. One alternative would be to leave it to the tech companies to govern themselves on AI, but suffice it to say, many of us wouldn’t be comfortable empowering the corporations that control this technology to dictate how it’s used. The most logical solution would be to have AI laws codified by Congress, which seems unlikely anytime soon. (I mean, good luck getting Congress to agree on anything these days.)

Policing AI is easier said than done, especially when many of our leaders lack the expertise to understand the technology behind it. There seems to be an implied agreement that we need to get our collective hands around this issue before the technology spins out of control, yet there’s also a growing suspicion that the genie is already out of the bottle, that whatever laws we pass to govern AI will be outdated by the time the ink dries, or worse, that any meddling in the AI markets could stall innovation and effectively hand China the lead in the AI arms race.

Ideally, we’d temper our optimism about AI with a clear-eyed view of the very real challenges it presents, without neutering the technology in the process. In other words, we’re facing not only a philosophical challenge regarding how we’ll be able to trust future information, but the long-term challenge of ensuring that any efforts to regulate AI don’t kneecap innovation or cause other deleterious effects, that our greatest technological advancement in recent history isn’t felled by an all-too-human misunderstanding of it. To fail would lend credence to the idea that artificial intelligence, as powerful as it may be, is no match for natural stupidity.