Yash Gad and Adam Schwartzman co-authored this article.

In Arthur C. Clarke’s 1953 short story “The Nine Billion Names of God,” two engineers travel to a Tibetan lamasery to help a group of monks list every possible name of God. The monks believe this project will take them 15,000 years. With the aid of a computer, however, they expect to complete the list in 100 days.

Humanity has spent the better part of a century philosophizing about, dreaming of and fearing the impact advanced technology will have on our lives. Today, it’s clear to even the most casual observer: Generative AI has already begun to change the world. ChatGPT stuns users by instantly generating copy on almost any subject, suggesting creative solutions to real-world problems and fixing errors buried deep within complex source code. What does generative AI portend for the future of work? More to the point, will we soon wave goodbye to large portions of the workforce, their jobs forever automated into oblivion?

To answer these questions, we must first clarify terms.

The language of human cognition doesn’t apply neatly to generative AI, which neither “thinks” about problems nor “understands” subject matter. Tools like GPT string together a series of words, using a probability model to predict the next word in the sequence. Think of your phone suggesting the next word in a text message or the autocomplete function in a search-engine query.
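
To make that concrete, here is a minimal sketch of next-word prediction in Python. The words and probabilities in the table are invented for illustration, and a real model like GPT works over tokens drawn from a vocabulary of tens of thousands rather than a handful of whole words, but the basic loop is the same: consult the learned probabilities, emit the likeliest continuation, repeat.

```python
# Toy next-word table: each word maps to possible continuations and their probabilities.
# These numbers are invented; a real LLM learns them from vast amounts of text.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"quietly.": 1.0},
}

def predict_next(word):
    """Return the most probable continuation of `word`, or None if we have no data."""
    candidates = next_word_probs.get(word)
    return max(candidates, key=candidates.get) if candidates else None

sentence = ["the"]
while (next_word := predict_next(sentence[-1])) is not None:
    sentence.append(next_word)

print(" ".join(sentence))  # -> "the cat sat quietly."
```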

To improve the AI’s predictive capabilities, engineers train these systems on large datasets—think books, articles and webpages full of text—transforming them into Large Language Models. The larger the dataset, the better the LLM’s outputs. But what do we mean by better? More truthful? More accurate? In a word, neither. Training a probability model makes it more likely that the next word in any given output will fit logically with the rest of the sequence.
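
Where do those probabilities come from? In miniature, from counting which words follow which in a body of text. The bigram counting below is a drastically simplified stand-in for how an LLM is actually trained, and the tiny corpus is made up, but the principle scales: the more text the model sees, the better its estimates of what plausibly comes next.

```python
from collections import Counter, defaultdict

# A made-up "training" corpus; real LLMs are trained on enormous collections of text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    counts[current_word][following_word] += 1

# Turn raw counts into probabilities: the model's learned next-word distribution.
next_word_probs = {
    word: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
    for word, followers in counts.items()
}

print(next_word_probs["the"])  # which words tend to follow "the", and how often
```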

Similarly, the humanlike quality of textual outputs produced by generative AI doesn’t indicate that these outputs stem from humanlike intelligence, nor does it imply that humanlike judgment played a role in their construction. Although AI may produce an answer that seems “thoughtful,” this impression is utterly misleading. The effect comes not from the expression of reason but from the calibrated application of randomness (a setting GPT calls “temperature”). The more randomness is applied, the less robotic the output seems, but also the less logical it becomes.
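
The sketch below illustrates that trade-off by applying a temperature setting to a made-up next-word distribution. The candidate words and their probabilities are invented for the example, but the reshaping mirrors the real mechanism (dividing the model’s scores by the temperature before normalizing): low temperature concentrates probability on the safest continuation, high temperature spreads it toward unlikely ones.

```python
import math
import random

def apply_temperature(probs, temperature):
    """Rescale a next-word distribution; lower temperature sharpens it, higher flattens it."""
    scaled = {word: math.log(p) / temperature for word, p in probs.items()}
    total = sum(math.exp(s) for s in scaled.values())
    return {word: math.exp(s) / total for word, s in scaled.items()}

# Invented candidates for the next word in some sentence.
candidates = {"therefore": 0.70, "meanwhile": 0.25, "banana": 0.05}

for temperature in (0.2, 1.0, 2.0):
    adjusted = apply_temperature(candidates, temperature)
    choice = random.choices(list(adjusted), weights=list(adjusted.values()), k=1)[0]
    print(f"temperature={temperature}: {adjusted} -> picked {choice!r}")
```

At 0.2 the model almost always picks “therefore”; at 2.0 even “banana” gets a meaningful chance, which is why high-temperature output reads as more surprising and less coherent.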

These clarifications help us distinguish between what generative AI is and what it’s not. Generative AI isn’t a rational being capable of forming ideas and opinions. It’s a powerful technology that can summon up text, images, code and more, which it accomplishes by replacing truth with logic and eloquence with calculated randomness.

How closely related are logic and truth? Logic is a process for evaluating whether a conclusion follows from its premises, but it’s not a suitable substitute for truth. A conclusion may be perfectly logical yet false if built on faulty premises: from “all birds can fly” and “a penguin is a bird,” it follows logically, and wrongly, that penguins can fly. Tools like GPT have an innate and powerful ability to perform logical calculus, but they can’t distinguish truth from falsehood.

This point can’t be overstated: Because generative AI lacks an intrinsic understanding of truth, its outputs must be verified by an external source. Another tool may arise to fill this need for automated truth assessment—indeed, some startups are already working on this challenge—but until then, this responsibility will fall to the user. Woe to the hasty marketer who uses ChatGPT to generate a blog post and skips the fact-check, only to discover too late that the AI has built its central thesis on a demonstrable falsehood or misconception.

A wiser content creator might take a piecemeal approach, using one set of prompts to source and organize information (verified against external research, of course) and another to shape the post’s structure, substance and style. Every element of content creation, from initial brainstorming to formulating a CTA, presents its own use case for generative AI, each with a clear breakpoint at which a human must step in to validate. Especially when it comes to textual content, humans must be able to ascertain whether the generated output is logically consistent, factually accurate and written in the appropriate tone and voice.
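
One way to picture that piecemeal approach is as a pipeline in which every generated piece must clear a human checkpoint before the next step begins. The sketch below is purely illustrative: the step names and the generate() and human_approves() helpers are hypothetical stand-ins for whichever AI tool and review process a team actually uses.

```python
# Hypothetical content pipeline: each AI-generated piece must pass a human checkpoint.
STEPS = ["brainstorm topics", "outline structure", "draft body copy", "formulate CTA"]

def generate(step, context):
    # Stand-in for a call to a generative AI tool (e.g., a chat completion request).
    return f"[AI draft for: {step}]"

def human_approves(draft):
    # Stand-in for the human breakpoint: fact-check, logic check, tone and voice review.
    answer = input(f"Approve? {draft} [y/n] ")
    return answer.strip().lower() == "y"

approved = []
for step in STEPS:
    draft = generate(step, context=approved)
    while not human_approves(draft):
        draft = generate(step, context=approved)  # revise and resubmit until it passes review
    approved.append(draft)

print("Final, human-validated pieces:", approved)
```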

Today’s use cases for generative AI require a human to do more than assess the quality and truth value of outputs. At a minimum, they necessitate a human-in-the-loop system in which users play a key role in model refinement. Particularly when it comes to training LLMs for domain-specific purposes like drug discovery and financial trading, humans must guide the AI by validating or negating its predictions.
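
In practice, that guidance can be as simple as recording whether a domain expert validates or negates each prediction and feeding those judgments back to the model. The snippet below is a schematic sketch with hypothetical names (DomainModel, expert_review); it stands in for whatever domain-specific system, in drug discovery, trading or elsewhere, is actually being refined.

```python
# Schematic human-in-the-loop refinement; every name here is a hypothetical stand-in.
class DomainModel:
    def predict(self, case):
        # Stand-in for a domain-specific prediction (e.g., "likely binder" for a compound).
        return "promising"

    def update(self, feedback):
        # Stand-in for retraining or fine-tuning on human-verified examples.
        print(f"Refining model on {len(feedback)} expert-reviewed cases")

def expert_review(case, prediction):
    # Stand-in for a human expert validating (True) or negating (False) the prediction.
    answer = input(f"{case}: model says {prediction!r}. Validate? [y/n] ")
    return answer.strip().lower() == "y"

model = DomainModel()
feedback = []
for case in ["compound A", "compound B", "compound C"]:
    prediction = model.predict(case)
    feedback.append((case, prediction, expert_review(case, prediction)))

model.update(feedback)
```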

Compare generative AI to another world-changing technology, GPS. GPS can help us tremendously along our journey, but it’s worthless without a driver who sets the destination and travels the route. The driver may also choose to deviate from the assigned path; perhaps they’ll follow the GPS for a complicated leg of their journey and switch it off when cruising the highway.

So too must generative AI be leveraged by a user who sets the end goal, guides progress along the way and judges when—and how successfully—they’ve arrived at their destination. The user will determine the ways in which AI can be helpful and the ways in which it may constrain or mislead. A writer may tap ChatGPT to suggest a compelling topic or witty headline and use these outputs as a starting point, rather than prompting the AI to fully conceptualize and generate a final deliverable.

You wouldn’t ask a GPS to pick your destination for you. Nor should you look for such guidance, insight or judgment from generative AI. To risk a tautology, human progress—the impetus for generative AI and exactly what it empowers—will always come from humans.

We don’t believe tools like GPT will imminently eliminate or greatly reduce the need for a human workforce. As technology advances, job functions and responsibilities will evolve in kind. Already, a new role, the prompt engineer, has emerged to train and harness the power of LLMs. Until generative AI can verify truth and validate its own outputs, humans will continue to fulfill these vital roles.

To paraphrase Clarke’s third law, generative AI is so technologically advanced as to seem magical. Just as the computer in his short story aids the monks’ project, so too can generative AI speed us toward our goals. Both cases point to the same bottom line: technology can accelerate our progress, but it won’t replace the human purposes that drive it. Clarke’s great rival Isaac Asimov foresaw this future: “It may be that machines will do the work that makes life possible and that human beings will do all the other things that make life pleasant and worthwhile.”

***

Yash Gad, Ph.D., is CEO and Founder of Ringer Sciences and Chief Data Scientist at The Next Practices Group. Adam Schwartzman is Head of Content at The Bliss Group.