
The Year A.I. Ate the Internet

A little more than a year ago, the world seemed to wake up to the promise and dangers of artificial intelligence when OpenAI released ChatGPT, an application that enables users to converse with a computer in a singularly human way. Within five days, the chatbot had a million users. Within two months, it was logging a hundred million monthly users—a number that has now nearly doubled. Call this the year many of us learned to communicate, create, cheat, and collaborate with robots.

Shortly after ChatGPT came out, Google released its own chatbot, Bard; Microsoft incorporated OpenAI’s model into its Bing search engine; Meta débuted LLaMA; and Anthropic came out with Claude, a “next generation AI assistant for your tasks, no matter the scale.” Suddenly, the Internet seemed nearly animate. It wasn’t that A.I. itself was new: indeed, artificial intelligence has become such a routine part of our lives that we hardly recognize it when a Netflix algorithm recommends a film, a credit-card company automatically detects fraudulent activity, or Amazon’s Alexa delivers a summary of the morning’s news.

But, while those A.I.s work in the background, often in a scripted and brittle way, chatbots are responsive and improvisational. They are also unpredictable. When we ask for their assistance, prompting them with queries about things we don’t know, or asking them for creative help, they often generate things that did not exist before, seemingly out of thin air. Poems, literature reviews, essays, research papers, and three-act plays are delivered in plain, unmistakably human language. It’s as if the god in the machine had been made in our image. Ask ChatGPT to write a song about self-driving cars in the style of Johnny Cash and you might get a lyric like this:

Ridin’ solo, but I’m never alone,
Got my AI ridin’ shotgun, steady as a stone.
On the endless road, under the sky so vast,
A ghost driver at the wheel, footprints of the past.

Ask it to write a song about self-driving cars in the style of Giuseppe Verdi, and ChatGPT sets the scene of a tenor singing,

Ecco la carrozza senza cavallo! (Behold the carriage without a horse!)
Scivola nella notte, silenziosa come il fato. (It glides through the night, silent as fate.)
L’ingegno dell’uomo, l’orgoglio del progresso, (Man’s ingenuity, the pride of progress,)
In questa danza d’acciaio, tessiamo il nostro destino. (In this steel dance, we weave our destiny.)

Though unlikely to win many awards, at least so far, chatbots such as ChatGPT make our smart devices sound stupid. They know not only foreign languages but also coding languages; they can quickly summarize lengthy legal and financial documents; they are starting to diagnose medical conditions; they can pass the bar exam without studying. On the flip side, we may get tricked into thinking that A.I. models are actually—rather than artificially—intelligent, and that they understand the meaning and implications of the content they deliver. They do not. They are, in the words of the linguist Emily Bender and three co-authors, “stochastic parrots.” It shouldn’t be forgotten that, before A.I. could be considered intelligent, it had to swallow up a vast tranche of human intelligence. And, before we learned how to collaborate with robots, robots had to be taught how to collaborate with us.

To even begin to understand how these chatbots work, we had to master new vocabulary, from “large language models” (L.L.M.s) and “neural networks” to “natural-language processing” (N.L.P.) and “generative A.I.” By now, we know the broad strokes: chatbots gobbled up the Internet and analyzed it with a kind of machine learning that mimics the human brain; they string together words statistically, based on which words and phrases typically belong together. Still, the sheer inventiveness of artificial intelligence remains largely inscrutable, as we discover whenever chatbots “hallucinate.”
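To make that word-stringing concrete, the toy Python sketch below builds a bigram model: it counts which word follows which in a tiny corpus, then generates text by sampling continuations in proportion to those counts. It is a drastic simplification, and the corpus and function names are invented for illustration; real chatbots replace the counting with enormous neural networks trained on much of the Internet, but the underlying idea, predicting a plausible next word from statistics, is the same.

```python
import random
from collections import defaultdict

# A deliberately tiny, illustrative corpus (not real training data).
corpus = (
    "the car drives down the road . "
    "the car stops at the light . "
    "the driver starts the car ."
).split()

# Count how often each word follows each other word.
follow_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = follow_counts.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        # Sample in proportion to how often each continuation was seen.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the car stops at the light . the driver"
```

Even this miniature version hints at why such systems can surprise their makers: the output is sampled, not scripted, so the same prompt can yield different continuations each time.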

Google’s Bard, for example, invented information about the James Webb telescope. Microsoft’s Bing insisted that the singer Billie Eilish performed at the 2023 Super Bowl halftime show. “I did not comprehend that ChatGPT could fabricate cases,” said an attorney whose federal court brief was found to be full of phony citations and made-up judicial opinions supplied by ChatGPT. (The court issued a fine of five thousand dollars.) In fine print, ChatGPT acknowledges that it may not be reliable: “ChatGPT can make mistakes. Consider checking important information.” Weirdly, a recent study suggests that, in the last year, ChatGPT has grown less accurate when asked to perform certain tasks. Researchers theorize that this has something to do with the material that it’s trained on—but, since OpenAI won’t share what it is using to train its L.L.M., this is just conjecture.

The knowledge that chatbots make mistakes has not stopped high-school and college students from being some of their most avid early adopters, using chatbots to research and write their papers, complete problem sets, and write code. (During finals week, last May, a student of mine took a walk through the library and saw that just about every laptop was open to ChatGPT.) More than half of young people who responded to a recent Junior Achievement survey said that using a chatbot to help with schoolwork was, in their view, cheating. Yet nearly half said that they were likely to use it.

School administrators were no less conflicted. They couldn’t seem to decide whether chatbots were agents of deception or tools for learning. In January, David Banks, the New York City schools chancellor, banned ChatGPT; a spokesperson told the Washington Post that the chatbot “does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.” Four months later, Banks reversed the ban, calling it “knee-jerk” and fear-based, and saying that it “overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” Then there was a professor at Texas A&M who decided to use ChatGPT to root out students who cheated with ChatGPT. After the bot determined that the whole class had done so, the professor threatened to fail everyone. The problem was that ChatGPT was hallucinating. (There are other A.I. programs to catch cheaters; chatbot detection is a growth industry.) In a sense, we are all that professor, beta-testing products whose capacities we may overestimate, misconstrue, or simply not understand.

Artificial intelligence is already used to generate financial reports, ad copy, and sports news. In March, Greg Brockman, a co-founder of OpenAI and its president, predicted—cheerfully—that in the future chatbots would also help write film scripts, and rewrite scenes that viewers didn’t like. Two months later, the Writers Guild of America went on strike, demanding a contract that would protect us all from crummy A.I.-generated movies. The writers sensed that any A.I. platform able to produce credible work in many human domains could be an existential threat to creativity itself.

In September, while screenwriters were negotiating an end to their five-month strike, having persuaded the studios to swear off A.I. scripts, the Authors Guild, along with a group of prominent novelists, filed a class-action suit against OpenAI. They alleged that, when the company vacuumed up the Web, it used their copyrighted work without consent or compensation. Though the writers couldn’t know for sure that the company had appropriated their books, given OpenAI’s less-than-open policy on sharing its training data, the complaint noted that, early on, ChatGPT would respond to queries about specific books with verbatim quotations, “suggesting that the underlying LLM must have ingested these books in their entireties.” (Now the chatbot has been retrained to say, “I can’t provide verbatim excerpts from copyrighted texts.”) Some businesses now sell prompts that help users impersonate well-known writers. And a writer who can be effortlessly impersonated might not be worth very much.

 
