
The Big Questions About AI in 2024

Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say “year,” I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting. The year to come could be just as tumultuous, as the technology continues to evolve and its implications become clearer. Here are five of the most important questions about AI that might be answered in 2024.

Is the corporate drama over?

OpenAI’s Greg Brockman is the president of the world’s most celebrated AI company and the golden-retriever boyfriend of tech executives. Since last month, when Sam Altman was fired from his position as CEO and then reinstated shortly thereafter, Brockman has appeared to play a dual role—part cheerleader, part glue guy—for the company. As of this writing, he has posted no fewer than five group selfies from the OpenAI office to show how happy and nonmutinous the staffers are. (I leave it to you to judge whether, and to what degree, these smiles are forced.) He described this year’s holiday party as the company’s best ever. He keeps saying how focused, how energized, how united everyone is. Reading his posts is like going to dinner with a couple after an infidelity has been revealed: No, seriously, we’re closer than ever. Maybe it’s true. The rank and file at OpenAI are an ambitious and mission-oriented lot. They were almost unanimous in calling for Altman’s return (although some have since reportedly said that they felt pressured to do so). And they may have trauma-bonded during the whole ordeal. But will it last? And what does all of this drama mean for the company’s approach to safety in the year ahead?

An independent review of the circumstances of Altman’s ouster is ongoing, and some relationships within the company are clearly strained. Brockman has posted a picture of himself with Ilya Sutskever, OpenAI’s safety-obsessed chief scientist, adorned with a heart emoji, but Altman’s feelings toward the latter have been harder to read. In his post-return statement, Altman noted that the company was discussing how Sutskever, who had played a central role in Altman’s ouster, “can continue his work at OpenAI.” (The implication: Maybe he can’t.) If Sutskever is forced out of the company or otherwise stripped of his authority, that may change how OpenAI weighs danger against speed of progress.

Is OpenAI sitting on another breakthrough?

During a panel discussion just days before he lost his job as CEO, Altman told a tantalizing story about the current state of the company’s AI research. A couple of weeks earlier, he had been in the room when members of his technical staff had pushed “the frontier of discovery forward,” he said. Altman declined to offer more details, unless you count additional metaphors, but he did mention that only four times since the company’s founding had he witnessed an advance of such magnitude.

During the feverish weekend of speculation that followed Altman’s firing, it was natural to wonder whether this discovery had spooked OpenAI’s safety-minded board members. We do know that in the weeks preceding Altman’s firing, company researchers raised concerns about a new “Q*” algorithm. Had the AI spontaneously figured out quantum gravity? Not exactly. According to reports, it had only solved simple mathematical problems, but it may have accomplished this by reasoning from first principles. OpenAI hasn’t yet released any official information about this discovery, if it is even right to think of it as a discovery. “As you can imagine, I can’t really talk about that,” Altman told me recently when I asked him about Q*. Perhaps the company will have more to say, or show, in the new year.

Does Google have an ace in the hole?

When OpenAI released its large-language-model chatbot in November 2022, Google was caught flat-footed. The company had invented the transformer architecture that makes LLMs possible, but its engineers had clearly fallen behind. Bard, Google’s answer to ChatGPT, was second-rate.

Many expected OpenAI’s leapfrog to be temporary. Google has a war chest that is surpassed only by Apple’s and Microsoft’s, world-class computing infrastructure, and storehouses of potential training data. It also has DeepMind, a London-based AI lab that the company acquired in 2014. The lab developed the AIs that bested world champions at chess and Go and intuited protein-folding secrets that nature had previously concealed from scientists. Its researchers recently claimed that another AI they developed is suggesting novel solutions to long-standing problems of mathematical theory. Google had at first allowed DeepMind to operate relatively independently, but earlier this year, it merged the lab with Google Brain, its homegrown AI group. People expected big things.

Then months and months went by without Google so much as announcing a release date for its next-generation LLM, Gemini. The delays could be taken as a sign that the company’s culture of innovation has stagnated. Or maybe Google’s slowness is a sign of its ambition? The latter possibility seems less likely now that Gemini has finally been released and does not appear to be revolutionary. Barring a surprise breakthrough in 2024, doubts about the company—and the LLM paradigm—will continue.

Are large language models already topping out?

Some of the novelty has worn off LLM-powered software in the mold of ChatGPT. That’s partly because of our own psychology. “We adapt quite quickly,” OpenAI’s Sutskever once told me. He asked me to think about how rapidly the field has changed. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,” he said. Maybe he’s right. A decade ago, many of us dreaded our every interaction with Siri, with its halting, interruptive style. Now we have bots that converse fluidly about almost any subject, and we struggle to remain impressed.

AI researchers have told us that these tools will only get smarter; they’ve evangelized about the raw power of scale. They’ve said that as we pump more data into LLMs, fresh wonders will emerge from them, unbidden. We were told to prepare to worship a new sand god, so named because its cognition would run on silicon, which is made of melted-down sand.

ChatGPT has certainly improved since it was first released. It can talk now, and analyze images. Its answers are sharper, and its user interface feels more organic. But it’s not improving at a rate that suggests that it will morph into a deity. Altman has said that OpenAI has begun developing its GPT-5 model. That may not come out in 2024, but if it does, we should have a better sense of how much more intelligent language models can become.

How will AI affect the 2024 election?

Our political culture hasn’t yet fully sorted AI issues into neatly polarized categories. A majority of adults profess to worry about AI’s impact on their daily life, but those worries aren’t coded red or blue. That’s not to say the generative-AI moment has been entirely innocent of American politics. Earlier this year, executives from companies that make chatbots and image generators testified before Congress and participated in tedious White House roundtables. Many AI products are also now subject to an expansive executive order.

But we haven’t had a big national election since these technologies went mainstream, much less one involving Donald Trump. Many blamed the spread of lies through social media for enabling Trump’s victory in 2016, and for helping him gin up a conspiratorial insurrection following his 2020 defeat. But the tools of misinformation that were used in those elections were crude compared with those that will be available next year.

A shady campaign operative could, for instance, quickly and easily conjure a convincing picture of a rival candidate sharing a laugh with Jeffrey Epstein. If that doesn’t do the trick, they could whip up images of poll workers stuffing ballot boxes on Election Night, perhaps from an angle that obscures their glitchy, six-fingered hands. There are reasons to believe that these technologies won’t have a material effect on the election. Earlier this year, my colleague Charlie Warzel argued that people may be fooled by low-stakes AI images—the pope in a puffer coat, for example—but they tend to be more skeptical of highly sensitive political images. Let’s hope he’s right.

Soundfakes, too, could be in the mix. A politician’s voice can now be cloned by AI and used to generate offensive clips. President Joe Biden and former President Trump have been public figures for so long—and voters’ perceptions of them are so fixed—that they may be resistant to such an attack. But a lesser-known candidate could be vulnerable to a fake audio recording. Imagine if during Barack Obama’s first run for the presidency, cloned audio of him criticizing white people in colorful language had emerged just days before the vote. Until bad actors experiment with these image and audio generators in the heat of a hotly contested election, we won’t know exactly how they’ll be misused, and whether their misuses will be effective. A year from now, we’ll have our answer.
