
An AI Researcher Claims that AI’s “Emergent Abilities” are Illusory

On May 10th at the Stanford Data Science 2023 Conference, computer science researcher Rylan Schaeffer will present a paper, posted on the preprint server arXiv.org, that challenges claims of emergent abilities in AI large language models (LLMs).

Emergent abilities are skills that suddenly and unpredictably show up (emerge) in AI systems. Over the past year, emergent abilities have drawn significant attention as AI systems acquire more skills while what happens inside them grows more opaque.

Schaeffer isn’t downplaying the breakneck progress of artificial intelligence. Nor is he saying that emergent abilities aren’t possible or even happening. His research shows that many current claims of emergence appear to be artifacts of a skewed way of measuring the phenomenon. If that’s the case, flawed detection methods for emergent abilities could leave us in an even bigger pickle when it comes to AI safety and alignment.

The term “emergent abilities” (a.k.a. emergent capabilities or emergent properties) was introduced in a 2022 paper by researchers at Google Brain, DeepMind and Stanford. Schaeffer noticed that the study measured abilities with extreme, all-or-nothing metrics: if the AI’s answers weren’t exactly right, it got no credit at all, so an ability didn’t register until performance was nearly perfect. Such a metric can make a new skill look like it emerged sharply and unpredictably even when the AI was actually improving at the task at a steady rate.
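A minimal sketch in Python of that measurement argument (the numbers are purely illustrative and not taken from the paper): suppose a model’s per-token accuracy on some task rises smoothly as it scales, but the benchmark only awards credit when every token of a 30-token answer is correct. The all-or-nothing score then appears to jump abruptly at the largest scales even though nothing discontinuous happened under the hood.

import numpy as np

# Hypothetical model sizes (parameter counts) spanning 10 million to 100 billion.
scales = np.logspace(7, 11, 9)

# Assume per-token accuracy improves smoothly with log(scale),
# from 50% at the smallest model to 99% at the largest (illustrative values).
per_token_acc = np.interp(np.log10(scales), [7, 11], [0.50, 0.99])

# An all-or-nothing metric: the model scores only if ALL 30 tokens of the
# answer are correct, i.e. exact match = (per-token accuracy) ** 30.
answer_len = 30
exact_match = per_token_acc ** answer_len

for n, tok, em in zip(scales, per_token_acc, exact_match):
    print(f"{n:10.1e} params | per-token accuracy {tok:.2f} | exact match {em:.4f}")

Plotted against model size, per-token accuracy climbs steadily while the exact-match score stays near zero until the largest models, where it shoots upward: an apparent “emergence” produced entirely by the choice of metric.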

When asked for comment, the researchers who first reported emergent abilities commended Schaeffer and his team’s skepticism and well-executed analysis. They maintain that there’s still compelling evidence of qualitative changes that come from scaling these AI language models.

Schaeffer says that skills don’t have to be emergent for AI models to become significantly more capable and potentially dangerous. That is why it’s imperative that we be able to accurately measure how AI is developing. Like many of his colleagues, Schaeffer is concerned that the field of AI research is moving so hastily that it’s blowing past controls that have long been a bedrock of the scientific method.

“The problem with dealing with these large AI models is that you don’t have access to the models,” says Schaeffer. “You can’t even feed them input because the models are controlled by private companies.” Schaeffer says independent researchers often have to construct data sets and send them to the companies to run on their models. The companies then score the outputs and send them back to the researchers. Schaeffer notes that these companies are incentivized to present AI capabilities in a positive light to help sell products, and likewise to downplay possible harmful side effects that might be bad for business.

“The fact that these models are private, that information is controlled, makes it very hard to do science,” says Schaeffer.

Schaeffer’s paper “Are Emergent Abilities of Large Language Models a Mirage?” is under peer review for the NeurIPS conference in December. For an in-depth discussion on AI emergence and AI safety and alignment, you can watch my interview with Schaeffer below:

 
