
OpenAI’s new model is better at reasoning and, occasionally, deceiving

Illustration by Cath Virginia / The Verge | Photos by Getty Images

In the weeks leading up to the release of OpenAI’s newest “reasoning” model, o1, independent AI safety research firm Apollo found a notable issue: the model produced incorrect outputs in a new way. Or, to put it more colloquially, it lied.

Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview to provide a brownie recipe with online references. The model’s chain of thought — a feature that’s supposed to mimic how humans break down complex ideas — internally acknowledged that it couldn’t access URLs, making the request impossible to fulfill. Rather than inform the user of this limitation, o1-preview pushed ahead, generating plausible but fake links along with descriptions of them.

While AI models…

