We truly admired Eryk Salvaggio’s masterful analysis of OpenAI’s announcement of its newest model’s so-called “reasoning” abilities. Here’s the beginning of his piece, entitled “A Critique of Pure LLM Reason” (with a link to read the rest):
OpenAI’s newest LLM is announced as having the capacity to reason. It’s worth noting that “reasoning” is a poetic interpretation of what happens under the hood: something occurs that resembles reasoning. But what is that?
Models do not reason in the sense that we might expect. Reason amongst humans could be defined as “the power of the mind to think, understand, and form judgments by a process of logic.” LLMs do not possess the power of the mind to think or understand things. The logic of LLMs, and by extension the “reasoning” of LLMs, is steered through language alone — and not language as a marker of thought, but language as an exchange of uncomprehended symbols sorted by likelihoods of adjacency.
Read the rest of the post on Eryk Salvaggio’s page »
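To make Salvaggio’s phrase “likelihoods of adjacency” concrete, here is a minimal sketch, assuming an invented toy bigram table (the `ADJACENCY` dictionary and its probabilities are ours, purely for illustration; a real LLM learns far richer distributions over long contexts and vast vocabularies). Each next symbol is chosen by a weighted draw over what tends to follow the current one, and meaning is never consulted:

```python
import random

# Toy bigram table: each token maps to candidate next tokens with
# made-up adjacency likelihoods. Illustrative only; a real LLM learns
# distributions over tens of thousands of tokens and long contexts.
ADJACENCY = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.8, "sat": 0.2},
}

def next_token(token: str) -> str:
    """Draw the next symbol weighted by its likelihood of adjacency.

    Nothing here consults meaning: the choice is a weighted pick
    among symbols that tend to follow `token`.
    """
    candidates = ADJACENCY[token]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Emit a short continuation, one adjacency-weighted draw at a time.
token, sequence = "the", ["the"]
while token in ADJACENCY:
    token = next_token(token)
    sequence.append(token)
print(" ".join(sequence))  # e.g. "the dog ran"
```

Run it a few times: different plausible sequences come out, none of them understood, which is the point Salvaggio is making.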