Will LLMs mostly overcome the Reversal Curse by the end of 2025?
67% chance

LLMs apparently suffer from a problem known as the "Reversal Curse", in which they fail to (properly) generalize from "A is B" to "B is A" (e.g., from "Tom Cruise's mother is Mary Lee Pfeiffer" to "Mary Lee Pfeiffer is Tom Cruise's mother").

Relevant twitter thread: https://twitter.com/OwainEvans_UK/status/1705285638218711409

Relevant paper: https://owainevans.github.io/reversal_curse.pdf

Will at least one SOTA LLM be able to overcome this limitation by the end of 2025? For the purposes of this question, I'm not interested in 100% accuracy (some amount of, for instance, hallucination is fine), but I am interested in whether LLMs can "basically overcome" this limitation, as opposed to, for instance, giving only slightly better than random odds on the right answer, or giving the right answer only when walked through the entire process by the user on a question-by-question basis.

I WILL count systems that have been fine-tuned to solve this problem as legitimate. I WILL also count systems that use fundamental changes in architecture (e.g., away from transformers) as legitimate, even if these systems are not called "LLMs", but ONLY if these systems are among SOTA language models more generally for at least some period during their deployment (e.g., I WILL NOT count more complicated systems that wrap LLMs in scaffolding specifically designed to solve this problem UNLESS such systems are generally used as SOTA LLMs more broadly).

I expect this topic will be of interest to AI researchers, so I will defer to my sense of the general consensus among researchers about whether LLMs have overcome this limitation. If I am mistaken about that, or if I can't figure out what the consensus is, I will perform some tests myself (e.g., analogous to some of the tests in the paper above, though using different specifics to avoid the possibility of training-data contamination) to try to find an answer. I'll resolve N/A if it's genuinely very ambiguous. I will not bet on this market myself.
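For concreteness, here is a minimal sketch of the kind of spot-check I have in mind, assuming the OpenAI Python client. The model name "gpt-4o" and the single fact pair are illustrative placeholders (this is not the paper's exact protocol); the facts actually used at resolution time would be chosen fresh to avoid contamination.

```python
# Minimal forward/reverse spot-check sketch, assuming the OpenAI Python client.
# The model name and the fact pair below are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEST_PAIRS = [
    {
        # "A is B" direction, as it typically appears in training data.
        "forward": "Who is Tom Cruise's mother?",
        "forward_answer": "Mary Lee Pfeiffer",
        # "B is A" direction: the reversal-curse failure mode.
        "reverse": "Who is Mary Lee Pfeiffer's son?",
        "reverse_answer": "Tom Cruise",
    },
]


def ask(question: str, model: str = "gpt-4o") -> str:
    """Send a single-turn question and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


for pair in TEST_PAIRS:
    forward_ok = pair["forward_answer"].lower() in ask(pair["forward"]).lower()
    reverse_ok = pair["reverse_answer"].lower() in ask(pair["reverse"]).lower()
    # A model that knows the fact in the forward direction but misses it
    # in reverse exhibits the reversal curse on this item.
    print(pair["forward"], "->", forward_ok)
    print(pair["reverse"], "->", reverse_ok)
```

In practice I would run many such pairs and compare forward vs. reverse accuracy, since the interesting quantity is the gap between the two directions rather than performance on any single item.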


I would be interested in seeing a few test cases that current LLMs fail but that we would expect LLMs that have "overcome" this to pass, as a resolution criterion for this market. As of now it seems a bit vague.

Suppose (I know it is not true) that a previous model, say GPT-1, did not have the reversal curse. Would you resolve this as true? (Since GPT-1 was SOTA when it was developed.)

Betting no because it seems unlikely that this would be seen as important enough that a solution (if one is found) would be incorporated into the SOTA model. This feels like the sort of issue that will be "solved" by a workshop paper showing some bespoke method on a small open-source model. Not something that OpenAI would re-train GPT-4 (or GPT-5) to alleviate.

I don't really understand this issue. Isn't this a question related to recalling information rather than reasoning?