Will the state-of-the-art AI model use latent space to reason by 2026?
18% chance

Meta's Coconut paper describes a new way to train AI models so that they reason in latent space: a Coconut-trained model doesn't have to explicitly write out its thoughts in natural language, as, e.g., OpenAI's o1 does.

Abstract from the paper:

Large language models (LLMs) are restricted to reason in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem. However, we argue that language space may not always be optimal for reasoning. For example, most word tokens are primarily for textual coherence and not essential for reasoning, while some critical tokens require complex planning and pose huge challenges to LLMs. To explore the potential of LLM reasoning in an unrestricted latent space instead of using natural language, we introduce a new paradigm Coconut (Chain of Continuous Thought). We utilize the last hidden state of the LLM as a representation of the reasoning state (termed "continuous thought"). Rather than decoding this into a word token, we feed it back to the LLM as the subsequent input embedding directly in the continuous space. Experiments show that Coconut can effectively augment the LLM on several reasoning tasks. This novel latent reasoning paradigm leads to emergent advanced reasoning patterns: the continuous thought can encode multiple alternative next reasoning steps, allowing the model to perform a breadth-first search (BFS) to solve the problem, rather than prematurely committing to a single deterministic path like CoT. Coconut outperforms CoT in certain logical reasoning tasks that require substantial backtracking during planning, with fewer thinking tokens during inference. These findings demonstrate the promise of latent reasoning and offer valuable insights for future research.
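For intuition, here is a minimal sketch of the continuous-thought loop the abstract describes, using vanilla GPT-2 via Hugging Face transformers as a stand-in model. The number of latent steps and the direct feed-back of the last hidden state are illustrative assumptions; the actual Coconut recipe involves a dedicated training procedure. Since GPT-2 was never trained this way, the printed answer will be gibberish; the sketch only shows the mechanics.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Alice has 3 apples and buys 5 more. How many does she have?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
embeds = model.get_input_embeddings()(input_ids)  # (1, seq_len, hidden_dim)

NUM_LATENT_STEPS = 4  # assumed hyperparameter; not taken from the paper's setup
with torch.no_grad():
    for _ in range(NUM_LATENT_STEPS):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        # Last-layer hidden state at the final position is the "continuous thought".
        thought = out.hidden_states[-1][:, -1:, :]  # (1, 1, hidden_dim)
        # Feed it back as the next input embedding; no token is decoded in between.
        embeds = torch.cat([embeds, thought], dim=1)

    # Only after the latent loop do we decode natural-language tokens (greedy).
    answer_ids = []
    for _ in range(20):
        logits = model(inputs_embeds=embeds).logits[:, -1, :]
        next_id = logits.argmax(dim=-1, keepdim=True)  # (1, 1)
        answer_ids.append(next_id.item())
        embeds = torch.cat([embeds, model.get_input_embeddings()(next_id)], dim=1)

print(tokenizer.decode(answer_ids))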

In January of 2026, this market will resolve YES if the state-of-the-art (SotA) reasoning model uses some latent space representation of its cognitive state to reason across multiple iterations before giving its final answer.

It doesn't count if the model merely manipulates latent space within a single forward pass (since all LLMs already do this). Loosely speaking, the model has to use its weights to get a latent vector, then reuse those same weights to process that latent at least once without generating any natural language tokens in between. If it uses some mix of latents and natural language in its reasoning, this still counts as using latent space.
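To make the criterion concrete, here is a schematic toy in PyTorch (names and architecture entirely hypothetical; this is not any real system's API) contrasting the case that doesn't count with the one that does:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Toy stand-in: `core` plays the role of the shared transformer weights."""
    def __init__(self, dim=16, vocab=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.core = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, latent):
        return self.core(latent)

model = TinyModel()
x = model.embed(torch.tensor([1, 2, 3])).mean(dim=0)

# Does NOT satisfy the criterion: one forward pass, then a token is decoded.
# Every LLM already manipulates latents this way.
token = model.head(model(x)).argmax()

# DOES satisfy it: the latent is pushed back through the same weights at
# least once before any natural-language token is produced.
latent = model(x)
latent = model(latent)  # same weights reused on the latent, no token yet
token = model.head(latent).argmax()
```

The key point is that `core` is applied to its own output before `head` ever produces a token.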

I will primarily be looking at reasoning-centric evaluations such as FrontierMath and GPQA to determine which model is the SotA. Ultimately, the resolution will be based on my best judgement. I will not trade in this market.

