Will I think that the Belief State Geometry research program has achieved something important by October 20th, 2026?
31% chance

The Belief State Geometry research program by Adam Shai (and, if I recall correctly, Paul Riechers, Lucas Teixeira, Alexander Gietelink Oldenziel, and Sarah Martzen) is based on the observation that the Bayesian belief states of an optimal predictor should form a fractal-like geometric structure determined by the data-generating process. One hope is that it may provide a way to extract the internal models a neural network uses, thereby permitting total interpretability.
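To make the core claim concrete, here is a minimal sketch of the belief-state computation: Bayesian filtering over a hidden Markov model, recording every belief reached. The HMM parameters below are illustrative placeholders I chose, not the exact process from the linked posts; for suitable processes (such as the "mess3" example discussed there), the set of visited beliefs has a fractal-like structure in the probability simplex.

```python
# Sketch: Bayesian belief-state updates over a hidden Markov model (HMM).
# The transition/emission matrices are illustrative placeholders, not the
# exact process from the linked posts.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_obs = 3, 3

# T[s, s'] = P(next hidden state s' | current state s)
T = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
# E[s, o] = P(observation o | hidden state s)
E = np.array([[0.80, 0.10, 0.10],
              [0.10, 0.80, 0.10],
              [0.10, 0.10, 0.80]])

def update(belief, obs):
    """One step of Bayesian filtering: predict with T, condition on obs."""
    predicted = belief @ T             # prior over the next hidden state
    posterior = predicted * E[:, obs]  # Bayes rule, unnormalized
    return posterior / posterior.sum()

# Generate observations from the process and track the belief state.
points = []
for _ in range(200):                   # many independent runs
    state = rng.integers(n_states)
    belief = np.full(n_states, 1.0 / n_states)  # uniform prior
    for _ in range(50):
        state = rng.choice(n_states, p=T[state])
        obs = rng.choice(n_obs, p=E[state])
        belief = update(belief, obs)
        points.append(belief.copy())

points = np.array(points)  # each row sums to 1: a point in the 2-simplex
# Project the simplex to 2D; scatter-plot (xs, ys) to see the geometry
# traced out by the visited belief states.
xs = points[:, 1] + 0.5 * points[:, 2]
ys = (np.sqrt(3) / 2) * points[:, 2]
```

In the first linked post, the authors report that transformers trained on data from such processes encode these belief coordinates linearly in their residual stream, which is what suggests a route to reading a network's internal model off its activations.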

In about two years, I will evaluate Belief State Geometry and decide whether there have been any important good results since today. I will probably ask some of the alignment researchers I most respect (such as John Wentworth or Steven Byrnes) for advice on the assessment, unless the answer is dead obvious.

About me: I have been following AI and alignment research on and off for years, and I have a reasonable mathematical background for evaluating it. I tend to have an informal sense of the viability of various alignment proposals, though that sense may well be wrong.

At the time of creating this market, the main thing I wonder about is whether Belief State Geometry will also apply in cases where the network is underpowered and unable to fully model the probabilities. The main determinant of my resolution will probably be whether people get useful results for neural networks trained on real-world problems (especially in the underparameterized regime).

More on Belief State Geometry:

https://www.lesswrong.com/posts/gTZ2SxesbHckJ3CkF/transformers-represent-belief-state-geometry-in-their

https://www.lesswrong.com/posts/mBw7nc4ipdyeeEpWs/why-would-belief-states-have-a-fractal-structure-and-why


@tailcalled is it possible to get some examples of "important good results" in similar fields, so that I can get more of a feel for what types of things could resolve this market one way or the other?
