Will we have any progress on the interpretability of State Space Model LLMs in 2024?
71% chance
State Space Models like Mamba introduce new possibilities for interpretability: the State is a new kind of object, a compressed snapshot of a model's "mind" at a point in time that can be saved, restored, and inspected. But a cursory search didn't turn up any work on interpreting either States or State Space Models.
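To make the save/restore/inspect idea concrete, here is a minimal sketch assuming a toy one-layer linear SSM with made-up dimensions. This is not Mamba's actual selective-scan architecture, and the names (d_state, step, snapshot) are hypothetical; the point is only that the state is a single fixed-size vector summarizing everything seen so far.

    # Toy linear SSM: h_t = A h_{t-1} + B x_t,  y_t = C h_t
    # (hypothetical dimensions; illustrative only, not Mamba itself)
    import numpy as np

    rng = np.random.default_rng(0)
    d_state, d_in = 16, 4                               # made-up sizes
    A = rng.normal(scale=0.1, size=(d_state, d_state))  # state transition
    B = rng.normal(size=(d_state, d_in))                # input projection
    C = rng.normal(size=(d_in, d_state))                # output projection

    def step(h, x):
        """One recurrence step: update the state, emit an output."""
        h = A @ h + B @ x
        return h, C @ h

    h = np.zeros(d_state)                     # initial (empty) state
    for x in rng.normal(size=(10, d_in)):     # process 10 "tokens"
        h, y = step(h, x)

    snapshot = h.copy()                       # save the mind at this moment

    # Restoring the snapshot later reproduces the computation exactly:
    probe = np.ones(d_in)
    _, y1 = step(snapshot.copy(), probe)
    _, y2 = step(snapshot.copy(), probe)
    assert np.allclose(y1, y2)

    # "Interpret": the entire context is compressed into d_state numbers,
    # a far smaller object to analyze than a transformer's KV cache.
    print(snapshot.round(2))

Because the state replaces the ever-growing attention context of a transformer, any interpretability method that can read meaning out of this one vector would, in principle, explain everything the model carries forward from its past input.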
This resolves Yes if research comes out that makes significant interpretability progress on a state space large language model. I will not bet on this market.
Related questions
By the end of 2026, will we have transparency into any useful internal pattern within a Large Language Model whose semantics would have been unfamiliar to AI and cognitive science in 2006?
38% chance
Will an LLM Built on a State Space Model Architecture Have Been SOTA at any Point before EOY 2027? [READ DESCRIPTION]
43% chance
Will there be an open source LLM as good as GPT4 by the end of 2024?
68% chance
Will there be a gpt-4 quality LLM with distributed inference by the end of 2024?
27% chance
Will the state-of-the-art AI model use latent space to reason by 2026?
47% chance
Will a lab train a >=1e26 FLOP state space model before the end of 2025?
22% chance
Will OpenAI release an LLM moderation tool in 2024?
67% chance
Will the best LLM in 2024 have <1 trillion parameters?
30% chance
Will mechanistic interpretability be essentially solved for GPT-2 before 2030?
29% chance
Will an LLM/Elicit be able to do proper causal modeling (identifying papers that didn't control for covariates) in 2024?
41% chance