Will "Why Do Some Language Models Fake Alignment Wh..." make the top fifty posts in LessWrong's 2025 Annual Review?
Ṁ342,027 volume · 16% chance
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2025 Review resolves in February 2027.
This market will resolve to 100% if the post "Why Do Some Language Models Fake Alignment While Others Don't?" ranks among the top fifty posts of the 2025 Review, and to 0% otherwise. The market was initialized at 14%.
Related questions
Will "Alignment Faking in Large Language Models" make the top fifty posts in LessWrong's 2024 Annual Review?
94% chance
Will "Takes on "Alignment Faking in Large Language ..." make the top fifty posts in LessWrong's 2024 Annual Review?
19% chance
Will "Tracing the Thoughts of a Large Language Model" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance
Will "Language Models Model Us" make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "How to replicate and extend our alignment fak..." make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "The case for more ambitious language model evals" make the top fifty posts in LessWrong's 2024 Annual Review?
12% chance
Will "“Alignment Faking” frame is somewhat fake" make the top fifty posts in LessWrong's 2024 Annual Review?
19% chance
Will "The Field of AI Alignment: A Postmortem, and ..." make the top fifty posts in LessWrong's 2024 Annual Review?
28% chance
Will "What Is The Alignment Problem?" make the top fifty posts in LessWrong's 2025 Annual Review?
15% chance
Will "Introducing Alignment Stress-Testing at Anthropic" make the top fifty posts in LessWrong's 2024 Annual Review?
10% chance