I make a contribution to AI safety that is endorsed by at least one high profile AI alignment researcher by the end of 2026
59% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high profile".
Related questions
Will a very large-scale AI alignment project be funded before 2025?
9% chance
Will another organization surpass OpenAI in the public sphere of awareness of AI progress by the end of 2024?
8% chance
In 2025, will I believe that aligning automated AI research AI should be the focus of the alignment community?
59% chance
Will I (co)write an AI safety research paper by the end of 2024?
45% chance
Will Dan Hendrycks believe xAI has had a meaningful positive impact on AI alignment at the end of 2024?
23% chance
Will a leading AI organization in the United States be the target of an anti-AI attack or protest by the end of 2024?
30% chance
Will a large-scale, Eliezer-Yudkowsky-approved AI alignment project be funded before 2025?
6% chance
Will the Gates Foundation give more than $100mn to AI Safety work before 2025?
25% chance
Will OpenAI + an AI alignment organization announce a major breakthrough in AI alignment? (2024)
7% chance
Will there be a noticeable effort to increase AI transparency by 2025?
50% chance