
I make a contribution to AI safety that is endorsed by at least one high-profile AI alignment researcher by the end of 2026
40% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high-profile".
Related questions
Will the OpenAI Non-Profit become a major AI Safety research funder? (Announced by end of 2025)
27% chance
Will we solve AI alignment by 2026?
2% chance
Will Anthropic be the best on AI safety among major AI labs at the end of 2025?
98% chance
By the end of 2025, which piece of advice will I feel has had the most positive impact on me becoming an effective AI alignment researcher?
Will someone commit terrorism against an AI lab by the end of 2025 for AI-safety related reasons?
7% chance
AI safety community successfully advocates for a global AI development slowdown by December 2027
12% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will non-profit funding for AI safety reach 100 billion US dollars in a year before 2030?
38% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
40% chance
