Will Paul Christiano publicly announce a greater than 10% increase in his p(doom | AGI before 2100) within the next 5 years?
Ṁ1361 · closes 2028 · 44% chance

Additional context:
https://www.lesswrong.com/posts/LhEesPFocr2uT9sPA/safety-timelines-how-long-will-it-take-to-solve-alignment

"Paul Christiano: P(doom from narrow misalignment | no AI safety) = 10%, P( doom from narrow misalignment | 20,000 in AI safety) = 5%"

https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer?commentId=EG2iJLKQkb2sTcs4o
"I definitely agree that Eliezer's list of lethalities hits many rhetorical and pedagogical beats that other people are not hitting and I'm definitely not hitting. I also agree that it's worth having a sense of urgency given that there's a good chance of all of us dying (though quantitatively my risk of losing control of the universe though this channel is more like 20% than 99.99%, and I think extinction is a bit less less likely still)."

Feb 19, 1:55pm: Will Paul Christiano publicly announce a greater than 10% increase in his p(doom from AGI) within the next 5 years? → Will Paul Christiano publicly announce a greater than 10% increase in his p(doom | AGI before 2100) within the next 5 years?
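For reference on how the figures above relate to the conditional probability in the title: if doom via this channel requires AGI, then P(doom) = P(AGI before 2100) × P(doom | AGI before 2100). A minimal sketch with purely illustrative inputs (the 0.8 and 0.2 below are assumptions for the example, not Paul's stated numbers):

```python
# Illustrative only: these inputs are assumptions, not Paul Christiano's stated figures.
p_agi_before_2100 = 0.8      # assumed P(AGI before 2100)
p_doom_unconditional = 0.2   # assumed unconditional P(doom), loosely echoing the ~20% quote above

# If doom via this channel requires AGI before 2100, then
#   P(doom) = P(AGI before 2100) * P(doom | AGI before 2100)
# so the conditional figure is the unconditional one divided by P(AGI before 2100).
p_doom_given_agi = p_doom_unconditional / p_agi_before_2100
print(f"P(doom | AGI before 2100) = {p_doom_given_agi:.0%}")  # 25%
```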

predicts YES

https://www.lesswrong.com/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment?commentId=HbhpgBGbShwxz9Xxz#comments

I think the comments you cite are all Paul talking about chances of doom along more specific paths, and his overall estimates of xrisk are higher

Maybe more like 40% total existential risk from AI this century?
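One reason an overall figure can sit well above any single path: total existential risk aggregates several channels. A rough sketch under a strong (and purely hypothetical) independence assumption, with made-up per-channel numbers rather than Paul's actual decomposition:

```python
# Hypothetical per-channel risks; not Paul Christiano's actual breakdown.
channel_risks = {
    "narrow misalignment": 0.10,
    "other AI takeover paths": 0.15,
    "other AI-driven catastrophes": 0.20,
}

# If the channels were independent, total risk would be
# 1 minus the product of the per-channel survival probabilities.
survival = 1.0
for p in channel_risks.values():
    survival *= 1 - p
total_risk = 1 - survival
print(f"combined risk = {total_risk:.0%}")  # 39% with these made-up numbers
```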

Does "a 10% increase" mean he goes from 10% to 11%, or from 10% to 20%?

Does the 10% increase have to apply to both the probabilities you listed?

@JakubKraus 10% to 20% (an increase of 10 percentage points)
I clarified the second question in the title
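So the bar is an absolute change of at least 10 percentage points, not a 10% relative increase. A minimal sketch of that check (the function name and threshold are illustrative, not the market's official resolution logic):

```python
def resolves_yes(old_pct: float, new_pct: float, threshold_pp: float = 10.0) -> bool:
    """True iff the announced p(doom | AGI before 2100) rose by at least
    `threshold_pp` percentage points (absolute, not relative)."""
    return new_pct - old_pct >= threshold_pp

# Per the clarification in this thread: 10% -> 20% counts, 10% -> 11% does not.
print(resolves_yes(10, 20))  # True
print(resolves_yes(10, 11))  # False
```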

predicts YES

@AnishUpadhayaya6ee What is Paul's current P(doom | AGI before 2100), for reference? What if Paul talks about how his P(doom) has increased but never provides a public quantitative figure?

I don't know this person at all. I'm just gambling: I assume that being an important person at OpenAI indicates tech optimism, and at these low Mana levels I don't have enough incentive to make sure I have high confidence in my bet.

https://www.lesswrong.com/posts/LhEesPFocr2uT9sPA/safety-timelines-how-long-will-it-take-to-solve-alignment

"Paul Christiano: P(doom from narrow misalignment | no AI safety) = 10%, P( doom from narrow misalignment | 20,000 in AI safety) = 5%"