If Artificial General Intelligence has a poor outcome, what will be the reason?
75%: Something from Eliezer's list of lethalities occurs.
55%: Alignment is impossible.
37%: Someone successfully aligns AI to cause a poor outcome.
30%: Someone finds a solution to alignment, but fails to communicate it before dangerous AI gains control.
Inverse of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6.
This market will not resolve. It exists primarily for users to explore particular lethalities. Please add responses.
"poor" = human extinction or mass human suffering
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
If Artificial General Intelligence has an okay outcome, which of these tags will make up the reason?
If we survive general artificial intelligence, what will be the reason?
If Artificial General Intelligence has an okay outcome, what will be the reason?
Will General Artificial Intelligence happen before 2035? (80% chance)
Will Eliezer's "If Artificial General Intelligence has an okay outcome, what will be the reason?" market resolve N/A? (29% chance)
Why will "If Artificial General Intelligence has an okay outcome, what will be the reason?" resolve N/A?
The probability of "extremely bad outcomes e.g., human extinction" from AGI will be >5% in next survey of AI experts (73% chance)
Who first builds an Artificial General Intelligence?