
If AGI has an okay outcome, will there be an AGI singleton?
25% chance
An okay outcome is defined in Eliezer Yudkowsky's market as:
An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This market resolves YES if I can easily point to a single AGI responsible for the okay outcome, and NO otherwise.
Related questions
Will we get AGI before 2026? (6% chance)
Will we get AGI before 2036? (76% chance)
Will we get AGI before 2030? (53% chance)
Will we get AGI before 2048? (88% chance)
Will we get AGI before 2031? (66% chance)
Will we get AGI before 2037? (77% chance)
Will we get AGI before 2035? (75% chance)
When Manifold's AGI countdown resolves YES, will Manifold users think that AGI really has been achieved? (56% chance)
If Artificial General Intelligence has an okay outcome, what will be the reason? "A multipolar AGI scenario is safer than a singleton AGI scenario" (30% chance)