At the beginning of 2040, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
67% chance

If an intelligence explosion occurs, this market resolves N/A. Otherwise:

Shortly after market close, I will post a Yes/No poll in this market's comments, in the Manifold Discord, and/or in whatever other appropriate Manifold-related spaces exist at that time. It will ask:

Do you believe that a rapid AI intelligence explosion poses a significant existential risk to humanity before 2075?

This market resolves to the percentage of Yes votes in the poll, rounded to the nearest integer.

The poll will be limited to one response per Manifold account, and each account's vote will be public.
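
A minimal sketch of the resolution arithmetic, assuming the poll results arrive as one Yes/No answer per account (the function name and data shape here are illustrative, not anything specified by the market):

```python
def resolution_value(votes: list[bool]) -> int:
    """Percentage of Yes votes, rounded to the nearest integer."""
    # Note: Python's round() rounds exact halves to even, so e.g. a
    # 50.5% result would resolve to 50; the description doesn't
    # specify this edge case.
    return round(100 * sum(votes) / len(votes))

# Example: 134 Yes out of 200 responses resolves the market to 67.
print(resolution_value([True] * 134 + [False] * 66))  # 67
```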

All markets for each year:


What does this resolve to if the intelligence explosion happens before 2040?

Theoretically, all these markets ought to predict the same probability if people's beliefs are rational, right?

Any difference would imply that the markets expect the polls to make exploitably under- or over-confident predictions, which would be rather interesting.

(They might trade at slightly different prices due to discounting, but I don't think that should make a huge difference.)
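
As a sketch of that caveat (the risk-neutral model and the 2% hurdle rate below are assumptions for illustration, not anything from the thread): if traders demand an annual return r on mana locked up until resolution, a market on an event with believed probability p that resolves in t years should clear near p / (1 + r)^t, so longer-dated markets trade slightly lower even under identical beliefs.

```python
def discounted_price(p: float, r: float, years: float) -> float:
    """Clearing price of a binary contract under believed probability p
    and a required annual return r (simplified, risk-neutral model)."""
    return p / (1 + r) ** years

belief = 0.67
for years in (1, 5, 10):
    print(years, round(discounted_price(belief, r=0.02, years=years), 3))
# 1 -> 0.657, 5 -> 0.607, 10 -> 0.55: modest gaps, consistent with
# the point that discounting shouldn't make a huge difference.
```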

@jack To be clear, I mean that the predicted probability should be the same at any given time. Obviously the resolution values will be different for markets resolving at different times.

Technically, someone could believe it's not a concern before 2073 but is a concern by 2074, so they'd rationally answer No on the poll with the earlier end date and Yes on the one with the later end date.

But realistically, yes, I created these markets to see if people's updates are predictable.

They're also implicitly forecasting which clusters of people end up becoming Manifold users. If at some point Manifold suddenly catches on in the AI-capabilities research community, the next year's poll may skew downward, even if no individual ever changed their mind.
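
A toy illustration of that composition effect (all cluster sizes and Yes rates below are made-up numbers): the aggregate Yes percentage can move a lot even though every individual's answer stays fixed.

```python
def poll_pct(clusters: dict[str, tuple[int, float]]) -> float:
    """clusters: name -> (voter count, fraction answering Yes)."""
    total = sum(n for n, _ in clusters.values())
    yes = sum(n * frac for n, frac in clusters.values())
    return 100 * yes / total

before = {"existing users": (200, 0.70)}
after = {"existing users": (200, 0.70),
         "new capabilities researchers": (300, 0.30)}
print(poll_pct(before), poll_pct(after))  # 70.0 46.0
```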

@jack no, because the risk of intelligence explosion will have gone down significantly after the intelligence explosion in two to five years.

Good points: the questions are asking about different end dates. The experiment would probably work better if they all asked about an intelligence explosion before 2100, for example. And yeah, changes to the poll respondents are likely an important consideration.

Metaculus has noticed this possibility as well, and there are some discussions of it here:

https://www.metaculus.com/questions/1394/will-ai-progress-surprise-us/

https://www.metaculus.com/notebooks/10201/are-we-surprised-by-ai-milestones/

@L Ah, very good point. If nobody minds, I'm going to edit these markets to include that they resolve N/A if we're somehow still alive after the singularity.

@jack Upon reflection, I think there's no good reason for these to be using different end dates; that just makes it harder to analyze the data. If there are no objections, I will change all these markets to ask about an intelligence explosion prior to 2075.

I have no objections to that change.

Ok, both changes made. If anyone placed a bet based on the earlier descriptions, let me know and I'll reimburse you.