Will Manifold stop having obviously wrong probabilities on markets with >=50 traders by the end of 2024?
7% chance

Exhibit A: https://manifold.markets/JamesDillard/will-ai-wipe-out-humanity-before-th

Exhibit B: https://manifold.markets/Odoacre/will-proof-emerge-that-the-world-is

Exhibit C: https://manifold.markets/IsaacKing/will-this-market-still-exist-in-210

There are many others.

Some other examples that didn't reach the 50 traders mark:

These seem to fall into two distinct categories:

  • Markets where people knowingly bet "irrationally" for fun or to signal something.

  • Markets where people are lazy and don't look up easily-found information, just betting on "vibes" that turn out to be wrong.

Both count as problems for the purpose of this market. If any of those still exist (in markets with at least 50 traders) at the end of 2024, this resolves NO. Otherwise I'll wait a few months to see if any probabilities that existed at the end of 2024 are discovered to have been nonsense (with information that was publicly available at the time).

The issues around extreme probabilities are separate and therefore ignored for the purposes of this market; if the correct probability is 0.0001% and the actual market is at 0.5%, that still counts as "correct enough" for this market's resolution.
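To illustrate that rule, here's a minimal sketch (my own illustration, not part of the official resolution criteria; the 1% floor and 10-point tolerance are assumptions): extreme tails are clamped to a floor before comparing, so only non-extreme mispricings count.

```python
# Hypothetical sketch of the "correct enough" rule above; the 1% floor and
# 10-point tolerance are assumed values, not official criteria.
FLOOR = 0.01

def correct_enough(market_prob: float, true_prob: float) -> bool:
    """Ignore extreme tails: clamp both probabilities to the floor, then compare."""
    market_p = max(market_prob, FLOOR)
    true_p = max(true_prob, FLOOR)
    return abs(market_p - true_p) < 0.10

print(correct_enough(0.005, 0.000001))  # True: 0.5% vs 0.0001% both clamp to 1%
print(correct_enough(0.20, 0.0))        # False: a guaranteed-NO market at 20%
```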


What range of probabilities do you require for the lizard people market in order for this market to resolve yes?


@SirSalty Why did this market get marked as non-predictive?

@IsaacKing Under the new guidelines this would be a personal subjective market where the creator is betting.


@MartinRandall I thought those weren't in force yet?

@IsaacKing I agree. But seemed relevant.

Prediction markets (and markets in general) show the current weighted beliefs of their participants. I am not sure these can be 'wrong'.


@AlexbGoode If that were true they'd be pretty useless. The idea behind prediction markets is that they'll take into account all public evidence.

@IsaacKing Beliefs often account for the available evidence.

I am not sure a single prediction market can be wrong. It seems the best we can do is look at statistics over many markets and see if the beliefs of the market participants correlate with real-world outcomes. Whether these correlations are useful is an entirely separate question, and depends on your cost function.
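As a sketch of what that statistics-over-many-markets check could look like (the data here is invented for illustration): bucket markets by their final price and compare each bucket's price to its empirical resolution rate.

```python
from collections import defaultdict

# (final_market_probability, resolved_yes) pairs; illustrative data only
markets = [(0.07, False), (0.18, False), (0.92, True), (0.85, True), (0.10, False)]

buckets = defaultdict(list)
for prob, resolved_yes in markets:
    buckets[round(prob, 1)].append(resolved_yes)  # bucket to nearest 10%

for bucket_prob in sorted(buckets):
    outcomes = buckets[bucket_prob]
    empirical = sum(outcomes) / len(outcomes)
    print(f"priced ~{bucket_prob:.0%}: resolved YES {empirical:.0%} of the time "
          f"(n={len(outcomes)})")
```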

By far the main reason that these markets are mispriced is that they don't resolve for decades and therefore have little to no tether to reality. I think the market description is silly in not mentioning that none of these markets (I'd hope) resolves within the next few years.


@Conflux Only the first three.

The other three have already closed.

@IsaacKing Okay, but I think it’s disingenuous to lump the wrong-because-mana-time-discounting markets with the wrong-because-no-one-googled-the-thing markets


@Conflux Why? They're both a symptom of low incentives.

@IsaacKing I guess that’s true?


The point of this market is to ask "will Manifold probabilities be trustworthy". It doesn't much matter for practical use why they're not accurate, just that they aren't.

@IsaacKing I personally would be surprised to see a >50 trader market that closes before, say, 2028 that’s obviously mispriced - that’s a heuristic I use on Manifold.

Partially this is because a market with more traders tends to have more liquidity, solving the "not worth it" issue for the easily-googleable ones. But that doesn't solve the problem of markets with far-future close dates.

I'm not sure any of the examples you've given are obviously wrong?

@April A market that is guaranteed to resolve NO should not be at 20%.

@IsaacKing Eh. Maybe I care more about the market being accurate than about my expected mana in 2100


@April There's no reason to believe it's accurate though, it's just a poll of "how strongly do people want to signal this thing". As the fraction of Manifold users who are concerned about AI changes, that probability will change, with no particular connection to the actual probability of the event in question.

@IsaacKing AI doom 2100 is definitely not guaranteed to resolve no. Overall unlikely to resolve at all.

I definitely don't think it's "obviously wrong" given that various people who have spent a lot of time thinking about the question disagree about whether the number is too high or too low.

I think you are on more solid ground when you say there's no reason to believe it's accurate.


@MartinRandall We can only observe the question resolving NO.

@IsaacKing There are many true things that I cannot observe, and I don't normally call beliefs over those things "wrong". As an atypical but relevant example, I cannot observe

/MartinRandall/will-i-be-old-before-i-die

resolving NO.


@MartinRandall The issue is that the value of a prediction market comes from market forces efficiently setting a price. If participants are betting in ways that can't possibly return a profit, that indicates that the underlying system isn't working properly.

@Frogswap Any bet can possibly return a profit by selling the shares at a higher price than they were bought for.


@MartinRandall Sure, but that's just betting on additional irrational participants coming to bail you out: essentially a Ponzi scheme.

@Frogswap Not at all, the additional participants can be non-human.


@MartinRandall I guess that's right, but surely there's not an 18% chance that AI wipes out humanity and continues to use Manifold

@Frogswap It sounds like your argument is now that the market is irrationally high because you think it should be lower. That is a very reasonable position, but I think not very persuasive by itself.


@MartinRandall It wasn't intended as an argument in that sense. I was assuming you would agree that an AI that is indifferent to humanity would also be indifferent to this website. If you genuinely believe that there is an 18% chance of a future where AI buys yes in that market, waits long enough for human participants to take a profit, kills all humans, and then resolves the market accurately, I don't think we have enough common ground to be discussing whether the percentage is obviously wrong.

@Frogswap

> I was assuming you would agree that an AI that is indifferent to humanity would also be indifferent to this website.

No, I wouldn't agree with that, as stated. It seems pretty clear to me that a future intelligence doesn't have to care about humans in order to care about Manifold. What is your argument for that claim? If I train an AI to care about Manifold, what will make it also care about humans?

> If you genuinely believe that there is an 18% chance of a future where AI buys yes in that market, waits long enough for human participants to take a profit, kills all humans, and then resolves the market accurately

No, I don't believe that. There are four relevant futures to track in 2100:

  1. Manifold exists. Humanity exists.

  2. Manifold exists. Humanity does not exist.

  3. Manifold does not exist. Humanity exists.

  4. Manifold does not exist. Humanity does not exist.

I believe outcomes 1 and 2 are both <1% likely. The "correct" value of the AI Doom 2100 market is given by the 1:2 ratio or the 3:4 ratio, depending on whether you want to take anthropics into account.
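To spell out the arithmetic with made-up numbers consistent with the above (outcomes 1 and 2 each under 1%): the market can only resolve if Manifold still exists, so conditional on resolving, the implied price comes from the 1:2 ratio; ignoring Manifold's survival gives the 3:4 ratio instead.

```python
# Illustrative probabilities only; the constraint is p1, p2 < 0.01 each.
p1 = 0.008  # Manifold exists, humanity exists
p2 = 0.002  # Manifold exists, humanity does not exist
p3 = 0.50   # Manifold does not exist, humanity exists
p4 = 0.49   # Manifold does not exist, humanity does not exist

# The market only resolves in outcomes 1 and 2 (otherwise it N/As):
print(p2 / (p1 + p2))  # 0.2 -> the price implied by the 1:2 ratio

# Ignoring whether Manifold survives (the 3:4 ratio, anthropics aside):
print(p4 / (p3 + p4))  # ~0.49
```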


@MartinRandall I consider it very unlikely that the AI which wipes out humanity is trained to care about Manifold. I expect indifference to humans to be against the creator's goals, but I expect indifference to Manifold to be aligned with the creator's goals (because they themselves don't care). I am not saying the one indifference suggests the other, but that it is at least as unlikely for an AI to care about Manifold as it is to care about humans.

From the rest of your comment, I wonder if I haven't been clear. I think that the likelihood that AI wipes out humanity by 2100 is >18%. I think that an efficient prediction market should have it at <<1%. A prediction market should reach equilibrium when the expected returns from buying NO and buying YES both equal 1. I don't believe efficient prediction markets can answer questions like this correctly. If all parties are rational and profit-driven, the market being at 18% should suggest an 18% chance that YES buyers will be able to sell at 100%, or a 36% chance to sell at 50%, etc. (apart from N/As, as you mention), not an 18% chance that the resolution criteria are met.
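A quick sketch of that expected-return arithmetic, using the numbers above (the sale scenarios are illustrative):

```python
# Expected return per mana spent on YES at an 18% price, assuming profit
# can only come from selling the shares later (resolution is never collected).
price = 0.18

# Scenario: an 18% chance of eventually selling at 100%
print(0.18 * 1.00 / price)  # 1.0 -> break-even, consistent with equilibrium

# Scenario: a 36% chance of selling at 50%
print(0.36 * 0.50 / price)  # 1.0 -> the same expected return

# If no sale ever happens (no one is left to pay out), YES returns 0,
# so the only price at which expected returns balance is near 0%.
```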

It's wrong in the sense that it is meaningless; a prediction market can be relied on to produce accurate numbers only if participants are trying to profit. Otherwise it is just a weighted poll, as Isaac mentions above. Except it's slightly worse than that, because some people are trying to profit and are driving the number towards equilibrium.

@Frogswap Didn't seem overboard, you have been very patient with me.

> I am not saying the one indifference suggests the other, but that it is at least as unlikely for an AI to care about Manifold as it is to care about humans.

That seems reasonable, and a different thing from what you said earlier. Pushing in the "care about humans" direction is that AIs will initially be raised by humans. Pushing in the "care about Manifold" direction is that accurate predictions are instrumentally convergent.

> I think that the likelihood that AI wipes out humanity by 2100 is >18%.

Sure, sounds reasonable, I have wide error bars.

> I think that an efficient prediction market should have it at <<1%.

This roughly implies that P(Manifold exists and Humanity exists) > 100 x P(Manifold exists and Humanity does not exist). I don't think that is obvious.

Suppose we have P(Humanity exists in 2100) at 50%. Then we learn that X exists in 2100, for some X. How much does that increase our probability of human survival? Well, it increases our probability if X is more valuable for 2100 humans than for 2100 AIs. So X = Breathable Atmosphere is a big update. But X = Play Money Prediction Market is not.
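Worked out with Bayes' rule under that setup (the likelihoods below are invented to match the two examples; the point is only their ratio):

```python
def p_humans_given_x(prior: float, p_x_if_humans: float, p_x_if_not: float) -> float:
    """P(humanity exists in 2100 | X exists in 2100), by Bayes' rule."""
    joint = prior * p_x_if_humans
    return joint / (joint + (1 - prior) * p_x_if_not)

prior = 0.50  # P(humanity exists in 2100), per the setup above

# X = breathable atmosphere: far more likely in a human future
print(p_humans_given_x(prior, 0.99, 0.10))  # ~0.91, a big update

# X = play-money prediction market: similar either way, on this view
print(p_humans_given_x(prior, 0.05, 0.04))  # ~0.56, barely an update
```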


@MartinRandall I'm glad I didn't offend, but I do think it was uncharitable for me to suggest that 18% represents your overall prediction of the future. Briefly considering your perspective would have reminded me that N/A is the vastly more likely outcome.

What I'm saying is that P(Manifold exists and Humanity exists) > 100 x P(Manifold exists and Humanity does not exist and humans are able to profit from nonhuman bettors during their transition to extinction). I do think P(Manifold exists, Humanity doesn't) is very low relative to the other, as it depends on AI being composed of distributed, self-interested entities who have not come up with a better way to produce predictions. But factoring in that humans have to profit makes it obvious (to me) that the ratio is greater than 100:1; AI will need to care about betting on that market, to do so non-deceptively, and to leave enough space between the bets and the wipeout for humans to meaningfully profit.

I do think learning that Manifold exists in 2100 would be a huge update toward humanity existing in 2100. Manifold is written and maintained by humans, designed for humans, and runs on infrastructure with the same constraints. It's certainly possible that AI decides that the internet is sufficiently optimal and that it is worthwhile to hijack Manifold rather than start over from scratch, but I think that's much more contrived than Manifold continuing to exist for human usage. I'd have to do some thinking to compare it to a breathable atmosphere (an AI that kills everyone and then goes to another planet would yield a more breathable atmosphere, an AI that maximizes utility may decide that it's better for humans to breathe compressed air and use the atmosphere for other purposes) but intuitively I'd say both are big updates even if one is relatively small.