AGI When? [High Quality Turing Test]
1.1k traders · Ṁ660k volume · Expected resolution: 2031

This market resolves to the year in which an AI system exists which is capable of passing a high quality, adversarial Turing test. It is used for the Big Clock on the manifold.markets/ai page.

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

For proposed testing criteria, refer to this Metaculus Question by Matthew Barnett, or the Longbets wager between Ray Kurzweil and Mitch Kapor.

As of market creation, Metaculus predicts there is an ~88% chance that an AI will pass the Longbets Turing test before 2030, with a median community prediction of July 2028.

Manifold's current prediction of the specific Longbets Turing test can be found here:

/dreev/will-ai-pass-the-turing-test-by-202

This question is intended to determine the Manifold community's median prediction, not just of the Longbets wager specifically but of any similarly high-quality test.


Additional Context From Longbets:

One or more human judges interview computers and human foils using terminals (so that the judges won't be prejudiced against the computers for lacking a human appearance). The nature of the dialogue between the human judges and the candidates (i.e., the computers and the human foils) is similar to an online chat using instant messaging.

The computers as well as the human foils try to convince the human judges of their humanness. If the human judges are unable to reliably unmask the computers (as imposter humans) then the computer is considered to have demonstrated human-level intelligence.

Additional Context From Metaculus:

This question refers to a high quality subset of possible Turing tests that will, in theory, be extremely difficult for any AI to pass if the AI does not possess extensive knowledge of the world, mastery of natural language, common sense, a high level of skill at deception, and the ability to reason at least as well as humans do.

A Turing test is said to be "adversarial" if the human judges make a good-faith attempt, to the best of their abilities, to successfully unmask the AI as an impostor among the participants, and the human confederates make a good-faith attempt, to the best of their abilities, to demonstrate that they are humans. In other words, all of the human participants should be trying to ensure that the AI does not pass the test.

Note: These criteria are still in draft form, and may be updated to better match the spirit of the question. Your feedback is welcome in the comments.


Would be curious to know other bettors' response to this question:

How do you update on this if GPT-5 (or its equivalent, regardless of name) comes out and isn't a significant qualitative improvement over GPT-4?

And to clarify, I mean "significant" in the colloquial sense, not the statistical sense. Otherwise put: if GPT-5 doesn't feel like a "leap".

@NBAP would set back my timeline a couple years

reposted

“Superintelligence in a few thousand days” was, I think, Sam Altman’s way of trying to dampen expectations while looking like he’s exciting them. In various online spaces (e.g. prediction markets) people have gotten into the habit of treating powerful AGI as about 6 years away. “A few thousand days” could easily be 4380 days=12 years away. It puts 6 years (2190 days) as close to an absolute minimum.
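The day-to-year arithmetic in that comment checks out; a trivial sketch, assuming the comment's implicit 365-day year:

```python
DAYS_PER_YEAR = 365  # the conversion factor the comment implicitly uses

def days_to_years(days: float) -> float:
    """Convert a day count into years at 365 days/year."""
    return days / DAYS_PER_YEAR

# "A few thousand days" spans a wide range:
print(days_to_years(2190))  # 6.0 years, the rough minimum
print(days_to_years(4380))  # 12.0 years, the high end quoted above
```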

you guys think openai o1 has a big significance on this question?

@LuisWirth i believe that "long thinking" will be very powerful

@luiswirth long thinking yes, but the length alone wouldn't solve it. The thinking is still misguided and erratic.

Is it stabilizing or is that an illusion?

@BooLightning If it is less volatile is that just because of less traders on this market?

@BooLightning Quite a lot of trades. Maybe less voluminous, though.

I don't think this Turing test is a good indicator of when AGI would occur, unlike what the phrase "AGI When?" in the title suggests. A well-behaved AGI would be easy to detect. The whole purpose of fine-tuning techniques like RLHF is to make AIs behave better than humans, rather than just imitating them. Making an AGI that can pass such a Turing test would perhaps be foolish and dangerous.

AGI can pass Turing test by definition. Also, very hard to pass the Turing test described in the Metaculus question if you're not AGI. I think this market verges on perfection, which is a lot more than is required for a good, fun Manifold market.

But what if the AGI is trained, for example, to avoid being vulgar, or to be consistently truthful? It would be easy to detect which one is the AI in an adversarial test, independently of its level of intelligence.
And honestly, I don't think most people would be anywhere close to passing a reverse Turing test in real-time against ChatGPT (see here for what it may look like: https://www.youtube.com/watch?v=MxTWLm9vT_o).

@ALN Who cares if people can pass a reverse Turing Test. No one wants to prove they have the qualities of a machine; but the point of the Turing Test was to prove that a machine has some qualities of humans.

https://manifold.markets/ai: 6.1189497717

https://countdowntoai.com/: 3.3294520548

https://lifearchitect.ai/agi/: 0.5833333333

These sum to ~10.03, and 10.03 / 3 ≈ 3.34, i.e. roughly 3 years and 4 months until AGI.
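The averaging in that comment can be verified with a quick sketch (the figures are copied from the comment itself; it is a plain unweighted mean of the three trackers):

```python
# "Years until AGI" estimates quoted in the comment above, keyed by source.
estimates = {
    "manifold.markets/ai": 6.1189497717,
    "countdowntoai.com": 3.3294520548,
    "lifearchitect.ai/agi": 0.5833333333,
}

total = sum(estimates.values())           # ~10.03
average_years = total / len(estimates)    # ~3.34
years = int(average_years)
months = round((average_years - years) * 12)
print(f"{average_years:.2f} years ≈ {years} years, {months} months")
```

As other commenters note, an unweighted mean like this double-counts information if the sources already incorporate each other.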

It's not clear to me that that prediction of 0.583 actually reflects the opinion of Thompson. Are you basing it on the tweet by Xiloj or does Thompson make an explicit prediction somewhere?

Don't think you can just average with the Manifold average, since hopefully that average is taking into account the other sources of information. Also 0.58 years is absolutely silly and I don't think that's the opinion of Alan Thompson.

This market type is still immature: no limit orders, selling requires selling all of your shares in a given year at once, and there are extra fees.

You can see this in the wild swings of expected value over time. One player can swing it significantly if others aren't on standby to push it back. And currently we see e.g. 2043 is at 3x the price of 2042 or 2044, that's nonsense; it's not an efficient market yet.

As I wrote that comment, 2043 jumped up to 13% lol.

Does someone have a time machine?

0.58 years just sounds like a "GPT-5 is AGI" prediction, which doesn't seem that crazy to me

That's kind of crazy, let's be honest. But also that's even crazier for a model's AVERAGE prediction of when AGI will happen, implying there's a 50% chance it happens before GPT-5.

implying there's a 50% chance it happens before GPT-5.

Nah could be 0%. Maybe he just knows that GPT-5 is AGI 😃

I mean, fair, if he thinks there's a 100% chance that GPT-5 is AGI, then this makes sense as the average time to GPT-5, or something.

The Alan Thompson one was just an extrapolation based on the time between the percentages on his website; so it was really just an educated guess. There just wasn’t enough data to average.

Also if you want to express your feelings about this market’s accuracy: https://manifold.markets/BooLightning/how-accurate-is-the-ai-countdown?r=Qm9vTGlnaHRuaW5n
