Will an AI outcompete the best humans on any one programming contest of IOI, ICPC, or CodeForces before 2025?
Jan 1 · 1.8% chance

Resolves YES if, before 2025, an AI scores at least as many problem points as the best human competitor in any single contest among the following competitive programming competitions: IOI, ICPC World Finals, or CodeForces Division 1. Otherwise NO. (In particular, this ignores points awarded based on how quickly a problem is solved, so the AI can't win just by submitting solutions inhumanly fast.)
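The comparison above can be sketched in a few lines. This is an illustrative sketch only, not the official resolution logic; the function name and the assumption that per-problem scores arrive with time-based components already stripped out are mine:

```python
# Illustrative sketch (not official resolution logic): compare raw problem
# points only, with any time-based bonuses or penalties removed, so an AI
# cannot "win" merely by submitting solutions inhumanly fast.

def resolves_yes(ai_problem_points, best_human_problem_points):
    """Both arguments are lists of per-problem scores for one contest,
    with time-based scoring components already excluded."""
    return sum(ai_problem_points) >= sum(best_human_problem_points)

# Example: the AI matches the top human on raw points, which would suffice
# even if its submissions were much slower.
print(resolves_yes([100, 100, 50], [100, 100, 50]))
```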

This is similar to the IMO Grand Challenge (https://imo-grand-challenge.github.io/), but for competitive programming instead of math, and with a requirement to rank first rather than just earn a gold medal (typically awarded to the top 5-10%).

Detailed rules: see the full resolution criteria on this market.


Background:

In February 2022, DeepMind published a preprint reporting that their AlphaCode AI performs about as well as a median human competitor in competitive programming: https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode. When will an AI system perform as well as the top humans?

Comments:

Codeforces div. 4? This seems likely to resolve YES soon.


@patrik From the resolution criteria: “any CodeForces Division 1 contest (the highest division) will count. CodeForces Division 2+ contests do not count (since they don't reflect the top humans).”


I updated the resolution criteria and scoring definitions as per the discussion on the 2024 market https://manifold.markets/jack/will-an-ai-outcompete-the-best-huma.

Please let me know if you have any feedback or objections. I don't think this changes the probability of the market in any significant way; it just makes the definitions better.