Will OpenAI inference costs fall by 100x over the next 18 months?
Ṁ5712 · 2026 · 32% chance

Resolves to YES if either:

  • A fast model (in the same class as gpt4o-mini) costs less than $0.150 / 100M input tokens AND $0.600 / 100M output tokens.

  • A best-in-class model (e.g. gpt4o) costs less than $5.00 / 100M input tokens AND $15 / 100M output tokens. (These thresholds are the current list prices divided by 100; see the sketch below.)
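For concreteness, here is a minimal sketch of the arithmetic behind the thresholds, assuming the list prices referenced in the criteria above ($0.150 / 1M input and $0.600 / 1M output for gpt-4o-mini; $5.00 / 1M input and $15.00 / 1M output for gpt-4o). A 100x fall means the same dollar figure buys 100M tokens instead of 1M.

```python
# Current list prices, expressed in dollars per 1M tokens
# (assumed from the figures quoted in the resolution criteria).
current_prices = {
    "gpt-4o-mini": {"input": 0.150, "output": 0.600},
    "gpt-4o": {"input": 5.00, "output": 15.00},
}

REDUCTION = 100  # the 100x fall the question asks about

for model, prices in current_prices.items():
    for direction, price_per_1m in prices.items():
        # A 100x fall divides the per-1M price by 100, which is the same
        # as charging today's per-1M figure per 100M tokens.
        threshold_per_1m = price_per_1m / REDUCTION
        print(
            f"{model} {direction}: ${price_per_1m:.3f}/1M today -> "
            f"YES threshold ${threshold_per_1m:.5f}/1M "
            f"(i.e. ${price_per_1m:.3f}/100M tokens)"
        )
```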


The criteria are not clear about whether you mean equivalent quality or equivalent model class (whose quality increases over time).

@MaximeRiche yep this is important

I could easily see a GPT-5-tiny that costs 1/4 as much as 4o-mini outperforming today's 4o.

But if it's a relative standard, it won't happen.

This is about the models they will be serving at that time, including any quality improvements.

If they still happen to serve 4o-mini at 100x lower cost, should this resolve YES, going by the headline of the question?

@jgyou it's still not clear.

Is it at equivalent quality? Equivalent benchmark scores? Iso-performance, e.g. GPT-5-tiny vs. GPT-4 large? (I am expecting this criterion.)

Or is it at equivalent model class (frontier, fast), given that the performance of models in a class increases over time? E.g. GPT-5-mini vs. GPT-4o-mini?

@MaximeRiche The original criteria are about model classes (frontier, fast). I don't think that benchmark scores are necessary to assess this. But I suppose that in the spirit of the question, if they still serve gpt4o, say, and the cost fell by 100x, it should resolve YES.