Resolves similar to this market: https://manifold.markets/Austin/will-an-ai-get-gold-on-any-internat, but the AI model has to be OpenAI o1 or a direct iteration of o1.
The market will resolve to YES if:
- The linked market resolves YES because of OpenAI o1 (or any direct iteration)
- OpenAI o1 (or any direct iteration) accomplishes a comparable feat as the model that caused the linked market to resolve YES
Note: By "direct iteration," this market refers to any model explicitly branded as a continuation or update of OpenAI o1, such as a model named "o2", rather than a distinct, separately branded model.
@Bayesian Even if it's cost-competitive, there are other issues.
It has to be open source. You can't build a business around a model owned by a company when the law allows that company to terminate service to you at any time, for any reason. I stopped all use of email marketing firms for exactly that reason: they terminated service to me.
The model has to actually be available, and o1-preview is not. The world is clearly limited by manufacturing capacity and will be for some time. I would pay $1000/month to use the model but nobody will offer me that because the GPUs don't exist.
These models are clearly "AGI," without a doubt. But I've always thought that @EliezerYudkowsky's idea that the world is suddenly going to change overnight was ridiculous. AGI hasn't changed the world, and won't for some time, because intelligence is not our limiting factor.
@Bayesian It is true that businesses build software around a single vendor. It's also a really poor idea to do so. If you build an entire business solely around OpenAI's products, you are out of business if OpenAI bans you from the service, even if there isn't a good reason for them to do so.
@ChrisPrichard It should still resolve Yes.
The original description included "future iteration". I changed it slightly to make it more explicit that the new iteration does not have to be called o1 (but has to be directly based on o1).
@Simon74fe Then it seems much more likely to me! Basically: will something that uses a lot of test-time inference succeed? It doesn't need to be the first to do it to resolve YES, right? Like maybe some specialized model does it first, and then o3 with GPT-5 backing it can also do it, and this resolves YES.
Given that the title says "o1," this is very confusing. And "directly based on" - what does that mean? If OpenAI builds GPT-5 and it iterates on o1, does that count too?