If Sam Altman starts a new AI company, will it be less safe than Meta?
Jan 1 · 37% chance

Right now Meta pretty much has a monopoly on the "extremely unsafe/uncaring AI company" leaderboard. However, even they seem to have some high-minded altruistic ideals, like open-sourcing their models. If a new startup is founded that's purely profit-driven, it could be even worse.

If he heads up a new subsidiary of Microsoft, that counts too.

This market resolves based on a vote among Manifold users. Regular users get 1 vote, moderators get 3 votes, and I get votes equal to 20% of the other votes cast.
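For concreteness, here is a minimal sketch of how that weighted tally could be computed. The `tally` helper, its input layout, and the assumption that my votes all go to a single option are illustrative choices of mine, not an official Manifold mechanism:

```python
def tally(votes: list[tuple[bool, str]], creator_vote: str) -> dict[str, float]:
    """Return weighted vote totals per option ("YES"/"NO").

    Regular users count as 1 vote, moderators as 3; the creator's
    vote is worth 20% of the total weight of everyone else's votes.
    """
    totals: dict[str, float] = {}
    for is_moderator, vote in votes:
        totals[vote] = totals.get(vote, 0.0) + (3.0 if is_moderator else 1.0)
    # Creator's weight is computed from all other votes cast.
    creator_weight = 0.2 * sum(totals.values())
    totals[creator_vote] = totals.get(creator_vote, 0.0) + creator_weight
    return totals

# Example: 10 regular users vote YES, 2 moderators vote NO, creator votes NO.
# Other votes total 10 + 6 = 16, so the creator adds 0.2 * 16 = 3.2 votes.
print(tally([(False, "YES")] * 10 + [(True, "NO")] * 2, creator_vote="NO"))
# -> {'YES': 10.0, 'NO': 9.2}
```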

The vote is specifically on whether Meta or Sam's company is more likely to cause human extinction or some other form of existential risk.


Just wanted to comment that, purely on the risk dimension, open-source is very unlikely to reduce risk.

@SantiagoRomeroBrufau Right, that's why I'm using Meta as my example of an extremely unsafe company. :)

predicts YES

Does this resolve N/A if there's no Altman company? (Including if he's back at OpenAI?)

bought Ṁ20 YES from 37% to 44%

"The vote is specifically on whether Meta or Sam's company is more likely to cause human extinction or some other form of existential risk."

This seems like it could easily resolve YES based on who seems to be more likely to make capabilities advances, even if Altman's company is more pro-safety.

@StevenK That's a good point, but I think it should indeed be factored in. A company that chooses to advance capabilities more slowly is being safer than one that doesn't, and I don't see any reason not to count accidental safety either.

predicts YES

@IsaacKing They would also claim (rightly or wrongly) that their faster advances allowed them to better reduce existential risk from other causes, including other people's AI. Is the probability being voted on meant to be net of those effects?

@StevenK Yes, if people think that developing AGI faster makes the world safer, they're welcome to vote on that belief.