Will the US Federal Government spend more than 1/1000th of its budget on AI Safety by 2028?
13% chance

A follow-up to this poll, but with a much lower threshold. 1/1000th of the current federal budget (roughly 6 trillion dollars) would be about 6 billion dollars.

This question resolves yes if, at any time before market close, more than 1/1000th of the US Federal budget is directed specifically to preventing existential risk from artificial intelligence.

I will leave the specifics of what counts as anti-x-risk spending undefined for now, and resolution will necessarily be somewhat subjective.

Government funding for AI research in general does not count, but spending directed specifically at AI safety does. For example, if the government had a 2027 budget of 10 trillion and spent 90 billion on increasing AI capabilities but 10 billion on AI safety, that would count.

I will not trade in this market.


Happy to hear any suggestions for what should and should not count as AI safety spending. My instinct would be to count meta-spending as long as it was directly targeted enough, like scholarships specifically for AI safety researchers.

I would also be inclined to count spending on enforcing AI regulation aimed at preventing any AI from getting above a certain level of intelligence because of x-risk.

@Joshua Do the following research areas count as safety?
1) Fairness: making machine learning models respect certain constraints on how they make their predictions, e.g. being gender-blind.
2) Security: preventing adversaries from exploiting vulnerabilities in machine learning models.
3) Interpretability: making model decisions transparent to humans.
4) Related areas such as visualization, causality, etc.

I can see the government spending a lot on 1) because it resonates ideologically, at least with the left, while 2) is an obvious need if international competition intensifies.

This is about the level of granularity at which I hesitate to offer my non-expert judgement without hearing arguments on both sides, doing research, or consulting experts.

But my initial instinct is that 1 is not relevant to existential risk, I'm very unsure about 2, and for 3 and 4 I lean towards counting them if funds are being directed specifically towards them, separately from increasing overall AI capability.