Will the AI Safety summit in France of 2025 weaken AI Safety commitments?
Ṁ183 · Closes Mar 1 · 56% chance

Question resolves to Yes if the number of previous commitments and proposals that are cancelled or denied is greater than or equal to the number of new commitments related to AI Safety.

If zero commitments are secured, the question resolves to Yes, unless the most-read article on the summit in mainstream media depicts AI Safety work on X-risks AND does so positively or neutrally.


@mods The summit got postponed to February 2025; is it possible to push the resolution date forward to March 1st, 2025?

@CamilleBerger you can do that yourself; the setting is just a little hidden. Click on the close time in the upper right corner and it will open the dialog.

I think that counting commitments isn't a good proxy for effective safety.

@RemNi Also, the idea that France is somehow more inclined than the UK to encourage AI race dynamics (to the detriment of safety) seems misguided.

@RemNi I don't aim to measure safety effectiveness, merely the general dynamics.

Put differently, I think it is somewhat probable that LeCun is a big marker of how the summit will go.

@CamilleBerger Maybe rephrase the title to something along the lines of "Will the AI Safety summit in France of 2024 weaken AI Safety commitments"?

@CamilleBerger Then the title would be aligned with the intent of the question.

@CamilleBerger Otherwise the intent reads as swaying readers towards an ideological position.

Rationale: France has appointed an AI Safety board whose members are all known, either publicly or privately, to be hostile to regulations related to X-risks and to AI Safety at large. This includes, for example, Yann LeCun and Luc Julia. Media in France also seem hostile to X-risk, or avoid mentioning it. Finally, France has incentives to partake in and encourage a race to AGI.

Organizing the summit in France thus seems to be an opportunity to underline and push forward anti-regulatory agendas.

On the other hand, the UK summit seemed to indicate some favorable views on safety, and the EU seems more favorable to regulation. A lot can happen in AI capabilities, safety research, or safety field-building/advocacy by then, possibly shifting the Overton window. Also, I never studied geopolitics, so I retain some uncertainty.