Will effective altruism be "winning" over effective accelerationism at the end of 2024?
Jan 2
56% chance

Effective altruism and effective accelerationism hold opposing views on AI risk and AI safety. Recent articles suggest that effective altruism has established awareness in Washington, but that effective accelerationists are becoming frustrated and looking to expand their spending and influence in 2024.

Which movement appears to be influencing policy more may depend on political spending, the results of the 2024 Presidential election, whether there are additional FTX- and OpenAI-style EA blunders, and so on. At the end of 2024, which movement will have more influence in Washington, DC?

During December 2024, a survey will be conducted of mainstream media outlets' "year end" and "top 10" lists reviewing the year. If a majority of the most prominent articles state that effective altruism became well known and affected US policy, the market will resolve YES. If effective accelerationism is described as dominating policy, the market will resolve NO. Even if the decision is close, the market will still resolve to either YES or NO. It will resolve N/A only if there is an exact and irreconcilable tie, or if no articles whatsoever mention either movement.

It is not relevant whether articles mention that a movement "made progress catching up," is "up and coming," or "is expected to be a big player in 2025." If one of the movements changes its name or is replaced by a successor with the same goals, the successor will be considered in its place for this market.


I'm not betting on this market, but doesn't it seem like a Trump victory, which is probable, would favor a lack of AI regulation, even though regulation is a key priority of effective altruism? Maybe I'm wrong and future policy won't affect articles at the end of the year.

EA is like early Christianity. "Omg ${famous_person} dunked on EA on Twitter." "Look at all of these EAs we burned at the stake." But EA is not kill.

What properties define e/acc or its successors?

Simply being pro-faster-AI? “We need to get AI before global competitors like China” is a common sentiment, but it isn’t e/acc.

“Nothing can go wrong with AI” - unlikely to catch on in politics.

“Humanity should go extinct” - political suicide to associate with such a group (thank God).

Pro-open-source - I’d say that e/acc is a separate thing from the pro-open-source group.

“AI is not going to be all that important, doomers are lame, progress is good, so I’ll call myself e/acc” - I think these people aren’t really e/acc. I consider “AI is important” to be a central part of e/acc.

“Whichever group contains Beff Jezos is e/acc” - a reasonable resolution criterion. Ironically, this opens up the possibility that Beff switches to EA, in which case EA would be e/acc.

@MatthewKhoriaty I'd say the Politico article makes it pretty clear that, whatever it was originally intended to be, effective altruism is now solidly an AI safety movement. The article refers to it as a "cult" focused on AI doom. Whether you agree with that language or not, EA is now defined, for better or worse, by AI risk.

Therefore, an EA successor movement would be one that favors safety at the expense of progress, and an e/acc successor movement would be one that favors progress even at the expense of safety.

Note that if this market were for 2023, the answer would be an unambiguous YES.

I expect that people who believe in what EA used to be - the part that Politico refers to as malaria nets - will break off in 2024 and create a new movement that is smaller and explicitly excludes AI risk, but that's not the purpose of this market.

@SteveSokolowski In that case, this market is named and portrayed incorrectly, so I’m out. It isn’t “EA vs e/acc.” It’s “AI Safety vs AI Bloomers.” I’d be willing to bet on that market, though.

EA isn’t just AI Safety, and AI Safety isn’t just EA. Remember that there is also the “AI will worsen society” set of worries, which counts as Safety but isn’t the EA focus. Job loss, consolidation of power, and bias are all AI Safety positions that aren’t EA, but (perhaps because of that) attract widespread political worry.