Will humanity exit the semi-anarchic default condition before 2035?
2035 · 23% chance · Ṁ1312

In his paper "The Vulnerable World Hypothesis," Nick Bostrom introduces the term "semi-anarchic default condition" to describe the current state of human affairs, in which individuals can, roughly, do what they want, and the population as a whole wants a wide variety of things. His full definition:

"By the ‘semi-anarchic default condition’ I mean a world order characterized by three features:

1. Limited capacity for preventive policing. States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions – particularly actions that are very strongly disfavored by > 99 per cent of the population.

2. Limited capacity for global governance. There is no reliable mechanism for solving global coordination problems and protecting global commons – particularly in high-stakes situations where vital national security interests are involved.

3. Diverse motivations. There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level) – in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (‘the apocalyptic residual’) who would act in ways that destroy civilization even at high cost to themselves."

This question resolves YES if at any point before 2035, it becomes widely understood that humanity has exited the semi-anarchic default condition.


Does it still count if it happens as a result of ASI rather than in preparation for it? It intuitively seems like that's the bulk of the probability.

@TheAllMemeingEye Yes that counts, assuming we get to resolve this market.

So this is basically asking whether humanity will become a totalitarian world-state of homogeneous people? And Bostrom is heavily implying that this is necessary to prevent existential risk? Yikes, no wonder people are suspicious of him lol

@TheAllMemeingEye I think his point is more nuanced. He's not endorsing exiting the SADC; he's saying that if humanity finds itself in certain game-theoretic situations (according to a taxonomy he develops in the paper) without having exited the SADC, then catastrophic outcomes are likely.

@AdamK I'm not wild about delving into the paper based on that. Surveillance capability is accelerating and is already at dangerous levels; it's not at all clear that there's momentum for global cooperation, much less governance; and as for diversity of self-interest, if that were to disappear, so would the field of game theory.