The first Anthropic-trained AI system to qualify as ASL-3 qualifies because of misuse risks rather than autonomy risks
69% chance
This will be evaluated according to the AI Safety Levels (ASL) standard v1.0 defined by Anthropic here, which gives two different ways an AI system could qualify for AI Safety Level 3 (ASL-3): misuse risk and autonomy risk. This question resolves based on the first clear public disclosure by Anthropic indicating that it has trained a model and found it to qualify for ASL-3.
If Anthropic announces a policy that would prevent this information from being disclosed, announces that it has permanently ceased developing new AI systems, or ceases to operate, this question will resolve N/A six months later.
Related questions
When will Anthropic first train an AI system that they claim qualifies as ASL-3?
When will there first be an AI system that qualifies as ASL-3?
Will Anthropic announce one of their AI systems is ASL-3 before the end of 2025?
59% chance
When will there first be a credible report that an AI system qualifies as ASL-3?
Will technical limitations or safeguards significantly restrict public access to smarter-than-almost-all-humans AGI?
45% chance
If Artificial General Intelligence has an okay outcome, which of these tags will make up the reason?
There will be a name for escaped self-perpetuating AI systems in the wild, and it will be commonly used by mid 2027
28% chance
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
41% chance
Before 2050, will an AI system be shut down due to exhibiting power-seeking behavior?
66% chance
Will there be serious AI safety drama at Meta AI before 2026?
58% chance