Will xAI significantly rework their alignment plan by the start of 2026?
63% chance

Supposedly the xAI plan is something like this:

The premise is have the AI be maximally curious, maximally truth-seeking, I'm getting a little esoteric here, but I think from an AI safety standpoint, a maximally curious AI - one that's trying to understand the universe - I think is going to be pro-humanity from the standpoint that humanity is just much more interesting than not . . . Earth is vastly more interesting than Mars. . . that's like the best thing I can come up with from an AI safety standpoint. I think this is better than trying to explicitly program morality - if you try to program morality, you have to ask whose morality.

This is a terrible idea, and I wonder if they're going to realize that.

Resolves YES if they have meaningfully reworked their alignment plan by the beginning of 2026. They don't have to have a good plan; it resolves YES even if their new plan is centered on some other silly principle, but they do have to move meaningfully away from curiosity/truth-seeking as the goal for the AI. Adding nuances to curiosity/truth-seeking doesn't count unless I become convinced that those nuances genuinely solve alignment.


Musk is stubborn and narcissistic and probably personally attached to the plan.

@DavidMathers He's walked back his plans before due to unpopularity, sometimes almost instantly, as with some of the worst new Twitter changes. If everyone in the AI alignment community is convincingly saying this idea is bad, I can easily see him changing it, especially by 2026 as new alignment research and results come out.