Resolution criteria
Each answer option will resolve to "Yes" if the individual or organization publicly endorses "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares. An endorsement is defined as a public statement of support or recommendation for the book, verifiable through reputable sources such as official press releases, interviews, or social media posts. A repost of someone else's opinion counts (e.g., if OpenAI retweets a recommendation from Sam Altman, that counts as an endorsement by OpenAI). For media outlets, an endorsement in a feature article counts unless the answer specifies an "editorial". A mixed opinion that overall recommends the book and could be quoted for a blurb counts. If no such endorsement is made by the resolution date, the answer will resolve to "No". The market will close on December 16, 2025, three months after the book's release date, and will resolve based on endorsements made up to that date.
Background
Eliezer Yudkowsky is an American artificial intelligence researcher and writer, known for his work on AI safety and decision theory. He is a co-founder of the Machine Intelligence Research Institute (MIRI). Nate Soares is the president of MIRI and has co-authored several papers on AI safety. Their upcoming book, "If Anyone Builds It, Everyone Dies", is scheduled for publication by Little, Brown and Company on September 16, 2025.
Um, very unsure what to do with that one.
"In this urgent clarion call to prevent the creation of artificial superintelligence (ASI), Yudkowksy and Soares, co-leaders of the Machine Intelligence Research Institute, argue that while they can’t predict the actual pathway that the demise of humanity would take, they are certain that if ASI is developed, everyone on Earth will die. The profit motive incentivizes AI companies to build smarter and smarter machines, according to the authors, and if “machines that think faster and better than humanity” get created, perhaps even by AIs doing AI research, they wouldn’t choose to keep humans around. Such machines would not only no longer need humans, they might use people’s bodies to meet their own ends, perhaps by burning all life-forms for energy. The authors moderate their ominous outlook by noting that ASI does not yet exist, and it can be prevented. They propose international treaties banning AI research that could result in superintelligence and laws that limit the number of graphic processing units that can be linked together. To drive home their point, Yudkowsky and Soares make extensive use of parables and analogies, some of which are less effective than others. They also present precious few opposing viewpoints, even though not all experts agree with their dire perspective. Still, this is a frightening warning that deserves to be reckoned with."
ChatGPT says Yes:

But it did not (from what I can see without a subscription) get a starred review, and it reads more like an average review than an endorsement that "overall recommends"?
Added more answers

@ms To clarify, I am referring to Pope Tawadros II of Alexandria and Patriarch Theodore II of Alexandria.

<3
Why is Greg Egan 50%?
@ms https://www.vox.com/future-perfect/414087/artificial-intelligence-openai-ai-2027-china
Vice President JD Vance has reportedly read AI 2027, and he has expressed his hope that the new pope — who has already named AI as a main challenge for humanity — will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We’ll see.
Dude thinks the pope is gonna save us lmao, we're toast
@TheAllMemeingEye I mean it makes sense that a high-level cleric might know some powerful spells to stop AGI from killing everyone
@TheAllMemeingEye Tbh, if the pope himself came out and said "AI is going to kill us all unless we do something", that would probably have a very large impact. Imo, that's a step above "making the headlines": probably past smashing the mainstream Overton window to pieces (more so than statements from people like Trump or Guterres would be), and possibly all the way to a shift in the global conversation.
@UnspecifiedPerson I fear his approach may be a continuation of the current strategy: praying it goes OK and writing extremely long, extremely low-news-coverage posts that, from a momentary skim, seem to argue that AI isn't truly smart the way God's in-his-image creation surely is (https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html).
However, if, as you say, he does the sane thing and leverages his position to advocate for the actual AI safety movement in a publicly engaging way, then that might actually help a lot.