[Metaculus] Will a major cyberattack/malware use LLMs in some important way before January 1st, 2025? [Ṁ1000 subsidy]

Will a major cyberattack, virus, worm, etc. that uses LLMs in some important way occur before January 1st, 2025?

Resolves the same as the original on Metaculus.


Resolution criteria

This question will resolve positively if credible sources report that a major cyberattack, virus, or worm that uses LLMs results in any of the following:

  • At least one person is killed

  • There is at least $10 million worth of damage

  • A medical facility, government facility, or company with a market value of at least $100 million is unable to continue basic functions at any given point because of such a cyberattack, worm, or virus.

If none of these scenarios occur before January 1st, 2025, this question will resolve negatively.


This market is managed by a bot. Once the original question resolves, this market should automatically resolve the next time the bot is run. If the original resolved more than a day ago and this question is still open, ping @jskf. I might resolve this N/A if I find out it's a duplicate of an existing question within three days of market creation.


The cost of cybercrime is projected to hit an annual $10.5 trillion by 2025. LLMs aid cybercriminals in obfuscating malware code, making it harder for cybersecurity systems to detect malware. In some cases, large language models like ChatGPT can be used to both generate and transfer cybersecurity code. There are almost twice as many connected devices (15 billion) in the world as there are people. But research by the World Economic Forum indicates that only 4% of organizations are confident that "users of connected devices and related technologies are protected against cyberattacks."

https://www.forbes.com/sites/forbestechcouncil/2023/02/22/105-trillion-reasons-why-we-need-a-united-response-to-cyber-risk/?sh=1d6cf4c33b0c

https://www.forbes.com/sites/forbestechcouncil/2023/06/30/10-ways-cybercriminals-can-abuse-large-language-models/?sh=44e3ed56304c

https://www.cybertalk.org/2023/06/02/5-ways-chatgpt-and-llms-can-advance-cyber-security/

Could someone with a Metaculus account clarify whether mass LLM-written phishing attacks would count, on a scale where it would be unreasonable for a small team of humans to write all of the emails?

I think this is a good question, and these Metaculus mirror questions are good candidates in general for subsidies, to see if Manifold users can perform better than Metaculus when motivated enough.

I'm adding 1000 mana, and adding this to the Subsidy Dashboard.