Will someone I know be targeted by a generative AI based scam by EoY 2026?

Resolves YES if, by EoY 2026, there is an attempt to scam someone I personally know using a technique made possible by generative AI, and I learn about it.

The scam attempt can use voice generation, video generation, or any similar form of AI-based media generation. The scammer must attempt to extract something (money, resources, or information). The attempt need not be successful.

The scam must target this individual specifically, e.g. by faking the voice of a friend, as opposed to, say, a mass-circulated deepfake of a famous personality requesting that money be sent to a bitcoin wallet.

I'll only resolve positively if the target is someone I know in a personal capacity (friends, family, colleagues, etc.).

Resolves NO if I learn of no such scam by EoY 2026.

There may be some subjective calls in the resolution of this market, so I will not bet.

Will you be surveying your friends and family? Or do they have to come to you unprompted with the information?

Additionally, do they need proof of the event having happened? Or will their word be enough?

@Quroe I'm hesitant to say it needs to be "unprompted", since I may be the one to bring up the topic in natural conversation, and that would technically be prompted by me. But I don't plan to systematically survey people for the purpose of resolving this market.

I can't think of reasons why their word wouldn't be enough for most of the people I know. Let's just say I'll use my judgement on that, and I'll resolve positively if I'm 98%+ sure that it actually happened.

Thanks for your questions.

@galaga I appreciate the answer. Sampling methods matter. 😊

@galaga Here is a problematic edge case I've encountered in the last couple of days regarding phone calls:

Some people answer the phone and assume the caller is a particular person, even though the caller never states their own name. This is especially a problem when you account for the age and cognitive faculties of the person answering. Without a third party present (or a recording) who can verify that the voice does indeed mimic that person's voice, it will be difficult to say whether the call was AI-generated or carried out entirely by a human. There is also a matching problem: a caller who never identifies themselves but happens to sound like someone the target knows is an equally hard case.

If we allow claims from relatives who merely assumed a particular person was calling, we may not be able to get definitive proof of AI involvement unless they record their phone calls.

@parhizj True, such cases would be hard to adjudicate in general.

However, my resolution (see my other comment) is based on whether I am personally 98%+ sure that the conditions are met, and in a case like that I probably won't be.

Note: I aim to resolve positively as soon as I hear about such an event.