The Rising Threat of AI Voice Scams: What Every Tibetan Needs to Know

Imagine you receive a phone call from your close friend. They tell you they are in serious trouble and urgently need you to send money. Panic sets in, and without a second thought you transfer the funds. Later, when you speak to them in person, you discover the truth: they never called, were never in trouble, and never asked for money.

You have been scammed. On the surface, it looks like ordinary fraud. But think carefully about what happened and you may be shocked: how could someone copy your friend's voice so perfectly? This is not a regular scam. It is something far more dangerous: AI-powered voice phishing.

What is AI voice phishing?

Voice phishing is a scam in which a criminal uses a phone call or recorded voice message, pretending to be someone you trust, to trick you into handing over money or sensitive information. Unlike older versions of this scam, AI-powered voice phishing uses a near-perfect clone of the actual voice of the person being impersonated. Most people simply cannot tell the difference between the real voice and the AI-generated fake.

How does AI voice cloning work?

With the rapid advancement of artificial intelligence, cybercriminals need as little as three seconds of audio to clone a real person's voice. The cloned voice is nearly indistinguishable from the original, and AI can make it say anything the attacker types.
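
To see just how low the barrier is, here is a minimal sketch of few-shot voice cloning, assuming the open-source Coqui TTS library and its publicly documented XTTS v2 model; the file names and spoken text are hypothetical placeholders. Security researchers use demonstrations like this to show that cloning a voice now takes only a few lines of code.

    # A minimal voice-cloning sketch, assuming the open-source Coqui TTS
    # library (pip install TTS) and its pretrained XTTS v2 model.
    from TTS.api import TTS

    # Download and load the multilingual voice-cloning model.
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # Synthesise speech in the target voice from a few seconds of
    # reference audio, e.g. a clip taken from a public social media video.
    tts.tts_to_file(
        text="Hi, it's me. Something urgent has come up.",
        speaker_wav="reference_clip.wav",  # hypothetical short recording
        language="en",
        file_path="cloned_output.wav",     # hypothetical output file
    )

The point is not this specific tool: many similar services exist, some free and some commercial, and none of them require any machine-learning expertise to use.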

How do scammers steal your voice?

Criminals do not need to break into your home to steal your voice — they simply go online. Every time you post a video on Facebook, make a reel on Instagram, appear in a YouTube clip, or speak on a podcast, your voice is out there. Once scammers have even a few seconds of your audio, they use cheap and widely available AI tools to generate a convincing fake. The attacker then uses this cloned voice to call your loved ones, creating a false emergency to demand an urgent money transfer or sensitive information.

A massive and growing problem

This is not a rare threat; it is happening right now, on a massive scale. A McAfee study of 7,000 people found that the majority of respondents could not tell the difference between a cloned voice and a real one, and that many who received AI-generated voice messages lost money as a result [1]. As with other scams, cybercriminals use these tools to manufacture a sense of urgency and distress [2]. The low cost and wide availability of the technology have significantly lowered the barrier to entry for attackers.

According to research by Truecaller, voice-based fraud causes an estimated $25 billion in losses every year. Meanwhile, 37% of organisations worldwide have already fallen victim to voice deepfake scams [3]. AI-related fraud attempts surged by 194% in 2024 compared to 2023 [4].

The specific threat to the Tibetan community

China is well known for using disinformation to sow mistrust and disunity among Tibetans in the diaspora, and AI-based voice cloning is a natural extension of these efforts [5]. AI-generated deepfakes are not new to the Tibetan community: the deepfake videos of Mingyur Rinpoche that circulated in 2024 are one example. While those were video-based, they featured highly convincing cloned audio that led some viewers to believe they were real.

Who is most vulnerable?

Elderly people and those with limited digital literacy are among the most at risk. A familiar-sounding voice often bypasses natural scepticism, leading to emotionally driven decisions that people later regret.

How to protect yourself

The good news is that protecting yourself does not require any technical expertise. Here are practical steps anyone can take.

Hang up and call back. If you receive an urgent call asking for money, hang up and call the person back using the number already saved in your contacts to confirm it was really them.

Protect your identity online. Minimise the personal audio and video you share publicly. Setting your social media accounts to “Private” or “Friends only” limits how much material scammers can access.

Use your phone’s built-in protections. Turn on spam call blocking and enable the option to silence calls from unknown numbers.

Set a verbal codeword. Agree on a secret codeword with your close family and friends. If anyone calls claiming an emergency and asking for money, ask for the codeword: a cloned voice can mimic how someone sounds, but a scammer will not know what only the two of you agreed on.

These simple steps, combined with a habit of pausing before acting on any urgent request, can go a long way toward keeping you and your loved ones safe.

References

[1] McAfee: Artificial Imposters - Cybercriminals Turn to AI Voice Cloning

[2] Reality Defender: AI Calls as the New Battleground for Social Engineering

[3] Group-IB: Voice Deepfake Scams - How They Work

[4] Reality Defender: AI Fraud Statistics and Surges

[5] The Hacker News: Chinese Hackers Using AI for Advanced Operations