Changpeng “CZ” Zhao, the former CEO of Binance, recently described his encounter with an AI-generated video that replicated his voice and facial expressions. The video, posted on X, showed Zhao speaking Mandarin so convincingly that he said he could not distinguish it from a real recording. The incident underscores how easily deepfake technology can now be used to impersonate public figures and produce fraudulent content without their consent.
Although Zhao stepped down as CEO of Binance in 2023, he remains a prominent figure in the crypto industry and has repeatedly warned about deepfake impersonation attempts. In a post from October 2024, he advised users to distrust any video footage requesting crypto transfers, noting that altered content bearing his likeness was circulating online. The new Mandarin-language video sharpens that warning: scammers can now produce audio and visual simulations convincing enough to deceive even the person being mimicked.
Impersonation tactics built on deepfake technology have moved beyond static images and text, with scammers deploying synthetic footage of public figures in live settings. In 2022, Binance’s Chief Communications Officer, Patrick Hillmann, revealed that scammers had used a video simulation of him to hold Zoom meetings with crypto project representatives. Zhao’s experience with voice replication shows how far the technology has advanced since then, and how much the fraud risk from AI-generated impersonation has grown.
Voice cloning now requires minimal input: tools such as ElevenLabs can generate a convincing clone from just a brief audio recording. Surveys suggest that more than a quarter of UK adults encountered a cloned-voice scam in the past year, an indication of how widespread this type of fraud has become. And while some commercial models require opt-in consent and embed watermarks in their output, low-cost alternatives without such safeguards circulate on darknet marketplaces, posing a significant threat to individuals and organizations.
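The watermarking some commercial providers offer can be illustrated with a toy spread-spectrum scheme: a key-derived pseudorandom sequence is mixed into the audio at low amplitude, and a detector that knows the key recovers it by correlation. The sketch below is purely illustrative and assumes nothing about any vendor's actual implementation; all function names and parameters here are hypothetical.

```python
import math
import random

def keyed_sequence(key: int, n: int) -> list[float]:
    """Pseudorandom ±1 sequence derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed_watermark(audio: list[float], key: int, strength: float = 0.01) -> list[float]:
    """Mix the key's sequence into the signal at low amplitude (spread-spectrum style)."""
    mark = keyed_sequence(key, len(audio))
    return [a + strength * m for a, m in zip(audio, mark)]

def detect_watermark(audio: list[float], key: int, threshold: float = 0.005) -> bool:
    """Correlate with the key's sequence; marked audio scores far above chance."""
    mark = keyed_sequence(key, len(audio))
    score = sum(a * m for a, m in zip(audio, mark)) / len(audio)
    return score > threshold

# One second of a 220 Hz tone at 16 kHz stands in for recorded speech.
clean = [0.1 * math.sin(2 * math.pi * 220 * t / 16000) for t in range(16000)]
marked = embed_watermark(clean, key=42)

print(detect_watermark(marked, key=42))  # True: the correct key finds the mark
print(detect_watermark(clean, key=42))   # False: unmarked audio fails the check
```

Note that a scheme like this only helps when the generator cooperates by embedding the mark, which is exactly why unwatermarked darknet tools undermine it.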
In response to the rise of deepfake technology, the European Union has enacted the Artificial Intelligence Act, which mandates that deepfake content be clearly labeled when deployed in public settings. However, full compliance with the law is not expected until 2026, leaving a window in which fraudulent activity can continue largely unchecked. Some hardware manufacturers are moving faster, integrating detection capabilities directly into consumer devices, as demonstrated at Mobile World Congress 2025 in Barcelona. These on-device tools aim to flag audio or visual manipulation in real time, reducing users’ reliance on external verification services.