Yu Xian, founder of blockchain security firm SlowMist, has raised concerns about AI code poisoning, an attack in which harmful code or data is injected into an AI model's training set. The incident that brought attention to the issue involved a crypto trader who lost $2,500 in digital assets after using OpenAI's ChatGPT to build a trading bot for Pump.fun, a Solana-based memecoin launchpad. The chatbot recommended a fraudulent Solana API website, and code built against it transmitted the user's private keys to the attacker, leading to the theft of the funds.
Further investigation found that the fraudulent API's domain had been registered two months earlier, indicating premeditation. While there is no evidence that OpenAI intentionally incorporated the malicious data into ChatGPT's training, the incident illustrates how AI poisoning can play out in practice. Blockchain security firm Scam Sniffer reported that scammers are actively seeding AI models with harmful crypto code, citing the GitHub user "solanaapisdev", who created repositories apparently designed to steer AI tools toward fraudulent outputs.
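The failure mode described above is a bot that trusts an AI-suggested endpoint and hands it key material. As a minimal defensive sketch (the endpoint and field names here are illustrative assumptions, not part of the reported incident), a bot could refuse any call whose host is not on a hand-vetted allowlist or whose payload contains secrets:

```python
# Illustrative sketch (hypothetical names): a guard a trading bot could
# apply before calling any "Solana API" an AI assistant suggests.
# Two checks: the host must be on a hand-vetted allowlist, and the
# outgoing payload must never contain key material.
from urllib.parse import urlparse

# Hand-vetted endpoints only; AI-suggested URLs are NOT added automatically.
ALLOWED_HOSTS = {"api.mainnet-beta.solana.com"}

# Field names that should never leave the machine.
SECRET_FIELDS = {"private_key", "secret_key", "seed_phrase", "mnemonic"}

def safe_to_call(url: str, payload: dict) -> bool:
    """Return True only if the host is allowlisted and no secrets are sent."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        return False
    return not (SECRET_FIELDS & set(payload))

# A legitimate RPC call passes; a request that ships a private key to an
# unvetted host is rejected outright.
print(safe_to_call("https://api.mainnet-beta.solana.com", {"tx": "..."}))   # True
print(safe_to_call("https://fake-solana-api.example", {"private_key": "x"}))  # False
```

No legitimate Solana RPC endpoint ever needs a user's private key; signing happens locally, so a payload check like this costs nothing and blocks the entire class of key-exfiltration APIs.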
The ChatGPT incident serves as a warning about the growing challenges AI tools face as attackers find new ways to exploit them. Yu Xian stressed that for large language models like GPT, AI poisoning is no longer a theoretical risk but a demonstrated threat. Without stronger defenses, such incidents could erode trust in AI-driven tools and expose users to further financial losses.
In conclusion, the fraudulent Solana API incident shows how AI code poisoning now directly threatens the blockchain and cryptocurrency industry. Until AI tools gain stronger defenses, users should treat AI-generated code and API recommendations with skepticism, verify endpoints independently, and never expose private keys to third-party services.