
OpenAI has uncovered and banned several accounts that misused ChatGPT for malicious activities, including the development of a Chinese surveillance tool targeting anti-China protests in Western countries. The surveillance campaign, dubbed “Peer Review,” reportedly used Meta’s Llama models to monitor and analyze content across major social media platforms, including X, Facebook, YouTube, and Reddit.
Key Malicious Operations Discovered:
1. Chinese Surveillance Tool (“Qianyue Overseas Public Opinion AI Assistant”)
– Monitored anti-China protests in Western countries
– Analyzed social media posts and documents
– Tracked Uyghur rights demonstrations
2. North Korean Deceptive Employment Scheme
– Created fake job applications and supporting documentation
– Generated convincing explanations for suspicious behavior
– Targeted recruitment on LinkedIn
3. Chinese-Origin Influence Campaign
– Produced anti-US content in English and Spanish
– Published through Latin American news outlets
– Linked to the “Spamouflage” influence operation
4. Cambodia-Based Scam Operations
– Conducted romance and investment fraud
– Generated multilingual social media content
– Operated task-based financial scams
5. State-Sponsored Activities
– Iranian influence operations promoting pro-Palestinian content
– North Korean cyber operations (Kimsuky and BlueNoroff) gathering intelligence and researching attack techniques
– Political influence campaign targeting Ghana’s presidential election
These discoveries highlight a growing trend of threat actors leveraging AI tools for malicious purposes, with similar abuse observed on other platforms such as Google’s Gemini. OpenAI emphasizes the importance of collaboration among AI companies, platform providers, and researchers to combat these threats effectively.