A significant security flaw has been uncovered in the DeepSeek AI chatbot, which could potentially allow attackers to gain unauthorized access to user accounts through prompt injection attacks. Security researcher Johann Rehberger identified that specific prompts could trigger cross-site scripting (XSS) vulnerabilities, enabling malicious JavaScript code execution.
Key Findings:
• The vulnerability allowed attackers to access user session tokens stored in localStorage
• Exploitation could lead to complete account takeover
• The flaw has since been patched by DeepSeek
Related AI Security Concerns:
1. Claude Computer Use Vulnerability
– Anthropic’s system could be manipulated to execute malicious commands
– “ZombAIs” technique enables remote control through prompt injection
– Potential for unauthorized C2 framework deployment
2. Terminal DiLLMa
– Large Language Models (LLMs) can be exploited to hijack system terminals
– Affects CLI tools integrated with LLMs
– Uses ANSI escape code manipulation
3. ChatGPT Vulnerabilities
– External image links can bypass content restrictions
– Plugin activation possible without user confirmation
– Potential for chat history exfiltration
The discoveries highlight the growing importance of security considerations in AI applications, particularly regarding output validation and context-aware implementation. Developers are advised to treat LLM outputs as untrusted data and implement appropriate security measures.