State-Backed Hackers Are Using AI for Sophisticated Phishing and Distillation Attacks, Google Warns

Google's Threat Intelligence Group (GTIG) has published a comprehensive report detailing how state-backed hackers from Iran, China, North Korea, and Russia are increasingly weaponizing artificial intelligence for cyber operations. The findings paint a disturbing picture of AI-enhanced threats that go far beyond simple automation — these are sophisticated, multi-layered attacks that exploit AI's unique capabilities.
Rapport-Building Phishing: The End of Grammar Checks
Perhaps the most alarming finding in the report is the emergence of what GTIG calls "rapport-building phishing." Unlike traditional phishing emails — often riddled with grammatical errors and generic appeals — these AI-powered attacks involve multi-turn, culturally nuanced conversations that build trust over days or weeks before delivering a malicious payload.
Iranian-linked threat actors, in particular, have been observed using AI to craft personalized outreach that mimics the communication patterns of legitimate journalists, academics, and diplomats. The AI generates contextually appropriate responses, references real events and publications, and adapts its tone based on the target's responses — making these campaigns virtually indistinguishable from genuine professional correspondence.
Distillation Attacks: 100K+ Prompts to Steal AI Models
The report reveals that Chinese-affiliated groups have launched large-scale "distillation attacks" against leading AI models, submitting over 100,000 carefully crafted prompts designed to extract the underlying knowledge and capabilities of proprietary AI systems. The goal is to create local copies of powerful AI models without the billions of dollars in training costs.
These attacks work by systematically querying an AI model across its full capability range, then using the responses to train a smaller, local model that replicates much of the original's functionality. While the replicated models are typically less capable than the originals, they provide state actors with AI capabilities that operate entirely outside the reach of Western safety guardrails and monitoring systems.
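The loop described above, query the model broadly, record its answers, then fit a cheap local copy on the transcript, can be sketched in miniature. Everything in this sketch is illustrative: `teacher` stands in for a black-box proprietary model reachable only through an API, and `Student` is a deliberately naive learner. A real attack would fine-tune a neural model on hundreds of thousands of prompt/response pairs rather than memorize them.

```python
from collections import Counter

def teacher(prompt: str) -> str:
    """Stand-in for a proprietary model behind a rate-limited API."""
    # Trivial rule-based behaviour the attacker wants to replicate.
    return "positive" if "good" in prompt else "negative"

def harvest(prompts):
    """Step 1: bulk-query the teacher across its capability range."""
    return [(p, teacher(p)) for p in prompts]

class Student:
    """Step 2: a small local model fit only on the harvested pairs."""
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        for prompt, response in pairs:
            self.memory[prompt] = response

    def predict(self, prompt):
        # Exact recall for seen prompts; a crude majority-vote fallback
        # otherwise, which is why distilled copies trail the original.
        if prompt in self.memory:
            return self.memory[prompt]
        return Counter(self.memory.values()).most_common(1)[0][0]

pairs = harvest(["good day", "bad day", "good food", "bad food"])
student = Student()
student.train(pairs)
print(student.predict("good day"))  # → positive
```

The sketch also shows why scale matters to the attacker: the student is only as good as the coverage of the harvested prompts, which is consistent with the 100,000-plus queries the report describes.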
Deepfake Social Engineering
North Korean threat actors have been particularly active in using AI-generated deepfakes for social engineering. The report documents cases where AI-generated video and audio were used to impersonate executives during virtual meetings, convincing employees to transfer funds or share sensitive credentials.
Russian groups, meanwhile, have focused on using AI to generate and distribute disinformation at scale, creating convincing fake news articles, social media posts, and even fabricated "leaked" documents designed to influence public opinion and disrupt democratic processes.
AI as a Force Multiplier
What makes the GTIG findings particularly significant is the report's conclusion that AI is not creating fundamentally new types of attacks — rather, it's acting as a force multiplier for existing techniques. Phishing campaigns that previously required teams of linguists and cultural experts can now be generated by a single operator with AI tools. Social engineering that demanded hours of manual research can be automated and personalized at scale.
The implications for cybersecurity are profound. Traditional defenses that rely on detecting anomalies in communication patterns — misspellings, awkward phrasing, cultural mismatches — are becoming obsolete. AI-generated phishing content is grammatically perfect, culturally appropriate, and contextually relevant, forcing defenders to develop entirely new detection methodologies.
Industry Response
Google says it has implemented new detection mechanisms specifically designed to identify AI-enhanced attacks, including behavioral analysis systems that look for patterns characteristic of AI-generated content and automated interaction sequences. The company has also expanded its threat intelligence sharing with other tech companies and government agencies.
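As one concrete, and entirely hypothetical, example of the kind of automated-interaction signal such behavioral systems could use: human reply timing is irregular, while scripted multi-turn outreach often paces itself with near-uniform gaps. The function and threshold below are illustrative assumptions, not Google's actual detection logic.

```python
from statistics import mean, pstdev

def looks_automated(reply_gaps_seconds, cv_threshold=0.15):
    """Flag a conversation whose reply gaps are suspiciously uniform.

    Uses the coefficient of variation (stdev / mean): a very low value
    means near-constant pacing, a weak signal of scripted interaction.
    The 0.15 threshold is an arbitrary illustrative choice.
    """
    if len(reply_gaps_seconds) < 3:
        return False  # too few turns to judge
    m = mean(reply_gaps_seconds)
    if m == 0:
        return True
    return pstdev(reply_gaps_seconds) / m < cv_threshold

print(looks_automated([60.0, 60.5, 59.8, 60.2]))   # → True (metronomic)
print(looks_automated([45.0, 600.0, 120.0, 30.0]))  # → False (human-like)
```

In practice a single timing heuristic would be one feature among many; the report's point is that content-based tells alone no longer suffice, so defenders must combine behavioral signals like this one.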
The report recommends that organizations adopt a "zero trust" posture toward all communications, implement multi-factor authentication universally, and retrain employees for AI-enhanced social engineering: attacks deliberately engineered to pass the checks, such as spotting poor grammar or awkward phrasing, that traditional awareness training relied on.
As AI capabilities continue to advance, the arms race between attackers and defenders is entering a new phase — one where the old rules of thumb for spotting scams no longer apply, and the line between genuine and fabricated communication becomes increasingly difficult to discern.