Ever wonder what spies and scammers are doing with ChatGPT? Spoiler: it’s not just writing homework essays.
OpenAI just dropped a chaotic new threat report—honestly, it could’ve been titled “Threat Level: Midnight”—and it reads like a crossover episode of Mr. Robot and Black Mirror. It outlines how threat actors from China, Russia, North Korea, Iran, and beyond are using OpenAI models for cyberattacks, misinformation campaigns, and digital espionage.
AI is supercharging bad actors—but also exposing them.
By using ChatGPT, these threat groups are leaving behind a digital trail of prompts, code, and intentions, giving OpenAI a behind-the-scenes look at how modern threats are evolving.
Here are the Top 5 Wildest Operations uncovered in the report:
🔹 North Korea: Faking Résumés & Hacking Interviews
Hackers linked to North Korea used ChatGPT to generate realistic résumés for fake remote job applications—especially in IT. They even researched how to bypass live video interviews, using VPNs and spoofing tools to fake their identities during the call.
🔹 China: Influence Campaigns & Self-Written Performance Reviews
A group dubbed “Sneer Review” created fake debate on TikTok and X using AI-generated comments. The twist? They also used ChatGPT to write their own internal performance reviews, mapping out content timelines and campaign metrics like it was an actual company.
🔹 Russia: Chatbot-Built Malware
In an operation called “ScopeCreep,” a Russian-speaking hacker used ChatGPT to write and debug Windows malware, which they then embedded inside a popular gaming tool. The model helped troubleshoot every step like an obedient little cybercrime intern.
🔹 China (Again): Dividing America with “Uncle Spam”
This campaign created AI-generated political content—on both sides of controversial U.S. topics like tariffs. They even used AI image tools to invent logos for fictitious advocacy groups, like a fake “Veterans for Justice” page aimed at stoking division.
🔹 Philippines: Troll Army for Hire
A PR firm in the Philippines ran a campaign—dubbed “Operation High Five”—that used AI to flood TikTok and Facebook with pro-government comments, even mocking political opponents with nicknames like “Princess Fiona.”
This report isn’t just juicy—it’s a crystal ball.
AI is lowering the barrier to entry for cybercrime and propaganda, letting under-resourced actors run campaigns that once required a whole team. One person with a laptop and ChatGPT can now punch way above their weight.
And here’s the twist: AI is also their downfall.
Everything these bad actors ask the model—every line of code, every propaganda draft, every debugged error—becomes a breadcrumb. OpenAI is watching. It’s like they’re unknowingly narrating their schemes out loud to the very platform they're abusing.
So yes—the spies are being spied on.
And in this thriller, the chatbot talks back.