Generative AI helps workers increase efficiency by automating many routine tasks. It can gather information, draft emails, translate text into multiple languages, create images and videos, and even write software code.
Unfortunately, all those capabilities help cybercriminals become more efficient, too.
Cybercriminals are increasingly using AI for more sophisticated attacks, including phishing, malware creation and social engineering. AI allows them to create personalized and convincing scams, automate various aspects of attacks, and generate ransomware. As we noted in a previous post, cybercriminals are also jailbreaking AI models to generate malicious content.
Cybercriminals even have their own gen AI chatbot called GhostGPT. Designed without any of the safety mechanisms built into mainstream AI tools, GhostGPT can be used to identify and exploit software vulnerabilities, develop sophisticated malware and generate phishing emails that are extremely difficult to detect. In February 2025, CISA and the FBI issued an advisory about known GhostGPT attacks.
Organizations should be taking steps to protect against this growing threat. Developing an effective mitigation strategy starts with understanding how cybercriminals are using gen AI.
Phishing
Organizations have generally relied on a “human firewall” to protect against phishing by training users to spot phishing emails. While training is still important, gen AI is making it increasingly difficult for humans to detect these attacks.
AI tools can be used to generate well-written phishing emails that are more effective at tricking victims into divulging sensitive information. Cybercriminals use AI to gather information about the targeted individual so they can craft convincing messages. Translation tools eliminate the grammar and syntax errors that once riddled phishing emails. Automation enables cybercriminals to generate an immediate response if the victim takes the bait.
Vishing, or voice phishing, is similar to phishing but uses phone calls or voice messages to trick people into revealing personal information. According to the CrowdStrike 2025 Global Threat Report, vishing attacks increased 442 percent in the second half of 2024. Deepfakes are making vishing more effective.
Deepfakes
In January 2024, a multinational company lost more than $25 million to a deepfake scam. An employee in the finance department joined a Zoom meeting with the organization’s CFO and other colleagues — except everyone else on the call was an AI-generated fake. As instructed, he wired the money to the cybercriminals to facilitate a “secret transaction.”
Cybercriminals are using generative adversarial networks (GANs) to create deepfake videos and audio recordings that impersonate trusted individuals. One algorithm, the generator, creates a fake image, video or audio recording, while a second algorithm, the discriminator, tries to determine whether it is real. The two algorithms train against each other iteratively, and the generator keeps improving until the fake is extremely difficult to detect.
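This generator-versus-discriminator dynamic is easiest to see in a stripped-down example. The sketch below is purely illustrative (a one-parameter "generator" learning to mimic a stream of numbers, not actual deepfake code; all variable names are our own): the generator shifts random noise to match real data, while a simple logistic discriminator tries to tell the two apart.

```python
# Toy adversarial training loop, illustrating the GAN idea in one dimension.
# "Real" data are samples from N(4, 1); the generator learns a shift theta
# so that its output (noise + theta) becomes indistinguishable from real data.
import numpy as np

rng = np.random.default_rng(0)

theta = 0.0       # generator parameter: g(z) = z + theta
w, b = 0.0, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)          # authentic samples
    fake = rng.normal(0.0, 1.0, batch) + theta  # generator output

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator update: adjust theta so the discriminator rates fakes as real.
    fake = rng.normal(0.0, 1.0, batch) + theta
    d_fake = sigmoid(w * fake + b)
    theta -= lr * np.mean((d_fake - 1.0) * w)

print(f"learned shift: {theta:.2f}")  # drifts toward the real mean of about 4
```

Each round, the discriminator gets better at spotting fakes, which in turn gives the generator a sharper signal for making its output more realistic. Deepfake tools apply the same feedback loop at vastly larger scale, with neural networks generating images, video and audio instead of a single number.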
While real-time video manipulation remains difficult, pre-recorded deepfake videos are increasingly common. In a 2024 survey conducted by Deloitte, about 15 percent of executives said their companies had been targeted with deepfake scams at least once in the preceding year. The 2025 Identity Fraud report from Entrust and Onfido found that a deepfake incident occurred every five minutes in 2024.
Malware Generation
Cybercriminals are using AI to create malware variants with customized features, making them more difficult to detect with traditional cybersecurity measures. AI-powered tools enable even low-skilled hackers to generate phishing kits, ransomware scripts and other attack tools, and to scale their campaigns for maximum effect. AI algorithms can also be used to accelerate brute-force attacks and crack passwords more quickly.
In addition to generating malware, gen AI can understand code, including machine code. Attackers who gain a foothold in the network can use AI to analyze public-facing code to identify vulnerabilities. AI can then analyze source code inside the network faster than humans can, and generate malware variants that spread to every system they can reach.
While gen AI has trouble generating malware from scratch, it is adept at rewriting and obfuscating existing malware. In a recent report, cybersecurity researchers found that gen AI could rapidly create 10,000 variants of a malware sample with the same functionality, and those variants evaded detection 88 percent of the time.
How Cerium Can Help
The rise of AI-powered cybercrime poses a significant challenge that requires new approaches to threat detection and prevention. The good news is that AI can combat malicious AI. AI-based cybersecurity solutions are emerging that are effective at mitigating these threats.
The security experts at Cerium are here to help you implement the advanced tools and techniques needed to protect against AI-powered phishing, malware, deepfakes and other attacks. Contact a member of our team to schedule a confidential consultation.