porkeynote
Menu
  • Home
  • About
  • Categories
    • Urban Fiction
    • People
    • CyberSecurity – My Journey
Menu

When AI Becomes the Attacker: The Rise of AI-Powered Cybercrime

Posted on February 9, 2026 by ndiki

The call that sounded exactly right

The phone rang at 3:47 PM on a Tuesday.

Sarah , CFO of a mid-sized biotech firm in San Diego, glanced at the caller ID. It was her CEO, Marcus. She picked up immediately.

“Sarah, it’s Marcus.” The voice was unmistakable, the slight rasp, the Boston accent that crept in when he was stressed, even the way he paused before getting to the point. “Listen, I’m in a meeting with the acquisition team. We need to wire $340,000 to secure the due diligence escrow by end of day. I’m sending you the account details now.”

Sarah frowned. “Marcus, I don’t remember seeing this on the…”

“I know, I know. It came up last minute. The board approved it this morning. Check your email, the wire instructions should be there.”

She opened her inbox. There it was. Subject line: “URGENT: Acquisition Escrow – EOD Required.” The email looked legitimate. Marcus’s signature. The company logo. Even the legal disclaimer at the bottom.

“Got it,” Sarah said. “I’ll process it now.”

“Thanks, Sarah. You’re a lifesaver. I’ve got to get back to this meeting.”

The call ended.

Sarah initiated the wire transfer. $340,000. Sent.

Twenty minutes later, her phone rang again. It was Marcus. The real Marcus.

“Hey Sarah, quick question…did accounting close out the Q2 reconciliation? I need it for the board deck.”

Sarah’s stomach dropped. “Marcus… weren’t you just in a meeting about the acquisition team?”

“What? No. I’m in the parking lot. I just left the dentist.”

The acquisition didn’t exist.

The voice on the phone wasn’t Marcus. It was AI.

And $340,000 was gone.

Welcome to the age of AI-powered cybercrime

Sarah’s story isn’t fiction. It’s a composite of thousands of real attacks happening right now, enabled by artificial intelligence that’s sophisticated enough to fool even the most security-aware professionals.

We’re not talking about clumsy phishing emails with broken English anymore. We’re talking about AI systems that can:

  • Clone your voice from a 3-second audio clip
  • Generate personalized phishing emails that reference your actual projects, colleagues, and deadlines
  • Create deepfake videos of your CEO authorizing fraudulent transactions
  • Write malware that adapts and evolves to evade detection
  • Automate attacks at a scale that would require armies of human hackers

The numbers tell a chilling story. AI-driven cyberattacks increased by 72% year-over-year globally, and we’re now facing a projected $193 billion in global damages from AI-powered cybercrime in 2025.

But here’s what keeps me up at night: the AI tools making this possible aren’t science fiction. They’re commercially available. Some are even free.

Let’s break down the three horsemen of AI-powered cybercrime: deepfakes, AI phishing, and autonomous malware.

Deepfakes: when seeing is no longer believing

Remember when your parents told you “don’t believe everything you see on the internet”?

Now you can’t believe anything you see. Or hear.

The $25 million video call

In February 2024, a finance worker at Arup, the global engineering firm behind the Sydney Opera House, joined what appeared to be a routine video conference with the company’s CFO and other senior executives. Everyone on the call looked real. Everyone sounded real.

The executives asked the finance worker to execute several transactions. Total amount: $25 million.

Every single person on that video call was a deepfake. AI-generated faces and voices, indistinguishable from the real thing. The attackers had scraped publicly available videos of executives from earnings calls and conference presentations, fed them into deepfake software, and orchestrated an entirely fake meeting.

The money was gone before anyone realized what happened.

This isn’t an isolated incident. It’s the new normal.

The Deepfake explosion

The numbers are staggering:

  • Deepfake files surged from 500,000 in 2023 to a projected 8 million in 2025, a 1,500% increase in just two years.
  • Fraud attempts spiked 3,000% in 2023, with North America experiencing a 1,740% increase in deepfake fraud.
  • The number of deepfake incidents in Q1 2025 alone surpassed all of 2024, a 19% increase in just three months.
  • 53% of financial professionals have experienced attempted deepfake scams.

But here’s the really terrifying part: only 0.1% of people can consistently identify deepfakes. That’s not a typo. 99.9% of us cannot tell the difference.

Even sophisticated security professionals are getting fooled. Human detection rates for high-quality video deepfakes are just 24.5%, basically a coin flip.

Voice cloning: your voice as a weapon

Voice cloning has become the top deepfake attack vector because it’s:

  1. Cheap: The AI voice cloning market was valued at $2.1 billion in 2023 and is expected to reach $25.6 billion by 2033.
  2. Fast: AI can clone a voice from as little as 3 seconds of audio.
  3. Convincing: Scientific research found that people can correctly identify AI-generated voices only 60% of the time.

The attacks are everywhere:

  • 1 in 10 adults globally has experienced an AI voice scam, representing hundreds of millions of victims.
  • 77% of people who experienced an AI voice scam lost money.
  • 1 in 4 adults have experienced an AI voice scam, according to a 2024 McAfee study.

In one notable case, fraudsters attempted to impersonate Ferrari CEO Benedetto Vigna through AI-cloned voice calls that perfectly replicated his southern Italian accent. The attack was only defeated when an executive asked a question only the real CEO would know.

The message is clear: technical detection doesn’t work. Procedural verification does.

Where do attackers get your voice?

If you’ve ever:

  • Posted a video on social media
  • Appeared on a podcast
  • Spoken on a conference call
  • Left a voicemail
  • Posted a TikTok, Instagram Reel, or YouTube video

…your voice is already in the wild. 53% of adults share their voice data online at least once per week.

That three-second clip of you saying “Hey everyone!” in your Instagram story? That’s all attackers need.

AI phishing: goodbye grammar errors, hello perfect personalization

For years, we taught people to spot phishing emails by looking for telltale signs:

  • Bad grammar
  • Generic greetings (“Dear Customer”)
  • Suspicious sender addresses
  • Urgent threats (“Your account will be closed!”)

Those days are over.

The new phishing landscape

AI has fundamentally changed phishing. Here’s what we’re dealing with now:

  • 82.6% of phishing emails now use AI in some form, a 53.5% increase since 2024.
  • 73.8% of phishing emails used some form of AI in 2024, rising to over 90% for polymorphic attacks.
  • 78% of people open AI-generated phishing emails, and 21% click on malicious content inside.
  • There was a 202% increase in phishing email messages in the second half of 2024.
  • According to cybersecurity researchers, AI-generated phishing became the top enterprise email threat by October 2025, surpassing ransomware, insider risk, and traditional social engineering combined.

But the real game-changer isn’t just volume, it’s personalization at scale.

APT-level attacks for everyone

Remember when spear-phishing (highly targeted, personalized phishing) was something only nation-state hackers and Advanced Persistent Threats (APTs) could pull off? That required human intelligence, research, and time.

Not anymore.

A 2024 campaign targeting 800 small accounting firms used AI to generate customized tax deadline reminder emails referencing each firm’s specific state registration details and recent public filings. The attacks achieved a 27% click rate by providing perfect local context that appeared impossible for mass campaigns.

Another example: AI-powered phishing now extends across multiple attack vectors, creating coordinated campaigns. An initial phishing email establishes context, followed by a deepfake voice call from the “CFO” referencing that email to authorize wire transfers.

The critical insight: AI democratizes advanced spear-phishing capabilities, making APT-level personalization accessible to low-skill criminals with limited resources.

How AI makes phishing unstoppable

AI-powered phishing tools allow attackers to:

  1. Write perfect emails: No more grammar errors. AI tools like ChatGPT can mimic writing styles and craft convincing messages in any language. Generative AI tools help hackers compose phishing emails up to 40% faster.
  2. Personalize at scale: AI can scrape LinkedIn, company websites, and social media to gather information about targets, their role, projects, colleagues, recent activities, and craft emails that reference real context.
  3. Adapt in real-time: Polymorphic phishing uses AI to generate slightly different versions of emails for each recipient, making it nearly impossible for spam filters to catch patterns.
  4. Automate everything: From reconnaissance to delivery to follow-up, AI can run entire phishing campaigns with minimal human intervention.

Security teams report a staggering 1,265% surge in phishing attacks linked to generative AI since 2023.

Autonomous malware: the worm that thinks for itself

If deepfakes and AI phishing weren’t enough, we now have malware that doesn’t need human operators.

Meet Morris II: the first AI worm

In April 2024, researchers from Cornell University, the Technion-Israel Institute of Technology, and Intuit revealed Morris II, a self-replicating AI worm designed to target generative AI ecosystems.

Named after the infamous Morris Worm of 1988 that took down a significant portion of the early internet, Morris II represents an entirely new class of threat: malware that exploits AI systems themselves.

Here’s how it works:

  1. Self-Replicating Prompts: Morris II uses “adversarial self-replicating prompts”, carefully crafted inputs that, when fed into an AI model like ChatGPT, Gemini, or other large language models (LLMs), trick the model into replicating the malicious input as output.
  2. Zero-Click Propagation: Unlike traditional malware that requires a user to click on a link or download a file, Morris II spreads automatically without any human interaction. It’s stored in the Retrieval Augmented Generation (RAG) system and moves “passively” to new targets.
  3. Malicious Payload: Once inside a system, Morris II can: Exfiltrate sensitive data (emails, documents, credentials) Send spam containing malicious software Spread to other AI-powered applications through shared APIs and embedding stores

How Morris II was tested

Researchers demonstrated Morris II against GenAI-powered email assistants in two scenarios:

Scenario 1: Spamming The worm forced AI email assistants to send commercial spam to all contacts.

Scenario 2: Data Exfiltration Morris II accessed emails, extracted confidential information (credit card details, social security numbers, personal data), and forwarded it to attackers.

The worm was tested against three different AI models:

  • Google’s Gemini Pro
  • OpenAI’s ChatGPT 4.0
  • Open-source LLM LLaVA

It worked on all of them.

The economics of AI cybercrime

Why are attackers investing so heavily in AI?

Because it works. And it’s insanely profitable.

The cost of AI-powered cybercrime

  • The annual cost of cybercrime is projected to reach $10.5 trillion by 2025,if cybercrime were a country, it would be the world’s third-largest economy, trailing only the U.S. and China.
  • AI-driven cybercrime is projected to exceed $193 billion in 2025.
  • The average cost per AI-related breach reached $5.72 million in 2025, a 13% increase from the prior year.
  • Small-to-midsize enterprises (SMEs) spent 27% more on cyber incident response in 2025 due to AI-specific threats.
  • AI-aided insider threats caused over $2.4 billion in damages in 2025.

The ROI for attackers

From the attacker’s perspective, AI is a force multiplier:

  • Speed: AI can compose phishing emails up to 40% faster than humans.
  • Scale: Automated, AI-driven phishing operations now send millions of personalized emails per day.
  • Success Rate: AI-automated phishing emails achieved a 54% click-through rate, significantly higher than traditional phishing.
  • Low Skill Requirement: AI tools make sophisticated attacks accessible to low-skill criminals. You no longer need to be a master hacker to run an APT-level campaign.

What can we do about this?

The situation sounds bleak. And honestly, it kind of is.

But there are strategies that work, though they require a fundamental shift in how we think about security.

1. Behavioral verification over technical detection

Stop trying to spot fakes. Start verifying requests.

Technical detection of high-quality deepfakes is nearly impossible for humans. So the solution isn’t “get better at spotting deepfakes.” The solution is to assume everything could be fake and verify accordingly.

Practical steps:

  • Implement out-of-band verification: If your CEO calls asking for a wire transfer, hang up and call them back on a known number. If they email you, call them. Never verify through the same channel the request came from.
  • Use verbal passwords or security questions: Agree on challenge phrases that only the real person would know. When Ferrari’s security team stopped the deepfake CEO attack, they did it by asking a question only the real CEO would know.
  • Establish approval thresholds: Any financial transaction over a certain amount requires multi-person approval and video verification (with the understanding that video can be faked, so add verbal challenges).
  • Create friction intentionally: The goal isn’t perfect detection, it’s to create enough friction that breaks the social engineering flow.

Organizations implementing behavior-based phishing training see a 50% reduction in actual phishing-related incidents over 12 months.

2. Fight fire with fire: use AI for defense

If attackers are using AI, defenders need to as well.

  • AI-powered threat detection: AI can monitor millions of endpoints and analyze over 150 billion security events daily, something no human team could accomplish.
  • Automated response: AI can automate up to 85% of threat detection and response protocols, allowing human analysts to focus on high-priority threats.
  • Anomaly detection: AI excels at spotting unusual behavior patterns, like an email assistant suddenly forwarding confidential data to external addresses.

Organizations using security AI and automation see lower average breach costs compared to those that don’t. Security responses are now 50% faster for organizations using IBM’s AI-powered security tools.

3. Secure Your AI Infrastructure

If you’re using AI-powered tools in your organization, you need to protect them:

  • Regular updates: Ensure GenAI email assistants and other AI tools are regularly updated with the latest security patches.
  • Input validation: Implement robust security measures like content filtering, anomaly detection, and user authentication.
  • Output monitoring: Secure your output to ensure it doesn’t consist of pieces similar to malicious input.
  • Jailbreaking countermeasures: Use countermeasures against jailbreaking techniques that attackers use to replicate malicious input into output.

4. Training and awareness

The old “spot the fake” training doesn’t work anymore when 99.9% of people can’t spot deepfakes.

The new training focuses on:

  • Process over detection: Train employees to follow verification procedures regardless of how convincing a communication appears.
  • AI threat awareness: Educate teams about voice cloning, deepfakes, and AI-generated phishing.
  • Simulation exercises: Organizations implementing vishing simulation programs report 65% improvement in verification behavior during voice-based attack scenarios.

5. Phishing-resistant MFA

Multi-factor authentication (MFA) is essential, but traditional SMS-based MFA can be bypassed. Use:

  • Hardware security keys (FIDO2/WebAuthn)
  • Biometric authentication
  • Phishing-resistant MFA for admins and finance teams

Require verified callback for any payment, payroll, or vendor bank change.

6. Zero trust architecture

Assume breach. Assume nothing is trusted by default.

  • Least privilege access: Give users only the permissions they need, nothing more.
  • Just-in-time access: Provide elevated permissions only when needed, then revoke them.
  • Continuous verification: Don’t trust, always verify, even for authenticated users.

The bottom line

Sarah story, the CFO who lost $340,000 to a voice-cloned AI attack, is happening every single day.

AI has fundamentally changed the threat landscape:

  • Deepfakes make it impossible to trust what we see and hear
  • AI phishing makes it impossible to rely on grammar errors and generic greetings
  • Autonomous malware makes it impossible to depend on user interaction and signature-based detection

The old defenses don’t work anymore.

But that doesn’t mean we’re helpless. It means we need to adapt:

  • Stop trying to detect fakes. Start verifying everything.
  • Use AI to fight AI.
  • Train people on processes, not detection.
  • Assume breach. Build resilience.

The scary part? We’re still in the early days of AI-powered cybercrime. The attacks will get more sophisticated. The tools will get more accessible. The scale will get larger.

But here’s the thing: security has always been an arms race. Attackers evolve. Defenders adapt. The cycle continues.

The question isn’t whether AI will be used for cybercrime, it already is, at massive scale. The question is whether we’ll take the threat seriously enough to fundamentally change how we approach security.

Because the next phone call you get from your CEO?

It might not be them.

Category: CyberSecurity - My Journey

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

  • March 30, 2026 by ndiki The Frequency of Desperation
  • March 30, 2026 by ndiki Can AI Fix Security Problems Before Humans Even Notice
  • March 23, 2026 by ndiki The Corner Office
  • March 23, 2026 by ndiki AI Is Now Fighting AI: What This Means for Cybersecurity
  • March 16, 2026 by ndiki The Road to Nowhere
April 2026
M T W T F S S
 12345
6789101112
13141516171819
20212223242526
27282930  
« Mar    
© 2026 porkeynote

Powered by
...
►
Necessary cookies enable essential site features like secure log-ins and consent preference adjustments. They do not store personal data.
None
►
Functional cookies support features like content sharing on social media, collecting feedback, and enabling third-party tools.
None
►
Analytical cookies track visitor interactions, providing insights on metrics like visitor count, bounce rate, and traffic sources.
None
►
Advertisement cookies deliver personalized ads based on your previous visits and analyze the effectiveness of ad campaigns.
None
►
Unclassified cookies are cookies that we are in the process of classifying, together with the providers of individual cookies.
None
Powered by