porkeynote

Month: March 2026

AI Is Now Fighting AI: What This Means for Cybersecurity

Posted on March 23, 2026 by ndiki

We are no longer in the era of: We are now in the era of: AI is now fighting AI. And everything has changed. The attacker’s arsenal: how AI is weaponized. The numbers are staggering: 87% of organizations worldwide experienced AI-driven cyberattacks in 2025. 73% of security professionals say AI-powered threats are already hitting their…

Read more

The Road to Nowhere

Posted on March 16, 2026 by ndiki

Cissy stopped counting after that.

The road stretched ahead of her, perfectly straight, cutting through a landscape that never quite changed. Desert to her left, always. Mountains to her right, distant and purple and never any closer no matter how long she walked. The sky was perpetually sunset, not moving toward night, not retreating to day, just caught in that golden-pink liminal space between.

Read more

Will AI Replace Cybersecurity Professionals – or Make Them More Powerful?

Posted on March 16, 2026 by ndiki

The question everyone’s asking
Will AI replace cybersecurity professionals?

Let me answer that directly:

No.

But it will fundamentally transform what “being a cybersecurity professional” means.

Here’s why.

The crisis that AI is solving
The numbers are brutal
The global cybersecurity workforce gap is 4.8 million unfilled positions.

72% of security professionals agree that reducing security personnel significantly increases breach risk.

SOC analyst burnout-driven churn rates exceed 25% annually—among the highest in IT.

Replacing a trained analyst takes 6-12 months.

Organizations cannot hire their way to resilience.

The data deluge
Industry telemetry in 2025 reached 308 petabytes across more than 4 million identities, endpoints, and cloud assets.

This produced nearly 30 million investigative leads.

Analysts confirmed only around 93,000 genuine threats from that mountain.

That’s a hit rate of 0.3%.

Translation: 99.7% of alerts are noise.

Security teams receive an average of 4,484 alerts per day and spend up to 27% of their time on false positives.

Studies show SOC teams routinely ignore or dismiss up to 30% of incoming alerts—not through negligence, but necessity.

When every alert looks the same and context arrives fragmented across disconnected consoles, skilled analysts are forced to triage by instinct rather than evidence.

Without automation, the volume alone would be unmanageable.
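A quick sanity check on that arithmetic, using only the figures quoted above:

```python
# Reproducing the "data deluge" numbers from the text above.
leads = 30_000_000   # investigative leads produced in 2025
confirmed = 93_000   # analyst-confirmed genuine threats

hit_rate = confirmed / leads   # fraction of leads that were real
noise_rate = 1 - hit_rate

print(f"hit rate:   {hit_rate:.1%}")    # ~0.3%
print(f"noise rate: {noise_rate:.1%}")  # ~99.7%
```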

What AI actually does for security teams
Two out of three organizations now deploy AI and automation across their SOC environments.

And the results are measurable:

Speed
Companies using AI and automation in security operations contained breaches 108 days faster than those without AI-driven defenses.

AI-augmented SOCs have demonstrated a 50% reduction in mean time to detect (MTTD).

Cost
Extensive use of AI cut breach costs by an average of $2.2 million.

Workload reduction
AI-augmented SOCs saw a 60% drop in manual triage workload.

Torq’s Socrates platform, an AI SOC analyst, achieves 90% automation of Tier-1 analyst tasks (auto-remediated without human involvement), 95% reduction in manual tasks, and 10x faster response times.

What AI does best
Alert Triage:

Automatically categorize and prioritize incidents
Correlate events across multiple systems
Eliminate false positives based on historical context

Threat Detection:

Analyze massive datasets for anomalies
Identify patterns humans would miss
Detect novel attack techniques

Incident Investigation:

Automatically enrich alerts with threat intelligence
Map events to MITRE ATT&CK framework
Generate incident timelines

Automated Response:

Isolate compromised hosts
Block malicious IPs/domains
Reset compromised credentials
Contain threats before they spread

Vulnerability Management:

Continuous scanning for weaknesses
Risk-based prioritization
Automated patch recommendations

Reporting:

Generate compliance documentation
Create executive summaries
Produce audit trails automatically

The rise of AI-powered security operations
The agentic SOC
An “agentic SOC” is a system of task-based AI agents orchestrated toward a shared outcome.

Think of it like this:

Traditional SOC:

Analyst receives alert → Manually investigates → Manually enriches → Manually responds → Manually documents

Agentic SOC:

AI detection agent identifies threat → AI investigation agent correlates events → AI enrichment agent adds context → AI response agent contains threat → AI documentation agent logs everything → Human analyst reviews and approves escalation

The agents work like a team:

Detection Agent – Monitors telemetry streams
Triage Agent – Prioritizes alerts by risk
Investigation Agent – Correlates events across systems
Enrichment Agent – Adds threat intelligence context
Response Agent – Executes containment actions
Documentation Agent – Creates audit trails
Orchestration Agent – Coordinates the workflow

Human analyst’s role:

Oversee the process
Make judgment calls on ambiguous cases
Handle complex investigations
Approve high-impact actions
Strategic threat hunting
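The agentic flow above can be sketched as a chain of single-purpose agents enriching one shared incident record, with high-impact actions queued for the human analyst. Agent names mirror the list above; everything else is illustrative, not a real framework:

```python
# Each "agent" is a function that enriches a shared incident record.
def detection_agent(incident):
    incident["detected"] = True
    return incident

def triage_agent(incident):
    incident["priority"] = "high" if incident["asset"] == "domain-controller" else "low"
    return incident

def enrichment_agent(incident):
    incident["intel"] = "credential-stuffing campaign"  # illustrative context
    return incident

def response_agent(incident):
    # High-impact actions are proposed, not auto-executed, on high-priority cases.
    incident["action"] = "isolate-host"
    incident["needs_approval"] = incident["priority"] == "high"
    return incident

PIPELINE = [detection_agent, triage_agent, enrichment_agent, response_agent]

def run_pipeline(incident):
    for agent in PIPELINE:
        incident = agent(incident)
    return incident

result = run_pipeline({"asset": "domain-controller"})
```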

With enterprises expected to deploy a massive wave of AI agents in 2026, the cyber gap narrative will fundamentally change.

For a SOC, this means agents triaging alerts to end alert fatigue and autonomously blocking threats in seconds.

These agents drastically cut response and processing times, enabling human teams to move from manual operators to commanders of the new AI workforce.

AI-powered vulnerability detection: the penetration testing revolution
AI isn’t just transforming SOCs—it’s revolutionizing how we find vulnerabilities.

Traditional penetration testing:
Hire a pentester: $60/hour
Schedule engagement: 2-4 weeks lead time
Test duration: 1-2 weeks
Report delivery: 1 week
Total time: 4-7 weeks
Total cost: $20,000-$50,000

Then your code changes, and it’s all outdated.

AI-powered penetration testing:
Real results from research:

ARTEMIS, a multi-agent AI pentesting framework, placed second overall in a live enterprise network test, discovering 9 valid vulnerabilities with an 82% valid submission rate and outperforming 9 of 10 human participants.

Cost: $18/hour vs. $60/hour for human pentesters.

According to research published in May 2025, an AI agent outperformed 9 out of 10 human penetration testers in a controlled capture-the-flag (CTF) environment, identifying valid vulnerabilities with 82% precision.

How AI pentesting works:
Autonomous Testing Platforms:

Reconnaissance Agent – Maps attack surface, gathers intelligence
Scanning Agent – Identifies services, enumerates endpoints
Vulnerability Analysis Agent – Evaluates weaknesses
Exploit Agent – Executes proof-of-concept attacks
Post-Exploitation Agent – Assesses impact and lateral movement
Reporting Agent – Documents findings with evidence

BlacksmithAI is an open-source pentesting framework that uses multiple AI agents to execute different stages of a security assessment lifecycle in a hierarchical system where an orchestrator coordinates task execution.
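The hierarchical pattern described here, an orchestrator coordinating stage agents through the assessment lifecycle, might be sketched like this. The stage names follow the agent list above; the state fields and findings are hypothetical:

```python
# Orchestrator dispatches each assessment stage to its agent in order,
# passing one shared engagement state through the lifecycle.
def recon(state):   state["hosts"] = ["10.0.0.5"]; return state
def scan(state):    state["services"] = {"10.0.0.5": ["http", "ssh"]}; return state
def analyze(state): state["findings"] = ["outdated http server"]; return state
def exploit(state): state["proven"] = bool(state["findings"]); return state
def report(state):  state["report"] = f"{len(state['findings'])} finding(s)"; return state

STAGES = ["recon", "scan", "analyze", "exploit", "report"]
AGENTS = {"recon": recon, "scan": scan, "analyze": analyze,
          "exploit": exploit, "report": report}

def orchestrate(state=None):
    state = state or {}
    for stage in STAGES:
        state = AGENTS[stage](state)
    return state

engagement = orchestrate()
```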

AI pentesting platforms (2026):
Top Tools:

ARTEMIS – Multi-agent framework with dynamic prompt generation and automatic vulnerability triaging
BlacksmithAI – Open-source, hierarchical agent orchestration
Zen-AI-Pentest – Autonomous reconnaissance, vulnerability scanning, exploitation, and reporting
PentestGPT – GPT-powered assistant suggesting exploit paths
Specular – Offensive platform using Gemini 2.5 Pro for automated attack surface management
XBOW – Coordinates hundreds of autonomous agents for adversarial realism
Escape – Business logic flaw detection with continuous testing
Pentera – Advanced lateral movement simulation and risk-based prioritization

The skills that matter in 2026
So if AI is doing alert triage, investigation, and even pentesting…

What do humans do?

What AI cannot replace
1. Strategic Thinking

AI can detect that 50 failed logins occurred.

Humans understand that those logins coincided with:

A company acquisition announcement
A sensitive legal case
A political event affecting the organization

Context matters. AI lacks it.

2. Adversarial creativity

AI thinks in patterns. Attackers think in “what shouldn’t be possible.”

The best security researchers discover new attack vectors by asking:

“What if I combine these two legitimate features in an unexpected way?”
“What assumptions is the system making that I can violate?”
“How can I make the system behave in ways the developers never intended?”

AI pattern-matches. Humans break patterns.

3. Ethical and legal judgment

Scenario: AI flags an executive’s account for suspicious behavior.

Technical decision: Block the account.

Business decision:

Is this executive closing a $50M deal today?
Is this a false positive that will damage credibility?
What’s the political fallout if we’re wrong?
What are the legal implications?

Humans make these calls. AI doesn’t understand politics, business risk, or organizational dynamics.

4. Novel threat adaptation

When a completely new attack technique emerges—something never seen before—AI has no training data.

Humans adapt. Humans reason through unknowns. Humans experiment.

5. Governing AI itself

Among organizations that suffered an AI-related security incident, 97% lacked proper AI access controls.

IBM’s 2025 report found that shadow AI added an average of $670,000 to breach costs.

63% of organizations admitted they have no AI governance policies in place at all.

Who’s going to secure the AI? Other AI?

Someone needs to:

Design AI security architectures
Audit AI decision-making
Prevent AI from being weaponized
Ensure AI operates within legal/ethical boundaries

That’s a human job.

The skills you need to thrive
46% of security professionals admit they’re not adequately prepared for AI-powered threats.

The number-one thing holding defenders back? Insufficient knowledge and skills related to AI.

Not budget. Not headcount. Knowledge.

Critical skills for 2026:
1. Understanding How AI Works

You don’t need to be a data scientist.

But you need to understand:

How AI models make decisions
What biases they might have
When AI is reliable vs. when it’s guessing
How to interpret AI confidence scores
How to tune AI models for your environment

2. AI Security Governance

With the EU AI Act’s most substantive obligations taking effect August 2, 2026, high-risk AI systems need to demonstrate compliance with requirements around:

Risk management
Data governance
Technical documentation
Transparency
Human oversight
Accuracy
Robustness
Cybersecurity

Someone needs to implement this. That’s you.

3. Threat Intelligence + AI Context

AI can correlate events. Humans provide the “why it matters” context:

What’s the geopolitical situation?
What threat actors target our industry?
What’s the attacker’s likely motivation?
How does this fit into broader attack campaigns?

4. Automation & Orchestration

Security teams are building workflows where:

AI detects → AI investigates → AI recommends → Human approves → AI executes

You need to design these workflows.
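One way to design the human-approval gate in that workflow: the AI side can only propose actions, and automation only executes what a human releases. A minimal sketch, with no real SOAR API assumed:

```python
pending = []   # recommendations awaiting a human decision
executed = []  # actions the automation actually carried out

def recommend(action):
    """AI side: propose an action but never execute it directly."""
    pending.append(action)

def approve(action):
    """Human side: only approved actions are handed back to automation."""
    if action in pending:
        pending.remove(action)
        executed.append(action)

recommend("block-ip 203.0.113.7")
recommend("reset-credentials jdoe")
approve("block-ip 203.0.113.7")  # the human approves one of the two
```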

5. Communication

AI generates technical findings.

Humans translate for:

Executives (business risk)
Legal (compliance implications)
Board members (strategic impact)
Developers (how to fix)

6. Continuous Learning

The threat landscape changes weekly.

AI capabilities evolve monthly.

If you stop learning, you’re obsolete.

The real threat: not AI, but complacency
Here’s the uncomfortable truth:

AI won’t replace cybersecurity professionals.

But cybersecurity professionals who use AI will replace those who don’t.

96% of cybersecurity professionals agree that AI can meaningfully improve the speed and efficiency of their work.

Top areas where AI improves work:

Anomaly detection and novel threat identification: 72%
Automated response and containment: 48%
Vulnerability management: 47%

The gap is widening.

Organizations are deploying AI whether you’re ready or not:

52% of executives in generative AI-using organizations have AI agents in production.

87% of respondents preferred platform-based security purchases in 2025. In 2026, that hit 93%.

85% of security professionals prefer managed SOC capabilities over building in-house.

Translation: If you’re not leveraging AI, you’re falling behind.

Read more
© 2026 porkeynote
