エピソード

  • AI News Stories | Episode 36
    2026/01/22
    This week on AI Security Ops, the team breaks down how attackers are weaponizing AI and the tools around it: a critical n8n zero-day that can lead to unauthenticated remote code execution, prompt-injection “zombie agent” risks tied to ChatGPT memory, a zero-click-style indirect prompt injection scenario via email/URLs, and malicious Chrome extensions caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.Key stories discussed1) n8n (“n-eight-n”) zero-day → unauthenticated RCE riskhttps://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.htmlThe hosts discuss a critical flaw in the n8n workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve remote code execution as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer. ai-news-stories-episode-36Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system. ai-news-stories-episode-362) “Zombie agent” prompt injection via ChatGPT Memoryhttps://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injectionThe team talks about research describing an exploit that stores malicious instructions in long-term memory, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem. ai-news-stories-episode-36User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved. ai-news-stories-episode-363) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/Another story describes a crafted URL delivered via email that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as indirect prompt injection—a pattern they expect to keep seeing as assistants gain more connectivity. ai-news-stories-episode-36Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history. ai-news-stories-episode-364) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users)https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.htmlTwo Chrome extensions posing as AI productivity tools reportedly injected JavaScript into AI web UIs, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing extension supply-chain risk and the reality that “approved store” doesn’t mean safe. ai-news-stories-episode-36Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism. ai-news-stories-episode-365) APT28 credential phishing updated with AI-written lureshttps://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.htmlThe closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is AI-generated, making it more consistent/convincing (and harder for users to spot via grammar/tone). ai-news-stories-episode-36The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide). ai-news-stories-episode-36Chapter Timestamps(00:00) - Intro & Sponsors(01:16) - 1) n8n zero-day → unauthenticated RCE(09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory(19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)(19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)(23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)(29:59) - 5) APT28 phishing refreshed with AI-written lures(34:15) - Closing thoughts: “AI genie is out of the bottle” + safety remindersBrought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for ...
    続きを読む 一部表示
    35 分
  • 2026 Predictions | Episode 35
    2026/01/08

    AI Security Ops | Episode 35 – 2026 Predictions

    In this episode, the BHIS panel looks into the crystal ball and shares bold predictions for AI in 2026—from energy constraints and drug development breakthroughs to agentic AI risks and cybersecurity threats.

    Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:14) - Prediction: Grid Power Becomes the Bottleneck
    • (10:27) - Prediction: FDA Qualifies AI Drug Development Tools
    • (15:45) - Prediction: Nation-State Threat Actors Weaponize AI
    • (17:33) - Prediction: Agentic AI Dominates App Development
    • (23:07) - Closing Thoughts: Jobs, Risk & Opportunity

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com



    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    続きを読む 一部表示
    25 分
  • AI Security Ops - Why Did We Create This Podcast? | Podcast Trailer
    2025/12/24

    Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security.
    https://discord.gg/bhis

    AI Security Ops | Episode 34 – Why Did We Create This Podcast?
    In this episode, the BHIS team explains the purpose behind AI Security Ops, what you can expect from future episodes, and why this show matters for anyone at the intersection of AI and cybersecurity.

    Chapters

    • (00:00) - Intro & Welcome
    • (00:13) - Why We Started AI Security Ops
    • (00:41) - Our Mission: Stay Informed & Ahead
    • (00:56) - What We Cover: AI News & Insights
    • (01:23) - Community Q&A & Real-World Scenarios
    • (02:18) - Special Guests & Industry Leaders
    • (02:41) - Demos, How-Tos & Practical Tips
    • (03:07) - Who Should Listen & Why Subscribe
    • (03:34) - Join the Conversation & Closing

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com



    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    続きを読む 一部表示
    4 分
  • Community Q&A on AI Security | Episode 34
    2025/12/18

    Community Q&A on AI Security | Episode 34

    In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity.

    We break down:

    • Why LLMs sometimes “make stuff up” and how to reduce hallucinations
    • The role of prompts, temperature, and RAG databases in accuracy
    • Prompting best practices and reasoning modes for better results
    • Legal liability: Can you sue ChatGPT for bad advice?
    • Memory features, data retention, and privacy trade-offs
    • Security paranoia: AI apps, trust, and enterprise vs free accounts
    • Practical examples like customizing AI for writing style
    • How to explain AI to your mom (or any non-technical audience)
    • Why AI isn’t magic—just math and advanced auto-complete


    Whether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly.

    Chapters

    • (00:00) - Welcome & Sponsor Shoutouts
    • (00:50) - Episode Overview: Community Q&A
    • (01:19) - Q1: Will ChatGPT Make Stuff Up?
    • (07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases?
    • (11:15) - Q3: How Can AI Improve Without Ingesting Everything?
    • (22:04) - Q4: How Do You Explain AI to Non-Technical People?
    • (28:00) - Closing Remarks & Training Plug

    Brought to you by:
    Black Hills Information Security
    https://www.blackhillsinfosec.com

    Antisyphon Training
    https://www.antisyphontraining.com/

    Active Countermeasures
    https://www.activecountermeasures.com

    Wild West Hackin Fest
    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –
    https://poweredbybhis.com

    ----------------------------------------------------------------------------------------------
    Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/
    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
    Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/
    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    続きを読む 一部表示
    28 分
  • AI News Stories | Episode 33
    2025/12/11

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com


    AI News | Episode 33
    In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.

    We break down:

    • AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous
    • Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.
    • Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.
    • Amazon’s private AI bug bounty: Nova models under the microscope.
    • Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.
    • PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.


    Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.

    ⏱️ Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:27) - AI-Orchestrated Cyber Espionage (Anthropic)
    • (08:10) - ShadowMQ: Critical RCE in AI Inference Engines
    • (09:54) - KawaiiGPT: Free Black-Hat LLM
    • (22:45) - Amazon Nova: Private AI Bug Bounty
    • (26:38) - Google Antigravity IDE Hacked in 24 Hours
    • (31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism

    🔗 Links
    AI-Orchestrated Cyber Espionage (Anthropic)
    ShadowMQ: Critical RCE in AI Inference Engines
    KawaiiGPT: Free Black-Hat LLM
    Amazon Nova: Private AI Bug Bounty
    Google Antigravity IDE Hacked in 24 Hours
    PROMPTFLUX: Malware Using Gemini for Polymorphism

    #AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malware

    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    続きを読む 一部表示
    37 分
  • Model Evasion Attacks | Episode 32
    2025/12/04

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com

    Model Evasion Attacks | Episode 32
    In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.

    We break down:
    - What model evasion attacks are and how they differ from data poisoning
    - How attackers tweak features to bypass classifiers (images, phishing, malware)
    - Real-world tactics like model extraction and trial-and-error evasion
    - Why non-determinism in AI models makes evasion harder to predict
    - Advanced threats: model theft, ablation, and adversarial AI
    - Defensive strategies: adversarial training, API throttling, and realistic expectations
    - Future outlook: regulatory trends, transparency, and the ongoing arms race

    Whether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.


    #AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #aithreats


    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:19) - What Are Model Evasion Attacks?
    • (03:58) - Image Classifiers & Pixel Tweaks
    • (07:01) - Malware Classification & Decision Boundaries
    • (10:02) - Model Theft & Extraction Attacks
    • (13:16) - Non-Determinism & Myth Busting
    • (16:07) - AI in Offensive Capabilities
    • (17:36) - Defensive Strategies & Adversarial Training
    • (20:54) - Vendor Questions & Transparency
    • (23:22) - Future Outlook & Regulatory Trends
    • (25:54) - Panel Takeaways & Closing Thoughts
    続きを読む 一部表示
    29 分
  • Data Poisoning | Episode 31
    2025/11/27

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com


    Data Poisoning Attacks | Episode 31
    In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.

    We break down:

    • What data poisoning is and why it matters
    • How attackers inject malicious samples or flip labels in training sets
    • The role of open-source repositories like Hugging Face in supply chain risk
    • New twists for LLMs: poisoning via reinforcement feedback and RAG
    • Real-world concerns like bias in ChatGPT and malicious model uploads
    • Defensive strategies: governance, provenance, versioning, and security assessments


    Whether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. Treat your AI like a “drunk intern”: verify everything.


    #aisecurity #DataPoisoning #Cybersecurity #BHIS #llmsecurity #aithreats


    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:19) - What Is Data Poisoning?
    • (03:58) - Poisoning Classifier Models
    • (08:10) - Risks in Open-Source Data Sets
    • (12:30) - LLM-Specific Poisoning Vectors
    • (17:04) - RAG and Context Injection
    • (21:25) - Realistic Threats & Examples
    • (25:48) - Defensive Strategies & Governance
    • (28:27) - Panel Takeaways & Closing Thoughts
    続きを読む 一部表示
    31 分
  • AI News Stories | Episode 30
    2025/11/20

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –

    https://poweredbybhis.com


    AI News Stories | Episode 30
    In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.

    Topics Covered:

    Only 5% of Americans are unaware of AI?
    What Pew Research reveals about AI’s penetration into everyday life and workplace usage.
    AI’s Shift to the Intimacy Economy – Project Liberty
    https://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1

    Amazon to Cut Jobs and Invest in AI Infrastructure
    14,000 corporate roles eliminated—are layoffs really about efficiency or something else?
    Amazon to Cut Jobs & Invest in AI – DW
    https://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365

    Local Models Less Secure than Cloud Providers?
    Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.
    Local LLMs Security Paradox – Quesma
    https://quesma.com/blog/local-llms-security-paradox

    Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.

    Brought to you by Black Hills Information Security

    https://www.blackhillsinfosec.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:07) - AI’s Shift to the Intimacy Economy (Pew Research)
    • (19:40) - Amazon Layoffs & AI Investment
    • (27:00) - Local LLM Security Paradox
    • (36:32) - Wrap-Up & Key Takeaways
    続きを読む 一部表示
    37 分