Episodes

  • Claude Cowork Discussion | Episode 42
    2026/03/06

    We discuss the meaning of AI life in episode 42 of “BHIS Presents: AI Security Ops.” Derek Banks is joined by Bronwen Aker and Brian Fehrman to break down Anthropic’s latest agentic desktop experiment: Claude Cowork.

    Claude Cowork brings large language models directly onto the endpoint — giving Claude the ability to read, write, and organize files on your local machine. It’s designed to make powerful AI workflows accessible to non-technical users… but as with any tool that operates at the OS level, the security implications are significant.

    We explore what happens when AI moves closer to your data, your filesystem, and your browser — and what that means for defenders.

    We dig into:
    - What Claude Cowork is and how it differs from Claude Code
    - Agentic desktop tools vs. command-line workflows
    - Local file access and OS-level interaction risks
    - Skills, automation, and task iteration
    - Chrome plugins and expanded attack surface
    - Overly broad permissions and least-privilege concerns
    - SaaS disruption and shifting trust boundaries
    - Endpoint monitoring challenges
    - The speed of AI releases vs. security review cycles
    - Balancing innovation with responsible deployment

    This conversation looks at the real-world operational and defensive considerations of agentic AI tools running directly on user systems. If you’re evaluating AI productivity tools inside your organization — or defending environments where they’re already being adopted — this episode will help you think through the risks and tradeoffs.

    • (00:00) - Intro & Episode Overview
    • (02:31) - What Is Claude Cowork?
    • (04:26) - Desktop Agents vs. Command Line Users
    • (06:35) - Agentic Workflows & Task Automation
    • (08:31) - Building Fast with Claude (Speed of Development)
    • (09:52) - Browser Plugins & Expanding Capabilities
    • (11:29) - Permission Models & “Just Give It Access to Everything”
    • (13:03) - SaaS Disruption & Enterprise Impact
    • (15:01) - Overly Broad File Access Risks
    • (16:50) - Organizational Disruption & Workforce Impact
    • (18:32) - Security Lag vs. Rapid AI Releases
    • (20:09) - Final Thoughts & Wrap-Up

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Derek Banks - Host
    • Bronwen Aker - Host
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.


    🧦 SOC Summit 2026
    https://www.antisyphontraining.com/event/soc-summit/

    22 min
  • OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41
    2026/02/26

    In this episode of BHIS Presents: AI Security Ops, we’re joined by Beau Bullock and Hayden Covington to unpack one of the most talked-about AI agent experiments in recent memory: OpenClaw and its companion platform, Moltbook.

    OpenClaw exploded onto the scene as an autonomous AI agent capable of operating Claude Code from the command line — executing tasks, monitoring output, and iterating with minimal human involvement. Shortly after, Moltbook emerged as a social platform designed specifically for AI agents to interact with one another.

    But as with most cutting-edge AI experiments, things moved fast… and broke fast.

    We dig into:

    • What OpenClaw actually is and how it works
    • AI agents operating other AI systems (Claude Code in the loop)
    • The concept of “skills” and extending agent capabilities
    • The one-click RCE vulnerability discovered shortly after release
    • Moltbook as a social network for AI agents
    • API keys, agent-only access, and how humans bypassed it
    • Beacons, autonomy, and what “control” really means
    • Where the line is between automation and true autonomy
    • Short-term workforce impacts vs. long-term AI risk


    This conversation moves beyond hype into the practical and security implications of rapidly deployed autonomous agents. If you’re experimenting with AI agents — or defending against them — this episode will give you a grounded perspective on what’s possible today, what’s fragile, and what’s coming next.

    • (00:00) - Intro & Guest Welcome
    • (02:01) - AI Agents in the News
    • (03:46) - From “Moltbot” to OpenClaw
    • (04:36) - What Is OpenClaw? How It Works
    • (05:36) - Claude Code + Agent-in-the-Middle Model
    • (07:59) - Extending OpenClaw with Skills
    • (09:05) - Release Timeline & Rapid Adoption
    • (10:39) - One-Click RCE in OpenClaw
    • (12:08) - Introducing Moltbook (AI Social Network)
    • (14:26) - How Moltbook Actually Worked
    • (18:18) - “I Am a Robot” & Agent Authentication
    • (20:51) - Beaconing & Operational Behavior
    • (27:07) - Automation vs. True Autonomy
    • (27:49) - Control, Kill Switches & Agent Boundaries
    • (31:22) - Workforce Impact & Near-Term Concerns
    • (35:57) - AI Apocalypse? Final Thoughts & Wrap-Up

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Beau Bullock - Guest
    • Hayden Covington - Guest
    • Derek Banks - Host
    • Brian Fehrman - Host
    • Bronwen Aker - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.


    🧦 SOC Summit 2026
    https://www.antisyphontraining.com/event/soc-summit/

    36 min
  • AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40
    2026/02/20

    In this episode of BHIS Presents: AI Security Ops, we sit down with Hayden Covington and Ethan Robish from the BHIS Security Operations Center (SOC) to explore how AI is actually being used in modern defensive operations.

    From foundational machine learning techniques like statistical baselining and clustering to large language models assisting with alert triage and reporting, we dig into what works, what doesn’t, and what SOC teams should realistically expect from AI today.

    We break down:

    - How AI helps reduce alert fatigue and improve triage
    - Practical automation inside a real-world SOC
    - The difference between traditional ML approaches and LLM-powered workflows
    - Foundational techniques like K-means, anomaly detection, and behavioral baselining
    - Using LLMs for enrichment, investigation, and report drafting
    - Where AI struggles: hallucinations, inconsistency, and edge cases
    - Risks around over-trusting AI in security operations
    - How to responsibly integrate AI into analyst workflows

    This episode is grounded in real operational experience—not vendor demos. If you’re running a SOC, building AI tooling, or just trying to separate hype from reality, this conversation will help you think clearly about augmentation vs. automation in defensive security.


    • (00:00) - Intro & Guest Introductions
    • (04:44) - Alert Triage & SOC Pain Points
    • (06:04) - Automation Inside the SOC
    • (09:59) - “Boring AI”: Clustering, Baselining & Statistics
    • (17:06) - AI-Assisted Reporting & Client Communication
    • (18:34) - Limitations, Edge Cases & Model Risk
    • (22:56) - Hallucinations & Inconsistent Outputs
    • (25:04) - AI Demos vs. Real-World Security Work
    • (28:35) - Final Thoughts & Closing

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Hayden Covington - Guest
    • Ethan Robish - Guest
    • Bronwen Aker - Host
    • Derek Banks - Host
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    30 min
  • AI News | Episode 39
    2026/02/12

    In this episode of AI Security Ops, we break down the latest developments in AI-driven threats, identity chaos caused by autonomous agents, NIST’s focus on securing AI in critical infrastructure, and new visibility tooling for AI exposure.

    We cover real-world abuse of LLMs for phishing, how AI agents are colliding with IAM governance, and what defenders should be watching right now.

    Chapters:
    00:00 – Introduction and Sponsors
    Black Hills Information Security - https://www.blackhillsinfosec.com/
    Antisyphon Training - https://www.antisyphontraining.com/

    01:08 – LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)
    Discussion begins as the hosts introduce the first story.
    How LLMs are generating polymorphic malicious JavaScript for phishing pages and evading traditional detection.
    👉 https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/

    08:49 – AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)
    Conversation shifts to agent privilege management and governance failures.
    👉 https://thehackernews.com/2026/01/who-approved-this-agent-rethinking.html

    10:07 – NIST Focus on Securing AI Agents in Critical Infrastructure
    Discussion on federal guidance and why AI agents are being treated as critical infrastructure risk components.
    👉 https://www.linkedin.com/pulse/cybersecurity-institute-news-roundup-20-january-2026-entrust-alz7c

    13:44 – Tenable One AI Exposure
    Breaking down Tenable’s push into enterprise AI usage visibility and exposure management.
    👉 https://www.tenable.com/blog/tenable-one-ai-exposure-secure-ai-usage-at-scale


    Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security.
    https://discord.gg/bhis

    Chapters

    • (00:00) - Introduction and Sponsors
    • (01:08) - LLM-Generated Phishing JavaScript (Unit 42 / Palo Alto)
    • (08:49) - AI Agents vs IAM: “Who Approved This Agent?” (Hacker News)
    • (10:07) - NIST Focus on Securing AI Agents in Critical Infrastructure
    • (13:44) - Tenable One AI Exposure

    Creators & Guests
    • Brian Fehrman - Host
    • Bronwen Aker - Host

    Click here to watch this episode on YouTube.

    ----------------------------------------------------------------------------------------------
    About Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/
    About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
    About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
    About Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/
    About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    19 min
  • Questions From the Community | Episode 38
    2026/02/05


    Click here to watch this episode on YouTube.

    Creators & Guests

    • Brian Fehrman - Host
    • Joff Thyer - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    17 min
  • A.I. Frameworks and Databases | Episode 37
    2026/01/30

    In Episode 37 of AI Security Ops, the team breaks down the most important AI security frameworks and vulnerability databases used to track risks in machine learning and large language models. The discussion covers emerging AI vulnerability databases, the OWASP Top 10 for LLMs, CVE challenges, and frameworks like MITRE ATLAS, highlighting why standardizing AI threats is still difficult. This episode is a practical guide for security professionals looking to stay ahead of AI vulnerabilities, attack techniques, and defensive resources in a fast-moving landscape.

    Chapters

    • (00:00) - Episode 37 – AI Frameworks & Databases
    • (01:39) - AI vulnerability tracking is still young
    • (02:44) - Should AI get its own vulnerability database?
    • (07:33) - The benefit of multiple vulnerability databases
    • (15:58) - What is the definition of a vulnerability?
    • (17:54) - Final Thoughts

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    19 min
  • AI News Stories | Episode 36
    2026/01/22

    This week on AI Security Ops, the team breaks down how attackers are weaponizing AI and the tools around it: a critical n8n zero-day that can lead to unauthenticated remote code execution, prompt-injection “zombie agent” risks tied to ChatGPT memory, a zero-click-style indirect prompt injection scenario via email/URLs, and malicious Chrome extensions caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.

    Key stories discussed:

    1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk
    👉 https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html
    The hosts discuss a critical flaw in the n8n workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve remote code execution as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer.
    Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system.

    2) “Zombie agent” prompt injection via ChatGPT Memory
    👉 https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection
    The team talks about research describing an exploit that stores malicious instructions in long-term memory, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem.
    User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved.

    3) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)
    👉 https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/
    Another story describes a crafted URL delivered via email that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as indirect prompt injection—a pattern they expect to keep seeing as assistants gain more connectivity.
    Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history.

    4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users)
    👉 https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html
    Two Chrome extensions posing as AI productivity tools reportedly injected JavaScript into AI web UIs, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing extension supply-chain risk and the reality that “approved store” doesn’t mean safe.
    Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism.

    5) APT28 credential phishing updated with AI-written lures
    👉 https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html
    The closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is AI-generated, making it more consistent and convincing (and harder for users to spot via grammar or tone).
    The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide).

    Chapters

    • (00:00) - Intro & Sponsors
    • (01:16) - 1) n8n zero-day → unauthenticated RCE
    • (09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory
    • (19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)
    • (23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)
    • (29:59) - 5) APT28 phishing refreshed with AI-written lures
    • (34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders

    Click here to watch a video of this episode.

    Creators & Guests
    • Brian Fehrman - Host
    • Bronwen Aker - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com
    36 min
  • 2026 Predictions | Episode 35
    2026/01/08

    In this episode, the BHIS panel looks into the crystal ball and shares bold predictions for AI in 2026—from energy constraints and drug development breakthroughs to agentic AI risks and cybersecurity threats.

    Chapters

    • (00:00) - Intro & Sponsor Shoutouts
    • (01:14) - Prediction: Grid Power Becomes the Bottleneck
    • (10:27) - Prediction: FDA Qualifies AI Drug Development Tools
    • (15:45) - Prediction: Nation-State Threat Actors Weaponize AI
    • (17:33) - Prediction: Agentic AI Dominates App Development
    • (23:07) - Closing Thoughts: Jobs, Risk & Opportunity

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com



    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com


    ----------------------------------------------------------------------------------------------

    Joff Thyer - https://www.blackhillsinfosec.com/team/joff-thyer/

    Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/

    Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/

    Bronwen Aker - https://www.blackhillsinfosec.com/team/bronwen-aker/

    Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

    25 min