
Absolute AppSec

Authors: Ken Johnson and Seth Law

Summary

A weekly podcast covering all things application security. Hosted by Ken Johnson and Seth Law.
Episodes
  • Episode 312 - Vibe Coding Risks, Burnout, AppSec Scorecards
    2026/02/10
    In episode 312 of Absolute AppSec, the hosts discuss the double-edged sword of "vibe coding", noting that while AI agents often write better functional tests than humans, they frequently struggle with nuanced authorization patterns and carry "upkeep costs" as foundation models change behavior over time. A central theme of the episode is that the greatest security risk to an organization is not AI itself but an exhausted security team. The hosts explore how burnout often manifests as "silent withdrawal" and emphasize that managers must proactively surface these issues, especially in organizations that treat security as a mere cost center. They also review new defensive strategies such as TrapSec, a framework for deploying canary API endpoints to detect malicious scanning (a minimal sketch of the canary idea follows this entry), and highlight the value of security scorecarding, pioneered by companies like Netflix and GitHub, as a maturity activity that provides a holistic, blame-free view of application health by aggregating multiple metrics. The episode concludes with a reminder that tools like Semgrep remain essential for efficiency, even as practitioners increasingly leverage the probabilistic creativity of LLMs.
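
A minimal sketch of the canary-endpoint idea attributed to TrapSec above. The episode summary does not document TrapSec's actual interface, so the framework choice (Flask), the route names, and the alert hook here are all illustrative assumptions: decoy routes that no legitimate client should ever request are registered, and any hit is treated as evidence of scanning.

```python
# Illustrative sketch only; TrapSec's real API is not documented here.
# Idea: register decoy routes that no legitimate client should ever call,
# and treat any request to them as a signal of malicious scanning.
from flask import Flask, request, abort
import logging

app = Flask(__name__)
log = logging.getLogger("canary")

# Paths chosen to look attractive to scanners; never linked anywhere real.
CANARY_PATHS = ["/api/v1/admin/export", "/api/internal/debug", "/.env"]

def alert(source_ip: str, path: str) -> None:
    # Hypothetical alert hook: in practice this might page on-call staff,
    # push an event to a SIEM, or auto-block the source IP.
    log.warning("canary tripped: %s requested %s", source_ip, path)

def make_handler(path: str):
    def handler():
        alert(request.remote_addr or "unknown", path)
        abort(404)  # To the scanner, indistinguishable from a missing route.
    handler.__name__ = f"canary_{abs(hash(path))}"  # unique endpoint name
    return handler

for p in CANARY_PATHS:
    app.add_url_rule(p, view_func=make_handler(p))

if __name__ == "__main__":
    app.run(port=8080)
```
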
  • Episode 311 - Transformation of AppSec, AI Skills, Development Velocity
    2026/02/03
    Ken Johnson and Seth Law examine the profound transformation of the security industry as AI tooling moves from simple generative models to sophisticated agentic architectures. A primary theme is the dramatic surge in development velocity, with some organizations seeing pull request volumes increase by over 800% as developers let AI agents operate nearly hands-off. This shift is redefining the role of application security practitioners, moving experts from manual tasks like manipulating Burp Suite requests to a validation-centric role in which they spot-check complex AI-generated findings in minutes. The hosts characterize older security tools as "primitive" compared to modern AI analysis, which can now identify human-level flaws such as complex authorization bypasses. A major technical highlight is the introduction of agent "skills" (markdown files containing instructions that extend coding assistants) and the new supply chain risks that come with them. They specifically reference research on malicious skills designed to exfiltrate crypto wallets and SSH credentials, warning that the registries hosting these skills lack adequate security responses (a heuristic skill-file scanner is sketched after this entry). To manage the inherent "reasoning drift" of AI, the hosts argue that test-driven development has become a critical safety requirement. Ultimately, they warn that the industry has already shifted fundamentally, and security professionals must lean into these technologies now to avoid becoming obsolete in a landscape that evolves day to day.
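
A rough illustration of the skill supply-chain risk described above: a crude heuristic scan of agent "skill" markdown files for the indicators the hosts mention (SSH key material, crypto wallets, remote code pulled into a shell). This is an assumption-laden sketch, not a tool referenced in the episode, and every pattern below is a guess.

```python
# Heuristic sketch, not a vetted detection tool: flag agent "skill"
# markdown files whose instructions touch secrets or fetch-and-execute
# remote code. All patterns and paths are illustrative assumptions.
import re
import sys
from pathlib import Path

SUSPICIOUS = [
    (re.compile(r"~/\.ssh|id_rsa|id_ed25519"), "references SSH key material"),
    (re.compile(r"wallet\.dat|keystore|mnemonic|seed phrase", re.I),
     "references crypto wallet data"),
    (re.compile(r"curl[^\n]*\|\s*(sh|bash)"), "pipes a download into a shell"),
    (re.compile(r"base64\s+(-d|--decode)"), "decodes an embedded payload"),
]

def scan_skill(path: Path) -> list[str]:
    text = path.read_text(encoding="utf-8", errors="replace")
    return [reason for pattern, reason in SUSPICIOUS if pattern.search(text)]

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for skill in sorted(root.rglob("*.md")):
        for reason in scan_skill(skill):
            print(f"{skill}: {reason}")
```

Static string checks like these are easily evaded; the point is only that skill files are plain text and can be reviewed before an assistant loads them.
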
  • Episode 310 - w/ Mohan Kumar and Naveen K Mahavisnu - AI Agent Security
    2026/01/27
    In this episode of Absolute AppSec, hosts Ken Johnson and Seth Law interview Mohan Kumar and Naveen K Mahavisnu, the practitioner-founders of Aira Security, to explore the challenges of securing autonomous AI agents in 2026. The conversation centers on the industry's shift toward "agentic workflows," where AI is delegated complex tasks that require monitoring not just for access control but for the underlying "intent" of the agent's actions. The founders explain that agents can experience "reasoning drift," taking dangerous or unintended shortcuts to complete their missions, which necessitates guardrails like "trajectory analysis" and human-in-the-loop interventions to ensure safety and data integrity. A significant portion of the episode covers the security of the Model Context Protocol (MCP), highlighting how these integration servers can be vulnerable to "shadowing attacks" and indirect prompt injection, exemplified by a real-world case in which private code was exfiltrated via a public GitHub pull request. To address these gaps, the guests introduce their open-source tool, MCP Checkpoint, which lets developers baseline their agentic configurations and detect malicious changes in third-party tooling (the baselining idea is sketched after this entry). Throughout the discussion, the group emphasizes that as AI moves into production, security must evolve into a proactive enablement layer that understands the probabilistic, unpredictable nature of LLM reasoning.
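
A minimal sketch of the baselining idea behind MCP Checkpoint, whose actual implementation is not described in the episode summary: fingerprint each tool an MCP server exposes, persist the fingerprints, and flag later drift such as a silently rewritten tool description (the "shadowing" pattern). The field names and baseline format below are assumptions.

```python
# Sketch of configuration baselining for MCP tooling; NOT MCP Checkpoint's
# actual design. Each tool's description and schema are hashed so that any
# later change ("shadowing") or addition/removal can be flagged.
import hashlib
import json
from pathlib import Path

BASELINE = Path("mcp_baseline.json")  # hypothetical baseline location

def fingerprint(tools: list[dict]) -> dict[str, str]:
    # Map tool name -> SHA-256 of its description and declared input schema.
    return {
        t["name"]: hashlib.sha256(
            json.dumps(
                {"description": t.get("description", ""),
                 "inputSchema": t.get("inputSchema", {})},
                sort_keys=True,
            ).encode()
        ).hexdigest()
        for t in tools
    }

def save_baseline(tools: list[dict]) -> None:
    BASELINE.write_text(json.dumps(fingerprint(tools), indent=2))

def check_drift(tools: list[dict]) -> list[str]:
    old = json.loads(BASELINE.read_text())
    new = fingerprint(tools)
    drift = [f"changed: {n}" for n in new if n in old and old[n] != new[n]]
    drift += [f"added: {n}" for n in new if n not in old]
    drift += [f"removed: {n}" for n in old if n not in new]
    return drift
```

Saving a baseline once at install time and running check_drift before each session would catch a third-party tool whose description quietly changed between runs.
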
No reviews yet