Cyber Sentries: AI Insight to Cloud Security

By: TruStory FM

About this content

Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.

© TruStory FM · Politics & Government
Episodes
  • Securing AI Agents: How to Stop Credential Leaks and Protect Non‑Human Identities with Idan Gour
    2025/12/10

    Cast your vote for Cyber Sentries as Best Technology Podcast here!


    Bridging the AI Security Gap—Inside the Rise of Non‑Human Identities

    In this episode of Cyber Sentries from CyberProof, host John Richards sits down with Idan Gour, co-founder and president of Astrix Security, to unpack one of today’s fastest-emerging challenges: securing AI agents and non-human identities (NHIs) in the modern enterprise. As companies rush to adopt generative-AI tools and deploy Model Context Protocol (MCP) servers, they’re unlocking incredible automation—and a brand-new attack surface. Together, John and Idan explore how credential leakage, hard-coded secrets, and rapid “shadow-AI” experimentation are exposing organizations to unseen risks, and what leaders can do to stay ahead.

    From Non‑Human Chaos to Secure‑by‑Design AI

    Idan shares the origin story of Astrix Security—built to close the identity-security gap left behind by traditional IAM tools. He explains how enterprises can safely navigate their AI journey using the Discover → Secure → Deploy framework for managing non-human access. The conversation moves from early automation risk to today’s complex landscape of MCP deployments, secret-management pitfalls, and just-in-time credentialing. John and Idan also discuss Astrix’s open-source MCP wrapper, designed to prevent hard‑coded credentials from leaking during model integration—a practical step organizations can adopt immediately.

    Questions We Answer in This Episode

    • How can companies prevent AI‑agent credentials from leaking across cloud and development environments?
    • What’s driving the explosion of non‑human identities—and how can security teams regain control?
    • When should organizations begin securing AI agents in their adoption cycle?
    • What frameworks or first principles best guide safe AI‑agent deployment?

    Key Takeaways

    • Start securing AI agents early—waiting until “maturity” means you’re already behind.
    • Visibility is everything: you can’t protect what you don’t know exists.
    • Automate secret management and avoid static credentials through just‑in‑time access.
    • Treat AI agents and NHIs as first‑class citizens in your identity‑security program.
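    The just-in-time access idea in the takeaways above can be sketched in a few lines: instead of shipping a long-lived secret hard-coded in an agent's config, the agent requests a short-lived token on demand and rotates it when it expires. This is a minimal illustration of the pattern, not Astrix's actual tooling; the names (`issue_token`, `get_valid_token`, `TOKEN_TTL`) are hypothetical.

    ```python
    # Minimal just-in-time credential sketch: no static secret is ever
    # stored; tokens are minted on demand and rotated after expiry.
    import secrets
    import time

    TOKEN_TTL = 300  # seconds a token stays valid; illustrative policy value


    def issue_token(now=None):
        """Mint a fresh random token with an expiry timestamp."""
        now = time.time() if now is None else now
        return {"value": secrets.token_urlsafe(32), "expires_at": now + TOKEN_TTL}


    def get_valid_token(cached, now=None):
        """Return the cached token if still valid, otherwise mint a new one."""
        now = time.time() if now is None else now
        if cached is None or now >= cached["expires_at"]:
            return issue_token(now)
        return cached


    # Usage: the agent holds only a short-lived token in memory.
    tok = get_valid_token(None)
    fresh = get_valid_token(tok)                              # still valid: reused
    rotated = get_valid_token(tok, now=tok["expires_at"] + 1)  # expired: replaced
    ```

    The same shape applies whether the token source is a cloud secret manager, a workload-identity service, or an MCP gateway: the key property is that nothing long-lived ever lands in code or config.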

    As AI adoption accelerates within every department—from R&D to customer operations—Idan emphasizes that non‑human identity management is the new frontier of cybersecurity. Getting it right means enterprises can innovate fearlessly while maintaining the integrity of their data, systems, and brand.

    Links & Notes

    • Learn more about Paladin Cloud
    • Learn more about Astrix Security
    • Idan Gour on LinkedIn
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (01:47) - Meet Idan Gour
    • (04:02) - As the Vertical Started to Grow
    • (07:03) - The Journey
    • (09:50) - Struggling
    • (13:44) - Risk
    • (16:41) - Targeting
    • (18:20) - Framework
    • (20:44) - Implementing Early
    • (22:18) - Back End Risks
    • (24:30) - Bridging the Gap
    • (26:39) - When to Engage Astrix
    • (30:21) - Wrap Up
    33 min
  • AI Compliance Security: How Modular Systems Transform Enterprise Risk Management with Richa Kaul
    2025/11/12

    AI-Powered Compliance: Transforming Enterprise Security

    In this episode of Cyber Sentries, John Richards speaks with Richa Kaul, CEO and founder of Complyance. Richa shares insights on using modular AI systems for enterprise security compliance and discusses the critical balance between automation and human oversight in cybersecurity.

    Why Enterprise Security Compliance Matters Now

    The conversation explores how enterprises struggle with increasing cyber threats and complex third-party vendor networks. Richa explains how moving from reactive to proactive compliance monitoring can transform security posture, sharing real examples from Fortune 100 companies and major sports organizations.

    AI Implementation That Prioritizes Security

    Richa details their approach to implementing AI in compliance, emphasizing their commitment to data privacy and security. The company uses a modular AI infrastructure with opt-in features and minimal data access principles, demonstrating how AI can enhance security without compromising privacy.

    Questions We Answer:

    • How can enterprises shift from reactive to proactive compliance monitoring?
    • What are the key considerations for implementing AI in security compliance?
    • How should companies manage third-party vendor risks in the AI era?
    • What role does employee education play in maintaining security compliance?

    Key Takeaways:

    • Continuous monitoring beats point-in-time compliance checks
    • Modular AI systems offer better security control than all-in-one solutions
    • Third-party vendor risk requires automated, continuous assessment
    • Human elements like training and culture can't be fully automated

    Looking Ahead: Security Challenges

    The discussion concludes with insights into future challenges, including quantum computing's impact on security and the growing complexity of AI-related risks. Richa emphasizes the importance of building nimble, configurable systems to address emerging threats.

    Links & Notes

    • More About Richa Kaul
    • Complyance on LinkedIn and the Web
    • Learn more about Paladin Cloud
    • Learn more about CyberProof
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (01:13) - Meet Richa Kaul from Complyance
    • (02:32) - Areas Needing Security
    • (04:19) - Reactive vs. Proactive
    • (06:17) - Integrating AI
    • (07:59) - AI Compliance Challenges
    • (10:48) - Training Their Models
    • (12:16) - Evaluating Third Parties
    • (15:49) - The Team
    • (19:04) - Looking to the Future
    • (20:44) - How Others Are Implementing AI
    • (24:04) - Creating Capacity
    • (25:44) - Companies Doing It Well
    • (27:25) - When They Don’t Have the Resources
    • (28:50) - Wrap Up
    31 min
  • AI Governance Essentials: Navigating Security and Compliance in Enterprise AI with Walter Haydock
    2025/10/08

    Cast your vote for Cyber Sentries as Best Technology Podcast here!


    AI Governance in an Era of Rapid Change

    In this episode of Cyber Sentries, John Richards talks with Walter Haydock, founder of StackAware, about navigating the complex landscape of AI governance and security. Walter brings unique insights from his background as a Marine Corps intelligence officer and his extensive experience in both government and private sectors.

    Understanding AI Risk Management

    Walter shares his perspective on how organizations can develop practical AI governance frameworks while balancing innovation with security. He outlines a three-step approach starting with policy development, followed by thorough inventory of AI tools, and assessment of cybersecurity implications.

    The discussion explores how different industries face varying levels of AI risk, with healthcare emerging as a particularly challenging sector where both opportunities and dangers are amplified. Walter emphasizes the importance of aligning AI governance with business objectives rather than treating it as a standalone initiative.

    Questions We Answer in This Episode:

    • How should organizations approach AI governance and risk management?
    • What are the key challenges in implementing ISO 42001 for AI systems?
    • How can companies address the growing problem of "shadow AI"?
    • What are the implications of fragmented AI regulations across different jurisdictions?

    Key Takeaways:

    • Organizations need clear AI policies that define acceptable use boundaries
    • Risk management should integrate with existing frameworks rather than create separate systems
    • Companies must balance compliance requirements with innovation needs
    • Employee education and flexible approval processes help prevent shadow AI usage

    The Regulatory Landscape

    The conversation delves into emerging AI regulations, from New York City's local laws to Colorado's comprehensive AI Act. Walter provides valuable insights into how organizations can prepare for upcoming regulatory changes while maintaining operational efficiency.

    Links & Notes

    • StackAware
    • Connect with Walter on LinkedIn
    • Learn more about Paladin Cloud
    • Got a question? Ask us here!
    • (00:04) - Welcome to Cyber Sentries
    • (00:56) - Walter Haydock from StackAware
    • (01:39) - Walter’s Background
    • (03:02) - Areas Needing Improvement
    • (03:49) - Integrating AI
    • (04:59) - StackAware’s Role
    • (06:51) - AI Certification Standard
    • (07:43) - Implementation Challenges
    • (08:54) - Thoughts on Looser Protocols
    • (11:42) - Regulations
    • (13:27) - Approaches
    • (15:23) - Areas of Concern
    • (17:52) - Handling Risk
    • (19:03) - Who Should Own AI Governance
    • (20:09) - Pushback?
    • (21:41) - Proper Techniques
    • (22:52) - What Levels
    • (24:15) - Smaller Companies
    • (26:20) - Ideal Legislation
    • (29:14) - Plugging Walter
    • (30:02) - Wrap Up
    32 min
No reviews yet