
Cloud Security Podcast by Google

Author: Anton Chuvakin

About this podcast

Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure. We’re going to do our best to avoid security theater, and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject’s benefit or just for organizational benefit. We hope you’ll join us if you’re interested in where technology overlaps with process and bumps up against organizational design. We’re hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can’t keep as the world moves from on-premises computing to cloud computing.

Copyright Google Cloud
Episodes
  • EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
    2025/05/19

    Guest:

    • Christine Sizemore, Cloud Security Architect, Google Cloud

    Topics:

    • Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?
    • I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?
    • We like to say that history might not repeat itself but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
    • We’ve talked a lot about technology and process–what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?
    • We are all hearing about agentic security – so can we just ask the AI to secure itself?
    • Top 3 things to do to secure AI software supply chain for a typical org?

    Resources:

    • Video
    • “Securing AI Supply Chain: Like Software, Only Not” blog (and paper)
    • “Securing the AI software supply chain” webcast
    • EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments
    • Protect AI issue database
    • “Staying on top of AI Developments”
    • “Office of the CISO 2024 Year in Review: AI Trust and Security”
    • “Your Roadmap to Secure AI: A Recap” (2024)
    • "RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check" (references our "data as code" presentation)
    25 min
  • EP225 Cross-promotion: The Cyber-Savvy Boardroom Podcast: EP3 Don Callahan on Emerging Technology
    2025/05/14

    Hosts:

    • David Homovich, Customer Advocacy Lead, Office of the CISO, Google Cloud
    • Nick Godfrey, Senior Director and Head of Office of the CISO, Google Cloud

    Guests:

    • Don Callahan, Advisor and Board Member

    Resources:

    • EP3 Don Callahan on Emerging Technology (as aired originally)
    • The Cyber-Savvy Boardroom podcast site
    • The Cyber-Savvy Boardroom podcast on Spotify
    • The Cyber-Savvy Boardroom podcast on Apple Podcasts
    • The Cyber-Savvy Boardroom podcast on YouTube
    • Now hear this: A new podcast to help boards get cyber savvy (without the jargon)

    25 min
  • EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps
    2025/05/12

    Guest:

    • Diana Kelley, CSO at Protect AI

    Topics:

    • Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
    • What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
    • How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
    • In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
    • How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
    • What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
    • Top differences between LLM/chatbot AI security vs AI agent security?

    Resources:

    • “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers”
    • “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever”
    • Secure by Design for AI by Protect AI
    • “Securing AI Supply Chain: Like Software, Only Not”
    • OWASP Top 10 for Large Language Model Applications
    • OWASP Top 10 for AI Agents (draft)
    • MITRE ATLAS
    • “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper)
    • LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
    31 min

Listener reviews for Cloud Security Podcast by Google

Overall rating
  • 5 out of 5 stars (1 rating, all 5 stars)
Narration
  • 5 out of 5 stars (1 rating, all 5 stars)
Story
  • 5 out of 5 stars (1 rating, all 5 stars)
