
EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
About this content
Guest:
- Christine Sizemore, Cloud Security Architect, Google Cloud
Topics:
- Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?
- I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?
- We like to say that history might not repeat itself, but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
- We’ve talked a lot about technology and process – what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?
- We are all hearing about agentic security – so can we just ask the AI to secure itself?
- What are the top 3 things a typical org should do to secure its AI software supply chain?
Resources:
- Video
- “Securing AI Supply Chain: Like Software, Only Not” blog (and paper)
- “Securing the AI software supply chain” webcast
- EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments
- Protect AI issue database
- “Staying on top of AI Developments”
- “Office of the CISO 2024 Year in Review: AI Trust and Security”
- “Your Roadmap to Secure AI: A Recap” (2024)
- "RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check" (references our "data as code" presentation)