7. AI and Security: The Arms Race We're Losing
About this content
In this episode, Jadee Hansen, Chief Information Security Officer at Vanta, reveals why AI security isn't keeping pace with AI adoption—and why that should concern every organisation.
Jadee brings over 20 years of cybersecurity experience across highly regulated sectors, from Target to Code42, where she co-authored the definitive book on insider risk. Now at Vanta, a leading trust management platform, she's navigating one of security's biggest challenges: AI is being deployed by both attackers and defenders faster than either side truly understands it.
The numbers are stark: 65% of leaders say their use of agentic AI exceeds their understanding of it. Over half of organisations have faced AI-powered attacks in the past year. Yet only 48% have frameworks to manage AI autonomy and control.
Jadee's central argument: AI security isn't about applying AI everywhere, but about applying it wisely. The fundamental challenge? AI is non-deterministic. Unlike traditional security controls, where "if this, then that" works predictably, AI models will do what they'll do regardless of training. You cannot guarantee outcomes.
The conversation explores the "human in the loop" framework: identifying which tasks are low-risk enough for AI automation and which require human oversight. Jadee argues organisations must resist the temptation to automate high-stakes decisions, even when the technology seems capable. The teams that succeed won't be those deploying AI most aggressively, but those thinking most carefully about practical applications with minimal risk.
We discuss the AI arms race in security and Jadee introduces the concept of treating AI adoption as a shared infrastructure problem requiring joint risk decisions. She challenges the static approach to policy creation, arguing we need "living, breathing" policies that evolve as rapidly as AI itself—not annual updates that are obsolete before implementation.
The episode is particularly relevant for education, where Jadee's insights expose a gap: whilst universities debate AI ethics and data usage extensively, the security implications often go under-discussed. Yet these security considerations may ultimately determine whether AI integration succeeds or fails.
Jadee closes with practical guidance on building trust in AI systems: transparency about what AI is doing, optionality in how it's applied (because risk tolerance varies), and continuous monitoring through frameworks like ISO 42001 and NIST AI RMF. Her vision? Moving from static compliance checks to real-time control monitoring.
AI Ethics Now
Exploring the ethical dilemmas of AI in Higher Education and beyond.
A University of Warwick IATL Podcast
This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'
This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.
Join each fortnight for new critical conversations on AI Ethics with local, national, and international experts.
We will discuss:
- Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
- Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
- The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.
If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.