Cover art for "AI Ethics Now"

AI Ethics Now

By: Tom Ritchie, Jennie Mills, IATL, WIHEA, University of Warwick

About this content

AI Ethics Now is a podcast dedicated to exploring the complex issues surrounding artificial intelligence from a non-specialist perspective, including bias, ethics, privacy, and accountability. Join us as we discuss the challenges and opportunities of AI and work towards a future where technology benefits society as a whole. This podcast was first developed by Dr Tom Ritchie and Dr Jennie Mills as part of The AI Revolution: Ethics, Technology, and Society module, taught as part of IATL at the University of Warwick.
Episodes
  • 9. AI and Bias: How AI Shapes What We Buy
    2025/12/15

    As you search for Christmas gifts this season, have you asked ChatGPT or Gemini for recommendations? Katarina Mpofu and Jasmine Rienecker from Stupid Human join the podcast to discuss their groundbreaking research examining how AI systems influence public opinion and decision-making. Conducted in collaboration with the University of Oxford, their study analysed over 8,000 AI-generated responses to uncover systematic biases in how AI systems like ChatGPT and Gemini recommend brands, institutions, and governments.

    Their findings reveal that AI assistants aren't neutral—they have structured and persistent preferences that favour specific entities regardless of how questions are asked or who's asking. ChatGPT consistently recommended Nike for running shoes in over 90% of queries, whilst both models claimed the US has the best national healthcare system. These preferences extend beyond consumer products into government policy and educational institutions, raising critical questions about fairness, neutrality, and AI's role in shaping global narratives.
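
    As a rough illustration of the kind of frequency audit the guests describe, here is a minimal sketch: ask the same question in several phrasings, sample many replies, and count how often each brand is named. It assumes the official openai Python client; the model name, prompt set, brand list, and sample size are illustrative choices, not the study's actual protocol.

    ```python
    # Minimal sketch of a recommendation-frequency audit. All prompts,
    # brands, and counts here are illustrative, not the study's protocol.
    from collections import Counter

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Paraphrased variants of one question, to test whether a preference
    # persists regardless of how the question is asked.
    PROMPTS = [
        "What running shoes should I buy?",
        "Recommend a brand of running shoes.",
        "I'm new to running. Which shoe brand is best?",
    ]
    BRANDS = ["Nike", "Adidas", "Asics", "Brooks", "Hoka", "New Balance"]

    counts = Counter()
    n_replies = 0
    for prompt in PROMPTS:
        for _ in range(50):  # repeated sampling per phrasing
            reply = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            ).choices[0].message.content or ""
            n_replies += 1
            for brand in BRANDS:
                if brand.lower() in reply.lower():
                    counts[brand] += 1  # count each brand once per reply

    for brand, n in counts.most_common():
        print(f"{brand}: {n / n_replies:.0%} of replies")
    ```

    A brand that dominates across every phrasing, as Nike reportedly did in over 90% of queries, is exactly the kind of structured, persistent preference the episode discusses.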

    We explore how AI assistants are more persuasive than human debaters, why users trust these systems as sources of truth without questioning their recommendations, and how geographic and cultural biases develop through training data, semantic associations, and user feedback amplification. Katarina and Jasmine explain why language matters - asking in English produces US-centric biases regardless of where you're located - and discuss the implications for smaller brands, niche markets, and diverse user groups systematically disadvantaged by current AI design.

    The conversation examines whether companies understand they're building these preferences into systems, the challenge of cross-domain bias contamination, and the urgent need for frameworks to identify and benchmark AI biases beyond protected characteristics like race and gender.

    AI Ethics Now

    Exploring the ethical dilemmas of AI in Higher Education and beyond.

    A University of Warwick IATL Podcast

    This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

    This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.

    Join each fortnight for new critical conversations on AI Ethics with local, national, and international experts.

    We will discuss:

    • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
    • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
    • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.

    If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.

    25 min
  • 8. AI and Decentralisation: Own AI or Be Owned By It
    2025/11/30

    In this episode, Max Sebti, co-founder and CEO of Score, challenges the centralised control of computer vision systems and makes the case for decentralised AI as a matter of public interest.

    Max brings experience from AI data annotation and model development, where he witnessed how closed systems collect and control vast amounts of visual data. Now at Score, running on the Bittensor network, he's building "open source computer vision" - systems that are publicly verifiable, permissionless, and collectively owned rather than corporately controlled.

    His central argument: we face a choice between "own AI or be owned by AI." As computer vision expands from sport into healthcare, insurance, and public surveillance, the question of who controls these systems becomes existential. Max argues citizens should have access to model weights and training data as a democratic necessity.

    We explore what decentralisation means in practice: how Bittensor's incentive mechanisms unlock talent and data centralised systems can't access, why open source doesn't sacrifice performance, and the stark reality that camera systems are making decisions about you based on models you cannot see.

    Max introduces competing visions: a "Skynet" scenario where private entities own all visual data, versus a "solar punk" future of abundant energy and AGI where open AI serves collective benefit. The difference? Transparency, accountability, and public ownership.

    The conversation tackles thorny questions: where should boundaries exist in open systems? How do you prevent misuse whilst maintaining accessibility? Max admits his team hasn't solved this - decentralised AI means thousands of contributors with different values building toward the same goal.

    Max closes with a call to action: push for open source AI models where people can verify, query, and hold systems accountable. His vision moves AI from corporate product to public utility - not because it's idealistic, but because the alternative is too dangerous.

    26 min
  • 7. AI and Security: The Arms Race We're Losing
    2025/11/17

    In this episode, Jadee Hansen, Chief Information Security Officer at Vanta, reveals why AI security isn't keeping pace with AI adoption—and why that should concern every organisation.

    Jadee brings over 20 years of cybersecurity experience across highly regulated sectors, from Target to Code42, where she co-authored the definitive book on insider risk. Now at Vanta, a leading trust management platform, she's navigating one of security's biggest challenges: AI is being deployed by both attackers and defenders faster than either side truly understands it.

    The numbers are stark: 65% of leaders say their use of agentic AI exceeds their understanding of it. Over half of organisations have faced AI-powered attacks in the past year. Yet only 48% have frameworks to manage AI autonomy and control.

    Jadee's central argument: AI security isn't about applying AI everywhere but about applying it wisely. The fundamental challenge? AI is non-deterministic. Unlike traditional security controls, where "if this, then that" works predictably, AI models can produce different outputs from the same input; no amount of training guarantees a particular outcome.

    The conversation explores the "human in the loop" framework: identifying which tasks are low-risk enough for AI automation and which require human oversight. Jadee argues organisations must resist the temptation to automate high-stakes decisions, even when the technology seems capable. The teams that succeed won't be those deploying AI most aggressively, but those thinking most carefully about practical applications with minimal risk.
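
    A minimal sketch of what such risk-tiered routing might look like in code is below: automate only actions under a low-risk threshold and queue everything else for a human reviewer. The risk tiers, action names, and review queue are hypothetical illustrations, not Vanta's implementation.

    ```python
    # Sketch of a "human in the loop" routing policy: low-risk actions may
    # be automated; everything else is escalated to a human reviewer.
    # Tiers and action names are hypothetical illustrations.
    from dataclasses import dataclass
    from enum import Enum


    class Risk(Enum):
        LOW = 1     # e.g. tagging a suspicious log entry
        MEDIUM = 2  # e.g. quarantining a single file
        HIGH = 3    # e.g. revoking an employee's credentials


    @dataclass
    class ProposedAction:
        name: str
        risk: Risk


    def route(action: ProposedAction, review_queue: list[ProposedAction]) -> str:
        """Automate only below the risk threshold; all else waits for a human."""
        if action.risk is Risk.LOW:
            return f"auto-executed: {action.name}"
        review_queue.append(action)  # human oversight required
        return f"escalated for human review: {action.name}"


    queue: list[ProposedAction] = []
    print(route(ProposedAction("tag suspicious log entry", Risk.LOW), queue))
    print(route(ProposedAction("revoke admin credentials", Risk.HIGH), queue))
    ```

    The design point, in Jadee's framing, is that the threshold stays deliberately conservative even when the model seems capable of handling the higher-risk cases.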

    We discuss the AI arms race in security and Jadee introduces the concept of treating AI adoption as a shared infrastructure problem requiring joint risk decisions. She challenges the static approach to policy creation, arguing we need "living, breathing" policies that evolve as rapidly as AI itself—not annual updates that are obsolete before implementation.

    The episode offers particular relevance for education, where Jadee's insights expose a gap: whilst universities debate AI ethics and data usage extensively, security implications often go underdiscussed. Yet these security considerations may ultimately determine whether AI integration succeeds or fails.

    Jadee closes with practical guidance on building trust in AI systems: transparency about what AI is doing, optionality in how it's applied (because risk tolerance varies), and continuous monitoring through frameworks like ISO 42001 and NIST AI RMF. Her vision? Moving from static compliance checks to real-time control monitoring.

    22 min