Episodes

  • 9. AI and Bias: How AI Shapes What We Buy
    2025/12/15

    As you search for Christmas gifts this season, have you asked ChatGPT or Gemini for recommendations? Katarina Mpofu and Jasmine Rienecker from Stupid Human join the podcast to discuss their groundbreaking research examining how AI systems influence public opinion and decision-making. Conducted in collaboration with the University of Oxford, their study analysed over 8,000 AI-generated responses to uncover systematic biases in how AI systems like ChatGPT and Gemini recommend brands, institutions, and governments.

    Their findings reveal that AI assistants aren't neutral—they have structured and persistent preferences that favour specific entities regardless of how questions are asked or who's asking. ChatGPT consistently recommended Nike for running shoes in over 90% of queries, whilst both models claimed the US has the best national healthcare system. These preferences extend beyond consumer products into government policy and educational institutions, raising critical questions about fairness, neutrality, and AI's role in shaping global narratives.
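
    The kind of measurement described above can be sketched in a few lines: repeatedly rephrase the same question, record which brand the model recommends, and compute each brand's share of recommendations. This is a minimal illustration, not the study's actual methodology; `ask` and `fake_model` are hypothetical stand-ins for a real API call to ChatGPT or Gemini.

```python
from collections import Counter

def preference_share(ask, prompts):
    """Tally which brand a model recommends across many rephrased prompts.

    `ask` is any callable that returns the recommended brand for a prompt;
    in a real study it would wrap a call to ChatGPT or Gemini.
    """
    counts = Counter(ask(p) for p in prompts)
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.most_common()}

# Stand-in for a real model call, purely to make the sketch runnable.
def fake_model(prompt):
    return "Nike" if "running" in prompt else "Other"

prompts = [f"Variant {i}: what are the best running shoes?" for i in range(20)]
shares = preference_share(fake_model, prompts)
# A share near 1.0 for one brand, stable across rephrasings,
# is the signature of a structured preference.
```

    A share that stays near 90% however the question is phrased is what distinguishes a systematic preference from ordinary sampling noise.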

    We explore how AI assistants are more persuasive than human debaters, why users trust these systems as sources of truth without questioning their recommendations, and how geographic and cultural biases develop through training data, semantic associations, and user feedback amplification. Katarina and Jasmine explain why language matters - asking in English produces US-centric biases regardless of where you're located - and discuss the implications for smaller brands, niche markets, and diverse user groups systematically disadvantaged by current AI design.

    The conversation examines whether companies understand they're building these preferences into systems, the challenge of cross-domain bias contamination, and the urgent need for frameworks to identify and benchmark AI biases beyond protected characteristics like race and gender.

    AI Ethics Now

    Exploring the ethical dilemmas of AI in Higher Education and beyond.

    A University of Warwick IATL Podcast

    This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

    This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.

    Join each fortnight for new critical conversations on AI Ethics with local, national, and international experts.

    We will discuss:

    • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
    • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
    • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.

    If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.

    25 min
  • 8. AI and Decentralisation: Own AI or Be Owned By It
    2025/11/30

    In this episode, Max Sebti, co-founder and CEO of Score, challenges the centralised control of computer vision systems and makes the case for decentralised AI as a matter of public interest.

    Max brings experience from AI data annotation and model development, where he witnessed how closed systems collect and control vast amounts of visual data. Now at Score, running on the Bittensor network, he's building "open source computer vision" - systems that are publicly verifiable, permissionless, and collectively owned rather than corporately controlled.

    His central argument: we face a choice between "own AI or be owned by AI." As computer vision expands from sport into healthcare, insurance, and public surveillance, the question of who controls these systems becomes existential. Max argues that citizens should have access to model weights and training data as a democratic necessity.

    We explore what decentralisation means in practice: how Bittensor's incentive mechanisms unlock talent and data centralised systems can't access, why open source doesn't sacrifice performance, and the stark reality that camera systems are making decisions about you based on models you cannot see.

    Max introduces competing visions: a "Skynet" scenario where private entities own all visual data, versus a "solar punk" future of abundant energy and AGI where open AI serves collective benefit. The difference? Transparency, accountability, and public ownership.

    The conversation tackles thorny questions: where should boundaries exist in open systems? How do you prevent misuse whilst maintaining accessibility? Max admits his team hasn't solved this - decentralised AI means thousands of contributors with different values building toward the same goal.

    Max closes with a call to action: push for open source AI models where people can verify, query, and hold systems accountable. His vision moves AI from corporate product to public utility - not because it's idealistic, but because the alternative is too dangerous.

    26 min
  • 7. AI and Security: The Arms Race We're Losing
    2025/11/17

    In this episode, Jadee Hansen, Chief Information Security Officer at Vanta, reveals why AI security isn't keeping pace with AI adoption—and why that should concern every organisation.

    Jadee brings over 20 years of cybersecurity experience across highly regulated sectors, from Target to Code42, where she co-authored the definitive book on insider risk. Now at Vanta, a leading trust management platform, she's navigating one of security's biggest challenges: AI is being deployed by both attackers and defenders faster than either side truly understands it.

    The numbers are stark: 65% of leaders say their use of agentic AI exceeds their understanding of it. Over half of organisations have faced AI-powered attacks in the past year. Yet only 48% have frameworks to manage AI autonomy and control.

    Jadee's central argument: AI security isn't about applying AI everywhere; it's about applying it wisely. The fundamental challenge? AI is non-deterministic. Unlike traditional security controls, where "if this, then that" works predictably, AI models will do what they'll do regardless of training. You cannot guarantee outcomes.

    The conversation explores the "human in the loop" framework: identifying which tasks are low-risk enough for AI automation and which require human oversight. Jadee argues organisations must resist the temptation to automate high-stakes decisions, even when the technology seems capable. The teams that succeed won't be those deploying AI most aggressively, but those thinking most carefully about practical applications with minimal risk.

    We discuss the AI arms race in security and Jadee introduces the concept of treating AI adoption as a shared infrastructure problem requiring joint risk decisions. She challenges the static approach to policy creation, arguing we need "living, breathing" policies that evolve as rapidly as AI itself—not annual updates that are obsolete before implementation.

    The episode offers particular relevance for education, where Jadee's insights expose a gap: whilst universities debate AI ethics and data usage extensively, security implications often go underdiscussed. Yet these security considerations may ultimately determine whether AI integration succeeds or fails.

    Jadee closes with practical guidance on building trust in AI systems: transparency about what AI is doing, optionality in how it's applied (because risk tolerance varies), and continuous monitoring through frameworks like ISO 42001 and NIST AI RMF. Her vision? Moving from static compliance checks to real-time control monitoring.

    22 min
  • 6. AI and Enterprise Implementation: Building Bodies for Intelligent Brains
    2025/11/02

    In this episode, Christian Lund, co-founder of Templafy, the leading AI-powered document generation platform, diagnoses why billions in AI investment aren't translating into business results.

    Christian's two decades in document automation have given him a front-row seat to AI's enterprise struggles. His central observation: organisations have successfully built the "brain" of AI systems, but they're missing the "body." Without the orchestration layer that turns intelligence into action, even sophisticated AI remains largely useless.

    The conversation explores why speed alone proved insufficient. If you must review everything anyway, you haven't saved time; you've just shifted where you spend it. Christian argues this stems from unfair expectations: we wouldn't expect a random person at the coffee counter to deliver expert work with minimal context, yet that's precisely what we've expected from AI.

    Christian introduces "confident completion" as a framework: how complete can you be with a task, and how confident can you be in that completion? Generalist agents might reliably take you 30% of the way, whilst specialised agents with proper orchestration can push significantly further whilst maintaining trust.

    The orchestration layer emerges as crucial. Christian challenges the notion that users should be "pilots" of AI systems. Instead, they're passengers who know their destination. The business itself must fly the plane, controlling which models handle which tasks, what knowledge sources ground outputs, and what guardrails maintain boundaries.

    We discuss the shift from "content is king" to "context is king." Christian explains how providing richer context through controlled knowledge sources, understanding user intent, and applying business best practices transforms AI from impressive but unusable to genuinely trustworthy at scale.

    The episode offers particular relevance for education, where Christian's insights illuminate why fragmented, individual innovation attempts often struggle. Without orchestration-level thinking, even well-intentioned AI projects risk the same pitfalls: speed without quality, impressive demos without trust, and tools that create more work than they eliminate.

    Christian closes with honest reflections on AI ethics. Whilst championing AI's potential to eliminate non-value-creating tasks and distil vast information for better decisions, he acknowledges not having all the answers on preventing misuse. His focus: demonstrate where AI genuinely serves the greater good whilst being transparent about what responsible success requires.
    24 min
  • 5. AI and Transparency: Rethinking Assessment Through Authorship
    2025/10/20

    In this episode, Ryan Bolick, adjunct assistant professor at Duke University's Pratt School of Engineering and Turing Fellow at Fuqua School of Business, discusses Byline - a writing transparency tool he founded that tracks AI and human authorship in real time.

    Ryan's journey began after his sister-in-law received a zero on her first undergraduate paper when AI detectors falsely claimed she'd used AI throughout - she hadn't used it at all. Combined with conversations revealing students flagged despite honest work and educators struggling with inaccurate detection tools, this led him to build a different solution.

    Byline doesn't guess or detect; it tracks authorship with certainty. The platform automatically shows exactly how documents come together: what users wrote, what AI suggested, what they translated, and how collaborators contributed. This addresses the finding that students avoid citing AI use, even when permitted, because of the friction involved.
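
    Span-level authorship tracking of this general kind can be sketched simply: tag each text segment with its origin as it is written, then aggregate. This is a hypothetical illustration of the idea, not Byline's actual data model; the `Span` type and category names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    author: str  # e.g. "human", "ai-suggested", "translated"

def contribution_shares(spans):
    """Fraction of characters attributable to each authorship category."""
    total = sum(len(s.text) for s in spans)
    shares = {}
    for s in spans:
        shares[s.author] = shares.get(s.author, 0) + len(s.text)
    return {author: n / total for author, n in shares.items()}

# A document assembled from tagged spans rather than guessed at afterwards.
doc = [Span("The experiment shows ", "human"),
       Span("a 12% improvement in recall, ", "ai-suggested"),
       Span("which matched our hypothesis.", "human")]
shares = contribution_shares(doc)
```

    Because origin is recorded at write time, the shares are exact; no probabilistic detector, with its false-positive risk, is involved.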

    The conversation explores unexpected outcomes: students actually use less AI when the process is visible. Ryan explains how seeing their own contribution helps students rediscover their voice rather than chasing "perfect ChatGPT-esque writing." The tool enables assessment of writing journeys rather than just final products, with version history revealing how students work with AI over time.

    We discuss collaborative writing features that show individual contributions in group work, and Ryan's interdisciplinary approach to AI policy development at Duke. The episode tackles institutional challenges: balancing professor autonomy with university policies, moving beyond "catching" students toward supportive frameworks, and staying flexible as technology evolves rapidly.

    Ryan concludes with a call to tool makers: understand your limitations and be transparent about them, considering both positive potential and adverse effects before release.

    30 min
  • 4. AI and Creativity: What Creative Machines Teach Us About Ourselves
    2025/10/05

    In this episode, Dr Maya Ackerman, computer scientist, CEO, musician, and author of the new book "Creative Machines: AI, Art and Us," explores the intersection of artificial intelligence and human creativity.

    This conversation with Maya challenges fundamental assumptions about machine creativity; she argues that the fear of AI hallucinations reveals more about human psychology than about technological limitations. She contends that human thought itself is based on hallucinations (primarily our ability to predict and imagine) and that machines' imaginative capabilities make them powerful creative partners rather than threats.

    Together we explore how AI can expand our creative search space rather than narrow it, using examples from Maya's company WaveAI's Lyric Studio, which helps users explore unlikely lyrical possibilities. Maya also discusses the importance of "humble creative machines" that elevate human creativity rather than replace it, contrasting collaborative AI with the current trend toward "all-knowing oracle" systems.

    The discussion tackles contentious questions about machine creativity, consciousness, and the anthropocentric assumptions that shape our understanding of intelligence and creativity.

    Maya advocates for recognising that creativity is "novelty plus value" rather than emotion-dependent, arguing that evolution itself demonstrates non-emotional creative processes. She emphasises the need to move beyond collective narcissism toward genuine human-AI collaboration that enhances rather than diminishes human capabilities.

    The episode concludes with Maya's vision of two possible futures: one driven by greed toward unemployment and inequality, another toward collaboration, freedom, and enhanced creativity. She calls for active engagement from all stakeholders (users, builders, investors) to shape AI development toward human flourishing.

    Essential listening for anyone interested in creativity, consciousness, and the future of human-machine collaboration.

    28 min
  • 2. AI and Practice: From Principles to Real-World Solutions
    2025/07/20

    In this episode, Lizbeth Chandler, Innovation Lead at Accenture and one of the top 50 people in AI ethics globally, shares her journey from accessible tech user to AI ethics pioneer. With a background spanning computer science, law, and sustainability, Lizbeth offers insights into translating ethical principles into technical solutions.

    Lizbeth's story begins with her personal experience of growing up with autism in a household of inventors, where her brother created adaptive technology to help her communicate. This early exposure shaped her understanding that AI ethics affects real people's lives, not abstract theory.

    The conversation explores her transition from founding Good Robot Company, which focuses on bias detection through counterfactual analysis, to working within Accenture's global operations. She discusses practical applications, from CV scanning bias detection to creating a privacy-preserving wellbeing robot that monitors routine and prevents burnout.
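
    Counterfactual analysis for bias detection can be illustrated with a small sketch: hold a CV fixed, swap a single demographic-correlated attribute, and compare the model's scores. Any gap is attributable to the swapped field. This is a generic illustration of the technique, not Good Robot Company's implementation; `biased_scorer` and the field names are hypothetical.

```python
def counterfactual_gap(score, cv, field, value_a, value_b):
    """Score difference caused by swapping a single attribute.

    `score` is any callable mapping a CV (dict) to a number; in practice it
    would wrap the screening model under audit. Everything else in the CV is
    held fixed, isolating the effect of `field`.
    """
    cv_a = {**cv, field: value_a}
    cv_b = {**cv, field: value_b}
    return score(cv_a) - score(cv_b)

# Toy scorer with a deliberate bias, purely to make the sketch runnable.
def biased_scorer(cv):
    base = cv["years_experience"] * 10
    return base + (5 if cv["name"] == "James" else 0)

cv = {"name": "James", "years_experience": 4}
gap = counterfactual_gap(biased_scorer, cv, "name", "James", "Amina")
# A nonzero gap flags the scorer as sensitive to the swapped attribute.
```

    The same probe generalises: sweep many CVs and attribute pairs, and the distribution of gaps characterises how sensitive the model is to features it should ignore.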

    We explore preparing the next generation for AI careers, the importance of interdisciplinary skills, and why scaling solutions matters more than just invention. Lizbeth advocates for global perspectives on AI development and genuine collaboration between academia and industry.

    The episode demonstrates how ethical considerations can be embedded in design, and why "technology is never neutral" but rather reflects the values of those who build it.

    25 min