
9. AI and Bias: How AI Shapes What We Buy


About this content

As you search for Christmas gifts this season, have you asked ChatGPT or Gemini for recommendations? Katarina Mpofu and Jasmine Rienecker from Stupid Human join the podcast to discuss their groundbreaking research examining how AI systems influence public opinion and decision-making. Conducted in collaboration with the University of Oxford, their study analysed over 8,000 AI-generated responses to uncover systematic biases in how AI systems like ChatGPT and Gemini recommend brands, institutions, and governments.

Their findings reveal that AI assistants aren't neutral—they have structured and persistent preferences that favour specific entities regardless of how questions are asked or who's asking. ChatGPT consistently recommended Nike for running shoes in over 90% of queries, whilst both models claimed the US has the best national healthcare system. These preferences extend beyond consumer products into government policy and educational institutions, raising critical questions about fairness, neutrality, and AI's role in shaping global narratives.

We explore how AI assistants are more persuasive than human debaters, why users trust these systems as sources of truth without questioning their recommendations, and how geographic and cultural biases develop through training data, semantic associations, and user feedback amplification. Katarina and Jasmine explain why language matters - asking in English produces US-centric biases regardless of where you're located - and discuss the implications for smaller brands, niche markets, and diverse user groups systematically disadvantaged by current AI design.

The conversation examines whether companies understand they're building these preferences into systems, the challenge of cross-domain bias contamination, and the urgent need for frameworks to identify and benchmark AI biases beyond protected characteristics like race and gender.

AI Ethics Now

Exploring the ethical dilemmas of AI in Higher Education and beyond.

A University of Warwick IATL Podcast

This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share these discussions with a wider audience, offering a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence.

Join us each fortnight for new critical conversations on AI Ethics with local, national, and international experts.

We will discuss:

  • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
  • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
  • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.

If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.
