
Episode 98: Building Trust in AI Through Model Interpretability


Overview

When your machine learning model makes a decision that affects someone's medical treatment, financial security, or legal rights, "the algorithm said so" isn't good enough. Stakeholders need to understand why models make the decisions they do, and in high-stakes environments, model interpretability becomes the difference between AI adoption and AI rejection.

In this episode, Serg Masis joins Dr. Genevieve Hayes to share practical strategies for building interpretable machine learning models that earn stakeholder trust and accelerate AI adoption within your organisation.

You'll learn:

  1. The crucial distinction between interpretable and explainable models [07:06]
  2. Why feature engineering matters more than algorithm choice [14:56]
  3. How to use models to improve your data quality [17:59]
  4. The underrated technique that builds stakeholder trust [21:20]

Guest Bio

Serg Masis is the Principal AI Scientist at Syngenta, a leading agricultural company with a mission to improve global food security. He is also the author of Interpretable Machine Learning with Python and co-author of the upcoming DIY AI and Building Responsible AI with Python.

Links

  • Serg's Website
  • Connect with Serg on LinkedIn
  • Connect with Genevieve on LinkedIn
  • Be among the first to hear about the release of each new podcast episode by signing up HERE