Evaluating AI Systems: Metrics, Methods, and Measurement Gaps

Overview

A deep dive into the metrics and methodologies essential for robust AI evaluations. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.

The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.

Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)
Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO


Hosted on Ausha. See ausha.co/privacy-policy for more information.
