
43. How Do You Assure AI For Bias and Accessibility in the NHS? With Adam Byfield
About this episode
Adam Byfield is a Principal Technical Assurance Specialist at NHS England. After his previous appearance on the podcast, in which he discussed ethical assurance for AI applications in healthcare, we were keen to have him back to dive into some more specific issues. We chose bias and accessibility, two related issues that are clearly central for anyone concerned with AI, including in healthcare applications. We talked about different forms of bias, how bias can affect accessibility, and what forms of bias, if any, might be acceptable.
Ethics Untangled is produced by IDEA, The Ethics Centre at the University of Leeds.
Bluesky: @ethicsuntangled.bsky.social
Facebook: https://www.facebook.com/ideacetl
LinkedIn: https://www.linkedin.com/company/idea-ethics-centre/