Red teaming is rapidly becoming a critical component of AI oversight. In this episode, Rumman Chowdhury explains how structured adversarial testing can uncover system vulnerabilities, model failures, and misuse pathways.
The discussion focuses on practical red-teaming approaches that supervisory authorities can adopt, even with limited resources.
Speaker: Rumman Chowdhury (Human Intelligence)
Interviewer: Mirela Kmetic-Marceau, Project Consultant, Ethics of AI Unit, UNESCO