5. AI and Transparency: Rethinking Assessment Through Authorship
About this episode
In this episode, Ryan Bolick, adjunct assistant professor at Duke University's Pratt School of Engineering and Turing Fellow at Fuqua School of Business, discusses Byline - a writing transparency tool he founded that tracks AI and human authorship in real time.
Ryan's journey began after his sister-in-law received a zero on her first undergraduate paper when AI detectors falsely claimed she'd used AI throughout - she hadn't used it at all. Combined with conversations revealing that students were being flagged despite honest work and that educators were struggling with inaccurate detection tools, this experience led him to build a different solution.
Byline doesn't guess or detect - it tracks authorship with certainty. The platform automatically shows exactly how documents come together: what users wrote, what AI suggested, what they translated, and how collaborators contributed. This addresses the finding that students avoid citing AI use, even when it is permitted, because of the friction involved.
The conversation explores unexpected outcomes: students actually use less AI when the process is visible. Ryan explains how seeing their own contribution helps students rediscover their voice rather than chasing "perfect ChatGPT-esque writing." The tool enables assessment of writing journeys rather than just final products, with version history revealing how students work with AI over time.
We discuss collaborative writing features that show individual contributions in group work, and Ryan's interdisciplinary approach to AI policy development at Duke. The episode tackles institutional challenges: balancing professor autonomy with university policies, moving beyond "catching" students toward supportive frameworks, and staying flexible as technology evolves rapidly.
Ryan concludes with a call to tool makers: understand your limitations and be transparent about them, considering both positive potential and adverse effects before release.
AI Ethics Now
Exploring the ethical dilemmas of AI in Higher Education and beyond.
A University of Warwick IATL Podcast
This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'
This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.
Join us each fortnight for new critical conversations on AI ethics with local, national, and international experts.
We will discuss:
- Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
- Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
- The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.
If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.