Drawing Red Lines w/ Su Cizem
Summary
Technology has been moving faster than policy for some time now, and the advent of AI isn't changing that. So what can we do to maintain safety despite the uncertainty? Su Cizem has spent the last few years trying to answer that question. As an analyst at the Future Society, she works on global AI governance, specifically on building international consensus around AI red lines: the thresholds we collectively agree must never be crossed. In this conversation, Su walks through her path from philosophy to policy, the evolution of the global AI safety summit series, why voluntary commitments from AI labs aren't enough, and what it would actually take to make international cooperation on AI safety real.
Chapters
- (00:00) - Introduction
- (03:23) - From Philosophy to Policy
- (22:25) - What AI Governance Actually Means
- (26:49) - The Summit Series
- (43:01) - Drawing The Red Lines
- (01:10:51) - Can These Companies Govern Themselves?
- (01:24:01) - Breaking Into The Field
- (01:27:51) - Closing Thoughts & Outro
Critical Links
Below are the most important links for this episode. For more, visit the episode page on Kairos.fm.
- Su's LinkedIn
- Global Call for AI Red Lines
- The Future Society report - “Facing the Stakes of AI Together”: 2025 Athens Roundtable Report
- Politico article - How the global effort to keep AI safe went off the rails
- TechPolicy.Press article - A Timeline of the Anthropic-Pentagon Dispute
- The Guardian article - AI got the blame for the Iran school bombing. The truth is far more worrying
- Google and OpenAI Employee open letter - We Will Not Be Divided
- The Register article - Altman said no to military AI abuses – then signed Pentagon deal anyway
- SaferAI report - Evaluating AI Providers’ Frontier AI Safety Frameworks