2021 MIRI Conversations

By: Peter Barnett

About this content

These are AI-generated podcasts of the 2021 MIRI Conversations: https://www.lesswrong.com/s/n945eovrA3oDueqtq This podcast is a personal project: I like listening to audio, and there weren't good audio versions of the conversations. Please remember that these conversations are from 2021.

Peter Barnett
Episodes
  • Shah and Yudkowsky on alignment failures
    2025/09/10

    This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn.

    The discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published in the Alignment Newsletter.

    This was originally posted on 28th Feb 2022.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/tcCxPLBrEXdxN5HCQ

    2 hr 46 min
  • Christiano and Yudkowsky on AI predictions and human intelligence
    2025/09/10

    This is a transcript of a conversation between Paul Christiano and Eliezer Yudkowsky, with comments by Rohin Shah, Beth Barnes, Richard Ngo, and Holden Karnofsky, continuing the Late 2021 MIRI Conversations.

    This was originally posted on 23rd Feb 2022.

    https://www.lesswrong.com/posts/NbGmfxbaABPsspib7/christiano-and-yudkowsky-on-ai-predictions-and-human

    1 hr 13 min
  • Ngo and Yudkowsky on scientific reasoning and pivotal acts
    2025/09/10

    This is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the Late 2021 MIRI Conversations sequence, following "Ngo's view on alignment difficulty".

    This was originally posted on 21st Feb 2022.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/cCrpbZ4qTCEYXbzje

    1 hr 1 min