Episodes

  • Shah and Yudkowsky on alignment failures
    2025/09/10

    This is the final discussion log in the Late 2021 MIRI Conversations sequence, featuring Rohin Shah and Eliezer Yudkowsky, with additional comments from Rob Bensinger, Nate Soares, Richard Ngo, and Jaan Tallinn.

    The discussion begins with summaries and comments on Richard and Eliezer's debate. Rohin's summary has since been revised and published in the Alignment Newsletter.

    This was originally posted on 28th Feb 2022.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/tcCxPLBrEXdxN5HCQ

    2 hours 46 minutes
  • Christiano and Yudkowsky on AI predictions and human intelligence
    2025/09/10

    This is a transcript of a conversation between Paul Christiano and Eliezer Yudkowsky, with comments by Rohin Shah, Beth Barnes, Richard Ngo, and Holden Karnofsky, continuing the Late 2021 MIRI Conversations.

    This was originally posted on 23rd Feb 2022.

    https://www.lesswrong.com/posts/NbGmfxbaABPsspib7/christiano-and-yudkowsky-on-ai-predictions-and-human

    1 hour 13 minutes
  • Ngo and Yudkowsky on scientific reasoning and pivotal acts
    2025/09/10

    This is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the Late 2021 MIRI Conversations sequence, following Ngo's view on alignment difficulty.

    This was originally posted on 21st Feb 2022.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/cCrpbZ4qTCEYXbzje

    1 hour 1 minute
  • Ngo's view on alignment difficulty
    2025/09/10

    This post features a write-up by Richard Ngo on his views, with inline comments.

    This was originally posted on 14th Dec 2021.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/gf9hhmSvpZfyfS34B

    35 minutes
  • Conversation on technology forecasting and gradualism
    2025/09/10

    This post is a transcript of a multi-day discussion between Paul Christiano, Richard Ngo, Eliezer Yudkowsky, Rob Bensinger, Holden Karnofsky, Rohin Shah, Carl Shulman, Nate Soares, and Jaan Tallinn, following up on parts 1, 2, 3, and 4 of the Yudkowsky/Christiano debate.

    This was originally posted on 9th Dec 2021.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/nPauymrHwpoNr6ipx

    1 hour
  • More Christiano, Cotra, and Yudkowsky on AI progress
    2025/09/10

    This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky (with some comments from Rob Bensinger, Richard Ngo, and Carl Shulman), continuing from parts 1, 2, and 3.

    This was originally posted on 6th Dec 2021.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/fS7Zdj2e2xMqE6qja

    1 hour 10 minutes
  • Shulman and Yudkowsky on AI progress
    2025/09/10

    This post is a transcript of a discussion between Carl Shulman and Eliezer Yudkowsky, following up on a conversation with Paul Christiano and Ajeya Cotra.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/sCCdCLPN9E3YvdZhj

    37 minutes
  • Christiano, Cotra, and Yudkowsky on AI progress
    2025/09/10

    This post is a transcript of a discussion between Paul Christiano, Ajeya Cotra, and Eliezer Yudkowsky on AGI forecasting, following up on Paul and Eliezer's "Takeoff Speeds" discussion.

    This was originally posted on 25th Nov 2021.

    https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7MCqRnZzvszsxgtJi

    2 hours 8 minutes