
Roko’s Basilisk Boogie
About this content
This episode discusses "Roko's Basilisk," a hypothetical thought experiment posted on the Less Wrong forum, which explores the "Altruist's Burden" in the context of existential risks. The core idea is that a future benevolent artificial intelligence (AI), specifically one implementing Coherent Extrapolated Volition (CEV), might retroactively punish individuals who were aware of existential risks but did not dedicate all of their disposable income to mitigating them. This punishment would serve as an acausal incentive to maximise efforts towards ensuring the AI's creation and a "positive singularity." The post also introduces the "quantum billionaire trick," a speculative method by which an individual could accrue vast wealth through quantum-based gambling, single-handedly fund AI development, and secure "rescue simulations" or "acausal trade" with potential unfriendly AIs. The comments section reveals strong reactions: some users criticise the idea as dangerous, fear-inducing "blackmail," while others engage with the philosophical and practical implications of such an acausal threat.
See omnystudio.com/listener for privacy information.