Endeavoring for Safe & Ethical AI - with JP Gonzales
About this content
JP Gonzales of the Berkeley Existential Risk Initiative (BERI) joined Tamara to explain some of the grave risks associated with the advancement of Artificial General Intelligence (AGI), and the global movement aimed at mitigating those risks. JP is assistant to Stuart Russell, who literally wrote the book on AI, having co-authored the field's authoritative textbook.
JP generously gathered the following resources for our listeners on Russell, the organizations referenced in the podcast, and deeper dives on the topic.
If you have any questions regarding this or other topics, please email vo@tamaralilly.com.
Stuart Russell's Wikipedia Page
Center for Human-Compatible AI (CHAI) – Founded by Stuart Russell and colleagues in 2016
- Mission: "CHAI's goal is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems."
- The current research-and-development approach to AI focuses on optimizing for human objectives that we specify via algorithms, and these approaches do not adequately capture the complexity, variability, diversity, and uncertainty inherent in human values. Current AI systems are also largely black-box designs (i.e., humans provide input and can observe the output, but the inner workings of the system are opaque and difficult to interpret; current LLMs like GPT-4o have on the order of 1–2 trillion parameters).
- Website
- Wiki
Stuart's popular-press book on AI safety, ethics, and the problem of ensuring that humanity maintains control over powerful frontier AI systems
Stuart Russell on YouTube – He has many recorded talks, given around the world at various organizations, covering all aspects of AI and society. It's well worth hearing him speak in his own words; he's extremely articulate and a world-class educator.
Positive AI Economic Futures (World Economic Forum) – This fascinating study was conducted back in 2021 with Stuart and other AI experts, futurists, and policy experts to ask, in essence: what if we achieve AGI and advanced AI systems that can do most tasks better than humans? What are the implications for our economy, the future of work, and social relations?
International Association for Safe and Ethical AI (IASEAI) – Pronounced "eye-see-eye", this new organization, spearheaded by Stuart Russell along with an impressive list of Nobel laureates, AI technical experts, and policy experts, has the mission of ensuring that AI development and deployment meets safety guarantees and is ultimately beneficial to humanity.
Future of Life Institute – AI Focus Area – This organization has been deeply invested in understanding and mitigating large-scale or "existential" risks from AI, that is, negative consequences from AI that could significantly shape the future of humanity and our ability to survive.
Slaughterbots (2017) – An arms-control advocacy film on lethal autonomous weapons systems (LAWS), and particularly swarm drones, calling for stricter regulation and a ban on the use of AI systems to target and execute people. Funded by the Future of Life Institute; the vision comes from Prof. Russell.
Slaughterbots Part 2: If human, kill() – An arms-control advocacy film on lethal autonomous weapons systems. Prof. Russell contributed content and consulting for this video.