Algorithmic Morality, Synthetic Empathy, and Google’s AI Culture Shift — Episode 7
About this content
This episode explores three stories about whether machines can understand human values and emotions, and about the shifting culture inside one of the world’s most influential tech companies. We look at the debate around algorithmic morality, where AI systems are asked to weigh consequences, fairness, and intent. We examine the rise of synthetic empathy, as AI models learn to simulate concern and emotional understanding without ever feeling it. And we turn to Google’s internal AI culture shift, where rapid advances are reshaping strategy, leadership, and the company’s future direction.
Read the full articles here:
Algorithmic Morality → https://www.liveaiwire.com/2025/07/algorithmic-karma-can-ai-understand-moral-consequence.html
Synthetic Empathy → https://www.liveaiwire.com/2025/08/synthetic-empathy-ai-simulated-concern.html
Google Culture Shift → https://www.liveaiwire.com/2025/08/google-ai-culture-shift.html
If you’d like early access to every episode, and want to support the show for about the price of a coffee, you can join us on Patreon. Your support genuinely helps us keep producing this series.
If you’re part of a newsroom, blog, media outlet, or publisher and you’re interested in featuring our coverage, you can contact us anytime through the website.