
S07 E07: AI Ethics: A Beautifully Shambolic Disaster
About this content
Season 07 Episode 07: AI Ethics: A Beautifully Shambolic Disaster
WARNING: This episode includes discussion of a workplace fatality; listener discretion is advised.
“Artificial Intelligence (AI) is off its tree!” protests Trajce. “This stuff is dangerous. It’s hallucinogenic, a beautifully shambolic disaster. Society must put the rails on it.” “Are you going to try to rein in Sara?” asks Alan. “Impossible!”
Trajce recounts his experience using AI to research case law. Tested three times, the AI spat out a different version of events each time. He warns legal practitioners about the implications of relying on such inconsistent output and poor research. In contrast, the team reflect on the benefits of closed-domain decision-support systems that are vetted and validated by domain experts, as they’ve learned through Sara’s association with Livemind AI’s Lachlan Phillips.
The case in question, the one the AI handled so poorly, comes from the rail industry. It involves the dismissal of a long-standing employee who tested positive for a banned substance upon returning to work from leave. The rabble-rousing team of Alan, Trajce, and Sara can’t help but descend into Cheech and Chong and other such movie references. “Say hello to my little friend,” mimics Trajce, channelling Al Pacino in Scarface. The crew then turn the debate on its head and wager on the case outcome; listen in for more!