The argument for AI regulation after Tumbler Ridge
Summary
Months before the mass shooting in Tumbler Ridge, B.C., earlier this month, the shooter was banned from OpenAI, the company behind ChatGPT, for violating its usage policy. The Wall Street Journal, which first reported the ban, said the shooter's interactions with ChatGPT described scenarios involving gun violence. The revelation has intensified calls for the Canadian government to regulate AI companies and their products – but doing so comes with challenges.
Taylor Owen is an associate professor at McGill University and founding director of McGill's Centre for Media, Technology and Democracy. He's also host of The Globe and Mail podcast Machines Like Us. He explains what responsibility AI companies have to report concerning or violent content, and what the government is up against in trying to regulate AI.
Questions? Comments? Ideas? Email us at thedecibel@globeandmail.com
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.