
When ChatGPT Broke an Entire Field
About this content
The study of natural language processing, or NLP, dates back to the 1940s. It gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. In less than five years, large language models broke NLP and made it anew.
In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?
Recently, John Pavlus interviewed 19 current and former NLP researchers to tell that story. In this episode, Pavlus speaks with host and Quanta editor in chief Samir Patel about this oral history of “When ChatGPT Broke an Entire Field.”
Each week on The Quanta Podcast, Quanta Magazine editor in chief Samir Patel speaks with the people behind the award-winning publication to navigate some of the most important and mind-expanding questions in science and math.
Audio coda from LingoJam