Episodes

  • Four Predictions on How AI Will Transform Your World This Year
    2026/01/13

    Nine months ago, Elon Musk said 2025 would be the year chatbots became smarter than humans. Sam Altman thought it would be the year fully autonomous AIs entered the workforce. And Dario Amodei, the CEO of Anthropic, predicted that by the end of the year, AI would be writing 90 per cent of all software code.

    We’re two weeks into the new year, and none of those things have happened. So, full disclosure: I have no idea if we’re going to reach artificial general intelligence or see the rise of humanoid robots this year. If the people at the centre of the industry can’t figure it out, I doubt I can.

    But I do have some ideas about how AI could reshape our world over the next 12 months. I think we’re going to see a new political movement pushing back against AI adoption and leaning into our collective humanity. Democratic governments will defy an increasingly protectionist America and start taking digital regulation seriously again. And we’ll start establishing cultural norms about AI use – like whether you really need to respond to that AI-generated e-mail your colleague just sent.

    On this episode, I turn the mics around and invite my longtime producer, Mitchell Stuart, to ask me about what’s actually in store for the year ahead.

    Mentioned:

    Trust, attitudes and use of artificial intelligence (2025), KPMG

    Human-centric AI: Perspectives on trust and the future of AI (2025), Telus

    Could an Alternative AI Save Us from a Bubble? (Gary Marcus), by Machines Like Us

    GPT-5 System Card, OpenAI

    Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support, by Mahmud Omar et al. (Nature)


    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    1 hour
  • The Man Behind the World’s Most Coveted Microchip
    2025/12/30

    Jensen Huang is something of an enigma. The NVIDIA CEO doesn’t have social media and, until recently, rarely gave interviews. Yet he may be the most important person in AI.

    Under his leadership, NVIDIA has become a goliath. Somewhere between 80 and 90 per cent of AI tools run on NVIDIA hardware, making it the world’s most valuable company. But unlike his contemporaries, Huang has been remarkably quiet about the technology – and the world – he’s building.

    In his new book, The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, journalist Stephen Witt pulls back the curtain. And what he finds is, at times, shocking: Huang believes there is zero risk in developing superintelligence.

    So who is Jensen Huang? And should we worry that the most powerful person in AI is racing forward at breakneck speed, blind to the potential consequences?

    Mentioned:

    The Thinking Machine: Jensen Huang, NVIDIA, and the World’s Most Coveted Microchip, by Stephen Witt

    How Jensen Huang’s Nvidia Is Powering the A.I. Revolution, by Stephen Witt (The New Yorker)

    The A.I. Prompt That Could End the World, by Stephen Witt (New York Times)

    Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

    Media sourced from the BBC.


    53 minutes
  • Wikipedia Won Our Trust. Can We Use That Model Everywhere?
    2025/12/16

    It was an idea that defied logic: an online encyclopedia that anyone could edit.

    You didn’t need to have a PhD or even use your real name – you just needed an internet connection. Against all odds, it worked. Today, billions of people use Wikipedia every month, and studies show it’s about as accurate as a traditional encyclopedia.

    But how? How did Wikipedia not just turn into yet another online cesspool, filled with falsehoods, partisanship and AI slop? Wikipedia founder Jimmy Wales just wrote a book called The Seven Rules of Trust, where he explains how he was able to build that rarest of things: a trustworthy source of information on the internet. In an era when trust in institutions is collapsing, Wales thinks he’s found a blueprint – not just for the web, but for everything else too.

    Mentioned:

    The Seven Rules of Trust by Jimmy Wales and Dan Gardner

    A False Wikipedia ‘Biography’ by John Seigenthaler (USA Today)

    Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

    Photo Illustration: The Globe and Mail/Brendan McDermid/Reuters


    44 minutes
  • Could an Alternative AI Save Us From a Bubble?
    2025/12/02

    Over the last couple of years, massive AI investment has largely kept the stock market afloat. Case in point: the so-called Magnificent 7 – tech companies like NVIDIA, Meta, and Microsoft – now account for more than a third of the S&P 500’s value. (Which means they likely represent a significant share of your investment portfolio or pension fund, too.)

    There’s little doubt we’re living through an AI economy. But many economists worry there may be trouble ahead. They see companies like OpenAI – valued at half a trillion dollars while losing billions every month – and fear the AI sector looks a lot like a bubble. Because right now, venture capitalists aren’t investing in sound business plans. They’re betting that one day, one of these companies will build artificial general intelligence.

    Gary Marcus is skeptical. He’s a professor emeritus at NYU, a bestselling author, and the founder of two AI companies – one of which was acquired by Uber. For more than two decades, he’s been arguing that large language models (LLMs) – the technology underpinning ChatGPT, Claude, and Gemini – just aren’t that good.

    Marcus believes that if we’re going to build artificial general intelligence, we need to ditch LLMs and go back to the drawing board. (He thinks something called “neurosymbolic AI” could be the way forward.)

    But if Marcus is right – if AI is a bubble and it’s about to pop – what happens to the economy then?

    Mentioned:

    The GenAI Divide: State of AI in Business 2025, by Project Nanda (MIT)

    MIT study finds AI can already replace 11.7% of U.S. workforce, by MacKenzie Sigalos (CNBC)

    The Algebraic Mind, by Gary Marcus

    We found what you’re asking ChatGPT about health. A doctor scored its answers, by Geoffrey A. Fowler (The Washington Post)


    53 minutes
  • Can AI Lead Us to the Good Life?
    2025/11/18

    In Rutger Bregman’s first book, Utopia for Realists, the historian describes a rosy vision of the future – one with 15-hour work weeks, universal basic income and massive wealth redistribution.

    It’s a vision that, in the age of artificial intelligence, now seems increasingly possible.

    But utopia is far from guaranteed. Many experts predict that AI will also lead to mass job loss, the development of new bioweapons and, potentially, the extinction of our species.

    So if you’re building a technology that could either save the world or destroy it – is that a moral pursuit?

    These kinds of thorny questions are at the heart of Bregman’s latest book, Moral Ambition. In a sweeping conversation that takes us from the invention of the birth control pill to the British Abolitionist movement, Bregman and I discuss what a good life looks like (spoiler: he thinks the death of work might not be such a bad thing) – and whether AI can help get us there.


    Mentioned:

    Moral Ambition, by Rutger Bregman

    Utopia for Realists, by Rutger Bregman

    If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares

    Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

    Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.


    51 minutes
  • How to Survive the “Broligarchy”
    2025/11/04

    At Donald Trump’s inauguration earlier this year, the returning president made a striking break from tradition. The seats closest to the president – typically reserved for family – went instead to the most powerful tech CEOs in the world: Elon Musk, Mark Zuckerberg, Jeff Bezos and Sundar Pichai. Between them, these men run some of the most profitable companies in history. And over the past two decades, they’ve used that wealth to reshape our public sphere.

    But this felt different. This wasn’t discreet backdoor lobbying or a furtive effort to curry favour with an incoming administration. These were some of the most influential men in the world quite literally aligning themselves with the world’s most powerful politician – and his increasingly illiberal ideology.

    Carole Cadwalladr has been tracking the collision of technology and politics for years. She’s the investigative journalist who broke the Cambridge Analytica story, exposing how Facebook data may have been used to manipulate elections. Now, she’s arguing that what we’re witnessing goes beyond monopoly power or even traditional oligarchy. She calls it techno-authoritarianism – a fusion of Trump’s authoritarian political project with the technological might of Silicon Valley.

    So I wanted to have her on to make the case for why she believes Big Tech isn’t just complicit in authoritarianism, but is actively enabling it.

    Mentioned:

    The First Great Disruption 2016-2024, by Carole Cadwalladr

    Trump Taps Palantir to Compile Data on Americans, by Sheera Frenkel and Aaron Krolik (New York Times)

    This is What a Digital Coup Looks Like, by Carole Cadwalladr (TED)

    The Nerve News

    Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

    Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.


    50 minutes
  • AI Music is Everywhere. Is it Legal?
    2025/10/21

    AI art is everywhere now. According to the music streaming platform Deezer, 18 per cent of the songs being uploaded to the site are AI-generated. Some of this stuff is genuinely cool and original – the kind of work that makes you rethink what art is, or what it could become.

    But there are also songs that sound like Drake, cartoons that look like The Simpsons, and stories that read like Game of Thrones. In other words, AI-generated work that’s clearly riffing on – or outright mimicking – other people’s art. Art that, in most of the world, is protected by copyright law. Which raises an obvious question: how is any of this legal?

    The AI companies claim they’re allowed to train their models on this work without paying for it, thanks to the “fair use” exception in American copyright law. But Ed Newton-Rex has a different view: he says it’s theft.

    Newton-Rex is a classical music composer who spent the better part of a decade building an AI music generator for a company called Stability AI. But when he realized the company – and most of the AI industry – didn’t intend to license the work they were training their models on, he quit. He has been on a mission to get the industry to fairly compensate creators ever since. I invited him on the show to explain why he believes this is theft at an industrial scale – and what it means for the human experience when most of our art isn’t made by humans anymore, but by machines.

    Mentioned:

    Copyright and Artificial Intelligence: Generative AI Training, by the United States Copyright Office

    A.I. Is Coming for Culture, by Joshua Rothman (The New Yorker)

    Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Host direction by Athena Karkanis. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail. Media sourced from BBC News.

    Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.


    1 hour 3 minutes
  • Geoffrey Hinton vs. The End of the World
    2025/10/07

    The story of how Geoffrey Hinton became “the godfather of AI” has reached mythic status in the tech world.

    While he was at the University of Toronto, Hinton pioneered the neural network research that would become the backbone of modern AI. (One of his students, Ilya Sutskever, went on to be one of OpenAI’s most influential scientific minds.) In 2013, Hinton left the academy and went to work for Google, eventually winning both a Turing Award and a Nobel Prize.

    I think it’s fair to say that artificial intelligence as we know it may not exist without Geoffrey Hinton.

    But Hinton may be even more famous for what he did next. In 2023, he left Google and began a campaign to convince governments, corporations and citizens that his life’s work – this thing he helped build – might lead to our collective extinction. And that moment may be closer than we think, because Hinton believes AI may already be conscious.

    But even though his warnings are getting more dire by the day, the AI industry is only getting bigger, and most governments, including Canada’s, seem reluctant to get in the way.

    So I wanted to ask Hinton: If we keep going down this path, what will become of us?

    Mentioned:

    If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, by Eliezer Yudkowsky and Nate Soares

    Agentic Misalignment: How LLMs could be insider threats, by Anthropic

    Machines Like Us is produced by Mitchell Stuart. Our theme song is by Chris Kelly. Video editing by Emily Graves. Our executive producer is James Milward. Special thanks to Angela Pacienza and the team at The Globe and Mail.

    Support for Machines Like Us is provided by CIFAR and the Max Bell School of Public Policy at McGill University.


    1 hour 9 minutes