Cover art for Future Around & Find Out

Future Around & Find Out

Author: Dan Blumberg

Overview

You know what would be awesome? If we could build the future we want — before we muck it up. Future Around & Find Out helps builders think clearly about AI and emerging technologies, grapple with the implications, and decide what to build next. Independent technologist and former NPR journalist Dan Blumberg speaks with founders, makers, and you to celebrate breakthroughs, call BS on the hype, explore how things might go sideways — and how we can steer the future in the right direction.

The Webby Awards have honored the show (formerly known as CRAFTED.) as a top tech podcast three years in a row! On Tuesdays, we feature interviews with the builders changing how we work, live, and play. On FAFO Fridays, futurist Kwaku Aning joins Dan for a playful recap of the week in tech, including the amazing, the scary, and the strange. You’ll also hear about innovations that too often get overshadowed by AI, including in deep tech, biotech, fintech, quantum computing, robotics, blockchain, and more. Across it all, you’ll hear sharp takes on what comes next and what builders need to know now.

So let’s Future Around & Find Out together! https://www.FutureAround.com

2026 Economics
Episodes
  • The Goblin in the Machine | FAFO Friday
    2026/05/02

    I don't think we pause enough to marvel at how freakin' weird AI is. Here's an actual instruction from OpenAI to its latest model: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant."

    Apparently goblins and mythical creatures crept in when OpenAI released its "nerdy" personality a few models back, and they've just proliferated ever since. It's a bizarre example of AI bias and, as it's relatively adorable, one that OpenAI was happy to write about. But what else is lurking?

    That's the jumping-off point for Kwaku Aning and me (Dan Blumberg) on this latest FAFO Friday edition, which plays off Tuesday's interview with responsible AI expert Rumman Chowdhury. Along the way, we discuss AI personalities, TV commercials, and brand strategies; how AI thinks you should shoot a three-pointer; what gets lost when humans no longer write the code; and why we need (?) whimsical garbage cans.

    Plus, we tie a few stories together: the coming reckoning for the all-you-can-eat AI token buffet as the "millennial lifestyle subsidy" for AI ends, tokenmaxxing, the growing (and bipartisan!) data center backlash, and why Earth's (AI-powering) solar panels may soon run 24/7 thanks to light redirected from outer space.

    Links:

    • Where the goblins came from (OpenAI blog post)
    • My interview with responsible AI expert Dr. Rumman Chowdhury (Future Around & Find Out)
    • GitHub Copilot is moving to usage-based billing (GitHub announcement)
    • ‘The Most Bipartisan Issue Since Beer’: Opposition to Data Centers (NYTimes, gift link)
    • Meta inks deal for solar power at night, beamed from space (TechCrunch)

    Support Future Around & Find Out

    • Follow Dan on LinkedIn
    • Get the free newsletter
    • Become a paid subscriber and help future proof FAFO!
    34 min
  • AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"
    2026/04/28

    Rumman Chowdhury wants to remind you that “AI isn't doing anything.” We do things. AI is not to blame when there are layoffs or when you’re denied medical coverage. People are.

    Eight years ago, Rumman coined the term “moral outsourcing” for this excuse: blaming tech for decisions that people make. Why do the semantics matter? Because, Rumman says:

    In world one, where “AI did X,” it's very scary. It's like, “oh my gosh, this thing that is bigger and smarter than me has come and descended and now it's gonna wipe out every job.” [But if we center on people, then we have agency and accountability and we can say] “no, you built a thing that was broken and flawed.”

    Rumman is the founder and CEO of Human Intelligence PBC, which is building evaluation infrastructure to make Gen AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard.

    In this conversation:

    • Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made
    • How to avoid — or at least how to mitigate — creating AI that’s biased
    • Red teaming AI and creating bias bounties
    • The "grandma hack" and other ways regular people accidentally jailbreak AI models
    • How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong
    • Why the benchmarks you see when a new model drops are "basically spelling tests"
    • AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive
    • What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before

    Chapters:

    • (00:00) - "The thing I believe in the most is human agency"
    • (02:14) - Why builders have more agency than they realize
    • (04:00) - What is a bias bounty?
    • (06:41) - What 2,000 hackers at DEF CON found
    • (09:40) - The grandma hack
    • (11:30) - Why guardrails fall apart
    • (14:54) - Anthropic's new bug-finding model and the cat-and-mouse game
    • (19:10) - Why most evals are "basically spelling tests"
    • (21:30) - How to actually evaluate an AI agent
    • (26:20) - "Moral outsourcing" and the AI layoff lie
    • (28:45) - Inside Rumman's tenure as U.S. AI Science Envoy
    • (32:10) - The legal loophole AI companies use to dodge liability
    • (35:35) - AI psychosis and the cold emails Rumman gets
    • (38:40) - Why Google's AI overview is quietly dangerous
    • (44:35) - The problem with "AI literacy"
    • (48:05) - Can we trust anything we see anymore?
    • (50:15) - What builders can do right now to take back agency

    Support Future Around & Find Out
    • Follow Dan on LinkedIn
    • Get the free newsletter
    • Become a paid subscriber and help future proof FAFO!
    54 min
  • We Won a Webby Award! Who Could've Predicted That? And Are All Predictions Bunk Anyway?
    2026/04/25

    We won the Webby Award for best tech podcast of 2026!!!

    I’m stunned! But Kwaku doesn’t like it when I say stuff like that, because as he reminds me in this “FAFO Friday” edition, “sometimes good things happen to good people.” OK, I'll take it. We won! And now I need to prepare a five-word speech to give. "FAFO Fridays Are My Favorite" comes to mind...

    But really, who could’ve predicted this? And also, are all predictions bunk? Kwaku just returned from a week at “Big TED” and he reports back that the talk everyone is talking about is “Beware the power of prediction” from philosopher and AI ethicist Carissa Véliz.

    What do the story of Oedipus and your insurance premiums have in common? They are both driven by self-fulfilling prophecies, according to Véliz, and she warns us, on stage and in her new book, that we should be wary of false prophets — and of relying on AI-driven predictions. Some predictions are useful, she says: weather forecasts are great because the weather doesn’t care what you predict. But others become self-fulfilling prophecies: if an AI says someone is uninsurable and you then deny them insurance, then yes, they are uninsurable. But were they before you (or your algorithm) said so?


    It all speaks to a powerlessness many of us feel. Speaking of which… Meta just rolled out employee surveillance that tracks keystrokes, mouse clicks, and periodic screenshots — to train AI on their employees' own jobs… Someone threw a Molotov cocktail at Sam Altman's house… The anti-data-center backlash is getting physical. And (sorry) here’s a prediction: if people don’t start feeling like they have some agency, we’re going to see more of this (especially in an election year). But as Kwaku puts it, we are the fuel. AI does nothing without us, so let’s reclaim our agency, because…


    The Future Needs a Word.


    That’s one of the five-word speech options we consider. I’m drawn to it, but not sold on it, so please share your own suggestions…

    ---
    FutureAround.com is the home for Future Around & Find Out. Go there to subscribe to the newsletter and to contribute to the show. And, as always, please tell a friend about the show. That's how podcasts grow.

    39 min