Episodes

  • Episode 7 – Class Time: Gemini, Ford, and Building Better Tools
    2025/08/23

    🎙 Episode 7 – Class Time: Gemini, Ford, and Building Better Tools


    This week on The Tinker Table, we’re blending history, tech news, and AI tools in another quick “class time” session.


    In this episode:

    Innovator Spotlight: Henry Ford — From the Model T to the moving assembly line, how Ford reshaped production, labor, and access to technology.

    Stay in the Know:

    🔧 Robots rebuild wildfire-hit homes in California

    🤖 China’s World Humanoid Robot Games — robots sprint, fight, and flop, showing both promise and limits of embodied AI

    🌊 Seashell-inspired recycling at Georgia Tech — turning plastics into stronger, more consistent composites

    Tools for the Table: Google Gemini — Google’s integrated AI assistant in Gmail, Docs, Sheets, and Slides. We ask: when should AI stay in the sandbox, and when should it help build the sandbox?


    Reflection: What everyday tool in your life could benefit from the right kind of AI support?


    Sources & References:

    Henry Ford – Wikipedia

    ArchDaily. (2025). AI-powered robotics support rebuilding homes in Los Angeles fire zones.

    AP News. (2025). Photos of Beijing’s World Humanoid Robot Games.

    Georgia Tech. (2025). Seashells inspire better way to recycle plastic.

    Google Gemini – Official Page

    12 min
  • Episode 6: Class Time – Ada Lovelace, NotebookLM, and More
    2025/08/08

    🎙 Episode 6 – Class Is in Session


    This week on The Tinker Table, we’re kicking off a brand-new “class time” format—short, thoughtful episodes packed with stories, tools, and questions to keep your curiosity thriving between our deep dives.


    In this episode:


    Innovator Spotlight: Ada Lovelace—the visionary who imagined generative computing more than a century before it existed, blending logic and imagination into what she called "poetical science."


    Stay in the Know:

    🚀 SpaceX Crew-9 sets a record for the fastest human spaceflight to the ISS

    🤖 The White House AI Action Plan outlines guardrails for AI use in public services

    🧠 Google’s NotebookLM gets a major update, making it an even stronger closed-network AI study partner for students and educators


    Tools for the Table: A deep dive into NotebookLM—how it works, why it matters, and simple ways to try it this week.


    We’ll wrap with a reflection prompt and invite you to help choose the topic for our next deep dive: Digital Literacy or Systems Thinking.


    Sources & References:


    NASA. (2025). Crew-9 mission update. nasa.gov


    The White House. (2025). AI Action Plan. whitehouse.gov


    Google. (2025). NotebookLM update. notebooklm.google


    To learn more about Ada Lovelace:


    Essinger, J. (2014). Ada’s Algorithm: How Lord Byron’s Daughter Ada Lovelace Launched the Digital Age. Melville House.


    Fuegi, J., & Francis, J. (2003). Lovelace & Babbage and the creation of the 1843 'notes'. IEEE Annals of the History of Computing, 25(4), 16–26.


    🎧 Listen now to join our first “class time” session and see how history, tech news, and practical tools can spark your thinking for the week ahead.

    15 min
  • Episode 5: Everyday Ethics – Should I Let AI Do That?
    2025/07/29

    🎙️ Episode 5 – Everyday Ethics: Should You Let AI Do That?

    with Hannah from The Tinker Table


    This is the final part of our five-episode deep dive into AI Ethics.


    We all talk about the big questions in AI—regulation, safety, bias, policy.

    But what about the quiet ones?

    The ones that happen on an ordinary Tuesday?


    Today’s episode zooms in on the everyday ethics of using AI tools like ChatGPT or Midjourney to help with writing, art, or communication. Whether it’s a student turning in an AI-edited paper, a teacher testing lesson plans, a parent using AI to write a birthday caption, or a creator navigating authorship—you’ve probably wondered:

    Should I let AI do that?


    In this episode, Hannah explores:


    💡 The classroom tension between academic integrity and accessibility

    💬 Whether AI-generated condolences or thank-you notes still count as sincere

    🎨 The creative gray area of using AI in illustration and art—when it’s helpful, when it’s harmful, and when it’s just… complicated

    🧠 Why being honest about your creative process matters more than drawing hard lines


    Plus, we reflect on real-world cases like the 2023 lawsuit against Stability AI and Midjourney for unauthorized use of copyrighted art (Reuters, 2023).


    If you’ve ever caught yourself asking “Is this okay?”—this episode is for you.


    It’s not about shame.

    It’s about showing up with awareness, curiosity, and intention.


    ✨ Because ethics isn’t about perfection—it’s about presence.


    🔗 Referenced articles:

    • Reuters (2023): Artists sue Stability AI, Midjourney over AI-generated art

    https://www.reuters.com/legal/transactional/lawsuits-accuse-ai-content-creators-misusing-copyrighted-work-2023-01-17/

    • NPR (2024): Authors warn of AI-generated books mimicking real authors on Amazon

    https://www.npr.org/2024/01/10/1223567344


    🎧 The Tinker Table is a space for thoughtful conversations about AI, education, creativity, and the human side of technology. Subscribe wherever you get your podcasts!


    15 min
  • Episode 4: Everyday Users, Extraordinary Influence
    2025/07/22

    Episode 4 – Everyday Users, Extraordinary Influence | AI Ethics Deep Dive


    You don’t need to code to make a difference in how AI is used.


    In this episode of The Tinker Table, we spotlight teachers, nurses, caregivers, and small-town entrepreneurs who are using AI tools in thoughtful, powerful ways—raising questions, catching bias, and making tech more human-centered.


    We’ll unpack:


    • How everyday users influence AI outcomes
    • The three essential questions anyone can ask about AI tools
    • Why gatekeeping in tech leaves out critical voices
    • And how you belong in the conversation—even if you're not a tech expert


    This is part 4 of our five-part deep dive into AI Ethics. It’s all about reclaiming your agency in a world shaped by algorithms.


    You don’t have to build the system to help change it. You just have to start asking better questions.

    10 min
  • Episode 3: When AI Gets It Wrong
    2025/07/08

    When artificial intelligence systems fail, the consequences aren’t always small—or hypothetical. In this episode of The Tinker Table, we dive into what happens after the error: Who’s accountable? Who’s harmed? And what do these failures tell us about the systems we’ve built?


    We explore real-world case studies like:


    The wrongful arrest of Robert Williams in Detroit due to facial recognition bias

    The racially biased predictions of COMPAS, a sentencing algorithm used in U.S. courts

    How predictive policing tools reinforce historical over-policing in marginalized communities

    We also tackle AI hallucinations—false but believable outputs from tools like ChatGPT and Bing’s Sydney—and the serious trust issues that result, from fake legal citations to wrongful plagiarism flags.


    Finally, we examine the dangers of black-box algorithms—opaque decision-making systems that offer no clarity, no appeal, and no path to accountability.


    📌 This episode is your reminder that AI is only as fair, accurate, and just as the humans who design it. We don’t just need smarter machines—we need ethically designed ones.


    🔍 Sources & Further Reading:


    Facial recognition misidentification

    Machine bias

    Predictive policing

    AI hallucinations


    🎧 Tune in to learn why we need more than innovation—we need accountability.

    9 min
  • Episode 2: Who Is at the AI Table?
    2025/07/01
    If AI is shaping our future, we have to ask: Who’s shaping AI? In this episode of The Tinker Table, Hannah digs into the essential question of representation in technology—and why it matters who gets invited to build the tools we all use.

    We explore how a lack of diversity in engineering and data science has led to real-world consequences: from facial recognition tools that misidentify women of color (Buolamwini & Gebru, MIT Media Lab, 2018) to healthcare algorithms that underestimated Black patients’ needs by nearly 50% (Obermeyer et al., Science, 2019).

    This episode blends Hannah’s own research on belonging in engineering education with broader examples across healthcare, education, and AI development. You’ll hear why representation isn’t just about race or gender—it’s about perspective, lived experience, and systemic change. And most importantly, we talk about what it means to build tech that truly works for everyone.

    Whether you’re a developer, educator, team leader, or thoughtful user—pull up a seat.

    Sources & References:

    Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

    Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
    9 min
  • Episode 1: Is AI Good?
    2025/07/01
    Is AI good? Is it bad? Or is it something more complicated—and more human—than we tend to admit?

    In this first episode of The Tinker Table, Hannah breaks down the foundations of AI ethics—what it is, why it matters, and where it shows up in our lives. From biased hiring algorithms that penalized women (Winick, 2022) to predictive systems shaped by decades-old redlining data (The Markup, 2021), and even soap dispensers that don’t detect darker skin tones (Fussell, 2017)—this episode explores the ways AI isn’t just about what we can build, but what we should.

    We ask: Who gets to shape the tools shaping our world? What values are embedded in our algorithms? And what happens when human bias becomes digital infrastructure?

    Whether you’re a teacher, parent, technologist, or simply AI-curious—this is the conversation to start with.

    Sources & References:

    Winick, E. (2022, June 17). Amazon ditched AI recruitment software because it was biased against women. MIT Technology Review.

    The Markup. (2021, August 25). The secret bias hidden in mortgage-approval algorithms.

    Fussell, S. (2017, August 17). Why can’t this soap dispenser identify dark skin? Gizmodo.
    10 min
  • Episode 0: What is the Tinker Table?
    2025/06/29

    Pull up a chair—this is The Tinker Table. I’m Hannah: a STEM educator, researcher, and deeply curious human. In this short introductory episode, I’ll share what this podcast is all about, what brought me here, and what you can expect in future episodes. If you’ve ever had big questions about the technologies shaping our world—and the values behind them—you’re in the right place. From AI ethics to engineering education to everyday tech choices, we’ll explore how to be more thoughtful, intentional creators and users. Let’s get into it.

    3 min