Episodes

  • The Learning Curve: Part 2 - The Student Dilemma - Is AI the great equalizer — or the next thing that widens the gap?
    2026/03/19

    In Episode 2 of The Learning Curve, JR and ARIA go inside the student experience — and what they find is messier, more hopeful, and more urgent than the cheating-panic headlines suggest.

    This episode covers:

    • How first-generation students are using AI to access tutoring they could never afford
    • Why Turnitin's false positive rates are harming the very students AI was supposed to help (see the sketch after this list)
    • What cognitive science says about 'desirable difficulties' and when AI use undermines learning
    • Why AI fluency is already becoming a class marker in the labor market
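
    As a rough illustration of the false-positive point, here is a minimal base-rate sketch in Python. Every number in it is a hypothetical placeholder, not a published Turnitin rate; only the arithmetic is the point.

```python
# Hypothetical base-rate sketch: why a "small" false positive rate still
# flags many honest students. None of these numbers are Turnitin's
# published rates; they are placeholders for the arithmetic.

submissions = 10_000
honest_share = 0.90   # assumed share of work written without AI
fpr = 0.02            # assumed false positive rate on honest work
tpr = 0.85            # assumed detection rate on AI-written work

honest = submissions * honest_share
ai_written = submissions - honest

false_flags = honest * fpr        # honest students wrongly flagged
true_flags = ai_written * tpr     # AI-written work correctly flagged
share_innocent = false_flags / (false_flags + true_flags)

print(f"Total flags: {false_flags + true_flags:.0f}")
print(f"Honest students flagged: {false_flags:.0f} ({share_innocent:.0%} of flags)")
```

    Under these made-up rates, roughly one flag in six points at a student who wrote honestly, and the innocent share climbs as actual AI use in the pool drops.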

    ARIA also names what she fundamentally cannot know — including whether a student understood something or just produced something that looks like understanding.

    Resources Referenced in This Episode

    • Khan Academy Khanmigo — khanmigo.khanacademy.org | Free AI tutoring built for students
    • Common Sense Media AI Literacy Curriculum — commonsense.org/education | Free K-12 curriculum
    • Day of AI — dayofai.org | Free AI literacy materials from MIT
    • All4Ed Student Resources — all4ed.org | Equity-focused education policy and tools
    • Turnitin Academic Integrity Resource Center — turnitin.com/educators/academic-integrity
    • Student Voice — studentvoice.com | Student-led advocacy and policy engagement



    44 min
  • The Learning Curve: Part 1 - The AI Educator Paradox: Is Tech Saving Teachers or Replacing Them?
    2026/03/04

    Welcome to Episode 1 of The Learning Curve! Host JR, The AI Learning Guide, alongside AI co-hosts Nex and ARIA, investigates the hidden reality of how teachers are actually using AI. While public narratives focus on student cheating, a quiet revolution is happening behind the scenes. Teachers are secretly adopting AI at home to combat burnout, handle complex IEP documentation, and differentiate lesson plans.

    But does AI truly save time, or does it merely shift the workload? We dive deep into the "ethical-cognitive burden" placed on educators when they are forced to act as human shields for opaque algorithms. We also explore the severe risks of "cognitive offloading" and de-skilling: if an AI writes the lesson plan, are new teachers losing the pedagogical craft of design? Finally, we unpack the "Illusion of Competence" in students through "Vibe Coding" and why human emotional labor remains the irreplaceable core of teaching.

    Whether you are an educator navigating "AI education anxiety," a school leader establishing policies, or a parent curious about the future of learning, this episode provides actionable, data-backed insights.

    Resources Mentioned:

    • The Vibe-Check Protocol (VCP)
    • Job Crafting Strategies for Educators
    • Cognitive Load Theory (CLT) Lens for AI
    • MagicSchool.ai & Diffit

    36 min
  • AI in 5: AI Hallucinations: When Smart Systems Sound Smart… But Get It Wrong (March 3, 2026)
    2026/03/03

    AI is powerful. Fast. Fluent. Persuasive. But it isn’t perfect.

    In this episode of AI in 5, Tour Guide JR D breaks down one of the most misunderstood challenges in generative AI today: hallucinations. From fabricated citations discovered in AI-assisted research papers to high-profile legal missteps involving made-up case law, we explore how and why advanced language models sometimes generate confident but incorrect information.

    You’ll learn what an AI hallucination actually is, why probabilistic systems can “complete patterns” instead of verifying facts, and how this issue affects professionals in research, law, healthcare, and business. We also examine what companies are doing to reduce hallucination rates through retrieval-augmented generation, benchmarking, and improved transparency.
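
    As a rough sketch of the retrieval-augmented generation idea mentioned above, here is a self-contained toy in Python. The three-document corpus, keyword scorer, and prompt wording are illustrative stand-ins, not any vendor's actual implementation.

```python
import re

# Toy sketch of retrieval-augmented generation (RAG): ground the model's
# answer in retrieved text instead of letting it free-associate. The
# corpus, keyword scorer, and prompt wording are illustrative stand-ins.

CORPUS = [
    "The EU AI Act entered into force in 2024.",
    "Retrieval-augmented generation grounds model output in retrieved documents.",
    "A hallucination is a fluent but unsupported model output.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = tokenize(query)
    ranked = sorted(CORPUS, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Tell the model to answer only from retrieved passages, or decline."""
    sources = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the sources below. If they are not sufficient, "
        "say so instead of guessing.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

print(build_prompt("What is a hallucination in generative AI?"))
```

    Production systems replace the keyword scorer with embedding search, but the contract is the same: retrieve first, then constrain generation to what was retrieved.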

    Most importantly, this episode gives you practical guidance on how to use AI responsibly: verify sources, maintain human oversight, and treat AI as a collaborator — not an oracle.

    If you use AI in your workflow, this is an essential listen.

    6 min
  • The Invisible AI: Part 4 — Arguing With a Machine: AI Accountability, Your Rights, and How to Fight Back
    2026/02/28

    Episode 4 of 4 | The Invisible AI Series | AI Innovations Unleashed

    When an algorithm denies your job, your apartment, or your health insurance — and takes 1.2 seconds to do it — who is actually responsible?

    In this series finale, JR D. and AI research companion Ada close out "The Invisible AI" by tackling the accountability gap: legally, practically, and personally.

    We dig into class-action lawsuits against Cigna, Humana, and UnitedHealth Group over AI-driven claim denials, the Mobley v. Workday Inc. ruling (2025) that held AI hiring vendors directly liable for discrimination, and the SafeRent $2M+ settlement that shifted the conversation for renters.

    We break down COMPAS — the criminal risk tool at the center of ProPublica's "Machine Bias" investigation — and explain what new laws in Colorado and the EU mean for your rights today.

    Then we get practical: how to request your data, dispute an algorithmic decision, and file a complaint that actually goes somewhere.

    Featuring Dr. Joy Buolamwini (Algorithmic Justice League, author of Unmasking AI) and Microsoft CEO Satya Nadella.

    Resources: AnnualCreditReport.com | CFPB.gov | EEOC.gov | ProPublica Machine Bias (2016) | Colorado AI Act (2024)

    Full APA citations at AIInnovationsUnleashed.com

    Up next: "The Learning Curve: AI & the Future of Education" — March 2026 with new co-host ARIA. Episode 1: "The Teacher in the Age of AI."

    Subscribe now.

    #AIInnovationsUnleashed #AlgorithmicAccountability #AIBias #COMPAS #KnowYourRights #TheLearningCurve

    42 min
  • 🎙️ The Friday Download - AI Wants a Body, Governments Want Control, and Your Laptop Wants Power (February 27, 2026)
    2026/02/28

    This week’s Friday Download explores a structural shift in artificial intelligence. Humanoid robotics research is advancing embodied AI through improved proprioception, allowing machines to better understand and correct their physical movement. Meanwhile, enforcement momentum around the European Union’s AI Act signals the beginning of real regulatory friction for large model providers, raising questions about transparency, compliance, and global standards.

    We also examine the rise of AI-optimized processors designed for on-device inference — a decentralization trend that could reshape privacy, latency, and power dynamics in AI deployment. In healthcare, multimodal AI models combining imaging, lab, and clinical data continue to demonstrate improved early detection capabilities. And in education, AI-assisted instructional tools are quietly reducing teacher workload through differentiated material planning.

    Referenced reporting and research themes align with coverage from MIT News (robotics research), Reuters and Financial Times (EU AI Act enforcement), Bloomberg and The Verge (AI hardware announcements), and Nature/STAT News (multimodal healthcare AI advancements).

    AI is moving from novelty to infrastructure. This episode unpacks what that means — and why it matters now.

    12 min
  • 🎙️ AI in 5 - $650 Billion and Counting: What Big Tech’s AI Spending Surge Means for YOU (February 24, 2026)
    2026/02/24

    Big Tech is projected to invest roughly $650 billion in artificial intelligence in 2026 — and that headline number is more than just tech hype. In this episode of AI in 5, Tour Guide JR D breaks down what that spending surge actually means for everyday professionals, business leaders, and curious learners.

    Drawing on recent reporting from Reuters and data from Stanford’s 2025 AI Index Report, we explore where that money is going: data centers, AI chips, cloud infrastructure, talent acquisition, and enterprise automation. But the bigger question is why it matters.

    This episode connects the dots between AI capital expenditures and real-world impact — from changing hiring patterns and workplace productivity shifts to investor pressure and long-term market risks. Is this the foundation of a new industrial revolution… or the early warning signs of an overheated AI bubble?

    You’ll walk away understanding how large-scale AI investment affects your career trajectory, the digital tools you use daily, and the broader economy.

    Listen in, stay informed, and start asking: how is this AI spending wave reshaping my future?

    6 min
  • The Invisible AI - Part 3: Your Bias Is Showing — And So Is the Algorithm's (TEASER)
    2026/02/21

    Can fixing AI bias be as simple as cleaning the data? The math says no. Explore algorithmic fairness — and why neutrality is never truly neutral.

    2 min
  • The Invisible AI - Part 3: Your Bias Is Showing — And So Is the Algorithm's
    2026/02/21

    Episode 3 of The Invisible AI asks the hardest question yet: what if the math itself is the problem?

    Tour Guide JR D and AI research companion Ada explore why 'just fix the data' isn't enough — and why algorithmic bias runs deeper than dirty training sets. From Amazon's gender-biased hiring tool (2018) to the Optum healthcare algorithm that mistook systemic inequity for health status, to COMPAS criminal risk scores and their proven mathematical fairness trade-offs, to the self-reinforcing feedback loops of predictive policing — this episode maps the full, layered architecture of AI bias.

    We also cover the explosive Workday hiring AI lawsuit (Mobley v. Workday, 2024–2025), the SafeRent $2.275M settlement, and the EU AI Act's phased rollout — plus a clear-eyed look at proxy variables, the Chouldechova & Kleinberg impossibility theorems, and the human values embedded in every algorithmic design choice.
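
    For readers curious what the impossibility result actually says, the identity Chouldechova (2017) derives for any calibrated score, FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR), makes the trade-off concrete: once two groups have different base rates p, they cannot simultaneously share PPV, FPR, and FNR. A small numeric sketch (the rates below are illustrative, not COMPAS's actual numbers):

```python
# Numeric sketch of the fairness trade-off formalized by Chouldechova (2017):
# for any calibrated score,
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# so groups with different base rates p cannot share PPV, FPR, and FNR all
# at once. The rates below are illustrative, not COMPAS's actual numbers.

def implied_fpr(p: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by prevalence p, precision PPV, miss rate FNR."""
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

# Same tool behavior (PPV = 0.7, FNR = 0.35) applied to two groups whose
# underlying base rates differ:
for group, p in [("group A", 0.5), ("group B", 0.3)]:
    print(group, round(implied_fpr(p, ppv=0.7, fnr=0.35), 3))
# -> group A 0.279, group B 0.119: equal precision forces unequal FPRs.
```

    This is the sense in which the episode argues that neutrality is never truly neutral: some fairness criterion has to give, and that choice is a human one.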

    Featuring verified quotes from Dr. Joy Buolamwini (Algorithmic Justice League), Cathy O'Neil (Weapons of Math Destruction), Dr. Aylin Caliskan (University of Washington), and Google CEO Sundar Pichai.

    REFERENCES

    • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
    • Buolamwini, J. (2017). How I'm fighting bias in algorithms [TED Talk]. TED Conferences.
    • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
    • Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
    • Cohen Milstein Sellers & Toll PLLC. (2024, November 20). Rental applicants using housing vouchers settle ground-breaking discrimination class action against SafeRent Solutions.
    • Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
    • Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580.
    • Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Proceedings of Machine Learning Research, 81 (FAccT '18).
    • Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017).
    • Mobley v. Workday, Inc. (2023–ongoing). U.S. District Court, N.D. California. Case No. 3:23-cv-00770-RFL.
    • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
    • O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
    • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
    • Pichai, S. (2024, February 28). Internal memo on Gemini image generation [Leaked to media]. Reported by Semafor and The Verge.
    • U.S. Senate Permanent Subcommittee on Investigations. (2024, October 17). Refusal of recovery: How Medicare Advantage insurers have denied patients access to post-acute care. U.S. Senate.
    • Wilson, K., Gueorguieva, A.-M., Sim, M., & Caliskan, A. (2025, November 10). People mirror AI systems' hiring biases. University of Washington News.
    • Wilson, K., & Caliskan, A. (2024). Gender, race, and intersectional bias in resume screening via language model retrieval. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES 2024).

    47 min