Episodes

  • AI Holiday Hacks | New Traditions, Family Talks & What's Safe
    2025/12/16

    AI is coming to your holiday table whether you like it or not.

    In this episode, Juan and Kate share a practical AI holiday playbook for parents and families, focused on AI safety, AI for kids, and real-world holiday use cases that won’t turn dinner into a boiler room.

    They cover AI holiday hacks that can make family gatherings easier, including safe ways to entertain kids with AI, how to talk to grandparents about AI without scaring them, and which AI topics will instantly derail the room. Share your best (or worst) AI holiday conversation in the comments!


    🫟 ADDITIONAL RESOURCES

    Create new holiday traditions with AI: https://www.microsoft.com/en-us/microsoft-365-life-hacks/everyday-ai/create-new-holiday-traditions

    ‘It’s so crushing’: US families navigate divide over politics during the holidays: https://www.theguardian.com/us-news/2024/dec/23/family-politics-holiday


    🫟 TOPICS

    00:00 Why AI Keeps Coming Up at Family Holidays

    00:29 The AI Holiday Playbook Strategy

    01:29 Using AI to Entertain Kids: Helpful or Risky?

    02:41 Low-Risk AI Activities Kids Love

    03:33 Family Tech Safety: When AI Crosses a Line

    04:53 How to Explain AI Safety to Your Family

    06:46 Why AI Apps Want Faces and Family Data

    07:31 Big Tech’s Take on AI Holiday Traditions

    09:54 AI for Crafts & DIY Instructions

    10:45 The Holiday Health Tracking Fail

    11:36 AI Red Flags: Politics & Surveillance

    12:53 Parenting Safety: The Bus Station Analogy

    14:38 Economic Fears & The AI Bubble

    16:03 AI Trends: Art vs. Slop Debate

    18:51 A Simple Rule for Smarter AI Conversations


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

20 min
  • AI Browser Hacks: Prompt Injection & the Real Cost of Convenience
    2025/12/05

    How much security are you willing to trade for convenience? Juan and Kate break down how prompt injection attacks exploit AI browsers like ChatGPT Atlas and Perplexity Comet, and why invisible instructions inside webpages can hijack your agents without you knowing.

    We also discuss the resume hack going viral, the difference between direct vs. indirect prompt injection, and the real strategic trade-offs between convenience and LLM security.


    🫟 ADDITIONAL RESOURCES

    - Prompt injection: A visual, non-technical primer for ChatGPT users: https://www.linkedin.com/pulse/prompt-injection-visual-primer-georg-zoeller-tbhuc/

    - AI browsers are here, and they're already being hacked: https://www.nbcnews.com/tech/tech-news/ai-browsers-comet-openai-hacked-atlas-chatgpt-rcna235980

    - Using an AI Browser Lets Hackers Drain Your Bank Account Just by Showing You a Public Reddit Post: https://futurism.com/ai-browser-hackers-drain-bank-account-public-reddit-post


    🫟 TOPICS

    00:00 - Why AI Browsers Like Atlas and Comet Are a Security Risk

    00:50 - Invisible Instructions Hijacking Your AI Agent

    01:51 - Prompt Injection Explained for Beginners

    02:39 - The Hack That Exposes AI Browser Weaknesses

    03:40 - The Resume Hack: Watch Your Data Get Stolen

    04:43 - Phishing Attack Using Simple Meta Tags

    05:20 - Hidden Malicious Prompts in Metadata & PDFs

    06:00 - Direct Injection: Forcing Models Past Guardrails

    06:41 - Indirect Injection: Embedded Instructions for Agents

    07:22 - We're Playing With Fire: AI Browser Security Is a Mess

    09:03 - Why AI Agents Get Manipulated So Easily

    12:55 - ChatGPT Atlas & Perplexity Comet: Can We Trust These Browsers?

    14:13 - What is Your Cost of Convenience? The Risks of AI Automation

    16:01 - Why First-Gen AI Agents Will Always Be Flawed


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

17 min
  • Your AI Assistant Is Your Worst Distraction
    2025/11/14

    AI productivity tools were supposed to help, but they often end up getting in the way of actual work.

    Microsoft says the average worker is interrupted 275 times a day, and many of those dings now come from the same AI tools that promised to keep us focused. Juan and Kate talk about how AI for business has turned into a distraction machine, how Copilot and similar tools push prompts no one asked for, and whether engagement metrics are driving this productivity mess.


    🫟 ADDITIONAL RESOURCES

    Microsoft, Work Trend Index Special Report "Breaking Down the Infinite Workday": https://www.microsoft.com/en-us/worklab/work-trend-index/breaking-down-infinite-workday


    🫟 TOPICS

    00:00 When Interruptions Take Over Your Workday

    00:08 Why AI Tools Keep Pulling Your Attention Away

    00:35 Copilot And The Problem With “Helpful” Prompts

    01:32 Why SaaS Tools Bake In Interruptions

    03:20 Every App Trying To Teach You At Once

    05:54 Your Attention As The Real Resource

    06:52 Engagement Metrics vs. Productivity

    07:50 The All-In-One AI Tools Ecosystem Theory

    09:13 Why SaaS Tools Won’t Give Up Notifications

    12:47 What People Really Do With AI at Work

    13:57 Using AI Personas To Stress-Test Your Ideas

    14:48 AI For Data Storytelling

    16:31 One Easy Step To Level Up With AI

    18:42 The Real Gap In AI Productivity At Work

    19:48 Real-Time Interruption: Meet Ramón

    20:22 How AI Could Handle Most Executive Decisions

    22:17 One More Thing...


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

23 min
  • OpenAI's New Strategy... Proves the Bubble is Bursting?
    2025/11/06

    OpenAI’s gone full YOLO: agents that work for you, a Sora app that feels like TikTok, an AI browser that wants to replace Chrome, and a new “adult freedom” stance. Juan and Kate dig into the logic, or lack of it, behind OpenAI’s everything-everywhere strategy, and why even its biggest users are starting to push back.


    🫟 Additional Resources

    The AI Resisters (Axios) https://www.axios.com/2025/10/19/ai-resistance-students-coders

    Workforce Outlook: The Class of 2026 in the AI Economy https://joinhandshake.com/themes/handshake/dist/assets/downloads/network-trends/class-of-2026-outlook.pdf

    Zuckerberg signals Meta won’t open source all of its ‘superintelligence’ AI models https://techcrunch.com/2025/07/30/zuckerberg-says-meta-likely-wont-open-source-all-of-its-superintelligence-ai-models/


    🫟 Topics

    00:00 – Intro

    00:07 – OpenAI’s new playbook: agents, Sora, and Stargate

    00:39 – AI agents everywhere: from dev tools to browsers

    02:52 – Building AGI or burning cash? What’s OpenAI’s real plan?

    06:00 – The difference between open source and closed AI models

    06:22 – Meta vs OpenAI: Competing to own AI’s Future

    09:35 – The rise of AI resistance: workers, coders, students push back

    11:24 – Using AI tools you don’t trust

    13:51 – The vibe-coding trap

    14:40 – Human-made content becoming the new luxury

    18:30 – Where’s your line in the sand with AI? Ethics and trust

    19:48 – Smarter ways to use AI

    21:49 – Puppies & babies, our weekly fix of Slop


    🫟 About Slop World

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

24 min
  • The Emotional Cost of AI Slop at Work
    2025/10/30

    AI-generated work slop is spreading, and it’s turning offices into zombie zones. In this Halloween Special, Juan and Kate dig into how automation at work and AI productivity tools are creating “zombie workers,” why Sam Altman is flirting with the D3ad Internet Theory, and how corporate AI culture is quietly eroding creativity.

    Watch the full episode for a very special Halloween-themed curated slop.


    🫟 Topics:

    00:00 – Halloween Special: AI, zombies, and slop

    00:42 – The rise of zombie workers and workplace automation

    02:39 – The emotional cost of AI-generated work slop

    03:42 – Why companies fail at AI learning and workforce training

    07:00 – The “cat’s out of the bag” moment for corporate AI

    07:38 – Can smarter workforce learning fix AI fatigue?

    09:24 – Less is more: human value in AI workplaces

    11:09 – The D3ad Internet Theory: AI’s ghost in the machine

    13:37 – Engagement farming and the end of real content

    15:08 – Is Sam Altman breaking the internet on purpose?

    18:10 – How AI filters shape online reality

    20:28 – The hidden cost of easy AI answers

    21:30 – AI Slop of the Week: Halloween Edition


    🫟 About Slop World

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.


    #OpenAI #AIethics #aiproductivity

24 min
  • The AI Security Crisis: Vibe Coding, Deep Fakes, Scams, and More
    2025/10/23

    October is Cybersecurity Awareness Month, and Juan and Kate talk about one of the most controversial aspects of the AI arms race: SECURITY. From the rise of “vibe coding” (letting AI write your code for you) to AI-powered scams, they unpack how the "move fast and break things" approach is once again opening the door to major exploits, both in computers and in humans.


    🫟 About Slop World

    Juan and Kate plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    Subscribe to @slopworldpodcast on YouTube and wherever you get your podcasts.


    🫟 Timestamps:

    00:00 – It's Cybersecurity Awareness Month!

    01:43 – The 3 Biggest AI Security Gaps

    02:39 – What Is AI Vibe Coding?

    05:05 – Is Vibe Coding a Security Nightmare?

    07:03 – What If AI Went Down Tomorrow?

    09:51 – AI-Powered Scams and Social Engineering

    14:20 – Who's the Sloppiest? Meta AI vs. Sora 2

    16:20 – The Rise of AI Slop Social Platforms

    20:27 – Are We Training Ourselves to Accept Fake Content?

23 min
  • Your AI Companion Will Never Leave You (And That's a Big Problem)
    2025/10/16

    AI actors are moving into the spotlight and AI companions are becoming part of daily life. This episode looks at how synthetic talent like Tilly Norwood changes entertainment and how emotional AI shapes behavior, relationships, and mental health.

    The question we'll try to answer is simple, but loaded: who profits when the “talent” isn’t human, and what does it mean when real people start forming emotional bonds with AI companions and chatbots that never say no? It’s part ethics, part psychology, and all very 2025.


    🫟 RESOURCES FOR PARENTS

    MAMA – Mothers Against Media Addiction:

    https://wearemama.org/

    Sign up for the MAMA newsletter:

    https://wearemama.org/connect/

    Join a local chapter:

    https://wearemama.org/find-your-chapter/

    Center for Humane Technology:

    https://www.humanetech.com/

    Common Sense Discussion Guide for Kids:

    https://www.commonsense.org/system/files/pdf/2025-05/activity-guide-for-parents-talking-to-your-kids-about-ai-5.pdf


    🫟 TOPICS

    00:00 Getting Fooled By an AI Cat Clip

    01:00 Tilly Norwood And the AI Actress Debate

    02:45 What “Talent” Means When the Actor Is Synthetic

    04:50 How AI Avatars Reinforce Beauty Standards

    07:40 Why AI Influencers Hold Attention

    10:50 Unions, Labor, And the Role of AI Policy

    13:50 Why People Bond With AI Avatars

    15:30 When an AI Companion Changes Overnight

    17:00 Why Chatbots Feel So Supportive

    20:20 What Parents Should Know About AI Chat Apps

    26:30 A Useful AI Assistant Story With Leo


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

30 min
  • Karen Hao’s Empire of AI: Did OpenAI Become the Villain?
    2025/10/07

    Juan and Kate discuss Karen Hao’s new book, Empire of AI, and the power struggle that shaped the modern AI industry, from OpenAI’s idealistic nonprofit beginnings to its billion-dollar partnerships and the very public Sam Altman vs. Elon Musk rift.

    Later, we jump into AI's most solid use case to date: Italian Brainrot. And we don't want to spoil the end, but it might or might not include a discussion of Viktor Orbán's social media playbook.

    Join us in the slop. We’re all heading there anyway, so come as you are and bring some friends (maybe an old enemy or two?).


    🫟 About Slop World

    Juan and Kate plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.


    🫟 Juan Faisal:

    - https://www.linkedin.com/in/akajuanfaisal/

    - https://juanfaisal.com/


    🫟 Kate Cook:

    - https://www.erasevenpartners.com/

    - https://www.linkedin.com/in/katecook/


    🫟 Timestamps:

    0:00 Intro

    0:40 Karen Hao's Book: Inside OpenAI's 'Empire of AI'

    1:33 The Real Story of OpenAI's Founding Vision

    2:50 From Idealism to Corporate Power: How OpenAI Changed

    3:32 Sam Altman vs. Elon Musk: What Caused The Split?

    4:52 Is OpenAI The Villain? Karen Hao's Framing

    6:52 AI Talent Wars: The Unspoken Cost of Wealth Grab

    7:52 91% of AI Research Is Corporate: The New Gatekeepers

    8:40 AI Bubble? The Growing Evidence Against the Hype

    9:31 What If LLMs Can't Reach AGI? A Different Path

    11:06 The Stochastic Parrot: Can AI Truly Be Creative?

    18:03 The Weirdest AI Trend: Decoding Italian Brainrot

    26:28 When Governments Use Brainrot: The Orbán Example

29 min