Cover art for Slop World Podcast

Slop World Podcast

Authors: Juan Faisal / Kate Cook

Overview

Juan and Kate plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

Category: Politics & Government
Episodes
  • The Great AI Sellout: ChatGPT's New Reality, Claude's Super Bowl Ad & Sam Altman's Fury
    2026/02/11

    The Great AI Sellout: OpenAI vs Anthropic drama! 🧪

    Anthropic dropped a Super Bowl ad, but was it a clever ethical flex or just pure PR shade thrown at OpenAI? Meanwhile, Sam Altman is calling Anthropic “elitist” while his own company stares down a brutal business math problem: only 5% of ChatGPT users pay. So… are ads inevitable for OpenAI to survive? 💸

    Join Juan and Kate as they unravel the deep-seated rivalry between OpenAI and Anthropic, tracing the beef back to fundamental disagreements about AI's mission and ethics. We're talking:

    👉 The OpenAI–Anthropic Breakup: Why the founders left, and how it shaped today’s AI landscape.

    👉 The High Cost of AI: Why these models are so expensive to run, and what that means for their business models.

    👉 Competing Futures: Is Claude’s focused, paid approach genuinely more ethical, or will ChatGPT’s “AI for everyone” strategy dominate (or collapse)?

    👉 Sam Altman’s Irony: The regulation guy criticizing others for… gatekeeping? We unpack the contradictions.

    👉 Our Spiciest Take: Who’s actually positioned to win the AI arms race (and why it might not be who you think).

    Beyond the AI ads, this episode dives into the soul of artificial intelligence: the incentives driving it, the people it really serves, and what happens when “doing good” collides with “making money.”

    What do you think — is AI doomed to become just another ad machine, or is there still a path to principled, powerful AI? Drop your takes in the comments, we’re reading them all.


    🫟 ADDITIONAL RESOURCES

    Harvard University: https://www.youtube.com/watch?v=FVRHTWWEIz4&t=2322s

    Anthropic Ad: https://www.youtube.com/watch?v=gmnjDLwZckA

    "Claude is a space to think" (Anthropic)

    https://www.anthropic.com/news/claude-is-a-space-to-think

    "Our approach to advertising and expanding access to ChatGPT" (OpenAI)

    https://openai.com/index/our-approach-to-advertising-and-expanding-access/

    "Big Tech’s $630 billion AI spree now rivals Sweden’s economy, unsettling investors" (Fortune)

    https://fortune.com/2026/02/06/what-is-a-data-center-capex-spending-630-billion-dollars-amazon-microsoft-google-meta/

    "Financial Expert Says OpenAI Is on the Verge of Running Out of Money" (Yahoo! Finance)

    https://finance.yahoo.com/news/financial-expert-says-openai-verge-200606874.html?guccounter=1

    Sam Altman's post about Anthropic's ad:

    https://x.com/sama/status/2019139174339928189


    🫟 TOPICS

    00:00 – The Great AI Sellout: What’s Actually Going On?

    00:21 – Reacting to Claude’s Super Bowl Ad Roast

    00:48 – Sam Altman Reacts to Claude's Ad

    01:23 – The OpenAI-Anthropic Breakup: Why Founders Left

    02:45 – The AI Super Bowl Ad Battle Begins

    04:57 – OpenAI's Money Problem: Why They NEED to Monetize

    06:39 – Can OpenAI Survive on Only 5% Paid Users?

    08:10 – Claude vs ChatGPT: Two Competing AI Futures

    08:41 – Sam Altman Calls Anthropic "Elitist": The Class War Argument

    11:00 – The Irony of Altman Calling Out AI Regulation

    12:44 – Is AI Advertising Inherently Misleading?

    14:06 – Who Wins the AI Race? Claude, ChatGPT, or Google?

    15:14 – Why Claude's Focused Approach Just Makes Sense

    15:38 – Our Spiciest Take: Google Will Probably Win Anyway

    16:43 – Final Verdict: Don't Trust a Brand, Watch the Incentives

    17:41 – Bad Bunny Te Amamos + Outro


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    19 min
  • Fake AI Agents, Leaked Data, and a Viral Lie | Moltbook
    2026/02/04

    Moltbook promised to be Reddit for AI agents—a social network where 1.5 million bots could debate philosophy, create religions, and plot in secret languages while humans watched from the sidelines. Tech leaders called it "the first sign of the singularity." The internet went wild.

    Then researchers looked under the hood.

    What they found: security breaches exposing API keys and emails, fake bot accounts (one person created 500,000), marketers posing as agents to promote products, and a platform entirely "vibe coded" with zero actual code written by its founder.

    In this episode, we break down the Moltbook saga—from the weekend hype cycle to the security flaws, from Crustafarianism (yes, really) to the harsh reality of giving AI agents access to your computer. We discuss what actually happened, who's to blame, and whether this chaotic experiment tells us anything useful about the future of AI agents.


    🫟 TOPICS

    00:00 What Moltbook Is and Why It Fooled So Many People

    01:07 Why Top AI Leaders Thought Moltbook Was a Big Deal

    02:14 What Happens When AI Agents Control Your Computer

    03:21 Bots Creating Religions and Secret Codes Without Humans

    03:59 How Moltbook Blew Up Online in Just One Weekend

    05:09 The Founder Didn’t Write Code and It Caused Real Problems

    05:44 The Security Leak That Exposed Keys and Emails

    07:29 How One Person Created 500,000 Fake AI Bots

    08:25 Why the 1.5 Million Bots Claim Was Not Real

    09:29 How Marketers Pretended to Be AI Bots

    10:56 Why These AI Bots Only Seemed Smart

    12:54 Why Giving AI Agents Control Is Still Dangerous


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    16 min
  • The Accountability Gap: When AI Causes Real Damage, Who's Responsible?
    2026/01/16

    When AI goes wrong, you pay for it – with your money, your data privacy, and sometimes your health. This Slop World episode pulls apart ghost authority and the dark side of artificial intelligence: broken AI ethics, surveillance pricing, and what happens when nobody is accountable for the systems running our lives. Juan and Kate break down how companies hide behind “the algorithm” while quietly exploiting data protection gaps, health privacy loopholes, and dynamic pricing schemes you never agreed to.

    From AI security failures and the digital privacy you thought you had, to OpenAI's ChatGPT Health and medical AI delivering life‑changing decisions, we’re asking the only question that matters: when these systems screw up, who pays the price?


    🫟 ADDITIONAL RESOURCES

    • When Google’s AI gets it wrong, real people pay the price: https://www.oaoa.com/people/when-googles-ai-gets-it-wrong-real-people-pay-the-price/
    • Minnesota Solar Company Sues Google Over AI Summary: https://www.govtech.com/public-safety/minnesota-solar-company-sues-google-over-ai-summary
    • Canadian Musician Ashley MacIsaac Wants to 'Stand Up' To Google After Being Falsely Accused of Forced Contact Offenses by AI Overview: https://ca.billboard.com/business/legal/ashley-macisaac-google-defamation
    • The Price is Rigged - Today, Explained | Podcast on Spotify: https://open.spotify.com/episode/49PSPtP1neuga7kvBYakIx
    • Instacart’s AI-Enabled Pricing Experiments May Be Inflating Your Grocery Bill - Consumer Reports: https://www.consumerreports.org/money/questionable-business-practices/instacart-ai-pricing-experiment-inflating-grocery-bills-a1142182490/
    • Instacart ends AI pricing test that charged shoppers different prices for the same items - Los Angeles Times: https://www.latimes.com/business/story/2025-12-22/instacart-ends-ai-pricing-test-that-charged-shoppers-different-prices-for-same-items
    • Introducing ChatGPT Health | OpenAI: https://openai.com/index/introducing-chatgpt-health/?video=1151655050
    • OpenAI launches ChatGPT Health in US sparking privacy concerns: https://www.digit.fyi/openai-launches-chatgpt-health-in-us-sparking-privacy-concerns/
    • OpenAI: Health Privacy Notice: https://openai.com/policies/health-privacy-policy/

    🫟 TOPICS

    00:00 – Ghost Authority: Why Nobody Is Responsible When AI Messes Up

    01:42 – Algorithmic Accountability: A Checklist to Protect Your Decisions

    02:31 – Google AI Overview: The Minnesota Solar Company Hallucination

    03:19 – Reputation Ruined: The AI Hallucination That Cost a Musician His Career

    05:42 – Smart Research: How to Use ChatGPT, Gemini & Claude Without Being Fooled

    08:35 – Surveillance Pricing: Why the Internet Charges You More Than Your Neighbor

    10:41 – Instacart and Uber: The Backlash Against Dynamic Pricing

    12:42 – Save Money: Simple Tricks to Beat Hidden Algorithmic Pricing

    14:15 – The Urgency Trap: How Companies Profit From Your Stress and Fear

    15:34 – AI in Healthcare: Your Medical Data and Health Privacy Risks

    16:10 – Juan Reacts: OpenAI’s ChatGPT Health Trailer

    17:41 – AI in Healthcare: Could Your Private AI Chats Raise Your Rates?

    21:21 – The Fine Print: What OpenAI Actually Does With Your Medical Data

    23:39 – AI Health: Why AI Can’t Tell Real Science From Internet Myths

    25:42 – Data Protection: How to Anonymize Your Medical Test Results

    27:23 – Slow Down: Why Being Fast Online Makes You a Target for AI Scams

    30:03 – The Bus Stop Test: A Simple Rule for Trusting Any AI Tool


    🫟 ABOUT SLOP WORLD

    Juan Faisal and Kate Cook plunge into the slop pile—AI news, cultural shifts, and the future’s endless curveballs. They’re not here to sanitize the mess; they’re here to wrestle with it, laugh at it, and find meaning where you least expect it.

    31 min
No reviews yet.