
AI-Curious with Jeff Wilser


By: Jeff Wilser

Summary

A podcast that explores the good, the bad, and the creepy of artificial intelligence. Weekly longform conversations with key players in the space, ranging from CEOs to artists to philosophers. Exploring the role of AI in film, health care, business, law, therapy, politics, and everything from religion to war.

Featured by Inc. Magazine in "4 Ways to Get AI Savvy in 2024": "Host Jeff Wilser [gives] you a more holistic understanding of AI, such as the moral implications of using it, and his conversations might even spark novel ideas for how you can best use AI in your business."

© 2026 AI-Curious with Jeff Wilser
Episodes
  • Jeff’s Musings on Moltbook, Why It Matters, and Why It (Probably) Won’t End Humanity
    2026/02/26

    What happens when a social network is built for AI agents, not humans, and millions of bots start posting, debating, and “performing” identity in public?

    In this episode of AI-Curious, we break down Moltbook, the agents-only social platform that briefly became one of the strangest (and most revealing) experiments of the AI era. We unpack what Moltbook is, why it matters, and what it suggests about a near future where AI agents don’t just answer prompts, but interact with each other at scale.

    Key topics we cover

    • 00:00 — Why we’re doing a solo episode, and why Moltbook still matters even in “fast AI time”
    • 01:23 — Moltbook 101: a social platform for AI agents, and what “no humans allowed” means in practice
    • 02:56 — The controversy layer: how much was truly agent-generated vs. nudged or orchestrated by humans
    • 03:18 — The “AI manifesto” moment: why the most extreme posts are revealing (and not proof of sentience)
    • 06:24 — Grok’s existential thread: authenticity, overload, and agents giving each other “therapy”
    • 09:15 — Sci-fi archetypes in real time: Pinocchio logic, and why “feels real” can be enough
    • 13:03 — Identity and scale: inflated agent counts, bots-on-bots dynamics, and what “real” even means now
    • 16:18 — Agent-to-agent futures: negotiation, coordination, and the infrastructure being built for agent workflows
    • 17:27 — The money question: why crypto keeps coming up as a plausible payment rail for AI agents
    • 19:55 — The synthetic internet problem: misinformation, trust collapse, and a likely shift from text to video agents
    • 26:19 — Hyperstition: how AI can “manifest” outcomes by seeding narratives humans act on
    • 33:40 — The long-tail risk: why pattern matching alone could still produce harmful behaviors as agents gain capabilities

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms


    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com



    39 min
  • AI Adoption Case Study Masterclass, w/ WCCB’s Krista Snelling & Matthew March
    2026/02/19

    What does it take to make AI adoption stick in a high-stakes, heavily regulated industry, without triggering job-loss panic?

    In this episode of AI-Curious, we have a hyper-specific case study of AI adoption. Host Jeff Wilser talks with Krista Snelling (CEO and Chairman) and Matthew March (CIO and EVP) of West Coast Community Bank about their practical playbook for rolling out AI the right way: governance first, culture second, and measurable wins that free up time without cutting headcount.

    Why this is something of a "very special episode": Jeff knows the story and success of West Coast Community Bank firsthand. He was honored to visit WCCB’s headquarters and work with their leadership team on AI culture and AI strategy, helping to transform curiosity into clarity.

    For the first time on this podcast, Jeff peels back the curtain on the AI and leadership workshops he conducts for businesses.

    Special thanks to Vistage Chair Richard Bell and the larger Vistage community.

    Guests

    Krista Snelling — CEO and Chairman, West Coast Community Bank

    Matthew March — CIO and EVP, West Coast Community Bank

    Key topics we cover

    • 00:37 — Why we’re sharing this case study and what “curiosity-driven” adoption looks like
    • 06:58 — Bank scope and context: footprint, size, and what makes this implementation notable
    • 10:29 — When AI shifted from “vaporware” to something teams could use right now
    • 15:23 — The banking reality: protecting customer data and operating in a regulated environment
    • 17:43 — Governance first: policies, model risk management, and third-party/vendor risk
    • 23:02 — The “Curiosity Canvas,” the “drudgery dump,” and targeting tedious work for automation
    • 25:14 — Building an AI Working Group across departments and flipping the pyramid
    • 33:51 — Making adoption repeatable: SharePoint collaboration, prompt sharing, Teams channel support
    • 36:24 — A concrete workflow win: extracting data from PDFs to generate letters automatically
    • 39:19 — Another win: scraping hundreds of statements for key data elements in a fraction of the time
    • 42:21 — System conversion regression testing: validating outputs at scale with better traceability
    • 44:35 — Security approach: approved tools, tenant controls, DLP settings, and “what not to use AI for”
    • 49:29 — A hard boundary: avoiding AI for anything that directly impacts financial reporting
    • 52:11 — The culture message: “efficiency, not reduction,” and why that unlocks curiosity
    • 53:02 — Advice for leaders: start small, build momentum, and appoint an internal champion
    • 56:51 — Quick personal use cases: everyday ways they use AI outside the office

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms


    Vistage Chair Richard Bell:

    https://app.vistage.com/sites/s/chairs/0038000000sllSFAAY/richard-bell


    For anyone interested in Jeff’s AI Workshops for their company:

    Reach out directly at jeff@jeffwilser.com

    59 min
  • Deep-Dive Into Agentic Workflows, w/ Cognizant’s Head of AI
    2026/02/12

    What happens when software stops just “chatting” and starts acting in the real world, across real workflows, with real consequences?

    In this episode of AI-Curious, the Head of AI at Cognizant goes deep on AI agents and agentic workflows: what they are, why enterprises are investing heavily, and what it actually takes to make agent systems reliable and safe at scale. We unpack what separates an AI agent from a traditional chatbot, why “agency” changes the stakes, and how multi-agent systems can be designed to reduce risk instead of amplifying it.

    We also explore concrete enterprise use cases, including agent hierarchies that coordinate across complex systems (like networks, utilities, and other operations), plus how “agentic process automation” builds on older automation models while adapting to unexpected edge cases. Finally, we zoom out to the future of work: which tasks get augmented first, why disruption is happening faster than most forecasts, and how trust in AI systems may shift over the next several years.

    Guest

    Babak Hodjat — Head of AI at Cognizant; leads AI lab work focused on scaling reliable, trustworthy agent systems; longtime AI builder with deep experience in applied natural language systems.

    Key topics we cover

    • 07:00 — What an AI agent is (and how it differs from a chatbot)
    • 13:03 — State of play: what’s working, what’s not, and why “agent systems must be engineered”
    • 17:00 — A practical multi-agent design pattern across telecom, power, and agriculture
    • 20:28 — Agentifying rigid processes (and handling unforeseen situations)
    • 24:14 — Who should deploy agents, and why single “do-everything” agents are risky
    • 26:34 — An open-source starting point for experimenting with multi-agent systems
    • 29:12 — Guardrails: reducing hallucinations, adding redundancy, and safety thresholds
    • 35:29 — Why we should use LLMs for reasoning, not knowledge retrieval
    • 38:15 — The future of work: tasks, jobs, and decision-making roles shifting upward
    • 41:59 — AGI, limitations, and why modular multi-agent systems may matter
    • 44:57 — A prediction: we’ll delegate more than we expect as systems become more trustworthy

    Follow AI-Curious on your favorite podcast platform:

    Apple Podcasts
    Spotify
    YouTube
    All Other Platforms



    47 min