Discussing Stupid: A byte-sized podcast on stupid UX

Author: High Monkey

About this content

Discussing Stupid returns to the airwaves to transform digital facepalms into teachable moments—all in the time it takes to enjoy your coffee break! Sponsored by High Monkey, this podcast dives into ‘stupid’ practices across websites and Microsoft collaboration tools, among other digital realms. Our "byte-sized" bi-weekly episodes are packed with expert insights and a healthy dose of humor. Discussions focus on five key areas: Business Process & Collaboration, UX/IA, Inclusive Design, Content & Search, and Performance & SEO. Join us and let’s start making the digital world a bit less stupid, one episode at a time. Visit our website at https://www.discussingstupid.com

© 2025 Discussing Stupid: A byte-sized podcast on stupid UX

Categories: Marketing, Marketing & Sales, Economics
Episodes
  • S3E6 - Intentional AI: You’re asking AI to solve the wrong problems for SEO/GEO/AEO
    2025/12/16

    In Episode 6 of the Intentional AI series, Cole, Virgil, and Seth move into the visibility stage of the content lifecycle and tackle a common mistake they see everywhere. Teams keep treating SEO, GEO, and AEO as optimization problems, when in reality they are content quality, structure, and clarity problems.

    Search engines and generative models have both gotten smarter. Keyword tricks, shortcuts, and “secret sauce” tactics no longer work the way they once did. Instead, visibility now depends on clear intent, strong structure, accessible language, and content that actually helps people. The group looks at how SEO history is repeating itself, why organizations keep chasing hacks, and how that mindset actively works against long-term discoverability.

    They also dig into how SEO, GEO, and AEO overlap, where they differ, and why writing exclusively for AI can backfire by alienating human readers. The conversation covers content modeling, headless-style structures, and why these approaches help machines understand relationships without sacrificing usability.
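
    To make the content-modeling idea concrete, here is a minimal Python sketch of what a structured content model can look like. The field names are illustrative only; the episode does not prescribe a specific model or CMS.

    from dataclasses import dataclass, field

    @dataclass
    class Author:
        name: str

    @dataclass
    class Article:
        # Structured fields make relationships explicit for machines and people alike,
        # instead of burying everything in a single blob of HTML.
        title: str
        summary: str
        body: str
        author: Author
        topics: list[str] = field(default_factory=list)
        related_articles: list["Article"] = field(default_factory=list)

    A headless-style setup serves records like this over an API, which is why the structure carries over to search engines and generative engines without changing how the content reads to humans.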

    A major focus of the episode is schema. The team explains why schema is becoming increasingly important for generative engines, why it is difficult and error-prone to manage at scale, and where AI can help draft complex schema structures without fully understanding context. This leads to a broader point. AI can accelerate specific tasks, but it cannot replace judgment, prioritization, or review.
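
    As a rough illustration of the schema discussion above, here is a minimal Python sketch that generates schema.org Article markup as JSON-LD from a page's own data rather than repeating one static block everywhere. The field names and sample values are placeholders, not taken from the episode.

    import json

    def build_article_schema(page: dict) -> str:
        """Build page-specific schema.org Article markup as JSON-LD."""
        schema = {
            "@context": "https://schema.org",
            "@type": "Article",
            "headline": page["title"],
            "description": page["summary"],
            "datePublished": page["published"],
            "author": {"@type": "Organization", "name": page["author"]},
        }
        # The JSON string can be dropped into a <script type="application/ld+json"> tag.
        return json.dumps(schema, indent=2)

    # Hypothetical page record; a CMS would normally supply these fields.
    print(build_article_schema({
        "title": "Intentional AI: SEO, GEO, and AEO",
        "summary": "Why visibility is a content quality problem, not a trick.",
        "published": "2025-12-16",
        "author": "High Monkey",
    }))

    Because the markup is generated from each page's actual fields, it stays specific to that page, which is exactly the property that repeating identical schema everywhere loses.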

    In the second half of the episode, they continue their ongoing experiment using the same AI-written accessibility article from earlier episodes. They test how three tools approach GEO-focused improvements. Each tool surfaces different insights, none is complete on its own, and all of them require human decision-making to be useful. The takeaway is consistent with the theme of the series: AI is powerful when you ask it to solve the right problems, and dangerous when you expect it to fix foundational issues for you.

    In this episode, they explore:

    • Why SEO, GEO, and AEO fail when treated as optimization tricks
    • How search has shifted from keywords to clarity, structure, and intent
    • Where SEO and GEO overlap and where they meaningfully diverge
    • The risk of writing for AI instead of for people
    • Why content modeling supports both search engines and generative engines
    • How AI can assist with schema creation and where humans must intervene
    • Why repeating the same schema everywhere weakens its value
    • A GEO-focused comparison of Writesonic, Grammarly, and Claude
    • Why broad prompts underperform and targeted prompts lead to better outcomes

    A downloadable Episode Companion Guide is available below. It includes tool notes, schema examples, prompt guidance, and practical takeaways for applying AI to search without losing clarity or control.

    DS-S3-E6-CompanionDoc.pdf


    Previously in the Intentional AI series:

    • Episode 1: Applying AI to the content lifecycle
    • Episode 2: Maximizing AI for research and analysis
    • Episode 3: Smarter content creation with AI
    • Episode 4: The role of AI in content management
    • Episode 5: How much can you trust AI for accessibility?


    Upcoming episodes in the Intentional AI series:

    • Jan 6, 2026 – Content Personalization
    • Jan 20, 2026 – Wireframing and Layout
    • Feb 3, 2026 – Design and Media
    • Feb 17, 2026 – Back End Development
    • Mar 3, 2026 – Conversational Search (with special guest)
    • Mar 17, 2026 – Chatbots and Agentic AI
    • Mar 31, 2026 – Series Finale and Tool Review


    Holiday break notice

    Discussing Stupid will be taking a short break.

    26 min
  • S3E5 - Intentional AI: How much can you trust AI for accessibility?
    2025/12/02

    In Episode 5 of the Intentional AI series, Cole, Virgil, and Seth shift into another part of the content lifecycle. This time, they focus on accessibility and how AI fits into that work.

    Accessibility is more than code checks. It is about making sure people can actually use and understand what you create. The team walks through what happened when they ran the High Monkey website through an AI accessibility review, where the tool gave helpful guidance, and where it completely misread the page.


    They also talk about the pieces of accessibility that AI handles surprisingly well, especially language, metaphors, and readability, and why these areas are often missed by standard scanners.


    In the second half of the episode, they continue the ongoing experiment from earlier episodes. Using the same AI-written article from before, they test how three tools handle rewriting it to the average adult (eighth-grade) reading level, then compare the results with a readability checker. The differences across models show why simple writing, clear prompts, and human review are still necessary.
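
    For readers who want to try a similar check, here is a small sketch using the open-source textstat package to approximate a U.S. grade level for each rewrite. The episode does not name the readability checker the team used, and the sample inputs below are invented placeholders.

    import textstat

    def grade_rewrites(rewrites: dict[str, str]) -> None:
        """Print an approximate U.S. grade level for each tool's rewrite."""
        for tool, text in rewrites.items():
            grade = textstat.flesch_kincaid_grade(text)
            verdict = "at or below 8th grade" if grade <= 8 else "above 8th grade"
            print(f"{tool}: grade {grade:.1f} ({verdict})")

    # Hypothetical snippets standing in for each tool's rewritten article.
    grade_rewrites({
        "Copilot": "Alt text tells people what an image shows.",
        "Perplexity": "Write short sentences. Explain each image in plain words.",
        "Grammarly": "Accessible writing helps every reader, not just some readers.",
    })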


    In this episode, they explore:

    • How AI evaluates accessibility on a real website
    • Where AI tools give useful insights and where they misinterpret content
    • Why conversational explanations can help non-technical teams
    • How to prompt AI to look for the issues you actually care about
    • The importance of plain language and readable writing in accessibility
    • A readability comparison using Copilot, Perplexity, and Grammarly
    • Why simple content supports both accessibility and AI performance


    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool notes, prompt examples, and practical advice for using AI in accessibility work.


    DS-S3-E5-CompanionDoc.pdf


    Upcoming episodes in the Intentional AI series:

    • Dec 16, 2025 - SEO / AEO / GEO
    • Jan 6, 2026 - Content Personalization
    • Jan 20, 2026 - Front End Development and Wireframing
    • Feb 3, 2026 - Design and Media
    • Feb 17, 2026 - Back End Development
    • Mar 3, 2026 - Conversational Search (with special guest)
    • Mar 17, 2026 - Chatbots and Agentic AI
    • Mar 31, 2026 - Series Finale and Tool Review


    Whether you work on websites, content workflows, or internal digital tools, this conversation is about using AI with care. The goal is to work smarter, keep content readable, and avoid handing all of your judgment over to automation.


    New episodes every other Tuesday.


    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.


    Chapters

    (0:00) - Intro

    (0:46) - Today’s focus: Accessibility with AI

    (1:20) - We let AI audit HighMonkey.com

    (4:00) - Finding the human value in AI feedback

    (6:25) - The power of strategic prompting

    (12:33) - We tested 3 AI tools for accessibility

    (14:49) - AI Tool findings

    (18:17) - Keep all your readers in mind

    (20:50) - Next episode preview


    Subscribe for email updates on our website:

    https://www.discussingstupid.com/

    Watch us on YouTube:

    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:

    23 min
  • S3E4 - Intentional AI: The role of AI in content management
    2025/11/11

    In Episode 4 of the Intentional AI series, Cole and Virgil move further into the content lifecycle, and this time they focus on content management.

    Once your content’s written, the real work begins: editing, organizing, translating, tagging, and all the other behind-the-scenes steps that keep content consistent and usable. In this episode, the team looks at how AI can help streamline those tasks and where it still creates new challenges.

    Joined by returning guest Chad, they break down where AI fits, where it fails, and what happens when you trust it to translate complex content on its own.

    In this episode, they explore:

    • How AI supports the content management stage of the lifecycle
    • Common use cases like translation, auto-summary fields, and accessibility checks
    • Where automation makes sense and where it doesn’t
    • The biggest risks of AI content management, from oversimplification to data privacy
    • Why good input (clear, readable content) still determines good output
    • How readable, accessible writing improves both human and AI understanding

    This episode also continues the real-world experiment from previous episodes.

    Using the accessibility article originally created with Writesonic, the team tests how well three AI tools (Google Translate, DeepL, and ChatGPT) handle translating the piece into Spanish. The results reveal major differences in accuracy, tone, and overall usability across each model.
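
    The comparison itself can be set up as a simple harness like the sketch below. The translator functions are placeholders standing in for whatever interface each tool exposes; the episode does not go into API details, so none of those calls are shown here.

    from typing import Callable

    def compare_translations(source_text: str,
                             translators: dict[str, Callable[[str], str]]) -> None:
        """Run the same source text through each tool so a human reviewer can
        judge accuracy and tone side by side."""
        for name, translate in translators.items():
            print(f"--- {name} ---")
            print(translate(source_text))

    # Placeholder implementations; in practice each lambda would call the real tool.
    compare_translations(
        "Accessible content helps every reader.",
        {
            "Google Translate": lambda text: "[Google Translate output]",
            "DeepL": lambda text: "[DeepL output]",
            "ChatGPT": lambda text: "[ChatGPT output]",
        },
    )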

    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool comparisons, and practical advice for using AI in the content management stage.

    DS-S3-E4-CompanionDoc.pdf

    🦃 Note: We’re taking a short Thanksgiving break; the next episode will drop on December 2, 2025.

    Upcoming episodes in the Intentional AI series:

    • Dec 2, 2025 — Accessibility
    • Dec 16, 2025 — SEO / AEO / GEO
    • Jan 6, 2026 — Content Personalization
    • Jan 20, 2026 — Front End Development & Wireframing
    • Feb 3, 2026 — Design & Media
    • Feb 17, 2026 — Back End Development
    • Mar 3, 2026 — Conversational Search (with special guest!)
    • Mar 17, 2026 — Chatbots & Agentic AI
    • Mar 31, 2026 — Series Finale & Tool Review

    Whether you’re managing websites, content workflows, or entire digital ecosystems, this conversation is about using AI intentionally, to work smarter without losing the human judgment that keeps content trustworthy.

    New episodes every other Tuesday.

    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters

    (0:00) - Intro

    (0:50) - Today's focus: Content management with AI

    (1:58) - Content management opportunities with AI

    (6:18) - Recurring series theme: Trust

    (8:34) - Refine your process one step at a time

    (9:53) - Better content = better everything

    (10:22) - We tested 3 AI translation tools

    (12:02) - Cole's "elephant in the room" test

    (14:28) - Poor content = poor translations

    (16:58) - True translation happens between people

    (18:45) - Closing takeaways

    Subscribe for email updates on our website:

    https://www.discussingstupid.com/

    Watch us on YouTube:

    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts,...

    21 min
No reviews yet