• S3E9 - Intentional AI: Just because AI can create images doesn't mean you should use them
    2026/02/10

    In Episode 9 of the Intentional AI series, Cole and Virgil take on one of the most common and misunderstood uses of AI today: image and graphic generation. From social media visuals to promotional graphics, AI images are fast, easy, and everywhere.

    The conversation focuses on why images became the public's on-ramp to AI and why that familiarity creates risk. Visuals feel harmless, but the moment AI starts generating finished-looking images, teams inherit decisions around ownership, ethics, and trust that they are often unprepared to make.

    A central theme of the episode is responsibility escalation. As AI reduces the effort required to create images, the importance of human judgment increases. Treating AI-generated visuals as final work can quickly introduce legal, ethical, and reputational problems.


    Virgil shares a practical experiment where he used a simple prompt to generate three social media promotional graphics from an existing article and tested the results across three tools: Canva, Claude, and Artlist.


    Canva produced the most generic and repetitive designs. Claude delivered cleaner structure and stronger messaging but struggled with fonts, formats, and variation. Artlist created the most visually interesting outputs, though it introduced workflow limitations and cost concerns.


    The episode reinforces a consistent conclusion across the series. AI can help jumpstart visual work, but it cannot replace judgment, intent, or responsibility.


    In this episode, they explore:

    1. Why AI images are so tempting to use
    2. Where AI generated graphics actually help
    3. Why most AI visuals fall flat
    4. Ethical and ownership risks teams overlook
    5. A comparison of Canva, Claude, and Artlist


    Previously in the Intentional AI series:

    1. Episode 1: Intentional AI and the Content Lifecycle
    2. Episode 2: Maximizing AI for Research and Analysis
    3. Episode 3: Smarter Content Creation with AI
    4. Episode 4: The role of AI in content management
    5. Episode 5: How much can you trust AI for accessibility?
    6. Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
    7. Episode 7: Why AI can make your content personalization worse
    8. Episode 8: The real value of AI wireframes is NOT the wireframes


    New episodes every other Tuesday.


    For more conversations about AI, design, and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.


    (0:00) - Intro

    (1:40) - You can’t escape AI imagery

    (3:18) - Why AI images are risky

    (4:40) - The legal and ethical line

    (6:15) - Creativity vs time and cost

    (9:28) - Every tool has hopped on the AI bandwagon

    (13:20) - The slippery slope of AI visuals

    (15:35) - We tested 3 tools for AI visuals

    (17:30) - Testing Canva

    (20:40) - Testing Claude...

    29 min
  • S3E8 - Intentional AI: The real value of AI wireframes is NOT the wireframes
    2026/01/28

    In Episode 8 of the Intentional AI series, Cole, Virgil, and Chad explore one of the most tempting uses of AI in digital work: wireframing and page layout. With AI now able to generate full wireframes in minutes or even seconds, the promise of speed is undeniable. But speed alone is not the point.

    The conversation focuses on where AI genuinely helps in the wireframing process and where it introduces new risks. Wireframes are meant to establish structure, hierarchy, and intent, not just visual output. While AI can quickly generate layouts, components, and patterns, it still requires strong human judgment to evaluate what is correct, what is missing, and what could cause problems downstream.


    A key theme of the episode is escalation of responsibility. As AI reduces the time required to create wireframes, the importance of human review, direction, and decision-making increases. Treating AI-generated wireframes as finished work can introduce serious risks, especially around accessibility, content fidelity, maintainability, and overall project direction.


    Virgil shares an experiment where he used AI to first generate a detailed prompt for wireframing, then tested that prompt across three tools: Claude, Google Gemini 3, and Figma Make. The results reveal clear differences in layout quality, accessibility handling, content retention, and how easily the outputs could be integrated into real workflows.

    Claude produced the strongest layout and structural patterns but failed badly on accessibility and removed large portions of content. Gemini generated simpler wireframes with clearer structure, but used even less content and still struggled with accessibility. Figma Make stood out for workflow integration, retaining all content and allowing direct editing inside Figma, though it also failed accessibility requirements and relied heavily on generic styling and placeholder imagery.


    Throughout the episode, the group returns to the same conclusion. AI is extremely effective at getting the first portion of wireframing done quickly. It is far less effective at making judgment calls, enforcing standards, or understanding context without guidance.


    In this episode, they explore:

    1. How wireframing fits into the content lifecycle
    2. Why speed changes the risk profile of design work
    3. Using AI to generate prompts instead of starting from scratch
    4. Where AI wireframes succeed and where they fail
    5. Accessibility and content risks in AI-generated layouts
    6. A wireframing comparison of Claude, Gemini 3, and Figma Make


    A downloadable Episode Companion Guide is available below with tool comparisons and key takeaways.

    DS-S3-E8-CompanionDoc.pdf


    Previously in the Intentional AI series:

    1. Episode 1: Intentional AI and the Content Lifecycle
    2. Episode 2: Maximizing AI for Research & Analysis
    3. Episode 3: Smarter Content Creation with AI
    4. Episode 4: The role of AI in content management
    5. Episode 5: How much can you trust AI for accessibility?
    6. Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
    7. Episode 7: Why AI can make your content personalization worse
    29 min
  • S3E7 - Intentional AI: Why AI can make your content personalization worse
    2026/01/13

    In Episode 7 of the Intentional AI series, Cole and Virgil focus on content personalization and why it is one of the most overpromised areas of AI. While personalization is often positioned as simple and automated, doing it well requires far more clarity and intent than most tools suggest.

    They break personalization into two main approaches. Role-based personalization tailors messages for specific audiences or job functions, while behavioral personalization adapts experiences based on how people interact with content over time. The conversation also touches on predictive analysis and where AI may eventually help interpret patterns across analytics data.
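
    As a rough sketch of that distinction (an illustration of ours, not code from the episode), role-based rules key off who a visitor is, while behavioral rules key off what that visitor has done. In Python, the difference can be as small as this:

    # Hypothetical example: the roles, messages, and threshold are placeholders.
    ROLE_MESSAGES = {
        "marketing": "See how accessible content lifts engagement.",
        "it": "See how accessibility audits fit into your release pipeline.",
    }

    def pick_message(role: str, pages_viewed: int) -> str:
        # Role-based: tailor the message to the declared audience segment.
        message = ROLE_MESSAGES.get(role, "Learn why accessibility matters.")
        # Behavioral: adapt further once interaction history crosses a threshold.
        if pages_viewed > 3:
            message += " Ready for a deeper dive? Book a walkthrough."
        return message

    print(pick_message("marketing", pages_viewed=5))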


    A central theme of the episode is trust. Using AI for personalization assumes the system understands audience priorities and pain points. Without clear direction, AI fills in the gaps with assumptions. Cole and Virgil explain why personalization has always been difficult to implement, why adoption remains low, and why AI does not remove the need for strategy, measurement, or human judgment.


    The episode also addresses the risks of personalization. Messages that are too generic get ignored, while messages that feel overly personal can cross into uncomfortable territory. Finding the right balance is still a human responsibility.


    In the second half of the episode, they continue their ongoing experiment using the same AI-written accessibility article from earlier episodes. This time, they test three tools by asking them to generate role-based promotional emails for a head of web marketing, a director of information technology, and a C-level executive. The results highlight meaningful differences in tone, structure, and assumptions across tools.


    The takeaway is consistent with the Intentional AI series. AI can support personalization, but only when you define goals, outcomes, and boundaries first.


    In this episode, they explore:

    1. What content personalization actually means
    2. Role-based versus behavioral personalization
    3. Why personalization adoption remains low
    4. The balance between relevance and creepiness
    5. How AI supports personalization without replacing strategy
    6. A role-based email comparison of Perplexity, Copilot, and Claude


    A downloadable Episode Companion Guide is available below with tool comparisons and practical takeaways.

    DS-S3-E7-CompanionDoc.pdf


    Previously in the Intentional AI series:

    1. Episode 1: Intentional AI and the Content Lifecycle
    2. Episode 2: Maximizing AI for Research and Analysis
    3. Episode 3: Smarter Content Creation with AI
    4. Episode 4: The role of AI in content management
    5. Episode 5: How much can you trust AI for accessibility?
    6. Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO


    New episodes every other Tuesday.


    For more conversations about AI and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    26 min
  • S3E6 - Intentional AI: You’re asking AI to solve the wrong problems for SEO/GEO/AEO
    2025/12/16

    In Episode 6 of the Intentional AI series, Cole, Virgil, and Seth move into the visibility stage of the content lifecycle and tackle a common mistake they see everywhere. Teams keep treating SEO, GEO, and AEO as optimization problems, when in reality they are content quality, structure, and clarity problems.

    Search engines and generative models have both gotten smarter. Keyword tricks, shortcuts, and “secret sauce” tactics no longer work the way they once did. Instead, visibility now depends on clear intent, strong structure, accessible language, and content that actually helps people. The group looks at how SEO history is repeating itself, why organizations keep chasing hacks, and how that mindset actively works against long-term discoverability.

    They also dig into how SEO, GEO, and AEO overlap, where they differ, and why writing exclusively for AI can backfire by alienating human readers. The conversation covers content modeling, headless-style structures, and why these approaches help machines understand relationships without sacrificing usability.

    A major focus of the episode is schema. The team explains why schema is becoming increasingly important for generative engines, why it is difficult and error-prone to manage at scale, and where AI can help draft complex schema structures without fully understanding context. This leads to a broader point: AI can accelerate specific tasks, but it cannot replace judgment, prioritization, or review.
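
    To make "schema" concrete, here is a minimal sketch of the markup in question: a schema.org Article object in JSON-LD, the format a page embeds in a <script type="application/ld+json"> tag. Every value below is an illustrative placeholder of ours, not an example from the episode:

    # Build and print a hypothetical JSON-LD Article block.
    import json

    article_schema = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "How much can you trust AI for accessibility?",
        "author": {"@type": "Organization", "name": "High Monkey"},
        "datePublished": "2025-12-02",
        "description": "Where AI accessibility reviews help and where they mislead.",
    }

    print(json.dumps(article_schema, indent=2))

    Drafting blocks like this across hundreds of pages is exactly the tedious, error-prone work AI can speed up, and exactly where a wrong value slips through without human review.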

    In the second half of the episode, they continue their ongoing experiment using the same AI-written accessibility article from earlier episodes. They test how three tools approach GEO-focused improvements. Each tool surfaces different insights, none of them are complete on their own, and all of them require human decision-making to be useful. The takeaway is consistent with the theme of the series. AI is powerful when you ask it to solve the right problems, and dangerous when you expect it to fix foundational issues for you.

    In this episode, they explore:

    • Why SEO, GEO, and AEO fail when treated as optimization tricks
    • How search has shifted from keywords to clarity, structure, and intent
    • Where SEO and GEO overlap and where they meaningfully diverge
    • The risk of writing for AI instead of for people
    • Why content modeling supports both search engines and generative engines
    • How AI can assist with schema creation and where humans must intervene
    • Why repeating the same schema everywhere weakens its value
    • A GEO-focused comparison of Writesonic, Grammarly, and Claude
    • Why broad prompts underperform and targeted prompts lead to better outcomes

    A downloadable Episode Companion Guide is available below. It includes tool notes, schema examples, prompt guidance, and practical takeaways for applying AI to search without losing clarity or control.

    DS-S3-E6-CompanionDoc.pdf


    Previously in the Intentional AI series:

    • Episode 1: Intentional AI and the Content Lifecycle
    • Episode 2: Maximizing AI for research and analysis
    • Episode 3: Smarter content creation with AI
    • Episode 4: The role of AI in content management
    • Episode 5: How much can you trust AI for accessibility?


    Upcoming episodes in the Intentional AI series:

    • Jan 6, 2026 – Content Personalization
    • Jan 20, 2026 – Wireframing and Layout
    • Feb 3, 2026 – Design and Media
    • Feb 17, 2026 – Back End Development
    • Mar 3, 2026 – Conversational Search (with special guest)
    • Mar 17, 2026 – Chatbots and Agentic AI
    • Mar 31, 2026 – Series Finale and Tool Review


    Holiday break notice

    Discussing Stupid will be taking a short break for the holidays. Per the schedule above, the next episode will drop on January 6, 2026.

    26 min
  • S3E5 - Intentional AI: How much can you trust AI for accessibility?
    2025/12/02

    In Episode 5 of the Intentional AI series, Cole, Virgil, and Seth shift into another part of the content lifecycle. This time, they focus on accessibility and how AI fits into that work.

    Accessibility is more than code checks. It is making sure people can actually use and understand what you create. The team walks through what happened when they ran the High Monkey website through an AI accessibility review, where the tool gave helpful guidance, and where it completely misread the page.


    They also talk about the pieces of accessibility that AI handles surprisingly well, especially language, metaphors, and readability, and why these areas are often missed by standard scanners.


    In the second half of the episode, they continue the ongoing experiment from earlier episodes. Using the same AI written article from before, they test how three tools handle rewriting it to an eighth-grade reading level, a common plain-language target for adult audiences, then compare the results with a readability checker. The differences across models show why simple writing, clear prompts, and human review are still necessary.
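
    For anyone who wants to try a similar check, readability scores take only a few lines to compute. The sketch below assumes the third-party Python package textstat, one of many graders and not necessarily the checker used in the episode:

    import textstat

    # Placeholder sentence standing in for a rewritten draft.
    draft = "Alt text describes an image for people who cannot see it."

    # Flesch-Kincaid maps text to a U.S. school grade; about 8.0 or lower
    # matches the eighth-grade target discussed in the episode.
    print(textstat.flesch_kincaid_grade(draft))
    print(textstat.flesch_reading_ease(draft))  # 0-100 scale; higher reads easier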


    In this episode, they explore:

    • How AI evaluates accessibility on a real website
    • Where AI tools give useful insights and where they misinterpret content
    • Why conversational explanations can help non-technical teams
    • How to prompt AI to look for the issues you actually care about
    • The importance of plain language and readable writing in accessibility
    • A readability comparison using Copilot, Perplexity, and Grammarly
    • Why simple content supports both accessibility and AI performance


    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool notes, prompt examples, and practical advice for using AI in accessibility work.


    DS-S3-E5-CompanionDoc.pdf


    Upcoming episodes in the Intentional AI series:

    • Dec 16, 2025 - SEO / AEO / GEO
    • Jan 6, 2026 - Content Personalization
    • Jan 20, 2026 - Front End Development and Wireframing
    • Feb 3, 2026 - Design and Media
    • Feb 17, 2026 - Back End Development
    • Mar 3, 2026 - Conversational Search (with special guest)
    • Mar 17, 2026 - Chatbots and Agentic AI
    • Mar 31, 2026 - Series Finale and Tool Review


    Whether you work on websites, content workflows, or internal digital tools, this conversation is about using AI with care. The goal is to work smarter, keep content readable, and avoid handing all of your judgment over to automation.


    New episodes every other Tuesday.


    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.


    Chapters

    (0:00) - Intro

    (0:46) - Today’s focus: Accessibility with AI

    (1:20) - We let AI audit HighMonkey.com

    (4:00) - Finding the human value in AI feedback

    (6:25) - The power of strategic prompting

    (12:33) - We tested 3 AI tools for accessibility

    (14:49) - AI Tool findings

    (18:17) - Keep all your readers in mind

    (20:50) - Next episode preview


    Subscribe for email updates on our website:

    https://www.discussingstupid.com/

    Watch us on YouTube:

    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:

    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024

    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    23 min
  • S3E4 - Intentional AI: The role of AI in content management
    2025/11/11

    In Episode 4 of the Intentional AI series, Cole and Virgil move further into the content lifecycle, this time focusing on content management.

    Once your content’s written, the real work begins: editing, organizing, translating, and tagging, all the behind-the-scenes steps that keep content consistent and usable. In this episode, the team looks at how AI can help streamline those tasks and where it still creates new challenges.

    Joined by returning guest Chad, they break down where AI fits, where it fails, and what happens when you trust it to translate complex content on its own.

    In this episode, they explore:

    • How AI supports the content management stage of the lifecycle
    • Common use cases like translation, auto-summary fields, and accessibility checks
    • Where automation makes sense and where it doesn’t
    • The biggest risks of AI content management, from oversimplification to data privacy
    • Why good input (clear, readable content) still determines good output
    • How readable, accessible writing improves both human and AI understanding

    This episode also continues the real-world experiment from previous episodes.

    Using the accessibility article originally created with Writesonic, the team tests how well three AI tools (Google Translate, DeepL, and ChatGPT) handle translating the piece into Spanish. The results reveal major differences in accuracy, tone, and overall usability across each model.
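
    To script one leg of a comparison like this, DeepL's official Python client is enough. The sketch below is ours, with a placeholder sentence and an API key read from a hypothetical DEEPL_AUTH_KEY environment variable:

    import os
    import deepl

    # Authenticate with a DeepL API key (assumed to be set in the environment).
    translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])

    result = translator.translate_text(
        "Accessible content helps everyone, not only people with disabilities.",
        source_lang="EN",
        target_lang="ES",
    )
    print(result.text)  # the Spanish translation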

    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool comparisons, and practical advice for using AI in the content management stage.

    DS-S3-E4-CompanionDoc.pdf

    🦃 Note: We’re taking a short Thanksgiving break; the next episode will drop on December 2, 2025.

    Upcoming episodes in the Intentional AI series:

    • Dec 2, 2025 — Accessibility
    • Dec 16, 2025 — SEO / AEO / GEO
    • Jan 6, 2026 — Content Personalization
    • Jan 20, 2026 — Front End Development & Wireframing
    • Feb 3, 2026 — Design & Media
    • Feb 17, 2026 — Back End Development
    • Mar 3, 2026 — Conversational Search (with special guest!)
    • Mar 17, 2026 — Chatbots & Agentic AI
    • Mar 31, 2026 — Series Finale & Tool Review

    Whether you’re managing websites, content workflows, or entire digital ecosystems, this conversation is about using AI intentionally, to work smarter without losing the human judgment that keeps content trustworthy.

    New episodes every other Tuesday.

    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters

    (0:00) - Intro

    (0:50) - Today's focus: Content management with AI

    (1:58) - Content management opportunities with AI

    (6:18) - Recurring series theme: Trust

    (8:34) - Refine your process one step at a time

    (9:53) - Better content = better everything

    (10:22) - We tested 3 AI translation tools

    (12:02) - Cole's "elephant in the room" test

    (14:28) - Poor content = poor translations

    (16:58) - True translation happens between people

    (18:45) - Closing takeaways

    Subscribe for email updates on our website:

    https://www.discussingstupid.com/

    Watch us on YouTube:

    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:

    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024

    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    21 min
  • S3E3 - Intentional AI: Smarter content creation with AI
    2025/10/28

    In Episode 3 of the Intentional AI series, Cole and Virgil move into the next stage of the content lifecycle: content creation.

    AI can write faster than ever, but that doesn’t mean it writes well. From prompting and editing to maintaining voice and originality, AI-generated content still requires human effort and judgment. In this episode, the team explores where AI can help streamline production and where it can’t replace the creative process.

    In this episode, they explore:

    • How AI fits into the content creation stage of the lifecycle
    • Why AI-generated content often takes just as much time as writing from scratch
    • The key risks of AI content creation, including accuracy, effort, and authenticity
    • How to maintain your voice, tone, and originality when using AI tools
    • Why humans are still responsible for quality control and credibility
    • What happens when you test the same research prompt across three writing tools


    This episode also continues the real-world experiment from Episode 2. Using the research compiled with Perplexity, the team tests how three content-generation tools—Jenni AI, Perplexity Pro, and Writesonic—handle the same writing task. The results reveal just how differently each model performs when asked to create original, publishable content.


    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool comparisons, and practical advice for using AI in the content creation stage.


    DS-S3-E3-CompanionDoc.pdf


    Upcoming episodes in the Intentional AI series:

    • Nov 11, 2025 — Content Management
    • Dec 2, 2025 — Accessibility
    • Dec 16, 2025 — SEO / AEO / GEO
    • Jan 6, 2026 — Content Personalization
    • Jan 20, 2026 — Front End Development & Wireframing
    • Feb 3, 2026 — Design & Media
    • Feb 17, 2026 — Back End Development
    • Mar 3, 2026 — Conversational Search (with special guest!)
    • Mar 17, 2026 — Chatbots & Agentic AI
    • Mar 31, 2026 — Series Finale & Tool Review

    Whether you’re a marketer, strategist, or developer, this conversation is about creating content intentionally and keeping your human voice at the center of it all.


    New episodes every other Tuesday.


    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.


    Chapters

    (0:00) - Intro

    (0:30) - Smarter content creation with AI

    (1:00) - Effort doesn't go away

    (3:20) - Tool / LLM differences

    (5:34) - Audience fit & voice

    (7:44) - We tested 3 tools for AI content creation

    (10:08) - Testing Jenni AI

    (13:23) - Testing Perplexity

    (14:55) - Testing Writesonic

    (16:55) - Key Takeaways


    Subscribe for email updates on our website:

    https://www.discussingstupid.com/

    Watch us on YouTube:

    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:

    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024

    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0

    20 min
  • S3E2 - Intentional AI: Maximizing AI for research & analysis
    2025/10/14

    In Episode 2 of the Intentional AI series, Cole and Virgil dive into the first real stage of the content lifecycle: research and analysis.

    From brainstorming ideas to verifying data sources, AI is being used everywhere in the early stages of content creation. But how much of that information can you actually trust? In this episode, the team unpacks where AI helps, where it hurts, and why you still need to be the researcher of the research.

    In this episode, they explore:

    • How AI fits into the research and analysis stage of the content lifecycle
    • The major risks of using AI for research, including accuracy, bias, and misinformation
    • Why trust, verification, and validation are now part of your job
    • Security and legal concerns around AI scraping and data usage
    • How different tools handle citations, transparency, and usability
    • Why you can’t skip the human role in confirming, editing, and contextualizing AI outputs

    This episode also features the first step in a real experiment: researching a blog topic on digital accessibility using the tools Perplexity, ChatGPT, and Copilot. The results of that research will directly fuel the next episode on content creation.

    A downloadable Episode Companion Guide is available below. It includes key episode takeaways, tool comparisons, and practical guidance on how to use AI responsibly during the research stage.


    DS-S3-E2-CompanionDoc.pdf


    Upcoming episodes in the Intentional AI series:

    • Oct 28, 2025 — Content Creation
    • Nov 11, 2025 — Content Management
    • Dec 2, 2025 — Accessibility
    • Dec 16, 2025 — SEO / AEO / GEO
    • Jan 6, 2026 — Content Personalization
    • Jan 20, 2026 — Front End Development & Wireframing
    • Feb 3, 2026 — Design & Media
    • Feb 17, 2026 — Back End Development
    • Mar 3, 2026 — Conversational Search (with special guest!)
    • Mar 17, 2026 — Chatbots & Agentic AI
    • Mar 31, 2026 — Series Finale & Tool Review


    Whether you’re a marketer, strategist, or developer, this conversation is about making AI adoption intentional and keeping your critical thinking sharp.


    New episodes every other Tuesday.


    For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.


    (0:00) - Intro

    (1:44) - Better research with AI

    (3:46) - Risk: Trust & reliability

    (5:29) - Risk: Security/legal concerns

    (7:04) - Risk: Hallucinations

    (9:17) - We tested 3 tools for AI research

    (11:03) - Testing Perplexity

    (14:38) - Testing ChatGPT

    (17:45) - Testing Copilot

    (19:54) - Comparing the tools and key takeaways

    (20:52) - Outro


    Subscribe for email updates on our website:

    https://www.discussingstupid.com/

    Watch us on YouTube:

    https://www.youtube.com/@discussingstupid

    Listen on Apple Podcasts, Spotify, or Soundcloud:

    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024

    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0

    22 min