Episodes

  • The Work Research Enables | Dave Hora (Consultant)
    2025/06/26
    Listen now on Apple, Spotify, and YouTube.

    Dave Hora is perpetual Employee of the Month at Dave’s Research Company: as a consultant, he helps leaders run strategic product initiatives and teams build well-informed product processes.

    In 2011, Dave was the first researcher at a mobile startup in San Francisco, then went on to work as the first research hire at five more companies, including ResearchGate, PlanGrid, and Instacart. In 2020, he went independent and founded Dave’s Research Company in Porto, Portugal.

    Dave co-led and co-designed the Research Skills Framework during his time as a Research Ops Community board member. He runs a small mailing list about how we make good software, and each year he takes a short sabbatical for the winemaking or sake brewing season.

    In our conversation, we discuss:

    * Why researchers must understand the broader workflows and strategic goals their work feeds into.

    * How to “journey map” your research projects to identify patterns in decisions and outcomes.

    * The tension between pet projects and strategic alignment, especially in ambiguous organizations.

    * The importance of gaining visibility into upstream and downstream processes beyond the research itself.

    * How researchers can navigate vague strategies like “10x growth” without losing their grounding.

    Some takeaways:

    * Research is only valuable in context. Dave reminds us that insights have little power unless they directly support the work a team is trying to do. Strategic research isn’t about delivering answers in isolation; it’s about enabling action and influencing the sequence of product decisions.

    * Journey map your projects, not just your users. To grow as a researcher, reflect on your past projects and map the decisions, artifacts, and impacts they produced. Over time, you’ll start to see recurring patterns: what kinds of questions emerge at different phases, and how research is (or isn’t) used.

    * Visibility is your first step to influence. If you’re stuck in a validation loop, start by asking what happens next. Join meetings outside your immediate research bubble. Observe how decisions are made, how documents evolve, and where your insights go. Influence begins with curiosity and presence.

    * Without strategy, pet projects thrive. When companies lack a clear “what we are and aren’t doing,” well-intentioned ideas, often from leadership, can steamroll roadmaps. Researchers won’t always win these battles, but they can help clarify risks, expose assumptions, and steer ideas through a more thoughtful path to validation.

    * Your role isn’t to fix the org, but to participate wisely. You don’t need to solve your company’s strategic alignment or broken processes. But you can bring awareness to trade-offs, highlight what’s at stake, and help others reflect. Influence is surfacing the right questions at the right time.

    Where to find Dave:

    * Website

    * LinkedIn

    Stop piecing it together. Start leading the work.

    The Everything UXR Bundle is for researchers who are tired of duct-taping free templates and second-guessing what good looks like.

    You get my complete set of toolkits, templates, and strategy guides, used by teams across Google, Spotify, and others, to run credible research, influence decisions, and actually grow in your role. It’s built to save you time, raise your game, and make you the person people turn to.

    → Save 140+ hours a year with ready-to-use templates and frameworks

    → Boost productivity by 40% with tools that cut admin and sharpen your focus

    → Increase research adoption by 50% through clearer, faster, more strategic delivery

    https://userresearchstrategist.squarespace.com/everything-uxr-bundle

    Interested in sponsoring the podcast?

    Interested in sponsoring or advertising on this podcast? I’m always looking to partner with brands and businesses that align with my audience. Book a call or email me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors.

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.userresearchstrategist.com/subscribe
    27 min
  • Reporting Without Control | Steve Jenks (MeasuringU)
    2025/06/12
    Listen now on Apple, Spotify, and YouTube.

    Steve is a UX researcher at MeasuringU, a full-service research agency in Denver, Colorado, in the United States, and a research faculty member at the University of Denver. He has a Ph.D. in higher education with a focus on research methods and statistics, and transitioned into UX research after working in education policy and technology for over a decade. He loves the challenge of helping organizations make data-informed decisions to improve their products and services, and hopes to one day specialize in training and mentoring newer researchers in the field. Outside of work, he loves puzzles, IKEA, miniatures, and volunteering, and is currently his local Disney Lorcana champion.

    In our conversation, we discuss:

    * What it’s like to run user research for products you don’t work on directly and can’t influence day-to-day.

    * The core differences between in-house and agency UX research, and how to adapt your mindset.

    * How to tactfully redirect clients when they ask for the wrong method or too much scope.

    * Tips for managing clients and stakeholders you may never meet until the final presentation.

    * Why agency work can sharpen your skills in stakeholder engagement, methodology flexibility, and research storytelling.

    Some takeaways:

    * Influence doesn’t require ownership. Even when you’re not embedded in a product team, you can shape critical decisions by understanding the business need, offering the right methodology, and asking questions internal teams may overlook. Steve shows how an external researcher can become a trusted advisor by bringing fresh eyes and rigorous thinking.

    * Great research starts with pre-work, even under pressure. Agency research often comes with tight timelines, but skipping discovery is a mistake. Steve emphasizes getting a crash course in the product, team dynamics, and prior context before diving in. Understanding what success looks like and who the findings are for helps shape more actionable research.

    * Pushback is part of the job, and it’s a good thing. Clients may ask for the wrong method or an excessive scope. Steve walks through how his team uses clear business reasoning, previous case studies, and budget realities to shift direction without creating friction. Being honest builds long-term trust and often leads to repeat work.

    * The final report isn’t just a deck; it’s a performance. You may only meet key stakeholders at the end, so make it count. Tailor insights to the audience (quant vs. qual, detail vs. big picture), practice storytelling, and have clear next steps ready. When done well, these sessions often spark follow-up projects or deeper buy-in from leadership.

    * Agency work builds layered skills fast. Steve loves the diversity of agency life: switching between domains, juggling multiple clients, and mentoring less experienced teams. It sharpens your ability to pivot between strategic and tactical work, advocate for better research, and influence teams from the outside, even with limited face time.

    Where to find Steve:

    * Website

    * LinkedIn

    Stop piecing it together. Start leading the work.

    The Everything UXR Bundle is for researchers who are tired of duct-taping free templates and second-guessing what good looks like.

    You get my complete set of toolkits, templates, and strategy guides, used by teams across Google, Spotify, and others, to run credible research, influence decisions, and actually grow in your role. It’s built to save you time, raise your game, and make you the person people turn to.

    → Save 140+ hours a year with ready-to-use templates and frameworks

    → Boost productivity by 40% with tools that cut admin and sharpen your focus

    → Increase research adoption by 50% through clearer, faster, more strategic delivery

    Interested in sponsoring the podcast?

    Interested in sponsoring or advertising on this podcast? I’m always looking to partner with brands and businesses that align with my audience. Book a call or email me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors.

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.userresearchstrategist.com/subscribe
    34 min
  • Reframing Democratization | Ned Dwyer (Great Question)
    2025/05/29
    Listen now on Apple, Spotify, and YouTube.

    Ned Dwyer is the Co-Founder and CEO of Great Question, the all-in-one UX research platform designed to democratize research at scale. After two successful exits as a founder, Ned launched his biggest idea to date: helping enterprise teams better understand their users. Ned has led Great Question in empowering UX researchers, designers, and product teams to collaborate seamlessly and uncover the insights needed to build something great.

    With over a decade of experience at the intersection of product, design, and research, Ned has driven innovation and scaled businesses that solve complex challenges for enterprises. Outside of his professional pursuits, Ned loves spending time in sunny Oakland, California with his wife, two kids, and three cats.

    In our conversation, we discuss:

    * What democratization really means and why it’s not just about “everyone doing research.”

    * The shift in sentiment and adoption, from early-stage startups to 16,000-person enterprises.

    * How researchers can avoid being sidelined by becoming facilitators, not gatekeepers.

    * The role of tools, policies, and AI in scaling high-quality research safely across teams.

    * Strategies for building the business case for tools and training, especially in resource-limited orgs.

    Some takeaways:

    * Democratization is already happening, whether you’re involved or not. Ned emphasizes that research is already being done across organizations by non-researchers, just not always well. The opportunity for researchers is to step into a facilitator role: setting standards, defining guardrails, and ensuring quality without hoarding control.

    * Big orgs are leading the way, not just scrappy startups. Contrary to early assumptions, the most aggressive adopters of democratization aren’t just startups; they’re enterprises with thousands of employees. The difference? These organizations invest in scalable infrastructure, permissions, and training to empower safe, responsible research at scale.

    * Guardrails matter more than gatekeeping. With the right systems, democratization doesn’t have to mean chaos. Great Question includes features like eligibility criteria, access controls, incentive limits, study approval flows, and AI-powered report validation. These guardrails enable research at scale without compromising integrity or participant experience.

    * Make your case by speaking leadership’s language. To advocate for democratization tools or training, tie your request to business goals: reduced legal risk, better participant experience, efficiency gains, and fewer headcount needs. Use the “researcher effort score” to quantify pain points and show progress over time.

    * Want more influence? Get close to the money. Strategic researchers don’t wait for requests; they go to sales, marketing, and product to understand pain points and proactively solve them. Running win/loss research or unblocking customer access helps build trust, grow research demand, and elevate your role beyond usability testing.

    Where to find Ned:

    * Website

    * LinkedIn: Great Question

    * LinkedIn: Ned

    * Twitter/X

    Interested in sponsoring the podcast?

    Interested in sponsoring or advertising on this podcast? I’m always looking to partner with brands and businesses that align with my audience. Book a call or email me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors.

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    35 min
  • Resume critique series - Part one
    2025/05/19

    Hi all - this is a free series where I critique anonymized resumes that were submitted to me. If you love the work I do, please consider becoming a paid subscriber to this newsletter. It helps me continue what I do and put this kind of work out into the community.

    Check out part two here.

    Stop applying. Start getting interviews.

    The UXR Job Bundle gives you everything you need to land better roles, faster.

    → Resume + portfolio templates that get callbacks

    → Interview frameworks that show how you think

    → Case study formats that hook hiring managers in 60 seconds

    → Negotiation scripts to help you stop settling

    Researchers who use it report 3x more interviews and stronger offers.

    Don’t just polish your resume. Change your outcome.

    Formulas:

    * [Verb] + [what you did] + [quantifier], which resulted in + [measurable or strategic impact]

    Example: Ran 4 onboarding interviews with new clients, which resulted in redesigned setup steps and a 25% drop in support tickets.

    * [Verb] + [insight you generated] + by [method], leading to + [decision/outcome]

    Example: Uncovered usability issues by synthesizing 12 support calls, leading to a streamlined payment flow.

    * [Verb] + [collaboration/project] + across [team/org], resulting in + [alignment/change]

    Example: Facilitated quarterly review across Product and Ops, resulting in better prioritization and fewer miscommunications.

    * [Verb] + [process/tool/project you led or improved] + [how many/who/what] which resulted in + [business/user impact]

    Example: Improved onboarding workflow used by 3 teams, which resulted in a 25% reduction in support queries.

    * [Verb] + [insight or decision you contributed to] + by [action taken] + leading to + [impact on project/team/metric]

    Example: Informed product roadmap by synthesizing 30 customer interviews, leading to launch of 2 new features.

    * [Verb] + [communication or output you created] + that influenced + [stakeholders/team] + to [do what]

    Example: Created user insight brief that influenced PMs to prioritize accessibility fixes.

    * [Verb] + [collaboration you facilitated] + across [teams/functions] + to [goal], resulting in + [change or outcome]

    Example: Facilitated weekly cross-functional syncs across Design and Ops to align on support triage, resulting in 30% faster escalation resolution.

    * [Verb] + [project or task] + within [timeline or budget], resulting in + [measurable business or user value]

    Example: Delivered usability testing project within 2 weeks, resulting in simplified checkout flow and 15% conversion uplift.

    * [Verb] + [problem you solved] + by [how you solved it], which [impact/result]

    Example: Resolved data duplication issue by implementing a shared tracking template, which reduced manual rework by 80%.



    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.userresearchstrategist.com/subscribe
    45 min
  • Inside Games User Research | Steve Bromley (Games User Research)
    2025/05/16
    Listen now on Apple, Spotify, and YouTube.

    Steve is a games user research consultant, helping teams use player insight to create successful games. He works with publishers, platforms, and studios of all sizes to transform their game development process and build product strategies that combine player data with creativity. He works from ideation to post-launch in order to de-risk game development and make games players love.

    Prior to this, he was a senior user researcher for PlayStation and worked on many of their top European titles, including Horizon Zero Dawn, SingStar, the LittleBigPlanet series, and the PlayStation VR lineup.

    Steve started the Games User Research mentoring scheme, which has linked hundreds of students with industry professionals from top games companies such as Sony, EA, Valve, Ubisoft, and Microsoft. He wrote the bestselling book How To Be A Games User Researcher to share the expertise needed to work in the games industry.

    He regularly speaks at games industry conferences and on podcasts about games user research and playtesting, and has been recognised as a member of BAFTA. He also wrote the bestselling book Building User Research Teams, and helps teams build impactful research practices in-house.

    In our conversation, we discuss:

    * The evolution of Steve’s career from early days at PlayStation to running his own games UX consultancy.

    * The difference between research in games vs. traditional tech, especially around the lack of discovery work.

    * How to measure subjective experiences like “fun,” and why that starts by redefining what “fun” even means.

    * The influence of secrecy, creative ownership, and marketing pressure on research methods in the games industry.

    * Real-world methods used in games UX, like mass playtesting labs and segment-based multiplayer analysis.

    Some takeaways:

    * Research in games is heavily evaluative. Unlike traditional UX, which often starts with uncovering user needs, games UX usually kicks in once there’s a playable prototype. Because the “user need” in games is often just “make it fun,” research is focused more on assessing emotional impact and usability than on early-stage exploration.

    * Measuring fun is both subjective and contextual. Teams often ask, “Is this fun?”, but that question is too broad to act on. Steve explains that researchers must first help define what kind of fun is intended, whether that’s emotional engagement, replay behavior, or challenge. Only then can appropriate metrics or qualitative signals be applied.

    * Creative ownership adds complexity to stakeholder management. Games are seen as artistic work. Designers may be deeply emotionally invested in their ideas, which can make it harder to embrace critical feedback. This makes relationship-building, empathy, and framing feedback constructively especially important in games UX.

    * Secrecy shapes everything, from methods to sampling. Due to high financial stakes and aggressive marketing timelines, games researchers often can’t test publicly. This leads to lab-based studies with high participant control. Mass playtesting labs (20–80 people at once) are common for running controlled, large-scale tests without leaking content.

    * Toxicity and matchmaking need research too. Games with multiplayer or social components must test how players interact, especially when strangers are thrown together online. Teams look at voice/chat features, segmentation by playstyle, and matchmaking fairness to reduce toxicity and create balanced experiences.

    Where to find Steve:

    * Website

    * LinkedIn

    * Twitter/X

    * BlueSky

    Interested in sponsoring the podcast?

    Interested in sponsoring or advertising on this podcast? I’m always looking to partner with brands and businesses that align with my audience. Book a call or email me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors.

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    31 min
  • Inside Insight: How I use Optimal to set up a prototype test
    2025/05/13

    In this episode, I cover:

    * Common mistakes teams make when prototype testing becomes routine or rushed.

    * A method for deciding whether a prototype test is even the right approach.

    * Clear goal-setting techniques that make your test focused and relevant.

    * How to define metrics that show both research quality and product value.

    * Writing user tasks that reflect real behavior and reveal friction points.

    Key Takeaways:

    * Low-fidelity prototypes limit learning. If your design doesn’t give people room to explore, or to fail, you won’t see how they truly interact with it. Higher-fidelity versions are much more effective for unmoderated studies.

    * Not every question needs a usability test. If you’re looking to understand motivations or needs, observing task flows may not be the right method. Start by asking what kind of data you’re actually trying to gather.

    * Goals guide everything. Strong prototype tests begin with clear goals. They shape the tasks, help with team alignment, and create a direct line between what you learn and what changes.

    * Track outcomes that matter to your team. Define a few ways you’ll measure success before the test begins, such as friction points found, task completion behaviors, or whether changes from the study affect real usage.

    * Write tasks people can relate to. Use short, specific scenarios rooted in familiar behavior. Instead of vague prompts, give people a purpose and context so their actions reflect how they’d use the product in real life.

    The prototype guide:

    Grab the full prototype guide with all the examples and formulas here and try it out with your next project (or with a project you recently did!).

    Try Optimal:

    Want to try this out on Optimal? You can grab a 20% discount using code Prototype2025 at checkout.

    Interested in sponsoring the podcast?

    Interested in sponsoring or advertising on this podcast? I’m always looking to partner with brands and businesses that align with my audience. Reach out to me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!



    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    43 min
  • Designing for the Real World | Erik Stoltenberg Lahm (The LEGO Group)
    2025/05/02

    Listen now on Apple, Spotify, and YouTube.

    Erik is a behavioral scientist with a passion for understanding how people, especially kids, interact with digital experiences. He works at The LEGO Group, where he leads behavioral research to create safer, more inspiring, and more playful digital spaces for children. He specializes in using behavioral science, experimentation, and innovative research methodologies to uncover what kids need and love in digital play.

    Beyond his professional role, he is a self-proclaimed research methodology nerd, always exploring better ways to understand and test how kids engage with the digital world.

    In our conversation, we discuss:

    * Why ecological validity is critical to meaningful product testing and what it means in practice.

    * How Erik approaches testing with kids at LEGO, including the need for playful environments and cognitive load considerations.

    * The pitfalls of lab-based research and why researchers must move beyond “zoo-like” conditions to see real-world behavior.

    * Ways to mitigate social desirability and authority bias, especially when conducting research with children.

    * How remote research, diary studies, and mixed methods can provide deeper behavioral insights—if done with context in mind.

    Some takeaways:

    * Validity is about realism. Erik defines ecological validity as the extent to which research reflects real-world behavior. While traditional labs optimize for internal validity, in product development, what matters is whether your findings will translate when people are distracted, tired, or juggling multiple tasks.

    * Don’t study lions at the zoo. One of Erik’s standout metaphors urges researchers to avoid overly sanitized environments. Testing products in sterile labs might remove variables, but it also strips away the chaotic, layered reality where your product must actually succeed. Aim for the “Serengeti”—not the zoo.

    * Researching with kids requires creativity, play, and caution. Kids aren’t small adults; they process and respond differently. Erik emphasizes using play as a language, minimizing cognitive load, and focusing on behavioral observation over verbal responses. A child saying “I loved it” means little if they looked disengaged the whole time.

    * Remote testing can work if grounded in real-life context. Remote methods like diary studies and follow-up interviews can capture valuable insights, especially if paired with contextual in-person research first. The key is triangulating methods and validating self-reports with observed behavior.

    * Think beyond usability, map the behavior chain. A product’s ease of use in isolation means little if the behavior it enables is derailed by real-life obstacles. Erik illustrates this with a simple example: refilling soap sounds easy until you’re cold, wet, and have other priorities. Designing for behavior means understanding the entire chain around your product.

    Where to find Erik:

    * LinkedIn

    Interested in sponsoring the podcast?

    Interested in sponsoring or advertising on this podcast? I’m always looking to partner with brands and businesses that align with my audience. Book a call or email me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors.



    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit userresearchacademy.substack.com/subscribe
    33 min
  • Making Continuous Discovery Work | Petra Kubalcik (Omio)
    2025/04/25
    Listen now on Apple, Spotify, and YouTube.

    Petra Kubalcik is an accomplished user research professional with over two decades of international experience. Originating from Australia, she has honed her research skills across Japan, Hong Kong, the UK, the Czech Republic, and, most recently, Germany. Petra has led research teams at Dyson and Cookpad, and currently serves as Head of User Research at Omio. She is a champion of user-centricity, ensuring that user perspectives remain central to strategy, innovation, and development. Petra has personally conducted research in over 40 countries, bringing a global perspective to her work. Outside of her professional endeavors, she is dedicated to volunteering, sailing, woodworking, and supporting the Wallabies.

    In our conversation, we discuss:

    * Why continuous discovery is often misunderstood, and how separating “continuous” from “discovery” can clarify your goals.

    * What makes a strong foundation for setting up a continuous discovery program, including the importance of stakeholder goals and UX maturity.

    * How to design effective cadences and role-sharing models depending on whether you’re doing discovery or continuous touchpoints.

    * The artifacts and outputs that make these programs sustainable and useful, from pathway playbooks to Miro boards.

    * Red flags that indicate you shouldn’t implement continuous discovery, and what to do instead.

    Some takeaways:

    * Continuous discovery is not always discovery. Petra emphasizes that many stakeholders use the term continuous discovery when they really mean frequent customer touchpoints. Researchers need to clarify whether the goal is to explore new insights (discovery) or simply maintain regular user input, and adjust the program accordingly.

    * Start with a crystal-clear “why.” Without a well-defined reason for starting continuous discovery, the effort can quickly become unsustainable or directionless. Petra urges researchers to treat these programs like any other research project: define the objective, understand stakeholder needs, and forecast what success looks like. Your “why” will be your compass when things get difficult.

    * Programs must match UX maturity and resources. Continuous discovery isn’t right for every organization. Petra warns against starting these programs in low-maturity teams with limited resources, unclear goals, or minimal stakeholder buy-in. If you’re fighting at every step, you risk burnout and low-impact work.

    * Cadence and involvement should flex by context. A one-size-fits-all cadence doesn’t work. For light-touch programs with PMs or designers leading sessions, weekly or biweekly cadences might work. For true discovery efforts, a slower pace is essential to allow for iteration, depth, and evolution in the research plan.

    * Build reusable frameworks and artifacts to lighten the load. To scale continuous discovery, Petra recommends investing in repeatable templates such as objective-setting docs, note-taking guides, playbooks, and pre-aligned outputs. For example, a “pathway playbook” outlines flows users will walk through and provides a structured format for collecting and analyzing data. These tools ensure quality while keeping researchers sane.

    Where to find Petra:

    * LinkedIn

    Interested in sponsoring the podcast?

    Interested in sponsoring or advertising on this podcast? I’m always looking to partner with brands and businesses that align with my audience. Book a call or email me at nikki@userresearchacademy.com to learn more about sponsorship opportunities!

    The views and opinions expressed by the guests on this podcast are their own and do not necessarily reflect the views, positions, or policies of the host, the podcast, or any affiliated organizations or sponsors.

    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.userresearchstrategist.com/subscribe
    34 min