Episodes

  • How to Test a Release – Oleksandr Bolzhelarskyi
    2026/03/27

    Today, Oleksandr Bolzhelarskyi, Director of QA & Release Management at Salesfloor, shares why successful releases are impossible without responsible testing and quality management, and how teams can stop firefighting production bugs and start shipping with confidence.


    👉 5 Golden Rules for a Successful Release


    ⛔ Testing features individually → ✅ Testing the release as a package

    Features that work perfectly in isolation can break when merged together. Two developers changing the same component won't know about each other's work until it's combined. Always retest the full package.


    ⛔ Bug bashes → ✅ Combining structured testing with fresh eyes

    Involving non-testers brings valuable perspective, but it's not systematic. Bug bashes catch random issues, not targeted risks. Use them as a complement, never as your only release testing.


    ⛔ Testing everything → ✅ Testing based on risk / product knowledge

    You'll never have time to cover everything. Know your product architecture, understand what the new changes could break, and focus regression testing where the real risks are.


    ⛔ Release pressure → ✅ Communicating untested risks clearly

    Pressure to ship means the feature matters. Instead of pushing back emotionally, tell your manager: here's what we tested, here's what we didn't, and here's what could go wrong. Let them make an informed decision.


    ⛔ Allowing "one last quick fix" → ✅ Enforcing a strict code freeze

    Last-minute changes during regression testing cause the biggest surprises. Say no. If it's urgent, ship it as a hot fix after the current release is done.
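    The risk-based rule above can be sketched as a tiny mapping from the components a release touches to the regression suites they put at risk. All component and suite names below are hypothetical, not from the episode:

```python
# Sketch: pick regression suites based on which components a release touches.
# The component -> suite mapping is illustrative, not a real project's layout.
RISK_MAP = {
    "checkout": ["payments_regression", "cart_smoke"],
    "auth": ["login_regression", "session_smoke"],
    "search": ["search_regression"],
}

def suites_for_release(changed_components):
    """Return the de-duplicated, ordered list of suites worth running."""
    selected = []
    for component in changed_components:
        # Unknown components fall back to the full regression suite.
        for suite in RISK_MAP.get(component, ["full_regression"]):
            if suite not in selected:
                selected.append(suite)
    return selected

print(suites_for_release(["checkout", "auth"]))
# ['payments_regression', 'cart_smoke', 'login_regression', 'session_smoke']
```

    The point of the sketch: the suite list comes from product knowledge encoded up front, not from re-deciding under release pressure.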


    Connect on LinkedIn: https://www.linkedin.com/in/oleksandr-bolzhelarskyi/

    58 min
  • How to Test with Independent QA | Guest: Tudor Brad
    2026/03/19

    If the chef shouldn't certify his own dish, why is your dev team validating their own code? Today, Tudor Brad shares why independent QA is non-negotiable. With 15+ years in QA, Tudor founded BetterQA in 2018, a team of 50+ engineers working across 24+ countries. They've built in-house tools like BugBoard, Flows, and Better Flow to bring full transparency and quality to the testing process.

    40 min
  • How to Test This with AI and MCP - Deepak Kamboj
    2026/03/17

    Today, Deepak Kamboj, Senior Software Engineer and Solution Architect at Microsoft, shares how he scaled Playwright automation across 40 teams and 14,000 test cases, and why AI agents are the next leap for test engineering.


    👉 Key takeaways from Deepak:


    🔹 Build infrastructure, not just test cases

    "My work was focused on building a large scale automation framework, not just writing test cases."


    🔹 Use AI across the full test lifecycle

    "I started building AI agents that can generate Playwright test cases, analyze failures, run accessibility checks, do performance checks, visual comparison, and ultimately create pull requests automatically."


    🔹 Learn prompt engineering

    "Prompt engineering is very good when you are about to do automation. The way you are writing a system prompt or your user prompt will play a big role in the way your agent will behave."


    🔹 Don't fear AI — use it

    "AI is your co-pilot or your companion. Don't take it as your replacement. It will complement you. It will provide you with more efficiency. It will make you more effective."


    🔹 Let AI handle the repetitive work

    "Testers can use AI agents to automate a lot of repetitive activities they were performing. They can use agents to understand why a system fails, why a particular test case fails, what type of test cases to write."
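    Deepak's prompt-engineering point can be illustrated with a small helper that assembles a system prompt for a test-generation agent. The template, rules, and names here are assumptions for illustration, not his actual tooling:

```python
# Sketch: assemble a system prompt for an agent that drafts Playwright tests.
# Template wording and constraints are illustrative assumptions.
def build_system_prompt(app_name, framework="Playwright", constraints=()):
    """Return a system prompt whose fixed rules steer the agent's output style."""
    lines = [
        f"You are a senior QA engineer writing {framework} tests for {app_name}.",
        "Follow these rules strictly:",
        "- Use resilient, role-based locators instead of brittle CSS selectors.",
        "- One behavior per test; name each test after the behavior under test.",
    ]
    # Project-specific rules are appended so teams can extend the baseline.
    lines += [f"- {rule}" for rule in constraints]
    return "\n".join(lines)

prompt = build_system_prompt(
    "AcmeShop",
    constraints=["Never hard-code credentials; read them from env vars."],
)
print(prompt)
```

    As the episode notes, the fixed system prompt is where most of the agent's behavior is decided; the per-request user prompt then only has to describe the feature under test.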

    27 min
  • How to test with HIST - Ruslan Desyatnikov
    2026/03/07

    Today, Ruslan Desyatnikov, CEO of QA Mentor and creator of the Human Intelligence Software Testing (HIST) framework, explains why the QA industry is at risk of losing its strategic role and how teams can bring human intelligence back into software testing.


    👉 Testing Problems solved with the HIST mindset


    1️⃣ STOP treating requirements as a Bible — START challenging them early

    => Actively question requirements during review sessions. Always ask why and what if to eliminate ambiguities and prevent costly downstream defects.


    2️⃣ STOP obsessing over automation — START using it strategically

    => Automation absolutely supports testing but doesn't replace human thinking. Focus on risk-based coverage and business value, not big test-case numbers.


    3️⃣ STOP being a passive button-pusher — START thinking like an investigator

    => Go beyond front-end clicking. Analyze backend logic, business rules, integrations, and real user purpose to uncover meaningful defects, not just cosmetic ones.


    4️⃣ STOP reporting isolated bugs — START connecting defects to business impact

    => Map quality issues to revenue generation, client retention, and business value, so stakeholders understand what testers bring to the table.


    5️⃣ STOP blindly trusting AI output — START keeping human intelligence in control

    => Whether it's AI-generated test cases or automation predictions, always verify, spot-check, and apply human judgment before acting on results.


    Resources mentioned in this episode:

    - QA Mentor - https://www.qamentor.com/

    - HIST Testing Methodology - https://www.qamentor.com/what-is-hist/

    - Ruslan's LinkedIn - https://www.linkedin.com/in/ruslandesyatnikov/

    1 hr 1 min
  • How to test with Spec2TestAI - Missy Trumpler
    2026/03/04

    Episode #16 – How to Test with Spec2TestAI | Guest: Missy Trumpler

    In this episode of How to Test This, Missy Trumpler, CEO of AgileAI Labs, explains why poor requirements can cause up to 70% of software defects and how Spec2TestAI uses AI safely to analyze requirements and stop defects before a single line of code is written.

    Why is it worth trying Spec2TestAI?

    1️⃣ Stop defects at the requirement stage: Spec2TestAI enhances user stories, removes ambiguity, and generates acceptance criteria before development begins.

    2️⃣ Get full traceability from spec to test to code: Spec2TestAI creates traceable outputs to ensure teams test exactly what was specified without misinterpretation.

    3️⃣ Shift from reactive testing to predictive quality: Spec2TestAI can predict potential defects by analyzing requirements, generated tests, and code alignment early in the lifecycle.

    4️⃣ Collaborate better between BA, Dev, and QA: By working from a shared requirement base, business analysts, developers, and testers can collaborate from the same source of truth.

    5️⃣ Keep humans in control with full transparency: No black box. Users can see how requirements were enhanced, how tests were derived, and how traceability is maintained.

    Resources to find out more:

    • AgileAI Labs’ website to request a free trial and read the white paper: https://agileailabs.com/
    • AgileAI Labs’ LinkedIn: https://www.linkedin.com/company/agileailabs/
    • Missy Trumpler’s LinkedIn: https://www.linkedin.com/in/missy-trumpler-08779611/
    • Video Spec2Test Demo: https://www.youtube.com/watch?v=-ZijVeuORZo


    44 min
  • How to test Performance - Dmytro Pozdniakov
    2026/03/02

    In this episode, we explore how to test performance with Dmytro Pozdniakov, co-founder and CTO of LoadTestExperts. With over 12 years of experience in performance engineering, he shares a practical approach to auditing systems, analyzing workload, identifying bottlenecks, and running effective load tests.

    We discuss the difference between performance, load, and stress testing, why workload analysis is critical, how to define meaningful acceptance criteria, and how QA teams can collaborate with developers and DevOps to build scalable, reliable systems.

    This episode is for QA professionals who want to go beyond functional testing and understand how to design, validate, and scale systems with confidence.
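    The "meaningful acceptance criteria" idea from the episode can be sketched as an explicit percentile check over collected response times. Thresholds and sample latencies below are made up for illustration; this is not LoadTestExperts' methodology:

```python
# Sketch: check load-test results against explicit acceptance criteria.
# Thresholds and sample latencies are illustrative.
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_criteria(latencies_ms, p95_max_ms=800, mean_max_ms=300):
    """True only when both the p95 and the mean latency stay under budget."""
    return (percentile(latencies_ms, 95) <= p95_max_ms
            and statistics.mean(latencies_ms) <= mean_max_ms)

latencies = [120, 150, 180, 200, 240, 260, 310, 420, 650, 900]
print(meets_criteria(latencies))  # False: the 900 ms outlier pushes p95 over budget
```

    Writing the budget down as code (rather than "it felt fast enough") lets the same pass/fail criteria run on every load-test report.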

    1 hr 10 min
  • How to Test with Testing While Developing (TWD) - Kevin Martínez
    2026/02/16

    In this episode, Kevin Martínez, a Software Architect and Full Stack expert at Orbitant, discusses his journey and the creation of Testing While Developing (TWD). Kevin explains why testing shouldn't be an afterthought, sharing best practices and common mistakes while demonstrating how the TWD tool directly improves the developer experience by bringing testing into the heart of the coding process.

    40 min
  • How to Test Microservices – Jay Kishore Duvvuri
    2026/02/09

    In this conversation, Jay Kishore Duvvuri shares his extensive experience in test automation, particularly focusing on microservices and the use of Playwright. He discusses his journey from manual testing to automation, the importance of AI in accelerating testing processes, and the challenges of flaky tests. Jay emphasizes the significance of CI/CD in modern testing environments and provides insights into the evolving role of manual QA testers in the age of AI. He also offers practical advice for aspiring QA professionals, highlighting essential skills and tools to learn.


    For anyone curious, you can check:

    - Jay Kishore Duvvuri LinkedIn - https://www.linkedin.com/in/jay-kishore-duvvuri-712b1a70
    - Jay Kishore Duvvuri GitHub - https://github.com/JayKishoreDuvvuri
    - Jay Kishore Duvvuri Blogger profile (all blogs + articles hub) - https://www.blogger.com/profile/06939442079028713822

    53 min