Episodes

  • Agentic Patient Engagement - EP 55 - Alex Zoller - PatientGenie
    2026/02/26

    In this episode, Alex Zoller discusses the innovative use of AI agents in healthcare to improve patient engagement and access. His platform uses a multi-agent architecture to facilitate communication between health plans and members, ensuring that patients receive personalized assistance in scheduling appointments and navigating the healthcare system. The conversation covers the challenges of maintaining context in voice interactions, the importance of compliance and validation, and the operational efficiencies gained through automation. Alex also shares insights on product management and the future of AI in healthcare, emphasizing the need for empathy and scalability in solutions.

    Takeaways

    • AI agents can significantly improve healthcare access.
    • Multi-agent architecture allows for more complex interactions.
    • Empathy is crucial in healthcare communications.
    • Compliance and validation are essential to avoid errors.
    • Testing and simulation are key to agent performance.
    • Agents can operate 24/7, enhancing patient engagement.
    • Understanding existing workflows is vital for implementation.
    • Healthcare solutions must be scalable and adaptable.
    • Mistakes can be corrected in real time by the system.
    • Operational metrics show significant cost savings.

    Titles

    Revolutionizing Healthcare with AI Agents
    The Future of Patient Engagement

    Sound Bites

    "Healthcare has zero tolerance for errors."
    "Quality is our top priority."
    "Empathy is a priority for healthcare."

    Chapters

    00:00 Introduction to AI Agents in Healthcare
    02:46 The Outreach Process for Annual Wellness Visits
    05:58 Multi-Agent Architecture Explained
    08:35 Navigating IVR and Provider Interactions
    11:33 Ensuring Compliance and Quality in Healthcare
    14:26 Handling Mistakes and Safeguards
    17:13 Scaling and Cost Efficiency of AI Agents
    19:56 Future Capabilities and Expanding Use Cases
    22:41 Product Management Insights and Best Practices

    https://www.docsie.io
    Join us on Discord https://discord.gg/pAUGNTzv

    24 min
  • Agentic Code Scanning - EP 54 - Rome Thorstenson - Rafter.so
    2026/02/20

    In this episode, Philippe Trounev interviews Rome Thorstenson, a software engineer and AI researcher, discussing the intersection of AI and cybersecurity. They explore the current state of code security, the role of AI agents in identifying vulnerabilities, and the challenges of trusting these systems. Rome shares insights from his research at NeurIPS and emphasizes the importance of proactive security measures for developers.

    Takeaways

    • 80% of the code shipped to production is not secure.
    • AI agents are increasingly used to analyze code for vulnerabilities.
    • Security often takes a backseat to feature development.
    • Evaluating the security of a code base is a complex task.
    • Prompt injection poses significant risks for AI systems.
    • Developers need to prioritize security in their workflows.
    • Rafter offers tools to simplify security scanning for developers.
    • Research in mechanistic interpretability can enhance AI security agents.
    • The landscape of cybersecurity is evolving with AI advancements.
    • Proactive security measures are essential to combat emerging threats.

    Titles

    AI's Role in Cybersecurity: A Deep Dive
    Understanding Code Vulnerabilities with AI Agents

    Sound Bites

    "AI writes most of the code."
    "80% of the code is not secure."
    "Prompt injection is a huge problem."

    Chapters

    00:00 Introduction to AI Agents in Cybersecurity
    02:41 The State of Code Security and Vulnerabilities
    05:10 Building AI Agents for Code Analysis
    07:52 Evaluating AI Agents and Benchmarking
    10:27 Autonomous Feedback Loops in Cybersecurity
    13:07 Trusting AI Agents for Security Fixes
    15:47 Understanding Vulnerabilities and AI's Role
    18:42 Real-World Examples of Vulnerability Detection
    23:25 Navigating App Development Challenges
    24:32 Getting Started with Rafter
    28:03 Understanding Mechanistic Interpretability
    35:06 Interpreting Model Features and Security
    37:49 Top Security Practices for Developers

    https://www.docsie.io
    Join us on Discord https://discord.gg/pAUGNTzv

    41 min
  • Voice Agents at Scale - EP 53 - Laurent Cohen - Getoblic
    2026/02/04

    In this episode, Philippe Trounev interviews Laurent Cohen from Getoblic, who discusses the deployment of 1.6 million voice AI agents. Laurent explains the transition from a SaaS model to an infrastructure layer, emphasizing the importance of data gathering and SEO strategies. He shares insights on unit economics, cost efficiency, and the monetization strategies for their voice AI services. The conversation also covers the workflow of AI agents, team structure, early success metrics, and competitive advantages in the voice AI market.

    Takeaways

    • The deployment of 1.6 million voice AI agents is a significant achievement.
    • Shifting from a SaaS model to an infrastructure layer is crucial for scalability.
    • Unit economics and cost efficiency are vital for sustainable growth.
    • SEO should be handled in-house as it is the DNA of a company.
    • Gathering data is essential for training AI agents effectively.
    • Monetization strategies include offering free claims for businesses to engage with the platform.
    • AI agents work in a structured workflow to handle customer inquiries.
    • A small team can achieve significant results with the right automation.
    • Early success metrics include claimed pages and minutes spent with voice agents.
    • Building competitive moats involves leveraging unique data and insights.

    Sound Bites

    "We need to scale data."
    "Money is the enemy."
    "Let's help each other."

    Chapters

    00:00 Introduction to Voice AI at Scale
    02:54 The Shift from SaaS to Infrastructure Layer
    05:24 Unit Economics and Cost Efficiency
    08:13 SEO Strategies and Data Gathering
    11:07 Monetization Strategies for Voice AI
    14:11 Workflow of AI Agents
    16:50 Team Structure and Automation
    19:40 Early Success Metrics and Conversion
    22:19 Building Competitive Moats
    25:07 The Future of Voice AI and Marketing Strategies

    Join us on Discord https://discord.gg/pAUGNTzv

    27 min
  • Agentic Prediction - EP 52 - Michael Ulin - Tenki AI
    2026/01/27

    In this conversation, Michael Ulin, CEO of Tenki AI, discusses the intricacies of building AI agents, particularly in the context of prediction markets. He emphasizes the importance of understanding limitations, building trust with users, and the architecture of multi-agent systems. Michael shares insights on logging practices, avoiding overfitting, and the cost-effectiveness of predictions. He also touches on the long-term vision for Tenki AI, strategies for product launch, and the advantages of bootstrapping a startup. Throughout the discussion, he provides valuable advice for founders looking to navigate the AI landscape.

    Takeaways

    • Understanding limitations is crucial for AI agents.
    • Building trust with users is essential for success.
    • Multi-agent systems can improve forecasting accuracy.
    • Breaking down problems into subcomponents enhances performance.
    • Logging practices are vital for system improvement.
    • Avoiding overfitting is key to reliable predictions.
    • Rapid feedback loops are beneficial in prediction markets.
    • Validating demand before product development is important.
    • Bootstrapping can be more efficient than seeking venture funding.
    • Focus on solving real problems that you personally experience.

    Titles

    Unlocking the Power of AI Agents
    Building Trust in AI Systems

    Sound Bites

    "What actually works when building agents?"
    "Logging everything helps improve the system."
    "Validate demand before building your product."

    Chapters

    00:00 Introduction to Tenki AI and Michael Ulin
    00:48 Building Trust in AI Agents
    03:37 Understanding Tenki's Multi-Agent Architecture
    06:56 Challenges in Multi-Agent Systems
    10:16 Logging and Evaluation Practices
    12:32 Avoiding Overfitting in Predictions
    15:01 Cost and Efficiency of Predictions
    17:23 Long-Term Vision for Tenki AI
    19:09 Common Playbook for Building AI Agents
    20:58 Advice for Founders in AI Development
    30:40 Opportunities in AI and Final Thoughts

    https://www.docsie.io
    Join us on Discord https://discord.gg/pAUGNTzv

    32 min
  • Agentic Governance - EP 51 - with Dr. Craig Kaplan
    2026/01/20

    In this episode of So What About AI Agents, Philippe Trounev speaks with Dr. Craig Kaplan, who discusses the need for a new approach to AI safety and governance, emphasizing the importance of prevention in design and the concept of AI agents and collective intelligence systems. He highlights the role of ethics and morals in an agentic society, the enforcement of ethics and morals in AI agents, and the purpose and values of AI agents. Dr. Kaplan also explores the blueprint for collective intelligence systems, problem-solving and coordination in multi-agent systems, transparency and accountability, decentralization of power, observation and reporting, and the role of values in AI systems. He concludes by discussing the relevance of Herbert Simon's ideas in AI research.

    Takeaways

    • Democracy in AI governance can enhance safety.
    • AI agents can work together like a community.
    • Ethics in AI must be enforced through safeguards.
    • Collective intelligence can outperform individual expertise.
    • Designing AI systems requires careful consideration.
    • Transparency is crucial for AI agent interactions.
    • Values from diverse individuals should shape AI behavior.
    • The historical context of AI informs current practices.
    • Short-term fixes are not sufficient for AI safety.
    • Our online behavior influences future AI training.

    Titles

    Building Safe AI: A Democratic Approach
    The Future of AI Governance

    Sound Bites

    "Two heads are better than one."
    "We need to think hard about design."
    "We should behave well online."

    Chapters

    00:00 Introduction to AI and Superintelligence
    01:20 Governance and Safety in AI
    05:30 The Role of AI Agents in Society
    07:29 Evolving Towards Agentic Democracy
    09:35 Ethics and Morals in Agentic Society
    12:16 Influence vs. Enforcement in AI Behavior
    15:52 Blueprint for Collective Intelligence Systems
    19:39 Human Traits in AI Collective Intelligence
    22:49 Transparency and Accountability Among Agents
    25:25 Decentralization and Power Distribution
    29:35 Learning from Human Governance
    33:20 Herbert Simon's Insights on AI and Morality
    36:42 Key Takeaways for AI Governance

    40 min
  • Agentic Payments - EP 50 with Mitchell Jones from Lava Payments
    2026/01/08

    In this conversation, Philippe Trounev and Mitchell Jones delve into the complexities of agentic payments and the payment infrastructure needed for the evolving AI economy. They discuss the challenges AI startups face in managing payments, the importance of measurement and optimization in payment systems, and the future of agent-to-agent payments. The conversation highlights the need for budgeting controls and trust in agent networks, emphasizing the role of gateways in facilitating these processes.

    Takeaways

    • Agentic payments require a clear understanding of costs and value delivery.
    • Current payment infrastructures are inadequate for the needs of AI startups.
    • AI startups must adapt their pricing strategies beyond traditional models.
    • Using a payment gateway simplifies the integration of multiple AI models.
    • Measurement is crucial for managing costs in AI operations.
    • Budgeting controls are essential for preventing runaway costs in agentic systems.
    • Trust and accountability are vital in agent-to-agent transactions.
    • The future of payments will involve more automation and less human intervention.
    • Experimentation with pricing models is now more feasible for startups.
    • Building a robust payment infrastructure is critical for the success of AI applications.



    Keywords


    agentic payments, payment infrastructure, AI startups, payment systems, budgeting, trust, agent-to-agent payments, LavaPayments, FinTech, AI economy


    Chapters


    00:00 Understanding Agentic Payments

    02:28 The Role of Payment Infrastructure in AI

    05:21 Optimizing Payment Systems for AI Startups

    08:07 The Future of Agent-to-Agent Payments

    11:03 Budgeting and Control in the Agentic Economy

    13:50 Building Trust in Agent Transactions

    16:45 The Evolution of AI Agents and Payments

    19:25 Challenges in Agent Communication and Budgeting

    22:29 The Importance of Measurement in Payment Systems

    25:18 Future Use Cases for Agent Payments

    28:08 Final Thoughts on the Agentic Economy



    35 min
  • Agentic Sales Organization - EP 49 with Paul Schmidt from SmartBug | So What About AI Agents
    2025/12/17

    In this conversation, Philippe Trounev and Paul Schmidt discuss the concept of agentic sales organizations, focusing on how AI can empower sales teams by alleviating mundane tasks and enhancing efficiency. They explore the role of sales research agents, essential tools for implementing AI in sales, and the importance of data hygiene. The discussion also covers the cost considerations for introducing AI and predictions for the future of sales technology.

    Takeaways

    • Agentic sales organizations empower sales teams with AI tools.
    • Sales research agents can save significant time for sales reps.
    • Proposal agents help create polished presentations quickly.
    • Personalization in outreach is key to engaging prospects.
    • Data hygiene is essential for effective AI implementation.
    • Sales teams should document processes for better AI output.
    • Integrating AI should feel seamless for salespeople.
    • Cost-effective solutions exist for implementing AI in sales.
    • AI can help sales teams focus on high-value tasks.
    • Domain expertise is crucial when selecting AI tools.

    https://www.docsie.io
    Join us on Discord https://discord.gg/pAUGNTzv

    27 min
  • Agentic DevOps: Will AI Replace DevOps Engineers? | EP48 ft. NetOrca’s Scott Rowlandson
    2025/12/10

    In this episode, Philippe Trounev sits down with Scott Rowlandson from NetOrca to unpack one of the most urgent questions in technology today: is AI automation about to replace traditional DevOps roles?

    We dive deep into the evolution of DevOps, the rise of AI agent orchestration, and how automation is reshaping engineering teams across regulated industries like financial services.

    Scott brings real-world experience from working in high-compliance environments, where automation isn't just helpful; it's essential. Together, we explore:

    • How automation is changing the DevOps landscape

    • Why DevOps roles aren’t disappearing—but evolving

    • AI agents and the future of engineering workflows

    • Reducing delivery times in complex tech stacks

    • Why regulated industries rely heavily on automation

    • “Human-in-the-loop” DevOps models

    • What skills DevOps engineers MUST develop to stay relevant

    🎯 Main Takeaways

    • Automation will eliminate some manual DevOps tasks.

    • But demand for skilled DevOps engineers is increasing, not shrinking.

    • AI agents will drastically accelerate deployment, compliance, and operations.

    • DevOps pros who embrace orchestration and automation will lead the next era.

    • The future of engineering is hybrid: AI + humans working together.


    32 min