That Tech Pod

Authors: Laura Milstein, Gabriela Schulte, and Kevin Albert

Overview

Welcome to That Tech Pod, a podcast co-hosted by Laura Milstein and Gabi Schulte (and occasionally Kevin Albert). Each Tuesday, That Tech Pod features in-depth discussions about data privacy, cybersecurity, eDiscovery, and tech innovations with heavy hitters in the industry. Subscribe so you don't miss an episode! Visit thattechpod.com for more information. © 2023 That Tech Pod
Episodes
  • The Shiny Object Problem: Why AI Isn’t Fixing IT Problems with Rob Calvert
    2026/03/10

    In this episode of That Tech Pod, Kevin and Laura sit down with IT entrepreneur Rob Calvert, founder of Second Son Consulting and a longtime leader in the Apple enterprise ecosystem. After being laid off in the early 2000s, Rob built his consulting firm from a home office into the largest member of the Apple Consultants Network in Los Angeles and one of the top firms in the country. Drawing on more than 25 years advising companies across dozens of industries, he shares a grounded look at what actually makes technology succeed or fail inside real organizations. The conversation even opens with an unexpected detour into “Punch the monkey,” a viral zoo story that sparks a debate about how easily people question or misread what they see online in the age of AI (Laura swears this is a real monkey, while Kevin thinks it's GenAI made to sell toys).

    From there, the conversation explores why most people only notice IT when something breaks and how that mindset leads to bad leadership decisions. Rob argues many “tech problems” are really culture and workflow problems, pointing to common mistakes like letting experimental tools quietly become production systems or constantly chasing new platforms without fully implementing the ones already in place. The result is wasted budgets, burned-out IT teams, and systems that drift away from how people actually work. They also get into the Mac vs. PC debate in the enterprise, the subtle ways companies waste millions in IT spending, and the gap between AI hype and real business impact. Rob says many small and mid-sized companies are spending a lot of time evaluating AI tools but seeing very little return so far, while larger organizations may eventually benefit through heavy customization. At the end of the episode, Rob finally agrees to go look up Punch the monkey. 🐒

    Rob Calvert is an entrepreneur and IT leader who has spent more than 25 years helping businesses make technology actually work for the people using it. After being laid off in the early 2000s, Rob Calvert built Second Son Consulting from his home office into the largest member of the Apple Consultants Network in Los Angeles and a top-ten firm nationwide. His work focuses on aligning technology with workflows and culture rather than treating IT as a standalone function, and his team has created widely used open-source tools for the Mac admin community. Rob has advised companies across more than 15 industries, managed millions in IT budgets, and is known for challenging cookie-cutter approaches to IT in favor of systems that support how people actually work.

    22 min
  • Why “Trust Me” Is the Most Dangerous AI Feature with Dr. Jonathan Schaeffer
    2026/03/03

    In this episode of That Tech Pod, we sit down with Dr. Jonathan Schaeffer, a longtime computer scientist who didn’t arrive in AI chasing demos or hype, but by trying to solve a much harder problem: how to keep data safe.

    Jonathan walks us through his path from privacy and security research into modern AI, and why those early concerns feel even more urgent now. While everyone is fixated on hallucinations, he argues the bigger risks are quieter and more structural, from loss of user control to systems that appear trustworthy while subtly eroding human judgment. We dig into the growing concentration of AI power among a handful of companies and whether that outcome was inevitable or the result of choices we made along the way. Jonathan reflects on the human skills he worries we may stop exercising as AI gets better, and the low-key decisions happening right now that could shape the next decade far more than any flashy model release. Finally, he shares what he’s building with Synsira: privacy-first, local AI tools designed to work with your own data without shipping it to the cloud, leaking sensitive information, or inventing answers. It’s a conversation about control, responsibility, and what trustworthy AI actually looks like when you have to live with it.

    Dr. Jonathan Schaeffer is a computer scientist and AI innovator who works at the intersection of artificial intelligence, data privacy, and security. He is the founder of Synsira and the creator of KIND (Knowledge In Depth AI), a privacy-first desktop AI that lets users search, analyze, and interact with their own knowledge bases, documents, notes, and proprietary data without sending information to the cloud, risking data leaks, or encountering hallucinations. With a career spanning systems design and secure computing, Jonathan focuses on building AI tools that maintain true control over sensitive and regulated data, exploring what responsible, trustworthy AI looks like in practice and how organizations can innovate without surrendering autonomy. He earned his Bachelor of Science at the University of Toronto and a Master’s and Ph.D. at the University of Waterloo, then spent more than 35 years at the University of Alberta as a Distinguished Professor of Computing Science, leading pioneering AI research before retiring in 2024 to focus on AI innovation with Synsira.

    20 min
  • AI Just Became Your Employee. Who's Liable When It Gets It Wrong? with Laura and Kevin
    2026/02/24

    AI is no longer just a background tool. It’s drafting contracts, reviewing discovery, sending emails, negotiating deals, and triggering real-world consequences. In this episode of That Tech Pod, Laura and Kevin unpack what happens when AI starts behaving less like software and more like an employee. If an AI clause costs a company millions, misses privileged evidence, or sends sensitive information to the wrong place, who’s actually on the hook?

    The conversation moves from AI as a de facto junior associate to the harder questions around liability, governance, and oversight. They explore why AI can have autonomy but no accountability, how risk gets assigned when things go wrong, and why companies are almost always left holding the bag. Then the discussion takes a turn: what happens when AI isn’t just assisting humans, but coordinating them, managing tasks, and using people as a quality-control layer?

    The episode closes with a bigger debate about power, psychology, and work itself. If software is now supervising humans, assigning tasks, and shaping outcomes, are organizations ready for that shift? And if AI is doing the work while humans carry the legal risk, is that imbalance sustainable? The most dangerous AI may not be the one that replaces people, but the one that quietly manages them.

    27 min