
The Tech Savvy Lawyer

Author: Michael D.J. Eisenberg

Summary

The Tech Savvy Lawyer interviews judges, lawyers, and other professionals about using technology in the practice of law. It may spark an idea and help you in your own pursuit of the business we call "practicing law". Please join us for interesting conversations enjoyable at any tech skill level! © ℗ 2020 Michael D.J. Eisenberg. Category: Politics & Government
Episodes
  • TSL Labs 🧪 Initiative: Attorney-Client Privilege vs. Public AI: The Hoeppner Decision Lawyers Need to Understand in 2026 ⚖️🤖
    2026/02/27
    Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial "AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖" Our Google Notebook LLM hosts break down why a single click on a public AI tool's Terms of Use can trigger a privilege waiver, and what "tech competence" really means in 2026—especially after United States v. Hoeppner and Judge Jed Rakoff's wake-up-call analysis of confidentiality and third-party disclosure risk. 🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters.
    In our conversation, we cover the following:
    00:00 — The "superhuman assistant" promise, and the procedural nightmare risk. 🧠⚖️
    00:01 — The core warning: AI use can "blow a hole" in privilege.
    00:02 — Editorial overview: "The AI Privilege Trap" by Michael D.J. Eisenberg.
    00:02 — The case: United States v. Hoeppner (SDNY) and why it matters.
    00:03 — Why Judge Jed Rakoff's opinion gets attention (tech-literate, influential).
    00:03 — The facts: defendant drafts with a public AI tool, then sends outputs to counsel.
    00:04 — The court's conclusion: no attorney-client privilege, no work product protection.
    00:05 — Privilege basics applied to AI: "confidential + lawyer" and why AI fails that test.
    00:06 — The Terms-of-Use problem: inputs/outputs may be collected and shared. 🧾
    00:07 — The "stranger on the street" analogy: you can't retroactively make it confidential.
    00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk.
    00:08 — ABA Model Rule 1.1: competence includes understanding tech risks.
    00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI.
    00:10 — "Reasonable safeguards": read policies, adjust settings, and know training/logging.
    00:11 — Public vs. enterprise AI: why contracts and "walled gardens" matter.
    00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings.
    00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant/vendor.
    00:13 — Redefining "tech-savvy lawyer" in 2026: judgment and restraint. 🧭
    00:14 — The "straight-face test": could you defend confidentiality after a judge reads the policy?
    00:15 — Client-side risk: clients can sabotage privilege before contacting counsel.
    00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒
    RESOURCES
    Mentioned in the episode:
    ABA Model Rules of Professional Conduct (Rules 1.1, 1.4, 1.6, 5.1, 5.3)
    Software & cloud services mentioned in the conversation:
    Lexis (Lexis+ AI category mentioned) — https://www.lexisnexis.com/
    Microsoft Word — https://www.microsoft.com/microsoft-365/word
    Public generative AI "chatbot" tools (general category) — https://en.wikipedia.org/wiki/Chatbot
    Westlaw (Westlaw AI category mentioned) — https://legal.thomsonreuters.com/en/products/westlaw
    19 min
  • TSL.P Labs 🧪: Lawyers and AI Oversight: What the VA's Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖
    2026/02/20
    Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, "Lawyers and AI Oversight: What the VA's Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖" and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer.
    In our conversation, we cover the following:
    00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that's a problem ⚖️
    00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a "mirror" for the legal profession 🩺➡️⚖️
    00:03:00 – "Speed without governance": what the VA Inspector General actually warned about, and why it matters to your practice
    00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law
    00:06:00 – Shadow AI in law firms: staff "just trying out" public chatbots on live matters and the unseen risk this creates
    00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking
    00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice
    00:09:00 – Competence in the age of AI: why "I'm not a tech person" is no longer a safe answer 🧠
    00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box
    00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI
    00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations
    00:12:00 – From slogan to system: why "meaningful human engagement" must be operationalized, not just admired
    00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪
    00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test
    00:14:00 – You don't need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI
    00:15:00 – Risk mapping: distinguishing administrative AI use from "safety-critical" lawyering tasks
    00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards
    00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use
    00:17:00 – Why governance is not "just for BigLaw" and how solos can implement checklists and simple documentation 📋
    00:17:45 – Updating engagement letters and talking to clients about AI use in their matters
    00:18:00 – Redefining the "human touch" as the safety mechanism that makes AI ethically usable at all 🤝
    00:19:00 – AI as power tool: why lawyers must remain the "captain of the ship" even when AI drafts at lightning speed 🚢
    00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for?
    00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time?
    00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI
    RESOURCES
    Mentioned in the episode:
    American Bar Association Model Rules of Professional Conduct
    Terry Gerton's Federal News Network interview with Charyl Mason, Inspector General of the Department of Veterans Affairs: "VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line."
    Software & cloud services mentioned in the conversation:
    ChatGPT — https://chat.openai.com/
    Lexis — https://www.lexisnexis.com
    Westlaw — https://legal.thomsonreuters.com
    23 min
  • 🎙️ Ep. #131, Supercharging Litigation With AI: How StrongSuit Helps Lawyers Transform Research, Doc Review, and Drafting 💼⚖️
    2026/02/17
    My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy.
    Join Justin and me as we discuss the following three questions and more!
    1. What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy?
    2. What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk?
    3. Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice?
    In our conversation, we cover the following:
    00:00 – Welcome and guest introduction: Justin joins the show and shares his current tech setup at his desk.
    00:00–01:00 – Justin's current tech stack: Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks. Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making.
    01:00–02:00 – Android vs. iPhone for AI use: Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability.
    02:00–05:30 – Q1: Top three ways litigators should be using AI right now: Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities. Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment. Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks.
    05:30–07:30 – StrongSuit vs. basic tools like Word grammar check: How StrongSuit aims to "up-level" a lawyer's writing, not just catch typos. Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents.
    06:00–08:00 – AI context limits and scaling doc review: Constraints of large models' context windows (around ~1M tokens ≈ ~750 pages). How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights.
    08:00–09:00 – Handling tens of thousands of documents: How StrongSuit can handle between roughly 10,000–50,000 pages at a time, with the ability to scale further for enterprise matters.
    09:00–11:30 – Origin story of StrongSuit: Why Justin saw a once-in-a-generation opportunity when large language models emerged, and how law, with its precedent and text-heavy nature, is especially suited to AI. StrongSuit's focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step.
    11:30–13:30 – From intake to brief drafting in minutes: Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions. StrongSuit's long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment.
    12:00–14:30 – How StrongSuit tackles hallucinations: Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more. Validating citations by checking whether the Bluebook citation actually exists in StrongSuit's case database before surfacing it to the user. Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations.
    14:30–16:30 – Coverage and jurisdictions: Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases. Handling most regulations from administrative agencies, and limits around local ordinances. Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows.
    15:00–17:00 – Security and confidentiality for litigators: SOC 2 compliance and industry-standard encryption at rest and in transit. No model training on user data. Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys.
    16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation: Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents. Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract ...
    35 min
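The chunked, parallel document-review pattern described in Ep. #131 (small page sets per agent, findings merged afterward) can be sketched in a few lines. This is purely an illustrative sketch, not StrongSuit's actual code or API: `review_chunk`, the keyword filter, and the 50-page chunk size are all hypothetical stand-ins for a real per-agent model call and budget.

```python
# Illustrative sketch (not StrongSuit's implementation): split a large
# document set into small chunks and review them in parallel, then merge
# each worker's findings into one relevance list.
from concurrent.futures import ThreadPoolExecutor

PAGES_PER_CHUNK = 50  # hypothetical per-agent page budget

def review_chunk(pages):
    """Stand-in for a per-agent model call; here it simply flags
    pages that mention the keyword 'privilege'."""
    return [p for p in pages if "privilege" in p.lower()]

def parallel_review(pages, chunk_size=PAGES_PER_CHUNK, workers=8):
    # Break the corpus into fixed-size chunks, one per agent.
    chunks = [pages[i:i + chunk_size] for i in range(0, len(pages), chunk_size)]
    relevant = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves chunk order, so merged results stay stable.
        for result in pool.map(review_chunk, chunks):
            relevant.extend(result)
    return relevant

docs = ["Motion to dismiss ...", "Email re: attorney-client privilege ..."]
print(parallel_review(docs))
```

The real system would add the cross-chunk "cohesion" heuristics Justin mentions (shared context between agents); this sketch only shows the fan-out/merge skeleton.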
No reviews yet