TSL Labs 🧪 Initiative: Attorney-Client Privilege vs. Public AI: The Hoeppner Decision Lawyers Need to Understand in 2026 ⚖️🤖

Overview

Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial "AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear." ⚖️🤖 Our Google NotebookLM hosts break down why a single click on a public AI tool's Terms of Use can trigger a privilege waiver, and what "tech competence" really means in 2026, especially after United States v. Hoeppner and Judge Jed Rakoff's wake-up-call analysis of confidentiality and third-party disclosure risk.

🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters.

In our conversation, we cover the following:

00:00 — The "superhuman assistant" promise, and the procedural-nightmare risk. 🧠⚖️
00:01 — The core warning: AI use can "blow a hole" in privilege.
00:02 — Editorial overview: "The AI Privilege Trap" by Michael D.J. Eisenberg.
00:02 — The case: United States v. Hoeppner (SDNY) and why it matters.
00:03 — Why Judge Jed Rakoff's opinion gets attention (tech-literate, influential).
00:03 — The facts: the defendant drafts with a public AI tool, then sends the outputs to counsel.
00:04 — The court's conclusion: no attorney-client privilege, no work-product protection.
00:05 — Privilege basics applied to AI: "confidential + lawyer," and why AI fails that test.
00:06 — The Terms-of-Use problem: inputs and outputs may be collected and shared. 🧾
00:07 — The "stranger on the street" analogy: you can't retroactively make it confidential.
00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk.
00:08 — ABA Model Rule 1.1: competence includes understanding tech risks.
00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI.
00:10 — "Reasonable safeguards": read policies, adjust settings, and know training/logging practices.
00:11 — Public vs. enterprise AI: why contracts and "walled gardens" matter.
00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings.
00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant or vendor.
00:13 — Redefining the "tech-savvy lawyer" in 2026: judgment and restraint. 🧭
00:14 — The "straight-face test": could you defend confidentiality after a judge reads the policy?
00:15 — Client-side risk: clients can sabotage privilege before contacting counsel.
00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒

RESOURCES

Mentioned in the episode:
ABA Model Rules of Professional Conduct (Rules 1.1, 1.4, 1.6, 5.1, 5.3)

Software & cloud services mentioned in the conversation:
Lexis (Lexis+ AI category mentioned) — https://www.lexisnexis.com/
Microsoft Word — https://www.microsoft.com/microsoft-365/word
Public generative AI "chatbot" tools (general category) — https://en.wikipedia.org/wiki/Chatbot
Westlaw (Westlaw AI category mentioned) — https://legal.thomsonreuters.com/en/products/westlaw