Episodes

  • Episode 06: AI Brief: GPT-5.3 and continuity controls
    2026/03/04

    Two current operator signals, translated into one concrete next-week action block.

    • OpenAI released GPT-5.3 Instant and published system-card details.
    • Vendor continuity pressure stayed elevated amid Anthropic policy-dispute and blacklist-risk signals.
    • A 30-minute Monday control loop to keep model release and fallback controls current.
    MONDAY ACTIONS
    1. Treat model releases as workflow change events, not just product updates.
    2. Run a 3-prompt regression pack before broad rollout after model changes.
    3. Confirm rollback owner + stop authority for critical AI workflows.
    4. Define one tested fallback path for top three AI-enabled workflows.
    5. Send a plain-language operator memo each Monday (approved/restricted/escalation).
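The 3-prompt regression pack in step 2 can be sketched as a minimal gate that runs before broad rollout. The prompts, pass/fail checks, and `run_model` stub below are illustrative placeholders, not the show's actual pack:

```python
# Minimal sketch of a 3-prompt regression pack run after a model change.
# run_model is a stand-in stub; prompts and checks are illustrative only.

def run_model(prompt: str) -> str:
    """Placeholder for the real model call (swap in your API client)."""
    canned = {
        "What is 2 + 2?": "4",
        "Reply with the word READY.": "READY",
        "List our three approved tools.": "tool-a, tool-b, tool-c",
    }
    return canned.get(prompt, "")

REGRESSION_PACK = [
    # (prompt, pass/fail check on the model output)
    ("What is 2 + 2?", lambda out: "4" in out),
    ("Reply with the word READY.", lambda out: out.strip() == "READY"),
    ("List our three approved tools.", lambda out: "tool-a" in out),
]

def run_pack() -> dict:
    """Return per-prompt pass/fail plus an overall rollout gate decision."""
    results = {prompt: check(run_model(prompt)) for prompt, check in REGRESSION_PACK}
    results["rollout_approved"] = all(results.values())
    return results
```

The point is the shape, not the checks: a fixed pack, a named approver reading the pass/fail output, and rollout blocked when any check fails.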
    TIMESTAMPS
    • 00:00 Cold open + framing
    • 00:39 Boundary note complete / theme intro in
    • 00:54 Signal 1: GPT-5.3 Instant and release governance
    • 02:25 Signal 2: vendor continuity pressure
    • 03:45 Monday action block (30-minute control loop)
    • 04:31 Close + outro
    SOURCES
    • https://openai.com/index/gpt-5-3-instant/
    • https://openai.com/index/gpt-5-3-instant-system-card/
    • https://www.anthropic.com/news/statement-comments-secretary-war
    • https://techcrunch.com/2026/03/02/tech-workers-urge-dod-congress-to-withdraw-anthropic-label-as-a-supply-chain-risk/
    • https://techcrunch.com/2026/02/27/anthropic-vs-the-pentagon-whats-actually-at-stake/
    LINKS
    • Episode page: https://www.michaelhbm.com/AIChangeDesk/episodes/brief-2026-03-04-ai-brief.html
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD

    AI-assisted tools were used in parts of research and production support. Final editorial judgment and release approval remained human-led. This is operational guidance, not legal advice.

    5 min
  • AI Brief: what changed this week
    2026/02/25

    Two operator-relevant signals from this week, translated into concrete controls teams can execute immediately.

    • Distillation attacks moved from a model-lab concern to an enterprise operations risk.
    • NIST's AI Agent Standards Initiative reinforced near-term interoperability and accountability expectations.
    • A 25-minute weekly governance desk loop you can run every Monday.
    MONDAY ACTIONS
    1. Treat provider security bulletins as workflow events, not background reading.
    2. Classify AI usage into open-assist, controlled-assist, and restricted classes.
    3. Add interoperability and control portability checks to AI procurement intake.
    4. Require a human accountability map for every agent-like workflow.
    5. Ship a one-page operator update: what changed, what to do, what not to do.
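The three usage classes in step 2 could be captured as a simple registry that the governance desk reviews weekly. The tool uses, review rules, and class assignments below are purely illustrative, not policy recommendations:

```python
# Illustrative registry for the open-assist / controlled-assist / restricted
# split from step 2. Entries below are placeholder examples only.

USAGE_CLASSES = {
    "open-assist":       {"review": "none",           "examples": ["grammar check"]},
    "controlled-assist": {"review": "human approval", "examples": ["code review draft"]},
    "restricted":        {"review": "prohibited",     "examples": ["customer PII analysis"]},
}

def classify(tool_use: str, registry: dict) -> str:
    """Return the usage class for a registered tool use.

    Anything not explicitly registered falls into the most
    conservative class by default.
    """
    for usage_class, spec in registry.items():
        if tool_use in spec["examples"]:
            return usage_class
    return "restricted"
```

Defaulting unknown uses to "restricted" mirrors the episode's framing: new AI usage is an exception to escalate, not a gap to tolerate.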
    TIMESTAMPS
    • 00:00 Cold open: policy that cannot survive Monday is policy theater
    • 01:00 Theme intro
    • 01:16 Framing and disclosure
    • 01:57 Signal 1: distillation attacks and model-control hardening
    • 04:30 Signal 2: standards momentum as procurement and controls signal
    • 06:57 Monday checklist: 25-minute governance desk
    • 08:06 Close
    • 08:18 Final reminder: one owner, one decision, one due date
    • 08:27 Brand outro
    SOURCES
    • https://www.anthropic.com/news/detecting-and-preventing-distillation-attacks
    • https://www.businessinsider.com/anthropic-deepseek-distillation-minimax-moonshot-ai-2026-2
    • https://www.nist.gov/caisi/ai-agent-standards-initiative
    • https://www.ansi.org/standards-news/all-news/2-18-26-nist-launches-ai-agent-standards-initiative
    • https://www.nist.gov/news-events/news/2026/02/nist-seeks-public-input-advance-ai-agent-interoperability-and-efficiency
    LINKS
    • Episode page: https://www.michaelhbm.com/AIChangeDesk/episodes/brief-2026-02-25-ai-brief.html
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD

    AI-assisted tools were used in research and production support. Final editorial judgment and release approval remained human-led.

    9 min
  • AI Brief | EP008: Model release control validation
    2026/03/11

    Two current operator signals, translated into a plain-language weekly control block.

    • OpenAI announced plans to acquire Promptfoo, pushing testing/eval workflows further into default AI release practice.
    • Anthropic launched The Anthropic Institute, while NIST reinforced monitoring guidance for deployed AI systems.
    • A 35-minute operator block you can run weekly with one owner and clear pause authority.
    MONDAY ACTIONS
    1. Require a tiny evidence packet for each AI behavior change (3 prompts + pass/fail + approver + rollback owner).
    2. Publish a one-page operator memo in plain language (approved, restricted, paused, exception path, next review).
    3. Run one mini pause drill each week: "output is wrong; who pauses in 10 minutes?"
    4. Block scale-up on any workflow missing named approver or rollback owner.
    TIMESTAMPS
    • 00:00 Cold open + framing
    • 00:55 Boundary note complete / theme intro in
    • 01:10 Signal 1: OpenAI/Promptfoo and release evidence
    • 03:58 Signal 2: Anthropic Institute + NIST monitoring pressure
    • 06:05 Next-week 35-minute action block
    • 07:25 Close + outro
    SOURCES
    • https://openai.com/index/openai-to-acquire-promptfoo/
    • https://www.promptfoo.dev/blog/promptfoo-joining-openai
    • https://techcrunch.com/2026/03/09/openai-acquires-promptfoo-to-secure-its-ai-agents/
    • https://www.anthropic.com/news/the-anthropic-institute
    • https://www.theverge.com/ai-artificial-intelligence/892478/anthropic-institute-think-tank-claude-pentagon-jack-clark
    • https://www.nist.gov/news-events/news/2026/03/new-report-challenges-monitoring-deployed-ai-systems
    • https://www.nist.gov/publications/challenges-monitoring-deployed-ai-systems-center-ai-standards-and-innovation
    LINKS
    • Episode page: https://www.michaelhbm.com/AIChangeDesk/
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD

    AI-assisted tools were used in parts of research and production support. Final editorial judgment and release approval remained human-led. This is operational guidance, not legal advice.

    10 min
  • AI Change Desk | EP007: Security Workflow Control Contract
    2026/03/09
    If your AI can find a vulnerability, draft a patch, and open a PR, your biggest risk is no longer detection quality. Your biggest risk is workflow ownership:
    • who can analyze,
    • who can approve,
    • who can merge,
    • who can pause,
    • and who can attest the execution chain under pressure.
    This episode translates four current signals into one operational playbook for next week.

    WHAT CHANGED THIS WEEK
    1. OpenAI launched Codex Security in research preview (2026-03-06).
    2. Anthropic + Mozilla published concrete AI-assisted vulnerability workflow details (2026-03-06), including CVD and exploit-analysis references.
    3. NIST published AI 800-4 on monitoring deployed AI systems (2026-03-06).
    4. OpenAI launched GPT-5.4 and ChatGPT for Excel beta (2026-03-05), expanding business-user AI execution surfaces.

    OPERATOR TRANSLATION
    • Treat AI security pipelines as action-controlled workflows, not assistant features.
    • Separate discovery throughput from remediation readiness.
    • Move monitoring from dashboarding to a named ownership control.
    • Add spreadsheet-AI usage controls where sensitive decisions or data handling occur.

    MONDAY BLOCK (45 MINUTES, ONE OWNER)
    • Minute 0-10: action matrix lock (Analyze, Draft fix, Open PR, Merge, Deploy) with allowed/checkpointed/restricted levels.
    • Minute 10-20: credential and identity check (remove over-scoped inherited credentials).
    • Minute 20-30: evidence contract (logs, retention, export path, access controls).
    • Minute 30-40: disclosure + rollback ownership (name owners, define stop authority).
    • Minute 40-45: operator memo (what changed, what is approved, what is restricted, who approves exceptions, next review date).

    LINKS
    • Episode page: https://www.michaelhbm.com/AIChangeDesk/episodes/ep007-security-workflow-control-contract.html
    • YouTube channel: https://www.youtube.com/@AIChangeDesk
    • RSS show: https://media.rss.com/aichangedesk/feed.xml
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD

    SOURCES
    • OpenAI (2026-03-06): https://openai.com/index/codex-security-now-in-research-preview/
    • Anthropic + Mozilla collaboration post (2026-03-06): https://www.anthropic.com/news/mozilla-firefox-security
    • Anthropic coordinated disclosure policy (2026-03-06): https://www.anthropic.com/coordinated-vulnerability-disclosure
    • Anthropic exploit analysis (2026-03-06): https://red.anthropic.com/2026/exploit/
    • Mozilla Firefox blog corroboration (2026-03-06): https://blog.mozilla.org/en/firefox/hardening-firefox-anthropic-red-team/
    • NIST AI 800-4 publication page (2026-03-06): https://www.nist.gov/publications/challenges-monitoring-deployed-ai-systems-center-ai-standards-and-innovation
    • OpenAI GPT-5.4 launch (2026-03-05): https://openai.com/index/introducing-gpt-5-4/
    • OpenAI ChatGPT for Excel (2026-03-05): https://openai.com/index/chatgpt-for-excel/

    DISCLOSURE
    AI-assisted tools were used in parts of the research and production workflow. Final editorial judgment, risk posture, and release approval stayed human-led. This is operational guidance, not legal advice. These are my opinions and are not representative of any organization.
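The action matrix described in the Monday block can be sketched as a small permission table gating AI-initiated actions. The actions, control levels, and assignments below are illustrative, not a recommended policy:

```python
# Illustrative action matrix for an AI security pipeline: each workflow
# action gets exactly one control level. Assignments are examples only.

ALLOWED, CHECKPOINTED, RESTRICTED = "allowed", "checkpointed", "restricted"

ACTION_MATRIX = {
    "analyze":   ALLOWED,       # AI may run unattended
    "draft_fix": ALLOWED,
    "open_pr":   CHECKPOINTED,  # requires a named human approver
    "merge":     RESTRICTED,    # human-only, never AI-initiated
    "deploy":    RESTRICTED,
}

def may_proceed(action: str, human_approved: bool = False) -> bool:
    """Gate an AI-initiated action against the matrix.

    Unknown actions are treated as RESTRICTED so that new
    capabilities are blocked until someone classifies them.
    """
    level = ACTION_MATRIX.get(action, RESTRICTED)
    if level == ALLOWED:
        return True
    if level == CHECKPOINTED:
        return human_approved
    return False
```

Encoding the matrix as data rather than scattered conditionals makes the minute 0-10 "matrix lock" auditable: the table itself is the artifact the owner signs off on.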
    25 min
  • AI Change Desk | EP005: Run Agents Without Losing Control
    2026/03/02
    If AI systems can execute actions in your environment, governance has to move from policy language to access-control execution. This episode translates current signals into practical controls for operators: action-tier permissions, scoped credentials, human approval thresholds, deployment tier decisions, and a weekly control desk teams can run quickly.

    WHAT YOU WILL GET
    • A practical access-control framework for agent-enabled workflows.
    • Action-tier classification you can apply this week (read, draft, update-internal, external-send, system-admin).
    • A deployment control checklist for connected/hybrid/disconnected environments.
    • A standards-aligned procurement starter (identity, interoperability, proportional controls).
    • A Monday control desk + metrics scorecard + 30-60-90 implementation sequence.

    TIMESTAMPS
    • 00:00 Cold open: access control is the operating risk
    • 00:50 Intro, disclosure, and show contract
    • 02:15 Why EP005 now (bridge from EP003 + EP004)
    • 04:10 Story 1: Anthropic + Vercept and action-tier controls
    • 08:30 Story 2: OpenAI elevated-risk controls and malicious-use patterns
    • 12:10 Story 3: Sovereign deployment and architecture obligations
    • 15:35 Story 4: NIST standards + proportional controls
    • 18:55 Scenario walkthrough + risk check
    • 21:40 Monday Access Control Desk
    • 24:15 Metrics, 30-60-90 plan, FAQ, and control drills
    • 25:04 Close + outro

    MONDAY ACTIONS (RUN THIS NEXT WEEK)
    1. Classify top five AI workflows by action tier.
    2. Scope credentials for the highest-impact workflow.
    3. Name stop-authority owner for each critical workflow.
    4. Set approval thresholds for external-send and system-admin actions.
    5. Publish one-page operator update with approved/restricted actions and escalation path.

    SOURCES
    • https://www.anthropic.com/news/anthropic-acquires-vercept
    • https://techcrunch.com/2026/02/25/anthropic-acquires-vercept-to-expand-computer-use-agents/
    • https://openai.com/index/introducing-lockdown-mode-and-elevated-risk-labels-in-chatgpt-safety/
    • https://openai.com/index/disrupting-malicious-ai-uses/
    • https://www.microsoft.com/en-us/microsoft-cloud/blog/2026/02/24/announcing-sovereign-cloud-ai-updates/
    • https://www.microsoft.com/en-us/industry/blog/government/2026/02/24/accelerating-government-mission-with-microsoft-sovereign-cloud/
    • https://www.nist.gov/caisi/ai-agent-standards-initiative
    • https://www.nist.gov/artificial-intelligence/ai-agent-interoperability-and-efficiency-standards-request-information
    • https://digital-strategy.ec.europa.eu/en/library/eu-ai-office-and-jrc-publish-report-proportionality-ai
    • https://ai-watch.ec.europa.eu/publications/eu-ai-office-and-jrc-report-proportionality-trustworthy-ai

    LISTEN
    • YouTube: https://www.youtube.com/@AIChangeDesk
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295

    LISTENER QUESTION
    Where is your organization most exposed right now: permission scope, approval thresholds, or action logging?

    DISCLOSURE
    AI-assisted tools were used in parts of drafting, synthesis, and production support. Final editorial judgment and release approval remained human-led.
    25 min
  • AI governance implementation for operators: turning policy into weekly execution
    2026/02/23
    EP003: AI GOVERNANCE IMPLEMENTATION FOR OPERATORS

    AI governance breaks when it lives as a policy document and not as a weekly operating loop. In this main episode, we use current market signals (model updates, AI security tooling, regional deployment strategy, and standards activity) to show how leaders and operators can run governance as execution instead of theory.

    WHAT YOU WILL GET
    • A practical model-change governance workflow you can run every week.
    • Security workflow controls for AI-assisted code review.
    • Procurement and data-governance actions triggered by regional/partner deployment signals.
    • A reusable weekly AI Governance Desk format with owner, controls, and communication outputs.
    • A late-update block on alignment-research funding and regulated-industry deployment signals.

    TIMESTAMPS
    • 00:00 Cold open: governance is a workflow, not a PDF
    • 00:59 Intro music + disclosure
    • 01:20 Why this episode now (EP001/EP002 bridge)
    • 03:20 Story 1: Claude Sonnet 4.6 and model-change governance
    • 07:50 Story 2: Claude Code Security and human-in-the-loop controls
    • 12:20 Story 3: OpenAI for India + Tata and procurement reality
    • 16:00 Story 4: NIST AI agent interoperability signal
    • 18:10 Late updates: alignment funding + regulated-industry collaboration
    • 19:00 Weekly AI Governance Desk (25-minute operating loop)
    • 22:05 Postscript: chat-code controls + workflow-class policy mapping
    • 23:25 Monday morning actions
    • 24:25 Outro + listener question

    MONDAY MORNING ACTIONS
    1. Name one owner for weekly AI governance desk operations.
    2. Run a model-change regression check on your top workflows.
    3. Require human approval for AI-generated security patches/findings.
    4. Update procurement clauses (data handling, change notifications, sub-processors).
    5. Publish a one-page internal update: what changed, what to do, what not to do.

    SOURCES
    • https://www.anthropic.com/news/claude-sonnet-4-6
    • https://docs.anthropic.com/en/release-notes/api#feb-17th-2026
    • https://www.anthropic.com/news/claude-code-security
    • https://docs.anthropic.com/en/docs/claude-code/security
    • https://openai.com/index/openai-for-india/
    • https://www.tata.com/newsroom/openai-and-tata-group-announce-strategic-collaboration
    • https://www.nist.gov/news-events/news/2026/02/nist-seeks-public-input-advance-ai-agent-interoperability-and-efficiency
    • https://www.federalregister.gov/documents/2026/02/20/2026-02979/ai-agent-interoperability-and-efficiency-standards-request-for-information
    • https://openai.com/index/advancing-independent-research-ai-alignment/
    • https://alignmentproject.aisi.gov.uk/
    • https://www.anthropic.com/news/anthropic-infosys
    • https://www.infosys.com/newsroom/press-releases/2026/advanced-enterprise-ai-solutions-industries.html

    LISTEN
    • Spotify: https://open.spotify.com/show/5X1sLLTeULqFCdt7aaisGD
    • Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-change-desk/id1876677295

    DISCLOSURE
    AI-assisted tools were used in parts of drafting, synthesis, and production support. Final editorial judgment and release approval remained with the host.
    25 min
  • AI policy basics for operators: what this week changed
    2026/02/19

    EP002: AI policy basics for operators.

    This episode translates AI policy concepts into practical operating decisions for leaders, managers, and delivery teams.

    • Episode: 002
    • Title: AI policy basics for operators
    • Runtime: 10m 30s
    • Host: Michael Hanna-Butros Meyering

    AI policy works only when it is written as operational guidance people can apply in daily workflows.

    TIMESTAMPS
    • 00:00 Why AI policy fails in real teams
    • 01:20 Story 1: Claude Sonnet 4.6 and model-change governance
    • 04:40 Story 2: AI infrastructure cost signals and procurement controls
    • 07:40 Action block: policy + change management implementation
    • 09:40 Monday-morning actions + outro
    WHAT CHANGED THIS WEEK
    • Anthropic launched Claude Sonnet 4.6 (February 17, 2026), which reinforces the need for model-upgrade controls and evaluation gates in internal policy.
    • Anthropic announced it will cover electricity price increases tied to data-center growth (February 17, 2026), making infrastructure impact a practical procurement and governance issue.
    POLICY ESSENTIALS
    • Scope: which AI use cases are allowed, restricted, or prohibited.
    • Data: which data classes may be used with which tools.
    • Controls: review, logging, exception handling, and escalation.
    • Accountability: who owns policy updates and incident response.
    MONDAY ACTIONS
    • Add a model-change trigger section to your AI policy (when re-evaluation is mandatory).
    • Add three infrastructure-risk questions to AI vendor intake.
    • Run one manager briefing with a clear script for allowed/restricted use.
    • Audit one active AI workflow for drift between policy and real usage.
    SOURCES
    • Anthropic, “Announcing Claude Sonnet 4.6”: https://www.anthropic.com/news/claude-sonnet-4-6
    • TechCrunch coverage, “Anthropic releases Claude Sonnet 4.6”: https://techcrunch.com/2026/02/17/anthropic-releases-claude-sonnet-4-6/
    • Anthropic, “Covering electricity price increases from AI data centers”: https://www.anthropic.com/news/covering-electricity-price-increases
    • Reuters coverage (via Investing.com): https://www.investing.com/news/stock-market-news/anthropic-to-cover-electricity-price-increases-in-areas-where-it-builds-data-centers-3894580
    • NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
    • NIST Generative AI Profile: https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence
    • OECD AI Principles: https://oecd.ai/en/ai-principles
    • ISO/IEC 42001 overview: https://www.iso.org/standard/81230.html

    This episode uses AI-assisted production tools (voice rendering, editing support, and publishing automation). Final editorial and risk decisions are human-led.

    10 min
  • Welcome to AI Change Desk
    2026/02/11

    Welcome to episode one of AI Change Desk.

    This launch episode introduces the mission of the show and a practical framework you can use immediately to manage AI rollout decisions in your organization.

    • Episode: EP001
    • Title: Welcome to AI Change Desk
    • Runtime: 6m 25s (launch edition)
    • Host: Michael Hanna-Butros Meyering
    TIMESTAMPS
    • 00:00 Cold open: the 3 questions teams keep asking about AI
    • 00:42 Intro (show ID)
    • 00:57 Show mission: AI as an operating shift, not a tool announcement
    • 01:39 Plain-English definitions: AI, LLM, and change management
    • 02:34 Personal context + why this show exists
    • 03:21 Boundaries + AI-use disclosure
    • 04:06 Show contract: practical, credible, actionable
    • 04:39 4D Desk Memo: Decision, Data, Drift, Deployment
    • 05:30 Inner workflow: how this podcast is produced
    • 06:04 Listener question + outro
    • 06:15 Outro (show close)
    KEY TAKEAWAYS
    • AI rollouts fail more often from adoption and governance gaps than from model quality.
    • Treat AI changes as operational decisions with clear ownership and controls.
    • Use the 4D Desk Memo to make fast, defensible decisions: Decision, Data, Drift, and Deployment.

    This episode used AI-assisted production for:

    • Script drafting support
    • Voice synthesis through an authorized ElevenLabs voice model
    • Packaging and publishing automation

    Final editorial decisions, risk posture, and publication approval were made by Michael Hanna-Butros Meyering.

    PRODUCTION WORKFLOW
    • Daily research scan
    • Source verification and editorial filtering
    • Script lock in Context -> Impact -> Action format
    • Voice rendering through ElevenLabs API
    • Audio QA
    • RSS.com episode publishing
    • Google Cloud Storage + Google Sites web publishing
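The voice-rendering step in the workflow above can be sketched as a request builder. The endpoint path and `xi-api-key` header follow ElevenLabs' public text-to-speech REST API; the voice ID and model ID below are placeholders, not the show's real configuration:

```python
# Sketch of the voice-rendering step: assemble (but do not send) the HTTP
# request for one ElevenLabs text-to-speech render. The voice_id and
# model_id values are placeholders; only the endpoint/header shape is real.

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(voice_id: str, script_text: str, api_key: str) -> dict:
    """Return the pieces a client call like requests.post(**req) would need."""
    return {
        "url": f"{API_BASE}/text-to-speech/{voice_id}",
        "headers": {
            "xi-api-key": api_key,  # per-account key; keep out of source control
            "Content-Type": "application/json",
        },
        "json": {
            "text": script_text,
            "model_id": "eleven_multilingual_v2",  # assumed model choice
        },
    }
```

Separating request assembly from sending keeps the render step easy to dry-run in audio QA before any API quota is spent.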
    LINKS
    • ElevenLabs API quickstart: https://elevenlabs.io/docs/eleven-api/quickstart
    • RSS.com Core API docs: https://api.rss.com/v4/docs
    • Google Cloud Storage static hosting: https://cloud.google.com/storage/docs/hosting-static-website
    • Episode page: https://www.michaelhbm.com/AIChangeDesk/episodes/ep001-welcome-to-ai-change-desk.html
    • Transcript (TXT): https://storage.googleapis.com/site-app-html/AIChangeDesk/transcripts/ep001-welcome-to-ai-change-desk.txt
    • RSS feed: https://media.rss.com/aichangedesk/feed.xml

    What is one AI-related decision your organization keeps postponing right now?

    6 min