
S2E21 - Building Safer Agentic AI

Overview

Agentic AI is moving fast — from experiments and copilots to systems that can plan, decide, and act over time. As these systems become more capable, an important question follows: how do we make sure they remain safe, trustworthy, and aligned with human intent?

In this extra episode of Season 2, Rob Price is joined by Nell Watson — AI ethics researcher, author, and Chair of the IEEE Safer Agentic AI Safety Experts Focus Group — to explore what safer agentic AI means in practice.

Rather than focusing on abstract risks or distant futures, the conversation looks at:

  • how agentic AI is being built and adopted today

  • where organisations and founders most often underestimate safety

  • how principles like alignment, epistemic hygiene, and goal limits show up in real products and operating models

  • why leaders may want to engage with agentic AI safety before regulation or incidents force the issue

As agentic systems become more capable and more embedded in organisations, what does “safe enough” look like — and who gets to decide?


Comments are open — how are you deciding when and how to act on AI?

Please subscribe to Futurise to hear first about future episodes.
