AI Governance Illusion: Hidden Risks & Accountability in ITSM

Overview

AI governance maturity can be misleading. Many organizations rely on frameworks, policies, and dashboards that signal control but fail to reflect true understanding of AI systems. This episode explores the Governance–Understanding Gap, highlighting why unclear decision ownership and limited system insight create hidden risks in AI, ITSM, and Enterprise Service Management environments.


In this episode, we answer:

What is the worst decision an AI system could realistically make in practice?

Which AI system in the organization is least understood and hardest to explain?

If an AI system makes a harmful decision, who is accountable for it?


Resources Mentioned in this Episode:

NIST, "AI Risk Management Framework": https://www.nist.gov/itl/ai-risk-management-framework

European Commission, "Artificial Intelligence" policy: https://digital-strategy.ec.europa.eu/en/policies/artificial-intelligence

ISO, ISO/IEC 42001 standard: https://www.iso.org/standard/81230.html

MIT Sloan Management Review, "A Framework for Assessing AI Risk": https://mitsloan.mit.edu/ideas-made-to-matter/a-framework-assessing-ai-risk

Stanford Human-Centered AI, "AI Index 2025": https://aiindex.stanford.edu


Connect with me on:

LinkedIn: https://www.linkedin.com/in/theitsmpractice/

Website: http://www.theitsmpractice.com

And if you want more tips and guidance, follow me on LinkedIn, where I share daily posts on Enterprise Service Management, IT Service Management, and IT Security.


Credits:

Sound engineering by Alan Southgate - http://alsouthgate.co.uk/


Graphics by Yulia Kolodyazhnaya