In this episode of Risk! Engineers Talk Governance, due diligence engineers Richard Robinson and Gaye Francis discuss AI in risk management.
Richard begins with a deep-dive into how large language models work, and where they fall short. He explains why AI systems are sophisticated inference engines rather than true reasoning machines, and why that distinction matters enormously for high-stakes decision-making and risk management.
The conversation covers the parallels between AI and Monte Carlo simulation (great for likely scenarios, unreliable for rare critical events), the growing wave of fabricated legal citations produced by AI tools, and why the common law system itself mirrors how large language models operate.
Gaye and Richard then bring the discussion back to governance: what does responsible AI use look like for boards and organisations? Who carries liability when a decision is based on AI output? And how do you ensure the sources AI cites are actually real?
They conclude by agreeing that AI is a powerful tool for gathering information faster than ever before, but it demands that essential second layer of human thought, verification, and documented decision-making.
They reiterate that thinking, and demonstrating SFAIRP, is hard.
If you’d like us to cover a specific topic or have any feedback we’d love to hear from you. Email admin@r2a.com.au.
For further information on Richard and Gaye’s consulting work with R2A, head to https://www.r2a.com.au, where you’ll also find their booklets (store) and a sign-up for their quarterly newsletter to keep informed of their latest news and events.
Gaye is also founder of Australian women’s safety workwear company Apto PPE https://www.aptoppe.com.au.