
119: Polished incoherence and other marvels of modernity

About this episode

Glittery bags of words, scatterbrained tutors, or random concept triggers?


In this one we feel our way through the murky reality of AI tools—reaching our tentacles beyond all the silt that's been stirred up in the hype and panic. We think we've found some interesting nooks and crannies.


We kick off with yet another "oops, used AI without checking" message that we received, then share thoughts triggered by our own experiments with LLM-powered ritual dissent (as mentioned in the previous episode – email tentacles@crownandreach.com if you'd like a copy of the prompt).


Then we explore where tools like LLMs could be genuinely helpful versus where they're simply expensive confusion generators, with reference to some interesting experiments we've seen on our travels.


  • Effective at the extremes in the role of a tutor: when you're an expert OR a complete beginner, not somewhere in the middle
  • The "random number generator" theory of LLMs as a trigger for concepts, ideas and processes you already know
  • Potential for designing LLM interactions that don't dumb you down
  • Why high-fidelity outputs are no longer a good proxy for high-quality thinking – the decades-long descent into polished incoherence
  • Bag-of-words theory: why LLMs can't, by their nature, generate coherence – only fluency
  • Real examples of where AI can save time (e.g. risk assessment templates) vs. where it fails (e.g. original strategy or thinking)
  • How to avoid the "vibe-coded prototype" trap in both design and thinking (and possibly why most people still won't, even though it's technically easier than ever).


References


  • Gerald Weinberg's classic "Secrets of Consulting" https://archive.org/details/secretsofconsult0000wein
  • Hazel Weakly's excellent piece on AI https://hazelweakly.me/blog/stop-building-ai-tools-backwards/
  • Vaughn Tan's paper prototype that scaffolds critical thinking with LLMs https://vaughntan.org/aiux
  • Ed Zitron's Where's Your Ed At – the firebrand pointing out the nakedness of the emperor https://www.wheresyoured.at
  • Pavel Samsonov's solid critique https://productpicnic.beehiiv.com/p/human-in-the-loop-is-a-thought-terminating-cliche
  • Philip Morgan ... couldn't find where he wrote about aspects of risk capacity, but he's here: https://philipmorganconsulting.com/
  • Dave Snowden's Ritual Dissent https://cynefin.io/wiki/Ritual_dissent
  • Our method Multiverse Mapping https://multiversemapping.com
  • Our method Pitch Provocations (old episodes 007-009 for a rough intro) https://shows.acast.com/triggerstrategy
  • Class action lawsuit against Anthropic re: training data https://www.lieffcabraser.com/anthropic-author-contact/

Find out more about us and our work at crownandreach.com

Hosted on Acast. See acast.com/privacy for more information.
