AI Agents, Failed Pilots, and the Human Risk Layer w/ Jason Todd Wade of BackTier
Summary
BackTier.com
https://nvimal.com/
https://www.stellarhorn.com/about
Jason Wade talks with Ryan Drumheller and Nikhil Vimal about the real-world mess of AI adoption: failed pilots, unclear strategy, vibe coding, AI agents, cybersecurity risk, and the human guardrails companies keep skipping.
Ryan brings the fractional CIO view: companies want AI but often do not know what problem they are trying to solve. Nikhil brings the enterprise AI and startup lens, explaining why so many AI pilots fail when companies rush into tools without strategy, data discipline, or governance.
The conversation covers why “we need AI” is not a plan, how tools like Copilot, Claude, GPT, Gemini, Base44, and Lovable are being used, and why rapid prototypes are useful but not enough. The deeper issue is usually hidden data, unclear workflows, weak training, and poor ownership.
The strongest section focuses on AI agents. Agents can create serious leverage, but they can also delete code, break systems, expose data, or create operational risk when given too much access. Ryan’s key point: treat agents like team members. Give them permissions, guardrails, supervision, and backups.
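The "treat agents like team members" idea can be made concrete. The sketch below is hypothetical (the class and method names are not from the episode); it shows one way to gate an agent's tool calls behind an explicit permission set and snapshot state before any destructive action, mirroring the permissions, guardrails, and backups Ryan describes.

```python
# Hypothetical sketch: an AI agent wrapper with an explicit permission set
# (guardrails) and automatic backups before destructive operations.
import copy


class GuardedAgent:
    def __init__(self, permissions):
        self.permissions = set(permissions)  # tools this agent may call
        self.backups = []                    # snapshots taken before risky ops

    def run_tool(self, tool_name, state, destructive=False):
        # Deny anything outside the agent's explicit permission set.
        if tool_name not in self.permissions:
            return {"ok": False, "reason": f"permission denied: {tool_name}"}
        # Snapshot state before destructive operations so a human can roll back.
        if destructive:
            self.backups.append(copy.deepcopy(state))
        return {"ok": True, "tool": tool_name}


agent = GuardedAgent(permissions={"read_docs", "update_record"})
print(agent.run_tool("read_docs", state={}))        # allowed
print(agent.run_tool("delete_table", state={}))     # denied: not in permissions
agent.run_tool("update_record", state={"rows": 3}, destructive=True)
print(len(agent.backups))  # one backup taken before the destructive call
```

Supervision here is the human reviewing denied calls and backups; the point is that an agent's blast radius is set by the permissions you grant it, not by the model itself.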
Key Topics
- Failed AI pilots
- Fractional CIO perspective
- Enterprise AI adoption
- Vibe coding and prototypes
- Copilot, Claude, GPT, Gemini
- Base44 and Lovable
- AI agents
- Cybersecurity risk
- Data quality
- Human guardrails
- Backups and permissions
- AI for creativity and productivity