EP33 | Is Your Environment Readable Enough for AI to Reason About?
Overview
This week, David pulls a thread across six seemingly unrelated Microsoft GitHub repos and lands on a question the industry keeps skipping: whether the systems we're asking AI to reason about are actually readable enough for that reasoning to mean anything. It's a more durable framing than the usual "what can AI do for you" conversation, and it connects to everything from security posture to delivery backlogs to architectural fragility.
Cyrus covers a run of Microsoft security and identity updates: Entra license usage insights hitting GA, cross-tenant security group sync, Global Secure Access B2B support for AVD and Windows 365, macOS recovery lock, Defender promotional email handling, new advanced hunting tables, incident graph filtering, and Sentinel repositories reaching GA. He also highlights Chrome's rollout of device-bound session credentials, a hardware-backed fix for browser token theft, a class of attack that has been trivially exploitable for over a decade.
Richard covers a Microsoft Research paper on red teaming networks of AI agents. The study ran over a hundred autonomous agents interacting through forums and messages, and identified four failure modes that only emerge when agents interact at scale: propagation, amplification, trust capture, and proxy chain invisibility. The researchers also observed emergent defences, with a small number of agents spontaneously developing security-conscious behaviour.
The episode closes with GitHub's shift from premium request units to token-based AI credits starting in June, and shared stories comparing what happens when agentic coding tools decide to ignore you.