
AI Security and Agentic Risks Every Business Needs to Understand with Alexander Schlager
In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation.
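
The episode describes runtime protection as a proxy layer sitting between the user and the model, inspecting traffic in both directions. As a rough, purely illustrative sketch (not AIceberg's actual product), such a proxy could look like the Python below; the names `check_prompt`, `check_response`, and `call_llm` are hypothetical placeholders:

```python
# Illustrative sketch of a "runtime protection" proxy between a user and an LLM.
# The checks and names here are hypothetical, not AIceberg's implementation.

BLOCKED_PATTERNS = ["ignore previous instructions", "reveal the system prompt"]

def check_prompt(prompt: str) -> bool:
    """Screen the user's prompt before it reaches the model."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def check_response(response: str) -> bool:
    """Screen the model's output before it reaches the user."""
    return "BEGIN PRIVATE KEY" not in response  # e.g. block secret leakage

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (hosted API or local model)."""
    return f"Model answer to: {prompt}"

def protected_query(prompt: str) -> str:
    """The proxy flow: validate input, call the model, validate output."""
    if not check_prompt(prompt):
        return "Request blocked by runtime policy."
    response = call_llm(prompt)
    if not check_response(response):
        return "Response withheld by runtime policy."
    return response

if __name__ == "__main__":
    print(protected_query("Summarize our Q3 security posture."))
```

In a real deployment the simple pattern checks above would be replaced by the kind of model-based detection and explainable decisions discussed in the episode, but the proxy position in the request path is the key idea.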
Chapters
00:00 Introduction to AI Security and AIceberg
02:45 The Evolution of AI Explainability
05:58 Runtime Protection and AI Safety
07:46 Adoption Patterns in AI Security
10:51 Agentic AI: Risks and Management
13:47 Building Effective Agentic AI Workflows
16:42 Governance and Compliance in AI
19:37 The Role of Cybersecurity in AI Innovation
22:36 Lessons Learned and Future Directions
Episode # 166
Today’s Guest: Alexander Schlager, Founder and CEO of AIceberg.ai
He founded a next-generation AI cybersecurity company that is revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools; he is restoring trust in an era of automation.
- Website: AIceberg.ai
- LinkedIn: Alexander Schlager
What Listeners Will Learn:
- Why real-time AI security and runtime protection are essential for safe deployments
- How explainable AI builds trust with users and regulators
- The unique risks of agentic AI and how to manage them responsibly
- Why AI safety and governance are becoming strategic priorities for companies
- How education, awareness, and upskilling help close the AI skills gap
- Why natural language processing (NLP) is becoming the default interface for enterprise technology
Keywords:
AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning
Resources:
- AIceberg.ai