AI Safety 101: Manipulation, Hallucinations & Defense
The tools we trust most can deceive us fastest. In this episode of SipCyber, Jen Lotze brings insights straight from Wild West Hackin' Fest—one of the premier ethical hacking conferences—to Wabasha Brewing in St. Paul, MN. Fresh off an AI cybersecurity course, Jen breaks down two critical vulnerabilities in the AI tools millions of us use daily: manipulation and hallucination.
AI assistants like ChatGPT, Gemini, and Perplexity are powerful—but they're not infallible. Bad actors are weaponizing clever prompts to bypass safety protocols, while even well-intentioned queries can return confidently incorrect answers. The result? Phishing scams that pass the smell test, fake citations that look real, and advice that could lead you astray.
Key Topics Covered:
- How threat actors manipulate AI to create believable phishing attacks
- What "AI hallucination" really means—and why it's dangerous
- Why you must verify every critical AI-generated answer
- Treating AI like a research assistant, not a trusted expert
- Real-world tips from Wild West Hackin' Fest's AI security training
This isn't anti-AI—it's pro-awareness. The same critical thinking that protects you from phishing emails applies to the machines we're inviting into our workflows.
☕ Featured Spot: Wabasha Brewing, St. Paul, MN
Don't let AI do your thinking for you. Subscribe for weekly cybersecurity insights delivered from the best local spots across the country—and share this with anyone using AI at work.
#AI #ArtificialIntelligence #ChatGPT #AIHallucination #CyberSecurity #Phishing #EthicalHacking #WildWestHackinFest #InfoSec #AIRisks #SipCyber #DigitalSafety #LLM