277: TechTime Radio: "THANKS" Giving Episode with Dubai’s Flying Taxis, Australia’s Teen Social Ban, CVE vs Hackers, Nike’s Robo Shoes, Unsafe AI Toys, Black Friday Deals, with Guest Nick Espinosa | Air Date: 11/25 - 1/1/25

About this content

What happens when a holiday “thankful” theme clashes with cutting-edge technology, bold policies, and some notable missteps? We begin with Dubai’s high-profile plan to introduce flying taxis and ask tough questions: can eVTOLs truly reduce travel time after accounting for boarding, airspace management, and vertiport capacity—or will they just be expensive toys hovering above gridlocked cities?

Next, we discuss Australia’s eye-catching ban on social media for users under 16. We openly address the issues it aims to solve—cyberbullying, grooming, and addictive content—and consider the potential loss of social and educational benefits for teens, along with the challenges of age verification, VPN use, and platform switching.

Our guest, cybersecurity expert Nick Espinosa, highlights the CVE database, the quiet backbone of global vulnerability management. When defenders respond swiftly, it's because CVE gives everyone the same map. We connect that to real-world enforcement—the arrest of a suspected Russian hacker in Thailand through international cooperation—and to the fast-evolving frontline where AI counters AI. Modern defense depends on machine learning and deep learning systems that learn from CVEs, hunt indicators of compromise, and respond faster than humans, narrowing the gap on attackers who already automate their playbooks.

We also examine Nike's provocative concept of "e-bikes for your feet," discussing when robotic assistance improves mobility and recovery—and when it risks becoming a shortcut that trades effort for convenience. Then we spotlight a clear tech fail: AI toys that used a loosely constrained model to serve inappropriate, unsafe content to children before being pulled from shelves. It's a vivid reminder that guardrails aren't optional in consumer AI. We close with practical value: a whiskey worth savoring, laptop deals that are truly worth it, and a warning to hold off on TV purchases until Super Bowl season.

If this blend of skeptical analysis, useful tips, and cybersecurity insights appeals to you, follow the show, share with a friend, and leave a quick review—what story made you nod, and which one made you say “nope”?

Support the show