Episodes

  • When the Coders Don’t Code: What Happens When AI Coding Tools Go Dark? | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9
    2025/10/08
    In this issue of the Future of Cybersecurity newsletter, Sean Martin digs into a topic that’s quietly reshaping how software gets built—and how it breaks: the rise of AI-powered coding tools like ChatGPT, Claude, and GitHub Copilot.

    These tools promise speed, efficiency, and reduced boilerplate—but what are the hidden trade-offs? What happens when the tools go offline, or when the systems built through them are so abstracted that even the engineers maintaining them don’t fully understand what they’re working with?

    Drawing from conversations across the cybersecurity, legal, and developer communities—including a recent legal tech conference where law firms are empowering attorneys to “vibe code” internal tools—this article doesn’t take a hard stance. Instead, it raises urgent questions:

    • Are we creating shadow logic no one can trace?
    • Do developers still understand the systems they’re shipping?
    • What happens when incident response teams face AI-generated code with no documentation?
    • Are AI-generated systems introducing silent fragility into critical infrastructure?

    The piece also highlights insights from a recent podcast conversation with security architect Izar Tarandach, who compares AI coding to junior development: fast and functional, but in need of serious oversight. He warns that organizations rushing to automate development may be building brittle systems on shaky foundations, especially when security practices are assumed rather than applied.

    This is not a fear-driven screed or a rejection of AI. Rather, it’s a call to assess new dependencies, rethink development accountability, and start building contingency plans before outages, hallucinations, or misconfigurations force the issue.

    If you’re a CISO, developer, architect, risk manager—or anyone involved in software delivery or security—this article is designed to make you pause, think, and ideally, respond.

    🔍 What’s your take? Is your team building with AI? Are you tracking how it’s being used—and what might happen when it’s not available?

    📖 Read the full companion article in the Future of Cybersecurity newsletter for deeper insights: TBA

    ________

    This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

    Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn: https://itspm.ag/future-of-cybersecurity

    Sincerely, Sean Martin and TAPE9

    ________

    Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

    Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

    To learn more about Sean, visit his personal website.

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
    10 min
  • Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation | Analog Minds in a Digital World: Part 2 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
    2025/10/05
    ⸻ Podcast: Redefining Society and Technology: https://redefiningsocietyandtechnologypodcast.com

    Newsletter: Musing On Society And Technology: https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/

    Watch on YouTube: https://youtu.be/nFn6CcXKMM0

    My Website: https://www.marcociappelli.com

    _____________________________

    This Episode’s Sponsors

    BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.

    BlackCloak: https://itspm.ag/itspbcweb

    _____________________________

    A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

    A new transmission from the Musing On Society and Technology Newsletter, by Marco Ciappelli

    Reflections from Our Hybrid Analog-Digital Society

    For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

    Introducing a New Series: Analog Minds in a Digital World: Reflections from Our Hybrid Analog-Digital Society

    Part II: Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation

    Prefer to watch/listen? Here's the YouTube version. For readers, keep scrolling.

    I've been testing small speakers lately. Nothing fancy—just little desktop units that cost less than a decent dinner. As I cycled through different genres, something unexpected happened. Classical felt lifeless, missing all its dynamic range. Rock came across harsh and tinny. Jazz lost its warmth and depth. But lo-fi? Lo-fi sounded... perfect.

    Those deliberate imperfections—the vinyl crackle, the muffled highs, the compressed dynamics—suddenly made sense on equipment that couldn't reproduce perfection anyway. The aesthetic limitations of the music matched the technical limitations of the speakers. It was like discovering that some songs were accidentally designed for constraints I never knew existed.

    This moment sparked a bigger realization about how we navigate our hybrid analog-digital world: sometimes our most profound innovations emerge not from perfection, but from embracing limitations as features.

    Lo-fi wasn't born in boardrooms or designed by committees. It emerged from bedrooms, garages, and basement studios where young musicians couldn't afford professional equipment. The 4-track cassette recorder—that humble Portastudio that let you layer instruments onto regular cassette tapes for a fraction of what professional studio time cost—became an instrument of democratic creativity. Suddenly, anyone could record music at home. Sure, it would sound "imperfect" by industry standards, but that imperfection carried something the polished recordings lacked: authenticity.

    The Velvet Underground recorded on cheap equipment and made it sound revolutionary—so revolutionary that, as the saying goes, they didn't sell many records, but everyone who bought one started a band. Pavement turned bedroom recording into art. Beck brought lo-fi to the mainstream with "Mellow Gold."

    These weren't artists settling for less—they were discovering that constraints could breed creativity in ways unlimited resources never could.

    Today, in our age of infinite digital possibility, we see a curious phenomenon: young creators deliberately adding analog imperfections to their perfectly digital recordings. They're simulating tape hiss, vinyl scratches, and tube saturation using software plugins. We have the technology to create flawless audio, yet we choose to add flaws back in.

    What does this tell us about our relationship with technology and authenticity?

    There's something deeply human about working within constraints. Twitter's original 140-character limit didn't stifle creativity—it created an entirely new form of expression. Instagram's square format—a deliberate homage to Polaroid's instant film—forced photographers to think differently about composition. Think about that for a moment: Polaroid's square format was originally a technical limitation of instant film chemistry and optics, yet it became so aesthetically powerful that decades later, a digital platform with infinite formatting possibilities chose to recreate that constraint. Even more, Instagram added filters that simulated the color shifts, light leaks, and imperfections of analog film. We had achieved perfect digital reproduction, and immediately started adding back the "flaws" of the technology we'd left behind.

    The same pattern appears in video: Super 8 film gave you exactly 3 minutes and 12 seconds per cartridge at standard speed—grainy, saturated, light-leaked footage that forced filmmakers to be economical with every shot. Today, TikTok recreates that brevity digitally, spawning a generation of ...
    15 min
  • The Hidden Cost of Too Many Cybersecurity Tools (Most CISOs Get This Wrong) | A Conversation with Pieter VanIperen | Redefining CyberSecurity with Sean Martin
    2025/10/03

    GUEST

    Pieter VanIperen, CISO and CIO of AlphaSense | On LinkedIn: https://www.linkedin.com/in/pietervaniperen/

    HOST

    Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

    EPISODE NOTES

    Real-World Principles for Real-World Security: A Conversation with Pieter VanIperen

    Pieter VanIperen, the Chief Information Security and Technology Officer at AlphaSense, joins Sean Martin for a no-nonsense conversation that strips away the noise around cybersecurity leadership. With experience spanning media, fintech, healthcare, and SaaS—including roles at Salesforce, Disney, Fox, and Clear—Pieter brings a rare clarity to what actually works in building and running a security program that serves the business.

    He shares why being “comfortable being uncomfortable” is an essential trait for today’s security leaders—not just reacting to incidents, but thriving in ambiguity. That distinction matters, especially when every new technology trend, vendor pitch, or policy update introduces more complexity than clarity. Pieter encourages CISOs to lead by knowing when to go deep and when to zoom out, especially in areas like compliance, AI, and IT operations where leadership must translate risks into outcomes the business cares about.

    One of the strongest points he makes is around threat intelligence: it must be contextual. “Generic threat intel is an oxymoron,” he argues, pointing out how the volume of tools and alerts often distracts from actual risks. Instead, Pieter advocates for simplifying based on principles like ownership, real impact, and operational context. If a tool hasn’t been turned on for two months and no one noticed, he says, “do you even need it?”

    The episode also offers frank insight into vendor relationships. Pieter calls out the harm in trying to “tell a CISO what problems they have” rather than listening. He explains why true partnerships are based on trust, humility, and a long-term commitment—not transactional sales quotas. “If you disappear when I need you most, you’re not part of the solution,” he says.

    For CISOs and vendors alike, this episode is packed with perspective you can’t Google. Tune in to challenge your assumptions—and maybe your entire security stack.

    SPONSORS

    ThreatLocker: https://itspm.ag/threatlocker-r974

    ADDITIONAL INFORMATION

    ✨ More Redefining CyberSecurity Podcast:

    🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast

    Redefining CyberSecurity Podcast on YouTube:

    📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

    📝 The Future of Cybersecurity Newsletter: https://www.linkedin.com/newsletters/7108625890296614912/

    Interested in sponsoring this show with a podcast ad placement? Learn more:

    👉 https://itspm.ag/podadplc

    ⬥KEYWORDS⬥

    ciso, appsec, threatintel, trust, ai, vendors, bloat, leadership, tools, risk, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast


    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    52 min
  • SBOMs in Application Security: From Compliance Trophy to Real Risk Reduction | AppSec Contradictions: 7 Truths We Keep Ignoring — Episode 3 | A Musing On the Future of Cybersecurity with Sean Martin and TAPE9 | Read by TAPE9
    2025/10/01

    SBOMs were supposed to be the ingredient label for software—bringing transparency, faster response, and stronger trust. But reality shows otherwise. Fewer than 1% of GitHub projects have policy-driven SBOMs. Only 15% of developer SBOM questions get answered. And while 86% of EU firms claim supply chain policies, just 47% actually fund them.
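
    As a rough illustration of that “ingredient label” idea, here is a minimal sketch (not from the episode) that reads a CycloneDX-style SBOM and flags entries too thin to support real risk work; the file name sbom.json and the specific checks are assumptions for illustration only.

    # Minimal sketch: read a CycloneDX-style SBOM (JSON) and surface the
    # "ingredient label" view — which components are listed, and which
    # entries lack the version or license data needed for real risk checks.
    # Assumes a local file named "sbom.json"; field names follow CycloneDX 1.x.
    import json

    def summarize_sbom(path: str) -> None:
        with open(path, encoding="utf-8") as f:
            sbom = json.load(f)

        components = sbom.get("components", [])
        print(f"Components listed: {len(components)}")

        for comp in components:
            name = comp.get("name", "<unnamed>")
            version = comp.get("version")
            licenses = comp.get("licenses", [])
            # An entry with no version or license is a compliance checkbox,
            # not something you can match against vulnerability advisories.
            if not version or not licenses:
                print(f"  incomplete entry: {name} "
                      f"(version={version!r}, licenses={len(licenses)})")

    if __name__ == "__main__":
        summarize_sbom("sbom.json")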

    So why do SBOMs stall as compliance artifacts instead of risk-reduction tools? And what happens when they do work?

    In this episode of AppSec Contradictions, Sean Martin examines:

    • Why SBOM adoption is lagging
    • The cost of static SBOMs for developers, AppSec teams, and business leaders
    • Real-world examples where SBOMs deliver measurable value
    • How AISBOMs are extending transparency into AI models and data

    Catch the full companion article in the Future of Cybersecurity newsletter for deeper analysis and more research.

    👉 What’s your experience with SBOMs? Have they helped reduce risk in your organization—or do they sit on the shelf as compliance paperwork? How are you bridging the gap between transparency and real security outcomes? Share your take—we’d love to hear your story.

    📖 Read the full companion article in the Future of Cybersecurity newsletter for deeper insights: TBD

    🔔 Subscribe to stay updated on the full AppSec Contradictions video series and more perspectives on the future of cybersecurity: https://www.youtube.com/playlist?list=PLnYu0psdcllRWnImF5iRnO_10eLnPFWi_

    ________

    This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

    Enjoy, think, share with others, and subscribe to "The Future of Cybersecurity" newsletter on LinkedIn: https://itspm.ag/future-of-cybersecurity

    Sincerely, Sean Martin and TAPE9

    ________

    Sean Martin is a life-long musician and the host of the Music Evolves Podcast; a career technologist, cybersecurity professional, and host of the Redefining CyberSecurity Podcast; and is also the co-host of both the Random and Unscripted Podcast and On Location Event Coverage Podcast. These shows are all part of ITSPmagazine—which he co-founded with his good friend Marco Ciappelli, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

    Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

    To learn more about Sean, visit his personal website.


    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    3 min
  • AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli
    2025/09/27
    ⸻ Podcast: Redefining Society and Technology: https://redefiningsocietyandtechnologypodcast.com

    ______

    Title: AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli

    ______

    Guest: Eli Lopian
    Founder of Typemock Ltd | Author of AIcracy: Beyond Democracy | AI & Governance Thought Leader
    On LinkedIn: https://www.linkedin.com/in/elilopian/
    Book: https://aicracy.ai

    Host: Marco Ciappelli
    Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
    WebSite: https://marcociappelli.com
    On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

    _____________________________

    This Episode’s Sponsors

    BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.

    BlackCloak: https://itspm.ag/itspbcweb

    _____________________________

    ⸻ Podcast Summary ⸻

    I had one of those conversations that makes you question everything you thought you knew about democracy, governance, and the future of human society. Eli Lopian, founder of TypeMock and author of the provocative book on AI-cracy, walked me through what might be the most intriguing political theory I've encountered in years.

    ⸻ Article ⸻

    Technology entrepreneur Eli Lopian joins Marco to explore "AI-cracy" - a revolutionary governance model where artificial intelligence writes laws based on abundance metrics while humans retain judgment. This fascinating conversation examines how we might transition from broken democratic systems to AI-assisted governance in our evolving Hybrid Analog Digital Society.

    Picture this scenario: you're sitting in a pub with friends, listening to them argue about which political rally to attend, and suddenly you realize something profound. As Eli told me, it's like watching people fight over which side of the train to sit on while the train itself is heading in completely the wrong direction. That metaphor perfectly captures where we are with democracy today.

    Eli's background fascinates me - breaking free from a religious upbringing at 16, building a successful AI startup for the past decade, and now proposing something that sounds like science fiction but feels increasingly inevitable. His central premise stopped me in my tracks: no human being should be allowed to write laws anymore. Only AI should create legislation, guided by what he calls an "abundance metric" - essentially optimizing for human happiness, freedom, and societal wellbeing.

    But here's where it gets really interesting. Eli isn't proposing we hand over control to a single AI overlord. Instead, he envisions three separate AI systems - one controlled by the government, one by the opposition, and one by an NGO - all working with the same data but operated by different groups. They must reach identical conclusions for any law to proceed. If they disagree, human experts investigate why.

    What struck me most was how this could actually restore direct democracy. In ancient Athens, every citizen participated in the polis. We can't do that with hundreds of millions of people, but AI could process everyone's input instantly. Imagine submitting your policy ideas directly to an AI system that responds within hours, explaining why your suggestion would or wouldn't improve societal abundance. It's like having the Athenian square scaled to modern complexity.

    The safeguards Eli proposes reveal his deep understanding of human nature. No AI can judge humans - that remains strictly a human responsibility. Citizens don't vote for charismatic politicians anymore; they vote for actual policies. Every three years, people choose their preferred policies. Every decade, they set ambitious collective goals - cure cancer, reach Mars, whatever captures society's imagination.

    Living in our Hybrid Analog Digital Society, we already see AI creeping into governance. Lawyers use AI, governments employ algorithms for efficiency, and citizens increasingly turn to ChatGPT for advice they once sought from doctors or therapists. Eli's insight is that we're heading toward AI governance whether we plan it or not - so why not design it properly from the start?

    His most compelling point addresses a fear I share: that AI lacks creativity. Eli argues this is actually a feature, not a bug. AI generates rather than truly creates. The creative spark - proposing that universal basic income experiment, suggesting we test new social policies, imagining those decade-long goals - that remains uniquely human. AI simply processes our creativity faster and more fairly than our current broken systems.

    The privacy question loomed large in our conversation. Eli proposes a brilliant separation: your...
    37 min
  • Why Identity Must Come First in the Age of AI Agents | A Black Hat SecTor 2025 Conversation with Cristin Flynn Goodwin | On Location Coverage with Sean Martin and Marco Ciappelli
    2025/09/26
    When we talk about AI at cybersecurity conferences these days, one term is impossible to ignore: agentic AI. But behind the excitement around AI-driven productivity and autonomous workflows lies an unresolved—and increasingly urgent—security issue: identity.

    In this episode, Sean Martin and Marco Ciappelli speak with Cristin Flynn Goodwin, keynote speaker at SecTor 2025, about the intersection of AI agents, identity management, and legal risk. Drawing from decades at the center of major security incidents—most recently as the head cybersecurity lawyer at Microsoft—Cristin frames today’s AI hype within a longstanding identity crisis that organizations still haven’t solved.

    Why It Matters Now

    Agentic AI changes the game. AI agents can act independently, replicate themselves, and disappear in seconds. That’s great for automation—but terrifying for risk teams. Cristin flags the pressing need to identify and authenticate these ephemeral agents. Should they be digitally signed? Should there be a new standard body managing agent identities? Right now, we don’t know.

    Meanwhile, attackers are already adapting. AI tools are being used to create flawless phishing emails, spoofed banking agents, and convincing digital personas. Add that to the fact that many consumers and companies still haven’t implemented strong MFA, and the risk multiplier becomes clear.

    The Legal View

    From a legal standpoint, Cristin emphasizes how regulations like New York’s DFS Cybersecurity Regulation are putting pressure on CISOs to tighten IAM controls. But what about individuals? “It’s an unfair fight,” she says—no consumer can outpace a nation-state attacker armed with AI tooling.

    This keynote preview also calls attention to shadow AI agents: tools employees may create outside the control of IT or security. As Cristin warns, they could become “offensive digital insiders”—another dimension of the insider threat amplified by AI.

    Looking Ahead

    This is a must-listen episode for CISOs, security architects, policymakers, and anyone thinking about AI safety and digital trust. From the potential need for real-time, verifiable agent credentials to the looming collision of agentic AI with quantum computing, this conversation kicks off SecTor 2025 with urgency and clarity.

    Catch the full episode now, and don’t miss Cristin’s keynote on October 1.

    ___________

    Guest:

    Cristin Flynn Goodwin, Senior Consultant, Good Harbor Security Risk Management | On LinkedIn: https://www.linkedin.com/in/cristin-flynn-goodwin-24359b4/

    Hosts:

    Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com

    Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

    ___________

    Episode Sponsors

    ThreatLocker: https://itspm.ag/threatlocker-r974

    BlackCloak: https://itspm.ag/itspbcweb

    ___________

    Resources

    Keynote: Agentic AI and Identity: The Biggest Problem We're Not Solving: https://www.blackhat.com/sector/2025/briefings/schedule/#keynote-agentic-ai-and-identity-the-biggest-problem-were-not-solving-49591

    Learn more and catch more stories from our SecTor 2025 coverage: https://www.itspmagazine.com/cybersecurity-technology-society-events/sector-cybersecurity-conference-toronto-2025

    New York Department of Financial Services Cybersecurity Regulation: https://www.dfs.ny.gov/industry_guidance/cybersecurity

    Good Harbor Security Risk Management (Richard Clarke’s firm): https://www.goodharbor.net/

    Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

    Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf

    Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

    ___________

    KEYWORDS

    cristin flynn goodwin, sean martin, marco ciappelli, sector, microsoft, ai, identity, agents, ciso, quantum, event coverage, on location, conference

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
    22 min
  • How F-Secure Transformed from Endpoint Security to Predicting Scams Before They Happen | A Brand Story Conversation with Dmitri Vellikok, Product and Business Development at F-Secure
    2025/09/26
    The cybersecurity industry operates on a fundamental misconception: that consumers want to understand and manage their digital security. After 17 years at F-Secure and extensive consumer research, Dmitri Vellikok has reached a different conclusion—people simply want security problems to disappear without their involvement.

    This insight has driven F-Secure's transformation from traditional endpoint protection to what Vellikok calls "embedded ecosystem security." The company, which holds 55% global market share in operator-delivered consumer security, has moved beyond the conventional model of asking consumers to install and manage security software.

    F-Secure's approach centers on embedding security capabilities directly into applications and services consumers already use. Rather than expecting people to download separate security software, the company partners with telecom operators, insurance companies, and financial institutions to integrate protection into existing customer touchpoints.

    This embedded strategy addresses what Vellikok identifies as cybersecurity's biggest challenge: activation and engagement. Traditional security solutions fail when consumers don't install them, don't configure them properly, or abandon them due to complexity. By placing security within existing applications, F-Secure automatically reaches more consumers while reducing friction.

    The company's research reveals the extent of consumer overconfidence in digital security. Seventy percent of people believe they can easily spot scams, yet 43% of that same group admits to having been scammed. This disconnect between perception and reality drives F-Secure's focus on proactive, invisible protection rather than relying on consumer vigilance.

    Central to this approach is what F-Secure calls the "scam kill chain"—a framework for protecting consumers at every stage of fraudulent attempts. The company analyzes scam workflows to identify intervention points, from initial contact through trust-building phases to final exploitation. This comprehensive view enables multi-layered protection that doesn't depend on consumers recognizing threats.

    F-Secure's partnership with telecom operators provides unique advantages in this model. Operators see network traffic, website visits, SMS messages, and communication patterns, giving them visibility into threat landscapes that individual security solutions cannot match. However, operators typically don't communicate their protective actions to customers, creating an opportunity for F-Secure to bridge this gap.

    The company combines operator-level data with device-level protection and user interface elements that inform consumers about threats blocked on their behalf. This creates what Vellikok describes as a "protective ring" around users' digital lives while maintaining transparency about security actions taken.

    Artificial intelligence and machine learning have been core to F-Secure's operations for over a decade, but recent advances enable more sophisticated predictive capabilities. The company processes massive data volumes to identify patterns and predict threats before they materialize. Vellikok estimates that within 18 to 24 months, F-Secure will be able to warn consumers three days in advance about likely scam attempts.

    This predictive approach represents a fundamental shift from reactive security to proactive protection. Instead of waiting for threats to appear and then blocking them, the system identifies risk patterns and steers users away from dangerous situations before threats fully develop.

    The AI integration also serves as a translation layer between technical security events and consumer-friendly communications. Rather than presenting technical alerts about blocked URLs or filtered emails, the system provides context about threats in language consumers can understand and act upon.

    F-Secure's evolution reflects broader industry recognition that consumer cybersecurity requires different approaches than enterprise security. While businesses can mandate security training and complex protocols, consumers operate in environments where convenience and simplicity drive adoption. The embedded security model acknowledges this reality while maintaining protection effectiveness.

    The company's global reach through operator partnerships positions it to address cybersecurity as a systemic challenge rather than an individual consumer problem. By aggregating threat data across millions of users and multiple communication channels, F-Secure creates network effects that improve protection for all users as the system learns from new attack patterns.

    Looking forward, Vellikok anticipates cybersecurity challenges will continue evolving in waves. Current focus on scam protection will likely shift to AI-driven threats, followed by quantum computing challenges. The embedded security model provides a framework for adapting to these changes while maintaining consumer protection without requiring users to understand or ...
    36 min
  • Why Cybersecurity Training Isn’t Working — And What To Do Instead | Human-Centered Cybersecurity Series with Co-Host Julie Haney and Guest Dr. Aunshul Rege | Redefining CyberSecurity with Sean Martin
    2025/09/25
    ⬥GUEST⬥

    Aunshul Rege, Director at The CARE Lab at Temple University | On LinkedIn: https://www.linkedin.com/in/aunshul-rege-26526b59/

    ⬥CO-HOST⬥

    Julie Haney, Computer scientist and Human-Centered Cybersecurity Program Lead, National Institute of Standards and Technology | On LinkedIn: https://www.linkedin.com/in/julie-haney-037449119/

    ⬥HOST⬥

    Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

    ⬥EPISODE NOTES⬥

    Cybersecurity Is for Everyone — If We Teach It That Way

    Cybersecurity impacts us all, yet most people still see it as a tech-centric domain reserved for experts in computer science or IT. Dr. Aunshul Rege, Associate Professor in the Department of Criminal Justice at Temple University, challenges that perception through her research, outreach, and education programs — all grounded in community, empathy, and human behavior.

    In this episode, Dr. Rege joins Sean Martin and co-host Julie Haney to share her multi-layered approach to cybersecurity awareness and education. Drawing from her unique background that spans computer science and criminology, she explains how understanding human behavior is critical to understanding and addressing digital risk.

    One powerful initiative she describes brings university students into the community to teach cyber hygiene to seniors — a demographic often left out of traditional training programs. These student-led sessions focus on practical topics like scams and password safety, delivered in clear, respectful, and engaging ways. The result? Not just education, but trust-building, conversation, and long-term community engagement.

    Dr. Rege also leads interdisciplinary social engineering competitions that invite students from diverse academic backgrounds — including theater, nursing, business, and criminal justice — to explore real-world cyber scenarios. These events prove that you don’t need to code to contribute meaningfully to cybersecurity. You just need curiosity, communication skills, and a willingness to learn.

    Looking ahead, Temple University is launching a new Bachelor of Arts in Cybersecurity and Human Behavior — a program that weaves in community engagement, liberal arts, and applied practice to prepare students for real-world roles beyond traditional technical paths.

    If you’re a security leader looking to improve awareness programs, a university educator shaping the next generation, or someone simply curious about where you fit in the cyber puzzle, this episode offers a fresh perspective: cybersecurity works best when it’s human-first.

    ⬥SPONSORS⬥

    ThreatLocker: https://itspm.ag/threatlocker-r974

    ⬥RESOURCES⬥

    Dr. Aunshul Rege is an Associate Professor in the Department of Criminal Justice, and much of her work is conducted under this department: https://liberalarts.temple.edu/academics/departments-and-programs/criminal-justice

    Temple Digital Equity Plan (2022): https://www.phila.gov/media/20220412162153/Philadelphia-Digital-Equity-Plan-FINAL.pdf

    Temple University Digital Equity Center / Digital Access Center: https://news.temple.edu/news/2022-12-06/temple-launches-digital-equity-center-north-philadelphia

    NICE Cybersecurity Workforce Framework: https://www.nist.gov/itl/applied-cybersecurity/nice/nice-framework-resource-center

    ⬥ADDITIONAL INFORMATION⬥

    ✨ More Redefining CyberSecurity Podcast:

    🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast

    Redefining CyberSecurity Podcast on YouTube:

    📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

    📝 The Future of Cybersecurity Newsletter: https://www.linkedin.com/newsletters/7108625890296614912/

    Interested in sponsoring this show with a podcast ad placement? Learn more:

    👉 https://www.itspmagazine.com/purchase-programs

    ⬥KEYWORDS⬥

    sean martin, julie haney, aunshul rege, temple university, cybersecurity literacy, social engineering, cyber hygiene, human behavior, community engagement, cybersecurity education, redefining cybersecurity, cybersecurity podcast, redefining cybersecurity podcast

    Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
    45 min