Episodes

  • What Could Possibly Go Wrong? Safety Analysis for AI Systems
    2025/10/31

    How can you ever know whether an LLM is safe to use? Even self-hosted LLM systems are vulnerable to adversarial prompts left on the internet, waiting to be found by the system's search engines. These and other attacks exploit the complexity of even seemingly secure AI systems.

    In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Schulker and Matthew Walsh, both senior data scientists in the SEI's CERT Division, sit down with Thomas Scanlon, lead of the CERT Data Science Technical Program, to discuss their work on System Theoretic Process Analysis, or STPA, a hazard-analysis technique uniquely suited to handling the complexity of AI systems during assurance.

    36 min
  • Getting Your Software Supply Chain In Tune with SBOM Harmonization
    2025/10/23

    Software bills of materials, or SBOMs, are critical to software security and supply chain risk management. Ideally, regardless of the SBOM tool, the output should be consistent for a given piece of software. But that is not always the case. The divergence of results can undermine confidence in software quality and security. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Jessie Jamieson, a senior cyber risk engineer in the SEI's CERT Division, sits down with Matt Butkovic, technical director of Risk and Resilience in CERT, to talk about how to achieve more accuracy in SBOMs and about present and future SEI research on this front.

    23 min
  • API Security: An Emerging Concern in Zero Trust Implementations
    2025/10/08

    Application programming interfaces, more commonly known as APIs, are the engines behind the majority of internet traffic. The pervasive and public nature of APIs has increased the attack surface of the systems and applications they are used in. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), McKinley Sconiers-Hasan, a solutions engineer in the SEI's CERT Division, sits down with Tim Morrow, Situational Awareness Technical Manager, also with the CERT Division, to discuss emerging API security issues and the application of zero-trust architecture in securing those systems and applications.

    18 min
  • Delivering Next-Generation AI Capabilities
    2025/09/29

    Artificial intelligence (AI) is a transformational technology, but it has limitations in challenging operational settings. Researchers in the AI Division of the Carnegie Mellon University Software Engineering Institute (SEI) work to deliver reliable and secure AI capabilities to warfighters in mission-critical environments. In our latest podcast, Matt Gaston, director of the SEI's AI Division, sits down with Matt Butkovic, technical director of the SEI CERT Division's Cyber Risk and Resilience program, to discuss the SEI's ongoing and future work in AI, including test and evaluation, the importance of gaining hands-on experience with AI systems, and why government needs to continue partnering with industry to spur innovation in national defense.

    30 min
  • The Benefits of Rust Adoption for Mission- and Safety-Critical Systems
    2025/09/16

    A recent Google survey found that many developers felt comfortable using the Rust programming language within two months or less. Yet barriers to Rust adoption remain, particularly in safety-critical systems, where resources such as memory and processing power are in short supply and compliance with regulations is mandatory. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Vaughn Coates, an engineer in the SEI's Software Solutions Division, sits down with Joe Yankel, initiative lead of the DevSecOps Innovations team at the SEI, to discuss the barriers to and benefits of Rust adoption.

    20 min
  • Threat Modeling: Protecting Our Nation's Complex Software-Intensive Systems
    2025/09/05

    In response to Executive Order (EO) 14028, Improving the Nation's Cybersecurity, the National Institute of Standards and Technology (NIST) recommended 11 practices for software verification. Threat modeling is at the top of the list. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Natasha Shevchenko and Alex Vesey, both engineers with the SEI's CERT Division, sit down with Timothy Chick, technical manager of CERT's Applied Systems Group, to discuss how threat modeling can be used to protect software-intensive systems from attack. Specifically, they explore how threat models can guide system requirements, system design, and operational choices to identify and mitigate threats.

    35 min
  • Understanding Container Reproducibility Challenges: Stopping the Next SolarWinds
    2025/07/30

    Container images are increasingly used as the main method for software deployment, so ensuring the reproducibility of container images is becoming a critical step in protecting the software supply chain. In practice, however, builds are often not reproducible because elements of the build environment rely on nondeterministic factors such as timestamps and external dependencies. Lack of reproducibility can erode trust, break builds, and even mask hidden malware insertion. Vessel, a recent tool from the Carnegie Mellon University Software Engineering Institute (SEI), helps developers identify the differences between two container images and sort benign issues from problematic ones. In this SEI podcast, Kevin Pitstick, a senior software engineer at the SEI and Vessel's lead developer, and Lihan Zhan, a software engineer at the SEI working on tactical and AI-enabled systems, sit down with Grace Lewis, lead of the Tactical and AI-Enabled Systems (TAS) applied research and development team at the SEI, to discuss the Vessel tool, its development, and its application in mission-critical settings.

    25 min
  • Mitigating Cyber Risk with Secure by Design
    2025/07/14

    Software enables our way of life, but market forces have sidelined security concerns, leaving systems vulnerable to attack. Fixing this problem will require the software industry to develop an initial standard for creating software that is secure by design. These are the findings of a recently released paper coauthored by Greg Touhill, director of the Software Engineering Institute (SEI) CERT Division. In this latest SEI podcast, Touhill and Matthew Butkovic, director of Cyber Risk and Resilience at CERT, discuss the paper, including its recommendations for making software secure by design.

    32 min