Episodes

  • Deep-dive on high-risk AI obligations: accuracy
    2025/07/03

    In this episode, we focus on the accuracy requirements under the EU AI Act for high-risk AI systems. Our discussion considers what "accuracy" means in the context of AI models and the different ways a model can be accurate or inaccurate in its outputs, as well as the different approaches the Commission might take in upcoming guidance to specifying how the accuracy of AI models is to be measured. We also look at how the requirement for high-risk AI models to be sufficiently accurate is likely to play out in practice, examining the different obligations falling on providers and deployers of high-risk AI systems and what organisations are likely to need to do to meet these obligations.

    11 min
  • Limited and Minimal Risk AI – transparency obligations
    2025/06/20

    In this episode, we turn our attention to the transparency obligations imposed on providers and deployers of limited and minimal risk AI systems under the EU AI Act. While these systems may not pose the same level of concern as high-risk AI, they are still subject to important requirements, particularly around informing users when they are interacting with AI and ensuring transparency where AI-generated content or decisions could affect health, safety or fundamental rights. Our episode considers the practical implications for businesses developing or deploying AI in the EU.

    19 min
  • High-risk AI systems and exceptions overview
    2025/06/09

    In this episode, we examine high-risk AI systems under the EU AI Act and the specific exceptions that apply. We'll discuss the criteria defining high-risk categories, the compliance obligations for these systems, and the safeguards businesses must adopt. We also explore scenarios where certain high-risk requirements may not apply, clarifying how exceptions function within the Act, and detailing the different obligations for different roles in the AI supply chain. Join us to understand how businesses can approach AI risk thoughtfully and remain compliant with evolving regulations.

    14 min
  • AI and litigation
    2025/06/09

    In this episode, we explore how artificial intelligence can generate litigation, in particular, in the field of data protection and cybersecurity. We discuss the use of personal data in training and deploying AI models, litigation arising from the alleged improper use of AI, cybersecurity threats and AI models, and AI as a litigation tool.

    21 min
  • Deep-dive on high-risk AI obligations: risk management
    2025/06/02

    In this episode, we focus on the risk management requirements for high-risk AI systems under the EU AI Act. Our discussion highlights practical approaches to identifying, assessing, and mitigating AI-related risks while emphasising transparency, accountability, and compliance. We also explore how organisations can align their risk management frameworks with regulatory expectations, and the role of documentation, monitoring, and governance in ensuring the safe and lawful deployment of high-risk AI.

    26 min
  • AI and employment
    2025/06/02

    In this episode, we explore how AI is reshaping the job market, with certain roles disappearing and new ones emerging. We discuss the evolving skill sets required and the risks of bias and discrimination in AI-driven recruitment. Emphasising the importance of compliance, we highlight the need for diverse data sets and responsible AI integration, ensuring organisations mitigate legal risks while promoting fairness and transparency in employment practices.

    18 min
  • AI and the environment - how effective is the EU AI Act likely to be?
    2025/01/21

    Bobbie Bickerton and Mark Lewis, of Stephenson Harwood's commercial, technology and data team, discuss the impact of AI, in particular AI compute and data centres, on the environment and sustainability. They consider what the AI Act says about the environment and sustainability, and how effective it is likely to be in addressing these issues. They briefly raise the wider ongoing pressures that AI development places on the environment at the geopolitical, strategic, operational, energy-resource, ethical-investment, and procurement levels, and ask if AI can ultimately be used as a tool for good in this context.

    21 min
  • AI – explaining the basics
    2025/01/21

    Head of innovation Paul Orchard and Bobbie Bickerton, an associate in the technology and data team, discuss exactly what AI is, what all those acronyms mean, and whether AI will result in the end of the world.

    33 min