Program

Regulatory Director (IVD), MDx CRO, UK
AI and digital health are transforming diagnostics, promising unprecedented accuracy and efficiency in in vitro diagnostic (IVD) devices. But with innovation comes a labyrinth of regulatory challenges. How do you ensure that your AI-powered IVD not only meets market needs but also clears regulatory hurdles in the EU, US, and beyond?
In this session, we’ll unravel the complexities of integrating AI into IVDs while maintaining compliance with evolving global frameworks. From algorithm transparency and data integrity to risk management and post-market surveillance, we’ll map out actionable strategies for navigating the regulatory landscape.
Real-world case studies will shed light on common pitfalls and proven approaches, giving you the insights you need to mitigate risks, streamline development, and future-proof your AI-enabled diagnostics.
Join us to discover how to align cutting-edge technology with robust compliance – without getting lost in the regulatory maze.

Clinical Evaluation Expert for Medical Devices, Founder, Clinical Evaluation Navigator, France
This session will explore how AI agents can help regulatory and clinical teams reduce the time and effort needed to produce compliant clinical evaluations under MDR. Drawing from real-world examples, I’ll share how automation and intelligent workflows can simplify article screening, data extraction, and evidence appraisal, without compromising quality. Attendees will walk away with practical tools, use cases, and a realistic view of what AI can do today in the context of clinical evaluation.

AI Airlock Programme Manager, MHRA, UK

AI Airlock Technical Programme Manager, MHRA, UK
The integration of artificial intelligence (AI) into healthcare presents unique regulatory challenges, from managing adaptive algorithms to addressing risks like automation bias and model drift. Traditional regulatory frameworks, designed for static technologies, often struggle to accommodate the complexity and dynamism of AI as a medical device (AIaMD). In response, sandboxes offer a novel, collaborative environment for AI developers, regulators, and clinical stakeholders to test and refine regulatory approaches in various environments (theoretical and real world). This panel contribution shares key insights from the AI Airlock pilot, which supports selected AIaMD developers through tailored regulatory experimentation [and explores the EU’s approach – TBC].
Using a regulatory challenge-led framework, the AI Airlock pilot explored issues across the product lifecycle, including AI errors, validation, explainability, performance drift, and human-AI interaction. Now in its second phase, it continues to explore some of the most pressing regulatory challenges with AIaMD today. The session will discuss how sandboxes can support regulatory learning while protecting patient safety, foster evidence generation for novel technologies, and inform future policy. Reflections will also cover practical lessons on setting up a sandbox and cross-sector collaboration.

Associate Professor of Medical Device Regulatory Affairs and Director, MSc Medical Device Regulatory Affairs, Ireland

Technical Team Manager, AIMD Clinical, BSI, UK
I will provide the notified body’s perspective on these challenges, covering areas such as describing and defining the software and its intended purpose, classification, establishing the software in relation to the state of the art, clinical data appraisal and analysis, and PMCF.

Managing Director, Hardian Health, UK
The intended purpose of a device goes beyond regulatory concerns; it also drives health economic considerations and can underlie intellectual property issues, all of which inform market access strategy. Aligning the intended purpose across all domains is crucial for successful market entry.

Senior Director Growth Drivers (MHS), TÜV SÜD, Germany

Director at Elsmere Enterprises, Belgium

Advocaat / Attorney at law, Axon Lawyers, Netherlands
The European regulatory ecosystem is undergoing structural transformation. The AI Act, MDR/IVDR, and EU data regulations (including the EHDS Act) are converging to form an integrated legal architecture that redefines what it means for digital health technologies to be trustworthy, effective and legally viable.
This session explores how digital health innovators can align clinical evidence, algorithmic design and data governance across these frameworks without fragmenting compliance or stalling innovation. As AI, device, and data law continue to converge, strategic legal alignment becomes essential to ensure that products meet both regulatory expectations and the complexity of real-world use.

Manager, Intelligence and Strategic Execution at RQM+, UK
Fulfilling its intended use, whilst being safe for that intended use, is the basic requirement for all medical devices. The increase in the availability and use of artificial intelligence in medical devices comes alongside the publication of new regulations and standards to help control and guide manufacturers producing these devices. ISO 14971:2019 is a high-level process standard that describes risk management for all medical devices. It does not provide guidance on how to apply its requirements to different types of medical devices, nor should it. We have some support from BS/AAMI 34971:2023, which provides guidance to industry on how to apply ISO 14971:2019 to machine learning-enabled medical devices, but how should it be implemented?
As medical devices become more complex with the inclusion of artificial intelligence and machine learning elements, there may be a temptation to think that our risk management process must evolve into something novel and even more complicated than what we applied to our older, less advanced medical devices. That is not necessarily the case.
There are concerns about transparency, explainability and bias. Rightly so. There will always be uncertainty about novel technologies, which are thus seen as high-risk and worrying.
Must new technologies mean new and higher risks? Maybe, maybe not.
Does all of this mean that our risk management process needs to drastically evolve? Not so much.
This session will:
- Look at how a risk management process can evolve with newer technologies;
- Advocate for the foundations of the risk analysis that are critical to risk management of all medical devices;
- Discuss basic risk analysis methods that are critical to identifying potential risks for medical devices enabled with machine learning;
- Improve understanding of where risk controls will be implemented in the development of machine learning-enabled medical devices;
- Highlight the connection between risk, data management, usability and clinical evidence;
- Suggest post-market surveillance activities to be considered for machine learning-enabled medical devices.

Regulatory Affairs Specialist, Edwards Lifesciences, Netherlands

Co-founder and CTO of Spotlight Health, UK