Program
CEO, MD-Clinicals, Switzerland
Assessor Medical Devices, Senior Expert, Austrian Agency for Health and Food Safety (AGES), Austria
Associate Professor of Medical Device Regulatory Science, University of Galway, Chair of the Regulatory Affairs Committee, Biomedical Alliance in Europe, Ireland
Advocaat / Attorney at law, Axon Lawyers, Netherlands
The European regulatory ecosystem is undergoing structural transformation. The AI Act, MDR/IVDR, and EU data regulations (including the EHDS Act) are converging into an integrated legal architecture that redefines what it means for digital health technologies to be trustworthy, effective and legally viable.
This session explores how digital health innovators can align clinical evidence, algorithmic design and data governance across these frameworks without fragmenting compliance or stalling innovation. As AI, device, and data law continue to converge, strategic legal alignment becomes essential to ensure that products meet both regulatory expectations and the complexity of real-world use.
AI Airlock Technical Programme Manager, MHRA, UK
AI Airlock Regulatory Specialist, MHRA, UK
Chief Innovation Officer at Complear, Portugal
The integration of artificial intelligence (AI) into healthcare presents unique regulatory challenges, from managing adaptive algorithms to addressing risks like automation bias and model drift. Traditional regulatory frameworks, designed for static technologies, often struggle to accommodate the complexity and dynamism of AI as a medical device (AIaMD). In response, regulatory sandboxes offer a novel, collaborative setting in which AI developers, regulators, and clinical stakeholders can test and refine regulatory approaches in both theoretical and real-world environments. This panel contribution shares key insights from the AI Airlock pilot, which supports selected AIaMD developers through tailored regulatory experimentation [and explores the EU’s approach – TBC].
Using a regulatory challenge-led framework, the AI Airlock pilot explored issues across the product lifecycle, including AI errors, validation, explainability, performance drift, and human-AI interaction. Now in its second phase, it continues to explore some of the most pressing regulatory challenges with AIaMD today. The session will discuss how sandboxes can support regulatory learning while protecting patient safety, foster evidence generation for novel technologies, and inform future policy. Reflections will also cover practical lessons on setting up a sandbox and cross-sector collaboration.
Managing Director, Hardian Health, UK
The intended purpose of a device goes beyond regulatory concerns; it also drives health economic considerations and can underlie intellectual property issues, all of which inform market access strategy. Aligning the intended purpose across all domains is crucial for successful market entry.
Director at Elsmere Enterprises, Belgium
Co-founder and CTO of Revela, UK
Clinical Evaluation Expert for Medical Devices, Founder, Clinical Evaluation Navigator, France
This session will explore how AI agents can help regulatory and clinical teams reduce the time and effort needed to produce compliant clinical evaluations under MDR. Drawing from real-world examples, I’ll share how automation and intelligent workflows can simplify article screening, data extraction, and evidence appraisal, without compromising quality. Attendees will walk away with practical tools, use cases, and a realistic view of what AI can do today in the context of clinical evaluation.
Associate Director Project Management, RQM+, Germany
Director, Technical Solutions and Innovation, RQM+, USA
Fulfilling its intended use, whilst being safe for that intended use, is the basic requirement for all medical devices. The increase in the availability and use of artificial intelligence in medical devices comes alongside the publication of new regulations and standards to help control and guide manufacturers producing these devices. ISO 14971:2019 is a high-level process standard that describes risk management for all medical devices. It does not provide guidance on how to apply its requirements to different types of medical devices, nor should it. We have some support from BS/AAMI 34971:2023, which provides guidance to industry on how to apply ISO 14971:2019 to machine learning-enabled medical devices, but how should it be implemented?
As medical devices become more complex with the inclusion of artificial intelligence and machine learning elements, there may be a temptation to think that our risk management process must evolve into something novel and even more complicated than the process applied to older, less advanced medical devices. That is not necessarily the case.
There are concerns about transparency, explainability and bias. Rightly so. There will always be uncertainty about novel technologies, which are thus seen as high-risk and worrying.
New technologies must mean new risks and higher risks. Maybe, maybe not.
Does all of this mean that our risk management process needs to drastically evolve? Not so much.
This session will:
- Look at how a risk management process can evolve with newer technologies;
- Advocate for the foundations of the risk analysis that are critical to risk management of all medical devices;
- Discuss basic risk analysis methods that are critical to identifying potential risks for medical devices enabled with machine learning;
- Improve understanding of where risk controls will be implemented in the development of machine learning-enabled medical devices;
- Highlight the connection between risk, data management, usability and clinical evidence;
- Suggest post-market surveillance activities to be considered for machine learning-enabled medical devices.
Technical Documentation Manager & Head of UK Approved Body Intertek Medical Notified Body, UK
This presentation explores the clinical evaluation requirements for Software as a Medical Device (SaMD) under the EU Medical Device Regulation (MDR), with a particular focus on the use of clinical evidence through equivalence and the application of Article 61(10). It outlines the regulatory criteria for establishing equivalence for SaMD, highlights the challenges associated with demonstrating sufficient clinical evidence in the absence of clinical investigations, and clarifies the conditions under which Article 61(10) may be applied. Practical examples will be used to illustrate key principles and support consistent interpretation of the MDR requirements.
CEO, MD-Clinicals, Switzerland
CEO, MDx CRO, UK
Bringing one of the world’s largest germline NGS panels into compliance with the EU IVDR was an unprecedented challenge. Spanning more than 4,600 genes, this diagnostic service combined complex wet-lab workflows with a custom bioinformatics pipeline, pushing the boundaries of regulatory expectations. In this talk, Carlos will share the key hurdles his team faced – ranging from inconsistent notified body interpretations to the integration of third-party components – and the strategies that proved decisive, including database-driven clinical justification, modular software validation, and robust post-market planning. The session will highlight practical lessons learned and offer insights into how advanced genomic services can navigate IVDR certification with greater efficiency and confidence.
Head of Medical Devices at MDx, Spain
Clinical evaluation is at the heart of demonstrating safety and effectiveness for medical device artificial intelligence (MDAI). This session will explore how manufacturers can design and execute evaluation and testing strategies that meet the requirements of both the EU MDR and the Artificial Intelligence Act (AIA).
We’ll cover expectations for real-world testing, dataset quality, bias mitigation, and integration of AIA obligations into existing clinical evaluation frameworks. Participants will gain practical insights into aligning regulatory evidence with innovation timelines while ensuring patient safety and trust.
Assessor Medical Devices, Senior Expert, Austrian Agency for Health and Food Safety (AGES), Austria
CEO, MDx CRO, UK
Designing and executing clinical performance studies for in vitro diagnostics requires careful navigation of regulatory, ethical, and operational demands. Under the IVDR, the expectations for study design have increased significantly, with ISO 20916 setting a new benchmark for good clinical practice in IVD studies. In this session, Carlos will outline the essential elements of a successful study design, from formulating clear research questions and ensuring appropriate specimen selection to establishing robust statistical plans and risk-based monitoring strategies. Drawing on practical experience, he will share common pitfalls observed during site qualification, initiation, and monitoring, as well as strategies for selecting the right CRO and CRA to ensure study quality. The talk will emphasize actionable insights and lessons learned, equipping attendees with the tools to design and manage IVD studies that withstand regulatory scrutiny and deliver reliable evidence of clinical performance.
Business Development Director, Meditrial, Switzerland
Early Feasibility Studies (EFS) are small-scale clinical investigations vital for refining medical device design and ensuring early safety in humans. EFS are often conducted in Europe, driven by unmet medical needs and strong investigator expertise. However, dedicated guidance on EFS is lacking. The EU Medical Device Regulation (MDR 2017/745) sets out extensive provisions for clinical investigations, yet, like ISO 14155:2020 and the MDCG 2021-6 Q&A, it does not address the unique requirements of EFS.
The HEU-EFS initiative (2023–2027) is addressing this challenge by spearheading a harmonised European methodology. This effort is designed to ensure that Europe can match and potentially surpass current global EFS pathways. A key deliverable is the creation of an EFS protocol template to enhance compliance, accelerate approvals, and unite regulators, innovators, and patient advocates under a clear, shared framework. By fostering alignment, HEU-EFS is laying the foundation for more efficient early-stage trials, using the U.S. EFS program as a blueprint, and strengthening Europe’s position as a leader in medical innovation.
CEO, MD-Clinicals, Switzerland
Independent Consultant, UK
Managing Partner, Escentia, Germany
Clinical Affairs Specialist, Escentia, Germany
PhD Researcher, Trinity College Dublin
Artificial intelligence (AI) technologies are increasingly integrated into medical devices, particularly for interpreting medical images. However, defining the appropriate clinical data requirements for these systems is far from straightforward. There is no universal solution: data needs depend on how an algorithm is trained and validated, the quality and diversity of reference datasets, and the extent of clinical input during development. Under the EU Medical Device Regulation (MDR), manufacturers face the challenge of designing a coherent “story” for clinical evaluation that bridges technical performance with real-world clinical value.
This session explores the multifaceted considerations of clinical data for AI-enabled medical devices from an academic perspective. Through case examples and audience participation, we will examine how factors such as training methodology, intended use, dataset representativeness, and validation design influence evidence requirements. Participants will be invited to vote on how different aspects of development and deployment affect the nature and amount of clinical data needed, highlighting the complexity of aligning AI innovation with MDR expectations.
Technical Team Manager, AIMD Clinical, BSI, UK
This session will include practical insights into:
- How to plan clinical and technical evidence strategies with CE marking in mind.
- What Notified Bodies look for when reviewing applications and how evidence should be planned and presented.
- The importance of correctly defining and comparing against the state of the art.
- Common pitfalls where evidence generation is not aligned to these requirements.