What Clinical Trial Sponsors Must Know Before Using AI Tools: Data Protection and Global Regulatory Perspectives
Artificial intelligence is becoming an essential component of modern clinical trials. It supports patient recruitment, accelerates data analysis, enables adaptive trial designs, and contributes to regulatory decision-making. As sponsors adopt AI systems across various stages of the research lifecycle, they must address the legal and ethical frameworks that govern the use of personal data and algorithmic technologies in healthcare.
This article outlines the core responsibilities of clinical trial sponsors when using AI tools, with a primary focus on European data protection and AI regulations, while also referencing global guidance and emerging standards that shape the broader landscape.
Applying the General Data Protection Regulation
The General Data Protection Regulation ("GDPR") applies whenever personal data is processed in the European Union ("EU") and the wider European Economic Area ("EEA"), or by entities targeting individuals there; the United Kingdom ("UK") applies a substantially equivalent regime under the UK GDPR. When an AI system is used in a clinical trial to process personal data, the sponsor qualifies as a data controller and remains responsible for the GDPR compliance of all data processing activities. This includes verifying the legal basis for processing (typically consent or the sponsor's legitimate interest in scientific research), ensuring that the AI system operates in line with the purpose limitation principle, and maintaining the records and assessments needed to demonstrate accountability. If an AI tool is introduced after initial data collection, or if its function differs from what was initially communicated to participants, the sponsor may need to assess whether the original legal basis still applies or satisfy the obligations required to establish a new one. The sponsor must also ensure that any third party providing AI services operates as a processor under a compliant data processing agreement and implements adequate technical and organisational measures to protect data confidentiality and integrity.
Understanding the EU AI Act and Its Intersection with the GDPR
In 2024, the European Union adopted the AI Act, which establishes a legal framework for the development and use of artificial intelligence systems. The regulation applies to all AI systems that are placed on the EU market or used in the EU, regardless of where the provider is based.
The EU AI Act establishes a risk-based regulatory framework that classifies AI systems into four categories: unacceptable, high, limited, and minimal risk. Unacceptable-risk systems are banned outright within the EU because they pose a serious threat to fundamental rights, safety, or democratic values. High-risk AI systems are subject to the most stringent obligations under the Act and may be deployed only in compliance with those obligations. Limited-risk AI systems are permitted but must meet basic transparency requirements, such as informing users they are interacting with an AI system and ensuring the design does not mislead or deceive. Minimal-risk systems are not subject to specific requirements under the AI Act, but must still comply with other applicable laws, including data protection frameworks.
It is important to understand how the AI and data protection frameworks differ and why they may apply simultaneously. The GDPR governs the processing of personal data, so whenever an AI system processes personal data, the entity deploying it must comply with the GDPR. The AI Act, by contrast, is designed to ensure the safe, transparent, and ethical use of AI in real-world situations, whether or not personal data is involved. Does the use of AI always require the processing of personal data? Not necessarily. In clinical trials, however, most AI applications, such as patient selection, imaging analysis, and safety monitoring, typically involve personal data.
Does the EU AI Act Apply to Scientific Research?
Despite the broad scope of the AI Act, Article 2(6) and Recital 25 establish a narrow exclusion for AI systems developed and used exclusively for scientific research and development. According to the Regulation, such systems fall outside the AI Act only if they are created solely for the purpose of conducting scientific research, and only if they are not placed on the market or used to produce legal or significant effects on individuals.
This exclusion was introduced to protect academic and experimental research and is designed to avoid imposing the full regulatory burden on AI models used in non-commercial, closed research environments. However, the exemption does not apply in a number of common clinical research scenarios. First, if the AI system is procured or licensed from a commercial provider, rather than developed specifically for the research project, the exclusion cannot be claimed. Second, if the system is used in a clinical trial where it influences patient eligibility, dosing, safety monitoring, or any aspect of the investigational product's development pathway, the system is no longer considered confined to a purely scientific function. It is then considered to be "put into service," as defined in Article 3(11) of the AI Act.
In practice, this means that most AI tools used operationally in clinical trials, particularly in interventional or regulatory-driven settings, will not qualify for the scientific research exclusion. The same applies to systems developed in a research environment but intended for future market use, including tools supporting software as a medical device or algorithms subject to future certification.
EU AI Act and Clinical Trials
AI systems used in clinical trials may fall within the high-risk category under the EU AI Act through two regulatory pathways outlined in Article 6. First, under Article 6(1), an AI system is considered high risk if it is a product or a safety component of a product governed by EU harmonisation legislation listed in Annex I, such as medical devices under Regulation (EU) 2017/745 or in vitro diagnostic devices under Regulation (EU) 2017/746, and if that product requires third-party conformity assessment. This means that investigational AI tools used for diagnostic decision support, patient stratification based on biomarkers, or real-time safety monitoring may be classified as high risk if they fall within the scope of these device regulations and are subject to notified body review.
Second, Article 6(2) states that AI systems listed in Annex III are also deemed high risk. While clinical research is not explicitly mentioned in Annex III, an AI system used in a trial may fall under this category if it materially influences decisions that affect participants’ health or fundamental rights, particularly where profiling is involved or medical decision-making is impacted. Sponsors must assess whether the AI system qualifies under either of these routes, as both may lead to a high-risk designation with corresponding regulatory obligations.
If a clinical trial sponsor deploys a high-risk AI system (e.g. for patient selection, safety signal detection, or diagnostic support), it must comply with the EU AI Act by ensuring the system is used according to the provider’s instructions, assigning trained human oversight, retaining system logs for at least six months, and monitoring the system’s performance. The sponsor must report any serious incidents or risks to the provider and relevant authorities without delay, ensure input data is relevant and representative, inform trial participants of the AI system’s use, and where applicable, perform a fundamental rights impact assessment and complement the existing GDPR Data Protection Impact Assessment (DPIA) with AI-specific risks.
The Role of Data Protection Impact Assessments
When AI systems are used in clinical trials and involve the processing of sensitive health data or automated decision-making, a Data Protection Impact Assessment may be required under the GDPR. This assessment should include a description of the processing, the purpose of the AI system, the legal basis for data use, and an evaluation of the risks to data subjects. Where the AI system falls under the AI Act’s high-risk category, the sponsor must also maintain a risk management framework aligned with the requirements of the Regulation, including appropriate levels of human involvement, accuracy monitoring, and transparency in system design.
Global Context: Ethics and Emerging Regulatory Approaches
While the European Union provides one of the most comprehensive legal frameworks for AI in healthcare, other jurisdictions are developing their own regulatory and ethical approaches. The United States Food and Drug Administration (FDA) has issued an action plan for AI in medical devices and emphasizes good machine learning practices, particularly in software that evolves over time. Health Canada has issued draft guidance for AI-enabled medical devices, and Australia has adopted a regulatory sandbox model for early-stage AI testing.
The World Health Organization has published the Ethics and Governance of Artificial Intelligence for Health report, which sets out guiding principles such as transparency, accountability, inclusiveness, and respect for human autonomy. These principles are intended to guide all stakeholders involved in health-related AI, including researchers and sponsors. Even where specific legal obligations may not yet exist, adherence to ethical standards is increasingly expected by ethics committees, funders, and regulatory agencies. Sponsors are encouraged to align with these international standards and document their governance processes accordingly.
Conclusion
The application of the EU AI Act follows a phased approach. The Regulation entered into force in August 2024, with key provisions becoming applicable in stages. Rules concerning prohibited AI practices and AI literacy apply from February 2025. Obligations for general-purpose AI models, including transparency, documentation, and risk mitigation, apply from August 2025. Requirements for high-risk AI systems, such as conformity assessments, risk management, and human oversight, apply from August 2026. For AI systems embedded in medical devices that require notified body involvement, the relevant obligations apply from August 2027.
At the same time, jurisdictions such as the United States, Canada, the United Kingdom, and Australia are developing or implementing new legal frameworks to govern the use of AI in healthcare and clinical research. As global standards continue to emerge, clinical trial sponsors should design compliance programs that align with both European regulations and international expectations. A harmonized approach will help ensure ethical, legal, and operational consistency when deploying AI tools in trials across multiple regions.
