Current Limitations of AI in Regulatory Writing and Assessments for Drug and Device Development

Artificial Intelligence (AI) has made remarkable progress in recent years, offering promising tools to streamline documentation, accelerate data analysis, and support planning as well as strategic and regulatory decision-making across the product development lifecycle. However, when applied to regulatory writing and scientific interpretation, especially in the preparation of regulatory development plans and formal submissions such as Pre-RFDs, Pre-INDs, or Scientific Advice packages, current AI tools reveal significant limitations. These shortcomings pose meaningful challenges for developers of drugs, medical devices, and combination products, potentially leading to regulatory communication gaps, misclassification, or flawed strategic decisions that can result in substantial delays, increased resource expenditure, and an extended time-to-market.

Misalignment with Regulatory Language and Strategic Intent

One of the most significant challenges of AI-generated content is its frequent misalignment with the precise and context-sensitive language required in regulatory communication. While AI tools can produce fluent, grammatically correct English, they often distort the intended regulatory message in subtle but meaningful ways.

For instance, when drafting a Pre-RFD to support the classification of a product as a medical device, AI may introduce terminology commonly associated with pharmaceutical products. Seemingly minor linguistic choices, such as referring to “active ingredients,” “systemic effects,” or “pharmacological action,” can conflict with the regulatory requirements for devices. This is particularly critical when describing the product’s mechanism of action, which must not only align with regulatory definitions of medical devices but also consider the diverse classification frameworks and terminological nuances applied by health authorities across different jurisdictions.

Inaccurate language may suggest pharmacologic activity where none exists, potentially triggering misclassification, increased regulatory hurdles, or delays in review. Moreover, given the variability in terminology and classification criteria across jurisdictions, regulatory messaging must be carefully tailored to each specific context, something current AI systems are not reliably equipped to do.

In pursuit of sounding more polished or “native,” AI tools also tend to replace specific regulatory terminology with broader or stylistically refined alternatives. This can compromise the scientific clarity and regulatory intent of a submission, which may significantly impact regulatory interpretation and decision-making.

AI tools are not yet capable of reliably interpreting nuanced regulatory distinctions or adjusting language to support the strategic regulatory positioning of a product effectively.

Challenges in Clinical Data Retrieval and Interpretation

AI tools are increasingly used to identify and analyze large datasets, such as clinical trial records from public registries and other platforms. However, their ability to retrieve specific studies or datasets, particularly when queried with unique identifiers like NCT numbers, is still limited. In many instances, AI-generated outputs return incomplete results, overlook key endpoints, or misrepresent clinical aspects of study design and findings. These inaccuracies may stem from limitations in recognizing trial identifiers, distinguishing between product classifications, and handling other formal definitions.
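
One practical safeguard is to cross-check any registry identifier an AI tool cites against the registry itself before relying on the associated summary. The Python sketch below is a minimal illustration of that check, assuming the public ClinicalTrials.gov v2 REST API and the third-party requests library; the NCT number shown is only a placeholder for illustration.

```python
import requests

def fetch_trial_record(nct_id: str) -> dict | None:
    """Look up a study on ClinicalTrials.gov by NCT number.

    Returns the registry's JSON record if the identifier resolves,
    or None if no study exists under that ID (a common failure mode
    when an AI tool has garbled or invented an identifier).
    """
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    resp = requests.get(url, timeout=10)
    if resp.status_code == 404:
        return None
    resp.raise_for_status()
    return resp.json()

# Cross-check an AI-cited identifier against the registry itself.
record = fetch_trial_record("NCT01234567")  # placeholder example ID
if record is None:
    print("Identifier does not resolve; treat the AI summary as suspect.")
else:
    ident = record.get("protocolSection", {}).get("identificationModule", {})
    print("Registered title:", ident.get("briefTitle"))
```

A lookup that returns no record, or a registered title that differs from what the AI reported, flags the output for manual review before it informs any development decision.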

Beyond these technical limitations, a more fundamental challenge lies in AI’s inability to contextualize clinical data within the specific development stage of a product. For example, AI-generated analysis may fail to recognize whether the product has already undergone safety evaluations in previous studies, whether it is approved and now being studied for a new indication, or whether it is a novel investigational product. These distinctions are critical for assessing the relevance, novelty, and regulatory interpretation of the data.

In addition, AI tools generally do not account for broader clinical and methodological context—such as how the selection of primary and secondary endpoints aligns with the study’s inclusion and exclusion criteria, how these endpoints relate to the overall study duration and follow-up period, or whether the analysis focuses on a single timepoint versus longitudinal data.

As a result, the evidence summaries produced by AI may misrepresent the maturity or adequacy of the clinical dataset. When such outputs are used to inform development strategies or formal regulatory submissions, they can lead to misguided clinical assumptions, suboptimal protocol designs, inefficient prioritization of studies and milestones, and ultimately fail to align with regulatory expectations.

Inaccurate or Incomplete Referencing of Scientific Literature

Sourcing and citing peer-reviewed literature is another common area where AI tools fall short. When prompted to retrieve articles by DOI or to extract references from a predefined literature list, AI tools often fail to match citations to the appropriate content, return entirely incorrect sources, or, in some cases, retrieve no results at all.

Even more concerning is the use of AI to generate scientific content intended to support regulatory submissions, where tools have been known to fabricate citations entirely. This not only undermines the scientific integrity of the document but also poses a significant risk to the credibility of the submission if unverifiable or non-existent references are included.
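
A practical mitigation is to resolve every AI-supplied reference against an authoritative bibliographic index before it enters a submission. As a minimal sketch, assuming the public Crossref REST API and the third-party requests library (the DOI shown is a hypothetical placeholder), the following check surfaces fabricated or unresolvable citations immediately:

```python
import requests

def verify_doi(doi: str) -> str | None:
    """Resolve a DOI against the Crossref index.

    Returns the registered article title if the DOI exists, or None
    if Crossref has no record of it, which is a strong signal that
    the citation was fabricated or garbled.
    """
    url = f"https://api.crossref.org/works/{doi}"
    resp = requests.get(url, timeout=10)
    if resp.status_code == 404:
        return None
    resp.raise_for_status()
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else ""

# Screen a batch of AI-generated references before drafting begins.
for doi in ["10.1000/xyz123"]:  # hypothetical DOI for illustration
    title = verify_doi(doi)
    if title is None:
        print(f"{doi}: NOT FOUND; verify or discard this citation.")
    else:
        print(f"{doi}: {title}")
```

Even when a DOI resolves, the registered title should still be compared with the citation text, since a real identifier attached to the wrong paper is just as damaging to a submission's credibility.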

The Future of AI in Regulatory Planning: Progress with Caution

AI holds considerable promise as a supportive tool in the regulatory processes surrounding drug and medical device development. It can be a powerful assistant for early-stage drafting, language refinement, and high-level summarization. Additionally, AI has the potential to save time and resources when analyzing large datasets, helping to inform more robust regulatory assessments and support the strategic design of development plans.

As AI tools continue to advance, several current limitations in regulatory writing and data assessment may become less prominent. Structured and harmonized data environments, combined with enhanced natural language understanding models, may allow AI systems to more consistently extract relevant information and tag key endpoints, reducing the manual effort involved in basic data mining and speeding up early-stage analysis.

However, despite these gains, one of the most persistent and problematic gaps will remain: the inability to independently verify the accuracy or validity of such AI-driven analyses. Even if AI systems can surface studies based on seemingly correct filters or terminology, there is currently no mechanism to audit or validate how these tools weigh relevance, detect bias, or infer conclusions from aggregated data. AI lacks epistemic awareness: it does not “know” when it’s wrong, nor can it justify its outputs with the same methodological transparency required in regulatory contexts. As a result, developers may still face a critical verification burden when using AI-derived evidence to support clinical assumptions or regulatory arguments.

At its current level of maturity, AI cannot replace the expertise of regulatory professionals, especially when precision, context sensitivity, and the articulation of a clear clinical and regulatory strategy are critical to the product development plan and overall regulatory success. Organizations developing drugs, devices, or combination products should remain cautious when leveraging AI for regulatory purposes. Developers relying on AI-generated text, regulatory assessments, or clinical designs without expert oversight and integration of product-specific knowledge risk undermining their own classification strategy and introducing avoidable regulatory hurdles. Until these technologies evolve to fully comprehend regulatory frameworks, classification pathways, and the complexity and regulatory significance of formal submissions, their role should remain advisory and supplementary rather than serving as a primary decision-making tool.

About the author

Lital Israeli Yagev
Scientific and Regulatory Affairs Director