Using AI for Regulatory Referencing in CMC Consulting: Risks and Mitigations

As a consultant supporting CMC development during clinical stages, I regularly use AI tools to enhance a range of tasks – from drafting CMC development plans and setting specifications, to understanding analytical testing requirements and identifying relevant regulatory guidelines. When applied thoughtfully, AI can significantly streamline complex and time-consuming processes. However, its use also introduces specific risks that need to be carefully managed to ensure reliability, compliance, and scientific integrity.

One major concern is the use of outdated or inaccurate content. AI models are typically trained on historical data and may not reflect the most current versions of regulatory guidelines. If I ask the AI about ICH Q2, for instance, it might reference an earlier draft or fail to incorporate recent Q&A updates. Indeed, when I asked ChatGPT a specific question about ICH Q2 (Validation of Analytical Procedures), even using its agent “WEB pilot”, which should access the most recent versions on the Internet, the answer I got was incorrect: it was based on version 1 of the guideline instead of version 2. The risk is even higher for region-specific guidance, where AI may inadvertently cite FDA expectations for an EMA dossier, or vice versa. To mitigate this, I always verify outputs directly against official sources such as EMA and FDA guidelines.

Another issue is source transparency. AI models can provide generalized summaries without clear citations or links to source documents. This lack of traceability is problematic in a regulated context, especially when we need to justify decisions during inspections or in Module 3 submissions. I’ve learned to treat AI outputs as pointers, not definitive sources, and as mentioned above, I always follow up by locating the original document myself.

There’s also the risk of hallucinated references: fabricated documents or clauses that sound convincing. These are dangerous in any regulatory context. I’ve seen AI create entirely fictional guidance sections or merge unrelated guidelines into one. As above, cross-referencing with validated regulatory databases (such as RAPS, Cortellis, or agency portals) or the relevant guidelines themselves (EMA, FDA, ICH) helps ensure credibility. I have also noticed that some AI tools hallucinate less than others: Perplexity AI, for example, is usually more accurate, as are certain specialized ChatGPT agents (e.g., ‘ChatGMP’ and ‘Medical Device Regulatory Advisor’).

A less obvious but serious risk is, of course, data protection. If I’m consulting on proprietary projects and use cloud-based AI tools that aren’t validated or private, there’s potential exposure of confidential sponsor data. This could breach both GDPR and confidentiality agreements. My solution: I never put sensitive information into unsecured platforms and additionally use enterprise-grade tools with appropriate data controls.

Last but not least, AI lacks regulatory-CMC nuance and the required expertise. It may present guidance without considering the clinical stage, product class, or regional context. A prompt written by someone with no CMC expertise may yield misleading, incomplete, or irrelevant outputs. Clear, context-specific questions, framed with an understanding of clinical stage, region, and technical nuance, are essential to guide the AI toward accurate and actionable responses. That is where human expertise remains crucial.

In summary, AI is a valuable assistant, but not a substitute for CMC judgment. By combining AI’s efficiency with rigorous validation and oversight, we can use it responsibly and effectively in CMC consulting.

About the author

Tamar Oved
QA and CMC Director at ADRES Advanced Regulatory Services

Tamar has over fifteen years of experience in the pharmaceutical and biotechnology industry. She is experienced in quality assurance, quality control and manufacturing of drugs and biological products. She oversees GMP (Good Manufacturing Practices), GLP (Good Laboratory Practices) and CMC (Chemistry, Manufacturing and Controls) activities at production or testing sites. Tamar also has experience with aseptic processes. https://www.linkedin.com/in/tamar-oved-86082a1b/

Subscribe Now to the Bio-Startup Standard

Notify me for the next issue!

    Contact Us
    Contact us






      Skip to content