From training data to training people

Let’s do something a properly trained writing AI probably would not do: start with an “I” statement.

I get the feeling that each time ADRES reaches out to me about possibly contributing an article to the BioStartup Standard, it turns into me writing about parts of my personal journey in the biotech space.

It is the same again this time, because four widely spaced personal events, or perhaps rather encounters, over the course of that journey together inform this piece, which, stripped of those anecdotes, boils down to a small piece of advice regarding training. Not training, as the context of AI may suggest, in the sense of feeding training data into an AI model, but training in its traditional sense: the training of people.

  1. The first touchpoint was reading an article about knowledge loss (organizational forgetting) in the chemical industry. This was more than 15 years ago, and I did not keep a copy of the article, so unfortunately, I cannot attribute the exact source. Part of the methodology included interviews with retired lead engineers from several chemical plants about the role they continued to play as consultants post-retirement. The article highlighted that losing these key knowledge assets (the lead engineers) had no impact on routine operations, but that this changed as soon as troubleshooting was required, be it due to quality issues, breakdowns, or changes such as planned expansion or process improvement.
  2. Unlike the first, the second touchpoint was directly related to biotech and to AI – or rather to Natural Language Processing (NLP), one of the key concepts behind machine learning and language models, because back then, in the early 2010s, no one I knew called it AI yet. But the ideas were already there, and I was discussing their potential application to biotech (specifically, to the analysis and presentation of data from clinical trials) with friends in Cambridge who were doing NLP research. While we could already envision, if dimly, what would be possible in the future (as is possible now!), in the short to medium term we saw that the limitations of the then-available technology would place it firmly as a tool for a human expert, like an advanced word processor or statistical programming suite.
  3. Fast forward to 2025, and we have AI established and growing in importance across industries (including, of course, biotech). Along with that, we have a growing body of criticism as well, which is where the third touchpoint comes in: a couple of weeks ago (end of June 2025, that is), a friend recommended a draft paper[1] to me, covering a study on the neural and behavioural consequences of using AI assistance in (academic) writing tasks. The authors concluded that, aside from its positive effects, the use of AI also “came at a cognitive cost”, impairing critical evaluation of AI outputs and potentially reinforcing “echo chamber” environments in which outputs from AI systems are critically checked less and less as their users are primed by previous exposure.
  4. Then, shortly thereafter, the final piece of the puzzle, the one that made everything click into place, arrived when colleagues at ADRES reached out with the call for contributions to the issue of the BioStartup Standard you are currently reading. Right there, in the middle of the technical guidelines for submitting an article, I read “AI tools can assist, but substantial revision and personalization are required” and found that mildly funny: the call for contributions to “the AI issue” was itself critical of relying fully on AI. Initially, my somewhat vague intention had been to write about implementing “behind the firewall” systems in small-scale organisations, or something similarly operations-oriented. But I felt myself constantly drawn back to this critique of AI in a call for AI, and it got me thinking in an entirely different direction. One by one, the memories above came up: my first concepts for utilising AI in trial analysis and reporting as a tool for human experts (abortive ones; I would be lying if I said we implemented anything of what we discussed in Cambridge); the recent paper on the cognitive cost of using AI, specifically in an educational (learning!) setting; the article on organizational forgetting caused by personnel turnover that I had read long ago. It all started to fit together.

Let me pause here briefly to state (if that did not become clear from the Cambridge anecdote) that I am not an AI Luddite trying to warn you about how dangerous this technology is and that you had better not use it. We are using it. And we should be using it. It is a powerful tool, as I am sure many of the colleagues contributing to this issue will highlight in their own articles.

As we add new and powerful tools to our toolbox, we need to make sure we also have the right users for these tools, not just the right tools themselves. That means not neglecting the training of your next generation of users: not just in using the tools, but in the fundamentals.

The current generation of professionals in our space still acquired their skills and experience outside an AI echo chamber. They are experts able to deliver without AI support, who are further empowered by new AI tools and able to critically review what a system delivers, feeding into a continuous improvement cycle.

But this generation is not here to stay forever. What is needed, then, is to ensure that the next generation, too, understands the underlying science and processes, and often enough the art and craftsmanship, to do the same: to function and deliver without AI, to make the most of the AI systems available, to check whether those systems are performing, and to improve them going forward.

Invest in AI. But do not neglect to invest in people.

AND not OR.

[1] N. Kosmyna, E. Hauptmann, Y. T. Yuan, J. Situ, X.-H. Liao, A. V. Beresnitzky, I. Braunstein, P. Maes, ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task’, https://arxiv.org/abs/2506.08872 (retrieved 30-Jul-2025)

About the authors

Johann Daniel Weyer
Managing Director at ICRC-Weyer GmbH

Johann Daniel Weyer is the owner and Managing Director of ICRC-Weyer GmbH, an expert German consultancy and all-phase CRO. A lifelong professional and learner in the CRO and scientific consulting fields for biopharma and medtech, he has built wide and in-depth knowledge and experience across service areas, product types, and indications over the course of a 30-year journey from the shop floor to company leadership. He personally provides expert consulting and training on complex topics at the intersection of medical data management, medical writing, and pharmaco- and device vigilance, as well as the integration of multi-functional teams.

Maria Schulz
Quality Manager at ICRC-Weyer GmbH

Maria Schulz holds degrees in Pharmaceutical and Chemical Technology and Clinical Trial Management. An accomplished quality assurance and quality management professional, she joined ICRC-Weyer more than 15 years ago. Since then, she has been shaping the ICRC-Weyer Quality Management system and environment, consulting clients on quality topics, and flank-guarding the company's and its clients' move into new and innovative fields with an eye towards necessary quality and compliance measures.
