
The shift to in silico experiments


‘Taking a multi-scale approach from molecular scales to plant scales can overcome many of the difficulties faced during the tech transfer stages. It needs data connectivity backed by systematic exploration of the parameter space to make the deployment of virtual or in silico methods successful,’ says Ravi Aglave, Director, Process Industries, Simulation and Test Business, Siemens Digital Industries Software (Image: Siemens)

As part of a series of articles based on a recent online roundtable event entitled ‘From research to manufacturing: Overcoming data challenges in the drug development cycle’, hosted by Scientific Computing World, we asked our panellists about how the shift to in silico experiments has changed the data landscape in drug development.

 

Moritz von Stosch, Chief Innovation Officer at DataHow, says: “Generally, we want to decrease the number of physical experiments, because they are expensive and they take time. Rather, we want to increase the in silico part of experimentation, because it’s typically faster, cheaper and uses previously collected data.”

 

Manasa Ramakrishna, Associate Director, Knowledge Graph - Design and Operations, AstraZeneca, says it’s not just cost that’s a factor: “At AstraZeneca, part of our ambition is to be more sustainable, so reducing physical validation experiments sits well with that ambition.” 

 

John McGonigle, Director of Bioinformatics at Insmed, adds: “I usually advise our scientific teams to reduce the number of experiments they perform and, instead, try to increase the size of each individual experiment as much as possible. This is because maximising the success of individual experiments can massively impact your ability to make accurate data-driven decisions. Other companies are coming at this from another angle. With the advent of AI models, some companies are focusing on decreasing the diversity of experiments, but increasing the throughput, with the aim of maximising information for a few core readouts in an attempt to shorten the DMTA (Design-Make-Test-Analyse) cycle.”

 

There is still a place for physical experiments, but they need to deliver usable results every time. “The physical experiments that we do undertake need to be information-rich,” continues von Stosch, “so that we can learn from them without having to do hundreds of them.”

 

In a digital drug development world, the ability to simulate or predict the behaviour of the manufacturing process in silico - from molecular to plant scale - has multiple benefits, including higher throughput of candidates and lower cost of experiments.

 

Ramakrishna says there’s a balance to be struck between the physical and the virtual. “It’s very much a chicken or an egg type of question,” she says. “Ideally, all our in silico predictions or suggestions are best validated in the lab, but we do not want to unnecessarily waste time and lab resources. It would be better to go through historical data to see if we can make inferences or validations based on what has happened before. The older the data, though, the more likely it will be in an inaccessible format, so this issue will need to be addressed first.

 

“With newer experiments, we’re trying to capture data in a much more sensible way. We’re asking to be included in discussions when our teams are starting out experiments, rather than after they are complete.”

 

Kevin Back, Product Manager, Cambridge Crystallographic Data Centre (CCDC), says: “My view is that you would always do some experimentation to validate, although you might do fewer experiments. In the crystallographic space, there’s a lot of confidence nowadays in structural prediction models and in getting the most thermodynamically stable polymorph of your drug predicted if you put enough computational effort into it. What’s less certain is which other forms on the landscape might be kinetically accessible.”

 

Darren Green, Director, DesignPlus Cheminformatics Consultancy, adds: “It’s about designing better and more productive experiments, rather than removing laboratory experiments altogether; that’s the benefit of machine learning and simulation. The other benefit of having a model is that you’re able to put what you observe in the experiment in context. An odd result compared to a prediction might prompt you to rerun the experiment, for example. ‘Did I expect that?’ you might ask, leading you to think, ‘I might need to repeat that experiment, because I know that my predictions from that particular model are never that wrong’.”
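In practice, this kind of sanity check can be as simple as comparing an observed value against the model’s prediction and the model’s known historical error. The short Python sketch below illustrates the idea; the threshold, error estimate and solubility figures are illustrative assumptions, not values from the roundtable.

```python
# Minimal sketch (hypothetical values): flag an observed result for repetition
# when it deviates from the model's prediction by more than the model's known
# historical error, echoing the "Did I expect that?" check described above.

def flag_for_repeat(observed: float, predicted: float,
                    model_rmse: float, k: float = 3.0) -> bool:
    """Return True if the observation sits further from the prediction than
    k times the model's historical root-mean-square error."""
    return abs(observed - predicted) > k * model_rmse

# Example: a measured solubility of 1.9 mg/mL against a prediction of 4.2 mg/mL,
# for a model whose historical RMSE is 0.5 mg/mL, would be flagged for a rerun.
if flag_for_repeat(observed=1.9, predicted=4.2, model_rmse=0.5):
    print("Result is outside the model's usual error band - consider repeating the experiment.")
```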

 

Jackie Lighten, Program Manager, Cell and Gene Therapy Catapult, believes that knowledge sharing at every level can help remove physical validation steps. “It’s important that the whole development team understands the biology down to the molecular level,” he says. “In my experience, this understanding is very valuable for the bioinformaticians who are handed data to analyse and verify ‘success or failure’ of an experiment or batch. Data scientists must not only be good at coding and statistics, but also understand if the data make biological sense considering the null hypothesis that is being tested.

 

“This can help speed things up and reduce the amount of time in the wet lab, where bioinformaticians can squeeze as much power out of the data as possible, or even direct the laboratory development team where the data is lacking to test the targeted biological hypotheses. 

 

“In the new age of AI, where analytical code can be generated by simply talking to your computer about a problem to solve, it’s even more important that those analysing the data understand the biological relevance of the input data, analysis, and how to properly interpret results in the context of the biological mechanisms that are being scrutinised.”

 

Siobhan Fleming, Digitalisation Consultant, Pharmaceuticals and Life Sciences, Siemens, adds: “In the rapidly evolving landscape of pharmaceutical R&D, the integration of in silico modelling and simulation is not just a luxury, but a necessity. These digital tools enable researchers to predict and optimise experimental outcomes from molecular scale to recipe scale, which significantly reduces the time and cost associated with relying only on traditional wet lab experiments. By leveraging in silico models, we can enhance the precision and efficiency of drug discovery and development, ultimately accelerating the delivery of innovative therapies to patients.”

 

The full report is available to download as a White Paper, which also covers: Data collection and formats; Data silos and how to avoid them; Data ontologies and efficiency of process development; Cultural change and the digitisation journey; and Process optimisation and technology transfer.
 

You can register to download the White Paper here.

The roundtable and series of articles are sponsored by Siemens Digital Industries.
