Dr Markus Gershater, Co-Founder and Chief Science Officer at Synthace, presents his predictions for the life science industry in the next 12 months.
2023 is shaping up to be an uncertain year. While prediction articles like these often skew towards the positive, this year does not invite the same optimism as previous years. For those that cannot adapt and change with the times, uncertain economic conditions combined with technological shortcomings paint an unhappy picture for the year ahead. With this in mind, let’s take a look at what might happen in the next twelve months, and how to avoid the worst.
“Point solution” software in the lab delivers diminishing returns
For decades, the life sciences industry has put its faith in the strength of its labs to make gains in R&D. Improve your lab, goes the logic, and you improve the science that comes out of it. And for a time, this has worked. Better equipment and better software have helped biologists make incredible advances.
But is this still true?
Today, the returns from investments in “point solution” lab software are diminishing. Because lab software was initially created to solve individual problems (e.g. “ELNs” being electronic versions of lab notebooks), its impact outside of those individual problems is limited. While all of these developments have been welcome, making improvements to them no longer has the same level of impact that it used to.
What these developments have never done is improve the overarching experiments that go on in labs. Yet experiments are the most important, fundamental building blocks in our understanding of biology, and the lack of a reliable way to improve them with digital tools is a problem.
What could the life science industry achieve if it instead thought in terms of experiments, and made use of experiment software alongside existing lab software? Experiment software would take a more powerful approach to the data, metadata, and contextual experiment information that flows through the lab and between the people in it, and the outputs it produces would be far more valuable, accurate, and rapid than anything that relies on manual input and intervention.
AI/ML makes waves… but not in the lab
While AI/ML is working wonders on the data that we already have access to, the penetration of this technology into wet labs—and the breadth and depth of the data that comes from them—is nowhere close to where it needs to be to make any meaningful impact. But imagine what could be possible: networks of laboratories interconnected by AI/ML tools that routinely identify patterns hidden in broad swathes of unrelated experimental data, or are able to suggest practical, non-linear modifications to experimental methods that unlock new discoveries.
Why hasn’t something like this happened yet?
Labs are notoriously hard to automate and control, and even when they are automated, aggregating and structuring the resulting data is harder still. This is because the state of technological maturity and digitalisation is far behind where it needs to be, and because current attempts to digitalise laboratories focus purely on point solutions, as discussed above.
This is a significant problem because, without the insight we could gain from this technology, our understanding of the minutiae of lab work and how biology interacts in the real world will remain limited to the scale of individual human understanding. We could be leaning on computing power to help us do a lot of the hard work in finding these patterns and improvements, enabling a greater outflow of creativity on the level of the individual scientist. But we aren’t.
The exception to this trend will be the labs that make use of experiment platforms to design experiments, control equipment, and structure data in a single place. It’s not the lab of the future we need to worry about anymore (it arrived some time ago); it’s the experiment of the future that needs more urgent attention.
Lab automation bottlenecks stay squeezed
For many, the problem of “how to improve lab automation” will remain a static issue in 2023. Wet lab bottlenecks, everything from throughput to data to the number of runs, are here to stay unless something fundamentally changes in the way we think about how labs work. Gains will remain incremental, and change will remain difficult. To the outside observer, this might seem surprising: with all of the modern equipment and software we have available, how is this even possible?
It’s possible because many are still thinking of automation as a linear issue:
- If we haven’t got automation equipment, how can we get it?
- If we’ve got automation equipment, how do we put it to use?
- If we’re using automation, how do we do more of it, or get higher utilisation?
Only the very best and brightest R&D teams are starting to ask how automation might play a bigger role in the overall scientific progress of any given lab. It’s the automation engineers in these teams who are asking different questions and redefining the problem: “how can I get more scientific value from every experiment?” Automation may be involved in this, but scientific value is not always a function of the volume or density of your automation. Far from it.
What counts as scientific value in automated labs needs to be redefined:
- Understanding the feasibility of experiments before automation: do I know how much this is going to cost and what materials it will require? Can I tell whether I could get the same results with fewer consumables or, say, a cheaper reagent?
- Understanding the reusability of the experiment output: are my data, insights, and materials from automated experiments reusable? Can I use them elsewhere? Am I getting the output I need, output that can be ploughed into further research?
- Understanding the reproducibility of the experiment: how well do we understand the design space? Is the protocol unambiguously defined, and does the team have the ability to run it with critical factors controlled to their preference?
The uncomfortable truth is that labs aren’t really the problem here: it’s the experiments running inside them. If we can’t switch to thinking about experiment systems, or the experiment of the future, any gains are likely to be small. An experiment-first approach asks different questions about scientific value, and in doing so reveals the bottlenecks we’re really dealing with. Until we reframe the problem and think about scientific throughput from an experiment-first position, we’re likely to remain stuck.
An uncertain year ahead
None of these predictions paint a particularly optimistic vision of the future. But, perhaps, this is something that we need to hear. It’s true that many will run headlong into the problems and trends we list above, and go through a great deal of pain in the process.
But there is hope: awareness of these problems is the first step in knowing how to avoid or mitigate them. As usual, those who come out on top by the end of this year will be those who can manoeuvre and adapt with speed, take bold steps, and invite fresh change instead of fearing it.