Professor Ana Cavalcanti is a professor in the Department of Computer Science at the University of York. She holds a Royal Academy of Engineering Chair in Emerging Technologies, a role in which she oversees modelling, simulation, code generation, and testing concerns.
Cavalcanti is also the Director of RoboStar, a centre of excellence dedicated to research and technology transfer in software engineering for robotics. The research and development work there covers various aspects of model-based software engineering, including modelling, simulation, testing, and verification. The group's models cover not only control software but also physical models of the platform and scenarios, environment assumptions, and human behaviour.
RoboStar is one of the largest research groups in the world, bringing a diverse membership of researchers working in robotics under a single umbrella. Its membership comprises UK researchers from the universities of York, Sheffield, and Surrey, from King's College London, and from Thales, as well as researchers from around the world, including Brazil, China, France, Germany, and Norway.
Can you tell us about yourself and your role at the University of York?
Cavalcanti: The mandate of the Chair in Emerging Technologies was to create a centre of excellence, so I created and am now the Director of RoboStar, a centre of excellence in software engineering for robotics.
We all have a shared vision: to develop an end-to-end approach to designing and verifying mobile and autonomous robots. In terms of your interest in lab automation, RoboStar York is part of the community here at the University of York that is centred around our Institute for Safe Autonomy, and we have a CDT, a Centre for Doctoral Training, in the area of autonomous robotic systems called ALBERT - Autonomous Robotic Systems for Laboratory Experiments.
I am leading the panel, together with Professor Ian Fairlamb, the co-director of ALBERT from our chemistry department; the panel focuses on many things, such as safety and lab automation.
What does end-to-end mean in the context of robot design?
Cavalcanti: It's an end-to-end process for designing, developing, and verifying robots. What I mean by end-to-end is best explained by contrast with current practice. This is a generalisation, and I'm always cautious about that, but the state of practice is that colleagues who want to develop a new robot start by writing code. Programming, however, should be the last stage of the software engineering process.
The RoboStar vision is model-centric: we start developing a system by writing models. RoboStar provides support to describe and validate these models, just as a civil engineer would write a CAD model and then run checks on it. We would do the same thing. We write the models and then check: is this system going to deadlock? Is this system going to have any undesirable properties? We can do that even before the system is built, just as civil engineers do, but we don't stop there. That's only the start of the design.
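To make the idea of checking a model before anything is built concrete, here is a toy sketch in Python. It is purely illustrative, not RoboStar's technology, which works on far richer models: it exhaustively explores the state space of a hypothetical two-controller model in which the controllers acquire two shared locks in opposite orders, and reports a reachable state in which neither can act.

```python
# Toy model: two controllers acquire two shared locks in opposite orders.
# Phases per controller: 0 = holds nothing, 1 = holds first lock,
# 2 = holds both, 3 = finished (locks released).
ORDERS = (("A", "B"), ("B", "A"))  # hypothetical acquisition order per controller

def held(phase, order):
    """Locks a controller holds at a given phase."""
    return set(order[:phase]) if phase <= 2 else set()

def moves(state):
    """Successor states: take the next lock if free, or release both when done."""
    taken = set().union(*(held(p, o) for p, o in zip(state, ORDERS)))
    for i, (phase, order) in enumerate(zip(state, ORDERS)):
        if phase < 2 and order[phase] not in taken:
            yield state[:i] + (phase + 1,) + state[i + 1:]
        elif phase == 2:  # holds both locks: finish and release them
            yield state[:i] + (3,) + state[i + 1:]

def find_deadlock(initial=(0, 0)):
    """Explore every reachable state; return one where nobody can act."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        succs = list(moves(state))
        if not succs and state != (3, 3):  # stuck, but not both finished
            return state
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return None

print(find_deadlock())  # (1, 1): each controller holds one lock, waiting on the other
```

The check runs on the model alone; no robot software exists yet, which is exactly the point of doing the analysis at the modelling stage.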
From those models, we want to derive value. Our vision is a world where writing code is a thing of the past. You write the models; you get the code for simulation, and you get the same code for deployment. That's important because, from an engineering point of view, when you use one piece of code or one model for simulation, the code you then deploy on the robot is a different piece of code. That opens the possibility of introducing errors, and you want to eliminate that. We also want to generate tests automatically. End-to-end means that we cover everything from the modelling stage to the deployment stage.
What are the main challenges to developing autonomous lab robots?
Cavalcanti: We are tackling this challenge on a very broad spectrum. It's a multidisciplinary problem with challenges in all areas. The first is to understand what is adequate for the humans currently in the lab: what do they need, what do they want, and what would be useful to them? These three things are all different.
We start by thinking about the human aspect and how humans can effectively interact with robots. Regarding the development of the robots themselves, there are challenges associated with both the software and the hardware. On the hardware side, some exciting pieces of work are taking place in ALBERT related to giving robots the ability to manipulate very fragile equipment and sensitive materials. This must be done in a way that ensures they will not cause a problem by spilling materials or interfering with an experiment, which may have serious consequences when dangerous chemicals are involved.
Software engineering is key to the design of trustworthy mobile and autonomous robots. RoboStar is developing technology to put software engineering on the same standing as traditional engineering disciplines, where models, supported by tools and mathematical foundations, drive the whole production process: simulation, testing, deployment, and provision of evidence of quality.
How do you control all this? That comes down to software. How do you develop the applications? The challenge is not only to write the code, but also to provide the evidence that we can rely on that code. There will also be sociological, business, and ethical issues. Robotics is a multidisciplinary field, and introducing autonomous robots into society in general is a multidisciplinary issue.
What are the techniques for verifying robots, particularly in the laboratory?
Cavalcanti: We use three main techniques to verify the software used in robots, and they serve different purposes. In general terms, you can use testing, which can be done in two ways. The first is testing in simulation, where you have complete controllability and observability of the system: because everything is in software, you can interfere here and there to capture the information you need. That's verification by simulation. The second is verification by testing the deployed robot. In that case, you want the tests run in simulation to be converted into tests you run on the deployed system.
There is another very exciting way of doing the verification. From these models, we can automatically generate mathematical descriptions. With those mathematical descriptions, you can generate the tests, you can generate the code, and you can also carry out mathematical proof. It's abstract, but I always go back to the CAD models so people can understand what's going on.
An engineer can prove that the dimensions of the pipes are adequate to support the pressure and flow of water required at a particular tap. We have the same opportunity. In our models, we can describe how the software will be organised and orchestrated and how it will communicate, for example. We can prove, mathematically, that there is no deadlock. No deadlock means the robot will not freeze like our laptops sometimes do. A freeze may not be a problem in itself; safety is a domain-specific challenge. If a robot going down steps in a hospital freezes, it may fall and hurt someone. We use these proofs to guarantee expected key behaviours. That's the exciting part of RoboStar.
Is it possible to move to a process that only uses these mathematical proofs as verification?
Cavalcanti: No, I don't believe so, especially in the domain of robotics and physical systems in general. The objective is not to remove the need for tests, but to carry out proofs that can detect problems earlier and give you stronger guarantees. Everything needs to be consistent.
Robot control software runs in a robot, which is a machine that will be subject to faults and wear and tear. A robot will be moving in a lab, for example. That's our vision: to have labs designed for humans and to have the robots working alongside humans.
That is a highly complex environment. The robot cannot know when a human will pass in front of it, or when humans will move the reagents or instruments the robot needs for the experiment. It is very complex, and we must use our entire arsenal to solve those problems. So it's not about removing the need for tests; it's about using the best technique for each of the problems at hand.
Does AI impact the work you are doing on the development of robotic systems?
Cavalcanti: Having AI components introduces new challenges. These components are black boxes and, in terms of the work we want to do, the main challenge is determining their specifications. For safety, we take the view that it is not about the AI component: safety is not a property of any single component of a system. It's a system-level property.
So our techniques cater for the fact that you may have AI techniques implementing some of your components. We can model those components, capture the mathematics characterising their behaviour, and use that at the system level to ensure properties.
I'm not saying it's a solved problem, not at all, because you need to work at the component level to make those system-level claims, and the technology for those components is very much in development at the moment. It is getting better every day, but it is a challenge. For us, though, the challenge is at the level of systems, and we are tackling that in RoboStar, currently in a European project called RoboSapiens.