Within many of today’s engineering design fields, a prevailing trend is the use of computer-based modelling and simulation throughout the development process. The benefits of doing so are clear: virtual builds can be created and tested far more quickly and cost-effectively than their physical counterparts, and designers are free to examine a wide range of optimisation possibilities and design variants that would not otherwise be possible. This boost in efficiency can mean a reduction in time to market within what remains a growing area.
In a recent study commissioned by Greenpeace and WWF, Cambridge Econometrics found that if large-scale investment were made in offshore wind rather than gas, the UK’s GDP would increase by 0.8 per cent by 2030 and more than 100,000 additional jobs would be created by 2025. The combination of environmental concerns and figures like these is increasing the pressure on design engineers to develop and refine systems that can reduce dependency on the more mature and conventional methods of power generation. Speed is of the essence here, and this is where simulation comes to the fore.
Coming of age
One advantage that this field has, in the words of Lance Hill, energy lead, Simulia at Dassault Systèmes, is that ‘it began before simulation was readily available and so allows people to break out of the paradigms of the past and the rigidity of how things have always been done. Alternative energy is really the area where simulation has come of age – it has become the driving force behind innovation.’ The difficulty, observes Hill, lies in the fact that these complex, multi-physics systems have competing objectives. For example, within a wind turbine, the power is roughly proportional to the square of the blade length. When extending the blade length, however, the weight increases, causing additional stresses and fatigue. The way the blades interact with the wind is also a factor, as are all the myriad mechanical interactions that convert the rotational energy and pass it into the electrical generator. The way in which the man-made structure interacts with the foundation and surrounding environment can also affect the efficiency of the turbines. The challenge, says Hill, is that each of these competing elements needs to be analysed in concert with the others. The solution is modelling and simulation.
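To put rough numbers on that trade-off, the short Python sketch below contrasts the square-law growth in captured power with the roughly cubic growth in blade mass implied by the square-cube law. The coefficients are illustrative placeholders rather than real design constants.

```python
import math

def turbine_power(blade_length_m, wind_speed_ms, cp=0.45, air_density=1.225):
    """Captured power P = 0.5 * rho * A * v**3 * Cp, with swept area
    A = pi * L**2; hence power scales with the square of blade length."""
    swept_area = math.pi * blade_length_m ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

def blade_mass(blade_length_m, k=2.0):
    """Square-cube law: blade mass, and with it gravity loading and
    fatigue stress, grows roughly with the cube of the length.
    k is an illustrative coefficient, not a real design constant."""
    return k * blade_length_m ** 3

for L in (40, 50, 60):
    print(f"L = {L} m: {turbine_power(L, 10.0) / 1e6:.2f} MW, "
          f"mass x{blade_mass(L) / blade_mass(40):.2f} relative to 40 m")
```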
Dassault Systèmes has a number of applications within this market, provided within a unified ‘Sustainable Wind Turbines Experience’. For example, a CAD (computer-aided design) package like the company’s Catia (computer-aided three-dimensional interactive application) application can be used to design the shape of the blade, perform the initial composite layup and assess manufacturability. The design can then be analysed for functional reliability with Simulia applications for structural and multiphysics simulation. Delmia applications can be used for planning the manufacturing process and for quality control. Each step in the process can be managed with Enovia, Dassault Systèmes’ collaboration application. Hill comments that such a complete experience allows people to integrate, explore and automate all these different processes – providing significant benefits for engineering and manufacturing teams who need to put all those pieces together in a time- and cost-constrained environment.
Combining disciplines
Paul Goossens, vice president, Applications Engineering at Maplesoft, agrees that many of the issues being faced by engineering design teams in this field come down to the complexity of the systems they deal with and the need for a multi-domain approach. ‘In industry, there is no longer a person who is just a mechanical engineer or just an electrical designer. Today, professionals must be able to combine all these disciplines in order to deliver the optimal design.’
He explains that whether it’s the fundamental physics of what is happening within the photovoltaic cells of a solar energy system, or the structural and dynamic behaviour of wind turbine blades, one of the largest engineering challenges is the nondeterministic nature of these technologies. Each group within the design process has its own specialist tools, but very few of these tools have the ability to incorporate various domains and get a true handle on what the overall behaviour of the system will be. The software solutions available from Maplesoft bring together all of these various domains to enable users to get a broad system-level view, facilitating the optimisation process.
‘The question is,’ says Goossens, ‘how can we take a model that has a very high degree of fidelity, but is computationally intensive, and then integrate that into an overall system-level model? Many of us are now facing this challenge as design activities begin to merge.’ He adds that one area of research that the company is involved in revolves around taking a highly detailed finite element model and implementing it in an appropriate way within a simulation tool. Work is being done in model reduction, he says, where design engineers can take something highly detailed and lose some of that detail, but still capture the overall dynamic behaviour.
Maintaining a rigorous connection between the original detailed model and the final model which provides the overall required behaviour isn’t easy. ‘This is the big challenge,’ adds Goossens, ‘and we’re using tools like our symbolic computation tools to further various studies in this area. Of course, another challenge is that once all this detailed work has been done, does that mean it somehow has to be redone in the simulation environment in order for it to be implemented at a system level, or is there a good step-by-step way of being able to reduce the model down to its required detail?’ Goossens comments that this is a continuing area of investigation.
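One simple, textbook flavour of this kind of reduction is modal truncation: solve for the vibration modes of the detailed model, keep only the handful of low-frequency modes that dominate the response, and project the full system onto them. The Python sketch below applies it to a toy mass-spring chain; it is a generic illustration, not a description of Maplesoft’s actual methods.

```python
import numpy as np

# Toy 'detailed' model: a chain of n unit masses joined by unit springs,
# the kind of sparse linear system a finite element model produces.
n = 200
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # normalised stiffness

# With unit masses the modal problem is the eigenproblem K*phi = w^2*phi.
w2, phi = np.linalg.eigh(K)

# Keep only the r lowest-frequency modes and project the system onto them.
r = 5
phi_r = phi[:, :r]               # retained mode shapes
K_r = phi_r.T @ K @ phi_r        # r x r reduced stiffness, down from n x n

print(f"full model: {n} DOF, reduced model: {r} DOF")
print("retained natural frequencies:", np.sqrt(w2[:r]))
```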
Generating design
Focusing on the electromechanical characteristics of generators is Cobham Technical Services, whose finite element-based numerical software, Opera, is widely used in the power generation industry. Kevin Ward, operations manager for Opera, explains that the program simulates a wind turbine generator’s electric drive and mechanical load to a good degree of accuracy – in fact, he states that it can achieve the same accuracy as experimentation, negating the need for prototype builds. ‘Design optimisation is a big issue in this industry and Opera enables design engineers to set up parameterised models and tell the program what dimensions or materials to vary between set limits. They then enter the performance objective they want to achieve and the software will automatically do it,’ he says, adding that this type of powerful functionality enhances a user’s ability to explore and optimise designs.
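The pattern Ward outlines, a parameterised model plus bounded dimensions plus a stated objective, can be sketched generically in a few lines of Python. Here evaluate_design is a hypothetical surrogate standing in for a call out to a field solver; it is not an interface to Opera itself.

```python
from scipy.optimize import minimize

def evaluate_design(x):
    """Toy performance model: torque improves with magnet depth and a
    smaller air gap, while a penalty discourages heavy, deep magnets.
    All coefficients are invented for illustration."""
    magnet_depth, air_gap = x
    torque = 100 * magnet_depth / air_gap
    mass_penalty = 40 * magnet_depth ** 2
    return -(torque - mass_penalty)      # minimise the negative objective

bounds = [(0.005, 0.030),   # magnet depth (m): limits set by the designer
          (0.001, 0.010)]   # air gap (m)

result = minimize(evaluate_design, x0=[0.010, 0.005], bounds=bounds)
print("best parameters:", result.x, "objective:", -result.fun)
```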
One aspect that stands out in the software is its set of methods for calculating losses in generators. One of these uses manufacturers’ loss curves, where losses are predicted as a function of harmonic frequency and magnetic flux density throughout the device. Opera uses advanced mosaic meshing to discretise the 3D geometry: a combination of tetrahedral, prism and hexahedral elements that can achieve fine discretisation of conductor surface layers. This is important for predicting eddy current losses efficiently and accurately.
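A hedged illustration of this style of loss calculation: fit the manufacturer’s loss curve to a Steinmetz-type law and sum it over the harmonic content of the field. Both the functional form and the coefficients below are generic assumptions, not Opera’s actual loss model.

```python
import numpy as np

# Steinmetz-type fit: specific loss P = k * f**alpha * B**beta (W/kg).
k, alpha, beta = 0.02, 1.4, 2.1   # placeholder lamination coefficients

def core_loss(freqs_hz, flux_densities_t):
    """Total specific core loss (W/kg), summed over harmonic components."""
    f = np.asarray(freqs_hz, dtype=float)
    B = np.asarray(flux_densities_t, dtype=float)
    return float(np.sum(k * f ** alpha * B ** beta))

# Fundamental at 50 Hz plus 5th and 7th harmonic content from the drive.
print(f"{core_loss([50, 250, 350], [1.5, 0.12, 0.08]):.2f} W/kg")
```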
One example of the use of the 3D version of the Opera electromagnetic simulator from Cobham’s Vector Fields Software range is a project undertaken by Colorado-based Boulder Wind Power (BWP) to accelerate the development of a new permanent magnet generator. Based on a permanent magnet, direct drive design, BWP’s generator uses a unique axial flux, air-core architecture that increases efficiency and reliability and, according to the company, will ultimately reduce the cost of wind-generated electricity to compete at parity with fossil fuels.
Like all permanent magnet direct drive wind turbines, the generator rotor of BWP’s 3MW design turns at around 13 revolutions per minute, necessitating a high pole count. Opera’s advanced solvers allow this high degree of periodicity to be leveraged, so that the numerical model can be a fraction of the size of the complete generator – significantly reducing simulation times. This is particularly important for BWP because it makes exclusive use of the 3D version of Opera, which is necessarily more computationally demanding than the 2D version. Many wind turbine designers employ two-dimensional simulation for the main components in a generator, which is adequate when three-dimensional features such as the end turns on windings do not significantly affect performance, and reserve three-dimensional simulation for the cases where such features matter. BWP’s designers, however, must use full 3D simulation at every stage in order to model the generator’s novel architecture as accurately as possible.
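The saving from exploiting that periodicity is easy to quantify. With rotationally periodic (or anti-periodic) boundary conditions a single pole can represent the whole machine; the pole count below is a placeholder for illustration, since BWP’s actual figure is not given here.

```python
poles = 160                      # illustrative high pole count
sector_deg = 360 / poles         # angular span of one periodic sector
print(f"model a {sector_deg:.2f} degree sector: 1/{poles} of the geometry")
```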
In control
Having begun in virtual instrumentation, National Instruments is another company that now offers software for modelling, simulation and control. Its Labview environment, together with the Labview Control Design and Simulation Module, can simulate a full wind turbine system, including the turbine itself, the mechanical drive train, generator, power grid and controller. The module provides a numerical simulation environment that enables design engineers to analyse the interactions between hybrid mechanical-electrical systems. Users can also improve the quality of existing models and explore other control strategies by simulating deep-bar induction generators and more complex drive-train models.
With a variety of vendors offering software solutions in this market, one final trend coming to the fore is co-simulation. ‘Numerous tools of varying detail, from a range of vendors, are now striving to communicate with each other and figure out how to exchange information between one tool and another. As a result, co-simulation technologies are beginning to emerge and, while nothing has yet established itself as a standard, one project we’re looking at very closely is FMI (functional mock-up interface),’ comments Maplesoft’s Paul Goossens. ‘It’s being driven by the Modelica Association and the idea is that, as long as each of the tools complies with that standard in terms of data exchange, we will start to see a lot more stand-alone products coming together through that platform.’ This, he says, will enhance the role of modelling and simulation in the development of alternative energy technologies.
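As a concrete taste of what FMI-based coupling looks like from the user’s side, the sketch below uses FMPy, an open-source Python implementation of the standard. FMPy is our choice for illustration rather than a tool named above, and both the FMU file and the variable name are hypothetical.

```python
from fmpy import simulate_fmu

# 'drivetrain.fmu' is a hypothetical model exported from any
# FMI-compliant tool; 'shaft_speed' is a hypothetical output variable.
result = simulate_fmu(
    'drivetrain.fmu',
    start_time=0.0,
    stop_time=10.0,
    output=['shaft_speed'],
)
print(result['shaft_speed'][-1])   # final value of the recorded output
```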
Currently in the second year of his PhD, Stuart Walker, a tidal power researcher at the University of Sheffield, is focusing on optimising the layout of farms of tidal turbines. He explains his project and how simulation tools fit in.
My current work revolves around three-blade tidal turbines that are mounted on the sea bed and installed in arrays, and how to minimise the wake from the first row, as this has a big impact on the output of all the turbines behind. At this stage I am using a tank for experimental modelling; later in the project I will move on to CFD and computational modelling. My aim is to achieve an optimised layout for a small array of turbines, which could be used as a tool for developers looking at how best to arrange their turbines. My focus is on the support structure, rather than the blades, because until now these supports have all been simple columns. By investigating different shapes and ways of attaching the supports to the sea bed, I hope to arrive at optimised designs that will be location-dependent.
It’s interesting that people often assume that everything is now done on a computer, and when I explain that my testing work is being done in a tank, they immediately question why I don’t have a computer-based way of doing it. I do find it telling that large organisations with extensive budgets, such as Formula One teams, still use physical prototypes. Computing is taking over more and more of what used to be done physically, but I struggle to see how it can remove the need for physical models, as those are what validate all the computational models. These computational models will only ever be as good as the inputs you give them, and those inputs will always be derived from physical models. But more than that, I really enjoy the physical modelling because it means I can get a very detailed picture of the flow behind a single support structure. This is quite well understood in broad terms, but the turbulent flows are not understood at all. Once I know that, it can become an input into further computational models, which I will be using later in my project.
The method I will be using is particle image velocimetry (PIV), a visualisation technique that uses cameras and lasers. The water within the tank is seeded with particles that reflect the laser light at certain wavelengths, and the cameras take very rapid images of the flow. Hundreds of images are captured in a short amount of time, and from that huge volume of data we can track how each individual particle has moved. That’s when the software really comes into play; it enables us to correlate and make sense of all that data. The resulting images look quite similar to CFD visualisations.
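The correlation step is, at its core, a windowed cross-correlation between successive frames: the peak of the correlation map gives the average particle displacement in each interrogation window. The Python sketch below shows that core idea in its simplest form; it is a generic illustration using numpy, not the package used in Walker’s project.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the (row, col) particle displacement between two
    interrogation windows via FFT-based cross-correlation. Real PIV
    codes add sub-pixel peak fitting, window overlap and validation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # The correlation peak encodes the shift of frame B relative to frame A.
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices back to signed displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, a.shape))

# Synthetic check: shift a random 'particle image' by a known amount.
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_a, frame_b))   # -> (3, -2)
```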
The downside is that we have recently been having issues with processing that volume of data, as both the software and the hardware have trouble keeping up. Beyond that, the software has been designed for general use rather than for our specific application, which means there will always be features that either aren’t relevant to my research, or are highly relevant but don’t allow enough adjustment. Another issue is that with so many options, we could change values and make adjustments forever. On the one hand this is great for research, as we don’t want rigid software; we want it to be as open as possible so that it can be modified as needed. But there are times when I wish there were more of an explanation or description of what these options actually do, rather than having to learn the software by trial and error.
Research can’t be fully effective if you don’t fully understand the software, so it would always be nice to have the person who wrote it sitting right next to me. There really is a temptation to treat the computer as a black box and assume that the person who wrote the software knew the fields it would be applied to and understood what specific researchers would need it to do. Of course, that isn’t the reality and, speaking personally, if I didn’t learn the software and ensure I fully understood each input, I would always worry that I couldn’t have full confidence in my results.