Drilling for oil is a tough business, and it’s only going to get tougher as global reserves dwindle. Only a fraction of a per cent of the planet’s crust contains oil and, for any given reserve, only 10 to 60 per cent will be recoverable. The corporate giants producing oil must open new wells to replace those that run dry, so promising areas of land and sea are constantly being probed to locate new supplies.
Wherever the process occurs, be it on land or at sea, prospecting for oil and gas reserves begins with a seismic survey of a promising area. Low-frequency sound waves are produced by an underwater spark or a compressed-air gun and, as they travel through the Earth’s crust, they reflect off areas in which the properties of the rocks change. Seismic surveyors use geophones or hydrophones to listen for these echoes, often collecting petabytes of raw acoustic data from a single survey. Before geophysical analysts can pinpoint the locations of potential oil reserves, this data must be extensively processed into something they can visualise.
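To make that picture concrete, the sketch below builds a toy one-dimensional seismic trace in Python: a reflection appears wherever the acoustic impedance of the rock changes, and the recorded trace is the source pulse convolved with those reflection coefficients. The layer values, wavelet frequency and sample interval are invented for illustration; a real survey records millions of such traces and processes them very differently.

```python
# Toy 1D convolutional model of a seismic trace: echoes appear where the
# acoustic impedance (density * velocity) changes between rock layers.
# All numbers are illustrative only.
import numpy as np

def ricker(freq_hz, dt_s, length_s=0.128):
    """Ricker wavelet, a common stand-in for the source pulse."""
    t = np.arange(-length_s / 2, length_s / 2, dt_s)
    a = (np.pi * freq_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.002                                   # 2 ms sample interval
impedance = np.concatenate([                 # three flat rock layers
    np.full(200, 2500.0 * 2000.0),
    np.full(200, 2700.0 * 3000.0),
    np.full(200, 2400.0 * 2200.0),
])
# Reflection coefficient at each sample (non-zero only at layer boundaries)
refl = np.zeros_like(impedance)
refl[1:] = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

# The recorded trace: reflectivity convolved with the source wavelet
trace = np.convolve(refl, ricker(25.0, dt), mode="same")
print("strongest reflections near samples:", np.sort(np.argsort(np.abs(trace))[-2:]))
```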
Laurent Billy, CEO of Visualisation Sciences Group (VSG), explains the steps involved in producing a useful visualisation: ‘First the analysts start with the raw data from sensors – this needs to be interpreted in order to produce what is called the post-stack data – a seismic volume that can be analysed further. From this seismic volume, they identify horizons and geobodies (faults and cracks in the earth), and from here they can identify reservoirs in the areas where rocks are permeable enough to contain oil and gas.’ Compute-intensive reverse time migration algorithms are the technique of choice for turning seismic data into a seismic volume (a 3D map), and further calculations on parameters within that volume yield the seismic attributes of the rock at various points in space. ‘The horizons in particular [areas where seismic properties change] are very important to visualise, and so they are modelled with a very large number of polygons,’ explains Billy. ‘This 3D data is therefore very heavy, and difficult to handle in the memory, and it can be very slow to display.’
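The computational heart of reverse time migration is the repeated finite-difference time-stepping of a wavefield through a velocity model. The Python sketch below shows a minimal 2D acoustic time step of that kind, on an invented grid with a constant velocity; production RTM adds absorbing boundaries, higher-order stencils, source and receiver wavefields and an imaging condition, and runs at a vastly larger scale.

```python
# Minimal 2D acoustic finite-difference time step: the kind of stencil that
# reverse time migration applies millions of times over a velocity model.
# Toy grid and constant velocity, chosen only to satisfy the stability limit.
import numpy as np

nx = nz = 200
dx, dt = 10.0, 0.001                 # grid spacing (m), time step (s)
vel = np.full((nz, nx), 2000.0)      # constant 2000 m/s velocity model
prev = np.zeros((nz, nx))
curr = np.zeros((nz, nx))
curr[nz // 2, nx // 2] = 1.0         # impulsive source in the middle

for _ in range(300):
    # Five-point Laplacian of the current wavefield
    lap = np.zeros_like(curr)
    lap[1:-1, 1:-1] = (curr[2:, 1:-1] + curr[:-2, 1:-1] +
                       curr[1:-1, 2:] + curr[1:-1, :-2] -
                       4.0 * curr[1:-1, 1:-1]) / dx ** 2
    # Second-order time update of the acoustic wave equation
    nxt = 2.0 * curr - prev + (vel * dt) ** 2 * lap
    prev, curr = curr, nxt

print("wavefield energy after 300 steps:", float((curr ** 2).sum()))
```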
VSG supplies computing components to facilitate the visualisation of these heavy datasets. ‘Most of our customers in the oil and gas industry are software vendors currently creating dedicated software for specialised tasks related to the exploration and production fields,’ says Billy. ‘Instead of creating and maintaining all of the visualisation capabilities of their software by themselves, our customers use our library components in order to deliver the best visualisation capabilities within their software. The end user – the geophysicist or whatever – is using our product indirectly, via the vendor who has embedded our technology within his own product.’
For the end user, powerful graphics cards (GPUs) are a prerequisite of this visualisation process, and VSG has worked to allow these GPUs to be harnessed for use in other tasks that an analyst may wish to perform on his data set. ‘Our toolkit is designed to be able to handle very large models, and the management of these large models can be done on the visualisation side [in the GPU] as well as the computation side [in a cluster, for example]. This means that we have created an architecture that enables a software developer to perform very complex calculations on the GPU while he is displaying the data. The technology is more than just a viewer – it handles very large models that do not fit in the memory, and it allows developers to use this facility to run code on the visualisation models. This is very useful for calculating seismic attributes on the fly – information contained within the seismic data. We are not supplying high-performance computing tools, but rather we are supplying high-performance visualisation tools that take advantage of the computational capabilities of GPUs, and which also provide developers with a framework and toolset to develop HPC code linked to visualisation.’
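The idea of handling models that do not fit in memory, and of computing attributes over them piecewise, can be illustrated with a simple out-of-core pattern: the volume is streamed from disk in tiles small enough to hold in RAM, and an attribute (here a sliding-window RMS amplitude) is computed tile by tile. The file layout, function names and tile size below are invented for illustration and are not VSG’s API; a GPU toolkit would run the same kernel on the device while the data is being displayed.

```python
# Sketch of out-of-core attribute computation over a large seismic volume.
# The volume is assumed to be a raw float32 array on disk; names and sizes
# are hypothetical.
import numpy as np

def windowed_rms(slab, win=11):
    """Sliding-window RMS amplitude along the depth (first) axis of a slab."""
    sq = slab.astype(np.float64) ** 2
    csum = np.cumsum(sq, axis=0)
    csum = np.concatenate([np.zeros((1,) + sq.shape[1:]), csum], axis=0)
    out = np.empty_like(sq)
    nz = sq.shape[0]
    for z in range(nz):
        lo, hi = max(0, z - win // 2), min(nz, z + win // 2 + 1)
        out[z] = (csum[hi] - csum[lo]) / (hi - lo)
    return np.sqrt(out).astype(np.float32)

def rms_attribute(path, shape, tile=128, dtype=np.float32):
    """Process a volume stored on disk tile by tile, never loading it whole."""
    nz, ny, nx = shape
    vol = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    out = np.lib.format.open_memmap(path + ".rms.npy", mode="w+",
                                    dtype=np.float32, shape=shape)
    for y in range(0, ny, tile):
        for x in range(0, nx, tile):
            slab = np.asarray(vol[:, y:y + tile, x:x + tile])
            # A GPU-aware toolkit would push this kernel to the device.
            out[:, y:y + tile, x:x + tile] = windowed_rms(slab)
    return out
```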
Close ties
In practice, VSG works very closely with GPU developer Nvidia, as Billy explains: ‘We have a partnership with Nvidia with regards to their Cuda technology, but we have built what we call an abstraction layer into our product. The developer does not specify what he wants to use Cuda for; rather the library chooses the best way to perform the calculation, be that in real time or using more GPU or more CPU,’ he says. VSG is able to ensure that its products make full use of even the latest incarnations of Nvidia’s hardware and Cuda platform because it is a beta tester for the devices: ‘We get the latest generation before they hit the market, so that we can test them and report any suggestions and wishes we have back to Nvidia,’ says Billy.
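The abstraction-layer pattern Billy describes, in which the developer asks for a result and the library picks the best available backend, can be sketched generically. The code below is not VSG’s implementation; it simply computes a trace envelope on the GPU via CuPy when that library is installed, and silently falls back to NumPy on the CPU otherwise.

```python
# Generic backend-dispatch sketch: run on the GPU if CuPy is available,
# otherwise fall back to the CPU. Illustrative only.
import numpy as np

try:
    import cupy as cp          # optional GPU backend
    _GPU = True
except ImportError:
    cp = None
    _GPU = False

def _analytic_filter(xp, n):
    """Frequency-domain filter that keeps only non-negative frequencies."""
    h = xp.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return h

def compute_envelope(trace):
    """Amplitude envelope of a trace, on whichever device is available."""
    xp = cp if _GPU else np                      # array module to use
    data = xp.asarray(trace)
    analytic = xp.fft.ifft(xp.fft.fft(data) * _analytic_filter(xp, data.size))
    env = xp.abs(analytic)
    return cp.asnumpy(env) if _GPU else env      # always hand back a NumPy array

print(compute_envelope(np.sin(np.linspace(0.0, 30.0, 512))).shape)
```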
As well as having been an early adopter of the Cuda GPU programming environment, VSG is an early user of Nvidia’s CompleX application acceleration engine – a scalability tool that allows a large 3D volume to be split into a number of smaller parts distributed across however many GPU boards a user has attached to his or her system. CompleX is linked to Nvidia’s Quadro Plex products, which are high-performance, visualisation-oriented workstations containing two, four, or eight GPU cards. ‘Thanks to the CompleX technology, the display can be four, eight, or 16 times faster,’ explains Billy. ‘It allows the calculations on a very large volume to be distributed [across several GPUs], essentially in real time. We are able to increase the frame-rate and the display speed by increasing the number of GPUs.’
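The scalability idea is essentially a slab decomposition: the volume is split along one axis into one piece per GPU, each device works only on its slab, and the partial results are composited. The sketch below illustrates only the data split, using a per-slab maximum-intensity projection as the stand-in for each GPU’s work; CompleX itself handles the real scheduling, transfers and compositing on the hardware.

```python
# Slab decomposition of a large volume across several GPUs (simulated here).
import numpy as np

def split_across_gpus(volume, n_gpus):
    """Return (gpu_id, slab) pairs that together cover the whole volume."""
    return list(enumerate(np.array_split(volume, n_gpus, axis=0)))

volume = np.random.rand(256, 256, 256).astype(np.float32)

partials = []
for gpu_id, slab in split_across_gpus(volume, n_gpus=4):
    # On real hardware this slab would be uploaded to GPU `gpu_id` and
    # rendered or processed there; here each "GPU" computes a maximum-
    # intensity projection of its slab.
    partials.append(slab.max(axis=0))

composite = np.maximum.reduce(partials)   # combine the per-GPU results
print(composite.shape)                    # (256, 256)
```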
Beyond display
A GPU (or several) is an essential piece of equipment for the visualisation side of the process, but the GPU may not always be a good choice when it comes to seismic processing. SRC Computing, based in Colorado Springs, USA, develops computing systems based on reconfigurable processors – accelerator boards consisting of arrays of FPGAs. ‘There’s been something of a love affair between the oil and gas industry and GPUs, and this is because for a number of years the holy grail of the industry has been visualising large chunks of the data – that need was delivered by graphics cards,’ says Mark Tellez, director of business development at SRC. Tellez believes that the move from visualisation applications to GPU-led processing was something of a natural progression, as many client companies had already invested in the necessary hardware. ‘In a lot of ways, GPUs were just a product looking for a problem that they could solve, but [GPU suppliers] were not building a compute solution, they were building a graphics card designed to solve a graphics problem,’ he says.
SRC supplies customers in the oil and gas industry with a way of speeding up the demanding reverse time migration algorithms used in processing seismic data. David Caliga, director of software applications at SRC, describes the company’s approach: ‘We provide what a lot of people might call accelerators, capable of speeding up compute-intensive portions of code. What we provide is a complete system, and not just a GPU- or FPGA-based accelerator card.’ The company’s products are based on highly parallel and reconfigurable FPGA accelerators, which it refers to as MAPs. Caliga states that the company optimises its systems to ensure that seismic data can be very rapidly moved into the MAPs. Several MAPs may be incorporated into a single system, working alongside the standard CPU microprocessor. ‘We treat the MAP processor as a peer to the microprocessor. The intent is to divide the compute intensive application across the two compute devices to get the most out of both of them. In one image processing application, for example, a system with five MAPs and one microprocessor replaced a cluster with 96 dual-core nodes.’
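The ‘peer’ arrangement Caliga describes amounts to partitioning the dataset so that the accelerator and the microprocessor work on their shares concurrently. The sketch below illustrates only that partitioning, with two ordinary worker threads standing in for the two devices; the split ratio and kernel functions are invented and do not reflect SRC’s actual programming environment.

```python
# Sketch of dividing a compute-intensive task between a microprocessor and
# an accelerator treated as peers. The "MAP" here is just a second worker.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def cpu_kernel(chunk):
    return np.sqrt(chunk)          # stand-in for the CPU's share of the work

def map_kernel(chunk):
    return np.sqrt(chunk)          # stand-in for the work offloaded to a MAP

def process(data, accel_fraction=0.8):
    """Give `accel_fraction` of the samples to the accelerator, the rest to the CPU."""
    split = int(len(data) * (1.0 - accel_fraction))
    with ThreadPoolExecutor(max_workers=2) as pool:
        cpu_part = pool.submit(cpu_kernel, data[:split])
        map_part = pool.submit(map_kernel, data[split:])
        return np.concatenate([cpu_part.result(), map_part.result()])

print(process(np.arange(1_000_000, dtype=np.float64)).shape)
```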
The performance of MAP-based systems is described by the company as comparable to that of GPU-based systems, while requiring only around a quarter of the power. ‘We can provide superior performance with a fraction of the power dissipation,’ states Caliga. ‘Fully loaded, the MAP consumes approximately 50W, compared to a GPU, which would be around 200W.’ A reduction in power consumption and heat dissipation also allows a reduction in the footprint of the system: ‘The potential reduction is from something that would consume multiple racks in a data centre down to something that would fill a small desk-side enclosure, or maybe a standard rack.’
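Using the per-device figures quoted, the power arithmetic is simple to multiply out; the snippet below does so for a hypothetical eight-device installation. The device count is invented, and the assumption of comparable per-device throughput is the company’s claim rather than an independent measurement.

```python
# Back-of-the-envelope power comparison using the figures quoted above.
map_watts, gpu_watts = 50.0, 200.0
devices = 8                                            # hypothetical count
print(f"MAP system:  {devices * map_watts:.0f} W")     # 400 W
print(f"GPU system: {devices * gpu_watts:.0f} W")      # 1600 W
print(f"Power ratio: {gpu_watts / map_watts:.0f}x")    # 4x
```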
Tellez explains that even in a multi-billion dollar industry, power consumption is of key importance: ‘I know of a number of [seismic data processing] companies that are constrained by the amount of power they can pull from the grid at the location of their data centre. Additionally, even if they can pull enough power for their server farms, they’re typically working in a building that was never designed to have that much power generated on site (in terms of heat). They therefore have trouble in terms of cooling.’ He adds that many companies are now looking towards moving data processing operations nearer to the drilling location, on a ship, or even on the drilling platform itself, where power and space are even more constrained. ‘Typically, they put the data onto a number of hard drives and fly it by helicopter back to the mainland, where it is loaded into a computer. They compute whatever they are working on, and then they often have to send the results back in the same manner! If they could do all of the processing on site, they’d be saving a lot of time and money, not only in terms of just moving the data, but also in terms of avoiding idle time on the drilling platform while they wait for results.’ SRC believes therefore that its low-power processors offer the oil and gas industry an attractive alternative to both conventional clusters and GPU-based computation.
Tellez states that SRC has gone after customers in the seismic data processing industry because of the importance of rapid processing to their business: ‘They are the low-hanging fruit at the moment, because the faster they can process a line of code, the faster they can either begin drilling or charge the drilling company for the information.’
Given the advantages of FPGA-based processing over cluster or GPU-based alternatives, why haven’t more oil and gas companies adopted the company’s solutions? Tellez believes that FPGAs (or reconfigurable processors) are seen as complicated and difficult to program. ‘The biggest challenge we have is to get people to realise what they can do with the current technology, and to realise that it’s not as scary as they may have heard.’ In a move analogous to Nvidia’s introduction of the Cuda programming environment for GPUs, SRC offers its CARTE platform for programming FPGAs. CARTE contains a set of standardised functions specific to the oil and gas industry, and the company hopes that this will enable more and more players in the industry to take advantage of the reconfigurable technology. ‘We’ve done a lot of work in order to simplify the move from the original calculation into the reconfigurable environment,’ says Tellez.
Before the introduction of GPU programming environments such as Cuda and OpenCL, GPUs were very difficult to program. As programming tools for reconfigurable environments become more established, choosing and programming the right tool for the job can only become easier.