Suppliers of supercomputers are looking to scientists and engineers as customers, and offering them an array of products at low prices.
At the International Supercomputing Conference in Dresden in June, there was general agreement that two technologies – multi-core processors and clusters – have changed the face of high-performance computing: what was once costly and difficult to program is now cheaper and easier to use. Instead of selling to national laboratories or the central IT purchasers of multinational corporations, supercomputing companies – whether Tyan, with its ‘personal supercomputers’ that fit under a desk, or Nvidia, better known for gaming and graphics display hardware – are now aiming directly at the scientist and engineer.
Dr Erich Strohmaier, from the Lawrence Berkeley National Laboratory in the USA, pointed out that the power consumption of chips and systems has increased tremendously because of the ‘cheap’ exploitation of Moore’s Law – by increasing clock frequency. But this is no longer possible, he said, because higher frequencies demand too much power and consequently generate too much waste heat. The optimal core sizes and power levels are smaller than those of current ‘rich cores’, and this has led to the development of ‘many-core’ chip architectures such as the Intel Polaris with 80 cores; the ClearSpeed CSX600 with 96 cores; the Nvidia G80 with 128 cores; or the Cisco Metro with 188 cores. Dr Strohmaier is one of the compilers of the ‘Top500’ – the list of the world’s fastest supercomputers – and so important has the issue of heat become that he promised the meeting: ‘In the near future we will track energy consumption in the Top500 list.’
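As a rough illustration of why frequency scaling ran out of road (a standard first-order CMOS approximation, not a figure from Strohmaier's talk), dynamic power scales linearly with clock frequency but quadratically with supply voltage:

\[
P_{\text{dynamic}} \approx \alpha \, C \, V^{2} f
\]

where \(\alpha\) is the switching activity, \(C\) the switched capacitance, \(V\) the supply voltage and \(f\) the clock frequency. Because the voltage generally has to rise along with the frequency, power grows roughly as the cube of the clock speed, whereas doubling the number of cores at a fixed clock only roughly doubles power for, ideally, double the throughput – which is the economics driving the many-core designs listed above.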
Other solutions complement multi-core in tackling power consumption and heat generation. The NEC cluster at the Tokyo Institute of Technology, which now ranks 14th on the list, consumes 1.2 MW of power. According to Stephen McKinnon, chief operating officer of ClearSpeed, the company’s accelerator technology has helped improve performance on the machine without a significant increase in power consumption. Ed Turkel, of Hewlett-Packard’s HPC division, hinted that the future could see exotic cooling technologies, possibly derived from HP’s inkjet printer technology, in which the hot spots on a processor are selectively spray-cooled.
Dual-core processors are dominant in the Top500. Intel’s Woodcrest dual-core chip showed the most growth, with 205 systems using the chip, compared with just 31 six months ago. Another 90 systems use AMD Opteron dual-core processors. At 373 systems, clusters remain the most common architecture in the Top500 list. With 40.6 per cent of the systems on the list, Hewlett-Packard now has the lead over IBM, which supplied 38.4 per cent. HP announced a research programme on next-generation multi-core optimisation technologies, and Ed Turkel said the company’s blade systems are now prevalent in the Top500, although they were only introduced a year or so ago.
IBM has six of the top 10 machines in terms of performance, and both it and HP are looking forward to the next-generation ‘petaflop’ computer, but IBM too is looking to smaller systems. Klaus Gottschalk, from the company’s Systems and Technology Group in Stuttgart, said: ‘Science is one of the defined industries in our matrix.’ Along with other suppliers, the company is also targeting financial markets and has opened an ‘on-demand computing’ facility for the City of London.
Nvidia announced a new class of products, called Tesla, based on its graphics processing unit (GPU), which it claims ‘will place the power previously available only from supercomputers in the hands of every scientist and engineer.’ With multiple deskside systems, the company says, a standard PC or workstation is transformed into a personal supercomputer, delivering up to eight teraflops of computing power to the desktop.
Multi-core chips and clusters require parallel programming models and compilers, yet huge amounts of legacy code were written for serial computing. ClearSpeed’s McKinnon warned: ‘The old techniques of going faster will no longer work. People will have to adapt to multi-core, but that is not the way that code is currently written. We are trying to simplify the transition to a heterogeneous multi-core environment.’
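A minimal sketch of the kind of rewrite McKinnon is describing – generic CUDA C++, not ClearSpeed’s or Nvidia’s actual tooling, with purely illustrative function names. The serial loop that legacy code relies on disappears; instead, each of thousands of GPU threads handles one element, and the host must explicitly move data to and from the accelerator, which is what makes the environment ‘heterogeneous’.

```
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Legacy serial version: one core walks the whole array.
void saxpy_serial(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// Data-parallel version: the loop is gone; each thread computes one index.
__global__ void saxpy_kernel(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard against the final partial block
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    // Copy the data to the accelerator...
    float *dx = nullptr, *dy = nullptr;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // ...launch enough threads to cover every element...
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy_kernel<<<blocks, threads>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();

    // ...and copy the result back to the host.
    cudaMemcpy(y.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);     // expect 2*1 + 2 = 4

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

Even in this toy example the burden McKinnon points to is visible: the programmer, not the compiler, decides what runs where, how many threads to launch and when data crosses between host and device – decisions that simply do not arise in serial code.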
Dr Burton Smith, a supercomputing pioneer and now a Microsoft Technical Fellow, warned of what ‘general-purpose parallel computing’ will mean: ‘Our industry and the universities must reinvent not only computing, but also the computing profession. To enjoy performance improvements in the future, mass-market technology providers will have to embrace parallel computing.’
Image: IBM's Blue Gene/P supercomputer.