Virtus Data Centres managing director Darren Watkins explains the importance of building a data centre from the ground up to support the requirements of HPC users, while maximising productivity and efficiency and making the best use of energy.
Enterprise high-performance computing (HPC) was once seen as the preserve of the mega-corporation. While many complex science, engineering and business problems have long required the high bandwidth, enhanced networking and high compute capability associated with HPC, the cost of these solutions has often been a barrier.
In the past 15 years, however, HPC has quietly become one of the fastest-growing IT markets globally. IDC has forecast that the market will grow to $31 billion by 2019, up from $21 billion in 2014. This growth is not only being driven by increased use among long-standing HPC users, but also by lower entry pricing attracting new users and spurring commercial firms to adopt the technology.
The reality for many IT users is that they want to run analytics that, with the growth of data, have become too complex and time-critical for standard enterprise servers to handle efficiently.
Enabling the growth of HPC
Historically, when a data centre needed to meet increased capacity and processing needs, it would simply add floor space to accommodate more racks and servers. However, along with the demand for increased IT resources and productivity has come the need for greater efficiency, better cost savings and lower environmental impact. Third-party colocation data centres are increasingly being asked to support this growth and innovation, rather than CIOs expending capital to build and run their own on-premise capability.
As a result, HPC is now seen as a way to resolve the tension between IT budgets and performance. This requires data centres to adopt innovative high-density computing (HDC) solutions, an essential ingredient for HPC, in order to maximise productivity and efficiency and to increase both the available power density and the ‘per foot’ computing power of the data centre.
Making HPC capabilities accessible to a wider group of organisations in sectors such as education, research, life sciences and government requires HDC solutions that, through greater efficiency, lower the total cost of the increased compute power. HDC is therefore a vital component in enabling organisations of varying sizes to benefit affordably from greater processing performance. Indeed, the denser the deployment, the more financially efficient it becomes.
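To illustrate with hypothetical figures: an organisation needing 200kW of IT load would fill around 50 racks at a traditional 4kW per rack, but only 10 racks at 20kW per rack. That is roughly a fifth of the floor space, with correspondingly lower space and infrastructure costs for the same compute capacity.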
As a result, using high-density innovation to support high-performance computing in the data centre has become a key battleground for colocation providers.
Connecting HPC and HDC in the data centre
In spite of its importance in enabling HPC, misperceptions persist around the role of HDC and what even qualifies as high density.
A 2015 study by VIRTUS revealed that although 97 per cent of respondents were aware of high-density solutions, only 31 per cent understood that they could be more cost-effective than traditional computing. Industry views on what constitutes high density also vary widely.
Gartner has defined a high-density capability as one where the energy needed exceeds 15kW per rack for a given set of rows, but this threshold is being revised upwards all the time, with some HPC platforms now requiring power densities in the 30-40kW range, often referred to as ‘ultra-high density’. However, VIRTUS’ research indicated that 60 per cent of IT business decision makers thought HDC meant a density of less than 10kW per rack of IT computing capability.
Data centres built as recently as four to five years ago were designed for a uniform energy distribution of around two to four kilowatts (kW) per IT rack. Some even added ‘high-density zones’ capable of scaling up if required. However, many of these required either additional space around the higher-power racks to balance the cooling capability, or supplemental cooling equipment that raised the cost of supporting the increase in kW density.
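As a simplified example, consider a hall laid out for 4kW per rack: a single 30kW HPC rack consumes the power and cooling allocation of seven or eight standard rack positions, so the operator must either leave neighbouring positions empty to balance the load or install supplemental cooling, both of which push up the effective cost per kW.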
On the other hand, if density has been designed in from the beginning, a data centre can support the next generation of business IT infrastructure for HPC, optimising the footprint required and the overall associated costs. This means that even if existing data centres take steps to offer high density, they are playing catch-up with the next generation of intelligent data centres that already have this capability built into their design.
Ultimately, older data centres are under increasing pressure to accommodate the new, more powerful technologies being installed in them if they want to remain competitive in the marketplace, something that is far more difficult to do retrospectively.
Choosing for the future
Organisations are beginning to realise that making the right choice is not simply about the data centre, it is also about making the right choice of HPC platform.
Many data centres will claim to deliver high-density computing – and technically speaking, a lot of them will – but only data centres that have been built from the ground up with high density in mind will be able to do so cost-effectively.
As such, it is more important than ever that organisations that do not fall into the mega-corporation category conduct due diligence before signing up with data centre providers, to avoid the risk of tying themselves into costly long-term contracts that meet neither their current nor their future needs.
With the Internet of Things and Big Data quickly becoming a reality, organisations across industries will need to ensure that their IT systems are ready and able to deal with the next generation of computing and performance needs to remain competitive and cost efficient.
Virtus Data Centres is one of the UK’s fastest-growing data centre providers. The company owns, designs, builds and operates some of the country’s most efficient data centres. They are located within London’s metro area, offering low latency and high connectivity, and are designed specifically to offer the flexibility modern users need.