ISC's Nages Sieslack highlights a convergence of technologies around HPC, a focus of the ISC High Performance conference, which takes place from 19 to 23 June.
When cloud computing was introduced to ISC High Performance attendees in 2010, it was still a relatively new concept. Many pundits in the industry had labelled it as hype and predicted it would fade away with time.
Professor Dr Wolfgang Gentzsch, known as the ‘European Cloud Guru’, was invited to chair the very first ISC session on High-Performance Computing (HPC) in the Cloud. It was apparent from the discussions that many people in this community didn’t know much beyond the definition of cloud computing and its different forms, namely public, private and hybrid clouds. Some even suggested that it was merely grid computing with a new, fancier label.
Interestingly, about a year later, a similar crowd of scientists and entrepreneurs attended the conference. These attendees, however, were more savvy: they were familiar with terms such as IaaS, PaaS and SaaS, understood the ‘pay as you go’ model, and were able to pose challenging questions about pricing models.
Gentzsch has since founded UberCloud, an online community, marketplace and container software factory based in Silicon Valley, where engineers, scientists and their service providers can access on-demand computing power and expertise. Almost 200 teams have used the UberCloud platform to conduct experiments in aerodynamics, life sciences, data modelling and other fields.
Acceptance of HPC in the cloud has grown, brought about in part by the adoption of cloud as a mainstream computing technology. Gentzsch remarks: ‘HPC cloud has followed the general cloud trend with perhaps a lag of a few years.’ IDC’s worldwide studies of HPC end-user sites show that the proportion of sites employing cloud computing – public or private – has risen steadily, from 13.8 per cent in 2011 to 23.5 per cent in 2013 and 34.1 per cent in 2015. Also represented in this mix is a growing contingent of hybrid clouds, which blur the public-private distinction by combining on-premise systems with external resources.
While the ‘adopt or not adopt cloud’ debate was building in the HPC community, we introduced big data as a topic parallel to cloud in 2013. Even with just one conference day devoted to it, the topic sparked fascinating discussions, which carried forward into the two subsequent conferences and encompassed subjects such as security and privacy, visual analytics, and the Internet of Things. The cloud computing model proved a fruitful match for big data, with cloud offering scalable resources on demand and thus a flexible platform for compute-intensive analytics. Many topics and sessions in the conference programme reflected this serendipitous convergence.
We had speakers from scientific communities, such as CERN, who told us how big data is used to analyse particle collisions within the Large Hadron Collider in order to reveal some of the most fundamental properties of the physical world. Other speakers described case studies from enterprises such as PayPal, which relies on data analysis to enable customers and merchants to enjoy fraud-free transactions online, and the multinational Virgin Group, which relies on big data to study its customers’ spending habits and bind them to Virgin’s loyalty programme.
Three years ago, another significant new technology was emerging onto the HPC scene. Barely known to the world at the time, deep learning had been undergoing rapid development since 2006. This technology applies massive amounts of computational power to large volumes of data, providing businesses, research organisations and consumers with applications that deliver sophisticated analytics mimicking human intelligence.
One of the primary goals is to build artificial intelligence (AI) systems capable of helping people navigate the risks of everyday life – be it a passenger using a self-driving car to get from point A to point B, a medical doctor using an expert system to help diagnose and treat patients, or a traveller negotiating the language of a foreign country with a real-time speech translator.
According to Dr Andrew Ng of Baidu, China’s premier web services company, the breakthrough came in 2013 when his peers Bryan Catanzaro and Adam Coates built the very first HPC-style deep learning system. This helped increase the computational power available to the technology by one to two orders of magnitude. It was around the same time that the world was becoming aware of ground-breaking research in the fields of computer vision, speech recognition, and natural language processing.
Before joining Baidu, Ng founded and led the Google Brain project, and in 2011 his team developed massive-scale deep learning algorithms. This resulted in the famous ‘Google cat’, in which a massive neural network with one billion parameters learned to detect cats in unlabelled YouTube videos. Ng joined Baidu in 2014, as did Catanzaro and Coates, and today they continue to work there on deep learning and its applications in computer vision and speech, including areas such as autonomous driving.
In a recent Reddit post, Ng said: ‘today at Baidu, we have a systems team that’s developing what we think is the next generation of deep learning systems, using HPC techniques. We think it’s the combination of HPC and large amounts of data that’ll give us the next big increment in deep learning.’
This year, Andrew Ng has been invited to deliver the conference keynote at ISC High Performance. The event, which takes place over five days in Frankfurt, Germany, will be home to 3,000 HPC enthusiasts from over 56 countries. In his keynote, Ng will describe how HPC is supercharging machine learning. According to the keynote abstract: ‘AI is transforming the entire world of technology. Much of this progress is due to the ability of learning algorithms to spot patterns in larger and larger amounts of data. Today this is powering everything from web search to self-driving cars. This insatiable hunger for processing data has caused the bleeding edge of machine learning to shift from CPU computing, to cloud, to GPU, to HPC.’
Ng’s observation is especially reassuring to the organisers of ISC High Performance. With 31 years of experience in conducting scientific computing conferences, we are pleased that we realised early on that the convergence of cloud, big data and HPC could bear such interesting fruit. This combination of technologies and communities is indeed changing the world of science and engineering, as well as every sector of the economy that relies on them.
In addition to the theme of convergent HPC technologies, this year’s conference will also offer two days of sessions in the industry track, specially designed to meet the interests of commercial users. Our focus is Industrie 4.0, a German strategic initiative conceived to establish the country as a pioneer of industrial IT, which is currently revolutionising engineering in the manufacturing sector.
Speakers in the industry track will address various HPC technologies and applications that affect and support cyber-physical systems, the Internet of Things and the Internet of Services. This track also includes a panel discussion on cloud computing, as well as a discussion with end users and vendors in the space. This track is intended for engineers, manufacturers, and other commercial end-users, in addition to computing specialists and marketing staff from vendors serving this community.
We hope to see you at ISC High Performance from 19 to 23 June.