Primarily, as a researcher in high-performance IO and file systems, I've contributed file system and IO improvements to multiple supercomputing projects at Sandia. Probably the most visible was the virtual file system interface for our Red Storm machine, the prototype for Cray's XT line.
Sandia runs a number of scientific machines. Our largest, most popular machines are:
- Red Sky, which has 5,300 nodes with eight cores per node (42,400 processors), running CentOS 5.3 Linux. This machine is currently at number 10 on the Top500 list;
- Thunderbird, which has 4,500 nodes with two cores per node (9,000 processors), running Red Hat Enterprise Linux;
- TLCC (two identical machines), part of the Tri-Lab Linux Capacity Cluster programme, each with 576 nodes and 16 cores per node (9,216 processors), running the tri-lab TOS software stack, a derivative of Red Hat Enterprise Linux; and
- Red Storm, which was the prototype for Cray's XT line of machines. The machine was developed in partnership with Cray, beginning early in the last decade, and has been upgraded twice since. Currently it has 38,208 processors and runs Catamount, a Sandia-crafted, special-purpose lightweight operating system designed to maximise the capabilities of large multi-program parallel machines.
Sandia is a multi-programme laboratory. Primarily, we work on the United States' national security mission. Together with Los Alamos National Laboratory and Lawrence Livermore National Laboratory, we work on maintaining the nation's nuclear weapons stockpile.
Major areas of work within that mission include life-extension programmes for the weapons, as well as reliability, security, and safety.
We are not limited solely to that, though. Sandia's mission truly is 'exceptional service in the national interest', and we have made significant contributions in nuclear non-proliferation, homeland security technologies, energy research, synthetic aperture radar, robotics, and more.
Our machines' typical use reflects these missions. Sandia is fundamentally an engineering laboratory, so we model many of the things one might expect, such as heat transfer, shock, and materials deformation.
Obviously, access to these machines for our sensitive work takes place in restricted areas, usually local to the facility.
We do much that is not sensitive, though, and access can be gained through the internet – much like any large business.
Here, we use the normal technologies: firewalls, gateways, VPNs, encrypted HTTP, and so on.
I'm an IO guy, so I'd like to elaborate on our IO solutions and on how we see storage evolving in the high-performance space.
In the past, these large machines incorporated dedicated IO systems; the storage facilities were directly attached and not shared with our other compute resources.
At this point, we are well down the path of building a centralised store and connecting the machines to that shared resource. Our first attempt at this was a purchase from Panasas: we bought 10 shelves of storage a number of years ago, and our particular needs led us to a non-standard configuration.
Since we primarily use the resource for parallel IO access from the supercomputers, we have only one DirectorBlade for every two shelves, instead of the recommended one per shelf.
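To give a flavour of what that parallel access looks like from the application side, here is a minimal sketch (not Sandia production code) of many MPI ranks writing disjoint regions of a single shared file through MPI-IO. The file path and sizes are purely illustrative assumptions; on our systems such a file would simply live on the shared store.

```c
/* Minimal sketch: each MPI rank writes its own contiguous block of one
 * shared file through MPI-IO.  Path and sizes are hypothetical. */
#include <mpi.h>
#include <stdlib.h>

#define BLOCK_DOUBLES (1 << 20)   /* 8 MiB of doubles per rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(BLOCK_DOUBLES * sizeof(double));
    for (int i = 0; i < BLOCK_DOUBLES; i++)
        buf[i] = (double)rank;                /* stand-in for real results */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "/panfs/scratch/checkpoint.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank targets a disjoint offset; the collective call lets the
     * MPI-IO layer aggregate requests before they reach the parallel
     * file system. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK_DOUBLES * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, BLOCK_DOUBLES, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```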
Our second solution leverages two models of DataDirect Networks (DDN) RAID controllers (9900s and 9950s), aggregated with Sun's Lustre file system software. This store was assembled over multiple purchases and currently offers about one petabyte of capacity.
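As a rough illustration of how Lustre spreads data across a back-end like this, the following sketch pre-creates a file striped over many object storage targets using liblustreapi. The path, stripe settings, and header location are assumptions that vary with the Lustre release, not a description of our actual configuration.

```c
/* Hedged sketch: pre-create a Lustre file striped across 32 OSTs so a
 * large write is spread over the RAID back-end.  Header name and the
 * -llustreapi link flag differ between Lustre versions. */
#include <lustre/lustreapi.h>   /* <lustre/liblustreapi.h> on older releases */
#include <stdio.h>

int main(void)
{
    const char *path = "/scratch/lustre/big_output.dat";   /* hypothetical */

    /* 4 MiB stripes, default starting OST (-1), 32 stripes, RAID0 pattern. */
    int rc = llapi_file_create(path, 4 << 20, -1, 32, 0);
    if (rc) {
        fprintf(stderr, "llapi_file_create failed: %d\n", rc);
        return 1;
    }

    /* The file can now be opened and written with ordinary POSIX or MPI-IO
     * calls; Lustre round-robins the stripes across the chosen OSTs. */
    return 0;
}
```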
These two stores service all of the machines described above, except Red Storm.
Red Storm represents what is probably the last of the old-style, dedicated IO system architecture. That machine has a mixture of DDN and LSI RAIDs, aggregated with the Lustre file system software.
This is more than just an investigation into a potential future: my centre is involved in a joint project to purchase the next 'capability' machine, for use primarily by Los Alamos and Sandia.
Capability systems are built to solve the largest problems at the labs and typically land at or near the top of the Top500 list of supercomputers.
Since this machine will be located at Los Alamos, it only makes sense to try to increase the capabilities of that laboratory's similar central supercomputer store (which is also a Panasas-based solution).
That laboratory has already deployed a central store: four petabytes of storage supporting six supercomputers, including Roadrunner, currently number two on the Top500.
Currently, that system delivers 55 GB/s to Los Alamos applications and, as part of this project, we will add an additional 160 GB/s of performance.
To support that effort, Sandia purchased four new shelves of Panasas gear and attached it to a brand new Cray XT5.
The Cray IO system, called DVS (Data Virtualization Service), currently supports Lustre and GPFS. Sandia, Panasas, and Cray have been working together on the design and software changes required to support this Panasas-based solution at its best possible capability.
Our missions are very demanding, computationally. We will continue to pursue the acquisition of state-of-the-art HPC resources in support of these missions.
In addition to all of this, Sandia has a long history of research into new technologies that may eventually be incorporated into our production machine purchases.
Our research efforts in this area encompass a very wide range: everything from architecture to high-speed networking, operating systems and, of course, IO. Probably the best-known instances of these research efforts were our partnerships with Intel, which resulted in the ASCI Red machine (the very first machine to achieve one teraflop/s), and with Cray, which resulted in the XT line of machines.
Currently, we are working on extremely advanced architecture designs, memory technologies, a new operating system called 'Kitten', and a radical departure in high-performance file system design. We will be attempting to have these technologies influence, or be incorporated into, any products that we might buy in the future.
One way we do that is by working with standards committees. For instance, our extensive experience with these very large machines has demonstrated that the current IEEE POSIX file system interface has some potentially troublesome side-effects at scale.
Together with Panasas and the other US energy labs, we have been working on a set of extensions intended to enhance portability and future-proof our applications.
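As one hedged illustration of the sort of side-effect we mean, consider the metadata traffic generated when every process in a large job independently queries the same file under strict POSIX semantics. The sketch below is not from any Sandia application and does not use the proposed extension API; it simply shows one process performing the query and sharing the result, which is the general behaviour the extension work is trying to make portable. The file path is hypothetical.

```c
/* Sketch of avoiding a metadata storm: with plain POSIX, every rank would
 * typically stat() the same file, sending thousands of identical requests
 * to the metadata server.  Here rank 0 does it once and broadcasts. */
#include <mpi.h>
#include <sys/stat.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long long size = -1;
    if (rank == 0) {
        struct stat sb;
        if (stat("/panfs/scratch/input.dat", &sb) == 0)  /* one metadata op */
            size = (long long)sb.st_size;
    }

    /* Share the answer instead of having every rank repeat the stat(). */
    MPI_Bcast(&size, 1, MPI_LONG_LONG, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("input size: %lld bytes\n", size);

    MPI_Finalize();
    return 0;
}
```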