Friday, May 15, 2015

Purdue Lights Up Eighth Cluster in Eight Years

[Video: Purdue cluster Rice installation, via YouTube]
At Purdue, installing cluster computers is a tradition that inspires teamwork. The university’s central computing organization, Information Technology at Purdue (ITaP), just built its eighth cluster in as many years – seven of them TOP500-level machines – with help from more than 100 staff and volunteers.
On Friday morning, the crew got to work unboxing and assembling Purdue’s latest research supercomputer inside the high-performance computing datacenter of Purdue’s Mathematical Sciences Building. The team was in a race to get the HP cluster – named “Rice” in honor of John Rice, one of the earliest faculty members of Purdue’s first-in-the-nation computer science program – up and running by that afternoon.
Rice is the newest of Purdue’s Community Clusters, optimized for traditional, tightly-coupled science and engineering applications. The HP cluster comprises 576 compute nodes, each with two 10-core Intel Xeon E5 processors. Every Rice node thus offers 20 processor cores, 64 GB of RAM, and a 56 Gbps InfiniBand interconnect – plus a five-year warranty. The cluster also features a Lustre parallel file system built on DataDirect Networks’ SFA12KX EXAScaler storage platform.
While official performance metrics haven’t been released yet, ITaP said Rice will provide about 7,000 times the processing power of an average laptop, which it expects will be sufficient to place the machine in the ranks of the world’s 500 most powerful supercomputers, alongside two other Purdue clusters: Conte and Carter. ITaP and faculty partners have built six TOP500-class supercomputers at Purdue since 2008; Rice will be the seventh.
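For a rough sense of that scale, the published node counts alone fix the aggregate core and memory totals. The peak-performance estimate in the Python sketch below additionally assumes a clock speed and per-core FLOP rate typical of Xeon E5 parts of that era, since neither figure was published for Rice; it is an illustrative back-of-the-envelope calculation, not an official benchmark.

# Aggregate figures from the published Rice specifications.
nodes = 576
cores_per_node = 20              # two 10-core Intel Xeon E5 processors
ram_per_node_gb = 64

total_cores = nodes * cores_per_node            # 11,520 cores
total_ram_tb = nodes * ram_per_node_gb / 1024   # 36 TB of RAM

# ASSUMPTIONS (not published for Rice): a ~2.5 GHz clock and
# 8 double-precision FLOPs per cycle per core, typical of AVX-era Xeon E5s.
clock_ghz = 2.5
flops_per_cycle = 8

peak_tflops = total_cores * clock_ghz * flops_per_cycle / 1000
print(f"{total_cores:,} cores, {total_ram_tb:.0f} TB RAM, "
      f"~{peak_tflops:.0f} TFLOPS peak (under assumed clock/FLOP rates)")

Under those assumptions the sketch yields roughly 230 TFLOPS of peak performance; set against the few tens of GFLOPS a typical 2015 laptop could muster, that is broadly consistent with ITaP’s 7,000-times comparison.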
The three clusters – Rice, Conte and Carter – will be shared by 150 Purdue research labs and hundreds of faculty and students who will leverage the computing power for a wide range of science and engineering problems. It’s research that’s enriching humanity through better disease treatments, improved crop technology, climate simulations and space discovery.
ITaP Research Computing will also add two smaller clusters – Snyder and Hammer – aimed at memory-intensive and high-throughput serial workloads, respectively.
Rice was purchased for approximately $4.6 million, roughly what it costs to operate the cluster each year. Gerry McCartney, Purdue’s system chief information officer, told a local public radio station that having this level of advanced computing makes it easier to attract top talent, and he’s confident that it is also the right model from a cost perspective.
“I will happily tell you we are a small fraction of the cost it would be to go outside,” he shared, likely alluding to a hosted service. “Now should that ever change, we will go outside. There’s no religion here.”
“Then the imagination has to be: ‘now what can we do to help faculty do research and our students be more successful?’ Right now, that expresses itself as building these machines. In five years, it might be something completely different.”
A time-lapse video of the cluster build is available on YouTube.

The impetus for the robust HPC upgrade path is clear from McCartney’s perspective.
“Demand from faculty making life- and society-changing discoveries drives our strong program of adding a TOP500 cluster every year,” said McCartney. “We only see this demand continuing to grow as new researchers join Purdue’s faculty under President Mitch Daniels’ Purdue Moves initiative.”
Meteorology graduate student Kim Hoogewind lost no time putting Rice to work simulating future severe weather patterns. Before the last box was unpacked, Hoogewind pushed a job out to six nodes, and it was finished in less than an hour.
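For readers curious what “pushing a job out to six nodes” looks like in practice, here is a minimal sketch of a batch submission, written in Python. The PBS-style directives, job name, queue, walltime, and executable are all placeholders assumed for illustration, not details of Hoogewind’s actual run or of Rice’s real configuration.

import subprocess
import textwrap

# Hypothetical PBS-style job script requesting six Rice-like nodes.
# Job name, queue, walltime, and executable are placeholders.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N severe-weather-sim
    #PBS -l nodes=6:ppn=20      # six nodes at 20 cores each = 120 MPI ranks
    #PBS -l walltime=01:00:00   # the run described above finished within an hour
    #PBS -q standby             # placeholder queue name

    cd "$PBS_O_WORKDIR"
    mpiexec -n 120 ./weather_model   # placeholder executable
""")

# Write the script to disk and hand it to the scheduler.
with open("run_sim.pbs", "w") as f:
    f.write(job_script)
subprocess.run(["qsub", "run_sim.pbs"], check=True)

A real submission would need the site’s actual queue names and environment-module setup, but the shape – request nodes and cores, set a time limit, launch MPI ranks across the allocation – is the same on any cluster of this class.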
Hoogewind works in the lab of atmospheric science Professor Michael Baldwin. The team is studying the link between climate change and severe weather events, such as thunderstorms and tornadoes, using decades’ worth of weather data. It’s the kind of research that’s just not feasible without supercomputers like Rice.
“You need years and years of these simulations to try and say something meaningful,” said Professor Baldwin. “It really takes high-performance computing, there’s no way around it.”
