
Bill Thigpen, Engineering Branch Chief of NASA’s Advanced Supercomputing Division

If you look at the Science Mission Directorate, which is our largest user, they're doing both earth science and space science. In the earth science arena they're looking at things like climate change, ocean modeling, and earthquake modeling, and for all the data the NASA satellites are gathering, a lot of the processing of that information is occurring on this system.

The Exploration Systems Mission Directorate is our second largest user, and they're doing a lot of work on next-generation spacecraft. For the Constellation program and the Ares rockets, they're looking at both the safety and design aspects of these systems, and how we're going to get to the Moon, how we're going to get to Mars.

I didn’t talk about the Space Science Division. Space Science is looking at things like solar weather, and a lot of work on colliding black holes has been done on the system. Then, in the aeronautics arena, they're looking at how to make engines quieter and how to fly planes on Mars, and they're also doing research into fundamental aeronautics.

And then the Space Operations Mission Directorate does a lot of work on safety for the Space Shuttle.

NTB: Have any significant updates or upgrades been made to the system since it first became operational?

Thigpen: Yes, as a matter of fact, there have. We have increased the system by 40 percent. That was done in two steps. First, the Exploration Systems Mission Directorate (ESMD) needed more processing capability, so they paid for an addition to the system, and they get all of that addition. That was 2,048 processors that went in as two 1,024-core nodes.

Then there was also an addition we made to the system as part of looking at the next generation of systems for the NAS (NASA Advanced Supercomputing) Division, and that was a 2,048-core node that is now being used by all four mission directorates.

NTB: So how many total processors does the system now have?

Thigpen: The total is over 14,000.

NTB: What are Columbia’s current performance characteristics in terms of speed, storage capacity, etc., and how does it compare to some of the world’s other top supercomputers?

Thigpen: We just ran a LINPACK benchmark on the system on 23 out of the 24 nodes — we actually didn’t run it on all 24 because we didn’t want to take the system down long enough to bring the 24th node in — and we got 66.5 teraflops. That places the system — if the numbers held, and they won’t hold because there are other new systems coming in — 13th in the world as far as the speed of the computer. There’s over one-and-a-half petabytes of disk storage that’s part of the Columbia system.

The other thing that’s happening now that I think is pretty significant is, we’re delivering about 1.9 million hours to NASA every week from Columbia.
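As a rough back-of-envelope check (an editorial illustration, not something stated in the interview): if "hours" here means processor-hours and the processor count is taken as roughly 14,000, then 1.9 million hours a week works out to about 80 percent of the machine's weekly capacity. A minimal Python sketch of that arithmetic, with both assumptions marked:

    # Illustrative assumptions, not figures confirmed in the interview:
    # - "hours" is read as processor-hours
    # - the processor count is taken as roughly 14,000 ("over 14,000" above)
    processors = 14_000
    hours_per_week = 24 * 7             # 168 wall-clock hours in a week
    delivered = 1_900_000               # processor-hours delivered per week (from the interview)

    capacity = processors * hours_per_week    # about 2.35 million processor-hours
    utilization = delivered / capacity        # about 0.81

    print(f"Weekly capacity: {capacity:,} processor-hours")
    print(f"Implied utilization: {utilization:.0%}")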

NTB: That actually relates to my next question. I assume there’s a lot of demand for computer time on Columbia. How do you prioritize the projects and decide which ones will run and which ones won’t?