
Tom Flatley, Computer Engineer, Goddard Space Flight Center, Greenbelt, MD

Tom Flatley, computer engineer and current head of the Science Data Processing Branch at Goddard Space Flight Center, leads a group of engineers and programmers developing flight and ground-based science data processing systems and applications, including SpaceCube, CubeSats/SmallSats, modeling/simulation/visualization, and other technologies.

NASA Tech Briefs: Why will NASA require improvements in on-board computing power?

Tom Flatley: Many of the next-generation instruments currently being developed are going to produce tremendous data volumes at extremely high data rates. Their needs are surpassing the capabilities of current flight processing systems, so what we’re trying to do is enable an order of magnitude or more improvement in on-board processing power so that we can handle the data volumes and data rates that the next generation of missions will require.

NTB: What is SpaceCube?

Flatley: SpaceCube is a hybrid science data processing platform that we’re developing. When I say hybrid, I mean it’s composed of traditional CPU resources plus field programmable gate array (FPGA) and digital signal processing (DSP) resources. We develop applications that use the benefits of each of these processing technologies to accelerate the execution of science data processing algorithms.

By using the new radiation-tolerant, but not radiation-hardened, hybrid processors, we can take advantage of the speed that the commercial devices can achieve, which is an order of magnitude higher than traditional flight processors. Then we develop strategies to detect and correct upsets caused by radiation in space: basically, we fix the error, accept that we may have had a “blip” in the data, and keep going with the processing.
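
To make the hybrid-processing idea concrete, here is a minimal sketch in C of how an application might prefer an FPGA-accelerated version of a kernel and fall back to a CPU implementation. The kernel, function names, and frame sizes are illustrative assumptions, not SpaceCube interfaces.

```c
/*
 * Hypothetical sketch of hybrid dispatch: try an FPGA-accelerated kernel,
 * fall back to a plain-C CPU implementation.  Not SpaceCube code.
 */
#include <stdint.h>
#include <stdio.h>

#define FRAME_W 8
#define FRAME_H 8

/* Reference CPU implementation: 3x3 box filter over one small frame. */
static void cpu_box_filter(const uint16_t *in, uint16_t *out)
{
    for (int y = 0; y < FRAME_H; y++) {
        for (int x = 0; x < FRAME_W; x++) {
            uint32_t sum = 0, n = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < FRAME_H && xx >= 0 && xx < FRAME_W) {
                        sum += in[yy * FRAME_W + xx];
                        n++;
                    }
                }
            }
            out[y * FRAME_W + x] = (uint16_t)(sum / n);
        }
    }
}

/*
 * Stand-in for the FPGA-accelerated version of the same kernel.  On real
 * hardware this would hand the frame to the fabric and read back the
 * result; here it simply calls the CPU path so the sketch stays runnable.
 * Returns 0 on success so a caller can fall back if the accelerator fails.
 */
static int fpga_box_filter(const uint16_t *in, uint16_t *out)
{
    cpu_box_filter(in, out);
    return 0;
}

int main(void)
{
    uint16_t raw[FRAME_W * FRAME_H], filtered[FRAME_W * FRAME_H];

    for (int i = 0; i < FRAME_W * FRAME_H; i++)
        raw[i] = (uint16_t)((i * 37) % 4096);   /* synthetic pixel data */

    /* Prefer the accelerator; fall back to the CPU if it reports failure. */
    if (fpga_box_filter(raw, filtered) != 0)
        cpu_box_filter(raw, filtered);

    printf("filtered[0] = %u\n", (unsigned)filtered[0]);
    return 0;
}
```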

NTB: Can you explain a bit more about upset mitigation and how SpaceCube enables that?

Flatley: Traditional flight processors are, by design, radiation-hardened: the chips are specifically built using technologies that are immune to being upset in the space radiation environment. Radiation-tolerant devices, on the other hand, are designed so that they won’t have destructive failures in space, but they can have what you can think of as “bit flips,” where a 1 changes to a 0 or a 0 changes to a 1. A “bit flip” can produce incorrect data temporarily, but you can detect that and fix it, or you can continue your processing and the error will wash out as correct processing continues. The traditional devices, which are designed specifically so that they cannot be upset, are typically larger and slower, and they cannot perform at the rate that current commercial ground-based processors can. So what we’re trying to do is find a middle ground where we take advantage of the ground-type capabilities for high-end processing and make them operate reliably enough in space that we can run science data processing applications.

We’re not trying to do man-rated health and safety. We’re not even trying to do critical spacecraft functions. For processing science data, it’s typically okay if you have a bad pixel every once in a while, or if you have to reset and start over again, as long as you’re getting 100x more capability that can make the difference between being able to do your mission and not being able to do it. Our strategy has been to use these high-end, radiation-tolerant devices and then come up with techniques to detect and correct upsets, so that for certain orbits and applications we can operate nearly as reliably as the radiation-hardened devices in space.
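
One common software technique for this kind of detect-and-correct is triple modular redundancy: keep multiple copies of a critical value and vote on them before use. The sketch below illustrates the general idea in C under those assumptions; it is not SpaceCube flight code.

```c
/*
 * Illustrative triple-modular-redundancy (TMR) voter for a single word:
 * keep three copies, take a bitwise 2-of-3 majority vote before use, and
 * scrub (rewrite) the copies so a single flipped bit is repaired.
 * A sketch of the general technique, not SpaceCube flight code.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t copy[3];           /* three redundant copies of one word */
} tmr_word_t;

static void tmr_write(tmr_word_t *w, uint32_t value)
{
    w->copy[0] = w->copy[1] = w->copy[2] = value;
}

static uint32_t tmr_read(tmr_word_t *w)
{
    uint32_t a = w->copy[0], b = w->copy[1], c = w->copy[2];
    uint32_t voted = (a & b) | (a & c) | (b & c);   /* bitwise majority */

    if (a != voted || b != voted || c != voted)
        printf("upset detected and corrected\n");

    tmr_write(w, voted);        /* scrub: repair the flipped copy */
    return voted;
}

int main(void)
{
    tmr_word_t counter;
    tmr_write(&counter, 0x12345678u);

    counter.copy[1] ^= (1u << 7);   /* simulate a radiation-induced bit flip */

    printf("value after vote: 0x%08" PRIX32 "\n", tmr_read(&counter));
    return 0;
}
```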

NTB: What do you mean by the “science data” that’s being processed? Is this image processing?

Flatley: It’s actually cross-cutting. We can support image processing, radar processing, or basically any kind of high-end processing need. One good example: one of our scientists was proposing a radar instrument to go to Mars. With the current traditional processor, he could collect and process 9 minutes of data per day, and that filled up his on-board recorder; that was all his processors could handle in a day. We did an R&D project with him using something like the SpaceCube, where we moved some of his ground processing, which required more computing power than you could have onboard with traditional processors but which the SpaceCube could handle, up onto the spacecraft, and then just sent the pre-processed images down rather than all the raw data. The first year we got a 6-to-1 data volume reduction by migrating the first set of his ground processing. The following year we got a 165-to-1 data volume reduction by processing the complete images onboard. Basically, he could run for 9 minutes a day using traditional processors, or he could run 24/7 using the SpaceCube and maybe have a bad pixel every once in a while. That’s the sort of enabling capability that we’re trying to deliver to the science community with the SpaceCube.
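
The back-of-the-envelope sketch below works through the data-volume numbers quoted above. The 6-to-1 and 165-to-1 reduction factors come from the interview; the 100 Mbit/s raw data rate is a hypothetical figure chosen only to make the arithmetic concrete.

```c
/*
 * Back-of-the-envelope data-volume comparison.  The 6:1 and 165:1
 * reduction factors are the ones quoted in the interview; the 100 Mbit/s
 * raw data rate and the duty cycles are illustrative assumptions.
 */
#include <stdio.h>

int main(void)
{
    const double raw_rate_mbps   = 100.0;          /* assumed raw rate   */
    const double legacy_secs     = 9.0 * 60.0;     /* 9 minutes per day  */
    const double continuous_secs = 24.0 * 3600.0;  /* 24/7 operation     */

    double legacy_gbit   = raw_rate_mbps * legacy_secs     / 1000.0;
    double full_day_gbit = raw_rate_mbps * continuous_secs / 1000.0;

    printf("9 min/day of raw data:             %6.0f Gbit/day\n", legacy_gbit);
    printf("24/7 with 6:1 onboard reduction:   %6.0f Gbit/day\n", full_day_gbit / 6.0);
    printf("24/7 with 165:1 onboard reduction: %6.0f Gbit/day\n", full_day_gbit / 165.0);
    return 0;
}
```

With these figures, running 24/7 at a 165-to-1 reduction downlinks roughly the same daily volume as 9 minutes of raw collection, and that comparison holds regardless of the assumed raw rate.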

NTB: What other exciting capabilities do you see with SpaceCube?

Flatley: You can actually look at the data in real time and react to events. For example, an Earth science application surveying the Earth may detect a forest fire or a flood; it could adapt its processing to switch to an emergency-response mode, or send direct-broadcast, real-time pictures of the fire or flood down to the people in the field who are responding to it.
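
A minimal sketch of that kind of onboard event reaction appears below: scan each frame for hot pixels and, if enough are found, switch the processing chain into an emergency-response mode. The threshold, frame size, and mode names are assumptions for illustration only.

```c
/*
 * Sketch of onboard event detection: count pixels above a hot-spot
 * threshold in each frame and switch processing modes if enough are found.
 * Threshold, frame size, and mode names are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

#define FRAME_PIXELS  1024
#define HOT_THRESHOLD 3500     /* assumed hot-spot level in a thermal band */
#define HOT_PIXEL_MIN 16       /* assumed count needed to declare an event */

enum proc_mode { MODE_SURVEY, MODE_EMERGENCY_RESPONSE };

static enum proc_mode classify_frame(const uint16_t *frame)
{
    int hot = 0;
    for (int i = 0; i < FRAME_PIXELS; i++) {
        if (frame[i] > HOT_THRESHOLD)
            hot++;
    }
    return (hot >= HOT_PIXEL_MIN) ? MODE_EMERGENCY_RESPONSE : MODE_SURVEY;
}

int main(void)
{
    uint16_t frame[FRAME_PIXELS] = {0};

    for (int i = 0; i < 32; i++)    /* synthetic fire signature in one corner */
        frame[i] = 4000;

    if (classify_frame(frame) == MODE_EMERGENCY_RESPONSE)
        printf("event detected: switch to direct-broadcast of processed imagery\n");
    else
        printf("no event: continue normal survey processing\n");

    return 0;
}
```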

SpaceCube hybrid processing can also enable autonomous robotic operations, like satellite servicing. On geosynchronous-orbit or lunar and planetary missions, you can have autonomous operations, with enough onboard intelligence to operate without having a human in the loop all of the time.