Who's Who at NASA

Nicholas Johnson, Chief Scientist and Program Manager for NASA’s Orbital Debris Program Office, Johnson Space Center

Nicholas Johnson is Chief Scientist and Program Manager for NASA’s Orbital Debris Program Office. In July 2008 he was awarded the Department of Defense Joint Meritorious Civilian Service Award for his contribution to Operation Burnt Frost, a mission that involved the interception and destruction of an out-of-control satellite before it could hit the Earth.

NASA Tech Briefs: When was the Orbital Debris Program Office established and what is its primary function?

Nicholas Johnson: The office was established in 1979, first to define the current and future orbital debris environment to support mission operations and spacecraft design, and also to develop orbital debris mitigation measures and policies.

NTB: Can you give us a little more detail about what that involves?

Johnson: Office personnel evaluate all NASA space programs and projects for compliance with agency orbital debris mitigation requirements. The office is also the lead for coordination and cooperation with other U.S. government departments and organizations in the field of orbital debris. As Chief Scientist for Orbital Debris, I also serve as the U.S. technical expert on space debris at the United Nations.

NTB: Exactly what is space debris?

Johnson: Space debris, primarily, is anything in Earth orbit that no longer has a useful function. That could include a non-functional spacecraft, a derelict launch vehicle upper stage, fragmentation debris, paint flecks, anything you can think of.

NTB: Is space debris just manmade objects, or does it also include natural materials like meteoroids and things like that?

Johnson: Normally when we talk about orbital debris, we’re talking about manmade objects. Meteoroids are in orbit about the sun, and we normally refer to them as the natural environment.

NTB: How does one become an expert in space debris? Is there a course of study you can recommend, or do you pretty much learn on the job?

Johnson: Actually, orbital debris is a very small scientific community. Within the U.S., NASA is the principal source of orbital debris expertise and is the only organization which actually characterizes the orbital debris population from the smallest debris — microns — to the largest, which can be tens of meters. Personally, I studied physics and astrophysics, but none of my formal education involved orbital debris. I think the vast majority of the folks who are in the field did learn on the job. One university — the University of Colorado, Boulder — is the only U.S. institution to have awarded PhDs in orbital debris, but only about a half-dozen or so folks have made it through that course. I have been involved in orbital debris research for 30 years in support of a wide variety of U.S. government organizations.

NTB: When it comes to the amount of debris in space, I’ve heard all kinds of figures. I’ve heard that since the launch of Sputnik 1 back in 1957, there have been approximately 28,000 objects put into space, 9,000 of which are still in orbit, and only 6 percent are still operational. To make matters worse, we’re launching approximately 75 new spacecraft per year, adding to the potential problem. How does your office keep track of all this debris and make sure it doesn’t pose a threat to things like the International Space Station, the Hubble Space Telescope, and the Space Shuttle? And are those numbers I have even accurate?

Johnson: Well, they’re a little bit out of date. First off, NASA doesn’t maintain what’s called the U.S. Satellite Catalog.
That’s performed by the Department of Defense. We certainly work closely with the DoD in a large number of space surveillance areas. Since Sputnik 1 there have been more than 4,600 space missions launched from around the world that have successfully reached Earth orbit or beyond. The total number of objects which have been officially cataloged by the Department of Defense is now nearly 34,000, of which about 13,000 are still in Earth orbit. It turns out the Department of Defense is also tracking another 5,000 objects, which they know are out there but have not yet officially cataloged; it’s more of an administrative issue. So, if you’re trying to find a final number of objects that we’re aware of, that we know where they are, it’s somewhere around 18,000.

NTB: How big are most of these objects and how fast are they normally traveling through space?

Johnson: These objects vary in terms of size from about 10 cm or so up to many tens of meters. That’s in terms of what can be tracked. Actually, there are many smaller objects in orbit which DoD can’t track; they’re down to millimeter, or even micron, size. Their masses, of course, range anywhere from a sub-gram up to many metric tons. In low Earth orbit the speeds of orbital debris are 7-8 km/s; in geosynchronous Earth orbit the speeds are much less. NASA relies on the U.S. Space Surveillance Network to track objects larger than about 10 cm in low Earth orbit, up to 2,000 km altitude. NASA is responsible for statistically defining the debris environment for smaller objects. Special ground-based radars, including the 70-meter-diameter radio telescope at Goldstone, CA, can detect orbital debris as small as a few millimeters. Returned spacecraft surfaces provide insight into the population of orbital debris smaller than 1 mm.

NTB: Should one of these objects impact the Space Station or a Space Shuttle, what kind of damage could it do?

Johnson: The damage could be negligible, mission-threatening, or catastrophic, depending upon the size of the debris and the location of impact. Debris 10 cm and larger has the potential for completely destroying a spacecraft and creating large amounts of new debris. The accidental collision of two intact spacecraft on 10 February 2009 resulted in the creation of more than 600 large pieces of debris. Debris smaller than 1 mm normally does not affect the operation of a spacecraft. The Space Station is the most heavily protected vehicle ever launched; it can withstand hits by particles up to about 1 cm. The Space Shuttle is a little bit more vulnerable because of its nature and the fact that it has to conduct a reentry successfully. But these particles, if they were to strike either the Space Shuttle or the International Space Station, would typically hit at somewhere around a speed of 10 km/s, so a very small particle could do a lot of damage.

NTB: What is currently being done to protect these craft from being damaged by space debris, and is there new technology being developed for the future that will provide even better protection?

Johnson: It depends on the vehicle. We’re trying to design robotic spacecraft more robustly. We can shield against particles as large as about 1 cm, although most robotic spacecraft don’t have quite that much shielding onboard. The primary near-term protection for spacecraft is the limitation of new orbital debris. The U.S. and the international aerospace community have developed specific orbital debris mitigation measures.
But when possible, the design of spacecraft can be improved to protect against particles up to about 1 cm. For particles larger than 10 cm, such as those tracked by the Space Surveillance Network, collision avoidance maneuvers are the primary protection; the entire Space Station maneuvered around a piece of debris just last year. For other countermeasures, it’s all in how you fly the vehicle. Debris normally comes from specific directions, so if you put your more sensitive components away from that direction, you’ve got a better chance of surviving.

NTB: Is NASA working on any new technology that will a) reduce the amount of space debris currently floating around out there, and b) prevent future missions from turning into more space debris?

Johnson: Well, we’re looking at “a,” but we can’t do that yet, and we certainly are doing “b” very, very well. What that means is that we’ve been looking at ways to remediate the space environment, but it turns out to be a significant technical challenge as well as an economic challenge. The International Academy of Astronautics is completing a comprehensive study of concepts for the remediation of the near-Earth space environment. When that study is completed, NASA will reevaluate debris removal proposals, but to date, no technique has been found to be both technically feasible and economically viable. It’s hard to find a way to go up and remove debris once it’s in orbit, so NASA and the international community have been focusing for the last 10 to 20 years on better operations and better vehicle designs so that we don’t create debris unnecessarily.

NTB: In July 2008 you received the Department of Defense Joint Meritorious Civilian Service Award for your contribution to Operation Burnt Frost, which was the interception and destruction of an out-of-control National Reconnaissance Office satellite known as USA-193 before it could impact the Earth. Tell us about Operation Burnt Frost and what role you played in it.

Johnson: Burnt Frost was the operation to try to mitigate the threat posed by a crippled Department of Defense spacecraft that contained hazardous material and was about to reenter the atmosphere, components of which would’ve struck the surface of the Earth. I served as the NASA representative to a large U.S. government interagency group charged with assessing the threat posed by the satellite to people on Earth and means of mitigating that threat. NASA contributed to the effort in a variety of ways. We verified that the spacecraft’s propellant tank, containing a large amount of frozen, hazardous hydrazine, would survive an uncontrolled reentry in the atmosphere, potentially exposing multiple people to injury or death. NASA also played a principal role in quantifying the probability of human casualty. In the event that an order was given by the President to engage the spacecraft prior to reentry, NASA evaluated the risk to the International Space Station, the Space Shuttle, and other NASA assets from the resultant, short-lived orbital debris. I worked on a daily basis with the interagency group, visiting U.S. Strategic Command in Omaha, meeting with the President’s Science Advisor, and attending a deputies’ meeting of the National Security Council in the White House. For my efforts, I was awarded the NASA Distinguished Service Medal by the NASA Administrator and the Joint Meritorious Civilian Service Award by the Chairman of the Joint Chiefs of Staff.
NTB: Was that the first time we had ever attempted to intercept a piece of space debris with a surface-launched missile?

Johnson: Yes, it actually was the first time we ever intercepted a piece of space debris. Back in 1985, the U.S. conducted its first and only test of an air-launched anti-satellite system against a Department of Defense satellite called Solwind, which was operational at the time; hence, it was not a piece of orbital debris.

NTB: I imagine this was a lot more challenging.

Johnson: It certainly was. Actually, six weeks prior to the engagement, the United States didn’t have the capability to engage the satellite. We had to completely reconfigure the hardware and the software to even make this possible.

NTB: Exactly what was entailed in doing that? How do you respond so quickly to something like that?

Johnson: It was a phenomenal operation. I really can’t give you the details, but you would be impressed by the dedication and the hard work that people from all over the country, from all of the services, and from the civilian community spent in making that possible.

NTB: I’m sure I would, because most people think government agencies tend to get bogged down in bureaucracy. It’s actually quite a tribute that you were able to mobilize that quickly and solve the problem successfully.

Johnson: I’ve been working with the government for nearly 40 years and I’ve never seen people just throw the book out the window and get the job done as well as they did this time.

NTB: Finally, how much risk does space debris pose to people here on Earth?

Johnson: It’s really not that much of a risk. On average, about one cataloged object per day reenters the Earth’s atmosphere. Most of it burns up in the atmosphere. Those things which may have surviving components typically fall into the water, or in some desolate region like Siberia, or the Canadian tundra, or the Australian Outback. No one has ever been hurt by any reentering debris.

For more information, contact Nicholas Johnson at Nicholas.l.johnson@nasa.gov.
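To put the speeds Johnson cites in perspective, here is a minimal back-of-the-envelope sketch. The circular-orbit formula v = sqrt(GM/r) and the physical constants are standard; the 1-gram fragment and the rifle-bullet comparison are illustrative assumptions, not figures from the interview.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at a given altitude: v = sqrt(GM / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# Low Earth orbit (~400 km, roughly ISS altitude) vs. geosynchronous altitude
v_leo = circular_orbit_speed(400e3)      # ~7.7 km/s, matching the 7-8 km/s quoted
v_geo = circular_orbit_speed(35_786e3)   # ~3.1 km/s, "much less," as Johnson says

# Kinetic energy of an assumed 1-gram fragment at the ~10 km/s closing
# speed Johnson cites for impacts in low Earth orbit.
ke_fragment = 0.5 * 0.001 * 10_000**2    # = 50,000 J

print(f"LEO orbital speed: {v_leo / 1000:.1f} km/s")
print(f"GEO orbital speed: {v_geo / 1000:.1f} km/s")
print(f"1 g fragment at 10 km/s carries {ke_fragment / 1000:.0f} kJ, "
      "more than ten times the muzzle energy of a typical rifle bullet.")
```

Fifty kilojoules in a paint-fleck-to-bolt size range is why shielding thresholds and tracking limits, rather than armor, drive the design trades Johnson describes.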


Dr. Alexander Kashlinsky, Senior Staff Scientist, SSAI, Goddard Space Flight Center, Greenbelt, MD

Dr. Alexander Kashlinsky is a principal investigator on several NASA and NSF grants studying topics related to cosmological bulk flows, cosmic microwave and infrared background radiation, and early stellar populations. Using the Wilkinson Microwave Anisotropy Probe, Kashlinsky recently discovered a phenomenon called “dark flow”: clusters of galaxies moving at a constant velocity toward a 20-degree patch of sky between the constellations Centaurus and Vela.

NASA Tech Briefs: You have a PhD in astrophysics from Cambridge University in England, and your area of expertise at NASA is observational cosmology. What prompted you to pursue a career in this field?

Dr. Alexander Kashlinsky: It actually started in my youth from reading too much science fiction, which I no longer do. I distinctly remember how it was triggered. I picked up a book from the shelf by Stanislaw Lem, called “The Magellanic Cloud,” which was about the first interstellar travel, and it conquered my mind at the time, but I ended up in astrophysics and not traveling to the stars. Later, when I was doing my PhD, I was very privileged to work with Martin Rees, who is a very inspirational scientist, and that triggered my interest in astronomy and particularly in cosmology. He was very open-minded, and very interested in completely different ideas, which I found very stimulating and very inspiring. The rest is history.

NTB: Several years ago you were part of a team that succeeded in isolating the energy radiated by the first stars formed after the Big Bang, called Population III, from all other energy that makes up the cosmic infrared background. What did you learn from that breakthrough?

Kashlinsky: What we did at the time was analyze very deep data available thanks to the Spitzer Space Telescope, trying to find how much diffuse radiation is left after we removed the various contributions that we can isolate in the images. What we learned is that the residual diffuse background – the so-called cosmic infrared background radiation – has quite a bit of energy emitted from sources that are much too faint to be detected, even in deep Spitzer exposures. That most likely means that these sources are very far away, because we removed galaxies down to a very faint level, that is, very far away. They had very little time to radiate all this very substantial energy that we detected and, therefore, they had to be quite abundant and they had to be radiating at enormous rates compared to typical populations living today. That, in our opinion, meant these populations were dominated by very massive stars, or very massive black holes, that lived very short times, but each unit of their mass emitted so much more energy than present-day stars – such as the sun – that they had to produce this signature. What is important in this context is not only what we learned, but what we did not learn. With the current data we could not learn whether these sources were stars that emit their energy by converting hydrogen into helium, or massive black holes that existed in very early times and emitted energy by accretion processes, by gas falling into them and emitting energy in the process.

NTB: More recently, using NASA’s Wilkinson Microwave Anisotropy Probe, you discovered a phenomenon you refer to as “dark flow.” What is dark flow?
Kashlinsky: What we set out to measure was the so-called peculiar velocities of clusters of galaxies, which are deviations from the uniform expansion of the universe. We never expected to find what we found at the end. We designed a method several years ago to probe the expected – within the standard cosmological models – peculiar velocities. The trick was to use many, many clusters of galaxies, whereby you detect a very faint signal by beating down the noise. So we teamed up with colleagues at the University of Hawaii who assembled this X-ray cluster catalogue, and applied that method to the Wilkinson Microwave Anisotropy Probe, and we were very surprised by the results. We found a flow that does not decrease with distance as far as we could tell, and we could probe to several billion light years away from us. It was roughly constant in amplitude, whereas in the standard cosmological model you expect that it should have been decreasing linearly with increasing scale. That is, as you go from, say, a few hundred million light years to a few billion light years, it should decrease by an order of magnitude. We did not find that. We found a more or less constant velocity all the way, as far as we can probe. The reason we called it dark flow is that the matter distribution in the observed universe, which is very well known from galaxy surveys and from cosmic microwave background anisotropy measurements, cannot account for this motion. So this is why we suggested that if this motion already extends so far, then it probably goes all the way across the observable universe to the so-called cosmological horizon, and it is caused by the matter inhomogeneity – or, I should say, space-time inhomogeneity – at very large distances well beyond the cosmological horizon, which is about 40 billion light years away from us.

NTB: It’s been theorized that this dark flow may somehow be related to inflation, the brief hyper-expansion of the universe that occurred shortly after the Big Bang. Can you explain that relationship to us?

Kashlinsky: Yes. What we suspect is happening is the following. Inflation was designed, if I’m not mistaken in the early 1980s, to explain why the universe we see around us is homogeneous and isotropic. It is homogeneous – it’s roughly the same on all scales – and isotropic – it’s roughly the same in every direction. Now, the way inflation works is as follows. It says that at some very early time the universe, or the underlying space-time, was not homogeneous. What happened then was that there was some bubble, a very tiny bubble of space-time, which, by pure chance, happened to be homogeneous by a purely causal process, and then, because of the various high-energy processes in the early universe, this bubble, along with the rest of the space-time, expanded by a huge amount. We, today, live inside a tiny part of that original homogeneous bubble and we, therefore, see the universe around us as homogeneous and isotropic, because the scales of inhomogeneities that are other bubbles have been pushed away very, very far. What it means, at the same time, is that the original space-time was not homogeneous. If we go sufficiently far away, we should see the remnants of the pre-inflationary structure of the universe, of the space-time.
These remnants would cause a very long wavelength wave across our universe, and because there would be a gradient in this wave from one edge of the universe to the other – or from one edge of the cosmological horizon to the other – we would see a certain tilt, or the matter would be flowing from one edge to the other. The analogy I could think of is as follows. Suppose you are in the middle of a very quiet ocean and you see the horizon, which determines how far you can see. As far as you can see, the ocean is isotropic and homogeneous. You would then think, at first, that the entire universe is just like what you see locally, that it’s homogeneous and isotropic like your own horizon. But then, inside that ocean, you discover a very faint stream from one edge of the horizon to the other... a flow. From the existence of that flow, you could deduce that somewhere very far away there should be structures that are very different from what you see locally. There should be mountains for this flow to flow from, or some ravines for this flow to fall into. So that would give you a probe of the underlying very large-scale structure of your universe – or space-time, or what some today call the multiverse: it is not just like what you see locally; sufficiently far away, your space-time is very different from what you see here. So, in that sense, it’s very much in agreement with the underlying inflationary paradigm that the initial space-time was very inhomogeneous, and we just happen to live inside a very homogeneous and isotropic bubble, but if we were to go very, very far away, we should be able to see such inhomogeneities.

NTB: The galaxy clusters that make up this dark flow are rapidly moving toward a 20-degree patch of sky between the constellations of Centaurus and Vela. Why there, and do we know what’s attracting them?

Kashlinsky: Our limit on the 20-degree patch is purely due to observational error. If we were to make this measurement with, say, an infinite sample of clusters of galaxies and infinitely noiseless cosmic microwave background data, we presumably would measure just one uniform direction. Why there? It’s by pure chance. It just happens to flow in that particular direction. As for what’s attracting them, we know that such a flow cannot be generated by the matter distribution inside the observable universe, inside the universe that we observe. So we therefore concluded that it must be something else very, very far away from us that is attracting them.

NTB: What impact, if any, does the discovery of dark flow have on our understanding of the universe and how it works?

Kashlinsky: What it tells us is that what we call, today, the universe is part of the overall cosmos, the overall space-time, whose structure is very different from what we see locally. There are various issues of terminology here. At first, people would think that the universe is essentially all the space-time there is and, by definition, all that there is in it. Today, people start talking in terms of the multiverse, and the multiverse is then composed of various universes such as our own — that is, our own cosmological horizon, or our own bubble in the terms of this inflationary language. But there could be various other universes in this multiverse, in this landscape in which we live.
So, in that sense, what these measurements may imply is that our universe is just one of many, and others may be very different from ours, and that there is an underlying multiverse in which these universes exist. So, if you would, it could imply an ultimate Copernican principle. It could generalize it, ultimately: not only is our planetary system one of many, and our planet one of many, but our universe may be just one of many.

NTB: One of the projects you’re currently working on at NASA is called “Studying Fluctuations in the Far IR Cosmic Infrared Background with COBE FIRAS Maps.” Tell us about that project and what you hope to accomplish with it.

Kashlinsky: This project and the group of us working on it — it’s myself, Dave Fixsen, and John Mather here at Goddard Space Flight Center — is designed to measure the structure of the cosmic infrared background radiation at far infrared bands. Why COBE FIRAS? FIRAS is the Far Infrared Absolute Spectrophotometer that was launched onboard the COBE satellite, the satellite that discovered cosmic microwave background structure. That instrument measured the spectrum of the cosmic microwave background radiation and determined that it is a blackbody spectrum, down to almost one part in 1,000,000. But it also gave us very useful maps to work with for other parts of science. It measured all-sky maps at various far infrared wavelengths. Because we know the spectrum of the cosmic microwave background radiation, we can remove it from these maps very well. Then, if we’re lucky — and by lucky I mean if we can remove other foregrounds, such as our own galaxy, sufficiently well — we can determine how much is produced by distant galaxies at far infrared wavelengths. That will give us very important cosmological information as to how these galaxies lived when they produced these emissions, how much of these emissions they produced, and so on and so forth. This project is just beginning, so I don’t know what our results will be, but the hope is to isolate the fluctuations in the cosmic infrared background radiation after subtracting the cosmic microwave background radiation from the FIRAS maps.

NTB: You mentioned that one of your co-investigators on this project is Dr. John Mather, the 2006 Nobel Laureate in physics. Do you ever find yourself dreaming of one day possibly winning a Nobel Prize, or is that something scientists don’t really think about until it happens?

Kashlinsky: Oh, I think it’s the latter. It just doesn’t cross the mind, I would say, of most scientists, because you are so busy trying to understand whether the results you are measuring are real, what the systematics are, what the statistical significance is, whether you have been fooled by various other processes that you have not accounted for, that it doesn’t leave you much time for such thoughts. So no, I don’t spend time thinking about it. And once you produce results, you really are worried whether these are real results, whether they can be maintained by future measurements, and you should always seek confirmation of these results. So no, there’s not much time to think about that.

NTB: What are some of the other significant projects you’re either working on, or anticipate working on, in the future?

Kashlinsky: It’s a very fortunate era now in the field of cosmology.
I remember when I was starting my PhD, there was very little data to go by and there were many ideas, but the theoretical part of the field was also not particularly developed, as I look back at it now. Slowly but surely, theoretical understanding developed and then, what’s even more important, in the last ten or fifteen years there has been an explosion in the data – high quality data – obtained in this field. This data comes from various space observatories, or satellites, such as the COBE satellite. It was a very important point in cosmology, and it was reached also thanks to the new generations of ground telescopes that can see very far with very high resolution and very low noise. So, today you have a lot of data that can really constrain your understanding of the theoretical issues of the universe, and these data come at various wavelengths. For instance, in terms of cosmic microwave background measurements, there was COBE, then there was WMAP (Wilkinson Microwave Anisotropy Probe), which is still operating. It’s a superb instrument. And the Europeans are going to launch, this spring, the successor to WMAP, called the Planck satellite, which should bring a lot of new cosmic microwave background radiation data over a very wide range of frequencies, with very low noise and with fairly good angular resolution. That is one of the projects we’re thinking to do with the dark flow studies; we want to try it with the Planck data. At other wavelengths there is the Spitzer satellite, which is still operating. It is now about to begin its so-called warm mission, because it has run out of cryogens, so it has been extended for the warm mission, and it should still bring some very important data for understanding distant populations and the cosmic infrared background radiation emitted by them. You can also go to a completely unexpected range of wavelengths, or energies. At very high energies there is the now-operating GLAST satellite (renamed Fermi), the successor to the Compton Gamma Ray Observatory, which is going to map the universe very well at gamma ray wavelengths and find a lot of distant gamma ray sources, gamma ray bursts, and so on. This would also be important in terms of studying early stellar populations because – and this is one of the projects I hope to do with the data – you should see a very distinct cutoff in the spectrum of gamma-ray sources (bursts and blazars) at very large distances. This cutoff is produced by the cosmic infrared background from very early sources, such as Population III stars or the first black holes. The energy that these sources emit, which reaches us in the infrared band, contains a lot of photons, and the very high-energy photons produced by these gamma ray bursts would travel through this sea of IR photons – the cosmic infrared photons produced by the first stars – and get absorbed at sufficiently high energies by the so-called photon-photon absorption process. So, you should see a certain spectral feature that would tell you: yes, this is the epoch where these first stars lived. Maybe they lived for the first hundred million years, maybe they lived for the first two hundred million years, and so on. You should be able to see the feature, if they produced enough energy. And, of course, there are preparations for science that can be done with the James Webb Space Telescope, the JWST, which is going to be launched four or five years from now.
That would be a successor to Hubble, but it also measures the universe in infrared bands, so it would see very far. It would see at completely different wavelengths, and it would bring a lot of data and probably revolutionize our understanding of the evolution of the universe.

For more information, contact Dr. Alexander (Sasha) Kashlinsky at Alexander.kashlinsky@nasa.gov.
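For readers who want the quantitative version of the expectation Kashlinsky contrasts with his result, linear perturbation theory predicts the rms bulk velocity of matter inside a sphere of radius R from the matter power spectrum. The formula below is the standard textbook result, offered here as a reference sketch rather than something quoted in the interview:

```latex
% rms bulk flow within radius R in linear theory (standard result):
%   P(k)  - matter power spectrum
%   W(kR) - Fourier transform of the survey window function
\sigma_v^2(R) \;=\; \frac{H_0^2 \, f^2(\Omega_m)}{2\pi^2}
    \int_0^{\infty} P(k)\, \widetilde{W}^2(kR)\, \mathrm{d}k,
\qquad f(\Omega_m) \simeq \Omega_m^{0.55}
```

Because the integral is dominated by wavenumbers k of order 1/R and below, the predicted velocity falls off roughly as 1/R on large scales. That is the order-of-magnitude decline per factor-of-ten in scale that Kashlinsky describes, and that the measured dark flow, remaining roughly constant out to several billion light years, does not show.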


Dr. Drake Deming, Senior Scientist, Solar System Exploration Division, Goddard Space Flight Center

Dr. Drake Deming, former Chief of Goddard Space Flight Center’s Planetary Systems Laboratory, currently serves as Senior Scientist with NASA’s Solar System Exploration Division, where he specializes in detecting and characterizing hot Jupiter extrasolar planets. Dr. Deming was named recipient of the 2007 John C. Lindsay Memorial Award, Goddard’s highest honor for outstanding contributions in space science, for his work in developing a way to detect light from extrasolar planets and use it to measure their temperatures.

NASA Tech Briefs: What is the Solar System Exploration Division’s primary mission within NASA and what types of projects does it typically handle?

Drake Deming: Our primary mission is to study planetary science in the context of NASA’s space mission program. In this case planetary science also includes planets orbiting other stars.

NTB: You began your career in education, teaching astronomy at the University of Maryland. What lured you away from academia to pursue a career with NASA?

Deming: Research, and the opportunity to do cutting-edge space-based research.

NTB: Had you always planned to move in that direction, or was it after you had started your career that NASA entered the picture?

Deming: I had always planned to move into research.

NTB: Much of your research over the years has focused on trying to detect and characterize so-called “hot Jupiter” extrasolar planets. What are “hot Jupiter” planets, and what can we learn from them?

Deming: Hot Jupiters are giant planets, like Jupiter in our own solar system, but they’re in much closer to their stars. Not only are they much closer than Jupiter in our solar system is, but they’re much closer even than our own Earth. We can learn quite a bit from them. Because they’re in so close to the star, they’re subject to strong irradiation from the star, so the dynamics of their atmospheres are very lively and the circulations are very strong, and we can learn about the physics of their atmospheres. Also, they are subject to tremendous forces from the star; they’re subject to tidal forces, which may play a role in inflating their sizes. So, we can learn about their internal structure, and we can learn a lot about planets from studying hot Jupiters because they’re an extreme case.

NTB: Why are extrasolar planets so hard to detect?

Deming: They’re so hard to detect because, so far, we cannot spatially resolve them from their parent stars, so we have to study them in the combined light of the planet and the star. That means it’s a small signal riding on top of a large noise source – in this case, the star.

NTB: You are the principal investigator on a program called EPOCh, which stands for Extrasolar Planet Observations and Characterization. Tell us about that program and what you hope to accomplish with it.

Deming: Well, we have just concluded our observing with EPOCh. We have over 170,000 images of planet-hosting stars, and when we get these images we don’t resolve the planet from the star. We use the images to do precise photometry. These are bright stars that have planets that transit in front of them, and the geometry of the transit tells us quite a bit about the planet. It tells us the radius. We can examine the data to see whether it has rings or moons. We can look for other, smaller planets in the system that may transit. And in favorable cases our sensitivity extends down to planets the size of the Earth, so we’re searching for smaller worlds in these systems.
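Deming’s point that “the geometry of the transit tells us quite a bit” rests on simple photometry: the fractional dip in starlight equals the sky-projected area ratio of planet to star. A minimal sketch, using standard reference radii rather than numbers from the interview:

```python
# Transit depth: delta = (R_planet / R_star)^2
R_SUN = 6.957e8      # solar radius, m
R_JUPITER = 7.149e7  # Jupiter equatorial radius, m
R_EARTH = 6.371e6    # Earth mean radius, m

def transit_depth(r_planet: float, r_star: float = R_SUN) -> float:
    """Fractional drop in stellar flux while the planet crosses the stellar disk."""
    return (r_planet / r_star) ** 2

print(f"Jupiter-size planet, Sun-like star: {transit_depth(R_JUPITER):.2%}")  # ~1.06%
print(f"Earth-size planet, Sun-like star:   {transit_depth(R_EARTH):.4%}")   # ~0.0084%
```

The roughly one-percent Jupiter-size dip is the “largest signal” Deming mentions later in the interview; an Earth analog produces a dip more than a hundred times shallower, which is why EPOCh needs such precise photometry to search for smaller worlds.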
NTB: You’re also the Deputy Principal Investigator for a mission called EPOXI…

Deming: Yes. That’s the same as EPOCh. EPOXI is a combination of EPOCh and DIXI (Deep Impact eXtended Investigation). DIXI is the component of EPOXI that goes to Comet Hartley 2, and that’s now ramping up because EPOCh is finished.

NTB: What is that mission designed to accomplish?

Deming: Well, I’m not involved in that, but it’s designed to image a comet nucleus. It will use the imaging capability of the Deep Impact flyby spacecraft to image another comet. The original Deep Impact mission released an impactor into a comet nucleus and actually blew a crater in it. Of course, the impactor is no longer available to us because it was used up, but still, a tremendous amount can be learned by imaging another comet nucleus for comparative purposes.

NTB: That was the Tempel 1 comet, right?

Deming: That was the Tempel 1 comet.

NTB: EPOXI spent most of the month of May observing a red dwarf star called GJ436, located just 32 light years from Earth, which has a Neptune-size planet orbiting it. What did you learn from those observations?

Deming: We’re still very intensely analyzing those data, but what we hope to learn is whether there’s another planet in the system. In this case our sensitivity extends down to Earth-sized planets, so we’re looking for another planet that may have left a small signature in the data as it transited the star. If we can find that, there’s a good chance that that planet might even be habitable. Because our period of observation extends for more than 20 days, and because this is a low-luminosity red dwarf star, the habitable zone in that system is in close to the star, where the orbital periods are on the order of 20 days. So we have sensitivity to planets in the habitable zone in this case. Of course, those data have our highest priority and we’re inspecting them very intensely. However, the data analysis process is very involved. We have a lot of sources of spacecraft noise that we have to discriminate against.

NTB: In July 2008, NASA’s Deep Impact spacecraft made a video of the Moon transiting – or passing in front of – the Earth from 31 million miles away. Why did that video generate so much excitement within the scientific community?

Deming: Well, I think it generated a lot of excitement both in the scientific community and outside of the scientific community because it’s really, I think, the first time that we’ve seen the Earth/Moon system from that particular perspective, where you see the Moon transit in front of the Earth. And as we analyze those data, there’s also a realization that if the same thing were to occur for a planet orbiting another star – a relatively low-probability event – and its moon transited in front of it, we could learn about the topography of the planet.

NTB: Do you think we’ll learn anything new about Earth from that video?

Deming: I think we’ll learn new things about the Earth as a global object, as an astronomical object. For example, one of the things we should start prominently seeing in the data is the sun glint from the Earth’s oceans. This has been hypothesized as a way to detect oceans on planets orbiting other stars, because that glint would be polarized. Although we don’t have any polarization capability, we can see that the glint sometimes becomes dramatically brighter, and we’re trying to understand why that is.
It may be because the glint is a specular reflection, probably from the Earth’s oceans, so by correlating the brightening of that glint with, for example, winds across the oceans and wave heights, we may find that smooth patches of ocean give us a particularly strong glint. So, if the glint were observed on an extrasolar planet, we could then infer from a variable polarization signal the presence of the glint and the presence of oceans.

NTB: It was discovered some time ago that Deep Impact’s high-resolution camera has a flaw in it that prevents it from focusing properly. This was a problem during its original mission, when it was trying to study a crater made by an impactor on the comet Tempel 1, but you were somehow able to use that to your advantage to study planets passing in front of their parent stars. Can you explain how that works?

Deming: That’s a big advantage for us because we’re not trying to image the planet. We’re only measuring the total photometric signal, so when the planet passes in front of the star we see a dip in intensity. That dip in intensity is, like, one percent. We’re doing very precise photometry, so that one-percent dip is actually the largest signal we see. We’re actually looking for much smaller dips due to smaller planets. Well, in order to measure that dip very precisely, we have to get a very precise photometric measurement, which means we need to collect a lot of light from the star. If we didn’t have the defocus, all of that light would be falling on one or two small pixels of the detector and they would immediately saturate. We’d have to constantly be reading them out and it just wouldn’t be practical. But by having a defocused image, we can spread the light over many pixels and use them to collect more light in a given readout. For each readout we collect many more photons from the stars.

NTB: Among your many accomplishments at NASA, you developed a way to detect light from extrasolar planets and use that light to measure their temperature. Can you explain how that technique works?

Deming: This was an observation with Spitzer. And it was also done concurrently by Professor David Charbonneau at Harvard. The Spitzer Observatory wasn’t really developed for that purpose. We just found that it was particularly capable of that application. What we did was observe the systems that had transiting planets in the infrared, where the planet is a significant source of radiation, and we waited until the planet passed behind the star – and we could calculate when that would be – and then we saw a dip in the total radiation of the system. Since we knew the planet was passing behind the star at that time, the magnitude of that dip tells us the magnitude of the light from that planet. So, in that way we were able to measure the light from extrasolar planets. This has become a big topic of research for Spitzer. Spitzer has done this for many planets over many wavelength bands. It has been able to reconstruct, in kind of a crude way – but even crude measurements of planets orbiting other stars are very revealing and important – the emission spectrum of some of these worlds orbiting other stars.

NTB: In 2007 you won the John C. Lindsay Memorial Award, Goddard’s highest honor for outstanding contributions in space science. What does it mean to you to have your name added to such a distinguished list of scientists?

Deming: Well, of course, I was very honored to receive this award.
I was also very surprised, because I had no indication, no hint, that this was coming.

NTB: Nobody tipped you off?

Deming: Nobody tipped me off. It was a complete surprise. I think the award speaks not to my own personal accomplishment but to the success of the NASA missions that enabled the measurements, and to all the people who designed and built the Spitzer Observatory. It wouldn’t have been possible to make those measurements without that facility.

For more information, contact Dr. Drake Deming at leo.d.deming@nasa.gov.
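As a closing illustration of the secondary-eclipse technique Deming describes, waiting for the planet to pass behind the star and measuring the dip in the system’s infrared output: the eclipse depth is roughly the planet-to-star flux ratio. A sketch under assumed, illustrative values; the temperatures, radius ratio, and waveband below are not from the interview:

```python
import math

H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance B(lambda, T)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

# Assumed hot-Jupiter system: star at 5800 K, planet at 1500 K,
# planet/star radius ratio ~0.1, observed at 8 microns (a Spitzer IRAC band).
wavelength = 8e-6
depth = (0.1 ** 2) * planck(wavelength, 1500.0) / planck(wavelength, 5800.0)
print(f"Eclipse depth at 8 um: {depth:.3%}")  # of order 0.1-0.3 percent
```

A dip of a few tenths of a percent, appearing only while the planet is hidden, is exactly the kind of signal the technique isolates; its size at different wavelengths is what lets the planet’s temperature and crude emission spectrum be reconstructed.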


Glenn Rakow, SpaceWire Development Lead, Goddard Space Flight Center

Glenn Rakow is the Development Lead for SpaceWire, a high-speed communications protocol for spaceflight electronics originally developed in 1999 by the European Space Agency (ESA). Under Rakow’s leadership, the SpaceWire standard was developed into a network of nodes and routers interconnected through bidirectional, high-speed serial links, making systems more modular, flexible, and reusable. In 2004 Rakow was named the recipient of Goddard’s James Kerley Award for his innovation and contributions to technology transfer.
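One way to picture the node-and-router architecture described above: in SpaceWire path addressing (one of the routing schemes the standard defines), the leading byte of a packet selects the output port of the next router and is deleted as the packet passes through, so the cargo arrives at its destination with the route consumed. A toy sketch of that idea, with an invented three-hop route; this is illustrative Python, not flight software:

```python
# Toy model of SpaceWire path addressing: each leading address byte
# selects an output port on the next router and is stripped off as the
# packet passes through ("header deletion"). Port numbers are invented.

def hop(packet: bytes) -> tuple[int, bytes]:
    """One router: read the lead byte as the output port, strip it, forward the rest."""
    return packet[0], packet[1:]

packet = bytes([3, 1, 2]) + b"telemetry"   # route: port 3, then port 1, then port 2
while packet and packet[0] < 32:           # bytes 0-31 are path addresses in the standard
    port, packet = hop(packet)
    print(f"router forwards packet via output port {port}")
print(f"destination node receives cargo: {packet!r}")
```

The design choice this illustrates is why SpaceWire routers can be small and fast: each hop needs only the first byte to make its forwarding decision, with no global routing state required. (The standard also defines logical addressing, in which bytes 32-255 index a routing table instead of being deleted.)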


Dr. Woodrow Whitlow Jr., Director, John H. Glenn Research Center, Cleveland, OH

As Director of NASA’s John H. Glenn Research Center in Cleveland, Ohio, Dr. Woodrow Whitlow Jr. controls an annual budget of approximately $650 million and manages a workforce of 1,619 civil service employees supported by 1,754 contractors working in more than 500 specialized research facilities.


Garrett Reisman, Astronaut, NASA Johnson Space Center, Houston, TX

In March 2008, astronaut Garrett Reisman flew aboard the Space Shuttle Endeavour to the International Space Station, where he spent 95 days living and working in space. After performing his first spacewalk to help install the Space Station’s new robotic manipulator, called Dextre, he returned to Earth in June aboard the Space Shuttle Discovery.

NASA Tech Briefs: You began your professional career at TRW as a spacecraft guidance, navigation, and control engineer. How did you go from there to becoming an astronaut? Did you approach NASA, or did they recruit you?

Garrett Reisman: No, NASA doesn’t really do any recruiting. We have more people applying than we have spots available, so I definitely applied to them. It was something I always dreamed about doing, since I was a little kid, but I didn’t really get serious about applying or anything like that until I was almost finished with college and I realized that this was something that was within the realm of possibility. When I was in grad school at Caltech I put in my application, and there was a requirement for a couple of years of related work experience. So I thought, well, maybe a couple of years of being a graduate student would count for that. I sent in my application then and I didn’t get too far, but I got farther than I thought I would. I thought NASA was probably going to laugh at me for not having the field work requirements, but it worked out okay. The second time was when I was with TRW, and that time I made it all the way through the application process, and through the interview, and I got selected.

NTB: In June 2003 you participated in a two-week training exercise called NEEMO, where you lived on the ocean floor 3.5 miles off the coast of Key Largo, Florida, in an underwater laboratory called Aquarius. Describe for our readers what that experience was like and some of the challenges you faced.

Reisman: That was amazing! That was one of the most remarkable things – probably the most remarkable thing – I got to do, up until I blasted off in the Space Shuttle. We moved down there for two weeks and we were going outside making scuba dives almost every day, and we had these giant windows in our habitat so we could see all the fish outside. It was a really healthy reef where we were, so it was remarkable how much sea life there was outside the window. It was very good preparation for spaceflight because we use the same types of tools, and a lot of the experiments we did down there were the same things I ended up doing up on the Space Station. The food was the same. We tried to make it as similar to flying in space as possible. I even had the same commander that I would later fly with as part of our mission, so it was great preparation and also a fantastic experience.

NTB: You recently spent 95 days aboard the International Space Station, orbiting the Earth at a speed of 17,400 miles per hour. What is it like living in that environment?

Reisman: It’s tremendous fun, and probably, on a day-to-day basis, the most fun thing is being able to float. And you do float, but when you push off it’s more like flying. You kind of feel like a superhero. You can just jump off the floor, like Superman, and you keep going up and up and up until you hit something. It’s really a joy. Now, when I watch science-fiction movies and I see everybody walking around on spaceships, I wonder why they would deprive themselves of that joy of flying.

NTB: How difficult is it adjusting to weightlessness over an extended period of time?
Reisman: Over an extended period of time you get better and better at it. Initially you’re just flying about and you lose things and it can be kind of awkward. But over time you get much better at it. You get much more graceful with your motions and you get more efficient. You’re able to work more effectively and you just learn how to deal with it. Over time it actually gets better.

NTB: What would you say are some of the more unique challenges faced by astronauts living aboard the Space Station, aside from broken toilets, of course?

Reisman: One of the things about working in zero gravity is you can’t put anything down. That’s really an issue. Just think about trying to work on your car, because when we’re doing maintenance work on the Space Station it’s kind of like working on a car. Every time you unscrew a bolt, you can’t just put it down; you have to put it into a zip-lock bag, or tape it somewhere, or Velcro it to a wall. If you just let go of it, or you turn your back on it, it may be gone when you turn back around again, and good luck finding it, because it’s hard to find things up there. So that’s a unique challenge up there. It makes it very easy to lose stuff, and it takes a long time in the beginning until you get good at managing all the parts.

NTB: As part of your mission aboard the Space Shuttle Endeavour, I understand you performed your first spacewalk. What was that like?

Reisman: Well, it was the most extraordinary experience I had in the whole time I was there. At times I would describe it as a strange mix of the familiar and the outlandish. What I mean by that is, at times it felt just like training. We have this big pool here in Houston that we practice spacewalking in, and they do a great job of making it very realistic. So there were times I actually forgot that we were in space, because it felt just like training. I’d be looking right in front of me at what I was doing, and it felt just like I was in the pool during one of our training exercises, and then you look over your shoulder and you see the entire east coast of the United States, and that is very different from training. So it was kind of a rollercoaster ride between things that felt normal and things that felt completely abnormal.

NTB: A lot of people don’t realize that astronauts can’t simply don a spacesuit, exit the airlock, and go spacewalking. Preparations begin the night before with something called the “Campout Prebreathe Protocol” to prevent decompression sickness. Describe what that whole procedure entails.

Reisman: You’re right in saying that you can’t just go right out the door because, just like in scuba diving, you have to be careful. There are certain maximum times you can spend at certain pressures, and with scuba diving, if you come up to the surface too quickly, or after having stayed down for too long, you can get the bends. The same thing can happen to us. When we go outside it’s kind of like surfacing after a scuba dive, because you’re going from high pressure to low pressure, so to prepare for that you have to purge the nitrogen out of your bloodstream to make it safe. We do it in stages. We lock ourselves up in the airlock the night before and we reduce the pressure from 14.7 psi to 10.2 psi, and we stay at that overnight. As we do that, the nitrogen is slowly coming out of our blood. Then we get into our suits, and even before we put on the masks we breathe 100-percent oxygen. When we breathe 100-percent oxygen, we’re purging more and more nitrogen out of our blood.
When you get in your suit, you’re breathing 100-percent oxygen in the suit, and when you finally get down to around 3 or 4 psi in the suit, you’re ready. At that point you’re not going to get the bends.

NTB: One of the projects you worked on in space is a new Canadian-built robot called Dextre. Tell us about Dextre and what it’s designed to do.

Reisman: Dextre is a really neat robot that is designed to do basically the same kind of things that we do during a spacewalk. It has two arms, and it has a body, and it can pivot about its waist, and it can grab a box of equipment outside of the Space Station. It can unbolt it; it can put it away; and it can take a new box to replace it and bolt that into place. Those are the kinds of things it’s designed to do. It has its limitations as well, and we’re still working on exactly how we’re going to use it. I think in the future it’s going to be a very good helper. It will help make us more efficient during spacewalks, and we might be able to get the workspace set up before we get there. It can help us in that way.

NTB: There were some problems assembling Dextre, namely some stuck bolts and a power feed problem that could’ve prevented the robot’s heaters from operating properly. How serious were those problems, and how did the crew overcome them?

Reisman: Oh, they were very serious. First off, we had to figure out why it wasn’t getting power when we expected it to. Then we had to figure out how to work around that. The solution for the power problem was all worked out on the ground; we have some very smart people down here who figured out what to do. We managed to use the other robot’s arm to attach to it and connect cables to it, and through that it was able to get power through the other robot. Of course, once it got power we didn’t know… it might have been dead. We didn’t know if it could’ve stayed healthy in that cold space without any power, but as it turned out it’s a true Canadian and it did just fine with the cold. When we got power to it, it worked just like it was supposed to. When you have problems like that and you find ways to work around them, and you’re ultimately successful, that’s one of the most fulfilling things that can happen to you as an astronaut.

NTB: While in space you had the honor of throwing out a ceremonial first pitch for your beloved New York Yankees when they played the Boston Red Sox on April 16. On August 26 you again threw out the first pitch, this time in person at Yankee Stadium when the Bombers faced the Sox. Which pitch was the bigger thrill for you?

Reisman: Well, I’ve got to say I was certainly a lot more nervous about being there, because it was easy to throw that pitch in space. I didn’t have to worry about bouncing it. It was pretty easy to throw a strike. But coming back down to Earth and dealing with gravity again, I was worried that my arm might not quite be in shape to throw a good strike. But being present at the stadium, in person, with all the fans, that was overwhelming. That was a dream come true for me.

For more information, contact Katherine Trinidad, NASA Public Affairs, at Katherine.trinidad@nasa.gov.
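The campout protocol Reisman walks through is, at bottom, bookkeeping on nitrogen partial pressures: dissolved nitrogen comes out of solution when ambient pressure drops faster than the tissues can off-gas. A simplified sketch of that bookkeeping; the 0.79 nitrogen fraction and 4.3 psi suit pressure are textbook values, and the halved post-prebreathe tissue load and the risk-ratio threshold are illustrative assumptions, not NASA flight rules:

```python
# Nitrogen partial pressures across the stages Reisman describes.
N2_FRACTION = 0.79     # nitrogen fraction of ordinary cabin air
SUIT_PRESSURE = 4.3    # psi, approximate EMU suit pressure ("3 or 4 psi" in the interview)

stages = {
    "cabin air, 14.7 psi":          14.7 * N2_FRACTION,   # ~11.6 psi N2 in tissues
    "airlock campout, 10.2 psi":    10.2 * N2_FRACTION,   # ~8.1 psi after overnight stay
    # Hours of breathing 100% oxygen drive tissue nitrogen far lower;
    # assume (purely for illustration) half the campout value remains.
    "after O2 prebreathe (assumed)": 0.5 * 10.2 * N2_FRACTION,
}

for stage, tissue_n2 in stages.items():
    # Decompression stress scales with tissue N2 over suit pressure;
    # ratios well above ~1.6 are commonly treated as risky (assumed threshold).
    ratio = tissue_n2 / SUIT_PRESSURE
    print(f"{stage:32s} tissue N2 ~ {tissue_n2:4.1f} psi, ratio ~ {ratio:.2f}")
```

Run through the numbers and the point of the staging is clear: going straight from cabin air to suit pressure gives a ratio near 2.7, while the overnight campout plus the oxygen prebreathe brings it down toward 1, which is why the crew can then work at suit pressure without getting the bends.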


Robert W. Moorehead, Director of Space Flight Systems, John H. Glenn Research Center

Robert W. Moorehead served as NASA’s chief investigator for the Space Shuttle Challenger accident in 1986 and managed the Space Station Freedom program from 1989 to 1993. He has also held the title of NASA’s Chief Engineer, developing system architectures for the Space Shuttle’s replacement. He is currently Director of Space Flight Systems at the John H. Glenn Research Center in Cleveland, Ohio.
