Who's Who at NASA

Gary Martin, Director, New Ventures & Communications Directorate, Ames Research Center, Moffett Field, CA

Gary Martin began his career with NASA in the Microgravity Sciences and Applications Division in 1990, where he served as Branch Chief for Advanced Programs from 1992 to 1994 and as Deputy Director from 1994 to 1996. In 2002 he was named NASA’s first – and, as it turned out, only – space architect. Martin currently heads the New Ventures & Communications Directorate at Ames.

Posted in: Who's Who

Dr. Gerard Holzmann, Senior Research Scientist at the Laboratory for Reliable Software, NASA’s Jet Propulsion Laboratory

After a 23-year career at Bell Labs, Dr. Gerard Holzmann joined NASA’s Jet Propulsion Laboratory in 2003 to help create the Laboratory for Reliable Software (LaRS), which he currently manages. Dr. Holzmann is credited with inventing the SPIN model checker for distributed software systems and a Method and Apparatus for Testing Event-Driven Software, as well as authoring The Power of 10: Rules for Developing Safety-Critical Code, and the groundbreaking book Beyond Photography – The Digital Darkroom.


Allen Parker, Systems Engineer, Advanced Structures and Measurement Group, Dryden Flight Research Center

Allen Parker is a systems engineer with expertise in the areas of fiber optics and data acquisition. He is currently part of the team that is developing and flight testing an innovative new fiber optic wing shape sensor system installed on the Ikhana unmanned aircraft system.


Dr. Scott Barthelmy, Research Scientist, Laboratory for High Energy Astrophysics, Goddard Space Flight Center, Greenbelt, MD

Dr. Scott Barthelmy is the principal investigator for the Burst Alert Telescope (BAT), a sophisticated instrument that detects and precisely locates elusive gamma-ray bursts in the universe. Developed as part of NASA’s Swift mission, the instrument technology is now being considered for a variety of homeland security applications because of its ability to pinpoint and identify nuclear materials – both legal and illegal – in transit or storage. Dr. Barthelmy also created the Gamma-ray Burst Coordinates Network (GCN) to distribute data collected on gamma-ray bursts to researchers throughout the world in real time.


Dr. Peter Shirron, Senior Research Scientist, Cryogenics and Fluids Group, Goddard Space Flight Center

Dr. Peter Shirron, a senior research scientist with NASA’s Cryogenics and Fluids Group, led the team of researchers credited with developing the first continuous duty multi-stage adiabatic demagnetization refrigerator (ADR) used to cool sophisticated space-borne detector arrays to temperatures below 2 Kelvin.


Nicholas Johnson, Chief Scientist and Program Manager for NASA’s Orbital Debris Program Office, Johnson Space Center

Nicholas Johnson is Chief Scientist and Program Manager for NASA’s Orbital Debris Program Office. In July 2008 he was awarded the Department of Defense Joint Meritorious Civilian Service Award for his contribution to Operation Burnt Frost, a mission that involved the interception and destruction of an out-of-control satellite before it could hit the Earth.

NASA Tech Briefs: When was the Orbital Debris Program Office established and what is its primary function?

Nicholas Johnson: The office was established in 1979, first to define the current and future orbital debris environment to support mission operations and spacecraft design, and also to develop orbital debris mitigation measures and policies.

NTB: Can you give us a little more detail about what that involves?

Johnson: Office personnel evaluate all NASA space programs and projects for compliance with agency orbital debris mitigation requirements. The office is also the lead for coordination and cooperation with other U.S. government departments and organizations in the field of orbital debris. As Chief Scientist for Orbital Debris, I also serve as the U.S. technical expert on space debris at the United Nations.

NTB: Exactly what is space debris?

Johnson: Space debris, primarily, is anything in Earth-orbit that no longer has a useful function. That could include a non-functional spacecraft, a derelict launch vehicle upper stage, fragmentation debris, paint flecks, anything you can think of.

NTB: Is space debris just manmade objects, or does it also include natural materials like meteoroids and things like that?

Johnson: Normally when we talk about orbital debris, we’re talking about manmade objects. Meteoroids are in orbit about the sun and we normally refer to them as the natural environment.

NTB: How does one become an expert in space debris? Is there a course of study you can recommend, or do you pretty much learn on the job?

Johnson: Actually, orbital debris is a very small scientific community. Within the U.S., NASA is the principal source of orbital debris expertise and is the only organization which actually characterizes the orbital debris population from the smallest debris — microns — to the largest, which can be tens-of-meters. Personally, I studied physics and astrophysics, but none of my formal education involved orbital debris. I think the vast majority of the folks who are in the field did learn on the job. There is actually one university in the United States — the University of Colorado, Boulder — that has awarded PhDs in orbital debris, but only about a half-dozen or so folks have made it through that course. I have been involved in orbital debris research for 30 years in support of a wide variety of U.S. government organizations.

NTB: When it comes to the amount of debris in space, I’ve heard all kinds of figures. I’ve heard that since the launch of Sputnik 1 back in 1957, there have been approximately 28,000 objects put into space, 9,000 of which are still in orbit, and only 6 percent are still operational. To make matters worse, we’re launching approximately 75 new spacecraft per year, adding to the potential problem. How does your office keep track of all this debris and make sure it doesn’t pose a threat to things like the International Space Station, the Hubble Space Telescope, and the Space Shuttle? And are those numbers I have even accurate?

Johnson: Well, they’re a little bit out of date. First off, NASA doesn’t maintain what’s called the U.S. Satellite Catalog. That’s performed by the Department of Defense. We certainly work closely with the DoD in a large number of space surveillance areas.

Since Sputnik 1 there have been more than 4,600 space missions launched from around the world that have successfully reached Earth orbit or beyond. The total number of objects which have been officially cataloged by the Department of Defense is now nearly 34,000, of which about 13,000 are still in Earth orbit. It turns out the Department of Defense is also tracking another 5,000 objects, which they know are out there, but they have not yet officially cataloged them. It’s more of an administrative issue. So, if you’re trying to find a final number of objects that we’re aware of — that we know where they are — it’s somewhere around 18,000.

NTB: How big are most of these objects and how fast are they normally traveling through space?

Johnson: These objects vary in terms of size from about 10 cm or so up to many tens-of-meters. That’s in terms of what can be tracked. Actually there are many smaller objects in orbit which DoD can’t track; they’re down to millimeter, or even micron, size. Their masses, of course, range anywhere from under a gram up to many metric tons. In low Earth orbit the speeds of orbital debris are 7–8 km/s; in geosynchronous Earth orbit the speeds are much lower.

NASA relies on the U.S. Space Surveillance Network to track objects larger than about 10 cm in low Earth orbit, up to 2000 km altitude. NASA is responsible for statistically defining the debris environment for smaller objects. Special ground-based radars, including the 70-meter-diameter radio telescope at Goldstone, CA, can detect orbital debris as small as a few millimeters. Returned spacecraft surfaces provide insight into the population of orbital debris smaller than 1 mm.
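The LEO and GEO speeds Johnson quotes follow directly from the circular-orbit velocity formula, v = sqrt(GM/r). A minimal sketch of that calculation (the specific altitudes used here are illustrative assumptions, not figures from the interview):

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude above Earth's surface, in m/s."""
    r = R_EARTH + altitude_m
    return math.sqrt(GM_EARTH / r)

# Typical LEO altitude (~400 km) vs. GEO altitude (~35,786 km)
v_leo = circular_orbit_speed(400e3)      # ~7.7 km/s
v_geo = circular_orbit_speed(35_786e3)   # ~3.1 km/s
print(f"LEO: {v_leo/1000:.1f} km/s, GEO: {v_geo/1000:.1f} km/s")
```

The result matches the interview: debris in low Earth orbit moves in the 7–8 km/s range, while geosynchronous objects travel at roughly 3 km/s.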

NTB: Should one of these objects impact the Space Station or a Space Shuttle, what kind of damage could it do?

Johnson: The damage could be negligible, mission-threatening, or catastrophic, depending upon the size of the debris and the location of impact. Debris 10 cm and larger has the potential for completely destroying a spacecraft and creating large amounts of new debris. The accidental collision of two intact spacecraft on 10 February 2009 resulted in the creation of more than 600 large pieces of debris. Debris smaller than 1 mm normally does not affect the operation of a spacecraft.

The Space Station is the most heavily protected vehicle ever launched. It can withstand hits by particles up to about 1 cm. The Space Shuttle is a little bit more vulnerable because of its nature and the fact that it has to conduct a reentry successfully. But these particles, if they were to strike either the Space Shuttle or the International Space Station, would typically hit at somewhere around a speed of 10 km/sec, so a very small particle could do a lot of damage.
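The damage potential of even a very small particle at the ~10 km/s closing speed Johnson cites comes down to kinetic energy, E = ½mv². A back-of-the-envelope sketch (the 1-gram mass is an illustrative assumption):

```python
def impact_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy of an impacting particle: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s**2

# A 1-gram particle at a 10 km/s closing speed
energy = impact_energy_joules(0.001, 10_000.0)
print(f"{energy:.0f} J")  # prints "50000 J"
```

Because energy scales with the square of velocity, a gram-sized fragment delivers tens of kilojoules, which is why shielding against anything larger than about 1 cm becomes impractical.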

NTB: What is currently being done to protect these craft from being damaged by space debris, and is there new technology being developed for the future that will provide even better protection?

Johnson: It depends on the vehicle. We’re trying to design robotic spacecraft more robustly. We can shield against particles as large as about 1 cm, although most robotic spacecraft don’t have quite that much shielding onboard.

The primary near-term protection for spacecraft is the limitation of new orbital debris. The U.S. and the international aerospace community have developed specific orbital debris mitigation measures. But when possible, the design of spacecraft can be improved to protect against particles up to about 1 cm. For particles larger than 10 cm, such as those tracked by the Space Surveillance Network, collision avoidance maneuvers are the primary protection. The entire Space Station maneuvered around a piece of debris just last year. For other countermeasures, it’s all in how you fly the vehicle. Debris normally comes from specific directions, so if you put your more sensitive components away from that direction, you’ve got a better chance of surviving.

NTB: Is NASA working on any new technology that will a) reduce the amount of space debris currently floating around out there, and b) prevent future missions from turning into more space debris?

Johnson: Well, we’re looking at “a,” but we can’t do that yet, and we certainly are doing “b” very, very well. What that means is that we’ve been looking at ways to remediate the space environment, but it turns out to be a significant technical challenge as well as an economic challenge. The International Academy of Astronautics is completing a comprehensive study of concepts for the remediation of the near-Earth space environment. When that study is completed, NASA will reevaluate debris removal proposals, but to date, no technique has been found to be both technically feasible and economically viable. It’s hard to find a way to go up and remove debris once it’s in orbit, so NASA and the international community have been focusing for the last 10 to 20 years on better operations and better vehicle designs so that we don’t create debris unnecessarily.

NTB: In July 2008 you received the Department of Defense Joint Meritorious Civilian Service Award for your contribution to Operation Burnt Frost, which was the interception and destruction of an out-of-control National Reconnaissance Office satellite known as USA-193 before it could impact the Earth. Tell us about Operation Burnt Frost and what role you played in it.

Johnson: Burnt Frost was the operation to try to mitigate the threat posed by a crippled Department of Defense spacecraft that contained hazardous material that was about to reenter the atmosphere, components of which would’ve struck the surface of the Earth. I served as the NASA representative to a large U.S. government interagency group charged with assessing the threat posed by the satellite to people on Earth and means of mitigating that threat. NASA contributed to the effort in a variety of ways. We verified that the spacecraft’s propellant tank, containing a large amount of frozen, hazardous hydrazine, would survive an uncontrolled reentry in the atmosphere, potentially exposing multiple people to injury or death. NASA also played a principal role in quantifying the probability of human casualty. In the event that an order was given by the President to engage the spacecraft prior to reentry, NASA evaluated the risk to the International Space Station, the Space Shuttle, and other NASA assets from the resultant, short-lived orbital debris. I worked on a daily basis with the interagency group, visiting U.S. Strategic Command in Omaha, meeting with the President’s Science Advisor, and attending a deputies’ meeting of the National Security Council in the White House. For my efforts, I was awarded the NASA Distinguished Service Medal by the NASA Administrator and the Joint Meritorious Civilian Service Award by the Chairman of the Joint Chiefs of Staff.

NTB: Was that the first time we had ever attempted to intercept a piece of space debris with a surface-launched missile?

Johnson: Yes, it actually was the first time we ever intercepted a piece of space debris. Back in 1985, the U.S. conducted its first and only test of an air-launched anti-satellite system against a Department of Defense satellite called Solwind, which was operational at the time; hence, it was not a piece of orbital debris.

NTB: I imagine this was a lot more challenging.

Johnson: It certainly was. Actually, six weeks prior to the engagement, the United States didn’t have the capability to engage the satellite. We had to completely reconfigure the hardware and the software to even make this possible.

NTB: Exactly what was entailed in doing that? How do you respond so quickly to something like that?

Johnson: It was a phenomenal operation. I really can’t give you the details, but you would be impressed by the dedication and the hard work that people from all over the country, from all of the services, and from the civilian community spent in making that possible.

NTB: I’m sure I would because most people think government agencies tend to get bogged down in bureaucracy. It’s actually quite a tribute that you were able to mobilize that quickly and solve the problem successfully.

Johnson: I’ve been working with the government for nearly 40 years and I’ve never seen people just throw the book out the window and get the job done as well as they did this time.

NTB: Finally, how much risk does space debris pose to people here on Earth?

Johnson: It’s really not that much of a risk. On the average, there’s about one cataloged object per day that reenters the Earth’s atmosphere. Most of it burns up in the atmosphere. Those things which may have surviving components typically fall into the water, or in some desolate region like Siberia, or the Canadian tundra, or the Australian Outback. No one has ever been hurt by any reentering debris.

For more information, contact Nicholas Johnson at Nicholas.l.johnson@nasa.gov.



Dr. Alexander Kashlinsky, Senior Staff Scientist, SSAI, Goddard Space Flight Center, Greenbelt, MD

Dr. Alexander Kashlinsky is a principal investigator on several NASA and NSF grants studying topics related to cosmological bulk flows, cosmic microwave and infrared background radiation, and early stellar populations. Using the Wilkinson Microwave Anisotropy Probe, Kashlinsky recently discovered a phenomenon called “dark flow”: clusters of galaxies moving at a constant velocity toward a 20-degree patch of sky between the constellations Centaurus and Vela.

NASA Tech Briefs: You have a PhD in astrophysics from Cambridge University in England and your area of expertise at NASA is observational cosmology. What prompted you to pursue a career in this field?

Dr. Alexander Kashlinsky: It actually started in my youth from reading too much science fiction, which I no longer do. I distinctly remember how it was triggered. I picked up a book from the shelf by Stanislaw Lem, called the “Magellanic Cloud,” which was about the first interstellar travel, and it conquered my mind at the time, but I ended up in astrophysics and not traveling to the stars. Later, when I was doing my PhD, I was very privileged to work with Martin Rees, who is a very inspirational scientist, and that triggered my interest in astronomy and particularly in cosmology. He was very open-minded, and very interested in completely different ideas, which I found very stimulating and very inspiring. The rest is history.

NTB: Several years ago you were part of a team that succeeded in isolating the energy radiated by the first stars formed after the Big Bang, called Population 3, from all other energy that makes up the cosmic infrared background. What did you learn from that breakthrough?

Kashlinsky: What we did at the time was analyze very deep data available thanks to the Spitzer Space Telescope, and we were trying to find how much diffuse radiation is left after we removed the various contributions that we can isolate in the images. What we learned is that the residual diffuse background – the so-called cosmic infrared background radiation – has quite a bit of energy emitted from sources that are much too faint to be detected, even in deep Spitzer exposures. That most likely means that these sources are very far away, because we removed galaxies down to a very faint level, that is, very far away. They had very little time to radiate all this very substantial energy that we detected and, therefore, they had to be quite abundant and they had to be radiating at enormous rates compared to typical populations living today. That, in our opinion, meant these populations were dominated by very massive stars, or very massive black holes, that lived for very short times, but each unit of their mass emitted so much more energy than present-day stars – such as the sun – that they had to produce this signature.

What is important in this context is not only what we learned, but what we did not learn. With the current data we could not learn whether these sources were stars that emit their energy by converting hydrogen into helium, or they were massive black holes that existed in very early times and that emitted energy by accretion processes, by gas falling into them and emitting energy in the process.

NTB: More recently, using NASA’s Wilkinson Microwave Anisotropy Probe, you discovered a phenomenon you refer to as “dark flow.” What is dark flow?

Kashlinsky: What we set out to measure in that measurement was the so-called peculiar velocities of clusters of galaxies, which are deviations from the uniform expansion of the universe. We never expected to find what we found at the end. We designed a method several years ago to probe the expected – within the standard cosmological models – peculiar velocities. The trick was to use many, many clusters of galaxies whereby you detect a very faint signal by beating down the noise. So we teamed up with colleagues at the University of Hawaii who assembled this x-ray cluster catalogue, and applied that method to the Wilkinson Microwave Anisotropy Probe, and we were very surprised by the results.

We found a flow that does not decrease with distance as far as we could tell, and we could probe to several billion light years away from us. It was roughly constant in amplitude, whereas in the standard cosmological model you expect that it should’ve been decreasing linearly with increasing scale. That is, as you go from, say, a few hundred-million light years to a few billion light years, it should decrease by an order of magnitude. We did not find that. We found a more or less constant velocity all the way as far as we can probe. The reason we called it dark flow is because the matter distribution in the observed universe, which is very well-known from galaxy surveys and from cosmic microwave background anisotropy measurements, cannot account for this motion. So this is why we suggested that if this motion already extends so far, then it probably goes all the way across the observable universe to the so-called cosmological horizon, and it is caused by the matter inhomogeneity, or, I should say, space-time inhomogeneity, at very large distances well beyond the cosmological horizon, which is about 40 billion light years away from us.

NTB: It’s been theorized that this dark flow may somehow be related to inflation, the brief hyper-expansion of the universe that occurred shortly after the Big Bang. Can you explain that relationship to us?

Kashlinsky: Yes. What we suspect is happening is the following. Inflation was designed, if I’m not mistaken, in the early 1980s, to explain why the universe we see around us is homogeneous and isotropic. It is homogeneous – roughly the same everywhere – and isotropic – roughly the same in every direction.

Now, the way inflation works is as follows. It says that at some very early time the universe, or the underlying space-time, was not homogeneous. What happened then was that there was some bubble, a very tiny bubble of space-time, which, by pure chance, happened to be homogeneous through a purely causal process, and then because of the various high-energy processes in the early universe this bubble, along with the rest of the space-time, expanded by a huge amount. We, today, live inside a tiny part of that original homogeneous bubble and we, therefore, see the universe around us as homogeneous and isotropic because the inhomogeneities that are the other bubbles have been pushed away very, very far. What it means, at the same time, is that the original space-time was not homogeneous. If we go sufficiently far away, we should see the remnants of the pre-inflationary structure of the universe, of the space-time. These remnants would cause a very long wavelength wave across our universe and because there would be a gradient in this wave from one edge of the universe to the other, or from one edge of the cosmological horizon to the other, we would see a certain tilt, or the matter would be flowing from one edge to the other.

The analogy I could think of is as follows. Suppose you are in the middle of a very quiet ocean and you see the horizon, which determines how far you can see. As far as you can see, the ocean is isotropic and homogeneous. You would then think, at first, that the entire universe is just like what you see locally, that it’s homogeneous and isotropic like your own horizon. But then, inside that ocean, you discover a very faint stream from one edge of the horizon to the other...a flow. From the existence of that flow, you could deduce that somewhere very far away there should be structures that are very different than what you see locally. There should be mountains for this flow to flow from, or some ravines for this flow to fall into. So that would give you a probe of the underlying very large-scale structure of your universe, or space-time, or what some today call the multiverse: it is not just like what you see locally; sufficiently far away, your space-time is very different from what you see here. So, in that sense it’s very much in agreement with the underlying inflationary paradigm that the initial space-time was very inhomogeneous, and we just happen to live inside a very homogeneous and isotropic bubble, but if we were to go very, very far away, we should be able to see such inhomogeneities.

NTB: The galaxy clusters that make up this dark flow are rapidly moving toward a 20-degree patch of sky between the constellations of Centaurus and Vela. Why there, and do we know what’s attracting them?

Kashlinsky: Our limit on the 20-degree patch is purely due to observational error. If we were to make this measurement with, say, an infinite sample of clusters of galaxies and infinitely noiseless cosmic microwave background data, we presumably would measure just one uniform direction measurement. Why there? It’s by pure chance. It just happens to flow in that particular direction. As for what’s attracting them, we know that such flow cannot be generated by the matter distribution inside the observable universe, inside the universe that we observe. So, we therefore concluded that it must be something else very, very far away from us that is attracting them.

NTB: What impact, if any, does the discovery of dark flow have on our understanding of the universe and how it works?

Kashlinsky: What it tells us is that what we call, today, the universe is part of the overall cosmos, the overall space-time, whose structure is very different than what we see locally. There are various issues of terminology here. At first, people would think that the universe is all the space-time there is and, by definition, contains all that there is. Today, people have started talking in terms of a multiverse, and the multiverse is then composed of various universes such as our own — that is, our own cosmological horizon, or our own bubble in the terms of this inflationary language. But there could be various other universes in this multiverse, in this landscape in which we live.

So, in that sense, what these measurements may imply is that our universe is just one of many and others may be very different from ours, and that there is an underlying multiverse in which these universes exist. So, if you would, it could imply an ultimate Copernican principle. It could generalize it, ultimately, that not only is our planetary system one of many, and our planet one of many, our universe may be just one of many.

NTB: One of the projects you’re currently working on at NASA is called “Studying Fluctuations in the Far IR Cosmic Infrared Background with COBE FIRAS Maps.” Tell us about that project and what you hope to accomplish with it.

Kashlinsky: This project — the group working on it is myself, Dave Fixsen, and John Mather here at Goddard Space Flight Center — is designed to measure the structure of the cosmic infrared background radiation at far-infrared bands.

Why COBE FIRAS? FIRAS is the Far Infrared Absolute Spectrophotometer that was launched onboard the COBE satellite, the satellite that discovered cosmic microwave background structure. That instrument measured the spectrum of the cosmic microwave background radiation and determined that it is basically a black-body spectrum, down to almost one part in 1,000,000. But it also gave us very useful maps to work with for other parts of science. It measured all-sky maps at the various far-infrared wavelengths. Because we know the spectrum of the cosmic microwave background radiation, we can remove it from these maps very well. Then, if we’re lucky — and by lucky I mean if we can remove other foregrounds, such as our own galaxy, sufficiently well — we can determine how much is produced by distant galaxies at far-infrared wavelengths. That will give us very important cosmological information as to how these galaxies lived when they produced these emissions, how much of these emissions they produced, and so on and so forth. This project is just beginning, so I don’t know what our results will be, but the hope is to isolate the fluctuations in the cosmic infrared background radiation after subtracting the cosmic microwave background radiation from the FIRAS maps.
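The black-body spectrum FIRAS confirmed is described by Planck’s law, and for the measured CMB temperature of about 2.725 K it peaks near 160 GHz. A sketch of that calculation (the temperature value is the accepted FIRAS-era figure, not a number quoted in the interview):

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck_bnu(nu_hz: float, temp_k: float) -> float:
    """Planck's law: spectral radiance B_nu of a black body, in W / (m^2 Hz sr)."""
    return (2 * H * nu_hz**3 / C**2) / math.expm1(H * nu_hz / (K_B * temp_k))

def peak_frequency(temp_k: float) -> float:
    """Wien displacement law (frequency form): B_nu peaks at nu ~ 2.821 * kT / h."""
    return 2.821 * K_B * temp_k / H

print(f"CMB peak: {peak_frequency(2.725)/1e9:.0f} GHz")  # ~160 GHz
```

Subtracting this known spectrum from the FIRAS all-sky maps is what leaves behind the far-infrared residual the project aims to study.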

NTB: You mentioned that one of your co-investigators on this project is Dr. John Mather, the 2006 Nobel Laureate in physics. Do you ever find yourself dreaming of one day possibly winning a Nobel Prize, or is that something scientists don’t really think about until it happens?

Kashlinsky: Oh, I think it’s the latter. It just doesn’t cross the mind, I would say, of most scientists because you are so busy trying to understand whether the results you are measuring are real; what the systematics are; what the statistical significance is; whether you have been fooled by the various other processes that you have not accounted for; that it doesn’t give you much time to stress or share thoughts. So no, I don’t spend time thinking about it. And once you produce results, you really are worried that these are real results; that they can be maintained by future measurements; and you should always seek confirmation of these results, so no, there’s not much time to think about that.

NTB: What are some of the other significant projects you’re either working on, or anticipate working on, in the future?

Kashlinsky: It’s a very fortunate era now in the field of cosmology. I remember when I was starting my PhD, there was very little data to go by and there were many ideas, but also the theoretical part of the field was not particularly developed as I look back at it now. Slowly but surely, theoretical understanding developed and then, what’s even more important, in the last, I would say, ten or fifteen years there has been an explosion in the data – high quality data – that has been obtained in this field. This data comes from various space observatories or satellites such as the COBE satellite. It was a very important point in cosmology, and it was reached also thanks to the new generations of ground telescopes that can see very far with very high resolution and very low noise.

So, today you have a lot of data that can really constrain your understanding of the theoretical issues of the universe, and these data come at various wavelengths. For instance, in terms of cosmic microwave background measurements, there was COBE, then there was WMAP (Wilkinson Microwave Anisotropy Probe), which is still operating. It’s a superb instrument. And the Europeans are going to launch, this spring, the successor to WMAP, called the Planck satellite, which should bring a lot of new cosmic microwave background radiation data over a very wide range of frequencies with very low noise and with fairly good angular resolution. That is one of the projects we’re thinking of doing with the dark flow studies; we want to try it with the Planck data.

At other wavelengths there is the Spitzer satellite, which is still operating. It is now about to begin its so-called warm mission because it has run out of cryogens, but it should still bring some very important data for understanding distant populations and the cosmic infrared background radiation emitted by them.

You can also go to a completely unexpected range of wavelengths or energies. At very high energies there is the GLAST satellite (now renamed Fermi), the successor to the Compton Gamma Ray Observatory, which is going to map the universe very well at gamma-ray wavelengths and find a lot of distant gamma-ray sources, gamma-ray bursts, and so on. This would also be important in terms of studying early stellar populations because – this is one of the projects I hope to do with the data – you should see a very distinct cutoff in the spectrum of gamma-ray sources (bursts and blazars) at very large distances. This is produced by the cosmic infrared background from very early sources, such as Population 3, or the first black holes. The energy that these sources emit, which reaches us in the infrared band, contains a lot of photons, and the very high-energy photons produced by these gamma-ray bursts would then travel through the sea of IR photons – the cosmic infrared photons produced by the first stars – and get absorbed at sufficiently high energies by the so-called photon-photon absorption process. So, you should see a certain spectral feature that would tell you, yes, this is the epoch where these first stars lived. Maybe they lived for the first hundred-million years, maybe they lived for the first two-hundred-million years, and so on. You should be able to see the feature, if they produced enough energy.
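The photon-photon absorption cutoff described above can be roughly estimated from the pair-production threshold: for a head-on collision, a gamma ray of energy E is absorbed by a background photon of energy e once E·e ≥ (mₑc²)². A simplified sketch (the 1 eV infrared-photon energy is an illustrative assumption for a near-IR background photon):

```python
M_E_C2_EV = 0.511e6  # electron rest energy, eV

def gamma_cutoff_ev(ir_photon_ev: float) -> float:
    """Head-on pair-production threshold: E_gamma >= (m_e c^2)^2 / E_ir."""
    return M_E_C2_EV**2 / ir_photon_ev

# Against a ~1 eV cosmic infrared background photon
cutoff = gamma_cutoff_ev(1.0)
print(f"{cutoff/1e9:.0f} GeV")  # prints "261 GeV"
```

Gamma rays above a few hundred GeV are thus progressively absorbed by the infrared background, which is why the location of the cutoff encodes when, and how brightly, the first stars shone.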

And, of course, there are preparations for science that can be done with the James Webb Space Telescope, the JWST, which is going to be launched four or five years from now. It would be a successor to Hubble, but it measures the universe in infrared bands, so it would see very far, at completely different wavelengths, and it would bring a lot of data and probably revolutionize our understanding of the evolution of the universe.

For more information, contact Dr. Alexander (Sasha) Kashlinsky at Alexander.kashlinsky@nasa.gov.

Posted in: Who's Who

Dr. Drake Deming, Senior Scientist, Solar System Exploration Division, Goddard Space Flight Center

Dr. Drake Deming, former Chief of Goddard Space Flight Center’s Planetary Systems Laboratory, currently serves as Senior Scientist with NASA’s Solar System Exploration Division where he specializes in detecting and characterizing hot Jupiter extrasolar planets. Dr. Deming was named recipient of the 2007 John C. Lindsay Memorial Award, Goddard’s highest honor for outstanding contributions in space science, for his work in developing a way to detect light from extrasolar planets and use it to measure their temperatures.

NASA Tech Briefs: What is the Solar System Exploration Division’s primary mission within NASA and what types of projects does it typically handle?

Drake Deming: Our primary mission is to study planetary science in the context of NASA’s space mission program. In this case planetary science also includes planets orbiting other stars.

NTB: You began your career in education teaching astronomy at the University of Maryland. What lured you away from academia to pursue a career with NASA?

Deming: Research, and the opportunity to do cutting-edge space-based research.

NTB: Had you always planned to move in that direction, or was it after you had started your career that NASA entered the picture?

Deming: I had always planned to move into research.

NTB: Much of your research over the years has focused on trying to detect and characterize so-called “hot Jupiter” extrasolar planets. What are “hot Jupiter” planets, and what can we learn from them?

Deming: Hot Jupiters are giant planets, like Jupiter in our own solar system, but they orbit much closer to their stars – closer not only than Jupiter is in our solar system, but even than our own Earth. We can learn quite a bit from them. Because they’re in so close to the star, they’re subject to strong irradiation, so the dynamics of their atmospheres are very lively and the circulations are very strong, and we can learn about the physics of their atmospheres. They’re also subject to tremendous tidal forces from the star, which may play a role in inflating their sizes, so we can learn about their internal structure. We can learn a lot about planets in general from studying hot Jupiters because they’re an extreme case.

NTB: Why are extrasolar planets so hard to detect?

Deming: They’re so hard to detect because, so far, we cannot spatially resolve them from their parent stars, so we have to study them in the combined light of the planet and the star. That means it’s a small signal riding on top of a large noise source – in this case, the star.

NTB: You are the principal investigator on a program called EPOCh, which stands for Extrasolar Planet Observations and Characterization. Tell us about that program and what you hope to accomplish with it.

Deming: Well, we have just concluded our observing with EPOCh. We have over 170,000 images of planet-hosting stars, and when we get these images we don’t resolve the planet from the star. We use the images to do precise photometry. These are bright stars that have planets that transit in front of them, and the geometry of the transit tells us quite a bit about the planet. It tells us the radius. We can examine the data to see whether it has rings or moons. We can look for other, smaller planets in the system that may transit. And in favorable cases our sensitivity extends down to planets the size of the Earth, so we’re searching for smaller worlds in these systems.
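The transit geometry Dr. Deming mentions yields the planet's radius through a simple relation. A minimal sketch (my own illustration, not EPOCh code): the fractional dip in stellar flux during a transit is approximately the ratio of the planet's and star's projected disk areas, (Rp/Rs)^2.

```python
# Sketch: fractional flux dip during a planetary transit, ~ (Rp/Rs)^2.
# Radii below are standard reference values in kilometers.
R_SUN_KM = 695_700.0
R_JUPITER_KM = 71_492.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fractional dip in stellar flux when a planet transits its star,
    ignoring limb darkening (a simplifying assumption)."""
    return (r_planet_km / r_star_km) ** 2

depth_jupiter = transit_depth(R_JUPITER_KM)  # ~0.0106: a ~1% dip
depth_earth = transit_depth(R_EARTH_KM)      # ~8.4e-5: why Earths are hard
```

The roughly 100x difference between the two depths illustrates why pushing photometric sensitivity down to Earth-sized planets is such a demanding goal.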

NTB: You’re also the Deputy Principal Investigator for a mission called EPOXI…

Deming: Yes. That’s the same as EPOCh. EPOXI is a combination of EPOCh and DIXI (Deep Impact eXtended Investigation). DIXI is the component of EPOXI that goes to Comet Hartley 2, and that’s now ramping up because EPOCh is finished.

NTB: What is that mission designed to accomplish?

Deming: Well, I’m not involved in that, but it’s designed to image a comet nucleus. It will use the imaging capability of the Deep Impact flyby spacecraft to image another comet. The original Deep Impact mission released an impactor into a comet nucleus and actually blew a crater in it. Of course, the impactor is no longer available to us because it was used up, but still, a tremendous amount can be learned by imaging another comet nucleus for comparative purposes.

NTB: That was the Tempel 1 comet, right?

Deming: That was the Tempel 1 comet.

NTB: EPOXI spent most of the month of May observing a red dwarf star called GJ436 that is located just 32 light years from Earth, and it has a Neptune-size planet orbiting it. What did you learn from those observations?

Deming: We’re still very intensely analyzing those data, but what we hope to learn is whether there’s another planet in the system. In this case our sensitivity extends down to Earth-sized planets, so we’re looking for another planet that may have left a small signature in the data as it transited the star. If we can find one, there’s a good chance that that planet might even be habitable, because our period of observation extends for more than 20 days, and because this is a low-luminosity red dwarf star, the habitable zone in that system is in close to the star, where the orbital periods are on the order of 20 days. So we have sensitivity to planets in the habitable zone in this case.

Of course, those data have our highest priority and we’re inspecting them very intensely. However, the data analysis process is very involved. We have a lot of sources of spacecraft noise that we have to discriminate against.
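Why the habitable zone of a red dwarf corresponds to such short orbital periods can be sketched with Kepler's third law. The stellar parameters below are assumed round numbers roughly appropriate for a red dwarf like GJ 436, not values from the interview.

```python
import math

# Sketch: orbital period at the habitable zone of a low-luminosity star.
# Rule of thumb: the habitable zone sits where stellar flux matches Earth's,
# a_hz ~ sqrt(L/Lsun) AU. Kepler's third law in solar units: P^2 = a^3 / M
# (P in years, a in AU, M in solar masses).
def habitable_zone_period_days(l_star: float, m_star: float) -> float:
    """Approximate orbital period (days) at the habitable-zone distance
    of a star with luminosity l_star (Lsun) and mass m_star (Msun)."""
    a_hz = math.sqrt(l_star)               # AU
    p_years = math.sqrt(a_hz**3 / m_star)  # Kepler's third law
    return p_years * 365.25

# Assumed red-dwarf values: L ~ 0.025 Lsun, M ~ 0.45 Msun.
p = habitable_zone_period_days(l_star=0.025, m_star=0.45)  # tens of days
```

For these assumed values the habitable-zone period comes out at a few tens of days, consistent with the point that a 20-plus-day observing window can cover habitable-zone orbits around such a star.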

NTB: In July 2008, NASA’s Deep Impact spacecraft made a video of the Moon transiting – or passing in front of – the Earth from 31 million miles away. Why did that video generate so much excitement within the scientific community?

Deming: Well, I think it generated a lot of excitement both inside and outside the scientific community because it’s really, I think, the first time that we’ve seen the Earth/Moon system from that particular perspective, where you see the Moon transit in front of the Earth. And as we analyze those data, there’s also some realization that, although it would be a relatively low-probability event, if a moon were to transit in front of a planet orbiting another star, we could learn about the topography of the planet.

NTB: Do you think we’ll learn anything new about Earth from that video?

Deming: I think we’ll learn new things about the Earth as a global object, as an astronomical object. For example, one of the things we should start prominently seeing in the data is the sun glint from the Earth’s oceans. This has been hypothesized as a way to detect oceans on planets orbiting other stars, because that glint would be polarized. Although we don’t have any polarization capability, we can see that the glint sometimes becomes dramatically brighter, and we’re trying to understand why that is. It may be because the glint is a specular reflection, probably from the Earth’s oceans, so by correlating the brightening of that glint with, for example, winds across the oceans and wave heights, we may find that smooth patches of ocean give a particularly strong glint. If so, then for an extrasolar planet we could infer from a variable polarization signal the presence of the glint, and therefore the presence of oceans.

NTB: It was discovered some time ago that Deep Impact’s high-resolution camera has a flaw in it that prevents it from focusing properly. This was a problem during its original mission when it was trying to study a crater made by an impactor on the comet Tempel 1, but you were somehow able to use that to your advantage to study planets passing in front of their parent stars. Can you explain how that works?

Deming: That’s a big advantage for us because we’re not trying to image the planet. We’re only measuring the total photometric signal, so when the planet passes in front of the star we see a dip in intensity. That dip in intensity is, like, one percent. We’re doing very precise photometry, so that one-percent dip is actually the largest signal we see. We’re actually looking for much smaller dips due to smaller planets. Well, in order to measure that dip very precisely, we have to get a very precise photometric measurement, which means we need to collect a lot of light from the star. If we didn’t have the defocus, all of that light would be falling on one or two small pixels of the detector and they would immediately saturate. We’d have to constantly be reading them out and it just wouldn’t be practical. But by having a defocused image, we can spread the light over many pixels and use them to collect more light in a given readout. For each readout we collect many more photons from the stars.
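The saturation argument can be made concrete with a back-of-the-envelope sketch. The pixel counts and full-well capacity below are illustrative assumptions, not the actual Deep Impact detector specifications.

```python
# Sketch: spreading starlight over more pixels raises the number of photons
# you can collect per readout before any single pixel saturates.
def max_photons_per_readout(n_pixels: int, full_well: float = 1e5) -> float:
    """Photons collectable in one exposure if light is spread evenly over
    n_pixels, each saturating at full_well electrons (assumed value)."""
    return n_pixels * full_well

focused = max_photons_per_readout(4)       # sharp image: light on ~4 pixels
defocused = max_photons_per_readout(100)   # defocused: spread over ~100 pixels
# 25x more photons per readout; since photon noise scales as sqrt(N),
# that is ~5x better photometric precision per frame.
```

This is why the defocus, a flaw for imaging, becomes an asset for precision photometry.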

NTB: Among your many accomplishments at NASA, you developed a way to detect light from extrasolar planets and use that light to measure their temperature. Can you explain how that technique works?

Deming: This was an observation with Spitzer. And it was also done concurrently by Professor David Charbonneau at Harvard. The Spitzer Observatory wasn’t really developed for that purpose. We just found that it was particularly capable of that application. What we did was observe systems that had transiting planets in the infrared, where the planet is a significant source of radiation, and we waited until the planet passed behind the star – we could calculate when that would be – and then we saw a dip in the total radiation of the system. Since we knew the planet was passing behind the star at that time, the magnitude of that dip tells us the magnitude of the light from the planet. So, in that way, we were able to measure the light from extrasolar planets. This has become a big topic of research for Spitzer, which has done this for many planets over many wavelength bands. It has been able to reconstruct, in kind of a crude way – but even crude measurements of planets orbiting other stars are very revealing and important – the emission spectrum of some of these worlds orbiting other stars.
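The secondary-eclipse measurement Dr. Deming describes can be turned into a temperature estimate. As a simplified sketch (my own illustration, not the published analysis, and using assumed example numbers): in the Rayleigh-Jeans (long-wavelength) limit, the eclipse depth is approximately (Rp/Rs)^2 times the planet-to-star temperature ratio, which can be inverted for the planet's brightness temperature.

```python
# Sketch: brightness temperature from a secondary-eclipse depth.
# In the Rayleigh-Jeans limit, thermal surface brightness is proportional
# to temperature, so: depth ~ (Rp/Rs)^2 * (Tp/Ts).
def planet_temperature(depth: float, rp_over_rs: float, t_star: float) -> float:
    """Planet brightness temperature (K) implied by an eclipse depth,
    in the Rayleigh-Jeans approximation (valid only at long IR wavelengths)."""
    return depth / rp_over_rs**2 * t_star

# Assumed example: a 0.25% eclipse depth, Rp/Rs = 0.15, 6000 K star
# implies a planet brightness temperature of ~670 K.
t_planet = planet_temperature(depth=0.0025, rp_over_rs=0.15, t_star=6000.0)
```

At shorter infrared wavelengths the full Planck function would be needed, but the Rayleigh-Jeans form shows the essential idea: the depth of the dip directly encodes how warm the planet is.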

NTB: In 2007 you won the John C. Lindsay Memorial Award, Goddard’s highest honor for outstanding contributions in space science. What does it mean to you to have your name added to such a distinguished list of scientists?

Deming: Well, of course, I was very honored to receive this award. I was also very surprised because I had no indication, no hint, that this was coming.

NTB: Nobody tipped you off?

Deming: Nobody tipped me off. It was a complete surprise. I think the award speaks not to my own personal accomplishment but to the success of the NASA missions that enabled the measurements and all the people who designed and built the Spitzer Observatory. It wouldn’t have been possible to make those measurements without that facility.

For more information, contact Dr. Drake Deming at leo.d.deming@nasa.gov.

Posted in: Who's Who

Glenn Rakow, SpaceWire Development Lead, Goddard Space Flight Center

Glenn Rakow is the Development Lead for SpaceWire, a high-speed communications protocol for space-flight electronics originally developed in 1999 by the European Space Agency (ESA). Under Rakow’s leadership, the SpaceWire standard was developed into a network of nodes and routers interconnected through bi-directional, high-speed serial links, making the system more modular, flexible and reusable. In 2004 Rakow was named the recipient of Goddard’s James Kerley Award for his innovation and contributions to technology transfer.

Posted in: Who's Who

Dr. Woodrow Whitlow Jr., Director, John H. Glenn Research Center, Cleveland, OH

As Director of NASA’s John H. Glenn Research Center in Cleveland, Ohio, Dr. Woodrow Whitlow Jr. controls an annual budget of approximately $650 million and manages a labor force of roughly 1,619 civil service employees supported by 1,754 contractors working in more than 500 specialized research facilities.

Posted in: Who's Who

The U.S. Government does not endorse any commercial product, process, or activity identified on this web site.