Dr. Andrew Watson works on models of human vision and applies them to visual technology. The Founder and Editor in Chief of the Journal of Vision, he is also a Fellow of the Optical Society of America, of the Association for Research in Vision and Ophthalmology, and of the Society for Information Display. Watson received a 2011 Presidential Rank Award from the President of the United States.

NASA Tech Briefs: What is the Spatial Standard Observer (SSO)?

Dr. Watson: For many years we’ve been working on computational models of the early stages of human vision. Part of the purpose of that research is to develop engineering tools that could be used in the design of display technology, compression algorithms, and things of that kind. We have taken a lot of our research and compressed it into a simple engineering tool, the Spatial Standard Observer, which can be used to predict the visibility of artifacts in a display, for example, or the legibility of information in a display — any case where you have imaging technology that is going to be used by a human observer.
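In published descriptions, tools of this kind work roughly by filtering the difference between a test image and a reference image with a human contrast sensitivity function (CSF) and pooling the result into a single visibility score in just-noticeable differences (JNDs). The Python sketch below shows only that general shape; the simple CSF, the pooling exponent, and the function names are assumptions for illustration, not the calibrated Spatial Standard Observer.

```python
import numpy as np

def csf(fx, fy, peak=4.0):
    """Illustrative band-pass contrast sensitivity function over
    spatial frequency in cycles/degree (not the calibrated SSO filter)."""
    f = np.hypot(fx, fy)
    return f * np.exp(-f / peak)  # sensitivity rises, then falls, with frequency

def visibility_jnd(test, reference, width_deg, beta=3.0):
    """Pool CSF-weighted differences into one visibility score (JNDs).

    test, reference: square 2-D luminance-contrast arrays
    width_deg: image width in degrees of visual angle
    beta: Minkowski pooling exponent (assumed value)
    """
    n = test.shape[0]
    diff = np.fft.fft2(test - reference)
    freqs = np.fft.fftfreq(n, d=width_deg / n)   # cycles per degree
    fx, fy = np.meshgrid(freqs, freqs)
    filtered = np.fft.ifft2(diff * csf(fx, fy)).real
    return float((np.abs(filtered) ** beta).mean() ** (1.0 / beta))
```

A score near or above 1 JND would mean a typical observer is likely to notice the difference between the two images.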

NTB: What role does human test data play in the SSO model? Where is that data coming from?

Watson: Human test data are very important, and the data are of two kinds. One is fundamental data on basic sensitivities of human vision. A lot of that comes from our lab and other labs around the world, where vision scientists are collecting data on how well people can see motion, color, spatial pattern, and temporal change.

The other kind is applied data on questions such as “If we compress this image, can the observer see the artifacts in the image?” Or “If we distort the color in a certain way, will people be sensitive to that distortion?” So we take both those kinds of data and use them to design and calibrate tools like the Spatial Standard Observer.

NTB: Can you take us through a real-world example of how this works, for example, with finding artifacts?

Watson: That’s one of our technologies that has been most widely applied. About 1 billion flat-panel displays are manufactured in the world each year, which is quite remarkable when you think that there are only about 6 billion people on Earth. Almost every display needs to be inspected for defects produced during manufacturing. We want to find the defects that are visible to human observers; we don’t really care about the ones that are not visible. The Spatial Standard Observer is uniquely suited to that task because it can tell us when the artifacts are visible. That technology has been licensed to the display industry, and it’s currently in use inspecting flat-panel televisions, for example.
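To make the inspection step concrete, here is a hedged toy example built on the `visibility_jnd` sketch above; the panel image, the injected defect, and the 1-JND accept/reject threshold are all invented for illustration.

```python
import numpy as np  # visibility_jnd is the illustrative sketch shown earlier

rng = np.random.default_rng(0)
reference = rng.uniform(0.45, 0.55, size=(256, 256))  # stand-in defect-free panel
test = reference.copy()
test[120:124, 120:124] += 0.2                         # small injected "defect"

score = visibility_jnd(test, reference, width_deg=10.0)
print(f"{score:.2f} JND:", "reject panel" if score > 1.0 else "ship panel")
```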

NTB: Can you go through the other applications as well? Aircraft damage? Laser eye surgery?

Watson: A quite different example is one where we’re trying to understand not a piece of imaging technology, but the performance of human observers in a visual task. In the case of unmanned aerial vehicles, there are many efforts to introduce them more widely into the national airspace, and there’s great concern about the effects they may have on aviation. One of the issues is the so-called “See and Avoid” rule, under which piloted aircraft are generally required to see and avoid other aircraft. Now, if there’s no pilot in the unmanned aerial vehicle, how does it see and avoid other aircraft? And under what conditions will it be seen and avoided? So we’ve used the Spatial Standard Observer to compute visibility measures for aircraft of various sizes, at various distances, under various meteorological conditions. That can be used to model the introduction of unmanned aerial vehicles into the national airspace and to determine under what conditions that would be safe, and under what conditions it would be unsafe.
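As a back-of-the-envelope illustration of the inputs to such a calculation (the real SSO computation is far more detailed), the sketch below combines the target’s visual angle with Koschmieder’s law for atmospheric contrast attenuation; all of the numbers are assumptions.

```python
import numpy as np

def visual_angle_deg(size_m, distance_m):
    """Angle subtended by a target of a given size at a given range."""
    return np.degrees(2 * np.arctan(size_m / (2 * distance_m)))

def apparent_contrast(inherent_contrast, distance_m, met_visibility_m):
    """Koschmieder's law: apparent contrast decays exponentially with
    range; at the meteorological visibility it falls to 5% of the
    inherent contrast (hence the factor 3.0, roughly -ln(0.05))."""
    return inherent_contrast * np.exp(-3.0 * distance_m / met_visibility_m)

# Illustrative case: a 10 m aircraft, 5 km away, in 20 km visibility
angle = visual_angle_deg(10, 5000)          # ~0.11 degrees
contrast = apparent_contrast(0.5, 5000, 20000)  # ~0.24
print(f"subtends {angle:.3f} deg, apparent contrast {contrast:.2f}")
```

A visibility model would then ask whether a target of that angular size and apparent contrast exceeds the observer’s detection threshold.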

NTB: What kinds of partnerships are possible with this type of technology?

Watson: The technology is already being used in industry, in the display inspection application that I described earlier. Another industry that we believe may be able to make use of this technology is laser eye surgery. In that situation, we now have very advanced technology for sculpting the eye in order to reduce the eye’s optical defects.

We don’t have particularly sophisticated ways of predicting the visual outcomes. Another application of the Spatial Standard Observer is to predict, from optical measurements of the eye before surgery and after surgery, what the optical and visual performance of the observer will be. So that’s another industry where we’re hoping the technology may see some transfer.

NTB: Can you take us through a typical day for you? What is your day-to-day work? What are you working on now?

Watson: My day consists largely of sitting in front of a computer screen, developing models and metrics of the kind I’ve described, and testing out their application in various domains. One interesting problem I’m working on currently is the application of a model like the Spatial Standard Observer to the image quality of what I would call “remote viewing systems.”

Two examples of remote viewing systems are surveillance video systems and submarine periscopes. In both of these cases, you are monitoring some activity at a distance, and you’re doing it via an electronic imaging system, which contains optical components, electronic sensors, digital signal processing, image compression, and transmission, and then a display at the other end, where the image is viewed by a human observer. What we’re trying to do is use models like the Spatial Standard Observer to characterize that end-to-end process and to develop numerical metrics for the quality of the entire system, based on the predicted performance of the human observer in using the system.
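A hedged sketch of that end-to-end chain, with toy stand-ins for each stage (the stage models and every parameter below are invented for illustration, not the actual system models):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def optics(img):   return gaussian_filter(img, sigma=1.5)        # lens blur
def sensor(img):   return img + rng.normal(0, 0.01, img.shape)   # sensor noise
def compress(img): return np.round(img * 32) / 32                # crude quantizer
def display(img):  return np.clip(img, 0.0, 1.0)                 # display range

def end_to_end(scene, stages=(optics, sensor, compress, display)):
    """Push a scene through each stage of the imaging chain in order."""
    for stage in stages:
        scene = stage(scene)
    return scene
```

A metric like the `visibility_jnd` sketch earlier could then score the whole chain by comparing `end_to_end(scene)` with the original scene, or, as described next, by measuring identification performance through the chain.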

For example, if the system is being used to identify boats of various kinds, then we will actually be able to simulate the identification of boats through that imaging system using our vision model, and give it a quality metric based on that performance measure. Another application might be, “Can you see a gun in the hand of someone in an airport security scenario?” There we’ll be able to predict the identifiability of handheld objects using that same model. We’re very excited about that work, and that’s a project that I’m currently working on.
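A minimal stand-in for that identification step, assuming a simple maximum-correlation classifier rather than the full vision model (the classifier and the template names are illustrative assumptions):

```python
import numpy as np

def identify(degraded, templates):
    """Return the name of the template best correlated with the
    degraded image; a crude stand-in for simulated identification
    through the imaging chain."""
    def corr(a, b):
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
    return max(templates, key=lambda name: corr(degraded, templates[name]))

# e.g. identify(end_to_end(scene), {"sailboat": t1, "trawler": t2})
```

The fraction of correct identifications across many simulated scenes would then serve as the task-based quality metric for the system.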

However, just to elaborate on that a little bit: a project like this has many parts, and each one of those parts can take a considerable amount of effort and development to accomplish. One part of the project that I just described involves developing a better model for the optical performance of the human eye. The degree of blur introduced by the eye’s optics, for example, depends on various things, such as your age and how large your pupil is. We’ve developed a mathematical model that can compute your optical performance based on those parameters. That optical component will then go into the larger model of visual identification performance.
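To give the flavor of such an optical component, a toy modulation transfer function (MTF) might look like the sketch below. The functional form and every coefficient are invented for illustration; this is not the published model described in the interview.

```python
def eye_mtf(f_cpd, pupil_mm, age_years):
    """Toy optical MTF of the eye at spatial frequency f_cpd (cycles/deg).

    Blur increases (the MTF falls off sooner) with larger pupils and
    greater age; the Lorentzian form and the numbers here are
    illustrative assumptions only.
    """
    f0 = 20.0 / (1.0 + 0.2 * max(pupil_mm - 3.0, 0.0)
                 + 0.01 * max(age_years - 20.0, 0.0))
    return 1.0 / (1.0 + (f_cpd / f0) ** 2)
```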

NTB: Why is SSO a superior option to previous ones? What have been the weaknesses of earlier display metrology?

Watson: There have been a number of other significant efforts in this area in the last 10-20 years, and I want to acknowledge some really excellent work that’s been done by other folks. One difficulty with those other efforts is that they have often been quite complicated. The computational machinery involved, the number of parameters involved, and the sophisticated knowledge of the software required to complete the calculations were quite daunting. As a result, they were rarely used except in research.

One goal of the Standard Observer was to minimize the number of complicated components and calculations and, so far as possible, to hide the complexity from the user, so that it would be more widely applied in actual, practical situations. I think the use of the Standard Observer in the display industry is an indication that we’ve at least partly succeeded in that effort so far.

NTB: What are your goals for 2013, as far as these models are concerned?

Watson: The task-based performance model for electronic imaging systems is really my major goal right now. We’ve recently submitted one paper on the optical performance model that I described a moment ago. We’re finishing a second paper on the complete human pattern identification model, which should be done quite soon, and the third product will be a paper on the technological applications of this model to quantifying the performance of imaging systems.

Again, to give you a sense of the effort that goes into developing these models: another feature of this identification model, beyond the optics, is processing in the human retina. One feature of the retina is that the density of neural cells declines as you move away from the point of fixation. We’re all familiar with the fact that an image gets blurry as you move away from the point where you’re looking. But that’s not an optical effect; it’s an effect of the neural machinery in the retina. So we’re doing a very careful job of modeling that quantitative change in resolution as you go across the retina, and that’s part of what allows the model to accurately predict human performance.
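A common textbook simplification of that fall-off is a hyperbolic decline in resolution with eccentricity, sketched below; the specific E2 value is an assumption, and the model described in the interview is more detailed.

```python
def relative_resolution(ecc_deg, e2=2.0):
    """Resolution relative to the fovea, halving at ecc_deg == e2.

    The 1 / (1 + ecc/e2) form is a standard simplification of the
    decline in retinal sampling with eccentricity; the assumed e2 of
    2 degrees is illustrative, not a calibrated value.
    """
    return 1.0 / (1.0 + ecc_deg / e2)
```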

I’ll give you one example. One set of data that we’re modeling is letter identification. Letters are a wonderful pattern stimulus for human experiments because people are extremely well practiced at identifying letters of the Roman alphabet; you don’t have to worry about training the observers. The data that we’re looking at are how much contrast you need — that is, how much difference there is between the white background and the black letter, let’s say — as a function of the size of the letter.

As you might imagine, in order to identify a letter when it’s very tiny, near your letter acuity limit, you need a lot of contrast. You need essentially 100% contrast, which means a black letter on a white background. But if the letter is larger, you can manage with less contrast; it can be a light grey letter on a white background.

Now the curious thing is that you might think that as letters get larger and larger, they would get easier and easier to see – that is, you would need less and less contrast. But that is not what happens. Once they get larger than about one degree — and a degree of visual angle is a unit we use in vision science, about the width of your thumb at arm’s length — performance no longer improves. Now why is that? It has to do with the fact that as the letters get larger, they necessarily impinge upon areas of the retina that have fewer and fewer neurons, so the resolution goes down and performance stops improving. That’s an example of the kind of neuroscience result that we have to introduce into our models in order to make them accurate predictors of human performance.
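A toy model of that size-versus-contrast curve, capturing the qualitative plateau above one degree (the power law, the acuity limit, and the plateau size are all assumptions for illustration):

```python
def letter_contrast_threshold(size_deg, acuity_deg=0.08, plateau_deg=1.0):
    """Toy contrast threshold for letter identification vs. letter size.

    Threshold falls as letters grow, then flattens past about one
    degree, mirroring the retinal-sampling account above; the 0.5
    exponent and parameter values are illustrative assumptions.
    """
    effective = min(size_deg, plateau_deg)   # no further gain past ~1 deg
    return min(1.0, (acuity_deg / effective) ** 0.5)
```

With these assumed numbers, a letter at the acuity limit needs 100% contrast, while a one-degree letter needs roughly a quarter of that, and larger letters gain nothing further.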

NTB: What is your favorite part of the job?

Watson: My colleagues. I really enjoy working with other people, and we have an excellent group here at NASA Ames Research Center, who constantly challenge me and improve my work. I very much value their collaboration.

For more information on licensing and partnering opportunities related to the technologies mentioned here, call 1-855-NASA-BIZ (1-855-627-2249).
