Solar System exploration camera implementations to date have involved either single cameras with a wide field of view (FOV) and consequently coarse spatial resolution, cameras on a movable mast, or single cameras that require rotating the host vehicle to see outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate, and they cannot make decisions based on image content to control pointing and downlink strategy. For color, a filter wheel with selectable positions was often added, which introduced moving parts, size, mass, and power, and reduced reliability.

View of the Baseline 7-Camera Concept (360° horizontal FOV, >90° vertical FOV)
A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion with overlapping FOVs so that the system covers 360° in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) that controls the system (power conversion, data processing, memory, and control software).
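The azimuth coverage described above can be checked with simple geometry: six cameras covering 360° implies 60° between boresights, so each 92° FOV overlaps its neighbor. A minimal sketch (the 60° spacing is implied by the text, not stated explicitly):

```python
# Baseline azimuth geometry for the six-camera carousel.
# FOV value is from the brief; even spacing is an assumption
# implied by "FOV overlaps such that the system has a 360° FOV".
NUM_AZIMUTH_CAMERAS = 6
FOV_DEG = 92.0

spacing = 360.0 / NUM_AZIMUTH_CAMERAS  # 60.0 degrees between boresights
overlap = FOV_DEG - spacing            # 32.0 degrees shared at each seam

print(spacing, overlap)  # 60.0 32.0
```

The 32° of overlap at every seam gives margin for alignment tolerances and for stitching adjacent frames.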

Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each equipped with its own filter). Two connectors on the bottom of the CEB connect to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. The system has no moving parts.
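The brief does not give stereo parameters, but range recovery from two systems on a baseline follows the standard pinhole relation z = f·B/d. A sketch with purely illustrative values (focal length, baseline, and pixel pitch are assumptions, not figures from the brief):

```python
# Hypothetical stereo ranging for two stacked camera systems on a baseline.
# All three parameters below are illustrative assumptions.
focal_length_mm = 4.0   # assumed lens focal length
baseline_m = 0.3        # assumed separation between the two systems
pixel_pitch_um = 5.0    # assumed imager pixel pitch

def range_from_disparity(disparity_px: float) -> float:
    """Standard pinhole stereo: z = f * B / d, with f expressed in pixels."""
    f_px = focal_length_mm * 1000.0 / pixel_pitch_um  # 800 px here
    return f_px * baseline_m / disparity_px

print(round(range_from_disparity(10.0), 1))  # 24.0 (meters)
```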

The system’s onboard software (SW) supports autonomous operations such as pattern recognition and tracking. For example, when the system is commanded to detect and track an object of interest, the SW continuously reads data from all the cameras until the object appears in one (or more) camera’s FOV. The SW then reads those cameras and returns to Earth only the portion of the data that includes the object of interest.
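The detect-then-downlink behavior described above can be sketched as follows. This is a toy model, not the flight software: the brightest-pixel detector, threshold, and crop size are placeholders.

```python
# Sketch of "scan all cameras, then downlink only the region of interest".
# The detector and crop size below are placeholder assumptions.
from typing import List, Optional, Tuple
import numpy as np

def detect(frame: np.ndarray) -> Optional[Tuple[int, int]]:
    """Toy detector: (row, col) of the brightest pixel if it exceeds
    a threshold, else None."""
    r, c = np.unravel_index(np.argmax(frame), frame.shape)
    return (int(r), int(c)) if frame[r, c] > 200 else None

def downlink_roi(frames: List[np.ndarray], half: int = 16):
    """Scan every camera's frame; return crops only around detections."""
    packets = []
    for cam_id, frame in enumerate(frames):
        hit = detect(frame)
        if hit is None:
            continue  # nothing of interest in this camera; send nothing
        r, c = hit
        crop = frame[max(0, r - half):r + half, max(0, c - half):c + half]
        packets.append((cam_id, crop))
    return packets
```

Feeding the function seven frames where only one contains a bright target yields a single small packet tagged with that camera's ID, which is the downlink saving the text describes.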

Each camera weighs 50 g, measures 2 cm in diameter and 4 cm in length, and consumes less than 50 mW. The central electronics unit is a cylinder 14 cm in diameter and 4 cm thick. Variations with different, smaller form factors are possible.
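From the per-camera figures quoted above, a rough budget for the seven camera heads follows directly (the CEB's own mass and power are not given in the brief and are excluded):

```python
# Rough camera-head budget from the quoted per-unit figures.
# Excludes the CEB, whose mass and power are not stated in the brief.
NUM_CAMERAS = 7
mass_g = NUM_CAMERAS * 50      # 350 g of camera heads
power_mw = NUM_CAMERAS * 50    # < 350 mW with all heads at the stated max

print(mass_g, power_mw)  # 350 350
```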

By using the massively parallel architecture inherent to field-programmable gate arrays (FPGAs), per-imager processing may be performed concurrently by separate computational units within the FPGA. This architecture allows tracking algorithms to scan the entire FOV for a set of features and then switch to a second operating mode that performs processing targeted only at the imagers capturing those features. This would provide a considerable science benefit by improving the efficiency of long-range surveys at no additional mass and very small power cost.
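The two-mode scheme can be modeled in software, with one worker per imager standing in for the FPGA's parallel computational units. This is a conceptual sketch only; the feature test is a placeholder and the real design runs in hardware, not threads.

```python
# Conceptual model of the survey-then-target scheme: every imager's
# pipeline runs in parallel (survey mode); afterwards only imagers
# that saw features would be processed (targeted mode).
from concurrent.futures import ThreadPoolExecutor
from typing import List, Set

def find_features(cam_id: int, frame: List[float]) -> List[float]:
    """Placeholder per-imager pipeline: values above 0.9 count as features."""
    return [px for px in frame if px > 0.9]

def survey_then_target(frames: List[List[float]]) -> Set[int]:
    # Survey mode: one concurrent worker per imager, like the FPGA's
    # separate computational units.
    with ThreadPoolExecutor(max_workers=len(frames)) as pool:
        results = list(pool.map(lambda item: find_features(*item),
                                enumerate(frames)))
    # Targeted mode would now process only the imagers that saw features.
    return {i for i, feats in enumerate(results) if feats}
```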

This work was done by Paula J. Pingree, Thomas J. Cunningham, Thomas A. Werne, Michael L. Eastwood, Marc J. Walch, and Robert L. Staehle of Caltech for NASA’s Jet Propulsion Laboratory. NPO-48172

NASA Tech Briefs Magazine

This article first appeared in the December, 2012 issue of NASA Tech Briefs Magazine.
