New Method Generates High-Resolution, Moving Holograms in 3D

The 3D effect produced by the stereoscopic glasses used to watch movies cannot provide perfect depth cues. Nor is it possible to move one’s head and observe that objects appear different from different angles, a real-life effect known as motion parallax. Researchers have developed a new way of generating high-resolution, full-color 3D video that uses holographic technology. Holograms are considered truly 3D because they allow the viewer to see different perspectives of a reconstructed 3D object from different angles and locations. Holograms are created using lasers, which can produce the complex light-interference patterns, encoding spatial data, required to re-create a complete 3D object. To enhance the resolution of holographic video, the researchers used an array of spatial light modulators (SLMs), devices that display hologram pixels and form 3D objects by light diffraction. Each SLM can display up to 1.89 billion hologram pixels every second.

Posted in: News, Video


NGDCS Linux Application for Imaging-Spectrometer Data Acquisition and Display

NASA’s Jet Propulsion Laboratory, Pasadena, California

A simple method of controlling the recording and display of imaging-spectrometer data during airborne flight was needed. Existing commercial packages were overly complicated and sometimes difficult to operate in a bouncing plane. The software also had to keep up with the imaging data rate while running on commodity hardware and a desktop operating system. Finally, the software needed to be as robust as possible: repeating a flight because of lost data is sometimes impossible, and always expensive.

Posted in: Briefs, Displays/Monitors/HMIs, Data Acquisition


Detection of Carried and Dropped Objects in Surveillance Video

This software analyzes a video input stream and automatically detects carried and dropped objects in near-real time.

NASA’s Jet Propulsion Laboratory, Pasadena, California

DARPA’s Mind’s Eye Program aims to develop a smart camera surveillance system that can autonomously monitor a scene and report back human-readable text descriptions of the activities that occur in the video. An important aspect is whether objects are brought into the scene, exchanged between persons, left behind, picked up, etc. While some objects can be detected with an object-specific recognizer, many others are not well suited to this type of approach. For example, a carried object may be too small relative to the resolution of the camera to be easily identifiable, or an unusual object, such as an improvised explosive device, may be too rare or unique in appearance to have a dedicated recognizer. Hence, a generic object detection capability, which can locate objects without a specific model of what to look for, is used. This approach can detect objects even when they are partially occluded or overlap with humans in the scene.
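The brief does not disclose JPL’s detection algorithm. As a hedged sketch of one common model-free approach, newly appearing objects can be flagged by comparing each frame against a slowly adapting background model; the function name, thresholds, and adaptation rate below are illustrative assumptions, not part of the JPL software:

```python
import numpy as np

def detect_changes(frames, alpha=0.1, thresh=30, min_pixels=20):
    """Flag frames containing pixels that differ from a running-average
    background model (a generic, model-free object cue).

    frames: iterable of 2-D uint8 grayscale arrays.
    Returns a list with one boolean "object present" flag per frame.
    """
    background = None
    flags = []
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()            # first frame seeds the model
        diff = np.abs(f - background) > thresh            # changed pixels
        flags.append(int(diff.sum()) >= min_pixels)       # enough change?
        background = (1 - alpha) * background + alpha * f  # slow adaptation
    return flags
```

Because the model adapts slowly, an object that is set down keeps differing from the background for a while, which is the cue a dropped-object detector needs.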

Posted in: Briefs, TSP, Cameras, Electronics & Computers, Data Acquisition, Detectors


Visualization of fMRI Network Data

NASA’s Jet Propulsion Laboratory, Pasadena, California

Functional connections within the brain can be revealed through functional magnetic resonance imaging (fMRI), which shows simultaneous activations of blood flow in the brain during response tests. However, fMRI specialists currently lack a tool for visualizing the complex data that comes from fMRI scans. They work with correlation matrices that tabulate which functional regions are connected, but they have no corresponding visualization.
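To illustrate the gap, a correlation matrix can be reduced to an explicit list of above-threshold connections, which is exactly the node-and-edge structure a network visualization would draw. This is a minimal sketch; the function name and threshold are assumptions for illustration, not part of the JPL tool:

```python
import numpy as np

def correlation_graph(corr, labels, thresh=0.6):
    """Turn a region-correlation matrix into an edge list for plotting.

    corr: symmetric (n, n) correlation matrix; labels: n region names.
    Returns [(region_a, region_b, r), ...] for |r| at or above thresh.
    """
    n = corr.shape[0]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):   # upper triangle only: no duplicate pairs
            if abs(corr[i, j]) >= thresh:
                edges.append((labels[i], labels[j], float(corr[i, j])))
    return edges
```

The resulting edge list can be handed to any graph-drawing library, turning the table of numbers into a picture of the brain’s functional network.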

Posted in: Briefs, TSP, Visualization Software, Electronics & Computers, Data Acquisition


Viewpoints Software for Visualization of Multivariate Data

Ames Research Center, Moffett Field, California

Viewpoints software allows interactive visualization of multivariate data using a variety of standard techniques. The software is built exclusively from high-performance, cross-platform, open-source, standards-compliant languages, libraries, and components. The techniques included are:
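One standard multivariate-visualization technique is the parallel-coordinates plot. As a hedged sketch not taken from the Viewpoints code, each variable is rescaled to a common range so that every record can be drawn as a polyline across parallel axes:

```python
import numpy as np

def parallel_coords_lines(data):
    """Rescale each variable (column) of a multivariate data set to [0, 1]
    so every record (row) can be drawn as one polyline across parallel axes.

    data: (n_records, n_vars) array. Returns an array of the same shape.
    """
    lo = data.min(axis=0)
    span = data.max(axis=0) - lo
    span = np.where(span == 0, 1, span)  # avoid divide-by-zero on constants
    return (data - lo) / span
```

Each row of the result gives the vertical positions at which one record’s polyline crosses each axis.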

Posted in: Briefs, Visualization Software, Electronics & Computers, Data Acquisition, Mathematical/Scientific Software


Vision Algorithms Catch Defects in Screen Displays

Software based on NASA vision research is used in making laptop, cellphone, and TV displays. NASA has sent more than a few robotic missions into space, but it never loses sight of its goal to enable human exploration of the cosmos. A core component of planning for future manned missions is the Human Systems Integration Division, headquartered at Ames Research Center, which focuses on advancing our understanding of how people process information and interact with mechanical and electronic systems.

Posted in: Articles


Researcher Spotlight: Atom-Thick Material Offers 2D Imaging Possibilities

Rice University scientists have developed a two-dimensional, atom-thick, light-sensitive material called CIS, a single-layer matrix of copper, indium, and selenium atoms. Sidong Lei, a graduate student, also built a prototype, a three-pixel charge-coupled device (CCD) sensor, to prove the material’s ability to capture an image. The optoelectronic memory material may be the basis for future flat imaging devices and two-dimensional electronics.

Posted in: Articles, Sensors


Synthetic Vision Systems Improve Pilots' Situational Awareness

Visual Flight Rules (VFR) define the minimum weather conditions under which a pilot can operate an aircraft using visual cues, such as the horizon and buildings. Under VFR, a pilot is expected to “see and avoid” obstacles and other aircraft, and essentially only needs to see out of the window.

Posted in: Articles, Aviation, Machine Vision


3D Vision System Aids 560-Mile Piloted Drive

Audi completed a long-distance test drive of its Audi A7 Sportback semi-autonomous concept vehicle, finishing the journey at the International CES 2015 consumer electronics show in Las Vegas. The “piloted driving,” Audi’s take on combining autonomous driving with individual control, began in Stanford, California, and ended two days and 560 miles later on January 6, 2015.

Posted in: Application Briefs, Cameras, Video, Machinery & Automation, Sensors


CMOS Smart Camera

VC F series intelligent cameras from Vision Components (Ettlingen, Germany) provide 4 × 1 GHz of computing power. Equipped with a programmable GPU, the systems run the VC Linux operating system. The cameras feature either one or two remote 1/1.8" CMOS sensors connected by a 30 mm or 80 mm cable; the two-sensor version is also suited for stereo camera applications. Depending on the model, the sensors provide 1,280 × 1,024 (SXGA) or 1,600 × 1,200 (UXGA) pixel resolution. A Gigabit Ethernet interface enables integration into automation environments. The VC F series includes two USB interfaces and an HDMI output.

Posted in: Products, Cameras