JPL has produced a series of FPGA (field programmable gate array) vision algorithms, each written with a custom interface to get data in and out of the module. Each module has unique requirements on its data interface, and further vision modules are continually being developed, each with its own custom interface.

Each vision module had also been designed for direct access to memory or to another module. On the development board originally used (an Alpha Data XRC4), there were six independent SSRAM (synchronous static RAM) banks, allowing each module sole access to a bank. For a flight mission, there likely would be only one to three memory banks, so arbitration of those banks would need to be supported, interleaving access to each memory bank among multiple modules.

An FPGA data architecture was required that would arbitrate access to onboard DDR (double data rate) and/or SSRAM memory for 10 to 30 independent agents. It also required a method of exchanging data directly between modules without reducing the throughput of memory access, and it had to support both low-latency reads and writes and high overall throughput.
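The arbitration requirement above — many agents interleaved onto one memory bank — can be sketched as a simple round-robin grant scheme. This is a minimal illustrative model, not the JPL implementation; the class and method names are assumptions.

```python
# Minimal round-robin arbiter model: one grant per round, fairness by
# starting the search just past the most recently granted agent.
# Illustrative sketch only; not the JPL design.

class Arbiter:
    """Interleaves access to a single memory bank among several agents."""

    def __init__(self, num_agents):
        self.num_agents = num_agents
        self.last_grant = num_agents - 1  # so agent 0 is served first

    def grant(self, requests):
        """Given the set of currently requesting agent IDs, return the
        next agent in round-robin order, or None if nobody requests."""
        if not requests:
            return None
        for offset in range(1, self.num_agents + 1):
            agent = (self.last_grant + offset) % self.num_agents
            if agent in requests:
                self.last_grant = agent
                return agent

arb = Arbiter(4)
order = [arb.grant({0, 1, 3}) for _ in range(3)]
# → [0, 1, 3]: the arbiter cycles through requesters, skipping idle agent 2
```

In hardware this would be a priority rotator rather than a loop, but the fairness property is the same: no agent can monopolize the bank.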

Each FPGA vision module had slightly different input and output requirements. Some required serial access to data, and some required random access. There were 8-bit, 16-bit, and 32-bit input/output widths. Three modules could connect directly together in series or go directly to memory, depending on runtime configuration options. One of the larger difficulties was posed by random read access. On industry-standard buses such as AMBA, PLB, or OPB, a single random-read request can take 5 to 10 clock cycles, locking out all other users on the bus until the request completes. This is far too slow for the vision modules and would effectively reduce performance by a factor of 2 to 5.
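A quick back-of-envelope check makes the quoted slowdown concrete. Assuming a module that could otherwise stream roughly one word every two cycles under pipelined access (an illustrative baseline, not a measured figure from the JPL system):

```python
# Rough slowdown estimate for a bus that locks for each random read.
# The 2-cycle baseline is an assumption for illustration.

baseline_cycles_per_word = 2          # assumed pipelined/burst access
locked_bus_cycles_per_word = [5, 10]  # quoted lockout per random read

slowdowns = [c / baseline_cycles_per_word for c in locked_bus_cycles_per_word]
# → [2.5, 5.0], consistent with the "2 to 5 times" estimate above
```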

An architecture was created that met the same data throughput as the prior, arbitration-free custom interfaces. The new architecture also allowed for multiple memory types (DDR, DDRII, SSRAM, NAND memory) without any modification of the FPGA vision modules themselves.

The current Rover Navigation FPGA Vision system contains five vision modules: Rectification, Filtering, Disparity, Feature Detector (via a Harris detector), and Visual Odometry score computation (via a sum of absolute differences operator). Further modules to handle path planning are likely.

Each vision module has an “agent” — an interface to memory for both reads and writes of different sizes: R32 denotes a read agent 32 bits wide, and W8 a write agent 8 bits wide. Each memory bank has a single arbiter that handles all memory requests to that bank. Each agent maps to a single arbiter, but because this mapping depends on the memory devices used and the number available (e.g., two DDR banks vs. six SSRAM banks), a large multiplexer called the “vision agent to bank mapping” assigns agents to the appropriate arbiters and memory banks.
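The “vision agent to bank mapping” idea can be sketched as a configurable routing table: the same agents work unchanged whether the board has six SSRAM banks or two DDR banks. All agent and bank names below are illustrative assumptions, not identifiers from the JPL design.

```python
# Configurable agent-to-bank routing table. Agent names follow the R/W +
# width convention from the article (e.g. R32 = 32-bit read agent), but
# the specific assignments here are hypothetical.

# Six-SSRAM configuration: each agent gets a dedicated bank.
mapping_six_banks = {
    "rectify_R32": 0, "rectify_W8": 1,
    "filter_R8": 2, "filter_W16": 3,
    "disparity_R16": 4, "disparity_W32": 5,
}

# Two-DDR configuration: the same agents fold onto two arbiters, which
# then interleave requests to their shared banks.
mapping_two_banks = {agent: bank % 2 for agent, bank in mapping_six_banks.items()}

def route(agent, mapping):
    """Return the arbiter/bank index that serves this agent's requests."""
    return mapping[agent]
```

Because the modules only ever see their agent interface, swapping the table (the multiplexer configuration) is the only change needed when the memory complement changes.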

Each agent can queue multiple memory requests and multiple responses from memory. This allows bursting of data for high throughput and decouples the act of requesting memory from the act of receiving data. Many vision modules have one part dedicated to computing the location of the next request, and a separate part dedicated to handling the data returned from that location.
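The decoupling described above can be sketched with two independent queues: an address-generation side issues requests, memory drains them (possibly in bursts), and a data-handling side consumes responses whenever they arrive. Class and method names are illustrative assumptions.

```python
# Split-transaction read agent sketch: requests and responses live in
# separate queues, so address generation never stalls on returning data.
# Illustrative model, not the JPL implementation.

from collections import deque

class ReadAgent:
    def __init__(self):
        self.requests = deque()   # addresses awaiting memory service
        self.responses = deque()  # data words awaiting the module

    def issue(self, addr):
        """Address-generation side: queue a request, don't wait for data."""
        self.requests.append(addr)

    def memory_service(self, memory):
        """Arbiter/memory side: drain queued requests, possibly as a burst."""
        while self.requests:
            self.responses.append(memory[self.requests.popleft()])

    def consume(self):
        """Data-handling side: take the next returned word."""
        return self.responses.popleft()

mem = {0x10: 7, 0x14: 9}
agent = ReadAgent()
agent.issue(0x10)
agent.issue(0x14)          # two requests in flight before any data returns
agent.memory_service(mem)
first = agent.consume()    # → 7, in request order
```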

This work was done by Arin C. Morfopoulos and Thang D. Pham of Caltech for NASA’s Jet Propulsion Laboratory.

This invention is owned by NASA, and a patent application has been filed. Inquiries concerning nonexclusive or exclusive license for its commercial development should be addressed to the Patent Counsel, NASA Management Office–JPL. NPO-47869



This Brief includes a Technical Support Package (TSP). “FPGA Vision Data Architecture” (reference NPO-47869) is currently available for download from the TSP library.





This article first appeared in the January, 2013 issue of NASA Tech Briefs Magazine (Vol. 37 No. 1).



Overview

The document outlines the development of an FPGA Vision Data Architecture at NASA's Jet Propulsion Laboratory (JPL), aimed at enhancing the efficiency and performance of vision modules used in space missions, particularly for the Mars Rover 2018. The architecture addresses the challenges posed by custom interfaces that each vision module required, which were costly to develop, test, and maintain.

The architecture allows for direct access to various memory types, including DDR, DDRII, SSRAM, and NAND memory, without necessitating modifications to the FPGA vision modules. This flexibility is crucial for adapting to the specific memory requirements of flight missions. The design supports up to 10-30 independent agents accessing memory, facilitating high throughput and low latency for data reads and writes. The architecture was developed to overcome the limitations of industry-standard buses, which could not meet the required memory bandwidths of 495 MB/sec during operations involving Rectification, Filtering, and Disparity processing.

The document highlights the unique relationship between the vision modules, where data can flow directly between them without accessing the memory interface, thus maintaining high throughput. The architecture also features point-to-point data paths, which enhance performance by minimizing arbitration delays.

A significant innovation of this architecture is its ability to generalize the interface requirements across all vision modules, which simplifies future development and reduces costs. The previous reliance on specific development boards (Alpha Data XRC4 and XRC5T1) limited flexibility; however, this new design liberates future projects from such constraints.

The research was funded under the R&TD 01STCR R.10.107.032 initiative, focusing on power-efficient fast traverse for Mars rovers. The advancements made in this architecture are positioned to move FPGA-based vision processing closer to flight readiness, thereby contributing to NASA's aeronautical and space activities.

In summary, the document presents a comprehensive overview of the FPGA Vision Data Architecture, emphasizing its novel approach to standardizing interfaces, enhancing data throughput, and supporting multiple memory types, all of which are critical for the success of future space missions.