A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate its specific details; using the language of image processing, 2D data will be referred to as “images,” and each value in an image as a “pixel.” The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns.
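The row-then-column procedure described above can be sketched in NumPy (a hypothetical illustration, not the authors' MATLAB implementation); the function name `rebin2d` and the choice of block averaging as the reduction are assumptions for this example:

```python
import numpy as np

def rebin2d(image, k):
    """Down-sample a 2D image by an integer factor k, averaging k-by-k blocks.

    The reduction is fully vectorized: one step collapses all image rows,
    then one step collapses all image columns, with no per-pixel loops.
    """
    rows, cols = image.shape
    assert rows % k == 0 and cols % k == 0, "dimensions must divide evenly"
    # Single vectorized step over all rows: group each k consecutive rows
    # and average them in one array operation.
    tmp = image.reshape(rows // k, k, cols).mean(axis=1)
    # Single vectorized step over all columns: group each k consecutive
    # columns and average them in one array operation.
    return tmp.reshape(rows // k, cols // k, k).mean(axis=2)

img = np.arange(16, dtype=float).reshape(4, 4)
print(rebin2d(img, 2))  # each output pixel is the mean of a 2x2 block
```

Each of the two `mean` calls is a single whole-array operation, which is what allows the underlying runtime to dispatch it to vectorized hardware instructions.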
Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data with fewer discrete samples. Such down-sampling is fundamental to digital signal processing, e.g., for data compression applications; additional applications include image processing, filter design, and anti-aliasing techniques. Data rebinning is a computationally intensive procedure, and thus the goal of this technology development is a more efficient algorithm with reduced run times compared to existing rebinning approaches. The approach is able to take advantage of vectorized instructions, such as Single Instruction, Multiple Data (SIMD), to perform the rebinning operation.
The algorithm completely vectorizes the data rebinning operation, in the sense that a “single” arithmetic operation is applied simultaneously to multiple distinct data sets and is executed with the approximate run time of that operation applied to a single data set. For lower-level computer languages, such as C or assembly, vectorized operations can be implemented using central processing unit (CPU) single-instruction, multiple-data (SIMD) capabilities, such as Streaming SIMD Extensions 3 (SSE3) on the x86 computer architecture or AltiVec on PowerPC processors. Thus, although the algorithm has been implemented using MATLAB, it is not fundamentally tied to MATLAB, and can be implemented using other programming languages.
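The sense in which one arithmetic operation acts on many data sets at once can be illustrated with a small sketch (an assumption-laden example in NumPy, not the authors' code): a single expression averages adjacent samples in every row of a stacked array, and is compared against an explicit one-row-at-a-time loop.

```python
import numpy as np

# Three distinct 1D data sets stacked as the rows of one array.
data = np.arange(12, dtype=float).reshape(3, 4)

# Vectorized: one arithmetic expression averages adjacent sample pairs
# in ALL rows simultaneously, analogous to a SIMD instruction acting on
# several operands in a single step.
vectorized = 0.5 * (data[:, 0::2] + data[:, 1::2])

# Equivalent explicit loop, processing one data set at a time.
looped = np.array([0.5 * (row[0::2] + row[1::2]) for row in data])

assert np.array_equal(vectorized, looped)
```

The two forms produce identical results; the vectorized form simply states the operation once over the whole stack, which is what lets an interpreter or compiler map it onto SIMD hardware.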
The vectorized data rebinning (down-sampling) procedure offers a reduced run time when compared with standard rebinning algorithms. In general, algorithms are often optimized by trading decreased run time for increased memory, where the latter is needed for storing additional code, pre-computed results, or other ancillary data. However, the vectorized rebinning approach does not have increased memory requirements compared with conventional approaches. The underlying fundamental advantage of this technology is the utilization of vectorized instructions for the rebinning operation.
This work was done by Bruce Dean, David Aronstein, and Jeffrey Smith of Goddard Space Flight Center.