High-performance computing (HPC) has transformed science and engineering over the past 20 years, but some fields have yet to fully realize its benefits due to software limitations. This article discusses software approaches to increasing productivity in the life science discipline of neural imaging. Like other imaging-based endeavors, neural imaging faces daunting quantities of raw data, proprietary image formats, trade-offs between lossy and lossless compression, detector noise, complex object segmentation, and visualization challenges.

[Figure: MRI brain scan]

Parallelization enables faster data processing and the scalability to handle larger datasets. It can alleviate bottlenecks caused by serial processing of large data volumes, which in turn enables scientists and engineers to more rapidly develop novel methods for monitoring neurobiology and next-generation therapies. However, traditional parallelization approaches are often thwarted by lengthy and challenging application development efforts, so implementing an efficient, user-friendly HPC environment is critical. Making HPC environments easy to use requires that low-level tasks such as launching distributed applications, data distribution, parallel I/O, and job scheduling be handled automatically for the user. This article describes two computationally intensive areas of neural imaging: image analysis and data visualization.

Image Analysis

[Figure: Star-P software runs MATLAB® programs on parallel clusters.]

Several imaging modalities are used in neural imaging, depending on the measurements required. Most commonly, relatively low-resolution methods such as magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) are used individually or in combination to probe areas of neural activity within living subjects. Repeating these scans yields longitudinal data from which changes in activity can be mapped over time. Intensity maps of MRI-detected signals are typically interpreted qualitatively by physicians to localize regions of activity or damage. DTI data is generated by applying magnetic field gradients along a range of directions so that the anisotropic diffusion of water can be computed using a 3 × 3 mathematical operator called a tensor. Physicians can use the resulting image to map neural connections and areas of injury at much higher resolution than with traditional MRI.
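To make the tensor computation concrete, the sketch below computes fractional anisotropy (FA), a standard scalar summary of how directional diffusion is at a voxel, from a 3 × 3 diffusion tensor using NumPy. This is a minimal illustration rather than part of any particular DTI package, and the example tensor values are hypothetical.

```python
import numpy as np

def fractional_anisotropy(D):
    """Compute fractional anisotropy (FA) from a 3x3 diffusion tensor.

    FA is 0 for perfectly isotropic diffusion and approaches 1 when
    diffusion is confined to a single axis (e.g., along a fiber tract).
    """
    # Eigenvalues of the symmetric tensor are the diffusivities
    # along its three principal axes.
    lam = np.linalg.eigvalsh(D)
    mean = lam.mean()
    # Standard FA formula: normalized deviation of the eigenvalues
    # from their mean.
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# Hypothetical tensor with strong diffusion along one axis, as might
# be measured inside a coherent white-matter tract.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])  # diffusivities in mm^2/s
print(fractional_anisotropy(D))        # ~0.80
```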

Mapping brain structure involves efforts across a range of scales. At the largest scale, morphometric analyses such as Large Deformation Diffeomorphic Metric Mapping (LDDMM) segment regions using 3D structures as query templates. This information populates morphometry databases that, in turn, can be annotated and queried later. Brain atlases are being constructed for several species to map not only the fine structure of this complex tissue, but also the gene expression profiles of sub-regions detected using in situ hybridization.

At the other end of the scale, high-resolution optical methods trace the 3D structure of fluorescently stained neurons, either in extracted tissue or in situ through surgically inserted windows that allow images to be recorded over long periods of time. Intensity-tracing and edge-detection algorithms are applied to 2D, 3D, and time-resolved 3D images using specialized software packages such as ImarisXT (Bitplane) and Neurolucida (MBF Bioscience). These applications segment structures of interest, enabling easier downstream visualization and extraction of quantitative data such as the volume, shape, and location of regions, and the length, direction, and branching of neural processes.
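As a rough sketch of what such segmentation pipelines do internally, the example below uses the open-source scipy.ndimage as a stand-in for the commercial packages named above; the function name, parameter values, and random test volume are all illustrative.

```python
import numpy as np
from scipy import ndimage

def segment_structures(volume, sigma=1.0, threshold=0.25):
    """Toy segmentation of bright, stained structures in a 3D stack.

    Smooths to suppress detector noise, computes the gradient
    magnitude to find edges, then thresholds and labels connected
    components.
    """
    # Gaussian smoothing reduces detector noise before edge detection.
    smoothed = ndimage.gaussian_filter(volume.astype(float), sigma=sigma)
    # 3D gradient magnitude highlights boundaries of stained structures.
    edges = ndimage.generic_gradient_magnitude(smoothed, ndimage.sobel)
    mask = edges > threshold * edges.max()
    # Connected-component labeling yields one integer label per
    # structure, from which volume, location, and shape statistics
    # can then be extracted.
    labels, count = ndimage.label(mask)
    return labels, count

volume = np.random.rand(64, 64, 64)   # stand-in for a confocal stack
labels, count = segment_structures(volume)
print(count, "candidate structures")
```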

Data Visualization

Structures and dynamic features measured in neural imaging are typically represented as 3D maps that the user can interact with in real time. The data may span scales from millimeters to microns and carry information about both physical structure and molecular composition. Visualization tools may be tailored to users’ needs. Many Web-based atlases let the user view 3D structures interactively. More sophisticated image data, such as that collected by DTI, is often rendered with a combination of pseudo-coloring and vector-field annotation to show structures detected by anisotropic diffusion. In a research setting, however, the user requirements for data visualization are typically far more sophisticated. The most critical and effective visualization environments, such as the Mouse BIRN Atlasing Toolkit (www.loni.ucla.edu/software), are linked to analysis tools and to databases containing repositories of images and extracted data. This capability enables iterative analysis and review of image-processing methods as they are developed or applied to new data. Rate-limiting steps often include querying the data repository and moving large quantities of image and calculated table data from servers to desktops.
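The pseudo-coloring-plus-vector-field display described above can be sketched with standard Python plotting tools. The sketch below uses matplotlib with fabricated anisotropy and orientation arrays, so all names and values are hypothetical stand-ins for real DTI output.

```python
import numpy as np
import matplotlib.pyplot as plt

# Fabricated per-voxel anisotropy map and in-plane fiber orientations
# for one 2D slice; in practice both come from the per-voxel tensor
# eigendecomposition.
ny, nx = 32, 32
fa = np.random.rand(ny, nx)              # scalar anisotropy per voxel
theta = np.random.rand(ny, nx) * np.pi   # in-plane fiber orientation
vx, vy = np.cos(theta), np.sin(theta)

fig, ax = plt.subplots()
# Pseudo-color the anisotropy map...
im = ax.imshow(fa, cmap="viridis", origin="lower")
fig.colorbar(im, ax=ax, label="fractional anisotropy")
# ...and overlay a vector field showing principal diffusion directions.
step = 4  # subsample so the arrows stay legible
ax.quiver(np.arange(0, nx, step), np.arange(0, ny, step),
          vx[::step, ::step], vy[::step, ::step],
          color="white", pivot="middle", scale=40)
ax.set_title("DTI slice: pseudo-color plus direction field")
plt.show()
```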

Meeting Performance Requirements

All projects have performance requirements of some form. The need for performance varies with the project, the stage in the pipeline, and the type and purpose of the research. For example, large single images may require a system with large memory capacity, and turnaround goals (such as “complete overnight”) may require running on a parallel cluster.
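As a back-of-envelope illustration of the memory side of that example (the volume dimensions here are hypothetical, chosen only to show the arithmetic):

```python
# Memory estimate for a single large image volume.
nx = ny = nz = 2048            # voxels per axis (hypothetical)
bytes_per_voxel = 4            # 32-bit float intensity
raw = nx * ny * nz * bytes_per_voxel
print(raw / 2**30, "GiB just to hold the raw volume")  # 32.0 GiB
# Intermediate copies made by a typical processing pipeline can easily
# double or triple this, pushing the job beyond a desktop's memory and
# onto a large-memory server or a cluster.
```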

Today, users prefer very high-level languages (VHLLs) such as Python, MATLAB®, and Mathematica to develop their imaging models because of these tools’ high-level constructs and interactive environments. The problem is that these desktop tools have not traditionally been used to program applications optimized for parallel systems (whether shared-memory machines, clusters, or grids), which are still programmed in lower-level languages such as C and Fortran, together with the Message Passing Interface (MPI) protocol for inter-processor communication.
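The sketch below shows the explicit message-passing style such systems demand. For brevity it is written with mpi4py (one of the parallel Python extensions mentioned below) rather than C or Fortran, and the per-voxel filter is a placeholder; the point is that the programmer must manage data distribution and communication by hand.

```python
# Run with, e.g., `mpiexec -n 4 python filter_volume.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# The root process loads (here: fabricates) the volume and slices it
# into one slab per process, by hand.
if rank == 0:
    volume = np.random.rand(64, 64, 64)
    chunks = np.array_split(volume, size, axis=0)
else:
    chunks = None

# Scatter one slab to each rank, process locally, gather results back.
local = comm.scatter(chunks, root=0)
local = local ** 2          # stand-in for a real per-voxel filter
result = comm.gather(local, root=0)

if rank == 0:
    filtered = np.concatenate(result, axis=0)
    print(filtered.shape)
```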

The good news is that a variety of new tools are available to bridge the desktop-cluster divide. Over the past several years, both commercial desktop application vendors and the open source community have introduced programming tools that extend these applications to parallel systems. These include Interactive Supercomputing’s Star-P, GridMathematica from Wolfram Research, The Distributed Computing Toolbox from The MathWorks, and parallel extensions to the open-source language Python, to name a few. While these tools vary widely in terms of imaging and visualization capabilities, they all represent a great leap forward for the neural imaging community.
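By contrast, the high-level, data-parallel style these tools aim to provide can be approximated with Python’s standard multiprocessing module. This is a generic sketch of the programming style, not any vendor’s actual API: the per-slice operation is expressed once, and the runtime handles the distribution of work.

```python
import numpy as np
from multiprocessing import Pool

def filter_slice(slab):
    return slab ** 2  # stand-in for a real per-voxel filter

if __name__ == "__main__":
    volume = np.random.rand(64, 64, 64)
    # Map the operation over the slices; the pool distributes the work.
    with Pool() as pool:
        filtered = np.stack(pool.map(filter_slice, volume))
    print(filtered.shape)
```

The comparison with the MPI sketch above highlights the division of labor: there, the programmer owns data distribution and communication; here, the runtime does.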

This article was written by David Rich, VP of Marketing at Interactive Supercomputing, Waltham, MA. For more information, visit: http://info.hotims.com/22918-166.