Benefits of Using Rigorously Tested Routines From Numerical Libraries

This technology helps technical application developers incorporate mathematical and statistical functionality in their applications, while providing the documentation needed for software validation.

Medical device manufacturers often have difficulty ensuring that code is correct, debugging is properly done, and documentation is available for the software validation required for regulatory compliance. Because most medical technology involves intensive mathematical and statistical methods, it is timely to reexamine whether the computational frameworks in use are designed for maximum performance.

Although many medical technology developers have long relied on prepackaged software for routine tasks, new areas of research and business development have often outpaced these prepackaged offerings. At the same time, whether they realize it or not, nearly every biotech researcher today works within a computational infrastructure built on multicore processors, which can significantly slow the performance of legacy applications originally developed for single-processor computing environments.

Performance gains, by and large, are no longer accessible through hardware upgrades alone, the historic path taken by commercial enterprises of all kinds through the decades. Today, investment in software, not hardware, may matter most. For these reasons, a reexamination of the computational infrastructure at work throughout the biotech industry is highly relevant.

Traditionally, performance improvements were largely attributable to processors with ever-faster clock speeds. In a multicore environment, however, each individual core is clocked slower, and application performance will decline unless more than one core can be utilized. Many organizations may therefore notice a degradation in application performance when they deploy the latest hardware, because they are running applications coded for a single processor on machines equipped with multicore chips.
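
To see why, consider a minimal Python sketch (the workload here is hypothetical, not drawn from any medical application) in which the same compute-bound job is run first on one core and then spread across all available cores with the standard multiprocessing module. Only the parallel version benefits from a multicore chip.

```python
import math
import multiprocessing as mp
import time

def simulate(seed):
    # Stand-in for a compute-bound numerical task.
    total = 0.0
    for i in range(1, 1_000_000):
        total += math.sin(seed + i) / i
    return total

if __name__ == "__main__":
    seeds = list(range(8))

    start = time.perf_counter()
    serial = [simulate(s) for s in seeds]      # runs on a single core
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool() as pool:                    # spreads work across all cores
        parallel = pool.map(simulate, seeds)
    t_parallel = time.perf_counter() - start

    print(f"serial: {t_serial:.1f}s  parallel: {t_parallel:.1f}s")
```

On a four-core machine the parallel run can approach a fourfold speedup, while the serial version gains nothing from the extra cores, however many are present.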

Counterintuitively, hardware that could potentially speed processing time by orders of magnitude may be responsible for significantly slowing down many applications. Organizations then experience a significant bottom-line impact without realizing why. The issue for which they are unprepared is that programming a multicore computer is more complex than developing software for a single-processor system.
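
That extra complexity is easy to demonstrate. In the hypothetical Python sketch below, four processes increment a shared counter without any locking; because each increment is a separate read-modify-write operation, updates from different processes collide and are silently lost.

```python
import multiprocessing as mp

def work(counter):
    # Each increment reads, modifies, and writes shared memory. Without a
    # lock, increments from different processes interleave and get lost.
    for _ in range(100_000):
        counter.value += 1

if __name__ == "__main__":
    counter = mp.Value("i", 0, lock=False)  # shared int, deliberately unlocked
    procs = [mp.Process(target=work, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # expected 400000; typically far less
```

A correct version must synchronize access to the shared value, for example by using the lock that mp.Value provides by default, a class of defect that simply cannot arise in single-threaded code.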

Numerical libraries have typically been the preferred mechanism by which sophisticated technical application developers could readily incorporate mathematical and statistical functionality in their applications. These libraries offer organizations a convenient way to access the true power of multicore systems.
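
The article does not name a particular library, but NumPy makes a convenient stand-in for illustration: one call into its rigorously tested linear-algebra routines replaces pages of custom code, and when NumPy is linked against a multithreaded BLAS (such as OpenBLAS or MKL) the underlying factorization is spread across the available cores with no change to the calling code.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
A = rng.standard_normal((n, n))  # a dense 2000 x 2000 system
b = rng.standard_normal(n)

# A single call into LAPACK's tested solver; a multithreaded BLAS
# underneath parallelizes the O(n^3) factorization automatically.
x = np.linalg.solve(A, b)

print(np.allclose(A @ x, b))  # True: the residual is at rounding level
```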

Custom-developed numerical code, written for use in a specific application, can be enormously time-consuming to produce and costly to maintain in the long term. Such code takes a long time to develop because of the complexity of designing the algorithmic approach best matched to the specific problem and the difficulty of encoding that algorithm in an accurate and numerically stable manner. Moreover, the very fact that the code is written for one current application suggests that its developers may not consider future numerical requirements, and therefore may not build in the flexibility and documentation needed to enable the next advance for the product or the next development project.
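
A classic illustration of the stability pitfall, sketched here in Python with made-up data: the textbook one-pass variance formula is algebraically correct yet collapses in floating point when the mean is large relative to the spread, while a tested library routine returns the right answer.

```python
import statistics

def naive_variance(xs):
    # One-pass textbook formula: (sum(x^2) - sum(x)^2 / n) / (n - 1).
    # Mathematically exact, but the two large terms cancel catastrophically
    # in floating point when the mean dwarfs the spread.
    n = len(xs)
    s = sq = 0.0
    for x in xs:
        s += x
        sq += x * x
    return (sq - s * s / n) / (n - 1)

data = [1e9 + d for d in (4.0, 7.0, 13.0, 16.0)]  # true sample variance: 30.0

print(naive_variance(data))       # garbage, possibly even negative
print(statistics.variance(data))  # 30.0
```

Getting the custom version right requires recognizing the cancellation in the first place; the library call costs one line.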

It can be argued that free algorithms, available on the Internet, provide an alternative to commercially available numerical libraries. Unfortunately, the support, maintenance, and rigorous testing of these sources are at best unpredictable, and the user of such software is therefore, perhaps unwittingly, risking the long-term viability of the application. The risk incurred may be acceptable in the short term for non-critical applications, but as new computing architectures emerge it increases significantly and may prevent optimal use of the code in the long run.

This latter point is especially important because, at any given time, a single organization may be using multiple computing environments. Organizations want to be free to choose among hardware platforms and programming languages to take best advantage of the particular characteristics of the hardware and software available, while still having confidence in any results produced.

The individual numerical methods used in fields as diverse as modeling, research, analytics, design, exploration, financial engineering, and product development must constantly evolve as well, because new, more reliable algorithms for general or specific hardware configurations are continually being developed.

Developers of numerical libraries are constantly striving, through algorithmic innovations, to provide problem-solving software that is appropriate, efficient, and robust on the wide range of computing platforms used at the time by the technical computing community. In this way, their work continually replenishes the contents of numerical libraries with new material and improved techniques and makes these libraries available on the hardware of choice.

Looking ahead a few years, another widespread shift in the standard technical computing architecture is widely predicted: the move to manycore or GPU computing, with a range of ramifications similar to those of the current migration from single-core to multicore. This scenario vividly illustrates the major problem organizations face: investment in the development of specialist code for a specific computing architecture may have a short lifespan before it becomes obsolete. In such a changing environment, most organizations will find it hard to justify that cost when off-the-shelf alternatives produced by numerical software specialists are available.