An ever-increasing reliance on software control means that many companies in non-aerospace sectors (automotive, nuclear power, medical imaging, financial systems) that have no traditional requirement for sophisticated software development processes now find themselves compelled to undertake safety-critical and safety-related analysis and testing. With the need for increased software quality across different industries, a tendency has emerged for companies to look outside their own market sector for best-practice approaches, techniques, and standards. Examples of such industry crossover have been seen in the automotive and avionics industries, with the former adopting elements of the DO-178B standard and the latter adopting the Motor Industry Software Reliability Association (MISRA) standards.

In adopting out-of-sector quality and testing standards, new and unfamiliar development and testing techniques need to be implemented, such as:

  • conformance to a set of coding standards, such as MISRA-C or Joint Strike Fighter Air Vehicle Coding Standards (JSF++ AV), along with an automated checking tool;
  • formal unit testing along with informal debugging to demonstrate that requirements are satisfied as they are incrementally implemented;
  • code coverage that validates the effectiveness of testing and isolates non-executable code;
  • code coverage reports that trace all aspects of each line of source code for safety-critical components.

Let's look at each technique in detail to understand the specific challenges involved and learn ways to overcome them.

Coding Standards

The growing reliance on software in airborne systems and equipment in the early 1980s created a need for industry-accepted guidelines for satisfying airworthiness requirements. DO-178, "Software Considerations in Airborne Systems and Equipment Certification," in its revised version, DO-178B, became the defining standard for aerospace systems and software.

DO-178B is primarily a process-oriented document in which objectives are defined and a means of satisfying those objectives is described. Failure conditions associated with the system and its software components undergo system safety assessment and are assigned to one of five categories, from A (catastrophic failure) down to E (no safety effect), which determine the level of effort required to show compliance with certification requirements.

Similarly, in 1998 MISRA published its C standard to promote the use of "safe C" in the UK automotive industry. MISRA promotes the safest possible use of the language by encouraging good programming practice, focusing on coding rules, complexity measurement, and code coverage, and ensuring well-designed and well-tested code.

Lockheed Martin built on the MISRA-C guidelines, adding a set of rules for C++ language features (e.g., templates, inheritance) to create the JSF++ AV coding standard. Its adoption ensures that all software follows a consistent style, is portable to other architectures, is free from common errors, and is easily understood and maintained. To provide a general C++ guideline, MISRA released a C++ standard in June 2008.

Conforming to these standards through a traditional, manual peer-review process would be tedious and time consuming, with no guarantee of completeness and no way to demonstrate to a certification authority that the source code is 100% conformant.

Tools automate the code review process, providing a fast, repeatable process that delivers useful and usable quality reports. The MISRA standards, when used within the wider process framework of DO-178B, provide an extended model that addresses both quality and reliability. Projects that have adopted this approach have found real cost and reliability gains, and those benefits extend to non-aerospace industries as the need for quality increases.
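
For example (the identifiers here are illustrative), MISRA-C:2004 Rule 14.10 requires that every "if ... else if" construct be terminated with an "else" clause. An automated checker reports an omitted clause instantly, where a manual review might not:

    /* MISRA-C:2004 Rule 14.10: every "if ... else if" construct must
       end with an "else".  Without the final clause below, a checker
       flags the construct as non-conformant. */
    if (reading > UPPER_LIMIT)
    {
        raise_alarm();
    }
    else if (reading < LOWER_LIMIT)
    {
        raise_warning();
    }
    else
    {
        /* reading is within limits: no action required */
    }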

Functional & Unit Testing

Software teams in every industry test their components before delivery, but the quality of that testing can be dubious. As the final phase in software development, testing gets squeezed as earlier phases overrun while delivery dates must still be met.

Even when time allows, the typical approach is to use functional testing to demonstrate the capability of the software to meet its requirements. This style of testing is usually performed at the system and/or subsystem level; it is highly procedural, consisting of hundreds of "steps", and forms part of a top-down process of system validation.

Functional testing is only as good as the requirements against which the tests are developed. The Standish Group's Chaos Report reveals that only 35% of software projects (up from 16.2% in 1994) are completed on time, on budget, and to user requirements. Moreover, functional testing, which requires that the system (or subsystem) under test be coded and functional before testing can begin, cannot address requirements that are missed through complexity, ambiguity, imprecise definition, or scope creep.

Many aerospace companies now employ iterative development processes in which they focus on subsets of a modular system. This technique, typically called unit testing, is a bottom-up process that focuses on system internals, such as classes and individual functions. Unit testing facilitates early-stage prototype development and covers the paths and branches in the software that may be unpredictable or impractical to exercise from a functional testing perspective (e.g., error handlers).

Unit testing verifies a small, incomplete portion of a system that cannot execute independently. Test drivers and harnesses provide input values, record outputs, and stub missing functionality to build an executable environment. Unit testing, under-used by 90% of software engineers, is challenged by:

  1. The huge overhead associated with manually creating and maintaining test scripts;
  2. The test scripts, harnesses, and drivers being software themselves, and therefore prone to the same failings as all software;
  3. The component to be tested using language features, such as data hiding, which make it difficult to provide input values or verify outputs;
  4. No unified and structured method, so techniques are applied on a project-by-project basis with little reuse via industry-wide standards.

Traditional, manual unit-testing processes require high skill levels and involve considerable additional overhead. Automating these processes with tools standardizes the techniques while leaving room for intuition, benefits that increase efficiency and reduce costs. Automation facilitates the development of repeatable processes and standardized testing practices, as the sketch below illustrates. Tools also capture and store complete test information in a configuration management system alongside the corresponding source code, from which it can be retrieved and imported later for regression testing.
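
As a minimal sketch of what such a harness provides (the check_reading unit and its update_panel collaborator are hypothetical, chosen to match the snippets later in this article), a hand-written driver and stub might look like this:

    #include <assert.h>

    void update_panel(int led_mode);      /* resolved by the stub below */

    /* Unit under test (normally supplied by the application codebase):
       maps a sensor reading to an LED mode and reports it to the display. */
    void check_reading(float reading)
    {
        int led_mode = 0;
        if (reading > 10.0f)
        {
            led_mode = 4;
        }
        update_panel(led_mode);
    }

    /* Stub: the display subsystem does not yet exist, so the harness
       records the value passed to it for later verification. */
    static int last_led_mode = -1;

    void update_panel(int led_mode)
    {
        last_led_mode = led_mode;
    }

    /* Driver: supplies input values and verifies the recorded outputs. */
    int main(void)
    {
        check_reading(12.5f);             /* above the 10.0 threshold */
        assert(last_led_mode == 4);

        check_reading(9.5f);              /* below the threshold */
        assert(last_led_mode == 0);

        return 0;
    }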

Source Code Verification

Functional and unit testing demonstrate that the software satisfies its requirements and that errors have been removed. To determine how effective the testing has been, coverage analysis is applied in tandem with the test cases that exercise the requirements, highlighting which sections of code have and have not been executed. The identification of non-executed code pinpoints a number of potential shortcomings:

  1. Errors in the test cases;
  2. Imprecise or inadequate requirements;
  3. Inadequate requirement testing;
  4. Dead code, i.e., code that is impossible to execute.

Code coverage has several levels of precision. The minimum for DO-178B, required at Level C, is statement coverage, which shows that the test cases executed each line of source code at least once. Greater precision is required as the safety level increases. The following table illustrates the structural coverage required at each safety level:
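
    Safety Level   Failure Condition   Structural Coverage Required
    ------------   -----------------   ------------------------------------------
    A              Catastrophic        Statement, decision, and MC/DC
                                       (plus source-to-object code verification)
    B              Hazardous           Statement and decision coverage
    C              Major               Statement coverage
    D              Minor               No structural coverage required
    E              No effect           No structural coverage required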

Statement coverage may be sufficient to identify missing test cases as illustrated in the following code snippet:
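
A representative fragment (the identifiers are illustrative; comments stand in for the tool's color-coded coverage report):

    int led_mode = 0;               /* covered     */

    if (reading > 10.0f)            /* covered     */
    {
        led_mode = 4;               /* covered     */
    }
    else
    {
        log_reading(reading);       /* NOT covered */
    }
    update_panel(led_mode);         /* covered     */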

With the covered lines marked, it is clear that a new test case is needed to exercise the situation where "reading" is not greater than 10.

Statement coverage is sufficient to identify dead code here:
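
A representative fragment (identifiers again illustrative):

    int led_mode = 0;

    if (reading > 10.0f)
    {
        led_mode = 4;
    }

    /* "led_mode" can only be 0 or 4 at this point, so the condition
       below is always false and the call it guards is dead code. */
    if ((led_mode != 0) && (led_mode != 4))
    {
        update_panel(led_mode);     /* NOT covered: unreachable */
    }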

No matter how many additional test cases are created, the call to "update_panel" can never be reached. The root of the problem may be a design error in another part of the code, or code that is not traceable to any requirement.

Typically, due to "if-then" branches and loops, there are several routes through a software component and, above safety level C, each route must be exercised and reported as covered. This is known as decision coverage and may be illustrated by the following code snippet:
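
A representative fragment, with branch coverage marked in comments:

    int led_mode = 0;               /* covered                           */

    if (reading > 10.0f)            /* decision: false outcome only      */
    {
        led_mode = 4;               /* NOT covered                       */
    }
    update_panel(led_mode);         /* covered, but only with led_mode 0 */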

The report shows that we have exercised the code with values of "reading" up to 10.0 but not above. Statement coverage would highlight this too, of course, but it would not show the converse, i.e., values of "reading" above 10.0 but not below. We also need to be sure that "update_panel" has been called both with "led_mode" set to 4 and with "led_mode" left at its initial value of 0.

The highest level of coverage precision is modified condition/decision coverage (MC/DC). This is reserved for software components assessed at safety level A and places the component under exhaustive testing to prove that each decision tries every possible outcome, each condition in a decision takes on every possible outcome, and each condition in a decision is shown to independently affect the outcome of the decision. In simple terms, we are concerned with the permutations of a compound condition as illustrated in this code snippet:
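
A representative fragment:

    /* MC/DC requires test cases demonstrating that each condition
       independently affects the outcome; in simple terms, the
       permutations of "reading > 10.0" and "submode == 3". */
    if ((reading > 10.0f) && (submode == 3))
    {
        update_panel(4);
    }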

The coverage report needs to confirm that we have exercised the code with "reading" both above 10.0 and below it, in combination with "submode" being 3 and some other value, i.e., four permutations.

Source code verification gauges the effectiveness of testing, whether by proving that all requirements are satisfied or by uncovering problems. This task cannot be undertaken manually. Thankfully, coverage analysis is highly automated in most test tools, making its use virtually transparent. The tools increase the quality of the code and integrate the overall test process, reducing application failure rates and maintenance costs.

Object Code Verification

Object code verification focuses on how much the control flow structure of the compiler-generated object code differs from that of the application source code. Traditional structural coverage is applied at the source code level, but it is the object code that executes on the processor, so differences in control flow structure between the two can leave significant gaps in coverage. For example, flow graphs generated from the same procedure in Source Mode and in Object-box Mode can differ: the object code graph may contain a branch, introduced by the compiler, that does not appear in the source code graph.
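
One common source of such hidden branches (a sketch, assuming a typical compiler) is short-circuit evaluation:

    /* A single source statement, so statement coverage is satisfied by
       one execution.  The short-circuit "&&", however, typically
       compiles to a conditional branch that skips the second comparison
       whenever the first is false: the object code contains two paths,
       one of which a single test case leaves unexercised. */
    int alarm_needed = (reading > 10.0f) && (submode == 3);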

Clear, easy-to-use reports help engineers quickly build the test cases needed to achieve 100% unit coverage. Without such reports, the effort required to identify each path through the object code would mean longer timescales and higher costs.

Software components in aerospace systems assessed at DO-178B Level A must be object code verified. While this is the toughest testing discipline, it is now being considered in other sectors as more safety-critical components are deployed in automobiles, medical equipment, and transport control systems. Similarly, critical components in telecom and financial systems face increased quality requirements because of the high monetary cost of failure. Although safety-critical components represent only a subset of the whole application, verification at the object code level requires considerable resources in terms of time and money. Deploying automated, compiler-independent processes helps reduce overall development costs severalfold and delivers high-quality software components with minimal chance of failure.

Conclusion

Without doubt, the adoption of out-of-sector development processes and standards presents a significant challenge. However, the processes and directives of DO-178B are being adopted as best practice outside the aerospace industry for systems with safety-related characteristics. With the right tools and facilities, the scope of these challenges can be greatly reduced, enabling projects to realize the full potential and benefits that rigorous quality analysis, testing, and verification can bring in terms of increased code quality, improved reliability, and cost savings.

This article was written by Brian Hooper, Field Applications Engineer, and Bill StClair, Technical Evangelist, LDRA Technology, Inc. (San Bruno, CA).