MILS: An Architecture for Security, Safety, and Real Time
- Created: Tuesday, 31 October 2006
The unrelenting growth and integration of embedded controls, information processing, and communications have created a need for systems that provide robust protection for resources and services in the face of serious threats. Formerly diverse requirements for different kinds of systems are now being merged into combined requirements to be met by a single system. To address this trend, a partnership of government, industry, and research institutions is developing the MILS (Multiple Independent Levels of Security/Safety) architecture. Although being pursued initially for defense applications, MILS provides a foundation for critical systems of all kinds. Its security, safety, and real-time properties make it suitable for such diverse applications as finance, medicine, and critical infrastructure. Based on a new breed of commercially available high-assurance products, MILS provides a modular, flexible, component-based approach to high-assurance systems.
There are well-established but distinct traditions for the construction of dependable safety-critical and security-critical systems. Both require a degree of assurance that goes beyond that provided by "best commercial practice." The safety of commercial airborne systems is governed by RTCA DO-178B requirements. DO-178B Level A (the highest level) may be characterized as technically conservative because it applies conventional development process and testing practices, albeit very thoroughly and conscientiously. Security assurance is achieved through conformance to the Common Criteria (CC), an international standard for security requirements. The highest level of assurance described by the CC is Evaluation Assurance Level (EAL) 7 (see Figure 1). Methods used in the pursuit of secure systems can be characterized as technically progressive because they have applied advanced construction and validation methods. Secure systems face not only an unpredictable environment but resourceful attackers. Confidence in secure systems historically has been sought through rigorous software engineering and the use of formal methods. In fact, security and formal methods grew up together in the 1970s, when many advances in formal methods were motivated and funded by security projects.
Since the 1960s, security projects have recognized that security should be designed and implemented at the lowest levels: the operating system and its supporting hardware. Early secure operating system developments include ADEPT-50, Multics, UCLA Data Secure Unix, Kernelized Secure Operating System (KSOS), Secure Communications Processor (SCOMP), Provably Secure Operating System (PSOS), Multi-Net Gateway, BLACKER, Boeing's MLS LAN, and GEMSOS. Classically, these systems incorporated security policy enforcement in a security kernel at the base of a general-purpose operating system. The policy was mandatory access control (MAC), typically a version of the Bell-LaPadula Model (BLP), also known as multilevel security (MLS). A consequence of the classic approach is the need for trusted processes that require privileges in order to perform their functions. A security kernel taken together with trusted processes comprises a Trusted Computing Base (TCB). To deploy a TCB in the protection of high-value assets requires a high degree of assurance that the TCB is indeed trustworthy to enforce its security policy. The primary barrier to proving the trustworthiness of a TCB is one of complexity. TCB components typically comprise large, complex, monolithic objects. Rigorous analysis of such objects using formal methods challenged the state of the art 25 years ago and arguably does so even today. "Formal methods" refers to the use of specifications written in languages that have formal semantics and an associated proof system, so that analyses of the specifications and proofs of desired properties may be automated (or human directed), objective, repeatable, and logically sound. Such methods can only be practically applied to objects of disciplined structure and limited complexity. Spurred by the U.S. National Security Agency, MLS operating systems were produced commercially by numerous computer vendors from the mid-1980s through the 1990s.
For the most part, these trusted OSs were evaluated to a standard of medium assurance. Unfortunately, such systems do not meet accreditation requirements for deployment in high-risk environments where sensitive data must be protected and high assurance is required.
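The core of the BLP policy mentioned above can be stated in two rules: a subject may not read data above its own clearance ("no read up"), and may not write data below it ("no write down"), so that information can only flow upward through the classification lattice. The following sketch illustrates those two rules; the level names and their ordering are illustrative, not drawn from any particular system.

```python
# Minimal sketch of the Bell-LaPadula (BLP) mandatory access control rules.
# The level names and their ordering are hypothetical examples.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property (confinement): no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

Under these rules a SECRET subject may read a CONFIDENTIAL object but may not write to it, which is precisely why real systems need privileged "trusted processes" (e.g., downgraders) that are exempted from the *-property, and why those processes must be part of the TCB.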
Separation Kernels & Middleware
MILS applies long-established principles and techniques alongside recent advancements in microprocessors, computer security, avionics safety, software engineering, and formal methods. A separation kernel, first proposed in 1981 by John Rushby of SRI International, uses protection mechanisms provided by a microprocessor to enforce (with high assurance) the primitive guarantees of data isolation and information flow control (see Figure 2). These are the prerequisite guarantees needed for the construction of software mechanisms to enforce higher-level security policies such as MLS. When the separation kernel was first conceived, microprocessor features and performance were not adequate to implement complex systems while paying the security tax for the robust protection a separation kernel provides. Today, quantitative improvement in processor speed has enabled a qualitatively different approach to security, putting MILS separation kernels squarely in the sweet spot of the perennial performance/security tradeoff. The MILS paradigm depends explicitly on Saltzer and Schroeder's Principle of Least Privilege and Principle of Complete Mediation ("The Protection of Information in Computer Systems," 1975), which are enforced within the separation kernel, and provided by the separation kernel to higher levels of the system. One of the tenets of MILS is to decompose a system into reusable components, each of which is simple enough to be rigorously analyzed for correctness and/or security properties. Most of the services provided by conventional operating systems are pushed out of the separation kernel into other high-assurance subsystems referred to as MILS middleware. The MILS separation kernel and middleware components are required to be sufficiently simple to enable formal analysis of their properties. The MILS paradigm calls for each high-assurance subsystem to be decomposed into as many elements as necessary to facilitate that analysis.
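A separation kernel's two primitive guarantees can be pictured as a static flow policy over partitions: data isolation means a partition can touch nothing outside itself by default, and information flow control means every inter-partition transfer is checked against an explicit, fixed set of authorized channels (Complete Mediation), with nothing allowed beyond what each partition needs (Least Privilege). The sketch below illustrates the idea only; the partition names and the policy are hypothetical, not the configuration format of any actual separation kernel.

```python
# Sketch of a separation kernel's static information-flow policy.
# Partition names and the allowed channels are hypothetical examples.
ALLOWED_FLOWS = {
    ("sensor", "crypto"),    # plaintext may flow only into the encryptor
    ("crypto", "network"),   # only ciphertext reaches the network partition
}

def flow_permitted(src: str, dst: str) -> bool:
    """Complete Mediation: every inter-partition transfer is checked,
    and anything not explicitly authorized is denied (data isolation)."""
    return (src, dst) in ALLOWED_FLOWS
```

Because the policy is a small, fixed table rather than code scattered through a monolithic OS, properties such as "plaintext can never reach the network partition directly" become simple, mechanically checkable claims, which is what makes the formal analysis the MILS paradigm calls for tractable.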
Complex high-assurance systems can, in turn, be constructed from MILS subsystems by building upon their functionality and security properties. LynuxWorks, in collaboration with SRI International, is developing a separation kernel and an integrated formal development environment for MILS systems. The companies also are helping lead the MILS community through The Open Group's Real-Time Embedded Systems (RTES) Forum (http://www.opengroup.org/rtforum/index.tpl). The project's goal is to enable experienced engineers who are not experts in formal methods to use a MILS separation kernel and middleware to develop high-assurance systems. The LynxSecure Separation Kernel is to be certified at the highest levels of security and safety: CC EAL 7+ and DO-178B Level A. Using Intel Virtualization Technology as a platform for its first release, it will create virtual machines able to run heavyweight operating systems, such as Microsoft Windows, as guest OSs without modification. Within The Open Group's RTES Forum, MILS stakeholders are working in concert to achieve a common vision of the MILS architecture and to develop a coherent set of community standards.