Running a Secure, Tactical, Type 1 Hypervisor on the CHAMP XD1

Bringing Virtualization to Mission Computing and Radar Systems


In our last post, we described how our partnership with Curtiss-Wright brings the benefits of virtualization to advanced mission computing and radar systems. We've pre-integrated our secure tactical virtualization solution, Titanium Secure Hypervisor, with Curtiss-Wright's popular mission computing and radar board, the CHAMP-XD1. The result: a qualified solution that meets anti-tamper and cybersecurity requirements with high levels of performance and minimal SWaP cost.

Our Titanium Secure Hypervisor is built upon the open-source and widely deployed Xen Project, and is specifically designed for hostile computing environments. It operates as trusted supervisory software within the processor – configuring and controlling both hardware resources and software execution in order to ensure and maintain the integrity of system operations. 

Titanium Secure Hypervisor leverages a hardware-based root of trust to perform a secure boot process and can optionally leverage hardware-provided security services at runtime. During system operation, the hypervisor enforces physical and logical isolation such that software loads execute within private enclaves, even though they may be running on a single physical processing board. Titanium Secure Hypervisor also has strong anti-tamper and anti-reverse-engineering protections built in. These features ensure that sensitive applications and data remain protected against unauthorized access, theft, and malicious modification – even in the face of dedicated hackers who have physical and/or logical administrative access to the processing board.

Curtiss-Wright's CHAMP-XD1 provides the hardware foundation of trust, fully leveraging Intel's security building blocks and enabling Titanium Secure Hypervisor to act as the software root of trust and isolation engine for the rest of the platform. The CHAMP-XD1 supports SR-IOV-capable peripherals, so distinct virtual functions of a device can be assigned to each guest, and it also supports direct passthrough of other peripherals. The board further provides 8-12 CPU cores, 16-32 GB of memory, local storage, a TPM, and the other physical infrastructure required for a secure, tactical virtualization platform.

We worked with Curtiss-Wright to pre-integrate these two technologies. Integrating the CHAMP-XD1 and Titanium Secure Hypervisor was straightforward and followed a series of steps that anyone can repeat for the same results. Here's how we did it:

1. Verifying the Peripheral Configuration

The first step in the integration process was ensuring that the onboard peripherals and their associated drivers support virtualization and use well-established DMA channels and memory ranges, and checking for lurking gremlins such as transparent PCIe switches, which require special handling in a virtualized environment. If the peripherals or drivers don't support virtualization, no amount of coaxing will fix that short of a costly development project. With these checks out of the way, we could move on to configuring the BIOS / UEFI.
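Part of that peripheral survey can be scripted. Below is a minimal sketch, assuming a Linux environment where `lspci -vv` output is available; the function only checks for the SR-IOV extended capability, and the device address shown in the comment is purely illustrative.

```shell
#!/bin/sh
# has_sriov: scan `lspci -vv`-style text for the SR-IOV extended
# capability. A device without it can only be passed through whole,
# not shared as distinct virtual functions across guests.
has_sriov() {
  # $1: text in the format produced by `lspci -vv` for one device
  if printf '%s\n' "$1" | grep -q 'Single Root I/O Virtualization (SR-IOV)'; then
    echo "SR-IOV capable"
  else
    echo "no SR-IOV capability"
  fi
}

# On real hardware you would run, per device of interest:
#   has_sriov "$(lspci -vv -s 0000:02:00.0)"
```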

2. Configuring the BIOS / UEFI

Within the BIOS / UEFI, we first needed to configure the onboard peripherals for passthrough. The OS or hypervisor loaded on the SBC needs to access the peripherals (either through direct passthrough using Intel's VT-d, or as purely virtual devices) and cannot do so unless this configuration is set.

Next, we needed to set up legacy emulation for the environment. Legacy emulation allows older operating systems to use modern hardware by exposing it through legacy interfaces. An example would be allowing a virtual machine running DOS to use USB mice and keyboards by presenting (or emulating) the devices as PS/2 to the VM.

Finally, we disabled hyper-threading for both performance and security reasons. Despite the performance improvements hyper-threading can provide, it enables various side-channel attacks in virtualized environments. Additionally, we wanted to tie specific physical cores to specific virtual machines for predictable performance and security.
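As an illustration of that core pinning, a Xen-style domain configuration can dedicate physical cores to a guest. This is a hypothetical fragment in Xen's xl syntax; the core numbers and vCPU count are illustrative, not values from the actual integration:

```
vcpus = 2
cpus  = "4-5"   # this guest only ever runs on physical cores 4 and 5
```

With hyper-threading disabled, each entry in `cpus` maps to a whole physical core, so no sibling thread from another guest can share its execution resources.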

3. Enabling Intel Virtualization Bits

Modern CPUs have hardware virtualization acceleration built-in, but it often needs to be explicitly turned on within the system BIOS / UEFI. These three, in particular, should be confirmed for highly performant and secure virtualization:

  1. Intel VT-x – This allows the virtual machines to leverage hardware acceleration features built into Intel processors for maximum virtualization performance.

  2. Intel VT-d – Enabling this feature allows the virtual machines to directly access peripheral devices and to restrict access by a guest to specific peripherals. It allows PCI device passthrough and IOMMU configuration.

  3. Intel TXT – This is Intel's trusted boot mechanism that is used with virtualization to ensure the proper, authenticated boot of the machine.

Of course, once these features have been enabled in the BIOS, they need to be verified as operational from the hypervisor or a guest: several features can be toggled, but the toggles don't always twiddle the correct bits, and even a runtime environment such as Linux may have to apply heuristics to correctly identify and use specific features. This is all part of the testing and integration work to ensure that Titanium Secure Hypervisor can access all of the hardware and correctly identify any features that are present and enabled.
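Two of those runtime checks can be sketched in a few lines of shell. This is a simplified illustration, assuming a Linux control environment: it confirms only the CPU flag for VT-x and the kernel's ACPI DMAR report for VT-d, not the full set of heuristics described above.

```shell
#!/bin/sh
# check_vtx: scan /proc/cpuinfo-style text for the "vmx" flag,
# which is how an enabled VT-x surfaces to the OS.
check_vtx() {
  printf '%s\n' "$1" | grep -qw vmx && echo "VT-x on" || echo "VT-x off"
}

# check_vtd: scan dmesg-style text for the ACPI DMAR table,
# which the kernel reports when VT-d is enabled in firmware.
check_vtd() {
  printf '%s\n' "$1" | grep -q 'DMAR' && echo "VT-d on" || echo "VT-d off"
}

# On real hardware:
#   check_vtx "$(cat /proc/cpuinfo)"
#   check_vtd "$(dmesg)"
```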

4. Installing the Hypervisor to Local Storage

Much like in any standard environment, we needed to install our virtualization software onto the single-board computer so that control would be transferred to it during the boot process. In the case of a Type 1 hypervisor such as Titanium Secure Hypervisor, it was necessary to first install a base OS, such as Linux, to act as the control domain. Titanium Secure Hypervisor was then installed alongside that OS: the hypervisor was added to the boot environment, and the control domain was established. To finish the control-domain installation, we installed the drivers and/or BSP packages for any peripherals used by the hypervisor, either directly or indirectly.
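On a typical Xen-based system, handing control to the hypervisor at boot looks roughly like the GRUB fragment below. The paths, filenames, and command-line options here are hypothetical; the point is the ordering, with the hypervisor booting first and the control domain's kernel and initrd loaded as its modules.

```
# Illustrative GRUB menu entry for a Type 1 hypervisor boot chain:
menuentry 'Hypervisor + control domain' {
    multiboot2 /boot/xen.gz dom0_mem=2048M,max:2048M
    module2    /boot/vmlinuz root=/dev/sda2 ro console=hvc0
    module2    /boot/initrd.img
}
```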

5. Installing Linux

The final few steps before boot included installing the guest operating system, in our case Linux, for the guest VM(s). With that in place, we configured the board support package (in the guest this time), then enforced a secure configuration using the virtualization extensions and the associated provisioning tools for Titanium Secure Hypervisor.
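A guest definition for such a setup might look like the following fragment in Xen's xl configuration syntax. All names, sizes, and the PCI address are illustrative assumptions, not values from the actual CHAMP-XD1 integration:

```
name   = "mission-guest-1"
memory = 4096
vcpus  = 2
disk   = [ 'file:/var/guests/guest1.img,xvda,w' ]
pci    = [ '0000:02:00.0' ]   # peripheral handed to this guest via VT-d passthrough
```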

6. Booting and Verification

After the first boot, we checked for peripheral devices that didn't correctly handle function-level reset and verified that guests were operating with the expected peripherals. Then we compared the performance of native vs. virtualized guests using our test bench. Finally, we verified that devices operated as expected in the various guests, including throughput, interrupt servicing, status monitoring, and daemon/service operations. With all of those checks passing, we were good to go, with a virtualized environment combining trusted hardware and software.
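One of the simpler post-boot checks, confirming that an expected guest is actually up, can be sketched as below, assuming Xen's `xl` toolstack on the control domain. The guest name is illustrative.

```shell
#!/bin/sh
# guest_running: parse `xl list`-style output (one domain per line,
# name in the first column) and report whether the named guest appears.
guest_running() {
  # $1: guest name, $2: `xl list`-style text
  if printf '%s\n' "$2" | awk '{print $1}' | grep -qx "$1"; then
    echo "$1: running"
  else
    echo "$1: not found"
  fi
}

# On the target board:
#   guest_running mission-guest-1 "$(xl list)"
```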

Results

It took ~4 hours to get this solution up and running from ground zero. We made no modifications to the Curtiss-Wright software and only minimal tweaks to Titanium Secure Hypervisor, mostly related to the heuristics used for virtualization feature detection.

We are excited to deliver a solution that can meet the high-performance needs of many mission computing and radar systems in a robust and secure way, ensuring cybersecurity at rest and cyber-resiliency at runtime on our nation's most mission-critical systems.

Click here to read more about our partnership with Curtiss-Wright to integrate Titanium Security Suite on field-proven hardware.