
TECHNOLOGY DEPLOYED

Hypervisors and Virtualization for Multicore

New Approaches to Combating Rootkits

Rootkits are a pernicious means of invading and attacking computer systems. As operating system complexity increases, the assurance of security drops. Fortunately for embedded and mobile devices, there appear to be effective ways to combat this threat.

DAVID KLEIDERMACHER, GREEN HILLS SOFTWARE



Many computer security problems involve some form of malware that attempts to infiltrate a computer and subvert its security for nefarious purposes. A rootkit is a particular type of malware that has successfully obtained elevated (e.g., root) privilege. Given this command over computer resources, a rootkit often attempts to hide its existence to avoid detection. For example, a rootkit may disable anti-malware applications or internal kernel self-checking services. In 2011, McAfee asserted the existence of over 2 million unique rootkits, and reported that 1200 new rootkits were being detected every single day.

Every developer of electronic products should be concerned about ensuring the integrity of the operating system kernel and any other application or service that is designed to operate at kernel privilege. By ensuring that only trusted software is running on the platform, rootkits cannot take hold. And if prevention of rootkits is not practical, then the goal should be at least to detect them and hopefully take some corrective action. 

There are two ways that kernel integrity can be violated by rootkits. First, the disk or flash blocks that contain the trusted software can be modified to include the rootkit. This is called a permanent rootkit. Rootkit installation can be performed with a physical attack on the storage system, or by using an operating system vulnerability to gain run-time access to the storage system. The second method is to “hook” into the kernel’s execution pathways during run-time. Installing the rootkit into the volatile memory image of the operating system enables temporary control, but the pristine image is reloaded on the next system boot.
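
To make the second method concrete, the sketch below shows the essence of a run-time hook: a single function pointer in a kernel dispatch table is redirected through malicious code that then chains to the original handler. All names and types here are invented for illustration and do not correspond to any real kernel's interfaces.

/* Hypothetical illustration of run-time kernel hooking.
 * The table, types and function names are invented for this sketch;
 * they do not correspond to any real kernel's interfaces. */

typedef long (*syscall_handler_t)(long arg0, long arg1, long arg2);

/* The kernel's dispatch table of system call handlers (simplified). */
extern syscall_handler_t syscall_table[];

#define SYS_OPEN 2  /* index of the handler being hijacked (example value) */

static syscall_handler_t original_open;  /* saved so the hook can chain to it */

/* The rootkit's replacement handler: observe or alter the call, then
 * forward to the original so the system appears to behave normally. */
static long hooked_open(long arg0, long arg1, long arg2)
{
    /* ...hide files, log credentials, filter results, etc. ... */
    return original_open(arg0, arg1, arg2);
}

/* Installing the hook is a single pointer swap in volatile memory. */
static void install_hook(void)
{
    original_open = syscall_table[SYS_OPEN];
    syscall_table[SYS_OPEN] = hooked_open;
}

Because the swap lives only in the kernel's volatile memory image, rebooting from pristine storage removes it, which is why this variant is temporary rather than permanent.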

Much of the world’s modern operating system security research is centered on making it more difficult for a rootkit to take hold by obfuscating operating system execution (e.g., address space layout randomization), reducing general operating system vulnerabilities, and by designing mechanisms to protect kernel integrity.

Secure Boot and Remote Attestation

Secure boot is the most obvious and effective way to prevent, or at least detect, permanent rootkits. The goal of secure boot is to ensure that the entire platform, including its hardware, ROM boot loaders, application-level boot loaders and operating system—everything that contributes to the establishment of the known, trusted initial state of the system—is measured and found to be authentic. Of course, secure boot does not imply that the system is secure, but rather only that it is running the expected trusted computing base (TCB).

If the hardware and boot loader have the capability to load the system firmware (operating system, hypervisor, entire TCB) from an alternative device, such as USB, rather than the intended, trusted device (e.g., flash), then an attacker with access to the system can boot an evil operating system that may act like the trusted operating system but with malicious behavior, such as disabling network authentication services or adding backdoor logins. But this is only one way to subvert systems that lack secure boot.

Instead of a malicious boot loader or operating system, an evil hypervisor can be booted, and the hypervisor can then launch the trusted operating system within a virtual machine. The evil hypervisor, such as the one known as SubVirt, has complete access to RAM and hence can silently observe the trusted environment, stealing encryption keys or modifying the system security policy. Another infamous attack, called Blue Pill, extended the hypervisor rootkit approach so that it could be launched on the fly, using weaknesses in the factory-installed Windows operating system together with the processor's hardware virtualization support to take control of the running system without modifying the boot chain or requiring a reboot.

The typical secure boot method is to verify the authenticity of each component in the boot chain; if any link in the chain is broken, the secure initial state is compromised. The first-stage ROM loader must have a pre-burned cryptographic key used to verify the digital signature of the next-level boot loader. This key may be integrated into the ROM loader image itself, installed using a one-time programmable fuse, or stored in a local Trusted Platform Module (TPM) that may provide enhanced tamper protection. The hardware root of trust must include this initial verification key.

This verification key is used to check the authenticity of the second-stage component in the boot chain. The known good signature must therefore also be stored in the hardware-protected area. The verification of the second-level component covers its executable image as well as the known good signature and signature verification key of the third stage, if any. The chain of verification can be indefinitely long, and sophisticated computing systems may have long chains or even trees of verified components that make up the TCB. Figure 1 depicts an example three-level secure boot sequence. When the verification chain begins at system reset and includes all firmware that executes prior to the establishment of the run-time steady state, this is referred to as a static root of trust.

Figure 1
Secure boot chain.
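
A minimal sketch of the chain verification in Figure 1 follows, written in C with invented names (boot_stage, verify_signature, root_key), since real boot ROM and crypto driver interfaces vary by platform. The idea is simply that each stage is checked with the key vouched for by the stage before it, starting from the hardware root of trust.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptors and helpers for illustration only. */
typedef struct {
    const uint8_t *image;      /* executable image of this stage */
    size_t         image_len;
    const uint8_t *signature;  /* signature over this stage's image (a real chain
                                  would also cover the next stage's key) */
    size_t         sig_len;
    const uint8_t *next_key;   /* verification key for the next stage */
    size_t         key_len;
} boot_stage;

/* Assumed to be provided by the ROM or crypto driver; not a real API. */
extern bool verify_signature(const uint8_t *key, size_t key_len,
                             const uint8_t *data, size_t data_len,
                             const uint8_t *sig, size_t sig_len);

/* The root verification key burned into ROM, fuses or a TPM. */
extern const uint8_t root_key[];
extern const size_t  root_key_len;

/* Walk the chain: each stage is verified with the key vouched for by the
 * previous stage; the first stage is verified with the hardware root of trust. */
bool secure_boot(const boot_stage *chain, size_t stages)
{
    const uint8_t *key = root_key;
    size_t key_len = root_key_len;

    for (size_t i = 0; i < stages; i++) {
        if (!verify_signature(key, key_len,
                              chain[i].image, chain[i].image_len,
                              chain[i].signature, chain[i].sig_len))
            return false;            /* broken link: refuse to boot */
        key = chain[i].next_key;     /* trust extends to the next stage's key */
        key_len = chain[i].key_len;
    }
    return true;                     /* entire TCB measured and authentic */
}

On a verification failure, a production loader would refuse to transfer control and fall back to whatever recovery or halt policy the platform defines.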

A dynamic root of trust, in contrast, allows an already running system (which may not be in a known secure state) to perform a measurement of the TCB chain and then partially reset the computer resources such that only this dynamic chain contributes to the secure initial state. Dynamic root of trust requires specialized hardware, such as Intel’s Trusted Execution Technology (TXT), available on some higher-end embedded Intel Architecture-based chipsets. The primary impetus behind dynamic root of trust is to remove large boot-time components, which must run to initialize a computer, from the TCB. On Intel Architecture-based systems, the BIOS is often an extremely large piece of software and has been shown to contain vulnerabilities that can be exploited to insert rootkits. By performing the dynamic reset, sometimes referred to as late launch, after the BIOS has initialized the hardware, all privilege is removed from the BIOS execution environment. The system has therefore, in theory, reduced its TCB and improved the probability of a secure initial state. Unfortunately, researchers have found several weaknesses, both in the hardware and in the software that implement the late launch mechanism, bringing into question the ability to achieve a high level of trust in complicated boot environments.

The good news for secure boot is that most embedded and mobile computing systems rely on simple boot loaders that lend themselves well to the static root of trust approach that can be implemented without specialized hardware.

Secure boot provides embedded system developers with confidence that the deployed product is resistant to low-level, boot-time firmware attacks. Nevertheless, a risk may persist that sophisticated attackers can compromise the secure boot process. Furthermore, an attacker may be able to replace the deployed product wholesale with a malicious impersonation. For example, a smart meter can be pulled off a utility pole and replaced with a rogue smart meter that looks the same but covertly sends private energy accounting information to a malicious web site. Therefore, even with secure boot, users and administrators may require assurance that a deployed product is actively running the known good TCB.

When embedded systems are connected to management networks, remote attestation can be used to provide this important security function. The Trusted Computing Group (TCG) has standardized a mechanism for TCG-compliant systems to perform remote attestation using TPM-based measurements, and network access can be denied when a connecting client fails to provide proper attestation. Within TCG, this function is called Trusted Network Connect (TNC). However, a simpler, hardware-independent approach can be used for any computing system by establishing a mutually authenticated connection (e.g., via IKE/IPsec or TLS). As long as the device’s static private key and secure connection protocol software are included in the TCB validated during secure boot, the attester has assurance that the device is running known good firmware. An improvement to this approach, which provides assurance that the device is running a specific set of trusted firmware components, is to have the client transmit the complete set of digital signatures corresponding to the TCB chain to an attester that stores the known good set of signatures locally.
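
On the attester’s side, the final comparison in that improved scheme reduces to matching the measurement set reported over the authenticated connection against the locally stored known good set. The following is a minimal sketch with invented names, assuming fixed-size SHA-256 digests purely for illustration.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DIGEST_LEN 32   /* SHA-256 digest size, assumed for this sketch */

/* One entry per component in the device's TCB chain. */
typedef struct {
    uint8_t digest[DIGEST_LEN];
} tcb_measurement;

/* Compare the set reported over the mutually authenticated connection
 * against the known good set stored by the attester.  Order is assumed
 * to follow the boot chain; a real implementation would also check
 * component identifiers and versions. */
bool attest_tcb(const tcb_measurement *reported, size_t reported_count,
                const tcb_measurement *known_good, size_t known_count)
{
    if (reported_count != known_count)
        return false;                       /* missing or extra component */

    for (size_t i = 0; i < known_count; i++) {
        if (memcmp(reported[i].digest, known_good[i].digest, DIGEST_LEN) != 0)
            return false;                   /* component does not match */
    }
    return true;                            /* device is running the known good TCB */
}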

Hyperhooking

Unfortunately, secure boot and attestation do not protect against run-time subversion via some vulnerability in the TCB. The software security industry is overflowing with snake oil solutions claiming to prevent malware. But every day brings a zero-day, and rootkits remain commonplace.

Computer security and operating system firms are slowly coming to the realization that modern sophisticated operating systems cannot be adequately protected from within, but rather require some out-of-band mechanism immune to vulnerabilities in the operating system itself.

Due to its wide availability in Intel-based desktop and server microprocessors, and increasing availability in ARM-based mobile and embedded microprocessors, hardware-based virtualization support is rapidly emerging as the mechanism of choice. Hardware virtualization hooks enable a piece of software to take control of the computer during certain security-sensitive operations, including operating system exceptions and interrupts, supervisor-mode instructions and write accesses to sensitive memory locations. We introduce the term hyperhooking for this general security approach. The hardware virtualization hooks enable a trusted agent to look for rootkits by examining system state during these trapped operations (Figure 2). These are the same hardware hooks that commercial hypervisors use to provide virtual machine services.

Figure 2
Hyperhooking.
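
The following sketch suggests what the trap handling logic of a hyperhook agent might look like. The exit reasons, structures and helper functions are hypothetical simplifications; real Intel VT and ARM virtualization interfaces are considerably more detailed.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified view of a trapped ("exited") guest operation. */
typedef enum {
    EXIT_SENSITIVE_INSTRUCTION,   /* guest executed a supervisor-mode instruction */
    EXIT_PROTECTED_WRITE,         /* guest wrote to a monitored memory region     */
    EXIT_EXCEPTION                /* guest exception or interrupt of interest     */
} exit_reason;

typedef struct {
    exit_reason reason;
    uint64_t    fault_addr;       /* address of the attempted write, if any */
    uint64_t    new_value;        /* value the guest tried to store         */
} vcpu_exit;

/* Assumed policy helpers provided by the trusted agent (not a real API). */
extern bool address_is_kernel_code(uint64_t addr);
extern bool address_is_syscall_table(uint64_t addr);
extern void raise_alert(const char *what, uint64_t addr);

/* Called by the hypervisor each time the hardware forces a VM exit on a
 * security-sensitive operation.  Returns true to let the operation proceed. */
bool hyperhook_handle_exit(const vcpu_exit *e)
{
    switch (e->reason) {
    case EXIT_PROTECTED_WRITE:
        /* Writes to kernel code or dispatch tables are classic rootkit moves. */
        if (address_is_kernel_code(e->fault_addr) ||
            address_is_syscall_table(e->fault_addr)) {
            raise_alert("blocked write to protected kernel region", e->fault_addr);
            return false;         /* deny the write */
        }
        return true;

    case EXIT_SENSITIVE_INSTRUCTION:
    case EXIT_EXCEPTION:
        /* Examine guest state here (registers, page tables, hook points)
         * and apply whatever policy the agent is configured with. */
        return true;

    default:
        return true;
    }
}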

The discerning reader will note that these same hardware virtualization hooks were used in the aforementioned hypervisor rootkit attacks; secure boot is required to ensure that only the trusted agent is installed and able to use these capabilities. And the trusted agent itself must be secure against attack.

A commercial example of hyperhooking is McAfee’s DeepSAFE technology (specific to Intel VT hardware), although little is publicized about what DeepSAFE actually does. Another commercial example that uses Intel VT is Bromium’s vSentry, in which the hyperhook agent’s actions in response to hardware traps can be configured via policy.

Both DeepSAFE and vSentry attempt to retrofit rootkit protection onto sophisticated operating systems. But as earlier protection retrofits such as SELinux have shown, there is simply too much complexity in these operating systems to manage and control. The retrofit will only temporarily raise the bar for attackers.

In 2009, researchers demonstrated a technique they called HookSafe, showing how thousands of Linux kernel control functions could be protected against hook hijacking, in which a rootkit replaces a known good function with a malicious one. The researchers employed hardware virtualization capabilities to detect and prevent attempts to overwrite the function pointers used to invoke these control functions. Despite covering thousands of control functions, the researchers admit that the technique fails to address the independent problem of rootkits that manipulate dynamic data objects (vs. control flow) to achieve their purpose. Even the set of control functions covered is not complete; a single vulnerable control point is sufficient to defeat the entire system. As the researchers state, “a fundamental limitation … is that hook access profiles are constructed on dynamic analysis and thus may be incomplete.” They concede that determining the complete set of kernel hooks exploitable by rootkits is an “interesting research problem” with no known solution.
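
The core check in a HookSafe-style scheme can be sketched as follows: when the hardware traps a write to a protected hook location, the new value is accepted only if it points into that hook’s known good target set. The names below are hypothetical, and the actual HookSafe prototype, which relocates hooks into dedicated protected memory, is considerably more involved.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Known good targets for a given kernel hook, gathered ahead of time
 * (in HookSafe's case, by profiling).  Hypothetical layout. */
typedef struct {
    uint64_t        hook_addr;        /* location of the protected function pointer */
    const uint64_t *allowed_targets;  /* addresses the pointer may legally hold     */
    size_t          allowed_count;
} hook_profile;

extern const hook_profile *find_profile(uint64_t hook_addr);
extern void                raise_alert(const char *what, uint64_t addr);

/* Invoked from the trap handler when the guest writes to a protected hook.
 * Returns true if the write matches the profile and may be applied. */
bool validate_hook_write(uint64_t hook_addr, uint64_t new_target)
{
    const hook_profile *p = find_profile(hook_addr);
    if (p == NULL)
        return false;                          /* unknown hook: deny by default */

    for (size_t i = 0; i < p->allowed_count; i++) {
        if (p->allowed_targets[i] == new_target)
            return true;                       /* legitimate update */
    }

    raise_alert("attempted hook hijack", hook_addr);
    return false;                              /* rootkit-style redirection blocked */
}

The weakness the researchers acknowledge is visible here: the scheme is only as good as the profiles behind find_profile, and any hook or legal target missing from them leaves a gap.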

Hyperhosting

Many computer security experts have come to the conclusion that there will never be a method that prevents all rootkits in sophisticated operating systems; their insurmountable complexity and high rate of change assure a constant and fertile supply of vulnerabilities. These experts are applying the same hardware-based virtualization hooks to build out-of-band security components. But rather than using the hooks only to protect the operating system, they use them to isolate, in separate virtual memory processes and/or virtual machines, those capabilities of the system that must be protected. Regardless of how many rootkits are installed in the operating system, the isolated software components remain unaffected. We call this concept of hosting security components on a hypervisor hyperhosting.

The scope of functionality that can be deployed in these isolated containers ranges from simple cryptographic functions, such as those commonly found in smart cards, to full-scale secondary operating system environments, such as those found in “dual persona” mobile phones (one OS instance used for the personal environment and a second used for the enterprise or some other critical environment). The Integrity Multivisor is an example of a bare metal hypervisor that runs on ARM or Intel processors and provides this kind of isolation environment. Unlike typical hypervisors, Integrity Multivisor can hyperhost lightweight processes in addition to full virtual machines containing guest operating systems such as Linux, Android and Windows. This architecture (Figure 3) can be used for rootkit hyperhooking, network security, data encryption, system monitoring and attestation, and more. These hypervisor components are protected against rootkits that subvert the main operating system. The hypervisor is built on separation kernel technology that has been certified to numerous security and safety standards far more stringent than what commercial operating systems alone can meet.

Figure 3
Hyperhosting security functionality.
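
From the guest’s perspective, a hyperhosted service is reached through a narrow, well-defined interface mediated by the hypervisor. The sketch below is purely illustrative; the request structure, service identifiers and hyperhost_call stub are assumptions made for this article, not the Integrity Multivisor API or any other product interface.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical message passed from the untrusted guest to a hyperhosted
 * service over a hypervisor-mediated channel.  Not a real product API. */
typedef enum {
    SVC_SIGN_DATA,        /* ask the isolated crypto service to sign a buffer */
    SVC_GET_ATTESTATION,  /* ask the monitoring service for a TCB attestation */
    SVC_CHECK_INTEGRITY   /* ask the agent to scan guest kernel structures    */
} service_request_id;

typedef struct {
    service_request_id id;
    const void *payload;
    size_t      payload_len;
    void       *response;
    size_t      response_len;
} service_request;

/* Assumed guest-side stub that traps into the hypervisor, which routes the
 * request to the protected service and copies the response back.  Keys and
 * policy never leave the isolated component, so a rootkit in the guest can
 * at worst use the service, never read its secrets. */
extern int hyperhost_call(service_request *req);

int sign_report(const uint8_t *report, size_t len,
                uint8_t *sig_out, size_t sig_out_len)
{
    service_request req = {
        .id           = SVC_SIGN_DATA,
        .payload      = report,
        .payload_len  = len,
        .response     = sig_out,
        .response_len = sig_out_len,
    };
    return hyperhost_call(&req);   /* 0 on success in this sketch */
}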

To further differentiate this new approach, we coin the term interspection, which refers to providing supervisory, health management and anti-X services from outside the main domain of interest rather than from within it (introspection). While this coarser-grained division of labor solves security problems that introspection cannot, hyperhosting requires a new way of thinking about system design: the unprotected portions of the system need well-defined interfaces to the protected portions. Standards organizations such as GlobalPlatform, the Trusted Computing Group (with its mobile TPM work) and AUTOSAR are working to address this requirement.

Green Hills Software, Santa Barbara, CA. (805) 965-6044. [www.ghs.com].