
Abstract

An essential goal of Virtual Machine Introspection (VMI) is assuring security policy enforcement and overall functionality in the presence of an untrustworthy OS. A fundamental obstacle to this goal is the difficulty of accurately extracting semantic meaning from the hypervisor’s hardware-level view of a guest OS, called the semantic gap. Over the twelve years since the semantic gap problem was identified, immense progress has been made in developing powerful VMI tools. Unfortunately, much of this progress has come at the cost of reintroducing trust in the guest OS, often in direct contradiction to the underlying threat model motivating the introspection. Although this choice is reasonable in some contexts and has facilitated progress, the ultimate goal of reducing the trusted computing base of software systems is best served by a clear-eyed look at the VMI design space. This paper organizes previous work on VMI based on the essential design considerations behind any VMI system, and then explains how these design choices dictate the trust model and security properties of the overall system. The paper then identifies portions of the VMI design space that remain underexplored, as well as potential adaptations of existing techniques that could bridge the semantic gap without trusting the guest OS. Overall, this paper aims to create an essential checkpoint in the broader quest for meaningful trust in virtualized environments through VM introspection.