vm-memory

Design

In a typical Virtual Machine Monitor (VMM) there are several components, such as the boot loader, virtual device drivers, virtio backend drivers, and vhost drivers, that need to access the VM's physical memory. The vm-memory crate provides a set of traits to decouple VM memory consumers from VM memory providers. Based on these traits, VM memory consumers can access the VM's physical memory without knowing the implementation details of the VM memory provider. Thus, VMM components built on these traits can be shared and reused across multiple virtualization solutions.

The detailed design of the vm-memory crate can be found in the crate's design document.
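As an illustration of this decoupling, a VMM component can be written purely against the GuestMemory trait, so that any provider implementing the trait can back it. The sketch below is illustrative only: the load_cmdline helper and the address it receives are made up for this example, while GuestMemory, GuestAddress, Bytes and write_slice come from the crate.

use vm_memory::{Bytes, GuestAddress, GuestMemory};

// Hypothetical consumer: copies a kernel command line into guest memory.
// It depends only on the GuestMemory trait, not on a concrete provider,
// so any memory backend implementing the trait can be passed in.
fn load_cmdline<M: GuestMemory>(mem: &M, addr: GuestAddress, cmdline: &str) {
    // write_slice comes from the Bytes trait, which every GuestMemory
    // implementation provides; it fails if the range is not entirely backed.
    mem.write_slice(cmdline.as_bytes(), addr).unwrap();
}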

Platform Support

  • Arch: x86, AMD64, ARM64
  • OS: Linux/Unix/Windows

Xen support

Supporting Xen requires special handling while mapping the guest memory, so the crate provides a separate feature for it: xen. Mapping the guest memory for Xen requires an ioctl() to be issued along with mmap() for the memory area. The arguments for the ioctl() are received via the vhost-user protocol's memory region area.

Xen allows two different mapping models: Foreign and Grant.

In the Foreign mapping model, the entire guest address space is mapped at once, in advance. In the Grant mapping model, only the memory for a few regions, such as those representing the virtqueues, is mapped in advance. The remaining memory regions are mapped (partially) only while their buffers are being accessed and are unmapped immediately afterwards, which requires special handling in volatile_memory.rs.

In order to still support standard Unix memory regions, for special regions and testing, the Xen-specific implementation here allows a third mapping type: MmapXenFlags::UNIX. This performs standard Unix memory mapping and is used for all tests in this crate.

To keep the interface simple, the rust-vmm maintainers decided to build the crate for either standard Unix memory mapping or Xen, but not both.

Xen is only supported on Unix platforms.
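To build with Xen support, the xen feature mentioned above is enabled in Cargo.toml. A minimal sketch, with an illustrative pinned version instead of "*" (the exact version number is an assumption, not taken from this document):

[dependencies]
# Enable the Xen mapping support; the version shown here is only an example.
vm-memory = { version = "0.14", features = ["xen"] }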

Usage

Add vm-memory as a dependency in Cargo.toml

[dependencies]
vm-memory = "*"

Then add extern crate vm_memory; to your crate root.
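With the 2018 edition or later, the extern crate line is not needed; importing the items you use is enough. A minimal sketch listing the names that appear in the examples below (the Bytes trait brings the read/write methods into scope; GuestMemoryMmap may additionally require the crate's mmap backend feature to be enabled):

// Rust 2018+: just import what you use; no extern crate required.
use vm_memory::{Bytes, GuestAddress, GuestMemory, GuestMemoryMmap};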

Examples

  • Creating a VM physical memory object in hypervisor-specific ways using the GuestMemoryMmap implementation of the GuestMemory trait:
fn provide_mem_to_virt_dev() {
    // Two contiguous 4 KiB regions starting at guest physical address 0.
    let gm = GuestMemoryMmap::from_ranges(&[
        (GuestAddress(0), 0x1000),
        (GuestAddress(0x1000), 0x1000)
    ]).unwrap();
    virt_device_io(&gm);
}
  • Consumers accessing the VM's physical memory:
fn virt_device_io<T: GuestMemory>(mem: &T) {
    let sample_buf = &[1, 2, 3, 4, 5];
    // This access starts at 0xffc and crosses into the second region;
    // the GuestMemory implementation splits it across regions transparently.
    assert_eq!(mem.write(sample_buf, GuestAddress(0xffc)).unwrap(), 5);
    let buf = &mut [0u8; 5];
    assert_eq!(mem.read(buf, GuestAddress(0xffc)).unwrap(), 5);
    assert_eq!(buf, sample_buf);
}
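  • Typed access to plain-old-data values is also available through the Bytes trait's read_obj and write_obj methods. A minimal sketch in the same style as the examples above (the typed_io name and the 0x200 address are made up for illustration):
use vm_memory::{Bytes, GuestAddress, GuestMemory};

fn typed_io<T: GuestMemory>(mem: &T) {
    // write_obj/read_obj transfer a ByteValued type (here u64) to and from
    // guest memory at the given address.
    mem.write_obj(0x1122_3344_5566_7788u64, GuestAddress(0x200)).unwrap();
    let val: u64 = mem.read_obj(GuestAddress(0x200)).unwrap();
    assert_eq!(val, 0x1122_3344_5566_7788);
}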

License

This project is licensed under either of