Oxidize Hafnium's initialization sequence (successor of #45) #52

Merged: 50 commits, Sep 11, 2019

Commits
00a1f2d
Oxidize load.c and main.c
efenniht Aug 30, 2019
bc7ca9f
Oxidize unit test code (abi_test.cc)
efenniht Aug 31, 2019
2636ccc
Remove c++ test codes and add `cargo test` in test script.
efenniht Aug 31, 2019
30e6ded
Oxidize FDT
efenniht Aug 31, 2019
864fa4e
Oxidize FDT (cont'd).
efenniht Aug 31, 2019
8a640d8
Export C functions.
efenniht Sep 1, 2019
172c40a
Format and fix to call fdt_handler::{map, unmap, patch}.
efenniht Sep 1, 2019
05accd4
Remove plat.rs, since `#pragma weak` is not supported in Rust. Make s…
efenniht Sep 4, 2019
77b1dc8
Refactor init, load, boot_params
efenniht Sep 4, 2019
d21425f
Fix wrongly ported code (!*p should be *p == 0)
efenniht Sep 4, 2019
c1e4486
Fix wrongly ported code (write to the given reference, and continue e…
efenniht Sep 4, 2019
f2fba0f
Use different constants for unit tests.
efenniht Sep 4, 2019
aa8020d
Resolve warnings on FDT and FDT handler.
efenniht Sep 4, 2019
4d1e6a9
Make a singleton object representing MemoryManager.
efenniht Sep 5, 2019
14c7d6e
rustfmt.
efenniht Sep 5, 2019
4677f7c
Resolve some clippy issues.
efenniht Sep 5, 2019
4b85b64
Fix mem_ranges_available to be initialized properly
efenniht Sep 5, 2019
fc87686
Refactor VM init sequence.
efenniht Sep 5, 2019
896d267
Refactor load_secondary.
efenniht Sep 5, 2019
be86a72
Refactor VM init sequence. (cont'd)
efenniht Sep 5, 2019
16c7781
Refactor API init sequence.
efenniht Sep 5, 2019
7d75af6
Remove vm_find.
efenniht Sep 5, 2019
7dbad0b
Refactor CPU init sequence.
efenniht Sep 5, 2019
0a9cb50
Correct one-time initialization to run
efenniht Sep 6, 2019
d6b350b
Make safe wrapper functions to read singletons.
efenniht Sep 6, 2019
7dac8a4
Fix Vm to know its address correctly while initialized
efenniht Sep 6, 2019
54c3e18
Remove unused imports
efenniht Sep 6, 2019
eebc853
Resolve simple issues on the review of #45
efenniht Sep 7, 2019
9e52fae
More idiomatic init of Cpu
efenniht Sep 7, 2019
cfdb622
Resolve simple issues on the review of #45 (cont'd)
efenniht Sep 8, 2019
7fc3b7f
Fix a bug introduced from cfdb6227a98e4a13f048ef451dfa1ef9bdbdba45
efenniht Sep 9, 2019
5f7a90b
Resolve simple issues on the review of #45 (cont'd)
efenniht Sep 9, 2019
1f8d79a
Call one_time_init directly in hypervisor_entry, so remove INITED.
efenniht Sep 9, 2019
cf675b4
Remove cpu_module_init and move related statics to init.rs
efenniht Sep 9, 2019
9d61512
Fix stack overflow during Vm initialization.
efenniht Sep 10, 2019
7a371f5
Call arch_cpu_module_init() in init.rs
efenniht Sep 11, 2019
950dd1d
Revert 1f8d79a, but call one_time_init in cpu_entry.
efenniht Sep 11, 2019
7de19be
Remove singleton.rs and gather all singletons into `Hypervisor`.
efenniht Sep 11, 2019
04dbdf8
Use ptr::write not to drop PageTable, and remove unnecessary stuffs.
efenniht Sep 11, 2019
2c70109
Use ptr::write more.
efenniht Sep 11, 2019
0180a56
Do not initialize ArchRegs when created, to prevent stack overflow.
efenniht Sep 11, 2019
8de583d
Checks cpu_ids contains zero or multiple boot CPU IDs, and make a TODO.
efenniht Sep 11, 2019
31b1169
Use ok_or! / unwrap_or!
efenniht Sep 11, 2019
4c5395e
Revert a part of 31b1169, that was wrong.
efenniht Sep 11, 2019
3d5c7c8
Make basic getters and util functions of MemIter. Remove `pub`s of me…
efenniht Sep 11, 2019
c02d16b
Don't call cpu_entry; just jump.
efenniht Sep 11, 2019
fa7b08a
Renaming unwrap_or, to some_or.
efenniht Sep 11, 2019
045a5e0
Resolve simple issues on the review of #52
efenniht Sep 11, 2019
9bf3363
FP-style seeking
efenniht Sep 11, 2019
b9d83ad
Make INITED be atomic.
efenniht Sep 11, 2019
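
Several of the commits above (31b1169, 4c5395e, fa7b08a) concern the early-return helper macros `some_or!` and `ok_or!` that appear throughout the diff below. Their definitions are not part of this diff; the following is a minimal sketch inferred from their call sites, where the second argument is typically a `return` expression:

// Hypothetical definitions consistent with usage in this PR; the real
// macros live elsewhere in the hfo2 crate.
macro_rules! some_or {
    // Unwrap an Option, or evaluate the early-exit expression on None.
    ($opt:expr, $err:expr) => {
        match $opt {
            Some(v) => v,
            None => $err,
        }
    };
}

macro_rules! ok_or {
    // Unwrap a Result, or evaluate the early-exit expression on Err.
    ($res:expr, $err:expr) => {
        match $res {
            Ok(v) => v,
            Err(_) => $err,
        }
    };
}

With these, `let vm = some_or!(hafnium().vm_manager.get(vm_id), return 0);` either binds the looked-up VM or returns early from the surrounding function, since `return` is itself an expression of type `!`.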
81 changes: 81 additions & 0 deletions hfo2/src/abi.rs
@@ -115,3 +115,84 @@ impl TryFrom<usize> for HfShare {
}
}
}

#[cfg(test)]
mod test {
use super::*;

/// Encode a preempted response without leaking.
#[test]
fn abi_hf_vcpu_run_return_encode_preempted() {
let res = HfVCpuRunReturn::Preempted;
assert_eq!(res.into_raw(), 0);
}

/// Encode a yield response without leaking.
#[test]
fn abi_hf_vcpu_run_return_encode_yield() {
let res = HfVCpuRunReturn::Yield;
assert_eq!(res.into_raw(), 1);
}

/// Encode wait-for-interrupt response without leaking.
#[test]
fn abi_hf_vcpu_run_return_encode_wait_for_interrupt() {
let res = HfVCpuRunReturn::WaitForInterrupt {
ns: HF_SLEEP_INDEFINITE,
};
assert_eq!(res.into_raw(), 0xffffffffffffff02);
}

/// Encoding wait-for-interrupt response with too large sleep duration will
/// drop the top octet.
#[test]
fn abi_hf_vcpu_run_return_encode_wait_for_interrupt_sleep_too_long() {
let res = HfVCpuRunReturn::WaitForInterrupt {
ns: 0xcc22888888888888,
};
assert_eq!(res.into_raw(), 0x2288888888888802);
}

/// Encode wait-for-message response without leaking.
#[test]
fn abi_hf_vcpu_run_return_encode_wait_for_message() {
let res = HfVCpuRunReturn::WaitForMessage {
ns: HF_SLEEP_INDEFINITE,
};
assert_eq!(res.into_raw(), 0xffffffffffffff03);
}

/// Encoding wait-for-message response with too large sleep duration will
/// drop the top octet.
#[test]
fn abi_hf_vcpu_run_return_encode_wait_for_message_sleep_too_long() {
let res = HfVCpuRunReturn::WaitForMessage {
ns: 0xaa99777777777777,
};
assert_eq!(res.into_raw(), 0x9977777777777703);
}

/// Encode wake up response without leaking.
#[test]
fn abi_hf_vcpu_run_return_encode_wake_up() {
let res = HfVCpuRunReturn::WakeUp {
vm_id: 0x1234,
vcpu: 0xabcd,
};
assert_eq!(res.into_raw(), 0x1234abcd0004);
}

/// Encode a 'notify waiters' response without leaking.
#[test]
fn abi_hf_vcpu_run_return_encode_notify_waiters() {
let res = HfVCpuRunReturn::NotifyWaiters;
assert_eq!(res.into_raw(), 6);
}

/// Encode an aborted response without leaking.
#[test]
fn abi_hf_vcpu_run_return_encode_aborted() {
let res = HfVCpuRunReturn::Aborted;
assert_eq!(res.into_raw(), 7);
}
}
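
The raw values asserted above all follow one packing scheme: the low octet holds the variant code and the payload occupies the upper bits, which is why only 56 bits of a sleep duration survive. A self-contained sketch that reproduces the expected values (a hypothetical mirror of `HfVCpuRunReturn::into_raw`, whose real definition appears earlier in abi.rs):

// Field positions inferred from the test expectations above.
enum VCpuRunReturn {
    Preempted,                        // code 0
    Yield,                            // code 1
    WaitForInterrupt { ns: u64 },     // code 2; ns packed into bits 8..64
    WaitForMessage { ns: u64 },       // code 3; ns packed into bits 8..64
    WakeUp { vm_id: u64, vcpu: u64 }, // code 4; vm_id in bits 32..48, vcpu in 16..32
    NotifyWaiters,                    // code 6
    Aborted,                          // code 7
}

impl VCpuRunReturn {
    fn into_raw(self) -> u64 {
        match self {
            Self::Preempted => 0,
            Self::Yield => 1,
            // `ns << 8` on a u64 discards the top octet, matching the
            // `sleep_too_long` tests.
            Self::WaitForInterrupt { ns } => (ns << 8) | 2,
            Self::WaitForMessage { ns } => (ns << 8) | 3,
            Self::WakeUp { vm_id, vcpu } => (vm_id << 32) | (vcpu << 16) | 4,
            Self::NotifyWaiters => 6,
            Self::Aborted => 7,
        }
    }
}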
116 changes: 35 additions & 81 deletions hfo2/src/api.rs
@@ -24,6 +24,7 @@ use crate::abi::*;
use crate::addr::*;
use crate::arch::*;
use crate::cpu::*;
use crate::init::*;
use crate::mm::*;
use crate::mpool::*;
use crate::page::*;
@@ -47,19 +48,6 @@ use crate::vm::*;
// of a page.
const_assert_eq!(hf_mailbox_size; HF_MAILBOX_SIZE, PAGE_SIZE);

/// A global page pool for sharing memories. Its mutability is needed only for
/// initialization.
static mut API_PAGE_POOL: MaybeUninit<MPool> = MaybeUninit::uninit();

/// Initialises the API page pool by taking ownership of the contents of the
/// given page pool.
/// TODO(HfO2): The ownership of `ppool` is actually moved from `one_time_init`
/// to here. Refactor this function like `Api::new(ppool: MPool) -> Api`. (#31)
#[no_mangle]
pub unsafe extern "C" fn api_init(ppool: *const MPool) {
API_PAGE_POOL = MaybeUninit::new(MPool::new_from(&*ppool));
}
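
The removal of API_PAGE_POOL and api_init above is the pattern of the whole PR: per-module singletons are gathered into one `Hypervisor` struct (commit 7de19be) and reached through the `hafnium()` accessor used in the new code below. A minimal sketch of the assumed shape, with field names taken from call sites in this diff (the actual definition lives in init.rs, not here):

// Hypothetical layout; only the fields exercised in this diff are shown.
struct Hypervisor {
    mpool: MPool,                  // replaces API_PAGE_POOL
    memory_manager: MemoryManager, // owns hypervisor_ptable
    vm_manager: VmManager,         // replaces vm_find / vm_get_vcpu
}

static mut HYPERVISOR: MaybeUninit<Hypervisor> = MaybeUninit::uninit();

/// Sound only after one_time_init() has written HYPERVISOR exactly once,
/// before any secondary CPU can call this.
fn hafnium() -> &'static Hypervisor {
    unsafe { HYPERVISOR.get_ref() }
}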

/// Switches the physical CPU back to the corresponding vcpu of the primary VM.
///
/// This triggers the scheduling logic to run. Run in the context of secondary
@@ -70,11 +58,11 @@ unsafe fn switch_to_primary(
mut primary_ret: HfVCpuRunReturn,
secondary_state: VCpuStatus,
) -> *mut VCpu {
let primary = vm_find(HF_PRIMARY_VM_ID);
let next = vm_get_vcpu(
primary,
cpu_index(current.get_inner().cpu) as spci_vcpu_index_t,
);
let primary = hafnium().vm_manager.get(HF_PRIMARY_VM_ID).unwrap();
let next = primary
.vcpus
.get(cpu_index(&*current.get_inner().cpu))
.unwrap();

// If the secondary is blocked but has a timer running, sleep until the
// timer fires rather than indefinitely.
@@ -94,16 +82,15 @@ unsafe fn switch_to_primary(
// The use of `get_mut_unchecked()` is safe because the currently running pCPU implicitly owns
// `next`. Notice that `next` is the vCPU of the primary VM that corresponds to the currently
// running pCPU.
(*next)
.inner
next.inner
.get_mut_unchecked()
.regs
.set_retval(primary_ret.into_raw());

// Mark the current vcpu as waiting.
current.get_inner_mut().state = secondary_state;

next
next as *const _ as usize as *mut _
}

/// Returns to the primary vm and signals that the vcpu still has work to do so.
@@ -225,19 +212,14 @@ pub unsafe extern "C" fn api_vcpu_get_count(
vm_id: spci_vm_id_t,
current: *const VCpu,
) -> spci_vcpu_count_t {
let vm;

// Only the primary VM needs to know about vcpus for scheduling.
if (*(*current).vm).id != HF_PRIMARY_VM_ID {
return 0;
}

vm = vm_find(vm_id);
if vm.is_null() {
return 0;
}
let vm = some_or!(hafnium().vm_manager.get(vm_id), return 0);

(*vm).vcpu_count
vm.vcpus.len() as _
}

/// This function is called by the architecture-specific context switching
@@ -286,7 +268,7 @@ unsafe fn internal_interrupt_inject(
/// `vcpu.execution_lock` has acquired.
unsafe fn vcpu_prepare_run(
current: &VCpuExecutionLocked,
vcpu: *mut VCpu,
vcpu: &VCpu,
run_ret: HfVCpuRunReturn,
) -> Result<VCpuExecutionLocked, HfVCpuRunReturn> {
let mut vcpu_inner = (*vcpu).inner.try_lock().map_err(|_| {
@@ -300,13 +282,9 @@ unsafe fn vcpu_prepare_run(
run_ret
})?;

if (*(*vcpu).vm).aborting.load(Ordering::Relaxed) {
if (*vcpu.vm).aborting.load(Ordering::Relaxed) {
if vcpu_inner.state != VCpuStatus::Aborted {
dlog!(
"Aborting VM {} vCPU {}\n",
(*(*vcpu).vm).id,
vcpu_index(vcpu)
);
dlog!("Aborting VM {} vCPU {}\n", (*vcpu.vm).id, vcpu_index(vcpu));
vcpu_inner.state = VCpuStatus::Aborted;
}
return Err(run_ret);
@@ -324,13 +302,13 @@ unsafe fn vcpu_prepare_run(
// when it is going to be needed. This ensures there are no inter-vCPU
// dependencies in the common run case meaning the sensitive context
// switch performance is consistent.
VCpuStatus::BlockedMailbox if (*(*vcpu).vm).inner.lock().try_read().is_ok() => {
VCpuStatus::BlockedMailbox if (*vcpu.vm).inner.lock().try_read().is_ok() => {
vcpu_inner.regs.set_retval(SpciReturn::Success as uintreg_t);
}

// Allow virtual interrupts to be delivered.
VCpuStatus::BlockedMailbox | VCpuStatus::BlockedInterrupt
if (*vcpu).interrupts.lock().is_interrupted() =>
if vcpu.interrupts.lock().is_interrupted() =>
{
// break;
}
@@ -395,18 +373,12 @@ pub unsafe extern "C" fn api_vcpu_run(
}

// The requested VM must exist.
let vm = vm_find(vm_id);
if vm.is_null() {
return ret.into_raw();
}
let vm = some_or!(hafnium().vm_manager.get(vm_id), return ret.into_raw());

// The requested vcpu must exist.
if vcpu_idx >= (*vm).vcpu_count {
return ret.into_raw();
}
let vcpu = some_or!(vm.vcpus.get(vcpu_idx as usize), return ret.into_raw());

// Update state if allowed.
let vcpu = vm_get_vcpu(vm, vcpu_idx);
let mut vcpu_locked = match vcpu_prepare_run(&current, vcpu, ret) {
Ok(locked) => locked,
Err(ret) => return ret.into_raw(),
@@ -493,10 +465,7 @@ pub unsafe extern "C" fn api_vm_configure(
// TODO: the scope of the lock can be reduced but will require restructuring
// to keep a single unlock point.
let mut vm_inner = (*vm).inner.lock();
if vm_inner
.configure(send, recv, API_PAGE_POOL.get_ref())
.is_err()
{
if vm_inner.configure(send, recv, &hafnium().mpool).is_err() {
return -1;
}

@@ -550,10 +519,10 @@ pub unsafe extern "C" fn api_spci_msg_send(
}

// Ensure the target VM exists.
let to = vm_find(from_msg_replica.target_vm_id);
if to.is_null() {
return SpciReturn::InvalidParameters;
}
let to = some_or!(
hafnium().vm_manager.get(from_msg_replica.target_vm_id),
return SpciReturn::InvalidParameters
);

// Hf needs to hold the lock on `to` before the mailbox state is checked.
// The lock on `to` must be held until the information is copied to `to` Rx
@@ -615,8 +584,10 @@ pub unsafe extern "C" fn api_spci_msg_send(
// at spci_msg_handle_architected_message will make several accesses to
// fields in message_buffer. The memory area message_buffer must be
// exclusively owned by Hf so that TOCTOU issues do not arise.
// TODO(HfO2): Refactor `spci_*` functions, in order to pass references
// to VmInner.
let ret = spci_msg_handle_architected_message(
&ManuallyDrop::new(VmLocked::from_raw(to)),
&ManuallyDrop::new(VmLocked::from_raw(to as *const _ as usize as *mut _)),
&ManuallyDrop::new(VmLocked::from_raw(from)),
architected_message_replica,
&from_msg_replica,
@@ -730,10 +701,7 @@ pub unsafe extern "C" fn api_mailbox_waiter_get(vm_id: spci_vm_id_t, current: *c
return -1;
}

let vm = vm_find(vm_id);
if vm.is_null() {
return -1;
}
let vm = some_or!(hafnium().vm_manager.get(vm_id), return -1);

// Check if there are outstanding notifications from given vm.
let entry = (*vm).inner.lock().fetch_waiter();
@@ -833,26 +801,17 @@ pub unsafe extern "C" fn api_interrupt_inject(
next: *mut *mut VCpu,
) -> i64 {
let mut current = ManuallyDrop::new(VCpuExecutionLocked::from_raw(current));
let target_vm = vm_find(target_vm_id);
let target_vm = some_or!(hafnium().vm_manager.get(target_vm_id), return -1);

if intid >= HF_NUM_INTIDS {
return -1;
}

if target_vm.is_null() {
return -1;
}

if target_vcpu_idx >= (*target_vm).vcpu_count {
// The requested vcpu must exist.
return -1;
}

if !is_injection_allowed(target_vm_id, current.deref()) {
return -1;
}

let target_vcpu = vm_get_vcpu(target_vm, target_vcpu_idx);
let target_vcpu = some_or!(target_vm.vcpus.get(target_vcpu_idx as usize), return -1);

dlog!(
"Injecting IRQ {} for VM {} VCPU {} from VM {} VCPU {}\n",
@@ -868,7 +827,7 @@ pub unsafe extern "C" fn api_interrupt_inject(
/// Clears a region of physical memory by overwriting it with zeros. The data is
/// flushed from the cache so the memory has been cleared across the system.
fn clear_memory(begin: paddr_t, end: paddr_t, ppool: &MPool) -> Result<(), ()> {
let mut hypervisor_ptable = HYPERVISOR_PAGE_TABLE.lock();
let mut hypervisor_ptable = hafnium().memory_manager.hypervisor_ptable.lock();
let size = pa_difference(begin, end);
let region = pa_addr(begin);

@@ -925,7 +884,7 @@ pub fn spci_share_memory(
// Create a local pool so any freed memory can't be used by another thread.
// This is to ensure the original mapping can be restored if any stage of
// the process fails.
let local_page_pool: MPool = MPool::new_with_fallback(unsafe { API_PAGE_POOL.get_ref() });
let local_page_pool: MPool = MPool::new_with_fallback(&hafnium().mpool);

// Obtain the single contiguous set of pages from the memory_region.
// TODO: Add support for multiple constituent regions.
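
The local-pool comment above captures an idiom that also reappears in share_memory below: allocations may fall through to the global pool, but pages freed mid-operation stay in the local pool, so a failed step can be rolled back with pages no other thread could have claimed. A toy model of that behavior (hypothetical; the real MPool lives in hfo2/src/mpool.rs):

use std::cell::RefCell;

// Simplified pool: a local free list with an optional fallback free list.
struct Pool<'a> {
    free: Vec<usize>,                          // locally owned free pages
    fallback: Option<&'a RefCell<Vec<usize>>>, // e.g. the global pool
}

impl<'a> Pool<'a> {
    fn new_with_fallback(global: &'a RefCell<Vec<usize>>) -> Self {
        Pool { free: Vec::new(), fallback: Some(global) }
    }

    fn alloc(&mut self) -> Option<usize> {
        // Prefer local pages; only then fall through to the fallback.
        self.free.pop().or_else(|| self.fallback?.borrow_mut().pop())
    }

    fn free_page(&mut self, page: usize) {
        // Freed pages stay local, reserved for this thread's rollback path.
        self.free.push(page);
    }
}

In the diff, `MPool::new_with_fallback(&hafnium().mpool)` builds exactly this shape around the global page pool.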
@@ -937,7 +896,7 @@ pub fn spci_share_memory(
// Check if the state transition is lawful for both VMs involved in the
// memory exchange, ensure that all constituents of a memory region being
// shared are at the same state.
let (orig_from_mode, from_mode, to_mode) = ok_or_return!(
let (orig_from_mode, from_mode, to_mode) = ok_or!(
spci_msg_check_transition(
&to_locked,
&from_locked,
Expand All @@ -946,7 +905,7 @@ pub fn spci_share_memory(
end,
memory_to_attributes,
),
SpciReturn::InvalidParameters
return SpciReturn::InvalidParameters
);

let pa_begin = pa_from_ipa(begin);
@@ -1006,12 +965,7 @@ fn share_memory(
}

// Ensure the target VM exists.
let to = unsafe { vm_find(vm_id) };
if to.is_null() {
return Err(());
}

let to = unsafe { &*to };
let to = hafnium().vm_manager.get(vm_id).ok_or(())?;

let begin = addr;
let end = ipa_add(addr, size);
@@ -1036,7 +990,7 @@ fn share_memory(
// Create a local pool so any freed memory can't be used by another thread.
// This is to ensure the original mapping can be restored if any stage of
// the process fails.
let local_page_pool = MPool::new_with_fallback(unsafe { API_PAGE_POOL.get_ref() });
let local_page_pool = MPool::new_with_fallback(&hafnium().mpool);

let (mut from_inner, mut to_inner) = SpinLock::lock_both(&(*from).inner, &(*to).inner);

@@ -1115,7 +1069,7 @@ pub unsafe extern "C" fn api_share_memory(
) -> i64 {
// Convert the sharing request to memory management modes.
// The input is untrusted so might not be a valid value.
let share = ok_or_return!(HfShare::try_from(share), -1);
let share = ok_or!(HfShare::try_from(share), return -1);

match share_memory(vm_id, addr, size, share, &*current) {
Ok(_) => 0,