List:       oss-security
Subject:    [oss-security] Xen Security Advisory 278 v1 - x86: Nested VT-x usable even when disabled
From:       Xen.org security team <security@xen.org>
Date:       2018-10-24 21:13:02
Message-ID: E1gFQSc-0001y8-8b@xenbits.xenproject.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-278

               x86: Nested VT-x usable even when disabled

ISSUE DESCRIPTION
=================

When running HVM guests, the virtualisation extensions are enabled in
hardware because Xen itself is using them.  As a result, a guest can
blindly execute the virtualisation instructions, and will exit to Xen
for processing.

If the guest hasn't followed the correct (virtual) configuration
procedure, it shouldn't be able to use these instructions, and Xen
should respond with a #UD exception.  When nested virtualisation is
disabled for the guest, the guest is not permitted to complete the
configuration procedure.

Unfortunately, when nested virtualisation is intended to be disabled for the
guest, an incorrect default value leads Xen to believe that the configuration
procedure has already been completed.
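
The failure mode can be illustrated with a minimal, self-contained C
sketch.  This is not the Xen code: the names mirror those used in the
attached patch, and INVALID_PADDR as the "not configured" sentinel is
an assumption made for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    #define INVALID_PADDR (~0ULL)

    /* Hypothetical, simplified per-vcpu nested-VMX state. */
    struct nestedvmx {
        /* INVALID_PADDR is meant to mean "VMXON not yet executed". */
        uint64_t vmxon_region_pa;
    };

    static bool nvmx_vcpu_in_vmx(const struct nestedvmx *nvmx)
    {
        return nvmx->vmxon_region_pa != INVALID_PADDR;
    }

    int main(void)
    {
        /*
         * The backing memory is zeroed at allocation, and the code
         * which would store INVALID_PADDR only runs once nested virt
         * is enabled, i.e. typically never.  The field stays 0.
         */
        struct nestedvmx nvmx = { 0 };

        /*
         * 0 != INVALID_PADDR, so the vcpu wrongly appears to have
         * completed VMXON, and the remaining VT-x instructions are
         * handled against uninitialised state.
         */
        return nvmx_vcpu_in_vmx(&nvmx);  /* true: the wrong answer */
    }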

IMPACT
======

Guest software which blindly plays with the VT-x instructions can cause Xen to
operate on uninitialised data.  As the backing memory is zeroed, this causes
Xen to suffer a NULL pointer dereference, resulting in a host Denial of
Service.

Other behaviours such as memory corruption or privilege escalation have not
been ruled out.

VULNERABLE SYSTEMS
==================

Systems running Xen 4.9 or later are vulnerable.  Systems running Xen 4.8 or
earlier are not vulnerable.

Only Intel x86 systems are vulnerable.  Systems from other x86 vendors, and
non-x86 systems, are not vulnerable.

Only x86 HVM and PVH guests can leverage this vulnerability.  x86 PV guests
cannot leverage this vulnerability.

MITIGATION
==========

Running only x86 PV guests will avoid the issue.

For x86 HVM guests, while enabling nested virtualisation for affected guests
does work around this particular DoS, nested virtualisation is not a
security-supported configuration, and has other known DoS and suspected
privilege escalation vulnerabilities.  Therefore, it is not a mitigation.
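
For reference, a minimal illustrative xl configuration for a PV guest
follows (the name, kernel path and disk are placeholders; the type=
syntax is that of xl in Xen 4.10 and later, while on Xen 4.9 a guest
is PV unless builder="hvm" is specified):

    # /etc/xen/example-pv.cfg (hypothetical example)
    name    = "example-pv"
    type    = "pv"                 # PV guests cannot reach the VT-x paths
    kernel  = "/boot/vmlinuz-guest"
    ramdisk = "/boot/initrd-guest"
    memory  = 1024
    vcpus   = 2
    disk    = [ "phy:/dev/vg0/example-pv,xvda,w" ]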

CREDITS
=======

This issue was discovered by Sergey Dyasli of Citrix.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa278.patch           xen-unstable
xsa278-4.11.patch      Xen 4.11, 4.10, 4.9

$ sha256sum xsa278*
d94c59ee170f96af14f0cf696221ba8b9447b86820fe99fba1815ab93cc89cd7  xsa278.patch
22686a9bbfbd38bb74292a28a452012d263875c9064815d4afd3fd6c62df0c3a  xsa278-4.11.patch
$

NOTE CONCERNING LACK OF EMBARGO
===============================

This issue was first reported privately and was being handled under the usual
XSA process.

It was later independently reported in public with enough detail for the issue
to be considered fully public.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAlvQ4AQMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZMncIAKPKEhtKfaVxNp3WxA2UYRYQCLjrPieFwn8WF/Bx
Fcou5sCUhKZuRQccM5sOyDT8q/GRwYcvkcn3yXqXCKkijhsEA4fzsDYrCvQlO7RS
xcRMJSBhovz81PPrlDfGVGB6f2Iq3JePVP9DNxwHhgNQJN0+3kdjzEUtKJx3VczE
8LwIpQYyG4Xn3HBIjVD7R6+UiJLcDrD5sdRh9yOgNFNQQUqERtsAOEFJ2raYs/Cm
hUvb5m3HBJSzcsZqdfTe5ovLwpumNygao43xt+lAA1KvKk148yEjO4E1dIklmFOE
1d6Za6n9VD/+vTAo2JMDr0WpHZjzvBxNHkOg4levkYvKiCg=
=fPmO
-----END PGP SIGNATURE-----

["xsa278.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/vvmx: Disallow the use of VT-x instructions when nested virt is disabled

c/s ac6a4500b "vvmx: set vmxon_region_pa of vcpu out of VMX operation to an
invalid address" was a real bugfix as described, but has a very subtle bug
which results in all VT-x instructions being usable by a guest.

The toolstack constructs a guest by issuing:

  XEN_DOMCTL_createdomain
  XEN_DOMCTL_max_vcpus

and optionally later, HVMOP_set_param to enable nested virt.

As a result, the call to nvmx_vcpu_initialise() in hvm_vcpu_initialise()
(which is what makes the above patch look correct during review) is actually
dead code.  In practice, nvmx_vcpu_initialise() first gets called when nested
virt is enabled, which is typically never.

As a result, the zeroed memory of struct vcpu causes nvmx_vcpu_in_vmx() to
return true before nested virt is enabled for the guest.

Fixing the order of initialisation is a work in progress for other reasons,
but not viable for security backports.

A compounding factor is that the vmexit handlers for all instructions, other
than VMXON, pass 0 into vmx_inst_check_privilege()'s vmxop_check parameter,
which skips the CR4.VMXE check.  (This is one of many reasons why nested virt
isn't a supported feature yet.)

However, the overall result is that when nested virt is not enabled by the
toolstack (i.e. the default configuration for all production guests), the VT-x
instructions (other than VMXON) are actually usable, and Xen very quickly
falls over the fact that the nvmx structure is uninitialised.

In order to fail safe in the supported case, re-implement all the VT-x
instruction handling using a single function with a common prologue, covering
all the checks which should cause #UD or #GP faults.  This deliberately
doesn't use any state from the nvmx structure, in case there are other lurking
issues.

This is XSA-278

Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d16129f..7a49075 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4003,57 +4003,17 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         break;
 
     case EXIT_REASON_VMXOFF:
-        if ( nvmx_handle_vmxoff(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMXON:
-        if ( nvmx_handle_vmxon(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMCLEAR:
-        if ( nvmx_handle_vmclear(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
- 
     case EXIT_REASON_VMPTRLD:
-        if ( nvmx_handle_vmptrld(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMPTRST:
-        if ( nvmx_handle_vmptrst(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMREAD:
-        if ( nvmx_handle_vmread(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
- 
     case EXIT_REASON_VMWRITE:
-        if ( nvmx_handle_vmwrite(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMLAUNCH:
-        if ( nvmx_handle_vmlaunch(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMRESUME:
-        if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_INVEPT:
-        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_INVVPID:
-        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+        if ( nvmx_handle_vmx_insn(regs, exit_reason) == X86EMUL_OKAY )
             update_guest_eip();
         break;
 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 0e45db8..aa202e0 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1470,7 +1470,7 @@ void nvmx_switch_guest(void)
  * VMX instructions handling
  */
 
-int nvmx_handle_vmxon(struct cpu_user_regs *regs)
+static int nvmx_handle_vmxon(struct cpu_user_regs *regs)
 {
     struct vcpu *v=current;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -1522,7 +1522,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
+static int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
 {
     struct vcpu *v=current;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -1611,7 +1611,7 @@ static int nvmx_vmresume(struct vcpu *v, struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmresume(struct cpu_user_regs *regs)
+static int nvmx_handle_vmresume(struct cpu_user_regs *regs)
 {
     bool_t launched;
     struct vcpu *v = current;
@@ -1645,7 +1645,7 @@ int nvmx_handle_vmresume(struct cpu_user_regs *regs)
     return nvmx_vmresume(v,regs);
 }
 
-int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
+static int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
 {
     bool_t launched;
     struct vcpu *v = current;
@@ -1688,7 +1688,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     return rc;
 }
 
-int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
+static int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1759,7 +1759,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
+static int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1784,7 +1784,7 @@ int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmclear(struct cpu_user_regs *regs)
+static int nvmx_handle_vmclear(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1836,7 +1836,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmread(struct cpu_user_regs *regs)
+static int nvmx_handle_vmread(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1878,7 +1878,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
+static int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1926,7 +1926,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_invept(struct cpu_user_regs *regs)
+static int nvmx_handle_invept(struct cpu_user_regs *regs)
 {
     struct vmx_inst_decoded decode;
     unsigned long eptp;
@@ -1954,7 +1954,7 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+static int nvmx_handle_invvpid(struct cpu_user_regs *regs)
 {
     struct vmx_inst_decoded decode;
     unsigned long vpid;
@@ -1980,6 +1980,81 @@ int nvmx_handle_invvpid(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_vmx_insn(struct cpu_user_regs *regs, unsigned int exit_reason)
+{
+    struct vcpu *curr = current;
+    int ret;
+
+    if ( !(curr->arch.hvm.guest_cr[4] & X86_CR4_VMXE) ||
+         !nestedhvm_enabled(curr->domain) ||
+         (vmx_guest_x86_mode(curr) < (hvm_long_mode_active(curr) ? 8 : 2)) )
+    {
+        hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
+        return X86EMUL_EXCEPTION;
+    }
+
+    if ( vmx_get_cpl() > 0 )
+    {
+        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        return X86EMUL_EXCEPTION;
+    }
+
+    switch ( exit_reason )
+    {
+    case EXIT_REASON_VMXOFF:
+        ret = nvmx_handle_vmxoff(regs);
+        break;
+
+    case EXIT_REASON_VMXON:
+        ret = nvmx_handle_vmxon(regs);
+        break;
+
+    case EXIT_REASON_VMCLEAR:
+        ret = nvmx_handle_vmclear(regs);
+        break;
+
+    case EXIT_REASON_VMPTRLD:
+        ret = nvmx_handle_vmptrld(regs);
+        break;
+
+    case EXIT_REASON_VMPTRST:
+        ret = nvmx_handle_vmptrst(regs);
+        break;
+
+    case EXIT_REASON_VMREAD:
+        ret = nvmx_handle_vmread(regs);
+        break;
+
+    case EXIT_REASON_VMWRITE:
+        ret = nvmx_handle_vmwrite(regs);
+        break;
+
+    case EXIT_REASON_VMLAUNCH:
+        ret = nvmx_handle_vmlaunch(regs);
+        break;
+
+    case EXIT_REASON_VMRESUME:
+        ret = nvmx_handle_vmresume(regs);
+        break;
+
+    case EXIT_REASON_INVEPT:
+        ret = nvmx_handle_invept(regs);
+        break;
+
+    case EXIT_REASON_INVVPID:
+        ret = nvmx_handle_invvpid(regs);
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        domain_crash(curr->domain);
+        ret = X86EMUL_UNHANDLEABLE;
+        break;
+    }
+
+    return ret;
+}
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index a20bd9e..6b9c4ae 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -94,9 +94,6 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 bool_t nvmx_ept_enabled(struct vcpu *v);
 
-int nvmx_handle_vmxon(struct cpu_user_regs *regs);
-int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
-
 #define EPT_TRANSLATE_SUCCEED       0
 #define EPT_TRANSLATE_VIOLATION     1
 #define EPT_TRANSLATE_MISCONFIG     2
@@ -189,15 +186,7 @@ enum vmx_insn_errno set_vvmcs_real_safe(const struct vcpu *, u32 encoding,
    set_vvmcs_virtual_safe(vcpu_nestedhvm(vcpu).nv_vvmcx, encoding, val))
 
 void nvmx_destroy_vmcs(struct vcpu *v);
-int nvmx_handle_vmptrld(struct cpu_user_regs *regs);
-int nvmx_handle_vmptrst(struct cpu_user_regs *regs);
-int nvmx_handle_vmclear(struct cpu_user_regs *regs);
-int nvmx_handle_vmread(struct cpu_user_regs *regs);
-int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
-int nvmx_handle_vmresume(struct cpu_user_regs *regs);
-int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
-int nvmx_handle_invept(struct cpu_user_regs *regs);
-int nvmx_handle_invvpid(struct cpu_user_regs *regs);
+int nvmx_handle_vmx_insn(struct cpu_user_regs *regs, unsigned int exit_reason);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 

["xsa278-4.11.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/vvmx: Disallow the use of VT-x instructions when nested virt is disabled

c/s ac6a4500b "vvmx: set vmxon_region_pa of vcpu out of VMX operation to an
invalid address" was a real bugfix as described, but has a very subtle bug
which results in all VT-x instructions being usable by a guest.

The toolstack constructs a guest by issuing:

  XEN_DOMCTL_createdomain
  XEN_DOMCTL_max_vcpus

and optionally later, HVMOP_set_param to enable nested virt.

As a result, the call to nvmx_vcpu_initialise() in hvm_vcpu_initialise()
(which is what makes the above patch look correct during review) is actually
dead code.  In practice, nvmx_vcpu_initialise() first gets called when nested
virt is enabled, which is typically never.

As a result, the zeroed memory of struct vcpu causes nvmx_vcpu_in_vmx() to
return true before nested virt is enabled for the guest.

Fixing the order of initialisation is a work in progress for other reasons,
but not viable for security backports.

A compounding factor is that the vmexit handlers for all instructions, other
than VMXON, pass 0 into vmx_inst_check_privilege()'s vmxop_check parameter,
which skips the CR4.VMXE check.  (This is one of many reasons why nested virt
isn't a supported feature yet.)

However, the overall result is that when nested virt is not enabled by the
toolstack (i.e. the default configuration for all production guests), the VT-x
instructions (other than VMXON) are actually usable, and Xen very quickly
falls over the fact that the nvmx structure is uninitialised.

In order to fail safe in the supported case, re-implement all the VT-x
instruction handling using a single function with a common prologue, covering
all the checks which should cause #UD or #GP faults.  This deliberately
doesn't use any state from the nvmx structure, in case there are other lurking
issues.

This is XSA-278

Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Sergey Dyasli <sergey.dyasli@citrix.com>

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a6415f0..a4d2829 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3982,57 +3982,17 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         break;
 
     case EXIT_REASON_VMXOFF:
-        if ( nvmx_handle_vmxoff(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMXON:
-        if ( nvmx_handle_vmxon(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMCLEAR:
-        if ( nvmx_handle_vmclear(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
- 
     case EXIT_REASON_VMPTRLD:
-        if ( nvmx_handle_vmptrld(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMPTRST:
-        if ( nvmx_handle_vmptrst(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMREAD:
-        if ( nvmx_handle_vmread(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
- 
     case EXIT_REASON_VMWRITE:
-        if ( nvmx_handle_vmwrite(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMLAUNCH:
-        if ( nvmx_handle_vmlaunch(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_VMRESUME:
-        if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_INVEPT:
-        if ( nvmx_handle_invept(regs) == X86EMUL_OKAY )
-            update_guest_eip();
-        break;
-
     case EXIT_REASON_INVVPID:
-        if ( nvmx_handle_invvpid(regs) == X86EMUL_OKAY )
+        if ( nvmx_handle_vmx_insn(regs, exit_reason) == X86EMUL_OKAY )
             update_guest_eip();
         break;
 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e97db33..88cb58c 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1470,7 +1470,7 @@ void nvmx_switch_guest(void)
  * VMX instructions handling
  */
 
-int nvmx_handle_vmxon(struct cpu_user_regs *regs)
+static int nvmx_handle_vmxon(struct cpu_user_regs *regs)
 {
     struct vcpu *v=current;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -1522,7 +1522,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
+static int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
 {
     struct vcpu *v=current;
     struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
@@ -1611,7 +1611,7 @@ static int nvmx_vmresume(struct vcpu *v, struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmresume(struct cpu_user_regs *regs)
+static int nvmx_handle_vmresume(struct cpu_user_regs *regs)
 {
     bool_t launched;
     struct vcpu *v = current;
@@ -1645,7 +1645,7 @@ int nvmx_handle_vmresume(struct cpu_user_regs *regs)
     return nvmx_vmresume(v,regs);
 }
 
-int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
+static int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
 {
     bool_t launched;
     struct vcpu *v = current;
@@ -1688,7 +1688,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     return rc;
 }
 
-int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
+static int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1759,7 +1759,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
+static int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1784,7 +1784,7 @@ int nvmx_handle_vmptrst(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmclear(struct cpu_user_regs *regs)
+static int nvmx_handle_vmclear(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1836,7 +1836,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmread(struct cpu_user_regs *regs)
+static int nvmx_handle_vmread(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1878,7 +1878,7 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
+static int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
     struct vmx_inst_decoded decode;
@@ -1926,7 +1926,7 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_invept(struct cpu_user_regs *regs)
+static int nvmx_handle_invept(struct cpu_user_regs *regs)
 {
     struct vmx_inst_decoded decode;
     unsigned long eptp;
@@ -1954,7 +1954,7 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
-int nvmx_handle_invvpid(struct cpu_user_regs *regs)
+static int nvmx_handle_invvpid(struct cpu_user_regs *regs)
 {
     struct vmx_inst_decoded decode;
     unsigned long vpid;
@@ -1980,6 +1980,81 @@ int nvmx_handle_invvpid(struct cpu_user_regs *regs)
     return X86EMUL_OKAY;
 }
 
+int nvmx_handle_vmx_insn(struct cpu_user_regs *regs, unsigned int exit_reason)
+{
+    struct vcpu *curr = current;
+    int ret;
+
+    if ( !(curr->arch.hvm_vcpu.guest_cr[4] & X86_CR4_VMXE) ||
+         !nestedhvm_enabled(curr->domain) ||
+         (vmx_guest_x86_mode(curr) < (hvm_long_mode_active(curr) ? 8 : 2)) )
+    {
+        hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
+        return X86EMUL_EXCEPTION;
+    }
+
+    if ( vmx_get_cpl() > 0 )
+    {
+        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        return X86EMUL_EXCEPTION;
+    }
+
+    switch ( exit_reason )
+    {
+    case EXIT_REASON_VMXOFF:
+        ret = nvmx_handle_vmxoff(regs);
+        break;
+
+    case EXIT_REASON_VMXON:
+        ret = nvmx_handle_vmxon(regs);
+        break;
+
+    case EXIT_REASON_VMCLEAR:
+        ret = nvmx_handle_vmclear(regs);
+        break;
+
+    case EXIT_REASON_VMPTRLD:
+        ret = nvmx_handle_vmptrld(regs);
+        break;
+
+    case EXIT_REASON_VMPTRST:
+        ret = nvmx_handle_vmptrst(regs);
+        break;
+
+    case EXIT_REASON_VMREAD:
+        ret = nvmx_handle_vmread(regs);
+        break;
+
+    case EXIT_REASON_VMWRITE:
+        ret = nvmx_handle_vmwrite(regs);
+        break;
+
+    case EXIT_REASON_VMLAUNCH:
+        ret = nvmx_handle_vmlaunch(regs);
+        break;
+
+    case EXIT_REASON_VMRESUME:
+        ret = nvmx_handle_vmresume(regs);
+        break;
+
+    case EXIT_REASON_INVEPT:
+        ret = nvmx_handle_invept(regs);
+        break;
+
+    case EXIT_REASON_INVVPID:
+        ret = nvmx_handle_invvpid(regs);
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        domain_crash(curr->domain);
+        ret = X86EMUL_UNHANDLEABLE;
+        break;
+    }
+
+    return ret;
+}
+
 #define __emul_value(enable1, default1) \
     ((enable1 | default1) << 32 | (default1))
 
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 9ea35eb..fc4a8d1 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -94,9 +94,6 @@ void nvmx_domain_relinquish_resources(struct domain *d);
 
 bool_t nvmx_ept_enabled(struct vcpu *v);
 
-int nvmx_handle_vmxon(struct cpu_user_regs *regs);
-int nvmx_handle_vmxoff(struct cpu_user_regs *regs);
-
 #define EPT_TRANSLATE_SUCCEED       0
 #define EPT_TRANSLATE_VIOLATION     1
 #define EPT_TRANSLATE_MISCONFIG     2
@@ -191,15 +188,7 @@ enum vmx_insn_errno set_vvmcs_real_safe(const struct vcpu *, u32 encoding,
 uint64_t get_shadow_eptp(struct vcpu *v);
 
 void nvmx_destroy_vmcs(struct vcpu *v);
-int nvmx_handle_vmptrld(struct cpu_user_regs *regs);
-int nvmx_handle_vmptrst(struct cpu_user_regs *regs);
-int nvmx_handle_vmclear(struct cpu_user_regs *regs);
-int nvmx_handle_vmread(struct cpu_user_regs *regs);
-int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
-int nvmx_handle_vmresume(struct cpu_user_regs *regs);
-int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
-int nvmx_handle_invept(struct cpu_user_regs *regs);
-int nvmx_handle_invvpid(struct cpu_user_regs *regs);
+int nvmx_handle_vmx_insn(struct cpu_user_regs *regs, unsigned int exit_reason);
 int nvmx_msr_read_intercept(unsigned int msr,
                                 u64 *msr_content);
 

