List:       xen-users
Subject:    [Xen-users] Xen Security Advisory 110 (CVE-2014-8595) - Missing privilege level checks in x86 emulation of far branches
From:       Xen.org security team <security@xen.org>
Date:       2014-11-18 12:24:11
Message-ID: E1XqhpL-0003SX-7Z@xenbits.xen.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2014-8595 / XSA-110
                              version 3

    Missing privilege level checks in x86 emulation of far branches

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The emulation of far branch instructions (CALL, JMP, and RETF in Intel
assembly syntax, LCALL, LJMP, and LRET in AT&T assembly syntax)
incompletely performs privilege checks.

However, these instructions are not usually handled by the emulator.
Exceptions to this are
 - when a memory operand lives in (emulated or passed through) memory
   mapped IO space,
 - in the case of guests running in 32-bit PAE mode, when such an
   instruction is (in execution flow) within four instructions of one
   doing a page table update,
 - when an Invalid Opcode exception gets raised by a guest instruction,
   and the guest then (likely maliciously) alters the instruction to
   become one of the affected ones,
 - when the guest is in real mode (in which case there are no privilege
   checks anyway).
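
For illustration only, the sketch below mirrors the CS privilege rules
which the attached patches make the emulator enforce in
protmode_load_seg() for far branches (it is not Xen code; the function
name and parameter list are purely illustrative).  cpl, rpl and dpl
are the current, requested and descriptor privilege levels, and is_ret
distinguishes far returns:

/*
 * Illustrative sketch only -- not Xen code.  Returns 0 if the new CS
 * selector may be loaded, -1 if #GP must be raised instead.
 */
static int far_branch_cs_check(unsigned int cpl, unsigned int rpl,
                               unsigned int dpl, int conforming, int is_ret)
{
    if ( is_ret )
        /*
         * Far return: architecturally RPL >= CPL would suffice, but the
         * emulator cannot switch privilege levels, so RPL must equal CPL.
         */
        return (rpl == cpl && (conforming ? dpl <= rpl : dpl == rpl)) ? 0 : -1;

    if ( conforming )
        /* Conforming code segment: DPL must not exceed CPL. */
        return (dpl <= cpl) ? 0 : -1;

    /* Non-conforming: RPL must not exceed CPL and DPL must equal CPL. */
    return (rpl <= cpl && dpl == cpl) ? 0 : -1;
}

Before the fix, only the "DPL == RPL for non-conforming segments" check
was applied, which is how guest user mode code could have the emulator
load a more privileged code segment.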

IMPACT
======

Malicious HVM guest user mode code may be able to elevate its
privileges to guest supervisor mode, or to crash the guest.

VULNERABLE SYSTEMS
==================

Xen 3.2.1 and onward are vulnerable on x86 systems.

ARM systems are not vulnerable.

Only user processes in x86 HVM guests can take advantage of this
vulnerability.

MITIGATION
==========

Running only PV guests will avoid this issue.

There is no mitigation available for HVM guests.

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa110.patch                 xen-unstable, Xen 4.4.x
xsa110-4.3-and-4.2.patch     Xen 4.3.x, Xen 4.2.x

$ sha256sum xsa110*.patch
a114ba586d18125b368112527a077abfe309826ad47aca8cc80ba4549c5f9ae2  xsa110-4.3-and-4.2.patch
eac4691848dcd093903e0a0f5fd7ab15be15d0f10b98575379911e91e5dcbd70  xsa110.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJUazojAAoJEIP+FMlX6CvZF18H/1/G49MGk6/Fq6CtpvoEvQsl
u7Q0UHoMuwqN119fRKJOorAh+MPKWDaPBjZoNmfJxIKEHD5tpA1Kr97y67Ye/dtz
UfXxQPiIYpOe/Z59E3erKGDyzC5TLlPfa7fZBvZdeStIWsC+d2pUWDTRBioDHBGZ
IeNnXkrLuhLrjGOs9a4ZNdP/jTFkJQ7vKJXF8nFhcEpK8XZx9D8e2xExTWZ2BJ/N
u6KbWgMAf01M10hcQze99Wm3Fuva/HkVhiza8Rj5cgsV9SD4ZrQMhH9Mm86/YG52
AEwT6j8KWd83zZz8WZjFS30edZ4/eIXW+2e3KuaUFKBiei88tlF6CYWq6upS/5U=
=u7Zi
-----END PGP SIGNATURE-----

["xsa110-4.3-and-4.2.patch" (application/octet-stream)]

x86emul: enforce privilege level restrictions when loading CS

Privilege level checks were basically missing for the CS case; the
only check that was done (RPL == DPL for non-conforming segments)
covered just a single special case (return to a non-conforming
segment).

Additionally in long mode the L bit set requires the D bit to be clear,
as was recently pointed out for KVM by Nadav Amit
<namit@cs.technion.ac.il>.

Finally we also need to force the loaded selector's RPL to CPL (at
least as long as lret/retf emulation doesn't support privilege level
changes).

This is XSA-110.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
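
As a reading aid for the hunks below, the bit numbers tested against
desc.b (the high 32 bits of the segment descriptor) correspond to the
architectural fields sketched here; the macro names are illustrative
only and do not appear in the Xen sources:

/* Illustrative names only -- not from the Xen tree. */
#define CS_DESC_CONFORMING  (1u << 10)  /* type bit 2: conforming code     */
#define CS_DESC_CODE        (1u << 11)  /* type bit 3: code segment        */
#define CS_DESC_DPL_SHIFT   13          /* DPL occupies bits 13-14         */
#define CS_DESC_L           (1u << 21)  /* 64-bit (long mode) code segment */
#define CS_DESC_D           (1u << 22)  /* default operand size (D/B) bit  */

Hence (6u<<9) in the removed line selects the code and conforming bits
together, (1 << 10) tests the conforming bit alone, and the new check
rejects descriptors with both the L and D bits set while the guest is
in long mode.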

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1107,7 +1107,7 @@ realmode_load_seg(
 static int
 protmode_load_seg(
     enum x86_segment seg,
-    uint16_t sel,
+    uint16_t sel, bool_t is_ret,
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops *ops)
 {
@@ -1179,9 +1179,23 @@ protmode_load_seg(
         /* Code segment? */
         if ( !(desc.b & (1u<<11)) )
             goto raise_exn;
-        /* Non-conforming segment: check DPL against RPL. */
-        if ( ((desc.b & (6u<<9)) != (6u<<9)) && (dpl != rpl) )
+        if ( is_ret
+             ? /*
+                * Really rpl < cpl, but our sole caller doesn't handle
+                * privilege level changes.
+                */
+               rpl != cpl || (desc.b & (1 << 10) ? dpl > rpl : dpl != rpl)
+             : desc.b & (1 << 10)
+               /* Conforming segment: check DPL against CPL. */
+               ? dpl > cpl
+               /* Non-conforming segment: check RPL and DPL against CPL. */
+               : rpl > cpl || dpl != cpl )
             goto raise_exn;
+        /* 64-bit code segments (L bit set) must have D bit clear. */
+        if ( in_longmode(ctxt, ops) &&
+             (desc.b & (1 << 21)) && (desc.b & (1 << 22)) )
+            goto raise_exn;
+        sel = (sel ^ rpl) | cpl;
         break;
     case x86_seg_ss:
         /* Writable data segment? */
@@ -1246,7 +1260,7 @@ protmode_load_seg(
 static int
 load_seg(
     enum x86_segment seg,
-    uint16_t sel,
+    uint16_t sel, bool_t is_ret,
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops *ops)
 {
@@ -1255,7 +1269,7 @@ load_seg(
         return X86EMUL_UNHANDLEABLE;
 
     if ( in_protmode(ctxt, ops) )
-        return protmode_load_seg(seg, sel, ctxt, ops);
+        return protmode_load_seg(seg, sel, is_ret, ctxt, ops);
 
     return realmode_load_seg(seg, sel, ctxt, ops);
 }
@@ -1852,7 +1866,7 @@ x86_emulate(
         if ( (rc = read_ulong(x86_seg_ss, sp_post_inc(op_bytes),
                               &dst.val, op_bytes, ctxt, ops)) != 0 )
             goto done;
-        if ( (rc = load_seg(src.val, (uint16_t)dst.val, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(src.val, dst.val, 0, ctxt, ops)) != 0 )
             return rc;
         break;
 
@@ -2222,7 +2236,7 @@ x86_emulate(
         enum x86_segment seg = decode_segment(modrm_reg);
         generate_exception_if(seg == decode_segment_failed, EXC_UD, -1);
         generate_exception_if(seg == x86_seg_cs, EXC_UD, -1);
-        if ( (rc = load_seg(seg, (uint16_t)src.val, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(seg, src.val, 0, ctxt, ops)) != 0 )
             goto done;
         if ( seg == x86_seg_ss )
             ctxt->retire.flags.mov_ss = 1;
@@ -2303,7 +2317,7 @@ x86_emulate(
                               &_regs.eip, op_bytes, ctxt)) )
             goto done;
 
-        if ( (rc = load_seg(x86_seg_cs, sel, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(x86_seg_cs, sel, 0, ctxt, ops)) != 0 )
             goto done;
         _regs.eip = eip;
         break;
@@ -2526,7 +2540,7 @@ x86_emulate(
         if ( (rc = read_ulong(src.mem.seg, src.mem.off + src.bytes,
                               &sel, 2, ctxt, ops)) != 0 )
             goto done;
-        if ( (rc = load_seg(dst.val, (uint16_t)sel, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(dst.val, sel, 0, ctxt, ops)) != 0 )
             goto done;
         dst.val = src.val;
         break;
@@ -2600,7 +2614,7 @@ x86_emulate(
                               &dst.val, op_bytes, ctxt, ops)) ||
              (rc = read_ulong(x86_seg_ss, sp_post_inc(op_bytes + offset),
                               &src.val, op_bytes, ctxt, ops)) ||
-             (rc = load_seg(x86_seg_cs, (uint16_t)src.val, ctxt, ops)) )
+             (rc = load_seg(x86_seg_cs, src.val, 1, ctxt, ops)) )
             goto done;
         _regs.eip = dst.val;
         break;
@@ -2647,7 +2661,7 @@ x86_emulate(
         _regs.eflags &= mask;
         _regs.eflags |= (uint32_t)(eflags & ~mask) | 0x02;
         _regs.eip = eip;
-        if ( (rc = load_seg(x86_seg_cs, (uint16_t)cs, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(x86_seg_cs, cs, 1, ctxt, ops)) != 0 )
             goto done;
         break;
     }
@@ -3277,7 +3291,7 @@ x86_emulate(
         generate_exception_if(mode_64bit(), EXC_UD, -1);
         eip = insn_fetch_bytes(op_bytes);
         sel = insn_fetch_type(uint16_t);
-        if ( (rc = load_seg(x86_seg_cs, sel, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(x86_seg_cs, sel, 0, ctxt, ops)) != 0 )
             goto done;
         _regs.eip = eip;
         break;
@@ -3590,7 +3604,7 @@ x86_emulate(
                     goto done;
             }
 
-            if ( (rc = load_seg(x86_seg_cs, sel, ctxt, ops)) != 0 )
+            if ( (rc = load_seg(x86_seg_cs, sel, 0, ctxt, ops)) != 0 )
                 goto done;
             _regs.eip = dst.val;
 
@@ -3671,7 +3685,7 @@ x86_emulate(
         generate_exception_if(!in_protmode(ctxt, ops), EXC_UD, -1);
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
         if ( (rc = load_seg((modrm_reg & 1) ? x86_seg_tr : x86_seg_ldtr,
-                            src.val, ctxt, ops)) != 0 )
+                            src.val, 0, ctxt, ops)) != 0 )
             goto done;
         break;
 

["xsa110.patch" (application/octet-stream)]

x86emul: enforce privilege level restrictions when loading CS

Privilege level checks were basically missing for the CS case; the
only check that was done (RPL == DPL for non-conforming segments)
covered just a single special case (return to a non-conforming
segment).

Additionally in long mode the L bit set requires the D bit to be clear,
as was recently pointed out for KVM by Nadav Amit
<namit@cs.technion.ac.il>.

Finally we also need to force the loaded selector's RPL to CPL (at
least as long as lret/retf emulation doesn't support privilege level
changes).

This is XSA-110.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
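
The RPL forcing mentioned above is performed by the line
"sel = (sel ^ rpl) | cpl;" added in the first hunk below.  As a
hypothetical illustration (this helper does not exist in the Xen
tree), with rpl == (sel & 3) the XOR clears the selector's RPL field
and the OR substitutes the current privilege level:

#include <stdint.h>

/* Hypothetical helper, for illustration only. */
static uint16_t force_rpl_to_cpl(uint16_t sel, unsigned int cpl)
{
    unsigned int rpl = sel & 3;

    /* e.g. sel = 0x0023 (index 4, RPL 3), cpl = 0  ->  returns 0x0020 */
    return (sel ^ rpl) | cpl;
}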

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1119,7 +1119,7 @@ realmode_load_seg(
 static int
 protmode_load_seg(
     enum x86_segment seg,
-    uint16_t sel,
+    uint16_t sel, bool_t is_ret,
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops *ops)
 {
@@ -1185,9 +1185,23 @@ protmode_load_seg(
         /* Code segment? */
         if ( !(desc.b & (1u<<11)) )
             goto raise_exn;
-        /* Non-conforming segment: check DPL against RPL. */
-        if ( ((desc.b & (6u<<9)) != (6u<<9)) && (dpl != rpl) )
+        if ( is_ret
+             ? /*
+                * Really rpl < cpl, but our sole caller doesn't handle
+                * privilege level changes.
+                */
+               rpl != cpl || (desc.b & (1 << 10) ? dpl > rpl : dpl != rpl)
+             : desc.b & (1 << 10)
+               /* Conforming segment: check DPL against CPL. */
+               ? dpl > cpl
+               /* Non-conforming segment: check RPL and DPL against CPL. */
+               : rpl > cpl || dpl != cpl )
             goto raise_exn;
+        /* 64-bit code segments (L bit set) must have D bit clear. */
+        if ( in_longmode(ctxt, ops) &&
+             (desc.b & (1 << 21)) && (desc.b & (1 << 22)) )
+            goto raise_exn;
+        sel = (sel ^ rpl) | cpl;
         break;
     case x86_seg_ss:
         /* Writable data segment? */
@@ -1252,7 +1266,7 @@ protmode_load_seg(
 static int
 load_seg(
     enum x86_segment seg,
-    uint16_t sel,
+    uint16_t sel, bool_t is_ret,
     struct x86_emulate_ctxt *ctxt,
     const struct x86_emulate_ops *ops)
 {
@@ -1261,7 +1275,7 @@ load_seg(
         return X86EMUL_UNHANDLEABLE;
 
     if ( in_protmode(ctxt, ops) )
-        return protmode_load_seg(seg, sel, ctxt, ops);
+        return protmode_load_seg(seg, sel, is_ret, ctxt, ops);
 
     return realmode_load_seg(seg, sel, ctxt, ops);
 }
@@ -2003,7 +2017,7 @@ x86_emulate(
         if ( (rc = read_ulong(x86_seg_ss, sp_post_inc(op_bytes),
                               &dst.val, op_bytes, ctxt, ops)) != 0 )
             goto done;
-        if ( (rc = load_seg(src.val, (uint16_t)dst.val, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(src.val, dst.val, 0, ctxt, ops)) != 0 )
             return rc;
         break;
 
@@ -2357,7 +2371,7 @@ x86_emulate(
         enum x86_segment seg = decode_segment(modrm_reg);
         generate_exception_if(seg == decode_segment_failed, EXC_UD, -1);
         generate_exception_if(seg == x86_seg_cs, EXC_UD, -1);
-        if ( (rc = load_seg(seg, (uint16_t)src.val, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(seg, src.val, 0, ctxt, ops)) != 0 )
             goto done;
         if ( seg == x86_seg_ss )
             ctxt->retire.flags.mov_ss = 1;
@@ -2438,7 +2452,7 @@ x86_emulate(
                               &_regs.eip, op_bytes, ctxt)) )
             goto done;
 
-        if ( (rc = load_seg(x86_seg_cs, sel, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(x86_seg_cs, sel, 0, ctxt, ops)) != 0 )
             goto done;
         _regs.eip = eip;
         break;
@@ -2662,7 +2676,7 @@ x86_emulate(
         if ( (rc = read_ulong(src.mem.seg, src.mem.off + src.bytes,
                               &sel, 2, ctxt, ops)) != 0 )
             goto done;
-        if ( (rc = load_seg(dst.val, (uint16_t)sel, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(dst.val, sel, 0, ctxt, ops)) != 0 )
             goto done;
         dst.val = src.val;
         break;
@@ -2736,7 +2750,7 @@ x86_emulate(
                               &dst.val, op_bytes, ctxt, ops)) ||
              (rc = read_ulong(x86_seg_ss, sp_post_inc(op_bytes + offset),
                               &src.val, op_bytes, ctxt, ops)) ||
-             (rc = load_seg(x86_seg_cs, (uint16_t)src.val, ctxt, ops)) )
+             (rc = load_seg(x86_seg_cs, src.val, 1, ctxt, ops)) )
             goto done;
         _regs.eip = dst.val;
         break;
@@ -2785,7 +2799,7 @@ x86_emulate(
         _regs.eflags &= mask;
         _regs.eflags |= (uint32_t)(eflags & ~mask) | 0x02;
         _regs.eip = eip;
-        if ( (rc = load_seg(x86_seg_cs, (uint16_t)cs, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(x86_seg_cs, cs, 1, ctxt, ops)) != 0 )
             goto done;
         break;
     }
@@ -3415,7 +3429,7 @@ x86_emulate(
         generate_exception_if(mode_64bit(), EXC_UD, -1);
         eip = insn_fetch_bytes(op_bytes);
         sel = insn_fetch_type(uint16_t);
-        if ( (rc = load_seg(x86_seg_cs, sel, ctxt, ops)) != 0 )
+        if ( (rc = load_seg(x86_seg_cs, sel, 0, ctxt, ops)) != 0 )
             goto done;
         _regs.eip = eip;
         break;
@@ -3714,7 +3728,7 @@ x86_emulate(
                     goto done;
             }
 
-            if ( (rc = load_seg(x86_seg_cs, sel, ctxt, ops)) != 0 )
+            if ( (rc = load_seg(x86_seg_cs, sel, 0, ctxt, ops)) != 0 )
                 goto done;
             _regs.eip = src.val;
 
@@ -3781,7 +3795,7 @@ x86_emulate(
         generate_exception_if(!in_protmode(ctxt, ops), EXC_UD, -1);
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
         if ( (rc = load_seg((modrm_reg & 1) ? x86_seg_tr : x86_seg_ldtr,
-                            src.val, ctxt, ops)) != 0 )
+                            src.val, 0, ctxt, ops)) != 0 )
             goto done;
         break;
 


_______________________________________________
Xen-users mailing list
Xen-users@lists.xen.org
http://lists.xen.org/xen-users
