List:       oss-security
Subject:    [oss-security] Xen Security Advisory 246 (CVE-2017-17044) - x86: infinite loop due to missing PoD error checking
From:       Xen.org security team <security@xen.org>
Date:       2017-11-30 11:59:51
Message-ID: E1eKNVP-0008EZ-Au@xenbits.xenproject.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2017-17044 / XSA-246
                              version 3

         x86: infinite loop due to missing PoD error checking

UPDATES IN VERSION 3
====================

CVE assigned.

ISSUE DESCRIPTION
=================

Failure to recognize errors being returned from low level functions in
Populate on Demand (PoD) code may result in higher level code entering
an infinite loop.
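
As a minimal illustration (not Xen source; every name below is
hypothetical), the failure pattern is a retry loop whose lower-level
helper's error is silently dropped:

  /* Illustrative sketch only -- not Xen code; all names hypothetical.
   * Once the lower-level helper fails permanently (e.g. no memory for
   * a new page table), a caller that drops its return value retries
   * forever, which is the XSA-246 hang. */
  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-in for a low-level call shattering a large page. */
  static int split_large_page(void)
  {
      return -1;                     /* persistent -ENOMEM-style failure */
  }

  /* Buggy shape: error ignored, caller told to retry unconditionally. */
  static bool demand_populate_buggy(void)
  {
      split_large_page();            /* return value dropped */
      return true;                   /* "retry" -> infinite loop */
  }

  /* Fixed shape (what the attached patches do): propagate the error. */
  static bool demand_populate_fixed(void)
  {
      return split_large_page() == 0;
  }

  int main(void)
  {
      while ( demand_populate_fixed() )   /* exits on first failure */
          ;
      puts("populate failed; loop exited instead of hanging the pcpu");
      return 0;
  }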

IMPACT
======

A malicious HVM guest can cause one pcpu to permanently hang.  This
normally cascades into the whole system freezing, resulting in a
host Denial of Service (DoS).

VULNERABLE SYSTEMS
==================

Xen versions from 3.4.x onwards are affected.

Only x86 systems are vulnerable.  ARM is not vulnerable.

x86 PV VMs cannot leverage the vulnerability.

Only systems with 2MiB or 1GiB HAP pages enabled are vulnerable.

The vulnerability is largely restricted to HVM guests which have been
constructed in Populate-on-Demand mode (i.e. with memory < maxmem).

x86 HVM domains without PoD (i.e. started with memory == maxmem, or
without mentioning "maxmem" in the guest config file) also cannot
leverage the vulnerability in recent enough Xen versions:
  4.8.x and later: all versions safe if PoD not configured
  4.7.x: 4.7.1 and later safe if PoD not configured
  4.6.x: 4.6.4 and later safe if PoD not configured
  4.5.x: 4.5.4 and later safe if PoD not configured
  4.4.x and earlier: all versions vulnerable even if PoD not configured

The commit required to prevent this vulnerability when PoD is not
configured is 2a99aa99fc84a45f505f84802af56b006d14c52e
  xen/physmap: Do not permit a guest to populate PoD pages for itself
and the corresponding backports.

MITIGATION
==========

Running only PV guests will avoid this issue.

Running HVM guests only in non-PoD mode (maxmem == memory) will also
avoid this issue.  NOTE: In older releases of Xen (see the version
list above), an HVM guest can create PoD entries itself, so this
mitigation is not effective there.

Specifying "hap_1gb=0 hap_2mb=0" on the hypervisor command line will
avoid the vulnerability.

Alternatively, running all x86 HVM guests in shadow mode will also
avoid this vulnerability.  (For example, by specifying "hap=0" in the
xl domain configuration file.)
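
As a concrete illustration, an xl domain configuration fragment
combining the last two mitigations (the memory values are
placeholders only):

  # Avoid PoD: keep the boot-time allocation equal to the maximum.
  memory = 2048
  maxmem = 2048
  # And/or run the guest in shadow mode instead of HAP.
  hap = 0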

CREDITS
=======

This issue was discovered by Julien Grall of Linaro.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa246.patch           xen-unstable
xsa246-4.9.patch       Xen 4.9.x, Xen 4.8.x
xsa246-4.7.patch       Xen 4.7.x, Xen 4.6.x, Xen 4.5.x

$ sha256sum xsa246*
df08a3be419f2384b495dc52c3e6ebef1eb67d8b562afe85fb6fe6a723334472  xsa246.patch
b41550688e88a2a7a22349a07168f3a3ddf6fad8b3389fa27de44ae6731b6a8b  xsa246-4.7.patch
ea591542774c22db65dcb340120cebf58e759670b5a9fbde42ee93ed594650c8  xsa246-4.9.patch
$
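
For example, to patch a 4.9.x source tree (illustrative; run from the
top of the tree, then rebuild and reinstall the hypervisor):

$ patch -p1 < xsa246-4.9.patch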

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators, with ONE exception:

Removing the ability to boot in populate-on-demand mode is NOT
permitted during the embargo on public cloud systems.  This is because
doing so might alert attackers to the nature of the vulnerability.
Deployment of this mitigation is permitted only AFTER the embargo
ends.

Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBCAAGBQJaH/KNAAoJEIP+FMlX6CvZi5sIAJojGRp1oewRc3JkEolFITBb
EcQDq9n7ppd21q+hRmYKT+IwRILahxGFbfQJHzCbu9KHFNswt9dgFNlkBk2Jk+rD
q3LzoSeIBrNVazfIX6VpGtgrPxlsAJzsmvAp5LgdZ+H3tYqsHYdyeoaPFAtbuB40
6WgBWv2Z003uRCvK5F8ka3cRrWsuuiae9gnUKfLjt4eAu7rES8wFG+VE4n/o6RLg
YJuyCL97rYXgBRAABO/1bR7mKWhyux3WbtZjjDSnUcn+O3UOpVH49bL5JHb1XKuS
ycdwYtfjTwB7nB1o7TnYGEQKieqlWDWOylWTWJxA3cOsIHHB2syTV3L9vVukti0=
=CmSs
-----END PGP SIGNATURE-----

["xsa246.patch" (application/octet-stream)]

From: Julien Grall <julien.grall@linaro.org>
Subject: x86/pod: prevent infinite loop when shattering large pages

When populating pages, the PoD may need to split large ones using
p2m_set_entry and request the caller to retry (see ept_get_entry for
instance).

p2m_set_entry may fail to shatter if it is not possible to allocate
memory for the new page table. However, the error is not propagated,
so the callers retry the PoD operation indefinitely.

Prevent the infinite loop by returning false when it is not possible
to shatter the large mapping.

This is XSA-246.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1113,9 +1113,8 @@ p2m_pod_demand_populate(struct p2m_domai
          * NOTE: In a fine-grained p2m locking scenario this operation
          * may need to promote its locking from gfn->1g superpage
          */
-        p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
-                      p2m_populate_on_demand, p2m->default_access);
-        return true;
+        return !p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
+                              p2m_populate_on_demand, p2m->default_access);
     }
 
     /* Only reclaim if we're in actual need of more cache. */
@@ -1147,8 +1146,12 @@ p2m_pod_demand_populate(struct p2m_domai
 
     BUG_ON((mfn_x(mfn) & ((1UL << order) - 1)) != 0);
 
-    p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
-                  p2m->default_access);
+    if ( p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
+                       p2m->default_access) )
+    {
+        p2m_pod_cache_add(p2m, p, order);
+        goto out_fail;
+    }
 
     for( i = 0; i < (1UL << order); i++ )
     {
@@ -1193,14 +1196,17 @@ remap_and_retry:
     BUG_ON(order != PAGE_ORDER_2M);
     pod_unlock(p2m);
 
-    /* Remap this 2-meg region in singleton chunks */
     /*
+     * Remap this 2-meg region in singleton chunks. See the comment on the
+     * 1G page splitting path above for why a single call suffices.
+     *
      * NOTE: In a p2m fine-grained lock scenario this might
      * need promoting the gfn lock from gfn->2M superpage.
      */
-    for ( i = 0; i < (1UL << order); i++ )
-        p2m_set_entry(p2m, gfn_add(gfn_aligned, i), INVALID_MFN, PAGE_ORDER_4K,
-                      p2m_populate_on_demand, p2m->default_access);
+    if ( p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_4K,
+                       p2m_populate_on_demand, p2m->default_access) )
+        return false;
+
     if ( tb_init_done )
     {
         struct {

["xsa246-4.7.patch" (application/octet-stream)]

From: Julien Grall <julien.grall@linaro.org>
Subject: x86/pod: prevent infinite loop when shattering large pages

When populating pages, the PoD may need to split large ones using
p2m_set_entry and request the caller to retry (see ept_get_entry for
instance).

p2m_set_entry may fail to shatter if it is not possible to allocate
memory for the new page table. However, the error is not propagated,
so the callers retry the PoD operation indefinitely.

Prevent the infinite loop by returning false when it is not possible
to shatter the large mapping.

This is XSA-246.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1073,9 +1073,8 @@ p2m_pod_demand_populate(struct p2m_domai
          * NOTE: In a fine-grained p2m locking scenario this operation
          * may need to promote its locking from gfn->1g superpage
          */
-        p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
-                      p2m_populate_on_demand, p2m->default_access);
-        return 0;
+        return p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
+                             p2m_populate_on_demand, p2m->default_access);
     }
 
     /* Only reclaim if we're in actual need of more cache. */
@@ -1106,8 +1105,12 @@ p2m_pod_demand_populate(struct p2m_domai
 
     gfn_aligned = (gfn >> order) << order;
 
-    p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
-                  p2m->default_access);
+    if ( p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
+                       p2m->default_access) )
+    {
+        p2m_pod_cache_add(p2m, p, order);
+        goto out_fail;
+    }
 
     for( i = 0; i < (1UL << order); i++ )
     {
@@ -1152,13 +1155,18 @@ remap_and_retry:
     BUG_ON(order != PAGE_ORDER_2M);
     pod_unlock(p2m);
 
-    /* Remap this 2-meg region in singleton chunks */
-    /* NOTE: In a p2m fine-grained lock scenario this might
-     * need promoting the gfn lock from gfn->2M superpage */
+    /*
+     * Remap this 2-meg region in singleton chunks. See the comment on the
+     * 1G page splitting path above for why a single call suffices.
+     *
+     * NOTE: In a p2m fine-grained lock scenario this might
+     * need promoting the gfn lock from gfn->2M superpage.
+     */
     gfn_aligned = (gfn>>order)<<order;
-    for(i=0; i<(1<<order); i++)
-        p2m_set_entry(p2m, gfn_aligned + i, _mfn(INVALID_MFN), PAGE_ORDER_4K,
-                      p2m_populate_on_demand, p2m->default_access);
+    if ( p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+                       p2m_populate_on_demand, p2m->default_access) )
+        return -1;
+
     if ( tb_init_done )
     {
         struct {

["xsa246-4.9.patch" (application/octet-stream)]

From: Julien Grall <julien.grall@linaro.org>
Subject: x86/pod: prevent infinite loop when shattering large pages

When populating pages, the PoD may need to split large ones using
p2m_set_entry and request the caller to retry (see ept_get_entry for
instance).

p2m_set_entry may fail to shatter if it is not possible to allocate
memory for the new page table. However, the error is not propagated,
so the callers retry the PoD operation indefinitely.

Prevent the infinite loop by returning false when it is not possible
to shatter the large mapping.

This is XSA-246.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>

--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1071,9 +1071,8 @@ p2m_pod_demand_populate(struct p2m_domai
          * NOTE: In a fine-grained p2m locking scenario this operation
          * may need to promote its locking from gfn->1g superpage
          */
-        p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
-                      p2m_populate_on_demand, p2m->default_access);
-        return 0;
+        return p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_2M,
+                             p2m_populate_on_demand, p2m->default_access);
     }
 
     /* Only reclaim if we're in actual need of more cache. */
@@ -1104,8 +1103,12 @@ p2m_pod_demand_populate(struct p2m_domai
 
     gfn_aligned = (gfn >> order) << order;
 
-    p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
-                  p2m->default_access);
+    if ( p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
+                       p2m->default_access) )
+    {
+        p2m_pod_cache_add(p2m, p, order);
+        goto out_fail;
+    }
 
     for( i = 0; i < (1UL << order); i++ )
     {
@@ -1150,13 +1153,18 @@ remap_and_retry:
     BUG_ON(order != PAGE_ORDER_2M);
     pod_unlock(p2m);
 
-    /* Remap this 2-meg region in singleton chunks */
-    /* NOTE: In a p2m fine-grained lock scenario this might
-     * need promoting the gfn lock from gfn->2M superpage */
+    /*
+     * Remap this 2-meg region in singleton chunks. See the comment on the
+     * 1G page splitting path above for why a single call suffices.
+     *
+     * NOTE: In a p2m fine-grained lock scenario this might
+     * need promoting the gfn lock from gfn->2M superpage.
+     */
     gfn_aligned = (gfn>>order)<<order;
-    for(i=0; i<(1<<order); i++)
-        p2m_set_entry(p2m, gfn_aligned + i, INVALID_MFN, PAGE_ORDER_4K,
-                      p2m_populate_on_demand, p2m->default_access);
+    if ( p2m_set_entry(p2m, gfn_aligned, INVALID_MFN, PAGE_ORDER_4K,
+                       p2m_populate_on_demand, p2m->default_access) )
+        return -1;
+
     if ( tb_init_done )
     {
         struct {

