List:       oss-security
Subject:    [oss-security] Xen Security Advisory 389 v3 (CVE-2021-28705,CVE-2021-28709) - issues with partially successful P2M updates on x86
From:       Xen.org security team <security@xen.org>
Date:       2021-11-23 12:11:14
Message-ID: E1mpUdm-0004Vs-4E@xenbits.xenproject.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

     Xen Security Advisory CVE-2021-28705,CVE-2021-28709 / XSA-389
                               version 3

          issues with partially successful P2M updates on x86

UPDATES IN VERSION 3
====================

Add CVE numbers to patches.

Public release.

ISSUE DESCRIPTION
=================

x86 HVM and PVH guests may be started in populate-on-demand (PoD)
mode, to provide a way for them to be easily assigned more memory
later.

Guests are permitted to control certain P2M aspects of individual
pages via hypercalls.  These hypercalls may act on ranges of pages
specified via page orders (resulting in a power-of-2 number of pages).
In some cases the hypervisor carries out such requests by splitting
them into smaller chunks.  Error handling in certain PoD cases has
been insufficient: in particular, partial success of some operations
was not properly accounted for.

Two code paths are affected: page removal (CVE-2021-28705) and
insertion of new pages (CVE-2021-28709).  (We provide a single patch
which fixes both issues.)
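
The accounting hazard can be seen in miniature with a hedged C sketch
(not Xen code; all names, the chunk size, and the failure condition
are invented): a range update split into chunks fails part-way,
leaving the already-processed chunks applied.

    #include <stdio.h>

    #define CHUNK_PAGES (1UL << 9)   /* invented chunk size (512 pages) */

    /* Stand-in for one internal chunk update; fails once the range
     * crosses an arbitrary boundary, simulating e.g. -ENOMEM. */
    static int update_chunk(unsigned long gfn, unsigned long pages)
    {
        (void)pages;
        return gfn >= 0x40000 ? -1 : 0;
    }

    static int update_range(unsigned long gfn, unsigned int order)
    {
        unsigned long done, total = 1UL << order;

        for ( done = 0; done < total; done += CHUNK_PAGES )
            if ( update_chunk(gfn + done, CHUNK_PAGES) )
            {
                /* Partial success: "done" pages were already updated,
                 * so global bookkeeping (in Xen's case, PoD stats and
                 * the M2P) must be reconciled for them rather than
                 * silently dropped. */
                printf("failed after %lu of %lu pages\n", done, total);
                return -1;
            }

        return 0;
    }

    int main(void)
    {
        return update_range(0x3fe00, 10) ? 1 : 0;
    }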

IMPACT
======

Malicious or buggy guest kernels may be able to mount a Denial of
Service (DoS) attack affecting the entire system.  Privilege escalation
and information leaks cannot be ruled out.

VULNERABLE SYSTEMS
==================

All Xen versions from 3.4 onwards are affected.  Xen versions 3.3 and
older are believed not to be affected.

Only x86 HVM and PVH guests started in populate-on-demand mode are
believed to be able to leverage the vulnerability.  Populate-on-demand
mode is activated when the guest's xl configuration file specifies a
"maxmem" value which is larger than the "memory" value.

MITIGATION
==========

Not starting x86 HVM or PVH guests in populate-on-demand mode is
believed to avoid the vulnerability.
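
Concretely, this means either omitting "maxmem" from the guest's xl
configuration or setting it equal to "memory", e.g. (illustrative
values):

    memory = 2048
    maxmem = 2048    # equal values: populate-on-demand is not used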

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa389.patch           xen-unstable
xsa389-4.15.patch      Xen 4.15.x
xsa389-4.14.patch      Xen 4.14.x
xsa389-4.13.patch      Xen 4.13.x
xsa389-4.12.patch      Xen 4.12.x

$ sha256sum xsa389*
c00f5b07594a6459bdd6f7334acc373bc3b0c14a5b0e444ec624ac60f857fc6f  xsa389.patch
bf0d66623c3239e334a17332035be5d7c7e33cfdd7f04f9b385f70ce8fa92752  xsa389-4.12.patch
2737affcf1e0fae5d412067ea8c7fe1cc91a28fa22f3f7e97a502cbd032582cc  xsa389-4.13.patch
b243284679b32ab8c817a2e41562d8694d9781fa8096c268bb41b0cd91684baa  xsa389-4.14.patch
0a213e141089fe7808eae067b3c43beed6c7d5887fa4c901e8f9352618788e5a  xsa389-4.15.patch
$
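
As a usage sketch (the repository URL and branch name are assumptions
based on common Xen practice; adjust for your version), verifying and
applying the Xen 4.15 patch at the tip of its stable branch might look
like:

    $ sha256sum xsa389-4.15.patch        # compare against the list above
    $ git clone -b stable-4.15 https://xenbits.xen.org/git-http/xen.git
    $ cd xen
    $ patch -p1 < ../xsa389-4.15.patch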

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on public-
facing systems with untrusted guest users and administrators.

HOWEVER, deployment of the mitigation described above is NOT permitted
during the embargo on public-facing systems with untrusted guest users
and administrators.  This is because such a configuration change is
recognizable by the affected guests.

AND: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmGc2jkMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZkOAIAInsof4UP5VTDcLtiwCvGskCXZT0SwbJ5OKbmxG7
RmPJg+R5sy89aHyJ4BP4eRfgrfbG35qBSCB5zLHy2FR3oioRmDz3y4KAFP3hXJRc
B0hSNM9Al9nEfdt0YQeVxt297X0Ouz/bihLoHXKOTZ2AqKcafu9GRIdK0Kcj1v49
azcW1ndfAkIEYDGvtcdZDXYT3CyjLusQme3pweohZGwcQW6UYg7DhRKl0KPQZP/L
paQZd60walNWgDcV7qfMnWit2jYxF4AptLW8c+KFig7qorLE5z9Xj7AIJ6kGriry
fnwy/DE2xRr4IxWk/FsJgDxeAS6mv3KQ2Mpgx2bRAD0jB6I=
=3P7k
-----END PGP SIGNATURE-----

["xsa389.patch" (application/octet-stream)]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/P2M: deal with partial success of p2m_set_entry()

M2P and PoD stats need to remain in sync with P2M; if an update succeeds
only partially, respective adjustments need to be made. If updates get
made before the call, they may also need undoing upon complete failure
(i.e. including the single-page case).

Log-dirty state would better also be kept in sync.

Note that the change to set_typed_p2m_entry() may not be strictly
necessary (due to the order restriction enforced near the top of the
function), but is being kept here to be on the safe side.

This is CVE-2021-28705 and CVE-2021-28709 / XSA-389.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
I think the M2P aspect (wrt wrongly allowing multiple GFN mappings of a
page that may later get freed, i.e. primarily the grant table v2 status
pages) is only theoretical at this point. That's because any such
special mappings would only ever be 4k ones, and replacing existing
small mappings can't fail with -ENOMEM from intermediate page table
allocation. And I think all other error returns would, if in fact
reachable, represent (presumably security relevant) bugs themselves, so
addressing this part may be merely defense-in-depth.
---
v4: Re-base ahead of "x86/mm: update log-dirty bitmap when manipulating
    P2M".
v3: Remove bogus special casing of order-0 from p2m_remove_page() and
    guest_physmap_add_entry(). Adjust description accordingly.
v2: Correct undo conditional in p2m_remove_page().

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -794,6 +794,7 @@ p2m_remove_page(struct p2m_domain *p2m,
     unsigned long i;
     p2m_type_t t;
     p2m_access_t a;
+    int rc;
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
     P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_x(gfn), mfn_x(mfn));
@@ -825,8 +826,27 @@ p2m_remove_page(struct p2m_domain *p2m,
 
     ioreq_request_mapcache_invalidate(p2m->domain);
 
-    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
-                         p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
+                       p2m->default_access);
+    if ( likely(!rc) || !mfn_valid(mfn) )
+        return rc;
+
+    /*
+     * The operation may have partially succeeded. For the failed part we need
+     * to undo the M2P update and, out of precaution, mark the pages dirty
+     * again.
+     */
+    for ( i = 0; i < (1UL << page_order); ++i )
+    {
+        p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
+        if ( !p2m_is_hole(t) && !p2m_is_special(t) && !p2m_is_shared(t) )
+        {
+            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
+            paging_mark_pfn_dirty(p2m->domain, _pfn(gfn_x(gfn) + i));
+        }
+    }
+
+    return rc;
 }
 
 int
@@ -1022,13 +1042,8 @@ guest_physmap_add_entry(struct domain *d
 
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, mfn, page_order, t, p2m->default_access);
-    if ( rc == 0 )
+    if ( likely(!rc) )
     {
-        pod_lock(p2m);
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
@@ -1036,6 +1051,42 @@ guest_physmap_add_entry(struct domain *d
                                   gfn_x(gfn_add(gfn, i)));
         }
     }
+    else
+    {
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update M2P and dirty state, while for the failed part we
+         * may need to adjust PoD stats as well as undo the earlier M2P update.
+         */
+        for ( i = 0; i < (1UL << page_order); ++i )
+        {
+            omfn = p2m->get_entry(p2m, gfn_add(gfn, i), &ot, &a, 0, NULL, NULL);
+            if ( p2m_is_pod(ot) )
+            {
+                BUG_ON(!pod_count);
+                --pod_count;
+            }
+            else if ( mfn_eq(omfn, mfn_add(mfn, i)) && ot == t &&
+                      a == p2m->default_access && !p2m_is_grant(t) )
+            {
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+                paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn) + i));
+            }
+            else if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
+            {
+                ASSERT(mfn_valid(omfn));
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+            }
+        }
+    }
+
+    if ( pod_count )
+    {
+        pod_lock(p2m);
+        p2m->pod.entry_count -= pod_count;
+        BUG_ON(p2m->pod.entry_count < 0);
+        pod_unlock(p2m);
+    }
 
 out:
     p2m_unlock(p2m);
@@ -1325,6 +1376,49 @@ static int set_typed_p2m_entry(struct do
             return 0;
         }
     }
+
+    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
+    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
+    if ( unlikely(rc) )
+    {
+        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
+                 gfn_l, order, rc, mfn_x(mfn));
+
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update PoD stats, M2P, and dirty state.
+         */
+        if ( order != PAGE_ORDER_4K )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1UL << order); ++i )
+            {
+                p2m_type_t t;
+                mfn_t cmfn = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0,
+                                            NULL, NULL);
+
+                if ( !mfn_eq(cmfn, mfn_add(mfn, i)) || t != gfn_p2mt ||
+                     a != access )
+                    continue;
+
+                if ( p2m_is_ram(ot) )
+                {
+                    ASSERT(mfn_valid(mfn_add(omfn, i)));
+                    set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
+
+                    ioreq_request_mapcache_invalidate(d);
+                }
+                else if ( p2m_is_pod(ot) )
+                {
+                    pod_lock(p2m);
+                    BUG_ON(!p2m->pod.entry_count);
+                    --p2m->pod.entry_count;
+                    pod_unlock(p2m);
+                }
+            }
+        }
+    }
     else if ( p2m_is_ram(ot) )
     {
         unsigned long i;
@@ -1337,12 +1431,6 @@ static int set_typed_p2m_entry(struct do
 
         ioreq_request_mapcache_invalidate(d);
     }
-
-    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
-    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
-    if ( rc )
-        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
-                 gfn_l, order, rc, mfn_x(mfn));
     else if ( p2m_is_pod(ot) )
     {
         pod_lock(p2m);

["xsa389-4.12.patch" (application/octet-stream)]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/P2M: deal with partial success of p2m_set_entry()

M2P and PoD stats need to remain in sync with P2M; if an update succeeds
only partially, respective adjustments need to be made. If updates get
made before the call, they may also need undoing upon complete failure
(i.e. including the single-page case).

Log-dirty state would better also be kept in sync.

Note that the change to set_typed_p2m_entry() may not be strictly
necessary (due to the order restriction enforced near the top of the
function), but is being kept here to be on the safe side.

This is CVE-2021-28705 and CVE-2021-28709 / XSA-389.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -780,6 +780,7 @@ p2m_remove_page(struct p2m_domain *p2m,
     gfn_t gfn = _gfn(gfn_l);
     p2m_type_t t;
     p2m_access_t a;
+    int rc;
 
     /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
     if ( !paging_mode_translate(p2m->domain) )
@@ -811,8 +812,27 @@ p2m_remove_page(struct p2m_domain *p2m,
                 set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
         }
     }
-    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
-                         p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
+                       p2m->default_access);
+    if ( likely(!rc) || !mfn_valid(_mfn(mfn)) )
+        return rc;
+
+    /*
+     * The operation may have partially succeeded. For the failed part we need
+     * to undo the M2P update and, out of precaution, mark the pages dirty
+     * again.
+     */
+    for ( i = 0; i < (1UL << page_order); ++i )
+    {
+        p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
+        if ( !p2m_is_hole(t) && !p2m_is_special(t) && !p2m_is_shared(t) )
+        {
+            set_gpfn_from_mfn(mfn + i, gfn_l + i);
+            paging_mark_pfn_dirty(p2m->domain, _pfn(gfn_l + i));
+        }
+    }
+
+    return rc;
 }
 
 int
@@ -980,15 +1000,8 @@ guest_physmap_add_entry(struct domain *d
 
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, mfn, page_order, t, p2m->default_access);
-    if ( rc == 0 )
+    if ( likely(!rc) )
     {
-#ifdef CONFIG_HVM
-        pod_lock(p2m);
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-#endif
-
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
@@ -996,6 +1009,44 @@ guest_physmap_add_entry(struct domain *d
                                   gfn_x(gfn_add(gfn, i)));
         }
     }
+    else
+    {
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update M2P and dirty state, while for the failed part we
+         * may need to adjust PoD stats as well as undo the earlier M2P update.
+         */
+        for ( i = 0; i < (1UL << page_order); ++i )
+        {
+            omfn = p2m->get_entry(p2m, gfn_add(gfn, i), &ot, &a, 0, NULL, NULL);
+            if ( p2m_is_pod(ot) )
+            {
+                BUG_ON(!pod_count);
+                --pod_count;
+            }
+            else if ( mfn_eq(omfn, mfn_add(mfn, i)) && ot == t &&
+                      a == p2m->default_access && !p2m_is_grant(t) )
+            {
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+                paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn) + i));
+            }
+            else if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
+            {
+                ASSERT(mfn_valid(omfn));
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+            }
+        }
+    }
+
+#ifdef CONFIG_HVM
+    if ( pod_count )
+    {
+        pod_lock(p2m);
+        p2m->pod.entry_count -= pod_count;
+        BUG_ON(p2m->pod.entry_count < 0);
+        pod_unlock(p2m);
+    }
+#endif
 
  out:
     p2m_unlock(p2m);
@@ -1278,6 +1329,49 @@ static int set_typed_p2m_entry(struct do
         domain_crash(d);
         return -EPERM;
     }
+
+    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
+    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
+    if ( unlikely(rc) )
+    {
+        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
+                 gfn_l, order, rc, mfn_x(mfn));
+
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update PoD stats, M2P, and dirty state.
+         */
+        if ( order != PAGE_ORDER_4K )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1UL << order); ++i )
+            {
+                p2m_type_t t;
+                mfn_t cmfn = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0,
+                                            NULL, NULL);
+
+                if ( !mfn_eq(cmfn, mfn_add(mfn, i)) || t != gfn_p2mt ||
+                     a != access )
+                    continue;
+
+                if ( p2m_is_ram(ot) )
+                {
+                    ASSERT(mfn_valid(mfn_add(omfn, i)));
+                    set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
+                }
+#ifdef CONFIG_HVM
+                else if ( p2m_is_pod(ot) )
+                {
+                    pod_lock(p2m);
+                    BUG_ON(!p2m->pod.entry_count);
+                    --p2m->pod.entry_count;
+                    pod_unlock(p2m);
+                }
+#endif
+            }
+        }
+    }
     else if ( p2m_is_ram(ot) )
     {
         unsigned long i;
@@ -1288,12 +1382,6 @@ static int set_typed_p2m_entry(struct do
             set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
         }
     }
-
-    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
-    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
-    if ( rc )
-        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
-                 gfn_l, order, rc, mfn_x(mfn));
 #ifdef CONFIG_HVM
     else if ( p2m_is_pod(ot) )
     {

["xsa389-4.13.patch" (application/octet-stream)]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/P2M: deal with partial success of p2m_set_entry()

M2P and PoD stats need to remain in sync with P2M; if an update succeeds
only partially, respective adjustments need to be made. If updates get
made before the call, they may also need undoing upon complete failure
(i.e. including the single-page case).

Log-dirty state would better also be kept in sync.

Note that the change to set_typed_p2m_entry() may not be strictly
necessary (due to the order restriction enforced near the top of the
function), but is being kept here to be on the safe side.

This is CVE-2021-28705 and CVE-2021-28709 / XSA-389.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -781,6 +781,7 @@ p2m_remove_page(struct p2m_domain *p2m,
     gfn_t gfn = _gfn(gfn_l);
     p2m_type_t t;
     p2m_access_t a;
+    int rc;
 
     /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
     if ( !paging_mode_translate(p2m->domain) )
@@ -812,8 +813,27 @@ p2m_remove_page(struct p2m_domain *p2m,
                 set_gpfn_from_mfn(mfn+i, INVALID_M2P_ENTRY);
         }
     }
-    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
-                         p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
+                       p2m->default_access);
+    if ( likely(!rc) || !mfn_valid(_mfn(mfn)) )
+        return rc;
+
+    /*
+     * The operation may have partially succeeded. For the failed part we need
+     * to undo the M2P update and, out of precaution, mark the pages dirty
+     * again.
+     */
+    for ( i = 0; i < (1UL << page_order); ++i )
+    {
+        p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
+        if ( !p2m_is_hole(t) && !p2m_is_special(t) && !p2m_is_shared(t) )
+        {
+            set_gpfn_from_mfn(mfn + i, gfn_l + i);
+            paging_mark_pfn_dirty(p2m->domain, _pfn(gfn_l + i));
+        }
+    }
+
+    return rc;
 }
 
 int
@@ -1002,13 +1022,8 @@ guest_physmap_add_entry(struct domain *d
 
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, mfn, page_order, t, p2m->default_access);
-    if ( rc == 0 )
+    if ( likely(!rc) )
     {
-        pod_lock(p2m);
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
@@ -1016,6 +1031,42 @@ guest_physmap_add_entry(struct domain *d
                                   gfn_x(gfn_add(gfn, i)));
         }
     }
+    else
+    {
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update M2P and dirty state, while for the failed part we
+         * may need to adjust PoD stats as well as undo the earlier M2P update.
+         */
+        for ( i = 0; i < (1UL << page_order); ++i )
+        {
+            omfn = p2m->get_entry(p2m, gfn_add(gfn, i), &ot, &a, 0, NULL, NULL);
+            if ( p2m_is_pod(ot) )
+            {
+                BUG_ON(!pod_count);
+                --pod_count;
+            }
+            else if ( mfn_eq(omfn, mfn_add(mfn, i)) && ot == t &&
+                      a == p2m->default_access && !p2m_is_grant(t) )
+            {
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+                paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn) + i));
+            }
+            else if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
+            {
+                ASSERT(mfn_valid(omfn));
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+            }
+        }
+    }
+
+    if ( pod_count )
+    {
+        pod_lock(p2m);
+        p2m->pod.entry_count -= pod_count;
+        BUG_ON(p2m->pod.entry_count < 0);
+        pod_unlock(p2m);
+    }
 
  out:
     p2m_unlock(p2m);
@@ -1307,6 +1358,49 @@ static int set_typed_p2m_entry(struct do
             return 0;
         }
     }
+
+    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
+    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
+    if ( unlikely(rc) )
+    {
+        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
+                 gfn_l, order, rc, mfn_x(mfn));
+
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update PoD stats, M2P, and dirty state.
+         */
+        if ( order != PAGE_ORDER_4K )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1UL << order); ++i )
+            {
+                p2m_type_t t;
+                mfn_t cmfn = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0,
+                                            NULL, NULL);
+
+                if ( !mfn_eq(cmfn, mfn_add(mfn, i)) || t != gfn_p2mt ||
+                     a != access )
+                    continue;
+
+                if ( p2m_is_ram(ot) )
+                {
+                    ASSERT(mfn_valid(mfn_add(omfn, i)));
+                    set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
+                }
+#ifdef CONFIG_HVM
+                else if ( p2m_is_pod(ot) )
+                {
+                    pod_lock(p2m);
+                    BUG_ON(!p2m->pod.entry_count);
+                    --p2m->pod.entry_count;
+                    pod_unlock(p2m);
+                }
+#endif
+            }
+        }
+    }
     else if ( p2m_is_ram(ot) )
     {
         unsigned long i;
@@ -1317,12 +1411,6 @@ static int set_typed_p2m_entry(struct do
             set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
         }
     }
-
-    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
-    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
-    if ( rc )
-        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
-                 gfn_l, order, rc, mfn_x(mfn));
 #ifdef CONFIG_HVM
     else if ( p2m_is_pod(ot) )
     {

["xsa389-4.14.patch" (application/octet-stream)]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/P2M: deal with partial success of p2m_set_entry()

M2P and PoD stats need to remain in sync with P2M; if an update succeeds
only partially, respective adjustments need to be made. If updates get
made before the call, they may also need undoing upon complete failure
(i.e. including the single-page case).

Log-dirty state would better also be kept in sync.

Note that the change to set_typed_p2m_entry() may not be strictly
necessary (due to the order restriction enforced near the top of the
function), but is being kept here to be on the safe side.

This is CVE-2021-28705 and CVE-2021-28709 / XSA-389.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -780,6 +780,7 @@ p2m_remove_page(struct p2m_domain *p2m,
     unsigned long i;
     p2m_type_t t;
     p2m_access_t a;
+    int rc;
 
     /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
     if ( !paging_mode_translate(p2m->domain) )
@@ -813,8 +814,27 @@ p2m_remove_page(struct p2m_domain *p2m,
         }
     }
 
-    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
-                         p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
+                       p2m->default_access);
+    if ( likely(!rc) || !mfn_valid(mfn) )
+        return rc;
+
+    /*
+     * The operation may have partially succeeded. For the failed part we need
+     * to undo the M2P update and, out of precaution, mark the pages dirty
+     * again.
+     */
+    for ( i = 0; i < (1UL << page_order); ++i )
+    {
+        p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
+        if ( !p2m_is_hole(t) && !p2m_is_special(t) && !p2m_is_shared(t) )
+        {
+            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
+            paging_mark_pfn_dirty(p2m->domain, _pfn(gfn_x(gfn) + i));
+        }
+    }
+
+    return rc;
 }
 
 int
@@ -1003,13 +1023,8 @@ guest_physmap_add_entry(struct domain *d
 
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, mfn, page_order, t, p2m->default_access);
-    if ( rc == 0 )
+    if ( likely(!rc) )
     {
-        pod_lock(p2m);
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
@@ -1017,6 +1032,42 @@ guest_physmap_add_entry(struct domain *d
                                   gfn_x(gfn_add(gfn, i)));
         }
     }
+    else
+    {
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update M2P and dirty state, while for the failed part we
+         * may need to adjust PoD stats as well as undo the earlier M2P update.
+         */
+        for ( i = 0; i < (1UL << page_order); ++i )
+        {
+            omfn = p2m->get_entry(p2m, gfn_add(gfn, i), &ot, &a, 0, NULL, NULL);
+            if ( p2m_is_pod(ot) )
+            {
+                BUG_ON(!pod_count);
+                --pod_count;
+            }
+            else if ( mfn_eq(omfn, mfn_add(mfn, i)) && ot == t &&
+                      a == p2m->default_access && !p2m_is_grant(t) )
+            {
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+                paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn) + i));
+            }
+            else if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
+            {
+                ASSERT(mfn_valid(omfn));
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+            }
+        }
+    }
+
+    if ( pod_count )
+    {
+        pod_lock(p2m);
+        p2m->pod.entry_count -= pod_count;
+        BUG_ON(p2m->pod.entry_count < 0);
+        pod_unlock(p2m);
+    }
 
 out:
     p2m_unlock(p2m);
@@ -1308,6 +1359,49 @@ static int set_typed_p2m_entry(struct do
             return 0;
         }
     }
+
+    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
+    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
+    if ( unlikely(rc) )
+    {
+        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
+                 gfn_l, order, rc, mfn_x(mfn));
+
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update PoD stats, M2P, and dirty state.
+         */
+        if ( order != PAGE_ORDER_4K )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1UL << order); ++i )
+            {
+                p2m_type_t t;
+                mfn_t cmfn = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0,
+                                            NULL, NULL);
+
+                if ( !mfn_eq(cmfn, mfn_add(mfn, i)) || t != gfn_p2mt ||
+                     a != access )
+                    continue;
+
+                if ( p2m_is_ram(ot) )
+                {
+                    ASSERT(mfn_valid(mfn_add(omfn, i)));
+                    set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
+                }
+#ifdef CONFIG_HVM
+                else if ( p2m_is_pod(ot) )
+                {
+                    pod_lock(p2m);
+                    BUG_ON(!p2m->pod.entry_count);
+                    --p2m->pod.entry_count;
+                    pod_unlock(p2m);
+                }
+#endif
+            }
+        }
+    }
     else if ( p2m_is_ram(ot) )
     {
         unsigned long i;
@@ -1318,12 +1412,6 @@ static int set_typed_p2m_entry(struct do
             set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
         }
     }
-
-    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
-    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
-    if ( rc )
-        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
-                 gfn_l, order, rc, mfn_x(mfn));
 #ifdef CONFIG_HVM
     else if ( p2m_is_pod(ot) )
     {

["xsa389-4.15.patch" (application/octet-stream)]

From: Jan Beulich <jbeulich@suse.com>
Subject: x86/P2M: deal with partial success of p2m_set_entry()

M2P and PoD stats need to remain in sync with P2M; if an update succeeds
only partially, respective adjustments need to be made. If updates get
made before the call, they may also need undoing upon complete failure
(i.e. including the single-page case).

Log-dirty state would better also be kept in sync.

Note that the change to set_typed_p2m_entry() may not be strictly
necessary (due to the order restriction enforced near the top of the
function), but is being kept here to be on the safe side.

This is CVE-2021-28705 and CVE-2021-28709 / XSA-389.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -784,6 +784,7 @@ p2m_remove_page(struct p2m_domain *p2m,
     unsigned long i;
     p2m_type_t t;
     p2m_access_t a;
+    int rc;
 
     /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
     if ( !paging_mode_translate(p2m->domain) )
@@ -819,8 +820,27 @@ p2m_remove_page(struct p2m_domain *p2m,
 
     ioreq_request_mapcache_invalidate(p2m->domain);
 
-    return p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
-                         p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, INVALID_MFN, page_order, p2m_invalid,
+                       p2m->default_access);
+    if ( likely(!rc) || !mfn_valid(mfn) )
+        return rc;
+
+    /*
+     * The operation may have partially succeeded. For the failed part we need
+     * to undo the M2P update and, out of precaution, mark the pages dirty
+     * again.
+     */
+    for ( i = 0; i < (1UL << page_order); ++i )
+    {
+        p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0, NULL, NULL);
+        if ( !p2m_is_hole(t) && !p2m_is_special(t) && !p2m_is_shared(t) )
+        {
+            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
+            paging_mark_pfn_dirty(p2m->domain, _pfn(gfn_x(gfn) + i));
+        }
+    }
+
+    return rc;
 }
 
 int
@@ -1009,13 +1029,8 @@ guest_physmap_add_entry(struct domain *d
 
     /* Now, actually do the two-way mapping */
     rc = p2m_set_entry(p2m, gfn, mfn, page_order, t, p2m->default_access);
-    if ( rc == 0 )
+    if ( likely(!rc) )
     {
-        pod_lock(p2m);
-        p2m->pod.entry_count -= pod_count;
-        BUG_ON(p2m->pod.entry_count < 0);
-        pod_unlock(p2m);
-
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
@@ -1023,6 +1038,42 @@ guest_physmap_add_entry(struct domain *d
                                   gfn_x(gfn_add(gfn, i)));
         }
     }
+    else
+    {
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update M2P and dirty state, while for the failed part we
+         * may need to adjust PoD stats as well as undo the earlier M2P update.
+         */
+        for ( i = 0; i < (1UL << page_order); ++i )
+        {
+            omfn = p2m->get_entry(p2m, gfn_add(gfn, i), &ot, &a, 0, NULL, NULL);
+            if ( p2m_is_pod(ot) )
+            {
+                BUG_ON(!pod_count);
+                --pod_count;
+            }
+            else if ( mfn_eq(omfn, mfn_add(mfn, i)) && ot == t &&
+                      a == p2m->default_access && !p2m_is_grant(t) )
+            {
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+                paging_mark_pfn_dirty(d, _pfn(gfn_x(gfn) + i));
+            }
+            else if ( p2m_is_ram(ot) && !p2m_is_paged(ot) )
+            {
+                ASSERT(mfn_valid(omfn));
+                set_gpfn_from_mfn(mfn_x(omfn), gfn_x(gfn) + i);
+            }
+        }
+    }
+
+    if ( pod_count )
+    {
+        pod_lock(p2m);
+        p2m->pod.entry_count -= pod_count;
+        BUG_ON(p2m->pod.entry_count < 0);
+        pod_unlock(p2m);
+    }
 
 out:
     p2m_unlock(p2m);
@@ -1314,6 +1365,51 @@ static int set_typed_p2m_entry(struct do
             return 0;
         }
     }
+
+    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
+    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
+    if ( unlikely(rc) )
+    {
+        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
+                 gfn_l, order, rc, mfn_x(mfn));
+
+        /*
+         * The operation may have partially succeeded. For the successful part
+         * we need to update PoD stats, M2P, and dirty state.
+         */
+        if ( order != PAGE_ORDER_4K )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < (1UL << order); ++i )
+            {
+                p2m_type_t t;
+                mfn_t cmfn = p2m->get_entry(p2m, gfn_add(gfn, i), &t, &a, 0,
+                                            NULL, NULL);
+
+                if ( !mfn_eq(cmfn, mfn_add(mfn, i)) || t != gfn_p2mt ||
+                     a != access )
+                    continue;
+
+                if ( p2m_is_ram(ot) )
+                {
+                    ASSERT(mfn_valid(mfn_add(omfn, i)));
+                    set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
+
+                    ioreq_request_mapcache_invalidate(d);
+                }
+#ifdef CONFIG_HVM
+                else if ( p2m_is_pod(ot) )
+                {
+                    pod_lock(p2m);
+                    BUG_ON(!p2m->pod.entry_count);
+                    --p2m->pod.entry_count;
+                    pod_unlock(p2m);
+                }
+#endif
+            }
+        }
+    }
     else if ( p2m_is_ram(ot) )
     {
         unsigned long i;
@@ -1326,12 +1422,6 @@ static int set_typed_p2m_entry(struct do
 
         ioreq_request_mapcache_invalidate(d);
     }
-
-    P2M_DEBUG("set %d %lx %lx\n", gfn_p2mt, gfn_l, mfn_x(mfn));
-    rc = p2m_set_entry(p2m, gfn, mfn, order, gfn_p2mt, access);
-    if ( rc )
-        gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
-                 gfn_l, order, rc, mfn_x(mfn));
 #ifdef CONFIG_HVM
     else if ( p2m_is_pod(ot) )
     {

