
List:       oss-security
Subject:    [oss-security] Xen Security Advisory 276 v2 - resource accounting issues in x86 IOREQ server handling
From:       Xen.org security team <security () xen ! org>
Date:       2018-11-20 13:26:24
Message-ID: E1gP62q-0000ct-UJ () xenbits ! xenproject ! org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-276
                              version 2

        resource accounting issues in x86 IOREQ server handling

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

Allocation of pages used to communicate with external emulators did not
follow the principles required for proper life cycle management of
guest-exposed pages: the pages were neither properly reference counted
nor correctly accounted for against a domain's memory allocation.

IMPACT
======

A compromised DM stubdomain may cause Xen to crash, resulting in a DoS
(Denial of Service) affecting the entire host.  Privilege escalation
as well as information leaks cannot be ruled out.

VULNERABLE SYSTEMS
==================

Only Xen 4.11 is affected by this vulnerability.  Xen 4.10 and earlier
releases are not affected.

Only systems running HVM guests with their device models running in a
stubdomain are considered vulnerable.  Note that an attacker must first
compromise the device model in order to exploit this vulnerability.

Arm guests cannot leverage this vulnerability.

MITIGATION
==========

Running only PV guests will avoid this vulnerability.

(The security of a Xen system using stub domains is still better than
with a qemu-dm running as an unrestricted dom0 process.  Therefore
users with these configurations should not switch to an unrestricted
dom0 qemu-dm.)

CREDITS
=======

This issue was discovered by Julien Grall of ARM.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

xsa276/*.patch           xen-unstable
xsa276-4.11/*.patch      Xen 4.11.x

$ sha256sum xsa276* xsa276*/*
efe9f031c5646b111cbfbe35141a7d99eb31ead07c1c6051145abbd9a3def5b9  xsa276.meta
7f77225e3de780a2507714caab5870664634bf9f76215547bebd31a6399a86ef  xsa276-4.11/0001-x86-hvm-ioreq-fix-page-referencing.patch
c93c66090009833cd11fabe72b523cbdb3467fa104cc97d1855d365881aa7f8e  xsa276-4.11/0002-x86-hvm-ioreq-use-ref-counted-target-assigned-shared.patch
ef8b89375866821f4a612f600d10834bf65d811b1784a4ee0fde4a3a409501e0  xsa276/0001-x86-hvm-ioreq-fix-page-referencing.patch
75398ec343b9aaebf0c7dc0c5ef5ed7a3f3be0959f1519db5c7f32c44e7a54d3  xsa276/0002-x86-hvm-ioreq-use-ref-counted-target-assigned-shared.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAlv0C2kMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZpssH/1YDoUGry3iCsHZnymWqfWFiuddW2U03UPmq/BH+
tZ+HxnOeibVkvsB8g9POxCkSqS77MiFksgUTc0l6qV9zZ+A7glFRzMbKSSnmobul
ETP/7AM3UO8H4uSji8P3lfN0l1B/BXetitv6FzogOUTP4iCX1TYfS4eu+UUOTWoj
kg3DglZKeLY/eztTnJSOP5VzT09+Ra44IFvCfzz4gMV6Njgj0dZZ1jyBvKNxY3Rs
bKiuycHDAzTGWHR6hymGVR73EowTgaboLEjpXTWVYbBvKv8HUp/v5UBzCf3TuPy6
GmtUaS/mtDPRYcgAjYPddGa7euVL6ESV+FNsSrMneJCBgk4=
=/tEm
-----END PGP SIGNATURE-----


["xsa276.meta" (application/octet-stream)]
["xsa276-4.11/0001-x86-hvm-ioreq-fix-page-referencing.patch" (application/octet-stream)]

From bcc115ba39d2985dcf356ba8a9ac291e314f1f0f Mon Sep 17 00:00:00 2001
From: Jan Beulich <JBeulich@suse.com>
Date: Thu, 11 Oct 2018 04:00:26 -0600
Subject: [PATCH 1/2] x86/hvm/ioreq: fix page referencing

The code does not take a page reference in hvm_alloc_ioreq_mfn(), only a
type reference. This can lead to a situation where a malicious domain with
XSM_DM_PRIV can engineer a sequence as follows:

- create IOREQ server: no pages as yet.
- acquire resource: page allocated, total 0.
- decrease reservation: -1 ref, total -1.

This will cause Xen to hit a BUG_ON() in free_domheap_pages().

This patch fixes the issue by changing the call to get_page_type() in
hvm_alloc_ioreq_mfn() to a call to get_page_and_type(). This change
in turn requires an extra put_page() in hvm_free_ioreq_mfn() in the case
that _PGC_allocated is still set (i.e. a decrease reservation has not
occurred) to avoid the page being leaked.

This is part of XSA-276.

Reported-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/ioreq.c | 46 +++++++++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f39f391929..bdc2687014 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -327,6 +327,7 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page;
 
     if ( iorp->page )
     {
@@ -349,27 +350,33 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
      * could fail if the emulating domain has already reached its
      * maximum allocation.
      */
-    iorp->page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
+    page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
 
-    if ( !iorp->page )
+    if ( !page )
         return -ENOMEM;
 
-    if ( !get_page_type(iorp->page, PGT_writable_page) )
-        goto fail1;
+    if ( !get_page_and_type(page, s->emulator, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(s->emulator);
+        return -ENODATA;
+    }
 
-    iorp->va = __map_domain_page_global(iorp->page);
+    iorp->va = __map_domain_page_global(page);
     if ( !iorp->va )
-        goto fail2;
+        goto fail;
 
+    iorp->page = page;
     clear_page(iorp->va);
     return 0;
 
- fail2:
-    put_page_type(iorp->page);
-
- fail1:
-    put_page(iorp->page);
-    iorp->page = NULL;
+ fail:
+    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+        put_page(page);
+    put_page_and_type(page);
 
     return -ENOMEM;
 }
@@ -377,15 +384,24 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page = iorp->page;
 
-    if ( !iorp->page )
+    if ( !page )
         return;
 
+    iorp->page = NULL;
+
     unmap_domain_page_global(iorp->va);
     iorp->va = NULL;
 
-    put_page_and_type(iorp->page);
-    iorp->page = NULL;
+    /*
+     * Check whether we need to clear the allocation reference before
+     * dropping the explicit references taken by get_page_and_type().
+     */
+    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+        put_page(page);
+
+    put_page_and_type(page);
 }
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-- 
2.19.1
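
The reference discipline established by the patch above can be
illustrated with the following standalone C sketch.  It is a toy model
only, not Xen code: the toy_* names, the "refs" counter and the
"allocated" flag are stand-ins for the page reference count,
get_page_and_type()/put_page_and_type() and the _PGC_allocated flag.
The point it models is that the free path must drop the allocation
reference only if the guest has not already dropped it via a decrease
reservation, so the count ends balanced in either ordering.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
    int  refs;       /* stand-in for the page's general reference count */
    bool allocated;  /* stand-in for the _PGC_allocated flag            */
};

/* Allocation path: the allocation reference plus an explicit reference,
 * mirroring alloc_domheap_page() followed by get_page_and_type(). */
static void toy_alloc(struct toy_page *pg)
{
    pg->refs = 1;                /* allocation reference                */
    pg->allocated = true;
    pg->refs++;                  /* explicit reference taken by the fix */
}

/* A decrease reservation drops the allocation reference, at most once. */
static void toy_decrease_reservation(struct toy_page *pg)
{
    if ( pg->allocated )
    {
        pg->allocated = false;
        pg->refs--;
    }
}

/* Free path: clear the allocation reference if still present, then drop
 * the explicit reference -- the shape of the fixed hvm_free_ioreq_mfn(). */
static void toy_free(struct toy_page *pg)
{
    if ( pg->allocated )
    {
        pg->allocated = false;
        pg->refs--;
    }
    pg->refs--;
    assert(pg->refs == 0);       /* balanced in either ordering */
}

int main(void)
{
    struct toy_page pg;

    toy_alloc(&pg);              /* freed with no decrease reservation */
    toy_free(&pg);

    toy_alloc(&pg);              /* decrease reservation happens first */
    toy_decrease_reservation(&pg);
    toy_free(&pg);

    puts("reference counts balanced in both orderings");
    return 0;
}

Without the explicit reference (the pre-patch get_page_type()-only
arrangement), the second ordering would drive the count below zero,
which is the BUG_ON() in free_domheap_pages() mentioned above.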


["xsa276-4.11/0002-x86-hvm-ioreq-use-ref-counted-target-assigned-shared.patch" (application/octet-stream)]

From 0bb2969630fbc92a0510bf120578b58efb74cdab Mon Sep 17 00:00:00 2001
From: Paul Durrant <Paul.Durrant@citrix.com>
Date: Thu, 1 Nov 2018 17:30:20 +0000
Subject: [PATCH 2/2] x86/hvm/ioreq: use ref-counted target-assigned shared
 pages

Passing MEMF_no_refcount to alloc_domheap_pages() will allocate, as
expected, a page that is assigned to the specified domain but is not
accounted for in tot_pages. Unfortunately there is no logic for tracking
such allocations and avoiding any adjustment to tot_pages when the page
is freed.

The only caller of alloc_domheap_pages() that passes MEMF_no_refcount is
hvm_alloc_ioreq_mfn() so this patch removes use of the flag from that
call-site to avoid the possibility of a domain using an ioreq server as
a means to adjust its tot_pages and hence allocate more memory than it
should be able to.

However, the reason for using the flag in the first place was to avoid
the allocation failing if the emulator domain is already at its maximum
memory limit. Hence this patch switches to allocating memory from the
target domain instead of the emulator domain. There is already an extra
memory allowance of 2MB (LIBXL_HVM_EXTRA_MEMORY) applied to HVM guests,
which is sufficient to cover the pages required by the supported
configuration of a single IOREQ server for QEMU. (Stub-domains do not,
so far, use resource mapping). It is also the case that QEMU will have
mapped the IOREQ server pages before the guest boots, hence it is not
possible for the guest to inflate its balloon to consume these pages.

Reported-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c | 12 ++----------
 xen/arch/x86/mm.c        |  6 ------
 2 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index bdc2687014..fd10ee6146 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -342,20 +342,12 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
         return 0;
     }
 
-    /*
-     * Allocated IOREQ server pages are assigned to the emulating
-     * domain, not the target domain. This is safe because the emulating
-     * domain cannot be destroyed until the ioreq server is destroyed.
-     * Also we must use MEMF_no_refcount otherwise page allocation
-     * could fail if the emulating domain has already reached its
-     * maximum allocation.
-     */
-    page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
+    page = alloc_domheap_page(s->target, 0);
 
     if ( !page )
         return -ENOMEM;
 
-    if ( !get_page_and_type(page, s->emulator, PGT_writable_page) )
+    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
     {
         /*
          * The domain can't possibly know about this page yet, so failure
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7d4871b791..24b215d785 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4396,12 +4396,6 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
 
             mfn_list[i] = mfn_x(mfn);
         }
-
-        /*
-         * The frames will have been assigned to the domain that created
-         * the ioreq server.
-         */
-        *flags |= XENMEM_rsrc_acq_caller_owned;
         break;
     }
 
-- 
2.19.1
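
The accounting problem closed by this patch can be seen in miniature in
the following standalone C sketch.  It is a toy model only, not Xen
code: toy_domain, toy_alloc_page() and toy_free_page() are inventions
of this sketch.  It shows why an allocation that bypasses tot_pages
must not be released through a path that adjusts tot_pages: the owner's
recorded usage drifts below its real usage and the max_pages limit is
no longer enforced.

#include <stdbool.h>
#include <stdio.h>

struct toy_domain {
    unsigned int tot_pages;      /* pages the accounting believes are owned */
    unsigned int max_pages;      /* allocation limit                        */
};

/* Allocate one page; with no_refcount the accounting is skipped entirely. */
static bool toy_alloc_page(struct toy_domain *d, bool no_refcount)
{
    if ( no_refcount )
        return true;             /* page handed out, tot_pages untouched */

    if ( d->tot_pages >= d->max_pages )
        return false;            /* limit enforced */

    d->tot_pages++;
    return true;
}

/* Free one page; the flaw: nothing records that an allocation was
 * unaccounted, so tot_pages is decremented unconditionally. */
static void toy_free_page(struct toy_domain *d)
{
    d->tot_pages--;
}

int main(void)
{
    struct toy_domain d = { .tot_pages = 4, .max_pages = 4 };  /* at limit */
    bool extra;

    toy_alloc_page(&d, true);    /* unaccounted page: tot_pages stays 4 */
    toy_free_page(&d);           /* ...but the free drops it to 3       */

    extra = toy_alloc_page(&d, false);      /* should fail, now passes */

    printf("tot_pages = %u, extra allocation %s\n",
           d.tot_pages, extra ? "succeeds" : "fails");
    return 0;
}

Allocating the IOREQ pages from the target domain with normal
accounting, as the patch does, keeps tot_pages consistent; the 2MB
LIBXL_HVM_EXTRA_MEMORY allowance mentioned above absorbs the pages.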


["xsa276/0001-x86-hvm-ioreq-fix-page-referencing.patch" (application/octet-stream)]

From 51eff03eceb18db10f19a88f92fc367b34d2a13e Mon Sep 17 00:00:00 2001
From: Jan Beulich <JBeulich@suse.com>
Date: Thu, 11 Oct 2018 04:00:26 -0600
Subject: [PATCH 1/2] x86/hvm/ioreq: fix page referencing

The code does not take a page reference in hvm_alloc_ioreq_mfn(), only a
type reference. This can lead to a situation where a malicious domain with
XSM_DM_PRIV can engineer a sequence as follows:

- create IOREQ server: no pages as yet.
- acquire resource: page allocated, total 0.
- decrease reservation: -1 ref, total -1.

This will cause Xen to hit a BUG_ON() in free_domheap_pages().

This patch fixes the issue by changing the call to get_page_type() in
hvm_alloc_ioreq_mfn() to a call to get_page_and_type(). This change
in turn requires an extra put_page() in hvm_free_ioreq_mfn() in the case
that _PGC_allocated is still set (i.e. a decrease reservation has not
occurred) to avoid the page being leaked.

This is part of XSA-276.

Reported-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/ioreq.c | 46 +++++++++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index e2e755a8a1..da36ef727e 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -356,6 +356,7 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page;
 
     if ( iorp->page )
     {
@@ -378,27 +379,33 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
      * could fail if the emulating domain has already reached its
      * maximum allocation.
      */
-    iorp->page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
+    page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
 
-    if ( !iorp->page )
+    if ( !page )
         return -ENOMEM;
 
-    if ( !get_page_type(iorp->page, PGT_writable_page) )
-        goto fail1;
+    if ( !get_page_and_type(page, s->emulator, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(s->emulator);
+        return -ENODATA;
+    }
 
-    iorp->va = __map_domain_page_global(iorp->page);
+    iorp->va = __map_domain_page_global(page);
     if ( !iorp->va )
-        goto fail2;
+        goto fail;
 
+    iorp->page = page;
     clear_page(iorp->va);
     return 0;
 
- fail2:
-    put_page_type(iorp->page);
-
- fail1:
-    put_page(iorp->page);
-    iorp->page = NULL;
+ fail:
+    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+        put_page(page);
+    put_page_and_type(page);
 
     return -ENOMEM;
 }
@@ -406,15 +413,24 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 {
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page = iorp->page;
 
-    if ( !iorp->page )
+    if ( !page )
         return;
 
+    iorp->page = NULL;
+
     unmap_domain_page_global(iorp->va);
     iorp->va = NULL;
 
-    put_page_and_type(iorp->page);
-    iorp->page = NULL;
+    /*
+     * Check whether we need to clear the allocation reference before
+     * dropping the explicit references taken by get_page_and_type().
+     */
+    if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
+        put_page(page);
+
+    put_page_and_type(page);
 }
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-- 
2.19.1


["xsa276/0002-x86-hvm-ioreq-use-ref-counted-target-assigned-shared.patch" (application/octet-stream)]

From f275a1f8c849c926de39cd66ca8c971b9f436cff Mon Sep 17 00:00:00 2001
From: Paul Durrant <Paul.Durrant@citrix.com>
Date: Thu, 1 Nov 2018 17:30:20 +0000
Subject: [PATCH 2/2] x86/hvm/ioreq: use ref-counted target-assigned shared
 pages

Passing MEMF_no_refcount to alloc_domheap_pages() will allocate, as
expected, a page that is assigned to the specified domain but is not
accounted for in tot_pages. Unfortunately there is no logic for tracking
such allocations and avoiding any adjustment to tot_pages when the page
is freed.

The only caller of alloc_domheap_pages() that passes MEMF_no_refcount is
hvm_alloc_ioreq_mfn() so this patch removes use of the flag from that
call-site to avoid the possibility of a domain using an ioreq server as
a means to adjust its tot_pages and hence allocate more memory than it
should be able to.

However, the reason for using the flag in the first place was to avoid
the allocation failing if the emulator domain is already at its maximum
memory limit. Hence this patch switches to allocating memory from the
target domain instead of the emulator domain. There is already an extra
memory allowance of 2MB (LIBXL_HVM_EXTRA_MEMORY) applied to HVM guests,
which is sufficient to cover the pages required by the supported
configuration of a single IOREQ server for QEMU. (Stub-domains do not,
so far, use resource mapping). It is also the case that QEMU will have
mapped the IOREQ server pages before the guest boots, hence it is not
possible for the guest to inflate its balloon to consume these pages.

Reported-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
 xen/arch/x86/hvm/ioreq.c | 12 ++----------
 xen/arch/x86/mm.c        |  6 ------
 2 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index da36ef727e..a56d634f31 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -371,20 +371,12 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
         return 0;
     }
 
-    /*
-     * Allocated IOREQ server pages are assigned to the emulating
-     * domain, not the target domain. This is safe because the emulating
-     * domain cannot be destroyed until the ioreq server is destroyed.
-     * Also we must use MEMF_no_refcount otherwise page allocation
-     * could fail if the emulating domain has already reached its
-     * maximum allocation.
-     */
-    page = alloc_domheap_page(s->emulator, MEMF_no_refcount);
+    page = alloc_domheap_page(s->target, 0);
 
     if ( !page )
         return -ENOMEM;
 
-    if ( !get_page_and_type(page, s->emulator, PGT_writable_page) )
+    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
     {
         /*
          * The domain can't possibly know about this page yet, so failure
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 703f3301a5..6580374554 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4467,12 +4467,6 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
 
             mfn_list[i] = mfn_x(mfn);
         }
-
-        /*
-         * The frames will have been assigned to the domain that created
-         * the ioreq server.
-         */
-        *flags |= XENMEM_rsrc_acq_caller_owned;
         break;
     }
 #endif
-- 
2.19.1


