
List:       xen-announce
Subject:    [Xen-announce] Xen Security Advisory 98 (CVE-2014-3969) - insufficient permissions checks accessing guest memory on ARM
From:       Xen.org security team <security@xen.org>
Date:       2015-03-13 15:59:30
Message-ID: E1YWRzm-0008Az-7I@xenbits.xen.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

            Xen Security Advisory CVE-2014-3969 / XSA-98
                              version 5

       insufficient permissions checks accessing guest memory on ARM

UPDATES IN VERSION 5
====================

The issue described in the version 4 update also affects Xen 4.5, which
had not been released at the time of the original advisory.  The extra
patch supplied with version 4 of this advisory applies to Xen 4.5.x (as
well as to 4.4.x and xen-unstable).

Added credits for the updated issue.

UPDATES IN VERSION 4
====================

Supply an additional patch for arm64.  The original patches had the
permissions check backwards, meaning that a guest could read a
write-only mapping and vice versa, rendering the original fix
ineffective and, in particular, not closing down the ability for a
guest to write to a read-only page via the hypervisor.

This issue was discussed on a public IRC channel and therefore it has
been agreed with the discoverer that it should not be subject to a new
embargo.

32-bit ARM systems are not affected by this mistake; the original fix
remains correct for 32-bit.
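
With the update applied, the arm64 gva_to_ma_par helper selects the AT
(address translation) instruction variant matching the requested access
type.  A condensed copy of the fixed helper (from xsa98-update.patch
below), with added commentary:

    /* Ask the MMU to translate a guest VA, checking the permission
     * appropriate to the access the hypervisor is about to perform. */
    static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
    {
        uint64_t par, tmp = READ_SYSREG64(PAR_EL1); /* save PAR_EL1 */

        if ( (flags & GV2M_WRITE) == GV2M_WRITE )
            asm volatile ("at s12e1w, %0;" : : "r" (va)); /* write check */
        else
            asm volatile ("at s12e1r, %0;" : : "r" (va)); /* read check */
        isb();                        /* ensure the result is available */
        par = READ_SYSREG64(PAR_EL1); /* translation result or fault */
        WRITE_SYSREG64(tmp, PAR_EL1); /* restore the saved value */
        return par;
    }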

ISSUE DESCRIPTION
=================

When accessing guest memory, Xen does not correctly perform permissions
checks on the (possibly guest-provided) virtual address: it checks only
that the mapping is readable by the guest, even when writing on behalf
of the guest.  This allows a guest to write to memory which it should
only be able to read.

A guest running on a vulnerable system is able to write to memory
which should be read-only.  This includes supposedly read-only foreign
mappings established using the grant table mechanism.  Such read-only
mappings are commonly used as part of the paravirtualised I/O drivers
(such as guest disk write and network transmit).

In order to exploit this vulnerability the guest must have a mapping
of the memory; the issue does not allow access to arbitrary addresses.

In the event that a guest executes code from a page which has been
shared read-only with another guest, it would be possible to mount a
takeover attack on that guest.
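
In code terms (condensed from the attached patches), the copy-to-guest
paths in xen/arch/arm/guestcopy.c translated the guest virtual address
without stating the intended access, so the MMU walk verified only
readability; the fix passes the access type down to the translation:

    /* Before: checks only that the guest can read 'to', even though
     * Xen is about to write there on the guest's behalf. */
    if ( gvirt_to_maddr((vaddr_t) to, &g) )
        return len;

    /* After: the intended access is checked as part of the MMU walk. */
    if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
        return len;

The second patch in each pair additionally takes a reference on the
target page (via a new get_page_from_gva helper) for the duration of
the copy, so the page cannot be reassigned under Xen's feet and must be
RAM owned by the domain.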

IMPACT
======

A domain which is deliberately exchanging data with another,
malicious, domain may be vulnerable to privilege escalation.  Whether
this can be exploited depends on the precise behaviour of the victim
domain.

In a typical configuration this means that, depending on the behaviour
of the toolstack or device driver domain, a malicious guest
administrator might be able to escalate their privilege to that of the
whole host.

VULNERABLE SYSTEMS
==================

Both 32- and 64-bit ARM systems are vulnerable from Xen 4.4 onward.

MITIGATION
==========

None.

CREDITS
=======

This issue was discovered by Julien Grall.

The additional issue reported in update 4 was discovered by Tamas K
Lengyel.

RESOLUTION
==========

Applying the appropriate pair of attached patches, together with the
additional update patch, resolves this issue.

xsa98-unstable-{01,02}.patch        xen-unstable
xsa98-4.4-{01,02}.patch             Xen 4.4.x
xsa98-update.patch                  Additional update for unstable, 4.5.x and 4.4.x

$ sha256sum xsa98*.patch
b8535aad5ae969675d59781a81ce0b24491f1abc01aaf36c3620fd7fb6cc84eb  xsa98-unstable-01.patch
f5e8a93525a8905653da6377097f77681ff8121b973063ff6081e27547ceaa67  xsa98-unstable-02.patch
6f63bc2e0a0a39bbd9137513a5d130ae2c78d1fd2ebf9172bf49456f73f0a67b  xsa98-4.4-01.patch
b338472ecce3c31a55d1a936eebbd4e46cb3ad989b91a64d4b8c5d3ca80d875d  xsa98-4.4-02.patch
8bb4a23174c0c9b1a23a41d4669900877483fd526d331d0c377c32845feb2eb8  xsa98-update.patch
$
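
After verifying the checksums, the patches are expected to apply with
patch -p1 from the top level of the corresponding Xen source tree; for
example, for a 4.4.x tree (the directory name here is illustrative):

$ cd xen-4.4.x        # top of the Xen 4.4.x source tree (path assumed)
$ patch -p1 < xsa98-4.4-01.patch
$ patch -p1 < xsa98-4.4-02.patch
$ patch -p1 < xsa98-update.patch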
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJVAwlFAAoJEIP+FMlX6CvZBGMH/1qZuF20x5mfSn9TPDXJZrU4
dc6Jab7VDISnfy2CkLPsyLeaOolWm34HgP0a+vggInuxtKmo7TIvoJBUVi6ndsJI
mqSWsoUvOl6PthAB1/4WNH2e/wySxBLFEwQWnUZRXxW32LrQzb+rVcJvvHjZiYKR
p7NYKYklCZDKhmX5DdANjO1RDg561UnenEMsgUbOdyjsk2s8o+/ni927ZUzhnxQe
NY9LqpgOyjBLb+5tStq2v03A+ax7mgzRMQLYlWsuY+Vt08HQsPuEPxN9JNkpmEwb
A46OICRNMEwzKmt6ZKpYJSibiffHAMm5aeRd2SalpUjlIAg67H/LHf0vV/4bJ9o=
=igf6
-----END PGP SIGNATURE-----

["xsa98-unstable-01.patch" (application/octet-stream)]

xen: arm: check permissions when copying to/from guest virtual addresses

In particular we need to make sure the guest has write permissions to buffers
which it passes as output buffers for hypercalls, otherwise the guest can
overwrite memory which it shouldn't be able to write (like r/o grant table
mappings).

This is XSA-98.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c424793..d079982 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1005,7 +1005,7 @@ static void initrd_load(struct kernel_info *kinfo)
         s = offs & ~PAGE_MASK;
         l = min(PAGE_SIZE - s, len);
 
-        rc = gvirt_to_maddr(load_addr + offs, &ma);
+        rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
         if ( rc )
         {
             panic("Unable to translate guest address");
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index cea5f97..d1fddec 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g) )
+        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
             return len;
 
         p = map_domain_page(g>>PAGE_SHIFT);
@@ -62,7 +62,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g) )
+        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
             return len;
 
         p = map_domain_page(g>>PAGE_SHIFT);
@@ -92,7 +92,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
         void *p;
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
 
-        if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g) )
+        if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g, GV2M_READ) )
             return len;
 
         p = map_domain_page(g>>PAGE_SHIFT);
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index c82906f..69182ec 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -172,7 +172,7 @@ static void kernel_zimage_load(struct kernel_info *info)
         s = offs & ~PAGE_MASK;
         l = min(PAGE_SIZE - s, len);
 
-        rc = gvirt_to_maddr(load_addr + offs, &ma);
+        rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
         if ( rc )
         {
             panic("Unable to map translate guest address");
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 03a3da6..df86ffe 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -837,7 +837,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
 
     printk("Guest stack trace from sp=%"PRIvaddr":\n  ", sp);
 
-    if ( gvirt_to_maddr(sp, &stack_phys) )
+    if ( gvirt_to_maddr(sp, &stack_phys, GV2M_READ) )
     {
         printk("Failed to convert stack to physical address\n");
         return;
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index 4abb281..9740672 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -87,11 +87,14 @@ static inline uint64_t __va_to_par(vaddr_t va)
 }
 
 /* Ask the MMU to translate a Guest VA for us */
-static inline uint64_t gva_to_ma_par(vaddr_t va)
+static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
 {
     uint64_t par, tmp;
     tmp = READ_CP64(PAR);
-    WRITE_CP32(va, ATS12NSOPR);
+    if ( (flags & GV2M_WRITE) == GV2M_WRITE )
+        WRITE_CP32(va, ATS12NSOPW);
+    else
+        WRITE_CP32(va, ATS12NSOPR);
     isb(); /* Ensure result is available. */
     par = READ_CP64(PAR);
     WRITE_CP64(tmp, PAR);
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 713baf6..bb10164 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -81,11 +81,14 @@ static inline uint64_t __va_to_par(vaddr_t va)
 }
 
 /* Ask the MMU to translate a Guest VA for us */
-static inline uint64_t gva_to_ma_par(vaddr_t va)
+static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
 {
     uint64_t par, tmp = READ_SYSREG64(PAR_EL1);
 
-    asm volatile ("at s12e1r, %0;" : : "r" (va));
+    if ( (flags & GV2M_WRITE) == GV2M_WRITE )
+        asm volatile ("at s12e1r, %0;" : : "r" (va));
+    else
+        asm volatile ("at s12e1w, %0;" : : "r" (va));
     isb();
     par = READ_SYSREG64(PAR_EL1);
     WRITE_SYSREG64(tmp, PAR_EL1);
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b8d4e7d..d0e5cb4 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -233,9 +233,9 @@ static inline void *maddr_to_virt(paddr_t ma)
 }
 #endif
 
-static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa)
+static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa, unsigned int flags)
 {
-    uint64_t par = gva_to_ma_par(va);
+    uint64_t par = gva_to_ma_par(va, flags);
     if ( par & PAR_F )
         return -EFAULT;
     *pa = (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK);
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index c38e9c9..e723e5a 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -73,6 +73,10 @@
 #define MATTR_DEV     0x1
 #define MATTR_MEM     0xf
 
+/* Flags for gvirt_to_maddr */
+#define GV2M_READ  (0u<<0)
+#define GV2M_WRITE (1u<<0)
+
 #ifndef __ASSEMBLY__
 
 #include <xen/types.h>

["xsa98-unstable-02.patch" (application/octet-stream)]

xen: arm: ensure we hold a reference to guest pages while we copy to/from them

This at once:
 - prevents the page from being reassigned under our feet
 - ensures that the domain owns the page, which stops a domain from giving a
   grant mapping, MMIO region, other non-RAM as a hypercall input/output.

We need to hold the p2m lock while doing the lookup until we have the
reference.

This also requires that during domain 0 building current is set to an actual
dom0 vcpu, so take care of this at the same time as the p2m is temporarily
loaded.

Lastly when dumping the guest stack we need to make sure that the guest hasn't
pointed its sp off into the weeds and/or misaligned it, which could lead to
hypervisor traps. Solve this by using the new function and checking alignment
first.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d079982..4dd2d84 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1024,6 +1024,7 @@ static void initrd_load(struct kernel_info *kinfo)
 int construct_dom0(struct domain *d)
 {
     struct kernel_info kinfo = {};
+    struct vcpu *saved_current;
     int rc, i, cpu;
 
     struct vcpu *v = d->vcpu[0];
@@ -1060,8 +1061,13 @@ int construct_dom0(struct domain *d)
     if ( rc < 0 )
         return rc;
 
-    /* The following loads use the domain's p2m */
+    /*
+     * The following loads use the domain's p2m and require current to
+     * be a vcpu of the domain, temporarily switch
+     */
+    saved_current = current;
     p2m_restore_state(v);
+    set_current(v);
 
     /*
      * kernel_load will determine the placement of the kernel as well
@@ -1072,6 +1078,10 @@ int construct_dom0(struct domain *d)
     initrd_load(&kinfo);
     dtb_load(&kinfo);
 
+    /* Now that we are done restore the original p2m and current. */
+    set_current(saved_current);
+    p2m_restore_state(saved_current);
+
     discard_initial_modules();
 
     v->is_initialised = 1;
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index d1fddec..0173597 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,6 +1,8 @@
 #include <xen/config.h>
 #include <xen/lib.h>
 #include <xen/domain_page.h>
+#include <xen/sched.h>
+#include <asm/current.h>
 
 #include <asm/mm.h>
 #include <asm/guest_access.h>
@@ -13,20 +15,22 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
 
     while ( len )
     {
-        paddr_t g;
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
+        struct page_info *page;
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
+        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        if ( page == NULL )
             return len;
 
-        p = map_domain_page(g>>PAGE_SHIFT);
+        p = __map_domain_page(page);
         p += offset;
         memcpy(p, from, size);
         if ( flush_dcache )
             clean_xen_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
+        put_page(page);
         len -= size;
         from += size;
         to += size;
@@ -58,18 +62,20 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 
     while ( len )
     {
-        paddr_t g;
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
+        struct page_info *page;
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
+        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        if ( page == NULL )
             return len;
 
-        p = map_domain_page(g>>PAGE_SHIFT);
+        p = __map_domain_page(page);
         p += offset;
         memset(p, 0x00, size);
 
         unmap_domain_page(p - offset);
+        put_page(page);
         len -= size;
         to += size;
         /*
@@ -88,19 +94,21 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
 
     while ( len )
     {
-        paddr_t g;
         void *p;
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
+        struct page_info *page;
 
-        if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g, GV2M_READ) )
+        page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
+        if ( page == NULL )
             return len;
 
-        p = map_domain_page(g>>PAGE_SHIFT);
+        p = __map_domain_page(page);
         p += ((vaddr_t)from & (~PAGE_MASK));
 
         memcpy(to, p, size);
 
         unmap_domain_page(p);
+        put_page(page);
         len -= size;
         from += size;
         to += size;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index b85143b..5fc5ca6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -701,6 +701,34 @@ unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
     return p >> PAGE_SHIFT;
 }
 
+struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+                                    unsigned long flags)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+    struct page_info *page = NULL;
+    paddr_t maddr;
+
+    ASSERT(d == current->domain);
+
+    spin_lock(&p2m->lock);
+
+    if ( gvirt_to_maddr(va, &maddr, flags) )
+        goto err;
+
+    if ( !mfn_valid(maddr >> PAGE_SHIFT) )
+        goto err;
+
+    page = mfn_to_page(maddr >> PAGE_SHIFT);
+    ASSERT(page);
+
+    if ( unlikely(!get_page(page, d)) )
+        page = NULL;
+
+err:
+    spin_unlock(&p2m->lock);
+    return page;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index df86ffe..d89b75f 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -777,7 +777,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
 {
     int i;
     vaddr_t sp;
-    paddr_t stack_phys;
+    struct page_info *page;
     void *mapped;
     unsigned long *stack, addr;
 
@@ -837,13 +837,20 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
 
     printk("Guest stack trace from sp=%"PRIvaddr":\n  ", sp);
 
-    if ( gvirt_to_maddr(sp, &stack_phys, GV2M_READ) )
+    if ( sp & ( sizeof(long) - 1 ) )
+    {
+        printk("Stack is misaligned\n");
+        return;
+    }
+
+    page = get_page_from_gva(current->domain, sp, GV2M_READ);
+    if ( page == NULL )
     {
         printk("Failed to convert stack to physical address\n");
         return;
     }
 
-    mapped = map_domain_page(stack_phys >> PAGE_SHIFT);
+    mapped = __map_domain_page(page);
 
     stack = mapped + (sp & ~PAGE_MASK);
 
@@ -861,7 +868,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
         printk("Stack empty.");
     printk("\n");
     unmap_domain_page(mapped);
-
+    put_page(page);
 }
 
 #define STACK_BEFORE_EXCEPTION(regs) ((register_t*)(regs)->sp)
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index d0e5cb4..8bf179d 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -273,6 +273,9 @@ struct domain *page_get_owner_and_reference(struct page_info *page);
 void put_page(struct page_info *page);
 int  get_page(struct page_info *page, struct domain *domain);
 
+struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+                                    unsigned long flags);
+
 /*
  * The MPT (machine->physical mapping table) is an array of word-sized
  * values, indexed on machine frame number. It is expected that guest OSes
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index e723e5a..113be5a 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -73,7 +73,7 @@
 #define MATTR_DEV     0x1
 #define MATTR_MEM     0xf
 
-/* Flags for gvirt_to_maddr */
+/* Flags for get_page_from_gva, gvirt_to_maddr etc */
 #define GV2M_READ  (0u<<0)
 #define GV2M_WRITE (1u<<0)
 

["xsa98-4.4-01.patch" (application/octet-stream)]

xen: arm: check permissions when copying to/from guest virtual addresses

In particular we need to make sure the guest has write permissions to buffers
which it passes as output buffers for hypercalls, otherwise the guest can
overwrite memory which it shouldn't be able to write (like r/o grant table
mappings).

This is XSA-98.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 5ca2f15..3da6b83 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -967,7 +967,7 @@ static void initrd_load(struct kernel_info *kinfo)
         s = offs & ~PAGE_MASK;
         l = min(PAGE_SIZE - s, len);
 
-        rc = gvirt_to_maddr(load_addr + offs, &ma);
+        rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
         if ( rc )
         {
             panic("Unable to translate guest address");
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index cea5f97..d1fddec 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -17,7 +17,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g) )
+        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
             return len;
 
         p = map_domain_page(g>>PAGE_SHIFT);
@@ -62,7 +62,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g) )
+        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
             return len;
 
         p = map_domain_page(g>>PAGE_SHIFT);
@@ -92,7 +92,7 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
         void *p;
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
 
-        if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g) )
+        if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g, GV2M_READ) )
             return len;
 
         p = map_domain_page(g>>PAGE_SHIFT);
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 1e3107d..69c7d43 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -141,7 +141,7 @@ static void kernel_zimage_load(struct kernel_info *info)
         s = offs & ~PAGE_MASK;
         l = min(PAGE_SIZE - s, len);
 
-        rc = gvirt_to_maddr(load_addr + offs, &ma);
+        rc = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
         if ( rc )
         {
             panic("Unable to map translate guest address");
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 3a34d33..2e7451b 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -836,7 +836,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
 
     printk("Guest stack trace from sp=%"PRIvaddr":\n  ", sp);
 
-    if ( gvirt_to_maddr(sp, &stack_phys) )
+    if ( gvirt_to_maddr(sp, &stack_phys, GV2M_READ) )
     {
         printk("Failed to convert stack to physical address\n");
         return;
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index b8221ca..80d5c36 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -90,11 +90,14 @@ static inline uint64_t __va_to_par(vaddr_t va)
 }
 
 /* Ask the MMU to translate a Guest VA for us */
-static inline uint64_t gva_to_ma_par(vaddr_t va)
+static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
 {
     uint64_t par, tmp;
     tmp = READ_CP64(PAR);
-    WRITE_CP32(va, ATS12NSOPR);
+    if ( (flags & GV2M_WRITE) == GV2M_WRITE )
+        WRITE_CP32(va, ATS12NSOPW);
+    else
+        WRITE_CP32(va, ATS12NSOPR);
     isb(); /* Ensure result is available. */
     par = READ_CP64(PAR);
     WRITE_CP64(tmp, PAR);
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 3352821..3922d87 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -83,11 +83,14 @@ static inline uint64_t __va_to_par(vaddr_t va)
 }
 
 /* Ask the MMU to translate a Guest VA for us */
-static inline uint64_t gva_to_ma_par(vaddr_t va)
+static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
 {
     uint64_t par, tmp = READ_SYSREG64(PAR_EL1);
 
-    asm volatile ("at s12e1r, %0;" : : "r" (va));
+    if ( (flags & GV2M_WRITE) == GV2M_WRITE )
+        asm volatile ("at s12e1r, %0;" : : "r" (va));
+    else
+        asm volatile ("at s12e1w, %0;" : : "r" (va));
     isb();
     par = READ_SYSREG64(PAR_EL1);
     WRITE_SYSREG64(tmp, PAR_EL1);
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b8d4e7d..d0e5cb4 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -233,9 +233,9 @@ static inline void *maddr_to_virt(paddr_t ma)
 }
 #endif
 
-static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa)
+static inline int gvirt_to_maddr(vaddr_t va, paddr_t *pa, unsigned int flags)
 {
-    uint64_t par = gva_to_ma_par(va);
+    uint64_t par = gva_to_ma_par(va, flags);
     if ( par & PAR_F )
         return -EFAULT;
     *pa = (par & PADDR_MASK & PAGE_MASK) | ((unsigned long) va & ~PAGE_MASK);
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index e00be9e..84562ec 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -73,6 +73,10 @@
 #define MATTR_DEV     0x1
 #define MATTR_MEM     0xf
 
+/* Flags for gvirt_to_maddr */
+#define GV2M_READ  (0u<<0)
+#define GV2M_WRITE (1u<<0)
+
 #ifndef __ASSEMBLY__
 
 #include <xen/types.h>

["xsa98-4.4-02.patch" (application/octet-stream)]

xen: arm: ensure we hold a reference to guest pages while we copy to/from them

This at once:
 - prevents the page from being reassigned under our feet
 - ensures that the domain owns the page, which stops a domain from giving a
   grant mapping, MMIO region, other non-RAM as a hypercall input/output.

We need to hold the p2m lock while doing the lookup until we have the
reference.

This also requires that during domain 0 building current is set to an actual
dom0 vcpu, so take care of this at the same time as the p2m is temporarily
loaded.

Lastly when dumping the guest stack we need to make sure that the guest hasn't
pointed its sp off into the weeds and/or misaligned it, which could lead to
hypervisor traps. Solve this by using the new function and checking alignment
first.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
[ ijc -- backported to 4.4, using p2m_load_VTTBR ]

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3da6b83..c1497f8 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -986,6 +986,7 @@ static void initrd_load(struct kernel_info *kinfo)
 int construct_dom0(struct domain *d)
 {
     struct kernel_info kinfo = {};
+    struct vcpu *saved_current;
     int rc, i, cpu;
 
     struct vcpu *v = d->vcpu[0];
@@ -1021,7 +1022,9 @@ int construct_dom0(struct domain *d)
         return rc;
 
     /* The following loads use the domain's p2m */
+    saved_current = current;
     p2m_load_VTTBR(d);
+    set_current(v);
 #ifdef CONFIG_ARM_64
     d->arch.type = kinfo.type;
     if ( is_pv32_domain(d) )
@@ -1039,6 +1042,10 @@ int construct_dom0(struct domain *d)
     initrd_load(&kinfo);
     dtb_load(&kinfo);
 
+    /* Now that we are done restore the original p2m and current. */
+    set_current(saved_current);
+    p2m_load_VTTBR(current->domain);
+
     discard_initial_modules();
 
     v->is_initialised = 1;
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index d1fddec..0173597 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,6 +1,8 @@
 #include <xen/config.h>
 #include <xen/lib.h>
 #include <xen/domain_page.h>
+#include <xen/sched.h>
+#include <asm/current.h>
 
 #include <asm/mm.h>
 #include <asm/guest_access.h>
@@ -13,20 +15,22 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
 
     while ( len )
     {
-        paddr_t g;
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
+        struct page_info *page;
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
+        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        if ( page == NULL )
             return len;
 
-        p = map_domain_page(g>>PAGE_SHIFT);
+        p = __map_domain_page(page);
         p += offset;
         memcpy(p, from, size);
         if ( flush_dcache )
             clean_xen_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
+        put_page(page);
         len -= size;
         from += size;
         to += size;
@@ -58,18 +62,20 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 
     while ( len )
     {
-        paddr_t g;
         void *p;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
+        struct page_info *page;
 
-        if ( gvirt_to_maddr((vaddr_t) to, &g, GV2M_WRITE) )
+        page = get_page_from_gva(current->domain, (vaddr_t) to, GV2M_WRITE);
+        if ( page == NULL )
             return len;
 
-        p = map_domain_page(g>>PAGE_SHIFT);
+        p = __map_domain_page(page);
         p += offset;
         memset(p, 0x00, size);
 
         unmap_domain_page(p - offset);
+        put_page(page);
         len -= size;
         to += size;
         /*
@@ -88,19 +94,21 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
 
     while ( len )
     {
-        paddr_t g;
         void *p;
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
+        struct page_info *page;
 
-        if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g, GV2M_READ) )
+        page = get_page_from_gva(current->domain, (vaddr_t) from, GV2M_READ);
+        if ( page == NULL )
             return len;
 
-        p = map_domain_page(g>>PAGE_SHIFT);
+        p = __map_domain_page(page);
         p += ((vaddr_t)from & (~PAGE_MASK));
 
         memcpy(to, p, size);
 
         unmap_domain_page(p);
+        put_page(page);
         len -= size;
         from += size;
         to += size;
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d00c882..7fd5920 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -655,6 +655,34 @@ unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
     return p >> PAGE_SHIFT;
 }
 
+struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+                                    unsigned long flags)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+    struct page_info *page = NULL;
+    paddr_t maddr;
+
+    ASSERT(d == current->domain);
+
+    spin_lock(&p2m->lock);
+
+    if ( gvirt_to_maddr(va, &maddr, flags) )
+        goto err;
+
+    if ( !mfn_valid(maddr >> PAGE_SHIFT) )
+        goto err;
+
+    page = mfn_to_page(maddr >> PAGE_SHIFT);
+    ASSERT(page);
+
+    if ( unlikely(!get_page(page, d)) )
+        page = NULL;
+
+err:
+    spin_unlock(&p2m->lock);
+    return page;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 2e7451b..00071a3 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -776,7 +776,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
 {
     int i;
     vaddr_t sp;
-    paddr_t stack_phys;
+    struct page_info *page;
     void *mapped;
     unsigned long *stack, addr;
 
@@ -836,13 +836,20 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
 
     printk("Guest stack trace from sp=%"PRIvaddr":\n  ", sp);
 
-    if ( gvirt_to_maddr(sp, &stack_phys, GV2M_READ) )
+    if ( sp & ( sizeof(long) - 1 ) )
+    {
+        printk("Stack is misaligned\n");
+        return;
+    }
+
+    page = get_page_from_gva(current->domain, sp, GV2M_READ);
+    if ( page == NULL )
     {
         printk("Failed to convert stack to physical address\n");
         return;
     }
 
-    mapped = map_domain_page(stack_phys >> PAGE_SHIFT);
+    mapped = __map_domain_page(page);
 
     stack = mapped + (sp & ~PAGE_MASK);
 
@@ -860,7 +867,7 @@ static void show_guest_stack(struct vcpu *v, struct cpu_user_regs *regs)
         printk("Stack empty.");
     printk("\n");
     unmap_domain_page(mapped);
-
+    put_page(page);
 }
 
 #define STACK_BEFORE_EXCEPTION(regs) ((register_t*)(regs)->sp)
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index d0e5cb4..8bf179d 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -273,6 +273,9 @@ struct domain *page_get_owner_and_reference(struct page_info *page);
 void put_page(struct page_info *page);
 int  get_page(struct page_info *page, struct domain *domain);
 
+struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
+                                    unsigned long flags);
+
 /*
  * The MPT (machine->physical mapping table) is an array of word-sized
  * values, indexed on machine frame number. It is expected that guest OSes
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 84562ec..c118309 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -73,7 +73,7 @@
 #define MATTR_DEV     0x1
 #define MATTR_MEM     0xf
 
-/* Flags for gvirt_to_maddr */
+/* Flags for get_page_from_gva, gvirt_to_maddr etc */
 #define GV2M_READ  (0u<<0)
 #define GV2M_WRITE (1u<<0)
 

["xsa98-update.patch" (application/octet-stream)]

xen: arm: correct arm64 version of gva_to_ma_par

The implementation was backwards and checked that the guest could
read when asked about write and vice versa.

This is an update to the fix for XSA-98.

Reported-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index bb10164..386e434 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -86,9 +86,9 @@ static inline uint64_t gva_to_ma_par(vaddr_t va, unsigned int flags)
     uint64_t par, tmp = READ_SYSREG64(PAR_EL1);
 
     if ( (flags & GV2M_WRITE) == GV2M_WRITE )
-        asm volatile ("at s12e1r, %0;" : : "r" (va));
-    else
         asm volatile ("at s12e1w, %0;" : : "r" (va));
+    else
+        asm volatile ("at s12e1r, %0;" : : "r" (va));
     isb();
     par = READ_SYSREG64(PAR_EL1);
     WRITE_SYSREG64(tmp, PAR_EL1);


