
List:       oss-security
Subject:    [oss-security] Xen Security Advisory 305 v1 (CVE-2019-11135) - TSX Asynchronous Abort speculative side channel
From:       Xen.org security team <security@xen.org>
Date:       2019-11-12 18:01:13
Message-ID: E1iUaTZ-0001y0-Fj@xenbits.xenproject.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2019-11135 / XSA-305

            TSX Asynchronous Abort speculative side channel

ISSUE DESCRIPTION
=================

This is very closely related to the Microarchitectural Data Sampling
vulnerabilities from May 2019.

Please see https://xenbits.xen.org/xsa/advisory-297.html for details
about MDS.

A new way to sample data from microarchitectural structures has been
identified.  A TSX Asynchronous Abort is the state which occurs between a
transaction definitely aborting (usually for reasons outside of the
pipeline's control, e.g. receiving an interrupt) and architectural state
being rolled back to the start of the transaction.

During this period, speculative execution may be able to infer the value
of data in the microarchitectural structures.
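
For illustration only (this example is not part of the advisory), the window
in question is the abort path of an RTM transaction.  A minimal sketch using
the compiler intrinsics from <immintrin.h> (built with -mrtm):

  #include <immintrin.h>

  void transaction_sketch(void)
  {
      unsigned int status = _xbegin();

      if ( status == _XBEGIN_STARTED )
      {
          /*
           * Transactional region.  An interrupt arriving here forces an
           * asynchronous abort: the transaction is now certain to fail,
           * but architectural state has not yet been rolled back.
           */
          _xend();
      }
      else
      {
          /* Abort path, reached once state is rolled back to _xbegin(). */
      }
  }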

For more details, see:
  https://software.intel.com/security-software-guidance/insights/deep-dive-intel-transactional-synchronization-extensions-intel-tsx-asynchronous-abort


IMPACT
======

An attacker, which could include a malicious untrusted user process on a
trusted guest, or an untrusted guest, can sample the content of
recently-used memory operands and IO Port writes.

This can include data from:

 * A previously executing context (process, or guest, or
   hypervisor/toolstack) at the same privilege level.
 * A higher privilege context (kernel, hypervisor, SMM) which
   interrupted the attacker's execution.

Vulnerable data is that on the same physical core as the attacker.  This
includes, when hyper-threading is enabled, adjacent threads.

An attacker cannot use this vulnerability to target specific data.  An
attack would likely require sampling over a period of time and the
application of statistical methods to reconstruct interesting data.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Only x86 processors are vulnerable.
ARM processors are not believed to be vulnerable.

Only Intel based processors are affected.  Processors from other
manufacturers (e.g. AMD) are not believed to be vulnerable.

Only Intel processors supporting TSX (Transactional Synchronization
eXtensions) are affected.

Systems which have the XSA-297 (MDS) fixes, and do not enumerate
MDS_NO (hardware fixes for MDS), are not vulnerable to TAA (XSA-305).
(Specifically, the XSA-297 changes of using VERW flushing and disabling
HyperThreading will prevent data leakage via both MDS and TAA.)

If the XSA-297 Xen patches for MDS have been applied, Xen will
identify at boot whether the CPU reports MDS_NO, i.e.:

  [root@localhost ~]# xl dmesg | grep MDS_NO
  (XEN)   Hardware features: IBRS/IBPB STIBP L1D_FLUSH SSBD MD_CLEAR IBRS_ALL RDCL_NO SKIP_L1DFL MDS_NO

Support for TSX is reported by Linux (>=3.4) as `hle' and `rtm' in the
cpu flags (`grep -e hle -e rtm /proc/cpuinfo').  (Note that applying
Option A from Resolution, below, will disable TSX and thereby suppress
this report, even if the CPU would be vulnerable with TSX enabled.)

In summary: systems which support TSX and enumerate MDS_NO are
vulnerable to XSA-305 (TAA).
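
As an illustrative check only (not part of this advisory; flag names are as
reported by Linux, and Option A will hide them once applied), the two
conditions can be combined in dom0 as follows:

  # Illustrative: a host is in the affected category if it is TSX-capable
  # and enumerates MDS_NO (see the summary above).
  if grep -qw -e hle -e rtm /proc/cpuinfo && xl dmesg | grep -q MDS_NO; then
      echo "TSX-capable and MDS_NO enumerated: affected by XSA-305 (TAA)"
  fi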

MITIGATION
==========

There is no mitigation available.

RESOLUTION
==========

New microcode is required in all cases.  It may be available via a
firmware update (consult your hardware vendor), or available for
boot-time loading (consult your dom0 OS vendor).
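
For reference (an illustrative dom0 command, not from the advisory; the
required microcode revision differs per CPU model, so consult your vendor's
guidance), the currently loaded microcode revision can be inspected with:

  [root@localhost ~]# grep -m1 microcode /proc/cpuinfo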

There are two approaches:

Option A:

  * Upgrade to the new microcode.
  * Apply the Xen patches (listed below).

  This will disable TSX (by default, but reenabling it would
  reintroduce the vulnerability).  This option is the recommended
  resolution.

Option B:

  * Upgrade to the new microcode.
  * Boot Xen with `smt=0 spec-ctrl=md-clear'.
  * (The patches are not strictly required.)

  This option is recommended only if it is known that retaining the TSX
  feature is important for the workload.

  `smt=0' disables hyper-threading, and will have a significant
  performance impact.  See `DISCUSSION CONCERNING SMT/HYPER-THREADING'
  in XSA-297 for more information about the implications and options.

  Note that the Xen command line argument `spec-ctrl=md-clear' must be
  specified to mitigate XSA-305, even though some readings of XSA-297
  suggest it might be enabled by default when needed.  This is because
  Option B reuses the same mitigation for a new problem.
  `spec-ctrl=md-clear' is the default on CPUs vulnerable to XSA-297;
  however, it is not the default on CPUs vulnerable to XSA-305.
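
As a sketch only (the variable name, file location and update command are
distribution-specific assumptions; adjust for your boot loader), Option B
typically means appending the options to the hypervisor command line, e.g.
with GRUB2:

  # /etc/default/grub (illustrative; some distributions use
  # GRUB_CMDLINE_XEN rather than GRUB_CMDLINE_XEN_DEFAULT)
  GRUB_CMDLINE_XEN_DEFAULT="smt=0 spec-ctrl=md-clear"

  # Then regenerate the configuration and reboot, e.g.:
  #   update-grub                               (Debian/Ubuntu)
  #   grub2-mkconfig -o /boot/grub2/grub.cfg    (Fedora/RHEL/SUSE)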

In each case with the Xen patches applied, the presence of appropriate
microcode can be confirmed by finding TSX_CTRL enumerated:

  [root@localhost ~]# xl dmesg | grep TSX_CTRL
  (XEN)   Hardware features: IBRS/IBPB STIBP L1D_FLUSH SSBD MD_CLEAR IBRS_ALL RDCL_NO SKIP_L1DFL MDS_NO TSX_CTRL

There is no further action (beyond Option A or B above) required for
guest kernel/userspace software, and nothing it could do differently
to protect itself in the absence of those changes.

xsa305/xsa305-*.patch           xen-unstable
xsa305/xsa305-4.12-*.patch      Xen 4.12.x
xsa305/xsa305-4.11-*.patch      Xen 4.11.x
xsa305/xsa305-4.10-*.patch      Xen 4.10.x
xsa305/xsa305-4.9-*.patch       Xen 4.9.x
xsa305/xsa305-4.8-*.patch       Xen 4.8.x

$ sha256sum xsa305*/*
b74bd3954b9c76eee7d9f2c594d5d5c05996f631696b68142f6e5cbe0ceaddf7  xsa305/xsa305-1.patch
67d30c248eefdd8552630c56d55adb9934f575a1fe1f15f7a0fca7d3d099de48  xsa305/xsa305-2.patch
b64837e7a75cad86b0bb52379c781b8ea93094569d1a8f9e044c580cc6654869  xsa305/xsa305-4.8-1.patch
cbb65761ba8d844d8297e50d3f95cb708b656b5a81a03fa808eb05fb7e58dbcd  xsa305/xsa305-4.8-2.patch
607a8fb5006268ca48143729e59d135d6a6d6aac0a77119f44f7ab09a5b600bb  xsa305/xsa305-4.9-1.patch
f26ee247e0346144ed477c731930ba3ce562f586d6d2fb76f2926a1a32ab2807  xsa305/xsa305-4.9-2.patch
91be2c6b9a81e693c9583c0936c78a7eaaf51815d6ae7e0323be383b334ce73c  xsa305/xsa305-4.10-1.patch
b15d2feee4a3b9064a2b5387ee0e218f6b05f8b849f80e18f5bbdffcdaf418bc  xsa305/xsa305-4.10-2.patch
c47fcab07123f551a49c7bf96cad82f7bf9c4bb161b46b84f325e400c6438f3e  xsa305/xsa305-4.11-1.patch
2bb81d261c3dc4f3c825ce9795ab4ac9ea08b0d99537ca61d56876be1e6a5d2a  xsa305/xsa305-4.11-2.patch
c6c7551d1c40340401b3a52a8d4dfa4c24b791764fcc08215d270aabff86474e  xsa305/xsa305-4.12-1.patch
a528aaaed32b632779a17cf2ed648903d7bc48ba213d6c8f7ce2d78f493e097c  xsa305/xsa305-4.12-2.patch
$

NOTE REGARDING LACK OF EMBARGO
==============================

Despite an attempt to organise predisclosure, the discoverers ultimately
did not authorise a predisclosure.
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl3K8aoMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZt8AH/2MugPfnQiX3JYCYypWN0JRqS9vCTo96pvs1WwBM
ohSWjdrgLyb29hKo48QBwV7LzCWAJQmAFKPYVX9CKoRmZOKRJESz9LdQ7zYedeV9
nNDM1HN1PL9dFZ7qyRFh3xuefO9DPQ+oHUdNFRHiJn0ttmu6sv+y8ww0UCBHL6H+
xZl4gCaAM0SNAbaFnJucA7L61NaUSNkcpLmS9r5OmEhAE4wdt+bRaVvqdea4OTc5
y/UvipnaHR2FrDMT6mVhBcnloBCJ99Q1C3uvtErQq6ASKxZ4asNFmpMl9+Vc13bo
JVo4GyT6pVQYxJQdB5TtiVUKWklweCR9ioLtDRMHjuy/b1U=
=G0Eh
-----END PGP SIGNATURE-----


["xsa305/xsa305-1.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/tsx: Introduce tsx= to use MSR_TSX_CTRL when available

To protect against the TSX Async Abort speculative vulnerability, Intel have
released new microcode for affected parts which introduce the MSR_TSX_CTRL
control, which allows TSX to be turned off.  This will be architectural on
future parts.

Introduce tsx= to provide a global on/off for TSX, including its enumeration
via CPUID.  Provide stub virtualisation of this MSR, as it is not exposed to
guests at the moment.

VMs may have booted before microcode is loaded, or before hosts have rebooted,
and they still want to migrate freely.  A VM which booted seeing TSX can
migrate safely to hosts with TSX disabled - TSX will start unconditionally
aborting, but still behave in a manner compatible with the ABI.

The guest-visible behaviour is equivalent to late loading the microcode and
setting the RTM_DISABLE bit in the course of live patching.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index d2b0020b55..a8cd77855d 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2064,6 +2064,20 @@ pages) must also be specified via the tbuf_size parameter.
 ### tsc (x86)
 > `= unstable | skewed | stable:socket`
 
+### tsx
+    = <bool>
+
+    Applicability: x86
+    Default: true
+
+Controls for the use of Transactional Synchronization eXtensions.
+
+On Intel parts released in Q3 2019 (with updated microcode), and future parts,
+a control has been introduced which allows TSX to be turned off.
+
+On systems with the ability to turn TSX off, this boolean offers system wide
+control of whether TSX is enabled or disabled.
+
 ### ucode (x86)
 > `= List of [ <integer> | scan=<bool>, nmi=<bool> ]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 6b369f21cb..5e6b9d7028 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -67,6 +67,7 @@ obj-y += sysctl.o
 obj-y += time.o
 obj-y += trace.o
 obj-y += traps.o
+obj-y += tsx.o
 obj-y += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index acba0f7583..7055509ed6 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -536,6 +536,20 @@ void recalculate_cpuid_policy(struct domain *d)
     if ( cpu_has_itsc && (d->disable_migrate || d->arch.vtsc) )
         __set_bit(X86_FEATURE_ITSC, max_fs);
 
+    /*
+     * On hardware with MSR_TSX_CTRL, the admin may have elected to disable
+     * TSX and hide the feature bits.  Migrating-in VMs may have been booted
+     * pre-mitigation when the TSX features were visible.
+     *
+     * This situation is compatible (albeit with a perf hit to any TSX code in
+     * the guest), so allow the feature bits to remain set.
+     */
+    if ( cpu_has_tsx_ctrl )
+    {
+        __set_bit(X86_FEATURE_HLE, max_fs);
+        __set_bit(X86_FEATURE_RTM, max_fs);
+    }
+
     /* Clamp the toolstacks choices to reality. */
     for ( i = 0; i < ARRAY_SIZE(fs); i++ )
         fs[i] &= max_fs[i];
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 4698d2bba1..da504ce7ae 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -133,6 +133,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
     case MSR_AMD64_LWP_CFG:
     case MSR_AMD64_LWP_CBADDR:
         /* Not offered to guests. */
@@ -275,6 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     case MSR_ARCH_CAPABILITIES:
         /* Read-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
     case MSR_AMD64_LWP_CFG:
     case MSR_AMD64_LWP_CBADDR:
         /* Not offered to guests. */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index dec60d0301..00ee87bde5 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1597,6 +1597,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     early_microcode_init();
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     identify_cpu(&boot_cpu_data);
 
     set_in_cr4(X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index f86c15bde3..fa691b6ba0 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -368,6 +368,8 @@ void start_secondary(void *unused)
     if ( boot_cpu_has(X86_FEATURE_IBRSB) )
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     if ( xen_guest )
         hypervisor_ap_setup();
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
new file mode 100644
index 0000000000..a8ec2ccc69
--- /dev/null
+++ b/xen/arch/x86/tsx.c
@@ -0,0 +1,74 @@
+#include <xen/init.h>
+#include <asm/msr.h>
+
+/*
+ * Valid values:
+ *   1 => Explicit tsx=1
+ *   0 => Explicit tsx=0
+ *  -1 => Default, implicit tsx=1
+ *
+ * This is arranged such that the bottom bit encodes whether TSX is actually
+ * disabled, while identifying various explicit (>=0) and implicit (<0)
+ * conditions.
+ */
+int8_t __read_mostly opt_tsx = -1;
+int8_t __read_mostly cpu_has_tsx_ctrl = -1;
+
+static int __init parse_tsx(const char *s)
+{
+    int rc = 0, val = parse_bool(s, NULL);
+
+    if ( val >= 0 )
+        opt_tsx = val;
+    else
+        rc = -EINVAL;
+
+    return rc;
+}
+custom_param("tsx", parse_tsx);
+
+void tsx_init(void)
+{
+    /*
+     * This function is first called between microcode being loaded, and CPUID
+     * being scanned generally.  Calculate from raw data whether MSR_TSX_CTRL
+     * is available.
+     */
+    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    {
+        uint64_t caps = 0;
+
+        if ( boot_cpu_data.cpuid_level >= 7 &&
+             (cpuid_count_edx(7, 0) & cpufeat_mask(X86_FEATURE_ARCH_CAPS)) )
+            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+
+        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
+    }
+
+    if ( cpu_has_tsx_ctrl )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_TSX_CTRL, val);
+
+        val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        if ( !(opt_tsx & 1) )
+            val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
+
+        wrmsrl(MSR_TSX_CTRL, val);
+    }
+    else if ( opt_tsx >= 0 )
+        printk_once(XENLOG_WARNING
+                    "MSR_TSX_CTRL not available - Ignoring tsx= setting\n");
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 32746aa8ae..d5f3899f73 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -53,6 +53,7 @@
 #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
+#define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
@@ -60,6 +61,10 @@
 #define MSR_TSX_FORCE_ABORT             0x0000010f
 #define TSX_FORCE_ABORT_RTM             (_AC(1, ULL) <<  0)
 
+#define MSR_TSX_CTRL                    0x00000122
+#define TSX_CTRL_RTM_DISABLE            (_AC(1, ULL) <<  0)
+#define TSX_CTRL_CPUID_CLEAR            (_AC(1, ULL) <<  1)
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index b686156ea0..557f9b6dda 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -258,6 +258,16 @@ static always_inline unsigned int cpuid_count_ebx(
     return ebx;
 }
 
+static always_inline unsigned int cpuid_count_edx(
+    unsigned int leaf, unsigned int subleaf)
+{
+    unsigned int edx, tmp;
+
+    cpuid_count(leaf, subleaf, &tmp, &tmp, &tmp, &edx);
+
+    return edx;
+}
+
 static inline unsigned long read_cr0(void)
 {
     unsigned long cr0;
@@ -601,6 +611,9 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
+extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+void tsx_init(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_X86_PROCESSOR_H */

["xsa305/xsa305-2.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel

See patch documentation and comments.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index a8cd77855d..7651a1ee51 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1958,7 +1958,7 @@ extreme care.**
 An overall boolean value, `spec-ctrl=no`, can be specified to turn off all
 mitigations, including pieces of infrastructure used to virtualise certain
 mitigation features for guests.  This also includes settings which `xpti`,
-`smt`, `pv-l1tf` control, unless the respective option(s) have been
+`smt`, `pv-l1tf`, `tsx` control, unless the respective option(s) have been
 specified earlier on the command line.
 
 Alternatively, a slightly more restricted `spec-ctrl=no-xen` can be used to
@@ -2068,7 +2068,7 @@ pages) must also be specified via the tbuf_size parameter.
     = <bool>
 
     Applicability: x86
-    Default: true
+    Default: false on parts vulnerable to TAA, true otherwise
 
 Controls for the use of Transactional Synchronization eXtensions.
 
@@ -2078,6 +2078,19 @@ a control has been introduced which allows TSX to be turned off.
 On systems with the ability to turn TSX off, this boolean offers system wide
 control of whether TSX is enabled or disabled.
 
+On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
+logic applies:
+
+ * An explicit `tsx=` choice is honoured, even if it is `true` and would
+   result in a vulnerable system.
+
+ * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
+   mitigated by disabling TSX, as this is the lowest overhead option.
+
+ * If the use of TSX is important, the more expensive TAA mitigations can be
+   opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
+   active by default.
+
 ### ucode (x86)
 > `= List of [ <integer> | scan=<bool>, nmi=<bool> ]`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index e74e0cc619..aa632bdcee 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -99,6 +99,9 @@ static int __init parse_spec_ctrl(const char *s)
 
             opt_branch_harden = false;
 
+            if ( opt_tsx == -1 )
+                opt_tsx = -3;
+
         disable_common:
             opt_rsb_pv = false;
             opt_rsb_hvm = false;
@@ -310,7 +313,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
     printk("Speculative mitigation facilities:\n");
 
     /* Hardware features which pertain to speculative mitigations. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB)) ? " IBRS/IBPB" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_L1D_FLUSH)) ? " L1D_FLUSH" : "",
@@ -322,7 +325,9 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
            (caps & ARCH_CAPS_SKIP_L1DFL)            ? " SKIP_L1DFL": "",
            (caps & ARCH_CAPS_SSB_NO)                ? " SSB_NO"    : "",
-           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "");
+           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "",
+           (caps & ARCH_CAPS_TSX_CTRL)              ? " TSX_CTRL"  : "",
+           (caps & ARCH_CAPS_TAA_NO)                ? " TAA_NO"    : "");
 
     /* Compiled-in support which pertains to mitigations. */
     if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) || IS_ENABLED(CONFIG_SHADOW_PAGING) )
@@ -336,7 +341,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s, Other:%s%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -345,6 +350,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
            !boot_cpu_has(X86_FEATURE_SSBD)           ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
+           !(caps & ARCH_CAPS_TSX_CTRL)              ? "" :
+           (opt_tsx & 1)                             ? " TSX+" : " TSX-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
            opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "",
@@ -867,6 +874,7 @@ void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool use_spec_ctrl = false, ibrs = false, hw_smt_enabled;
+    bool cpu_has_bug_taa;
     uint64_t caps = 0;
 
     if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
@@ -1094,6 +1102,53 @@ void __init init_speculation_mitigations(void)
             "enabled.  Mitigations will not be fully effective.  Please\n"
             "choose an explicit smt=<bool> setting.  See XSA-297.\n");
 
+    /*
+     * Vulnerability to TAA is a little complicated to quantify.
+     *
+     * In the pipeline, it is just another way to get speculative access to
+     * stale load port, store buffer or fill buffer data, and therefore can be
+     * considered a superset of MDS (on TSX-capable parts).  On parts which
+     * predate MDS_NO, the existing VERW flushing will mitigate this
+     * sidechannel as well.
+     *
+     * On parts which contain MDS_NO, the lack of VERW flushing means that an
+     * attacker can still use TSX to target microarchitectural buffers to leak
+     * secrets.  Therefore, we consider TAA to be the set of TSX-capable parts
+     * which have MDS_NO but lack TAA_NO.
+     *
+     * Note: cpu_has_rtm (== hle) could already be hidden by `tsx=0` on the
+     *       cmdline.  MSR_TSX_CTRL will only appear on TSX-capable parts, so
+     *       we check both to spot TSX in a microcode/cmdline independent way.
+     */
+    cpu_has_bug_taa =
+        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
+        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+
+    /*
+     * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
+     * the MDS mitigation of disabling HT and using VERW flushing.
+     *
+     * On CPUs which advertise MDS_NO, VERW has no flushing side effect until
+     * the TSX_CTRL microcode is loaded, despite the MD_CLEAR CPUID bit being
+     * advertised, and there isn't a MD_CLEAR_2 flag to use...
+     *
+     * If we're on affected hardware, able to do something about it (which
+     * implies that VERW now works), no explicit TSX choice and traditional
+     * MDS mitigations (no-SMT, VERW) not obviously in use (someone might
+     * plausibly value TSX higher than Hyperthreading...), disable TSX to
+     * mitigate TAA.
+     */
+    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+         ((hw_smt_enabled && opt_smt) ||
+          !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
+    {
+        setup_clear_cpu_cap(X86_FEATURE_HLE);
+        setup_clear_cpu_cap(X86_FEATURE_RTM);
+
+        opt_tsx = 0;
+        tsx_init();
+    }
+
     print_details(thunk, caps);
 
     /*
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index a8ec2ccc69..2d202a0d4e 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -5,7 +5,8 @@
  * Valid values:
  *   1 => Explicit tsx=1
  *   0 => Explicit tsx=0
- *  -1 => Default, implicit tsx=1
+ *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
+ *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
  *
  * This is arranged such that the bottom bit encodes whether TSX is actually
  * disabled, while identifying various explicit (>=0) and implicit (<0)
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index d5f3899f73..3971b992d3 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -54,6 +54,7 @@
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
 #define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
+#define ARCH_CAPS_TAA_NO		(_AC(1, ULL) << 8)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)

["xsa305/xsa305-4.8-1.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/tsx: Introduce tsx= to use MSR_TSX_CTRL when available

To protect against the TSX Async Abort speculative vulnerability, Intel have
released new microcode for affected parts which introduce the MSR_TSX_CTRL
control, which allows TSX to be turned off.  This will be architectural on
future parts.

Introduce tsx= to provide a global on/off for TSX, including its enumeration
via CPUID.  Provide stub virtualisation of this MSR, as it is not exposed to
guests at the moment.

VMs may have booted before microcode is loaded, or before hosts have rebooted,
and they still want to migrate freely.  A VM which booted seeing TSX can
migrate safely to hosts with TSX disabled - TSX will start unconditionally
aborting, but still behave in a manner compatible with the ABI.

The guest-visible behaviour is equivalent to late loading the microcode and
setting the RTM_DISABLE bit in the course of live patching.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 7f60ddbbc6..28fcceb6fc 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1727,6 +1727,20 @@ pages) must also be specified via the tbuf\_size parameter.
 ### tsc
 > `= unstable | skewed | stable:socket`
 
+### tsx
+    = <bool>
+
+    Applicability: x86
+    Default: true
+
+Controls for the use of Transactional Synchronization eXtensions.
+
+On Intel parts released in Q3 2019 (with updated microcode), and future parts,
+a control has been introduced which allows TSX to be turned off.
+
+On systems with the ability to turn TSX off, this boolean offers system wide
+control of whether TSX is enabled or disabled.
+
 ### ucode
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index fc90449ea3..09e7a1096d 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -62,6 +62,7 @@ obj-y += sysctl.o
 obj-y += time.o
 obj-y += trace.o
 obj-y += traps.o
+obj-y += tsx.o
 obj-y += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5338d20c41..85350b33aa 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3600,9 +3600,22 @@ void hvm_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
     case 0x7:
         if ( count == 0 )
         {
-            /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
-            *ebx &= (hvm_featureset[FEATURESET_7b0] &
-                     ~special_features[FEATURESET_7b0]);
+            /*
+             * Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view.
+             *
+             * On hardware with MSR_TSX_CTRL, the admin may have elected to
+             * disable TSX and hide the feature bits.  Migrating-in VMs may
+             * have been booted pre-mitigation when the TSX features were
+             * visible.
+             *
+             * This situation is compatible (albeit with a perf hit to any TSX
+             * code in the guest), so allow the feature bits to remain set.
+             */
+            *ebx &= ((hvm_featureset[FEATURESET_7b0] &
+                      ~special_features[FEATURESET_7b0]) |
+                     (cpu_has_tsx_ctrl ?
+                      (cpufeat_mask(X86_FEATURE_HLE) |
+                       cpufeat_mask(X86_FEATURE_RTM)) : 0));
             *ebx |= (host_featureset[FEATURESET_7b0] &
                      special_features[FEATURESET_7b0]);
 
@@ -3955,6 +3968,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
@@ -4201,6 +4215,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     case MSR_ARCH_CAPABILITIES:
         /* Read-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 3a7b36251c..a3fb9251b5 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1484,6 +1484,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     early_microcode_init();
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     identify_cpu(&boot_cpu_data);
 
     set_in_cr4(X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 4c602491ba..4a3e080f78 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -361,6 +361,8 @@ void start_secondary(void *unused)
     if ( boot_cpu_has(X86_FEATURE_IBRSB) )
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     smp_callin();
 
     init_percpu_time();
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3c1c4e2c2d..2d5dbcaa3a 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1141,9 +1141,22 @@ void pv_cpuid(uint32_t leaf, uint32_t subleaf,
     case 0x00000007:
         if ( subleaf == 0 )
         {
-            /* Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view. */
-            b &= (pv_featureset[FEATURESET_7b0] &
-                  ~special_features[FEATURESET_7b0]);
+            /*
+             * Fold host's FDP_EXCP_ONLY and NO_FPU_SEL into guest's view.
+             *
+             * On hardware with MSR_TSX_CTRL, the admin may have elected to
+             * disable TSX and hide the feature bits.  Migrating-in VMs may
+             * have been booted pre-mitigation when the TSX features were
+             * visible.
+             *
+             * This situation is compatible (albeit with a perf hit to any TSX
+             * code in the guest), so allow the feature bits to remain set.
+             */
+            b &= ((pv_featureset[FEATURESET_7b0] &
+                   ~special_features[FEATURESET_7b0]) |
+                  (cpu_has_tsx_ctrl ?
+                   (cpufeat_mask(X86_FEATURE_HLE) |
+                    cpufeat_mask(X86_FEATURE_RTM)) : 0));
             b |= (host_featureset[FEATURESET_7b0] &
                   special_features[FEATURESET_7b0]);
 
@@ -2531,6 +2544,7 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         break;
 
@@ -2762,6 +2776,7 @@ static int priv_op_write_msr(unsigned int reg, uint64_t val,
     case MSR_ARCH_CAPABILITIES:
         /* The MSR is read-only. */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         break;
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
new file mode 100644
index 0000000000..3a853d38f6
--- /dev/null
+++ b/xen/arch/x86/tsx.c
@@ -0,0 +1,74 @@
+#include <xen/init.h>
+#include <asm/msr.h>
+
+/*
+ * Valid values:
+ *   1 => Explicit tsx=1
+ *   0 => Explicit tsx=0
+ *  -1 => Default, implicit tsx=1
+ *
+ * This is arranged such that the bottom bit encodes whether TSX is actually
+ * disabled, while identifying various explicit (>=0) and implicit (<0)
+ * conditions.
+ */
+int8_t __read_mostly opt_tsx = -1;
+int8_t __read_mostly cpu_has_tsx_ctrl = -1;
+
+static int __init parse_tsx(const char *s)
+{
+    int rc = 0, val = parse_bool(s);
+
+    if ( val >= 0 )
+        opt_tsx = val;
+    else
+        rc = -EINVAL;
+
+    return rc;
+}
+custom_param("tsx", parse_tsx);
+
+void tsx_init(void)
+{
+    /*
+     * This function is first called between microcode being loaded, and CPUID
+     * being scanned generally.  Calculate from raw data whether MSR_TSX_CTRL
+     * is available.
+     */
+    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    {
+        uint64_t caps = 0;
+
+        if ( boot_cpu_data.cpuid_level >= 7 &&
+             (cpuid_count_edx(7, 0) & cpufeat_mask(X86_FEATURE_ARCH_CAPS)) )
+            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+
+        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
+    }
+
+    if ( cpu_has_tsx_ctrl )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_TSX_CTRL, val);
+
+        val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        if ( !(opt_tsx & 1) )
+            val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
+
+        wrmsrl(MSR_TSX_CTRL, val);
+    }
+    else if ( opt_tsx >= 0 )
+        printk_once(XENLOG_WARNING
+                    "MSR_TSX_CTRL not available - Ignoring tsx= setting\n");
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 0a596f7489..1982137a33 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -55,6 +55,7 @@
 #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
+#define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
@@ -62,6 +63,10 @@
 #define MSR_TSX_FORCE_ABORT             0x0000010f
 #define TSX_FORCE_ABORT_RTM             (_AC(1, ULL) <<  0)
 
+#define MSR_TSX_CTRL                    0x00000122
+#define TSX_CTRL_RTM_DISABLE            (_AC(1, ULL) <<  0)
+#define TSX_CTRL_CPUID_CLEAR            (_AC(1, ULL) <<  1)
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index a5319e3aaf..dc3f4f8490 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -339,6 +339,16 @@ static always_inline unsigned int cpuid_edx(unsigned int op)
     return edx;
 }
 
+static always_inline unsigned int cpuid_count_edx(
+    unsigned int leaf, unsigned int subleaf)
+{
+    unsigned int edx, tmp;
+
+    cpuid_count(leaf, subleaf, &tmp, &tmp, &tmp, &edx);
+
+    return edx;
+}
+
 static inline unsigned long read_cr0(void)
 {
     unsigned long cr0;
@@ -692,6 +702,9 @@ static inline void pv_cpuid_regs(struct cpu_user_regs *regs)
              &regs->_eax, &regs->_ebx, &regs->_ecx, &regs->_edx);
 }
 
+extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+void tsx_init(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_X86_PROCESSOR_H */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index b9d1c87ffd..b9d2ef0c79 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -103,6 +103,16 @@ extern int printk_ratelimit(void);
 #define gprintk(lvl, fmt, args...) \
     printk(XENLOG_GUEST lvl "%pv " fmt, current, ## args)
 
+#define printk_once(fmt, args...)               \
+({                                              \
+    static bool __read_mostly once_;            \
+    if ( unlikely(!once_) )                     \
+    {                                           \
+        once_ = true;                           \
+        printk(fmt, ## args);                   \
+    }                                           \
+})
+
 #ifdef NDEBUG
 
 static inline void

["xsa305/xsa305-4.8-2.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel

See patch documentation and comments.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 28fcceb6fc..6db0daf533 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1617,7 +1617,7 @@ extreme care.**
 An overall boolean value, `spec-ctrl=no`, can be specified to turn off all
 mitigations, including pieces of infrastructure used to virtualise certain
 mitigation features for guests.  This also includes settings which `xpti`,
-`smt`, `pv-l1tf` control, unless the respective option(s) have been
+`smt`, `pv-l1tf`, `tsx` control, unless the respective option(s) have been
 specified earlier on the command line.
 
 Alternatively, a slightly more restricted `spec-ctrl=no-xen` can be used to
@@ -1731,7 +1731,7 @@ pages) must also be specified via the tbuf\_size parameter.
     = <bool>
 
     Applicability: x86
-    Default: true
+    Default: false on parts vulnerable to TAA, true otherwise
 
 Controls for the use of Transactional Synchronization eXtensions.
 
@@ -1741,6 +1741,19 @@ a control has been introduced which allows TSX to be turned off.
 On systems with the ability to turn TSX off, this boolean offers system wide
 control of whether TSX is enabled or disabled.
 
+On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
+logic applies:
+
+ * An explicit `tsx=` choice is honoured, even if it is `true` and would
+   result in a vulnerable system.
+
+ * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
+   mitigated by disabling TSX, as this is the lowest overhead option.
+
+ * If the use of TSX is important, the more expensive TAA mitigations can be
+   opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
+   active by default.
+
 ### ucode
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 558626c94e..f44df6ff43 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -135,6 +135,9 @@ static int __init parse_spec_ctrl(char *s)
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
+            if ( opt_tsx == -1 )
+                opt_tsx = -3;
+
         disable_common:
             opt_rsb_pv = false;
             opt_rsb_hvm = false;
@@ -345,7 +348,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
     printk("Speculative mitigation facilities:\n");
 
     /* Hardware features which pertain to speculative mitigations. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB)) ? " IBRS/IBPB" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_L1D_FLUSH)) ? " L1D_FLUSH" : "",
@@ -357,7 +360,9 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
            (caps & ARCH_CAPS_SKIP_L1DFL)            ? " SKIP_L1DFL": "",
            (caps & ARCH_CAPS_SSB_NO)                ? " SSB_NO"    : "",
-           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "");
+           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "",
+           (caps & ARCH_CAPS_TSX_CTRL)              ? " TSX_CTRL"  : "",
+           (caps & ARCH_CAPS_TAA_NO)                ? " TAA_NO"    : "");
 
     /* Compiled-in support which pertains to mitigations. */
     if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) || IS_ENABLED(CONFIG_SHADOW_PAGING) )
@@ -371,7 +376,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s, Other:%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -380,6 +385,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
            !boot_cpu_has(X86_FEATURE_SSBD)           ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
+           !(caps & ARCH_CAPS_TSX_CTRL)              ? "" :
+           (opt_tsx & 1)                             ? " TSX+" : " TSX-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
            opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "");
@@ -870,6 +877,7 @@ void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool use_spec_ctrl = false, ibrs = false, hw_smt_enabled;
+    bool cpu_has_bug_taa;
     uint64_t caps = 0;
 
     if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
@@ -1096,6 +1104,53 @@ void __init init_speculation_mitigations(void)
             "enabled.  Mitigations will not be fully effective.  Please\n"
             "choose an explicit smt=<bool> setting.  See XSA-297.\n");
 
+    /*
+     * Vulnerability to TAA is a little complicated to quantify.
+     *
+     * In the pipeline, it is just another way to get speculative access to
+     * stale load port, store buffer or fill buffer data, and therefore can be
+     * considered a superset of MDS (on TSX-capable parts).  On parts which
+     * predate MDS_NO, the existing VERW flushing will mitigate this
+     * sidechannel as well.
+     *
+     * On parts which contain MDS_NO, the lack of VERW flushing means that an
+     * attacker can still use TSX to target microarchitectural buffers to leak
+     * secrets.  Therefore, we consider TAA to be the set of TSX-capable parts
+     * which have MDS_NO but lack TAA_NO.
+     *
+     * Note: cpu_has_rtm (== hle) could already be hidden by `tsx=0` on the
+     *       cmdline.  MSR_TSX_CTRL will only appear on TSX-capable parts, so
+     *       we check both to spot TSX in a microcode/cmdline independent way.
+     */
+    cpu_has_bug_taa =
+        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
+        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+
+    /*
+     * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
+     * the MDS mitigation of disabling HT and using VERW flushing.
+     *
+     * On CPUs which advertise MDS_NO, VERW has no flushing side effect until
+     * the TSX_CTRL microcode is loaded, despite the MD_CLEAR CPUID bit being
+     * advertised, and there isn't a MD_CLEAR_2 flag to use...
+     *
+     * If we're on affected hardware, able to do something about it (which
+     * implies that VERW now works), no explicit TSX choice and traditional
+     * MDS mitigations (no-SMT, VERW) not obviously in use (someone might
+     * plausibly value TSX higher than Hyperthreading...), disable TSX to
+     * mitigate TAA.
+     */
+    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+         ((hw_smt_enabled && opt_smt) ||
+          !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
+    {
+        setup_clear_cpu_cap(X86_FEATURE_HLE);
+        setup_clear_cpu_cap(X86_FEATURE_RTM);
+
+        opt_tsx = 0;
+        tsx_init();
+    }
+
     print_details(thunk, caps);
 
     /*
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index 3a853d38f6..1778ff21b7 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -5,7 +5,8 @@
  * Valid values:
  *   1 => Explicit tsx=1
  *   0 => Explicit tsx=0
- *  -1 => Default, implicit tsx=1
+ *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
+ *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
  *
  * This is arranged such that the bottom bit encodes whether TSX is actually
  * disabled, while identifying various explicit (>=0) and implicit (<0)
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 6057d95404..677f414f5c 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -86,6 +86,7 @@ XEN_CPUFEATURE(SC_VERW_IDLE,    (FSCAPINTS+0)*32+27) /* VERW used by Xen for idl
 #define cpu_has_aperfmperf	boot_cpu_has(X86_FEATURE_APERFMPERF)
 #define cpu_has_smep            boot_cpu_has(X86_FEATURE_SMEP)
 #define cpu_has_invpcid         boot_cpu_has(X86_FEATURE_INVPCID)
+#define cpu_has_rtm             boot_cpu_has(X86_FEATURE_RTM)
 #define cpu_has_smap            boot_cpu_has(X86_FEATURE_SMAP)
 #define cpu_has_fpu_sel         (!boot_cpu_has(X86_FEATURE_NO_FPU_SEL))
 #define cpu_has_ffxsr           ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD) \
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 1982137a33..ad5e90f020 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -56,6 +56,7 @@
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
 #define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
+#define ARCH_CAPS_TAA_NO		(_AC(1, ULL) << 8)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)

["xsa305/xsa305-4.9-1.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/tsx: Introduce tsx= to use MSR_TSX_CTRL when available

To protect against the TSX Async Abort speculative vulnerability, Intel have
released new microcode for affected parts which introduce the MSR_TSX_CTRL
control, which allows TSX to be turned off.  This will be architectural on
future parts.

Introduce tsx= to provide a global on/off for TSX, including its enumeration
via CPUID.  Provide stub virtualisation of this MSR, as it is not exposed to
guests at the moment.

VMs may have booted before microcode is loaded, or before hosts have rebooted,
and they still want to migrate freely.  A VM which booted seeing TSX can
migrate safely to hosts with TSX disabled - TSX will start unconditionally
aborting, but still behave in a manner compatible with the ABI.

The guest-visible behaviour is equivalent to late loading the microcode and
setting the RTM_DISABLE bit in the course of live patching.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index a3194cadc3..0f1b6a176e 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1815,6 +1815,20 @@ pages) must also be specified via the tbuf\_size parameter.
 ### tsc
 > `= unstable | skewed | stable:socket`
 
+### tsx
+    = <bool>
+
+    Applicability: x86
+    Default: true
+
+Controls for the use of Transactional Synchronization eXtensions.
+
+On Intel parts released in Q3 2019 (with updated microcode), and future parts,
+a control has been introduced which allows TSX to be turned off.
+
+On systems with the ability to turn TSX off, this boolean offers system wide
+control of whether TSX is enabled or disabled.
+
 ### ucode
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 59cadb759d..041068173c 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -63,6 +63,7 @@ obj-y += sysctl.o
 obj-y += time.o
 obj-y += trace.o
 obj-y += traps.o
+obj-y += tsx.o
 obj-y += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 2201f8ac75..9aaf8b8283 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -605,6 +605,20 @@ void recalculate_cpuid_policy(struct domain *d)
     if ( cpu_has_itsc && (d->disable_migrate || d->arch.vtsc) )
         __set_bit(X86_FEATURE_ITSC, max_fs);
 
+    /*
+     * On hardware with MSR_TSX_CTRL, the admin may have elected to disable
+     * TSX and hide the feature bits.  Migrating-in VMs may have been booted
+     * pre-mitigation when the TSX features were visible.
+     *
+     * This situation is compatible (albeit with a perf hit to any TSX code in
+     * the guest), so allow the feature bits to remain set.
+     */
+    if ( cpu_has_tsx_ctrl )
+    {
+        __set_bit(X86_FEATURE_HLE, max_fs);
+        __set_bit(X86_FEATURE_RTM, max_fs);
+    }
+
     /* Clamp the toolstacks choices to reality. */
     for ( i = 0; i < ARRAY_SIZE(fs); i++ )
         fs[i] &= max_fs[i];
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0b05b0388c..2aa9ac06a4 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3444,6 +3444,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
@@ -3669,6 +3670,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     case MSR_ARCH_CAPABILITIES:
         /* Read-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 40af7e65d4..48963b186b 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1490,6 +1490,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     early_microcode_init();
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     identify_cpu(&boot_cpu_data);
 
     set_in_cr4(X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 1fda6c507a..641f830cd1 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -363,6 +363,8 @@ void start_secondary(void *unused)
     if ( boot_cpu_has(X86_FEATURE_IBRSB) )
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     smp_callin();
 
     init_percpu_time();
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 53301574e2..9b4bb6a009 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2654,6 +2654,7 @@ static int priv_op_read_msr(unsigned int reg, uint64_t *val,
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         break;
 
@@ -2878,6 +2879,7 @@ static int priv_op_write_msr(unsigned int reg, uint64_t val,
     case MSR_ARCH_CAPABILITIES:
         /* The MSR is read-only. */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         break;
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
new file mode 100644
index 0000000000..3a853d38f6
--- /dev/null
+++ b/xen/arch/x86/tsx.c
@@ -0,0 +1,74 @@
+#include <xen/init.h>
+#include <asm/msr.h>
+
+/*
+ * Valid values:
+ *   1 => Explicit tsx=1
+ *   0 => Explicit tsx=0
+ *  -1 => Default, implicit tsx=1
+ *
+ * This is arranged such that the bottom bit encodes whether TSX is actually
+ * disabled, while identifying various explicit (>=0) and implicit (<0)
+ * conditions.
+ */
+int8_t __read_mostly opt_tsx = -1;
+int8_t __read_mostly cpu_has_tsx_ctrl = -1;
+
+static int __init parse_tsx(const char *s)
+{
+    int rc = 0, val = parse_bool(s);
+
+    if ( val >= 0 )
+        opt_tsx = val;
+    else
+        rc = -EINVAL;
+
+    return rc;
+}
+custom_param("tsx", parse_tsx);
+
+void tsx_init(void)
+{
+    /*
+     * This function is first called between microcode being loaded, and CPUID
+     * being scanned generally.  Calculate from raw data whether MSR_TSX_CTRL
+     * is available.
+     */
+    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    {
+        uint64_t caps = 0;
+
+        if ( boot_cpu_data.cpuid_level >= 7 &&
+             (cpuid_count_edx(7, 0) & cpufeat_mask(X86_FEATURE_ARCH_CAPS)) )
+            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+
+        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
+    }
+
+    if ( cpu_has_tsx_ctrl )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_TSX_CTRL, val);
+
+        val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        if ( !(opt_tsx & 1) )
+            val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
+
+        wrmsrl(MSR_TSX_CTRL, val);
+    }
+    else if ( opt_tsx >= 0 )
+        printk_once(XENLOG_WARNING
+                    "MSR_TSX_CTRL not available - Ignoring tsx= setting\n");
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 5ef894ff29..b7c1673488 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -55,6 +55,7 @@
 #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
+#define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
@@ -62,6 +63,10 @@
 #define MSR_TSX_FORCE_ABORT             0x0000010f
 #define TSX_FORCE_ABORT_RTM             (_AC(1, ULL) <<  0)
 
+#define MSR_TSX_CTRL                    0x00000122
+#define TSX_CTRL_RTM_DISABLE            (_AC(1, ULL) <<  0)
+#define TSX_CTRL_CPUID_CLEAR            (_AC(1, ULL) <<  1)
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 4487347713..d068a8710f 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -265,6 +265,16 @@ static always_inline unsigned int cpuid_count_ebx(
     return ebx;
 }
 
+static always_inline unsigned int cpuid_count_edx(
+    unsigned int leaf, unsigned int subleaf)
+{
+    unsigned int edx, tmp;
+
+    cpuid_count(leaf, subleaf, &tmp, &tmp, &tmp, &edx);
+
+    return edx;
+}
+
 static inline unsigned long read_cr0(void)
 {
     unsigned long cr0;
@@ -632,6 +642,9 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
+extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+void tsx_init(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_X86_PROCESSOR_H */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 2829d478ad..6d97373765 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -108,6 +108,16 @@ extern int printk_ratelimit(void);
 #define gprintk(lvl, fmt, args...) \
     printk(XENLOG_GUEST lvl "%pv " fmt, current, ## args)
 
+#define printk_once(fmt, args...)               \
+({                                              \
+    static bool __read_mostly once_;            \
+    if ( unlikely(!once_) )                     \
+    {                                           \
+        once_ = true;                           \
+        printk(fmt, ## args);                   \
+    }                                           \
+})
+
 #ifdef NDEBUG
 
 static inline void

["xsa305/xsa305-4.9-2.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel

See patch documentation and comments.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 0f1b6a176e..d0a6245995 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1708,7 +1708,7 @@ extreme care.**
 An overall boolean value, `spec-ctrl=no`, can be specified to turn off all
 mitigations, including pieces of infrastructure used to virtualise certain
 mitigation features for guests.  This also includes settings which `xpti`,
-`smt`, `pv-l1tf` control, unless the respective option(s) have been
+`smt`, `pv-l1tf`, `tsx` control, unless the respective option(s) have been
 specified earlier on the command line.
 
 Alternatively, a slightly more restricted `spec-ctrl=no-xen` can be used to
@@ -1819,7 +1819,7 @@ pages) must also be specified via the tbuf\_size parameter.
     = <bool>
 
     Applicability: x86
-    Default: true
+    Default: false on parts vulnerable to TAA, true otherwise
 
 Controls for the use of Transactional Synchronization eXtensions.
 
@@ -1829,6 +1829,19 @@ a control has been introduced which allows TSX to be turned off.
 On systems with the ability to turn TSX off, this boolean offers system wide
 control of whether TSX is enabled or disabled.
 
+On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
+logic applies:
+
+ * An explicit `tsx=` choice is honoured, even if it is `true` and would
+   result in a vulnerable system.
+
+ * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
+   mitigated by disabling TSX, as this is the lowest overhead option.
+
+ * If the use of TSX is important, the more expensive TAA mitigations can be
+   opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
+   active by default.
+
 ### ucode
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 558626c94e..f44df6ff43 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -135,6 +135,9 @@ static int __init parse_spec_ctrl(char *s)
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
+            if ( opt_tsx == -1 )
+                opt_tsx = -3;
+
         disable_common:
             opt_rsb_pv = false;
             opt_rsb_hvm = false;
@@ -345,7 +348,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
     printk("Speculative mitigation facilities:\n");
 
     /* Hardware features which pertain to speculative mitigations. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB)) ? " IBRS/IBPB" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_L1D_FLUSH)) ? " L1D_FLUSH" : "",
@@ -357,7 +360,9 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
            (caps & ARCH_CAPS_SKIP_L1DFL)            ? " SKIP_L1DFL": "",
            (caps & ARCH_CAPS_SSB_NO)                ? " SSB_NO"    : "",
-           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "");
+           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "",
+           (caps & ARCH_CAPS_TSX_CTRL)              ? " TSX_CTRL"  : "",
+           (caps & ARCH_CAPS_TAA_NO)                ? " TAA_NO"    : "");
 
     /* Compiled-in support which pertains to mitigations. */
     if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) || IS_ENABLED(CONFIG_SHADOW_PAGING) )
@@ -371,7 +376,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s, Other:%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -380,6 +385,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
            !boot_cpu_has(X86_FEATURE_SSBD)           ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
+           !(caps & ARCH_CAPS_TSX_CTRL)              ? "" :
+           (opt_tsx & 1)                             ? " TSX+" : " TSX-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
            opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "");
@@ -870,6 +877,7 @@ void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool use_spec_ctrl = false, ibrs = false, hw_smt_enabled;
+    bool cpu_has_bug_taa;
     uint64_t caps = 0;
 
     if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
@@ -1096,6 +1104,53 @@ void __init init_speculation_mitigations(void)
             "enabled.  Mitigations will not be fully effective.  Please\n"
             "choose an explicit smt=<bool> setting.  See XSA-297.\n");
 
+    /*
+     * Vulnerability to TAA is a little complicated to quantify.
+     *
+     * In the pipeline, it is just another way to get speculative access to
+     * stale load port, store buffer or fill buffer data, and therefore can be
+     * considered a superset of MDS (on TSX-capable parts).  On parts which
+     * predate MDS_NO, the existing VERW flushing will mitigate this
+     * sidechannel as well.
+     *
+     * On parts which contain MDS_NO, the lack of VERW flushing means that an
+     * attacker can still use TSX to target microarchitectural buffers to leak
+     * secrets.  Therefore, we consider TAA to be the set of TSX-capable parts
+     * which have MDS_NO but lack TAA_NO.
+     *
+     * Note: cpu_has_rtm (== hle) could already be hidden by `tsx=0` on the
+     *       cmdline.  MSR_TSX_CTRL will only appear on TSX-capable parts, so
+     *       we check both to spot TSX in a microcode/cmdline independent way.
+     */
+    cpu_has_bug_taa =
+        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
+        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+
+    /*
+     * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
+     * the MDS mitigation of disabling HT and using VERW flushing.
+     *
+     * On CPUs which advertise MDS_NO, VERW has no flushing side effect until
+     * the TSX_CTRL microcode is loaded, despite the MD_CLEAR CPUID bit being
+     * advertised, and there isn't a MD_CLEAR_2 flag to use...
+     *
+     * If we're on affected hardware, able to do something about it (which
+     * implies that VERW now works), no explicit TSX choice and traditional
+     * MDS mitigations (no-SMT, VERW) not obviously in use (someone might
+     * plausibly value TSX higher than Hyperthreading...), disable TSX to
+     * mitigate TAA.
+     */
+    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+         ((hw_smt_enabled && opt_smt) ||
+          !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
+    {
+        setup_clear_cpu_cap(X86_FEATURE_HLE);
+        setup_clear_cpu_cap(X86_FEATURE_RTM);
+
+        opt_tsx = 0;
+        tsx_init();
+    }
+
     print_details(thunk, caps);
 
     /*
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index 3a853d38f6..1778ff21b7 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -5,7 +5,8 @@
  * Valid values:
  *   1 => Explicit tsx=1
  *   0 => Explicit tsx=0
- *  -1 => Default, implicit tsx=1
+ *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
+ *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
  *
  * This is arranged such that the bottom bit encodes whether TSX is actually
  * disabled, while identifying various explicit (>=0) and implicit (<0)
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index b7c1673488..5d636cc250 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -56,6 +56,7 @@
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
 #define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
+#define ARCH_CAPS_TAA_NO		(_AC(1, ULL) << 8)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
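
For illustration only (not part of the patch above): a minimal standalone
sketch of the classification which the new cpu_has_bug_taa calculation in
spec_ctrl.c performs.  The ARCH_CAPS_* values mirror the msr-index.h
additions in this patch; the sample caps value in main() is hypothetical.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdbool.h>

  /* Bit positions as added to msr-index.h by this patch. */
  #define ARCH_CAPS_MDS_NO    (1ULL << 5)
  #define ARCH_CAPS_TSX_CTRL  (1ULL << 7)
  #define ARCH_CAPS_TAA_NO    (1ULL << 8)

  /*
   * Mirrors the cpu_has_bug_taa logic: TSX-capable parts (RTM visible, or
   * MSR_TSX_CTRL enumerated) which report MDS_NO but not TAA_NO are the
   * ones needing the TSX-based mitigation.
   */
  static bool vulnerable_to_taa(bool has_rtm, uint64_t caps)
  {
      bool tsx_capable = has_rtm || (caps & ARCH_CAPS_TSX_CTRL);

      return tsx_capable &&
             (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
  }

  int main(void)
  {
      /* Hypothetical example: a TSX-capable part enumerating MDS_NO only. */
      uint64_t caps = ARCH_CAPS_MDS_NO | ARCH_CAPS_TSX_CTRL;

      printf("TAA vulnerable: %s\n", vulnerable_to_taa(true, caps) ? "yes" : "no");
      return 0;
  }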

["xsa305/xsa305-4.10-1.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/tsx: Introduce tsx= to use MSR_TSX_CTRL when available

To protect against the TSX Async Abort speculative vulnerability, Intel have
released new microcode for affected parts which introduce the MSR_TSX_CTRL
control, which allows TSX to be turned off.  This will be architectural on
future parts.

Introduce tsx= to provide a global on/off for TSX, including its enumeration
via CPUID.  Provide stub virtualisation of this MSR, as it is not exposed to
guests at the moment.

VMs may have booted before microcode is loaded, or before hosts have rebooted,
and they still want to migrate freely.  A VM which booted seeing TSX can
migrate safely to hosts with TSX disabled - TSX will start unconditionally
aborting, but still behave in a manner compatible with the ABI.

The guest-visible behaviour is equivalent to late loading the microcode and
setting the RTM_DISABLE bit in the course of live patching.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
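
To make the behaviour concrete, a minimal sketch (for illustration only, not
part of the patch) of the bit manipulation which the new tsx_init() below
performs when MSR_TSX_CTRL is available.  The rdmsrl()/wrmsrl() accessors are
replaced by a plain value so the logic can be shown standalone; the constants
match the msr-index.h additions in this patch.

  #include <stdint.h>
  #include <stdio.h>

  #define TSX_CTRL_RTM_DISABLE  (1ULL << 0)  /* Force all RTM transactions to abort. */
  #define TSX_CTRL_CPUID_CLEAR  (1ULL << 1)  /* Hide the HLE/RTM CPUID bits. */

  /* Model of the MSR_TSX_CTRL update: tsx_enabled mirrors the bottom bit of opt_tsx. */
  static uint64_t apply_tsx_setting(uint64_t msr_val, int tsx_enabled)
  {
      msr_val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
      if ( !tsx_enabled )
          msr_val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
      return msr_val;
  }

  int main(void)
  {
      printf("tsx=1 -> MSR_TSX_CTRL = %#llx\n",
             (unsigned long long)apply_tsx_setting(0, 1));
      printf("tsx=0 -> MSR_TSX_CTRL = %#llx\n",
             (unsigned long long)apply_tsx_setting(0, 0));
      return 0;
  }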

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 0cbfb5096c..1b169c7b72 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1920,6 +1920,20 @@ pages) must also be specified via the tbuf\_size parameter.
 ### tsc
 > `= unstable | skewed | stable:socket`
 
+### tsx
+    = <bool>
+
+    Applicability: x86
+    Default: true
+
+Controls for the use of Transactional Synchronization eXtensions.
+
+On Intel parts released in Q3 2019 (with updated microcode), and future parts,
+a control has been introduced which allows TSX to be turned off.
+
+On systems with the ability to turn TSX off, this boolean offers system wide
+control of whether TSX is enabled or disabled.
+
 ### ucode
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d86fb97fa3..4e4f39d933 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -65,6 +65,7 @@ obj-y += sysctl.o
 obj-y += time.o
 obj-y += trace.o
 obj-y += traps.o
+obj-y += tsx.o
 obj-y += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 98b63f3a01..e943d70bca 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -600,6 +600,20 @@ void recalculate_cpuid_policy(struct domain *d)
     if ( cpu_has_itsc && (d->disable_migrate || d->arch.vtsc) )
         __set_bit(X86_FEATURE_ITSC, max_fs);
 
+    /*
+     * On hardware with MSR_TSX_CTRL, the admin may have elected to disable
+     * TSX and hide the feature bits.  Migrating-in VMs may have been booted
+     * pre-mitigation when the TSX features were visible.
+     *
+     * This situation is compatible (albeit with a perf hit to any TSX code in
+     * the guest), so allow the feature bits to remain set.
+     */
+    if ( cpu_has_tsx_ctrl )
+    {
+        __set_bit(X86_FEATURE_HLE, max_fs);
+        __set_bit(X86_FEATURE_RTM, max_fs);
+    }
+
     /* Clamp the toolstacks choices to reality. */
     for ( i = 0; i < ARRAY_SIZE(fs); i++ )
         fs[i] &= max_fs[i];
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 6853d4c120..6ceea913fb 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -134,6 +134,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
@@ -192,6 +193,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     case MSR_ARCH_CAPABILITIES:
         /* Read-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 7903204761..949d4abbdf 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1540,6 +1540,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     early_microcode_init();
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     identify_cpu(&boot_cpu_data);
 
     set_in_cr4(X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index b0496eb66e..cdf53afc1e 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -370,6 +370,8 @@ void start_secondary(void *unused)
     if ( boot_cpu_has(X86_FEATURE_IBRSB) )
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     if ( xen_guest )
         hypervisor_ap_setup();
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
new file mode 100644
index 0000000000..a8ec2ccc69
--- /dev/null
+++ b/xen/arch/x86/tsx.c
@@ -0,0 +1,74 @@
+#include <xen/init.h>
+#include <asm/msr.h>
+
+/*
+ * Valid values:
+ *   1 => Explicit tsx=1
+ *   0 => Explicit tsx=0
+ *  -1 => Default, implicit tsx=1
+ *
+ * This is arranged such that the bottom bit encodes whether TSX is actually
+ * disabled, while identifying various explicit (>=0) and implicit (<0)
+ * conditions.
+ */
+int8_t __read_mostly opt_tsx = -1;
+int8_t __read_mostly cpu_has_tsx_ctrl = -1;
+
+static int __init parse_tsx(const char *s)
+{
+    int rc = 0, val = parse_bool(s, NULL);
+
+    if ( val >= 0 )
+        opt_tsx = val;
+    else
+        rc = -EINVAL;
+
+    return rc;
+}
+custom_param("tsx", parse_tsx);
+
+void tsx_init(void)
+{
+    /*
+     * This function is first called between microcode being loaded, and CPUID
+     * being scanned generally.  Calculate from raw data whether MSR_TSX_CTRL
+     * is available.
+     */
+    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    {
+        uint64_t caps = 0;
+
+        if ( boot_cpu_data.cpuid_level >= 7 &&
+             (cpuid_count_edx(7, 0) & cpufeat_mask(X86_FEATURE_ARCH_CAPS)) )
+            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+
+        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
+    }
+
+    if ( cpu_has_tsx_ctrl )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_TSX_CTRL, val);
+
+        val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        if ( !(opt_tsx & 1) )
+            val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
+
+        wrmsrl(MSR_TSX_CTRL, val);
+    }
+    else if ( opt_tsx >= 0 )
+        printk_once(XENLOG_WARNING
+                    "MSR_TSX_CTRL not available - Ignoring tsx= setting\n");
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 47e7c412f2..c96c4f85c9 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -55,6 +55,7 @@
 #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
+#define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
@@ -62,6 +63,10 @@
 #define MSR_TSX_FORCE_ABORT             0x0000010f
 #define TSX_FORCE_ABORT_RTM             (_AC(1, ULL) <<  0)
 
+#define MSR_TSX_CTRL                    0x00000122
+#define TSX_CTRL_RTM_DISABLE            (_AC(1, ULL) <<  0)
+#define TSX_CTRL_CPUID_CLEAR            (_AC(1, ULL) <<  1)
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index a0f8bf47e5..e707380f43 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -268,6 +268,16 @@ static always_inline unsigned int cpuid_count_ebx(
     return ebx;
 }
 
+static always_inline unsigned int cpuid_count_edx(
+    unsigned int leaf, unsigned int subleaf)
+{
+    unsigned int edx, tmp;
+
+    cpuid_count(leaf, subleaf, &tmp, &tmp, &tmp, &edx);
+
+    return edx;
+}
+
 static always_inline void cpuid_count_leaf(uint32_t leaf, uint32_t subleaf,
                                            struct cpuid_leaf *data)
 {
@@ -622,6 +632,9 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
+extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+void tsx_init(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_X86_PROCESSOR_H */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 750f809968..be223a6950 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -116,6 +116,16 @@ extern int printk_ratelimit(void);
 #define gprintk(lvl, fmt, args...) \
     printk(XENLOG_GUEST lvl "%pv " fmt, current, ## args)
 
+#define printk_once(fmt, args...)               \
+({                                              \
+    static bool __read_mostly once_;            \
+    if ( unlikely(!once_) )                     \
+    {                                           \
+        once_ = true;                           \
+        printk(fmt, ## args);                   \
+    }                                           \
+})
+
 #ifdef NDEBUG
 
 static inline void

["xsa305/xsa305-4.10-2.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel

See patch documentation and comments.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 1b169c7b72..7a03f4ec70 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1813,7 +1813,7 @@ extreme care.**
 An overall boolean value, `spec-ctrl=no`, can be specified to turn off all
 mitigations, including pieces of infrastructure used to virtualise certain
 mitigation features for guests.  This also includes settings which `xpti`,
-`smt`, `pv-l1tf` control, unless the respective option(s) have been
+`smt`, `pv-l1tf`, `tsx` control, unless the respective option(s) have been
 specified earlier on the command line.
 
 Alternatively, a slightly more restricted `spec-ctrl=no-xen` can be used to
@@ -1924,7 +1924,7 @@ pages) must also be specified via the tbuf\_size parameter.
     = <bool>
 
     Applicability: x86
-    Default: true
+    Default: false on parts vulnerable to TAA, true otherwise
 
 Controls for the use of Transactional Synchronization eXtensions.
 
@@ -1934,6 +1934,19 @@ a control has been introduced which allows TSX to be turned off.
 On systems with the ability to turn TSX off, this boolean offers system wide
 control of whether TSX is enabled or disabled.
 
+On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
+logic applies:
+
+ * An explicit `tsx=` choice is honoured, even if it is `true` and would
+   result in a vulnerable system.
+
+ * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
+   mitigated by disabling TSX, as this is the lowest overhead option.
+
+ * If the use of TSX is important, the more expensive TAA mitigations can be
+   opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
+   active by default.
+
 ### ucode
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index e25dadfa89..0f30362111 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -136,6 +136,9 @@ static int __init parse_spec_ctrl(const char *s)
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
+            if ( opt_tsx == -1 )
+                opt_tsx = -3;
+
         disable_common:
             opt_rsb_pv = false;
             opt_rsb_hvm = false;
@@ -346,7 +349,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
     printk("Speculative mitigation facilities:\n");
 
     /* Hardware features which pertain to speculative mitigations. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB)) ? " IBRS/IBPB" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_L1D_FLUSH)) ? " L1D_FLUSH" : "",
@@ -358,7 +361,9 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
            (caps & ARCH_CAPS_SKIP_L1DFL)            ? " SKIP_L1DFL": "",
            (caps & ARCH_CAPS_SSB_NO)                ? " SSB_NO"    : "",
-           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "");
+           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "",
+           (caps & ARCH_CAPS_TSX_CTRL)              ? " TSX_CTRL"  : "",
+           (caps & ARCH_CAPS_TAA_NO)                ? " TAA_NO"    : "");
 
     /* Compiled-in support which pertains to mitigations. */
     if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) || IS_ENABLED(CONFIG_SHADOW_PAGING) )
@@ -372,7 +377,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s, Other:%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -381,6 +386,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
            !boot_cpu_has(X86_FEATURE_SSBD)           ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
+           !(caps & ARCH_CAPS_TSX_CTRL)              ? "" :
+           (opt_tsx & 1)                             ? " TSX+" : " TSX-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
            opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "");
@@ -891,6 +898,7 @@ void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool use_spec_ctrl = false, ibrs = false, hw_smt_enabled;
+    bool cpu_has_bug_taa;
     uint64_t caps = 0;
 
     if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
@@ -1120,6 +1128,53 @@ void __init init_speculation_mitigations(void)
             "enabled.  Mitigations will not be fully effective.  Please\n"
             "choose an explicit smt=<bool> setting.  See XSA-297.\n");
 
+    /*
+     * Vulnerability to TAA is a little complicated to quantify.
+     *
+     * In the pipeline, it is just another way to get speculative access to
+     * stale load port, store buffer or fill buffer data, and therefore can be
+     * considered a superset of MDS (on TSX-capable parts).  On parts which
+     * predate MDS_NO, the existing VERW flushing will mitigate this
+     * sidechannel as well.
+     *
+     * On parts which contain MDS_NO, the lack of VERW flushing means that an
+     * attacker can still use TSX to target microarchitectural buffers to leak
+     * secrets.  Therefore, we consider TAA to be the set of TSX-capable parts
+     * which have MDS_NO but lack TAA_NO.
+     *
+     * Note: cpu_has_rtm (== hle) could already be hidden by `tsx=0` on the
+     *       cmdline.  MSR_TSX_CTRL will only appear on TSX-capable parts, so
+     *       we check both to spot TSX in a microcode/cmdline independent way.
+     */
+    cpu_has_bug_taa =
+        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
+        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+
+    /*
+     * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
+     * the MDS mitigation of disabling HT and using VERW flushing.
+     *
+     * On CPUs which advertise MDS_NO, VERW has no flushing side effect until
+     * the TSX_CTRL microcode is loaded, despite the MD_CLEAR CPUID bit being
+     * advertised, and there isn't a MD_CLEAR_2 flag to use...
+     *
+     * If we're on affected hardware, able to do something about it (which
+     * implies that VERW now works), no explicit TSX choice and traditional
+     * MDS mitigations (no-SMT, VERW) not obviously in use (someone might
+     * plausibly value TSX higher than Hyperthreading...), disable TSX to
+     * mitigate TAA.
+     */
+    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+         ((hw_smt_enabled && opt_smt) ||
+          !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
+    {
+        setup_clear_cpu_cap(X86_FEATURE_HLE);
+        setup_clear_cpu_cap(X86_FEATURE_RTM);
+
+        opt_tsx = 0;
+        tsx_init();
+    }
+
     print_details(thunk, caps);
 
     /*
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index a8ec2ccc69..2d202a0d4e 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -5,7 +5,8 @@
  * Valid values:
  *   1 => Explicit tsx=1
  *   0 => Explicit tsx=0
- *  -1 => Default, implicit tsx=1
+ *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
+ *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
  *
  * This is arranged such that the bottom bit encodes whether TSX is actually
  * disabled, while identifying various explicit (>=0) and implicit (<0)
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index c96c4f85c9..5ef80735b2 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -56,6 +56,7 @@
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
 #define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
+#define ARCH_CAPS_TAA_NO		(_AC(1, ULL) << 8)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)

["xsa305/xsa305-4.11-1.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/tsx: Introduce tsx= to use MSR_TSX_CTRL when available

To protect against the TSX Async Abort speculative vulnerability, Intel have
released new microcode for affected parts which introduce the MSR_TSX_CTRL
control, which allows TSX to be turned off.  This will be architectural on
future parts.

Introduce tsx= to provide a global on/off for TSX, including its enumeration
via CPUID.  Provide stub virtualisation of this MSR, as it is not exposed to
guests at the moment.

VMs may have booted before microcode is loaded, or before hosts have rebooted,
and they still want to migrate freely.  A VM which booted seeing TSX can
migrate safely to hosts with TSX disabled - TSX will start unconditionally
aborting, but still behave in a manner compatible with the ABI.

The guest-visible behaviour is equivalent to late loading the microcode and
setting the RTM_DISABLE bit in the course of live patching.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 684671cb7b..b86d26399a 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1948,6 +1948,20 @@ pages) must also be specified via the tbuf\_size parameter.
 ### tsc (x86)
 > `= unstable | skewed | stable:socket`
 
+### tsx
+    = <bool>
+
+    Applicability: x86
+    Default: true
+
+Controls for the use of Transactional Synchronization eXtensions.
+
+On Intel parts released in Q3 2019 (with updated microcode), and future parts,
+a control has been introduced which allows TSX to be turned off.
+
+On systems with the ability to turn TSX off, this boolean offers system wide
+control of whether TSX is enabled or disabled.
+
 ### ucode (x86)
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index da1e4827f4..4c82d9f710 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -65,6 +65,7 @@ obj-y += sysctl.o
 obj-y += time.o
 obj-y += trace.o
 obj-y += traps.o
+obj-y += tsx.o
 obj-y += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 5e11970701..04aefa555d 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -622,6 +622,20 @@ void recalculate_cpuid_policy(struct domain *d)
     if ( cpu_has_itsc && (d->disable_migrate || d->arch.vtsc) )
         __set_bit(X86_FEATURE_ITSC, max_fs);
 
+    /*
+     * On hardware with MSR_TSX_CTRL, the admin may have elected to disable
+     * TSX and hide the feature bits.  Migrating-in VMs may have been booted
+     * pre-mitigation when the TSX features were visible.
+     *
+     * This situation is compatible (albeit with a perf hit to any TSX code in
+     * the guest), so allow the feature bits to remain set.
+     */
+    if ( cpu_has_tsx_ctrl )
+    {
+        __set_bit(X86_FEATURE_HLE, max_fs);
+        __set_bit(X86_FEATURE_RTM, max_fs);
+    }
+
     /* Clamp the toolstacks choices to reality. */
     for ( i = 0; i < ARRAY_SIZE(fs); i++ )
         fs[i] &= max_fs[i];
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index ebc0665615..35d99a98a1 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -153,6 +153,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
@@ -233,6 +234,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     case MSR_ARCH_CAPABILITIES:
         /* Read-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 657160549f..dc13ad6c36 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1551,6 +1551,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     early_microcode_init();
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     identify_cpu(&boot_cpu_data);
 
     set_in_cr4(X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index fd52a10cf9..bdc118d88b 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -376,6 +376,8 @@ void start_secondary(void *unused)
     if ( boot_cpu_has(X86_FEATURE_IBRSB) )
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     if ( xen_guest )
         hypervisor_ap_setup();
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
new file mode 100644
index 0000000000..a8ec2ccc69
--- /dev/null
+++ b/xen/arch/x86/tsx.c
@@ -0,0 +1,74 @@
+#include <xen/init.h>
+#include <asm/msr.h>
+
+/*
+ * Valid values:
+ *   1 => Explicit tsx=1
+ *   0 => Explicit tsx=0
+ *  -1 => Default, implicit tsx=1
+ *
+ * This is arranged such that the bottom bit encodes whether TSX is actually
+ * disabled, while identifying various explicit (>=0) and implicit (<0)
+ * conditions.
+ */
+int8_t __read_mostly opt_tsx = -1;
+int8_t __read_mostly cpu_has_tsx_ctrl = -1;
+
+static int __init parse_tsx(const char *s)
+{
+    int rc = 0, val = parse_bool(s, NULL);
+
+    if ( val >= 0 )
+        opt_tsx = val;
+    else
+        rc = -EINVAL;
+
+    return rc;
+}
+custom_param("tsx", parse_tsx);
+
+void tsx_init(void)
+{
+    /*
+     * This function is first called between microcode being loaded, and CPUID
+     * being scanned generally.  Calculate from raw data whether MSR_TSX_CTRL
+     * is available.
+     */
+    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    {
+        uint64_t caps = 0;
+
+        if ( boot_cpu_data.cpuid_level >= 7 &&
+             (cpuid_count_edx(7, 0) & cpufeat_mask(X86_FEATURE_ARCH_CAPS)) )
+            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+
+        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
+    }
+
+    if ( cpu_has_tsx_ctrl )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_TSX_CTRL, val);
+
+        val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        if ( !(opt_tsx & 1) )
+            val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
+
+        wrmsrl(MSR_TSX_CTRL, val);
+    }
+    else if ( opt_tsx >= 0 )
+        printk_once(XENLOG_WARNING
+                    "MSR_TSX_CTRL not available - Ignoring tsx= setting\n");
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 89ae3e03f1..5ee7a37c12 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -55,6 +55,7 @@
 #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
+#define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
@@ -62,6 +63,10 @@
 #define MSR_TSX_FORCE_ABORT             0x0000010f
 #define TSX_FORCE_ABORT_RTM             (_AC(1, ULL) <<  0)
 
+#define MSR_TSX_CTRL                    0x00000122
+#define TSX_CTRL_RTM_DISABLE            (_AC(1, ULL) <<  0)
+#define TSX_CTRL_CPUID_CLEAR            (_AC(1, ULL) <<  1)
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 20d1ecb332..66224f23b9 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -258,6 +258,16 @@ static always_inline unsigned int cpuid_count_ebx(
     return ebx;
 }
 
+static always_inline unsigned int cpuid_count_edx(
+    unsigned int leaf, unsigned int subleaf)
+{
+    unsigned int edx, tmp;
+
+    cpuid_count(leaf, subleaf, &tmp, &tmp, &tmp, &edx);
+
+    return edx;
+}
+
 static always_inline void cpuid_count_leaf(uint32_t leaf, uint32_t subleaf,
                                            struct cpuid_leaf *data)
 {
@@ -610,6 +620,9 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
+extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+void tsx_init(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_X86_PROCESSOR_H */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 750f809968..be223a6950 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -116,6 +116,16 @@ extern int printk_ratelimit(void);
 #define gprintk(lvl, fmt, args...) \
     printk(XENLOG_GUEST lvl "%pv " fmt, current, ## args)
 
+#define printk_once(fmt, args...)               \
+({                                              \
+    static bool __read_mostly once_;            \
+    if ( unlikely(!once_) )                     \
+    {                                           \
+        once_ = true;                           \
+        printk(fmt, ## args);                   \
+    }                                           \
+})
+
 #ifdef NDEBUG
 
 static inline void

["xsa305/xsa305-4.11-2.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel

See patch documentation and comments.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index b86d26399a..31635a473a 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -1841,7 +1841,7 @@ extreme care.**
 An overall boolean value, `spec-ctrl=no`, can be specified to turn off all
 mitigations, including pieces of infrastructure used to virtualise certain
 mitigation features for guests.  This also includes settings which `xpti`,
-`smt`, `pv-l1tf` control, unless the respective option(s) have been
+`smt`, `pv-l1tf`, `tsx` control, unless the respective option(s) have been
 specified earlier on the command line.
 
 Alternatively, a slightly more restricted `spec-ctrl=no-xen` can be used to
@@ -1952,7 +1952,7 @@ pages) must also be specified via the tbuf\_size parameter.
     = <bool>
 
     Applicability: x86
-    Default: true
+    Default: false on parts vulnerable to TAA, true otherwise
 
 Controls for the use of Transactional Synchronization eXtensions.
 
@@ -1962,6 +1962,19 @@ a control has been introduced which allows TSX to be turned off.
 On systems with the ability to turn TSX off, this boolean offers system wide
 control of whether TSX is enabled or disabled.
 
+On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
+logic applies:
+
+ * An explicit `tsx=` choice is honoured, even if it is `true` and would
+   result in a vulnerable system.
+
+ * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
+   mitigated by disabling TSX, as this is the lowest overhead option.
+
+ * If the use of TSX is important, the more expensive TAA mitigations can be
+   opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
+   active by default.
+
 ### ucode (x86)
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 2fe16b423d..ab196b156d 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -152,6 +152,9 @@ static int __init parse_spec_ctrl(const char *s)
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
+            if ( opt_tsx == -1 )
+                opt_tsx = -3;
+
         disable_common:
             opt_rsb_pv = false;
             opt_rsb_hvm = false;
@@ -362,7 +365,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
     printk("Speculative mitigation facilities:\n");
 
     /* Hardware features which pertain to speculative mitigations. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB)) ? " IBRS/IBPB" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_L1D_FLUSH)) ? " L1D_FLUSH" : "",
@@ -374,7 +377,9 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
            (caps & ARCH_CAPS_SKIP_L1DFL)            ? " SKIP_L1DFL": "",
            (caps & ARCH_CAPS_SSB_NO)                ? " SSB_NO"    : "",
-           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "");
+           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "",
+           (caps & ARCH_CAPS_TSX_CTRL)              ? " TSX_CTRL"  : "",
+           (caps & ARCH_CAPS_TAA_NO)                ? " TAA_NO"    : "");
 
     /* Compiled-in support which pertains to mitigations. */
     if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) || IS_ENABLED(CONFIG_SHADOW_PAGING) )
@@ -388,7 +393,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s, Other:%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -397,6 +402,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
            !boot_cpu_has(X86_FEATURE_SSBD)           ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
+           !(caps & ARCH_CAPS_TSX_CTRL)              ? "" :
+           (opt_tsx & 1)                             ? " TSX+" : " TSX-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
            opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "");
@@ -911,6 +918,7 @@ void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool use_spec_ctrl = false, ibrs = false, hw_smt_enabled;
+    bool cpu_has_bug_taa;
     uint64_t caps = 0;
 
     if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
@@ -1140,6 +1148,53 @@ void __init init_speculation_mitigations(void)
             "enabled.  Mitigations will not be fully effective.  Please\n"
             "choose an explicit smt=<bool> setting.  See XSA-297.\n");
 
+    /*
+     * Vulnerability to TAA is a little complicated to quantify.
+     *
+     * In the pipeline, it is just another way to get speculative access to
+     * stale load port, store buffer or fill buffer data, and therefore can be
+     * considered a superset of MDS (on TSX-capable parts).  On parts which
+     * predate MDS_NO, the existing VERW flushing will mitigate this
+     * sidechannel as well.
+     *
+     * On parts which contain MDS_NO, the lack of VERW flushing means that an
+     * attacker can still use TSX to target microarchitectural buffers to leak
+     * secrets.  Therefore, we consider TAA to be the set of TSX-capable parts
+     * which have MDS_NO but lack TAA_NO.
+     *
+     * Note: cpu_has_rtm (== hle) could already be hidden by `tsx=0` on the
+     *       cmdline.  MSR_TSX_CTRL will only appear on TSX-capable parts, so
+     *       we check both to spot TSX in a microcode/cmdline independent way.
+     */
+    cpu_has_bug_taa =
+        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
+        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+
+    /*
+     * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
+     * the MDS mitigation of disabling HT and using VERW flushing.
+     *
+     * On CPUs which advertise MDS_NO, VERW has no flushing side effect until
+     * the TSX_CTRL microcode is loaded, despite the MD_CLEAR CPUID bit being
+     * advertised, and there isn't a MD_CLEAR_2 flag to use...
+     *
+     * If we're on affected hardware, able to do something about it (which
+     * implies that VERW now works), no explicit TSX choice and traditional
+     * MDS mitigations (no-SMT, VERW) not obviously in use (someone might
+     * plausibly value TSX higher than Hyperthreading...), disable TSX to
+     * mitigate TAA.
+     */
+    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+         ((hw_smt_enabled && opt_smt) ||
+          !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
+    {
+        setup_clear_cpu_cap(X86_FEATURE_HLE);
+        setup_clear_cpu_cap(X86_FEATURE_RTM);
+
+        opt_tsx = 0;
+        tsx_init();
+    }
+
     print_details(thunk, caps);
 
     /*
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index a8ec2ccc69..2d202a0d4e 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -5,7 +5,8 @@
  * Valid values:
  *   1 => Explicit tsx=1
  *   0 => Explicit tsx=0
- *  -1 => Default, implicit tsx=1
+ *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
+ *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
  *
  * This is arranged such that the bottom bit encodes whether TSX is actually
  * disabled, while identifying various explicit (>=0) and implicit (<0)
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 5ee7a37c12..1761a01f1f 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -56,6 +56,7 @@
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
 #define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
+#define ARCH_CAPS_TAA_NO		(_AC(1, ULL) << 8)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)

["xsa305/xsa305-4.12-1.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/tsx: Introduce tsx= to use MSR_TSX_CTRL when available

To protect against the TSX Async Abort speculative vulnerability, Intel have
released new microcode for affected parts which introduce the MSR_TSX_CTRL
control, which allows TSX to be turned off.  This will be architectural on
future parts.

Introduce tsx= to provide a global on/off for TSX, including its enumeration
via CPUID.  Provide stub virtualisation of this MSR, as it is not exposed to
guests at the moment.

VMs may have booted before microcode is loaded, or before hosts have rebooted,
and they still want to migrate freely.  A VM which booted seeing TSX can
migrate safely to hosts with TSX disabled - TSX will start unconditionally
aborting, but still behave in a manner compatible with the ABI.

The guest-visible behaviour is equivalent to late loading the microcode and
setting the RTM_DISABLE bit in the course of live patching.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e283017015..b7e1bf8e8b 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2033,6 +2033,20 @@ Xen version.
 ### tsc (x86)
 > `= unstable | skewed | stable:socket`
 
+### tsx
+    = <bool>
+
+    Applicability: x86
+    Default: true
+
+Controls for the use of Transactional Synchronization eXtensions.
+
+On Intel parts released in Q3 2019 (with updated microcode), and future parts,
+a control has been introduced which allows TSX to be turned off.
+
+On systems with the ability to turn TSX off, this boolean offers system wide
+control of whether TSX is enabled or disabled.
+
 ### ucode (x86)
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 8a8d8f060f..9b9a4435fb 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -66,6 +66,7 @@ obj-y += sysctl.o
 obj-y += time.o
 obj-y += trace.o
 obj-y += traps.o
+obj-y += tsx.o
 obj-y += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 57e80694f2..1727497459 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -524,6 +524,20 @@ void recalculate_cpuid_policy(struct domain *d)
     if ( cpu_has_itsc && (d->disable_migrate || d->arch.vtsc) )
         __set_bit(X86_FEATURE_ITSC, max_fs);
 
+    /*
+     * On hardware with MSR_TSX_CTRL, the admin may have elected to disable
+     * TSX and hide the feature bits.  Migrating-in VMs may have been booted
+     * pre-mitigation when the TSX features were visible.
+     *
+     * This situation is compatible (albeit with a perf hit to any TSX code in
+     * the guest), so allow the feature bits to remain set.
+     */
+    if ( cpu_has_tsx_ctrl )
+    {
+        __set_bit(X86_FEATURE_HLE, max_fs);
+        __set_bit(X86_FEATURE_RTM, max_fs);
+    }
+
     /* Clamp the toolstacks choices to reality. */
     for ( i = 0; i < ARRAY_SIZE(fs); i++ )
         fs[i] &= max_fs[i];
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 56de0fe9e1..c2722d7c73 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -132,6 +132,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
     case MSR_FLUSH_CMD:
         /* Write-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
@@ -260,6 +261,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     case MSR_ARCH_CAPABILITIES:
         /* Read-only */
     case MSR_TSX_FORCE_ABORT:
+    case MSR_TSX_CTRL:
         /* Not offered to guests. */
         goto gp_fault;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index cf790f36ef..c1c7c44000 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1594,6 +1594,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     early_microcode_init();
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     identify_cpu(&boot_cpu_data);
 
     set_in_cr4(X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 737a44f055..e21cf0a310 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -376,6 +376,8 @@ void start_secondary(void *unused)
     if ( boot_cpu_has(X86_FEATURE_IBRSB) )
         wrmsrl(MSR_SPEC_CTRL, default_xen_spec_ctrl);
 
+    tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
+
     if ( xen_guest )
         hypervisor_ap_setup();
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
new file mode 100644
index 0000000000..a8ec2ccc69
--- /dev/null
+++ b/xen/arch/x86/tsx.c
@@ -0,0 +1,74 @@
+#include <xen/init.h>
+#include <asm/msr.h>
+
+/*
+ * Valid values:
+ *   1 => Explicit tsx=1
+ *   0 => Explicit tsx=0
+ *  -1 => Default, implicit tsx=1
+ *
+ * This is arranged such that the bottom bit encodes whether TSX is actually
+ * disabled, while identifying various explicit (>=0) and implicit (<0)
+ * conditions.
+ */
+int8_t __read_mostly opt_tsx = -1;
+int8_t __read_mostly cpu_has_tsx_ctrl = -1;
+
+static int __init parse_tsx(const char *s)
+{
+    int rc = 0, val = parse_bool(s, NULL);
+
+    if ( val >= 0 )
+        opt_tsx = val;
+    else
+        rc = -EINVAL;
+
+    return rc;
+}
+custom_param("tsx", parse_tsx);
+
+void tsx_init(void)
+{
+    /*
+     * This function is first called between microcode being loaded, and CPUID
+     * being scanned generally.  Calculate from raw data whether MSR_TSX_CTRL
+     * is available.
+     */
+    if ( unlikely(cpu_has_tsx_ctrl < 0) )
+    {
+        uint64_t caps = 0;
+
+        if ( boot_cpu_data.cpuid_level >= 7 &&
+             (cpuid_count_edx(7, 0) & cpufeat_mask(X86_FEATURE_ARCH_CAPS)) )
+            rdmsrl(MSR_ARCH_CAPABILITIES, caps);
+
+        cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
+    }
+
+    if ( cpu_has_tsx_ctrl )
+    {
+        uint64_t val;
+
+        rdmsrl(MSR_TSX_CTRL, val);
+
+        val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        if ( !(opt_tsx & 1) )
+            val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
+
+        wrmsrl(MSR_TSX_CTRL, val);
+    }
+    else if ( opt_tsx >= 0 )
+        printk_once(XENLOG_WARNING
+                    "MSR_TSX_CTRL not available - Ignoring tsx= setting\n");
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 32746aa8ae..d5f3899f73 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -53,6 +53,7 @@
 #define ARCH_CAPS_SSB_NO		(_AC(1, ULL) << 4)
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
+#define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
@@ -60,6 +61,10 @@
 #define MSR_TSX_FORCE_ABORT             0x0000010f
 #define TSX_FORCE_ABORT_RTM             (_AC(1, ULL) <<  0)
 
+#define MSR_TSX_CTRL                    0x00000122
+#define TSX_CTRL_RTM_DISABLE            (_AC(1, ULL) <<  0)
+#define TSX_CTRL_CPUID_CLEAR            (_AC(1, ULL) <<  1)
+
 /* Intel MSRs. Some also available on other CPUs */
 #define MSR_IA32_PERFCTR0		0x000000c1
 #define MSR_IA32_A_PERFCTR0		0x000004c1
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index d33ac34d29..1b52712180 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -263,6 +263,16 @@ static always_inline unsigned int cpuid_count_ebx(
     return ebx;
 }
 
+static always_inline unsigned int cpuid_count_edx(
+    unsigned int leaf, unsigned int subleaf)
+{
+    unsigned int edx, tmp;
+
+    cpuid_count(leaf, subleaf, &tmp, &tmp, &tmp, &edx);
+
+    return edx;
+}
+
 static inline unsigned long read_cr0(void)
 {
     unsigned long cr0;
@@ -609,6 +619,9 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
+extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+void tsx_init(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_X86_PROCESSOR_H */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 89939f43c8..6529f12dae 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -114,6 +114,16 @@ extern int printk_ratelimit(void);
 #define gprintk(lvl, fmt, args...) \
     printk(XENLOG_GUEST lvl "%pv " fmt, current, ## args)
 
+#define printk_once(fmt, args...)               \
+({                                              \
+    static bool __read_mostly once_;            \
+    if ( unlikely(!once_) )                     \
+    {                                           \
+        once_ = true;                           \
+        printk(fmt, ## args);                   \
+    }                                           \
+})
+
 #ifdef NDEBUG
 
 static inline void

["xsa305/xsa305-4.12-2.patch" (application/octet-stream)]

From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel

See patch documentation and comments.

This is part of XSA-305 / CVE-2019-11135

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index b7e1bf8e8b..74e1e35b88 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1920,7 +1920,7 @@ extreme care.**
 An overall boolean value, `spec-ctrl=no`, can be specified to turn off all
 mitigations, including pieces of infrastructure used to virtualise certain
 mitigation features for guests.  This also includes settings which `xpti`,
-`smt`, `pv-l1tf` control, unless the respective option(s) have been
+`smt`, `pv-l1tf`, `tsx` control, unless the respective option(s) have been
 specified earlier on the command line.
 
 Alternatively, a slightly more restricted `spec-ctrl=no-xen` can be used to
@@ -2037,7 +2037,7 @@ Xen version.
     = <bool>
 
     Applicability: x86
-    Default: true
+    Default: false on parts vulnerable to TAA, true otherwise
 
 Controls for the use of Transactional Synchronization eXtensions.
 
@@ -2047,6 +2047,19 @@ a control has been introduced which allows TSX to be turned off.
 On systems with the ability to turn TSX off, this boolean offers system wide
 control of whether TSX is enabled or disabled.
 
+On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
+logic applies:
+
+ * An explicit `tsx=` choice is honoured, even if it is `true` and would
+   result in a vulnerable system.
+
+ * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
+   mitigated by disabling TSX, as this is the lowest overhead option.
+
+ * If the use of TSX is important, the more expensive TAA mitigations can be
+   opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
+   active by default.
+
 ### ucode (x86)
 > `= [<integer> | scan]`
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index b37d40e643..800139d79c 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -96,6 +96,9 @@ static int __init parse_spec_ctrl(const char *s)
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
+            if ( opt_tsx == -1 )
+                opt_tsx = -3;
+
         disable_common:
             opt_rsb_pv = false;
             opt_rsb_hvm = false;
@@ -306,7 +309,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
     printk("Speculative mitigation facilities:\n");
 
     /* Hardware features which pertain to speculative mitigations. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB)) ? " IBRS/IBPB" : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_STIBP)) ? " STIBP"     : "",
            (_7d0 & cpufeat_mask(X86_FEATURE_L1D_FLUSH)) ? " L1D_FLUSH" : "",
@@ -318,7 +321,9 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (caps & ARCH_CAPS_RSBA)                  ? " RSBA"      : "",
            (caps & ARCH_CAPS_SKIP_L1DFL)            ? " SKIP_L1DFL": "",
            (caps & ARCH_CAPS_SSB_NO)                ? " SSB_NO"    : "",
-           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "");
+           (caps & ARCH_CAPS_MDS_NO)                ? " MDS_NO"    : "",
+           (caps & ARCH_CAPS_TSX_CTRL)              ? " TSX_CTRL"  : "",
+           (caps & ARCH_CAPS_TAA_NO)                ? " TAA_NO"    : "");
 
     /* Compiled-in support which pertains to mitigations. */
     if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) || IS_ENABLED(CONFIG_SHADOW_PAGING) )
@@ -332,7 +337,7 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
                "\n");
 
     /* Settings for Xen's protection, irrespective of guests. */
-    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s, Other:%s%s%s\n",
+    printk("  Xen settings: BTI-Thunk %s, SPEC_CTRL: %s%s%s, Other:%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
@@ -341,6 +346,8 @@ static void __init print_details(enum ind_thunk thunk, uint64_t caps)
            (default_xen_spec_ctrl & SPEC_CTRL_IBRS)  ? "IBRS+" :  "IBRS-",
            !boot_cpu_has(X86_FEATURE_SSBD)           ? "" :
            (default_xen_spec_ctrl & SPEC_CTRL_SSBD)  ? " SSBD+" : " SSBD-",
+           !(caps & ARCH_CAPS_TSX_CTRL)              ? "" :
+           (opt_tsx & 1)                             ? " TSX+" : " TSX-",
            opt_ibpb                                  ? " IBPB"  : "",
            opt_l1d_flush                             ? " L1D_FLUSH" : "",
            opt_md_clear_pv || opt_md_clear_hvm       ? " VERW"  : "");
@@ -862,6 +869,7 @@ void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
     bool use_spec_ctrl = false, ibrs = false, hw_smt_enabled;
+    bool cpu_has_bug_taa;
     uint64_t caps = 0;
 
     if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
@@ -1086,6 +1094,53 @@ void __init init_speculation_mitigations(void)
             "enabled.  Mitigations will not be fully effective.  Please\n"
             "choose an explicit smt=<bool> setting.  See XSA-297.\n");
 
+    /*
+     * Vulnerability to TAA is a little complicated to quantify.
+     *
+     * In the pipeline, it is just another way to get speculative access to
+     * stale load port, store buffer or fill buffer data, and therefore can be
+     * considered a superset of MDS (on TSX-capable parts).  On parts which
+     * predate MDS_NO, the existing VERW flushing will mitigate this
+     * sidechannel as well.
+     *
+     * On parts which contain MDS_NO, the lack of VERW flushing means that an
+     * attacker can still use TSX to target microarchitectural buffers to leak
+     * secrets.  Therefore, we consider TAA to be the set of TSX-capable parts
+     * which have MDS_NO but lack TAA_NO.
+     *
+     * Note: cpu_has_rtm (== hle) could already be hidden by `tsx=0` on the
+     *       cmdline.  MSR_TSX_CTRL will only appear on TSX-capable parts, so
+     *       we check both to spot TSX in a microcode/cmdline independent way.
+     */
+    cpu_has_bug_taa =
+        (cpu_has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
+        (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
+
+    /*
+     * On TAA-affected hardware, disabling TSX is the preferred mitigation, vs
+     * the MDS mitigation of disabling HT and using VERW flushing.
+     *
+     * On CPUs which advertise MDS_NO, VERW has no flushing side effect until
+     * the TSX_CTRL microcode is loaded, despite the MD_CLEAR CPUID bit being
+     * advertised, and there isn't a MD_CLEAR_2 flag to use...
+     *
+     * If we're on affected hardware, able to do something about it (which
+     * implies that VERW now works), no explicit TSX choice and traditional
+     * MDS mitigations (no-SMT, VERW) not obviously in use (someone might
+     * plausibly value TSX higher than Hyperthreading...), disable TSX to
+     * mitigate TAA.
+     */
+    if ( opt_tsx == -1 && cpu_has_bug_taa && (caps & ARCH_CAPS_TSX_CTRL) &&
+         ((hw_smt_enabled && opt_smt) ||
+          !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
+    {
+        setup_clear_cpu_cap(X86_FEATURE_HLE);
+        setup_clear_cpu_cap(X86_FEATURE_RTM);
+
+        opt_tsx = 0;
+        tsx_init();
+    }
+
     print_details(thunk, caps);
 
     /*
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index a8ec2ccc69..2d202a0d4e 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -5,7 +5,8 @@
  * Valid values:
  *   1 => Explicit tsx=1
  *   0 => Explicit tsx=0
- *  -1 => Default, implicit tsx=1
+ *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
+ *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
  *
  * This is arranged such that the bottom bit encodes whether TSX is actually
  * disabled, while identifying various explicit (>=0) and implicit (<0)
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index d5f3899f73..3971b992d3 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -54,6 +54,7 @@
 #define ARCH_CAPS_MDS_NO		(_AC(1, ULL) << 5)
 #define ARCH_CAPS_IF_PSCHANGE_MC_NO	(_AC(1, ULL) << 6)
 #define ARCH_CAPS_TSX_CTRL		(_AC(1, ULL) << 7)
+#define ARCH_CAPS_TAA_NO		(_AC(1, ULL) << 8)
 
 #define MSR_FLUSH_CMD			0x0000010b
 #define FLUSH_CMD_L1D			(_AC(1, ULL) << 0)
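
For readers who want to follow the spec_ctrl.c and msr-index.h hunks above
without applying the patch, the sketch below is an illustrative, standalone
reconstruction only (it is not part of XSA-305 or of the Xen source).  The
ARCH_CAPS_* bit positions are copied from the msr-index.h hunk; the helper
names and the boolean parameters standing in for cpu_has_rtm, SMT state and
the SC_VERW_IDLE check are simplifications for illustration.

  /*
   * Illustrative sketch only (not part of the patch): reproduces the two
   * decisions made in init_speculation_mitigations() above -- whether a
   * part is treated as vulnerable to TAA, and whether TSX is consequently
   * disabled by default.
   */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define ARCH_CAPS_MDS_NO   (1ULL << 5)
  #define ARCH_CAPS_TSX_CTRL (1ULL << 7)
  #define ARCH_CAPS_TAA_NO   (1ULL << 8)

  /* TSX-capable (RTM, or the TSX_CTRL MSR), MDS_NO set, TAA_NO clear. */
  static bool vulnerable_to_taa(bool has_rtm, uint64_t caps)
  {
      return (has_rtm || (caps & ARCH_CAPS_TSX_CTRL)) &&
             (caps & (ARCH_CAPS_MDS_NO | ARCH_CAPS_TAA_NO)) == ARCH_CAPS_MDS_NO;
  }

  /*
   * With no explicit tsx= or spec-ctrl= override, TSX is disabled by
   * default when the part is vulnerable, the TSX_CTRL MSR is available to
   * act on, and the MDS-style mitigations (no-SMT plus VERW flushing) are
   * not already in force.
   */
  static bool tsx_off_by_default(bool has_rtm, uint64_t caps,
                                 bool smt_active, bool verw_in_use)
  {
      return vulnerable_to_taa(has_rtm, caps) &&
             (caps & ARCH_CAPS_TSX_CTRL) &&
             (smt_active || !verw_in_use);
  }

  int main(void)
  {
      /* Example: a TSX-capable part reporting MDS_NO but not TAA_NO. */
      uint64_t caps = ARCH_CAPS_MDS_NO | ARCH_CAPS_TSX_CTRL;

      printf("TAA vulnerable:          %d\n", vulnerable_to_taa(true, caps));
      printf("TSX disabled by default: %d\n",
             tsx_off_by_default(true, caps, /*smt_active=*/true,
                                /*verw_in_use=*/false));
      return 0;
  }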
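
As a usage note, after rebooting into a patched Xen the print_details()
changes above cause the `TSX_CTRL` and `TAA_NO` bits (when enumerated by the
hardware/microcode) to be listed in the `Hardware features:` line of
`xl dmesg`, and whether TSX ended up enabled is reported as `TSX+` or `TSX-`
in the `Xen settings:` line, so the resulting mitigation state can be
checked directly from the boot log.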

