List:       kvm
Subject:    Re: [PATCH v3 06/28] KVM: x86/mmu: Require mmu_lock be held for write in unyielding root iter
From:       Ben Gardon <bgardon@google.com>
Date:       2022-02-28 23:26:04
Message-ID: CANgfPd-Y6Z=icq4ajhesu23AOZPNRVq+KNQ-2kyFHyVA6sx5Xg@mail.gmail.com

On Fri, Feb 25, 2022 at 4:16 PM Sean Christopherson <seanjc@google.com> wrote:
>
> Assert that mmu_lock is held for write by users of the yield-unfriendly
> TDP iterator.  The nature of a shared walk means that the caller needs to
> play nice with other tasks modifying the page tables, which is more or
> less the same thing as playing nice with yielding.  Theoretically, KVM
> could gain a flow where it could legitimately take mmu_lock for read in
> a non-preemptible context, but that's highly unlikely and any such case
> should be viewed with a fair amount of scrutiny.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

Reviewed-by: Ben Gardon <bgardon@google.com>

> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 5994db5d5226..189f21e71c36 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -29,13 +29,16 @@ bool kvm_mmu_init_tdp_mmu(struct kvm *kvm)
>         return true;
>  }
>
> -static __always_inline void kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
> +/* Arbitrarily returns true so that this may be used in if statements. */
> +static __always_inline bool kvm_lockdep_assert_mmu_lock_held(struct kvm *kvm,
>                                                              bool shared)
>  {
>         if (shared)
>                 lockdep_assert_held_read(&kvm->mmu_lock);
>         else
>                 lockdep_assert_held_write(&kvm->mmu_lock);
> +
> +       return true;
>  }
>
>  void kvm_mmu_uninit_tdp_mmu(struct kvm *kvm)
> @@ -187,11 +190,17 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm,
>  #define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared)         \
>         __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, ALL_ROOTS)
>
> -#define for_each_tdp_mmu_root(_kvm, _root, _as_id)                             \
> -       list_for_each_entry_rcu(_root, &_kvm->arch.tdp_mmu_roots, link,         \
> -                               lockdep_is_held_type(&kvm->mmu_lock, 0) ||      \
> -                               lockdep_is_held(&kvm->arch.tdp_mmu_pages_lock)) \
> -               if (kvm_mmu_page_as_id(_root) != _as_id) {              \
> +/*
> + * Iterate over all TDP MMU roots.  Requires that mmu_lock be held for write,
> + * the implication being that any flow that holds mmu_lock for read is
> + * inherently yield-friendly and should use the yield-safe variant above.
> + * Holding mmu_lock for write obviates the need for RCU protection as the list
> + * is guaranteed to be stable.
> + */
> +#define for_each_tdp_mmu_root(_kvm, _root, _as_id)                     \
> +       list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)     \
> +               if (kvm_lockdep_assert_mmu_lock_held(_kvm, false) &&    \
> +                   kvm_mmu_page_as_id(_root) != _as_id) {              \
>                 } else
>
>  static struct kvm_mmu_page *tdp_mmu_alloc_sp(struct kvm_vcpu *vcpu)
> --
> 2.35.1.574.g5d30c73bfb-goog
>
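
As an aside for anyone reading along: the returns-true trick is what lets
the lockdep assertion be folded into the iterator macro's if condition
without changing the loop's control flow. Below is a minimal, self-contained
sketch of the same pattern using mock names and a plain assert() in place of
lockdep; it is not KVM code, just an illustration of the macro shape.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct mock_kvm {
            bool mmu_lock_held_for_write;   /* stand-in for lockdep state */
            int root_as_id[4];              /* address-space ID per root  */
    };

    /* Arbitrarily returns true so it can sit in an if condition. */
    static inline bool mock_assert_lock_held(struct mock_kvm *kvm)
    {
            assert(kvm->mmu_lock_held_for_write);
            return true;
    }

    /*
     * Same shape as for_each_tdp_mmu_root(): the assertion runs on every
     * iteration; roots from other address spaces hit the empty if body,
     * and matching roots reach the caller's loop body through the else.
     */
    #define for_each_mock_root(_kvm, _i, _as_id)                        \
            for ((_i) = 0; (_i) < 4; (_i)++)                            \
                    if (mock_assert_lock_held(_kvm) &&                  \
                        (_kvm)->root_as_id[(_i)] != (_as_id)) {         \
                    } else

    int main(void)
    {
            struct mock_kvm kvm = {
                    .mmu_lock_held_for_write = true,
                    .root_as_id = { 0, 1, 0, 1 },
            };
            int i;

            for_each_mock_root(&kvm, i, 0)
                    printf("visiting root %d\n", i);   /* prints 0 and 2 */

            return 0;
    }

Because the helper always returns true, it never affects which roots the
loop body visits; it only guarantees the assertion fires once per
iteration, which is exactly what the patch relies on.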