
List:       android-virt
Subject:    Re: [kvm-unit-tests RFC PATCH 14/16] lib: arm64: Add support for disabling and re-enabling VHE
From:       Alexandru Elisei <alexandru.elisei@arm.com>
Date:       2019-08-29 8:36:57
Message-ID: 0b6c5a95-b8ea-f034-0826-ceb67d0a0ff4@arm.com

On 8/28/19 3:19 PM, Mark Rutland wrote:
> On Wed, Aug 28, 2019 at 02:38:29PM +0100, Alexandru Elisei wrote:
>> Add a function to disable VHE and another one to re-enable VHE. Both
>> functions work under the assumption that the CPU had VHE mode enabled at
>> boot.
>>
>> Minimal support to run with VHE disabled has been added to the TLB invalidate
>> functions and to the exception handling code.
>>
>> Since we're touching the assembly enable/disable MMU code, let's take this
>> opportunity to replace a magic number with the proper define.
>>
>> Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
>> ---
>>  lib/arm/asm/processor.h       |   8 ++
>>  lib/arm64/asm/mmu.h           |  11 ++-
>>  lib/arm64/asm/pgtable-hwdef.h |  53 +++++++++---
>>  lib/arm64/asm/processor.h     |  44 +++++++++-
>>  lib/arm/processor.c           |  11 +++
>>  lib/arm/setup.c               |   2 +
>>  lib/arm64/processor.c         |  67 ++++++++++++++-
>>  arm/cstart64.S                | 186 +++++++++++++++++++++++++++++++++++++++++-
>>  8 files changed, 364 insertions(+), 18 deletions(-)
>> +extern void asm_disable_vhe(void);
>> +void disable_vhe(void)
>> +{
>> +	u64 sp, sp_phys, sp_base, sp_base_phys;
>> +
>> +	assert(current_level() == CurrentEL_EL2 && vhe_enabled());
>> +
>> +	sp = current_stack_pointer;
>> +	sp_phys = __virt_to_phys(sp);
>> +	sp_base = sp & THREAD_MASK;
>> +	sp_base_phys = sp_phys & THREAD_MASK;
>> +
>> +	/*
>> +	 * We will disable, then enable the MMU, make sure the exception
>> +	 * handling code works during the small window of time when the MMU is
>> +	 * off.
>> +	 */
>> +	dcache_clean_inval_range(sp_base, sp_base + THREAD_SIZE);
>> +	dcache_inval_range(sp_base_phys, sp_base_phys + THREAD_SIZE);
>> +	asm volatile(	"mov	sp, %0\n" : :"r" (sp_phys));
>> +
>> +	asm_disable_vhe();
>> +
>> +	dcache_clean_inval_range(sp_base_phys, sp_base_phys + THREAD_SIZE);
>> +	dcache_inval_range(sp_base, sp_base + THREAD_SIZE);
>> +	asm volatile(	"mov	sp, %0\n" : :"r" (sp));
>> +}
> This sequence is not safe. The compiler can spill/reload at any point,
> and the CPU can allocate (clean) lines into the cache while the MMU is
> enabled.
>
> I think you need to move the entire sequence to assembly, and should
> perform any cache maintenance while the MMU is off.

You are correct. I was being extra careful here. The fact is, the stack isn't
used at all while the MMU is off; the only way it would get used is if an
exception were triggered, and that can only happen if we mess up our code. I'll
remove the stack pointer swap and make sure that interrupts are disabled.
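
For illustration, I'm thinking of something along the lines of the sketch
below for the C wrapper (just a sketch: local_irq_disable()/local_irq_enable()
and the exact split of work are assumptions on my part, with all of the cache
maintenance moving into the assembly routine and done while the MMU is off, as
you suggest):

	extern void asm_disable_vhe(void);
	void disable_vhe(void)
	{
		assert(current_level() == CurrentEL_EL2 && vhe_enabled());

		/*
		 * The stack is not touched while the MMU is off, provided
		 * no exception can fire, so the stack pointer swap goes
		 * away; masking interrupts around the whole sequence
		 * guarantees that.
		 */
		local_irq_disable();
		asm_disable_vhe();
		local_irq_enable();
	}

enable_vhe() would get the same treatment.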

>
>> +extern void asm_enable_vhe(void);
>> +void enable_vhe(void)
>> +{
>> +	u64 sp, sp_phys, sp_base, sp_base_phys;
>> +
>> +	assert(current_level() == CurrentEL_EL2 && !vhe_enabled());
>> +
>> +	sp = current_stack_pointer;
>> +	sp_phys = __virt_to_phys(sp);
>> +	sp_base = sp & THREAD_MASK;
>> +	sp_base_phys = sp_phys & THREAD_MASK;
>> +
>> +	dcache_clean_inval_range(sp_base, sp_base + THREAD_SIZE);
>> +	dcache_inval_range(sp_base_phys, sp_base_phys + THREAD_SIZE);
>> +	asm volatile(	"mov	sp, %0\n" : :"r" (sp_phys));
>> +
>> +	asm_enable_vhe();
>> +
>> +	dcache_clean_inval_range(sp_base_phys, sp_base_phys + THREAD_SIZE);
>> +	dcache_inval_range(sp_base, sp_base + THREAD_SIZE);
>> +	asm volatile(	"mov	sp, %0\n" : :"r" (sp));
>> +}
> Likewise.
>
>> diff --git a/arm/cstart64.S b/arm/cstart64.S
>> index d4b20267a7a6..dc9e634e2307 100644
>> --- a/arm/cstart64.S
>> +++ b/arm/cstart64.S
>> @@ -104,6 +104,13 @@ exceptions_init:
>>  
>>  .text
>>  
>> +exceptions_init_nvhe:
>> +	adrp	x0, vector_table_nvhe
>> +	add	x0, x0, :lo12:vector_table_nvhe
>> +	msr	vbar_el2, x0
>> +	isb
>> +	ret
>> +
>>  .globl get_mmu_off
>>  get_mmu_off:
>>  	adrp	x0, auxinfo
>> @@ -204,7 +211,7 @@ asm_mmu_enable:
>>  		     TCR_IRGN_WBWA | TCR_ORGN_WBWA |	\
>>  		     TCR_SHARED
>>  	mrs	x2, id_aa64mmfr0_el1
>> -	bfi	x1, x2, #32, #3
>> +	bfi	x1, x2, #TCR_EL1_IPS_SHIFT, #3
>>  	msr	tcr_el1, x1
>>  
>>  	/* MAIR */
>> @@ -229,6 +236,33 @@ asm_mmu_enable:
>>  
>>  	ret
>>  
>> +asm_mmu_enable_nvhe:
>> +	ic      iallu
>> +	tlbi    alle2is
>> +	dsb     ish
> why is the IC local, but the TLBI broadcast?
>
> If this only needs to be local, a DSB NSH will be sufficient.

That's a good question. This part was copied directly from the existing
asm_mmu_enable function. As far as I can tell, the TLBI and DSB only need to be
local when enabling the MMU; the inner shareable versions of the operations
(followed by an ISB) should only be needed when we modify translation tables
that are shared with other CPUs.
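
For example (a rough, untested sketch based on your suggestion; the exact
instructions are my assumption), the start of asm_mmu_enable_nvhe could
become:

	asm_mmu_enable_nvhe:
		ic	iallu			// local I-cache invalidation
		tlbi	alle2			// local EL2 TLB invalidation
		dsb	nsh			// local completion is enough
		isb

with the rest of the function unchanged.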

>
> Thanks,
> Mark.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm