List:       linux-kernel
Subject:    Re: [PATCH v3 09/19] x86/resctrl: Queue mon_event_read() instead of sending an IPI
From:       Reinette Chatre <reinette.chatre@intel.com>
Date:       2023-03-31 23:25:40
Message-ID: 5e6a2e0a-6f31-c9b0-5eec-346fd072d286@intel.com

Hi James,

On 3/20/2023 10:26 AM, James Morse wrote:
> x86 is blessed with an abundance of monitors, one per RMID, that can be
> read from any CPU in the domain. MPAM's monitors reside in the MMIO MSC,
> and the number implemented is up to the manufacturer. This means that when
> there are fewer monitors than needed, they need to be allocated and freed.
> 
> Worse, the domain may be broken up into slices, and the MMIO accesses
> for each slice may need to be performed from different CPUs.
> 
> These two details mean MPAM's monitor code needs to be able to sleep, and
> to IPI another CPU in the domain to read from a resource that has been sliced.
> 
> mon_event_read() already invokes mon_event_count() via IPI, which means
> this isn't possible. On systems using nohz-full, some CPUs need to be
> interrupted to run kernel work as they otherwise stay in user-space
> running realtime workloads. Interrupting these CPUs should be avoided,
> and scheduling work on them may never complete.
> 
> Change mon_event_read() to pick a housekeeping CPU (one that is not using
> nohz_full), schedule mon_event_count() there and wait. If all the CPUs
> in a domain are using nohz_full, then an IPI is used as the fallback.

It is not clear to me where in this solution an IPI is used as a fallback ...
(see below) 

> +	int cpu;
> +
> +	/* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
> +	lockdep_assert_held(&rdtgroup_mutex);
> +
>  	/*
> -	 * setup the parameters to send to the IPI to read the data.
> +	 * setup the parameters to pass to mon_event_count() to read the data.
>  	 */
>  	rr->rgrp = rdtgrp;
>  	rr->evtid = evtid;
> @@ -537,7 +543,16 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
>  	rr->val = 0;
>  	rr->first = first;
>  
> -	smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
> +	cpu = get_cpu();
> +	if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
> +		mon_event_count(rr);
> +		put_cpu();
> +	} else {
> +		put_cpu();
> +
> +		cpu = cpumask_any_housekeeping(&d->cpu_mask);
> +		smp_call_on_cpu(cpu, mon_event_count, rr, false);
> +	}
>  }
>  

... from what I can tell there is no IPI fallback here. As per the previous
patch I understand that cpumask_any_housekeeping() could still return
a nohz_full CPU, and calling smp_call_on_cpu() on it would not send
an IPI but instead queue the work on it. What did I miss?
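
For reference, the shape I expected for the fallback is roughly the
following (only a sketch on top of this hunk; using tick_nohz_full_cpu()
as the test here is my assumption, it is not something this patch or its
changelog states):

	cpu = cpumask_any_housekeeping(&d->cpu_mask);

	/*
	 * cpumask_any_housekeeping() can still return a nohz_full CPU if
	 * the domain contains no housekeeping CPUs at all. Work queued on
	 * such a CPU may never run, so fall back to an IPI in that case.
	 */
	if (tick_nohz_full_cpu(cpu))
		smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
	else
		smp_call_on_cpu(cpu, mon_event_count, rr, false);

Something along those lines would also keep the existing
smp_call_function_any() behavior for domains made up entirely of
nohz_full CPUs.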

Reinette