cputlb: Move cpu->pending_tlb_flush to env->tlb_c.pending_flush

Protect it with the tlb_lock instead of using atomics.
The move puts it in or near the same cacheline as the lock;
using the lock means we don't need a second atomic operation
to perform the update, which makes it cheap to also update
pending_flush in tlb_flush_by_mmuidx_async_work.

Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   2018-10-20 13:54:46 -07:00
Commit: 60a2ad7d86
Parent: 8ab102667e
3 changed files with 30 additions and 19 deletions
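
To make the locking argument concrete, here is a self-contained model of
the pattern (not QEMU code: the struct, helper name, and pthread spinlock
are stand-ins). Once the flag sits next to the lock that already guards
the TLB tables, setting a pending-flush bit is a plain read-modify-write
inside a critical section taken anyway, rather than a separate
atomic_fetch_or on a field elsewhere in CPUState.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for CPUTLBCommon: the pending-flush bitmask lives in
     * the same cacheline as the lock that serializes the TLB tables. */
    typedef struct {
        pthread_spinlock_t lock;  /* guards the tables and pending_flush */
        uint16_t pending_flush;   /* bit N set: flush outstanding for mmu_idx N */
    } TLBCommonModel;

    /* Queue a cross-cpu flush for the given mmu_idx bitmap.  Returns 1
     * if new async work must be scheduled, 0 if an equivalent flush is
     * already outstanding and this request can be discarded. */
    static int queue_flush(TLBCommonModel *c, uint16_t idx_bitmap)
    {
        pthread_spin_lock(&c->lock);
        uint16_t already = c->pending_flush & idx_bitmap;
        c->pending_flush |= idx_bitmap;  /* plain store; no second atomic */
        pthread_spin_unlock(&c->lock);
        return already != idx_bitmap;
    }

    int main(void)
    {
        TLBCommonModel c = { .pending_flush = 0 };
        pthread_spin_init(&c.lock, PTHREAD_PROCESS_PRIVATE);
        printf("%d\n", queue_flush(&c, 1 << 3));  /* 1: schedule work */
        printf("%d\n", queue_flush(&c, 1 << 3));  /* 0: coalesce duplicate */
        pthread_spin_destroy(&c.lock);
        return 0;
    }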


@@ -145,8 +145,14 @@ typedef struct CPUIOTLBEntry {
  * Data elements that are shared between all MMU modes.
  */
 typedef struct CPUTLBCommon {
-    /* lock serializes updates to tlb_table and tlb_v_table */
+    /* Serialize updates to tlb_table and tlb_v_table, and others as noted. */
     QemuSpin lock;
+    /*
+     * Within pending_flush, for each bit N, there exists an outstanding
+     * cross-cpu flush for mmu_idx N.  Further cross-cpu flushes to that
+     * mmu_idx may be discarded.  Protected by tlb_c.lock.
+     */
+    uint16_t pending_flush;
 } CPUTLBCommon;
 
 /*
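
The new comment also implies a consumer side: the flush worker clears its
bits under the same lock before it clears the TLB entries, so a later
producer sees an accurate bitmap. A minimal sketch of that direction,
reusing TLBCommonModel from the sketch above (the function name and body
are assumptions for illustration, not the commit's code; the commit
message names tlb_flush_by_mmuidx_async_work as the place this happens):

    /* Run on the target CPU: acknowledge the pending bits and flush
     * inside one critical section, so producers never observe a bit
     * still set for a flush that has already completed. */
    static void flush_worker(TLBCommonModel *c, uint16_t idx_bitmap)
    {
        pthread_spin_lock(&c->lock);
        c->pending_flush &= ~idx_bitmap;
        /* ... clearing of the TLB entries for each set bit N would go
         * here, still under c->lock, as the struct comment requires ... */
        pthread_spin_unlock(&c->lock);
    }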