cputlb: Move cpu->pending_tlb_flush to env->tlb_c.pending_flush

Protect it with the tlb_lock instead of using atomics.
The move puts it in or near the same cacheline as the lock;
using the lock means we don't need a second atomic operation
in order to perform the update, which in turn makes it cheap
to also update pending_flush in tlb_flush_by_mmuidx_async_work.

Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Richard Henderson 2018-10-20 13:54:46 -07:00
parent 8ab102667e
commit 60a2ad7d86
3 changed files with 30 additions and 19 deletions

@@ -429,12 +429,6 @@ struct CPUState {
     struct hax_vcpu_state *hax_vcpu;
 
-    /* The pending_tlb_flush flag is set and cleared atomically to
-     * avoid potential races. The aim of the flag is to avoid
-     * unnecessary flushes.
-     */
-    uint16_t pending_tlb_flush;
-
     int hvf_fd;
 
     /* track IOMMUs whose translations we've cached in the TCG TLB */