accel/tcg: Add TCGCPUOps.tlb_fill_align

Add a new callback to handle softmmu paging.  Return the page
details directly, instead of passing them indirectly to
tlb_set_page.  Handle alignment simultaneously with paging so
that faults are handled with target-specific priority.

Route all calls of the two hooks through a tlb_fill_align
function local to cputlb.c.

As yet no targets implement the new hook.
As yet cputlb.c does not use the new alignment check.

Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Author: Richard Henderson
Date:   2024-10-07 16:34:06 -07:00
Commit: f168808d7d (parent: e5b063e81f)
4 changed files with 67 additions and 25 deletions
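The routing described in the commit message — a cputlb.c-local `tlb_fill_align` function that prefers the new hook and falls back to the legacy `tlb_fill` — can be sketched as below. All types and signatures here are trimmed stand-ins for illustration, not QEMU's real declarations.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t vaddr;
typedef struct { vaddr phys_addr; } CPUTLBEntryFull;
typedef struct CPUState CPUState;

/* Stand-in for TCGCPUOps, reduced to the two hooks involved. */
typedef struct {
    bool (*tlb_fill_align)(CPUState *cpu, CPUTLBEntryFull *out, vaddr addr);
    bool (*tlb_fill)(CPUState *cpu, vaddr addr);
} TCGCPUOps;

struct CPUState {
    const TCGCPUOps *tcg_ops;
};

/* Local wrapper in the spirit of cputlb.c: use the new hook when the
 * target provides it, otherwise fall back to the legacy tlb_fill. */
static bool tlb_fill_align(CPUState *cpu, CPUTLBEntryFull *out, vaddr addr)
{
    const TCGCPUOps *ops = cpu->tcg_ops;

    if (ops->tlb_fill_align) {
        return ops->tlb_fill_align(cpu, out, addr);
    }
    return ops->tlb_fill(cpu, addr);
}

/* Two toy targets: one implements the new hook, one only the old. */
static bool new_hook(CPUState *cpu, CPUTLBEntryFull *out, vaddr addr)
{
    (void)cpu;
    out->phys_addr = addr;  /* identity mapping, purely illustrative */
    return true;
}

static bool old_hook(CPUState *cpu, vaddr addr)
{
    (void)cpu;
    return addr != 0;       /* fault only on a null access */
}
```

With this shape, targets can be converted one at a time: until a target sets `tlb_fill_align`, every miss still reaches its existing `tlb_fill`.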


@@ -13,6 +13,7 @@
#include "exec/breakpoint.h"
#include "exec/hwaddr.h"
#include "exec/memattrs.h"
#include "exec/memop.h"
#include "exec/mmu-access-type.h"
#include "exec/vaddr.h"
@@ -131,6 +132,31 @@ struct TCGCPUOps {
* same function signature.
*/
bool (*cpu_exec_halt)(CPUState *cpu);
/**
* @tlb_fill_align: Handle a softmmu tlb miss
* @cpu: cpu context
* @out: output page properties
* @addr: virtual address
* @access_type: read, write or execute
* @mmu_idx: mmu context
* @memop: memory operation for the access
* @size: memory access size, or 0 for whole page
* @probe: test only, no fault
* @ra: host return address for exception unwind
*
* If the access is valid, fill in @out and return true.
* Otherwise if probe is true, return false.
* Otherwise raise an exception and do not return.
*
* The alignment check for the access is deferred to this hook,
* so that the target can determine the priority of any alignment
* fault with respect to other potential faults from paging.
* Zero may be passed for @memop to skip any alignment check
* for non-memory-access operations such as probing.
*/
bool (*tlb_fill_align)(CPUState *cpu, CPUTLBEntryFull *out, vaddr addr,
MMUAccessType access_type, int mmu_idx,
MemOp memop, int size, bool probe, uintptr_t ra);
/**
* @tlb_fill: Handle a softmmu tlb miss
*
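The key behavioral point of the new hook's contract is that the alignment check happens together with the page walk, so the target chooses which fault wins. A minimal sketch of that control flow, using simplified stand-in types rather than QEMU's real `CPUTLBEntryFull` and fault machinery, and with alignment faults taking priority over page faults (as on most ISAs):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t vaddr;

/* Simplified stand-in; field layout is illustrative only. */
typedef struct {
    vaddr phys_addr;
    int prot;               /* page protection bits */
    unsigned lg_page_size;
} CPUTLBEntryFull;

enum { FAULT_NONE, FAULT_ALIGN, FAULT_PAGE };

/* Toy page table: one valid read/write page at 0x1000. */
static bool lookup_page(vaddr addr, CPUTLBEntryFull *out)
{
    if ((addr & ~0xfffull) == 0x1000) {
        out->phys_addr = addr & ~0xfffull;
        out->prot = 0x3;    /* read | write */
        out->lg_page_size = 12;
        return true;
    }
    return false;
}

/* Sketch of a target's tlb_fill_align body: because the hook sees both
 * the alignment requirement and the page walk, it can order the faults.
 * Here a misaligned address faults even when the page is also unmapped.
 * align_bits plays the role of the alignment encoded in @memop; passing
 * 0 skips the check, matching "zero may be passed for @memop". */
static int toy_tlb_fill_align(CPUTLBEntryFull *out, vaddr addr,
                              unsigned align_bits)
{
    uint64_t a_mask = (1ull << align_bits) - 1;

    if (addr & a_mask) {
        return FAULT_ALIGN;     /* alignment fault wins */
    }
    if (!lookup_page(addr, out)) {
        return FAULT_PAGE;
    }
    return FAULT_NONE;          /* success: *out is filled in */
}
```

A target where page faults outrank alignment faults would simply swap the two checks — the point of moving the check into the hook is that this ordering becomes a per-target decision instead of a fixed one in cputlb.c.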