mirror of
https://github.com/Motorhead1991/qemu.git
synced 2025-08-05 00:33:55 -06:00
target/arm: Convert the VFP load/store multiple insns to decodetree
Convert the VFP load/store multiple insns to decodetree. This includes tightening up the UNDEF checking for pre-VFPv3 CPUs which only have D0-D15: they now UNDEF for any access to D16-D31, not merely when the smallest register in the transfer list is in D16-D31.

This conversion does not try to share code between the single precision and the double precision versions; this looks a bit duplicative of code, but it leaves the door open for a future refactoring which gets rid of the use of the "F0" registers by inlining the various functions like gen_vfp_ld() and gen_mov_F0_reg() which are hiding "if (dp) { ... } else { ... }" conditionalisation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
This commit is contained in:
parent
79b02a3b52
commit
fa288de272
3 changed files with 183 additions and 94 deletions
@@ -78,3 +78,21 @@ VLDR_VSTR_sp ---- 1101 u:1 .0 l:1 rn:4 .... 1010 imm:8 \
              vd=%vd_sp
 VLDR_VSTR_dp ---- 1101 u:1 .0 l:1 rn:4 .... 1011 imm:8 \
              vd=%vd_dp
+
+# We split the load/store multiple up into two patterns to avoid
+# overlap with other insns in the "Advanced SIMD load/store and 64-bit move"
+# grouping:
+# P=0 U=0 W=0 is 64-bit VMOV
+# P=1 W=0 is VLDR/VSTR
+# P=U W=1 is UNDEF
+# leaving P=0 U=1 W=x and P=1 U=0 W=1 for load/store multiple.
+# These include FSTM/FLDM.
+VLDM_VSTM_sp ---- 1100 1 . w:1 l:1 rn:4 .... 1010 imm:8 \
+             vd=%vd_sp p=0 u=1
+VLDM_VSTM_dp ---- 1100 1 . w:1 l:1 rn:4 .... 1011 imm:8 \
+             vd=%vd_dp p=0 u=1
+
+VLDM_VSTM_sp ---- 1101 0.1 l:1 rn:4 .... 1010 imm:8 \
+             vd=%vd_sp p=1 u=0 w=1
+VLDM_VSTM_dp ---- 1101 0.1 l:1 rn:4 .... 1011 imm:8 \
+             vd=%vd_dp p=1 u=0 w=1
+