mirror of
https://github.com/Motorhead1991/qemu.git
synced 2025-08-09 10:34:58 -06:00
target/i386: reimplement 0x0f 0x38, add AVX
There are several special cases here:

1) extending moves have different widths for the helpers vs. for the memory loads, and the width for memory loads depends on VEX.L too. This is represented by X86_SPECIAL_AVXExtMov.

2) some instructions, such as variable-width shifts, select the vector element size via REX.W.

3) VSIB instructions (VGATHERxPy, VPGATHERxy) are also part of this group, and they have (among other things) two output operands.

4) the macros for 4-operand blends (which are under 0x0f 0x3a) have to be extended to support 2-operand blends. The 2-operand variant actually came a few years earlier, but it is clearer to implement them in the opposite order.

X86_TYPE_WM, introduced earlier for unaligned loads, is reused for helpers that accept a Reg* but have a M argument.

These three-byte opcodes also include AVX new instructions, for which the helpers were originally implemented by Paul Brook <paul@nowt.org>.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This commit is contained in:
parent d4af67a27a
commit 16fc5726a6
6 changed files with 524 additions and 8 deletions
@@ -142,6 +142,12 @@ typedef enum X86InsnSpecial {
     X86_SPECIAL_ZExtOp0,
     X86_SPECIAL_ZExtOp2,
 
+    /*
+     * Register operand 2 is extended to full width, while a memory operand
+     * is doubled in size if VEX.L=1.
+     */
+    X86_SPECIAL_AVXExtMov,
+
     /*
      * MMX instruction exists with no prefix; if there is no prefix, V/H/W/U operands
      * become P/P/Q/N, and size "x" becomes "q".