Commit graph

120873 commits

Author SHA1 Message Date
Tomita Moeko
1a2623b5c9 vfio/igd: add macro for declaring mirrored registers
igd devices have multiple registers mirrored between mmio address space and
pci config space, not just the single BDSM register. To support this, the
read/write functions are made common and a macro is defined to simplify the
declaration of MemoryRegionOps.

Signed-off-by: Tomita Moeko <tomitamoeko@gmail.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lore.kernel.org/r/20241206122749.9893-8-tomitamoeko@gmail.com
[ clg : Fixed conversion specifier on 32-bit platform ]
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2024-12-26 07:23:37 +01:00
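
A minimal sketch of the shape such a macro could take, assuming a hypothetical
pair of common helpers vfio_igd_quirk_mirror_read/write (the actual names in
the commit may differ); this only illustrates the idea and is meant to build
inside the QEMU tree:

    /* Hypothetical sketch: one macro stamps out a MemoryRegionOps whose
     * accesses are routed to common read/write helpers for a register that
     * is mirrored between PCI config space and MMIO. */
    #include "qemu/osdep.h"
    #include "exec/memory.h"

    uint64_t vfio_igd_quirk_mirror_read(void *opaque, hwaddr addr,
                                        unsigned size);
    void vfio_igd_quirk_mirror_write(void *opaque, hwaddr addr,
                                     uint64_t data, unsigned size);

    #define VFIO_IGD_MIRROR_OPS(name)                                \
        static const MemoryRegionOps vfio_igd_##name##_quirk_ops = { \
            .read = vfio_igd_quirk_mirror_read,                      \
            .write = vfio_igd_quirk_mirror_write,                    \
            .endianness = DEVICE_LITTLE_ENDIAN,                      \
        }

    VFIO_IGD_MIRROR_OPS(bdsm); /* declares vfio_igd_bdsm_quirk_ops */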
Tomita Moeko
960f62770a vfio/igd: add Alder/Raptor/Rocket/Ice/Jasper Lake device ids
All gen 11 and 12 igd devices have a 64-bit BDSM register at 0xC0 in their
config space. Add them to the list to support igd passthrough on Alder/
Raptor/Rocket/Ice/Jasper Lake platforms.

Tested that legacy mode of igd passthrough works properly on both Linux and
Windows guests with AlderLake-S GT1 (8086:4680).

Reviewed-by: Corvin Köhne <c.koehne@beckhoff.com>
Signed-off-by: Tomita Moeko <tomitamoeko@gmail.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lore.kernel.org/r/20241206122749.9893-7-tomitamoeko@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2024-12-26 07:23:37 +01:00
Tomita Moeko
183714d8f9 vfio/igd: add Gemini Lake and Comet Lake device ids
Both Gemini Lake and Comet Lake are gen 9 devices. Many user reports on the
internet show that legacy mode of igd passthrough works, since qemu treated
them as gen 8 devices by default before e433f20897 ("vfio/igd:
return an invalid generation for unknown devices").

Reviewed-by: Corvin Köhne <c.koehne@beckhoff.com>
Signed-off-by: Tomita Moeko <tomitamoeko@gmail.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lore.kernel.org/r/20241206122749.9893-6-tomitamoeko@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2024-12-26 07:23:37 +01:00
Tomita Moeko
1e1eac5f3d vfio/igd: canonicalize memory size calculations
Add helper functions igd_gtt_memory_size() and igd_stolen_size() for
calculating the GTT stolen memory and data stolen memory sizes in bytes,
and use macros to replace the hardware-related magic numbers for
better readability.

Signed-off-by: Tomita Moeko <tomitamoeko@gmail.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lore.kernel.org/r/20241206122749.9893-5-tomitamoeko@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2024-12-26 07:23:37 +01:00
Tomita Moeko
0bb758e97a vfio/igd: align generation with i915 kernel driver
Define the igd device generations according to the i915 kernel driver to
avoid confusion, and adjust comment placement to clearly reflect the
relationship between ids and devices.

The condition for how the GTT stolen memory size is calculated is changed
accordingly, as GGMS is in multiples of 2 starting from gen 8.

Reviewed-by: Corvin Köhne <c.koehne@beckhoff.com>
Signed-off-by: Tomita Moeko <tomitamoeko@gmail.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lore.kernel.org/r/20241206122749.9893-4-tomitamoeko@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2024-12-26 07:23:37 +01:00
Tomita Moeko
8492a6129c vfio/igd: remove unsupported device ids
Since e433f20897 ("vfio/igd: return an invalid generation for unknown
devices"), the default return value of igd_gen() is unsupported, so there
is no need to filter out those unsupported devices.

Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Corvin Köhne <c.koehne@beckhoff.com>
Signed-off-by: Tomita Moeko <tomitamoeko@gmail.com>
Link: https://lore.kernel.org/r/20241206122749.9893-3-tomitamoeko@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2024-12-26 07:23:37 +01:00
Tomita Moeko
03828b00a2 vfio/igd: fix GTT stolen memory size calculation for gen 8+
On gen 8 and later devices, the GTT stolen memory size when GGMS equals
0 is 0 (no preallocated memory) rather than 1MB [1].

[1] 3.1.13, 5th Generation Intel Core Processor Family Datasheet Vol. 2
    https://www.intel.com/content/www/us/en/content-details/330835

Fixes: c4c45e943e ("vfio/pci: Intel graphics legacy mode assignment")

Reported-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Tomita Moeko <tomitamoeko@gmail.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lore.kernel.org/r/20241206122749.9893-2-tomitamoeko@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
2024-12-26 07:23:37 +01:00
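
As a worked illustration of the sizing rule described above, a standalone
sketch (not QEMU's exact code; the GGC register parsing is omitted and the
helper name is made up):

    /* GTT stolen memory size from the GGMS field.
     * gen 6/7: GGMS encodes the size directly in MiB.
     * gen 8+:  GGMS == 0 means no preallocated memory (the fix above);
     *          otherwise the size is 2^GGMS MiB (2, 4, 8 MiB). */
    #include <stdint.h>

    #define MIB (1024ULL * 1024ULL)

    static uint64_t gtt_stolen_size(int gen, uint32_t ggms)
    {
        if (gen < 8) {
            return (uint64_t)ggms * MIB;
        }
        return ggms ? (1ULL << ggms) * MIB : 0;
    }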
Stefan Hajnoczi
a7f77545d4 tcg/optimize: Remove in-flight mask data from OptContext
Merge tag 'pull-tcg-20241224' of https://gitlab.com/rth7680/qemu into staging

tcg/optimize: Remove in-flight mask data from OptContext
fpu: Add float*_muladd_scalbn
fpu: Remove float_muladd_halve_result
fpu: Add float_round_nearest_even_max
fpu: Add float_muladd_suppress_add_product_zero
target/hexagon: Use float32_muladd
accel/tcg: Move gen_intermediate_code to TCGCPUOps.translate_core

# -----BEGIN PGP SIGNATURE-----
#
# iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmdrE7QdHHJpY2hhcmQu
# aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV+l2Qf/aECUfMn07wns7WjX
# ebWxzIRKp//ktsIJg9InL8zrCStyRqrBj0VQE9LUfO2Vhvqf8faUdh+uh2ek/Ewa
# f1hfo0kDK7e7oWnCicSbHmdC0FQIrKpg2i+YXIsbd4XWOkmFAhkNenISuQfCrL3k
# 3UYAA12seK9uCls+fljvhK6iid3h+4ReDFW7DPg7mumFCCz6CwzYYW/4cnhcAmOn
# qVehtts8W+6SFMjTE04S8NV8OBaMisf8AbCcZf2PedRl1cHGSumLOjvjOxcQU8Hw
# nGUjL8/hYWkEetzU4YzJyfHOe6F9lPJBMnDattwIswwYrTOD/Sq7VbBWFbW0EwUy
# 7XIZ8Q==
# =DZgo
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 24 Dec 2024 15:04:04 EST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* tag 'pull-tcg-20241224' of https://gitlab.com/rth7680/qemu: (72 commits)
  accel/tcg: Move gen_intermediate_code to TCGCPUOps.translate_core
  target/hexagon: Simplify internal_mpyhh setup
  target/hexagon: Use mulu64 for int128_mul_6464
  target/hexagon: Remove Double
  target/hexagon: Remove Float
  target/hexagon: Expand GEN_XF_ROUND
  target/hexagon: Remove internal_fmafx
  target/hexagon: Use float32_muladd for helper_sffm[as]_lib
  target/hexagon: Use float32_muladd_scalbn for helper_sffma_sc
  target/hexagon: Use float32_muladd for helper_sffms
  target/hexagon: Use float32_muladd for helper_sffma
  target/hexagon: Use float32_mul in helper_sfmpy
  softfloat: Add float_muladd_suppress_add_product_zero
  softfloat: Add float_round_nearest_even_max
  softfloat: Remove float_muladd_halve_result
  target/sparc: Use float*_muladd_scalbn
  target/arm: Use float*_muladd_scalbn
  softfloat: Add float{16,32,64}_muladd_scalbn
  tcg/optimize: Move fold_cmp_vec, fold_cmpsel_vec into alphabetic sort
  tcg/optimize: Move fold_bitsel_vec into alphabetic sort
  ...

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2024-12-25 08:33:33 -05:00
Richard Henderson
e4a8e093dc accel/tcg: Move gen_intermediate_code to TCGCPUOps.translate_core
Convert all targets simultaneously, as the gen_intermediate_code
function disappears from the target.  While there are possible
workarounds, they're larger than simply performing the conversion.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
59abfb444e target/hexagon: Simplify internal_mpyhh setup
Initialize x from accumulated via direct assignment, rather than by
multiplying by 1.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
65b4dce393 target/hexagon: Use mulu64 for int128_mul_6464
No need to open-code 64x64->128-bit multiplication.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
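
For reference, a sketch of what the simplified helper can look like with
QEMU's mulu64() and int128_make128(); this approximates the patch rather
than reproducing it:

    /* mulu64() produces the full 64x64->128 product as low/high halves,
     * int128_make128() assembles an Int128 from them (QEMU tree only). */
    #include "qemu/osdep.h"
    #include "qemu/host-utils.h"
    #include "qemu/int128.h"

    static inline Int128 int128_mul_6464(uint64_t a, uint64_t b)
    {
        uint64_t l, h;

        mulu64(&l, &h, a, b);
        return int128_make128(l, h);
    }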
Richard Henderson
8429306c32 target/hexagon: Remove Double
This structure, with bitfields, is incorrect for big-endian.
Use extract64 and deposit64 instead.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
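
A hedged sketch of the replacement pattern: explicit field accessors built on
extract64()/deposit64() instead of an endian-dependent bitfield struct (IEEE
binary64 layout: mantissa bits 0-51, exponent bits 52-62, sign bit 63; the
helper names here are illustrative):

    /* Illustrative only; extract64()/deposit64() come from qemu/bitops.h. */
    #include "qemu/osdep.h"
    #include "qemu/bitops.h"

    static inline uint64_t dbl_mant(uint64_t d) { return extract64(d, 0, 52); }
    static inline int dbl_exp(uint64_t d)       { return extract64(d, 52, 11); }
    static inline int dbl_sign(uint64_t d)      { return extract64(d, 63, 1); }

    static inline uint64_t dbl_set_exp(uint64_t d, int e)
    {
        return deposit64(d, 52, 11, e);
    }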
Richard Henderson
fefc9702e6 target/hexagon: Remove Float
This structure, with bitfields, is incorrect for big-endian.
Use the existing float32_getexp_raw which uses extract32.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
795d6a2c49 target/hexagon: Expand GEN_XF_ROUND
This massive macro is now only used once.
Expand it for use only by float64.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
813437e500 target/hexagon: Remove internal_fmafx
The function is now unused.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
316dca3985 target/hexagon: Use float32_muladd for helper_sffm[as]_lib
There are multiple special cases for this instruction.
(1) The saturate to normal maximum instead of overflow to infinity is
    handled by the new float_round_nearest_even_max rounding mode.
(2) The 0 * n + c special case is handled by the new
    float_muladd_suppress_add_product_zero flag.
(3) The Inf - Inf -> 0 special case can be detected after the fact
    by examining float_flag_invalid_isi.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
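
A hedged sketch of how those three special cases can map onto the softfloat
interfaces named above (not the actual helper body; the Inf - Inf handling
and the surrounding state management are simplified):

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    static float32 sffma_lib_sketch(float32 a, float32 b, float32 c,
                                    float_status *fs)
    {
        float32 r;

        set_float_rounding_mode(float_round_nearest_even_max, fs);  /* (1) */
        r = float32_muladd(a, b, c,
                           float_muladd_suppress_add_product_zero,  /* (2) */
                           fs);
        if (get_float_exception_flags(fs) & float_flag_invalid_isi) {
            r = float32_zero;                          /* (3) Inf - Inf -> 0 */
        }
        return r;
    }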
Richard Henderson
904624ab8e target/hexagon: Use float32_muladd_scalbn for helper_sffma_sc
This instruction has a special case that 0 * x + c returns c
without the normal sign folding that comes with 0 + -0.
Use the new float_muladd_suppress_add_product_zero to
describe this.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
2eca1928f9 target/hexagon: Use float32_muladd for helper_sffms
There are no special cases for this instruction.  Since hexagon
always uses default-nan mode, explicitly negating the first
input is unnecessary.  Use float_muladd_negate_product instead.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
655a83cac1 target/hexagon: Use float32_muladd for helper_sffma
There are no special cases for this instruction.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
6e7422dc22 target/hexagon: Use float32_mul in helper_sfmpy
There are no special cases for this instruction.
Remove internal_mpyf as unused.

Reviewed-by: Brian Cain <brian.cain@oss.qualcomm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
82f898f3b6 softfloat: Add float_muladd_suppress_add_product_zero
Certain Hexagon instructions suppress changes to the result
when the product of fma() is a true zero.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
72330260cd softfloat: Add float_round_nearest_even_max
This rounding mode is used by Hexagon.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
6a243913aa softfloat: Remove float_muladd_halve_result
All uses have been converted to float*_muladd_scalbn.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
88d5f550bd target/sparc: Use float*_muladd_scalbn
Use the scalbn interface instead of float_muladd_halve_result.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
912400a362 target/arm: Use float*_muladd_scalbn
Use the scalbn interface instead of float_muladd_halve_result.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
910556bbf4 softfloat: Add float{16,32,64}_muladd_scalbn
We currently have a flag, float_muladd_halve_result, to scale
the result by 2**-1.  Extend this to handle arbitrary scaling.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
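
A brief hedged illustration of the interface change (QEMU tree; the argument
order of the scalbn variant is assumed here, and real call sites vary per
target):

    #include "qemu/osdep.h"
    #include "fpu/softfloat.h"

    static float64 muladd_halved(float64 a, float64 b, float64 c,
                                 float_status *fs)
    {
        /* before: return float64_muladd(a, b, c,
         *                               float_muladd_halve_result, fs); */
        return float64_muladd_scalbn(a, b, c, -1, 0, fs); /* result * 2**-1 */
    }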
Richard Henderson
29f6586f61 tcg/optimize: Move fold_cmp_vec, fold_cmpsel_vec into alphabetic sort
The big comment just above says functions should be sorted.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
7d3c63aca1 tcg/optimize: Move fold_bitsel_vec into alphabetic sort
The big comment just above says functions should be sorted.
Add forward declarations as needed.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
aa9e0501a4 tcg/optimize: Re-enable sign-mask optimizations
All instances of s_mask have been converted to the new
representation.  We can now re-enable usage.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
a3a88b17c2 tcg/optimize: Remove z_mask, s_mask from OptContext
All mask setting is now done with parameters via fold_masks_*.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
0ae5642889 tcg/optimize: Use finish_folding as default in tcg_optimize
All non-default cases now finish folding within each function.
Do the same with the default case and assert it is done after.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
4fcd14ca64 tcg/optimize: Use finish_folding in fold_bitsel_vec
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
c890fd7179 tcg/optimize: Use fold_masks_zs in fold_xor
Avoid the use of the OptContext slots.  Find TempOptInfo once.
Remove fold_masks as the function becomes unused.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
0fb5b757c3 tcg/optimize: Use finish_folding in fold_tcg_ld_memcopy
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
d33e0f01db tcg/optimize: Use fold_masks_zs in fold_tcg_ld
Avoid the use of the OptContext slots.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
fe1d0074b5 tcg/optimize: Use finish_folding in fold_sub, fold_sub_vec
Duplicate fold_sub_vec into fold_sub instead of calling it,
now that fold_sub_vec always returns true.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
4ed2ba3f4a tcg/optimize: Simplify sign bit test in fold_shift
Merge the two conditions, sign != 0 && !(z_mask & sign),
by testing ~z_mask & sign. If sign == 0, the logical and
will produce false.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
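
A standalone check of the claimed equivalence (sign is a single-bit mask):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool old_test(uint64_t z_mask, uint64_t sign)
    {
        return sign != 0 && !(z_mask & sign);
    }

    static bool new_test(uint64_t z_mask, uint64_t sign)
    {
        return (~z_mask & sign) != 0;
    }

    int main(void)
    {
        const uint64_t sign = 1ULL << 63;

        assert(old_test(0, sign) == new_test(0, sign));       /* bit known zero */
        assert(old_test(sign, sign) == new_test(sign, sign)); /* bit unknown */
        assert(new_test(0, 0) == false);                      /* sign == 0 */
        return 0;
    }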
Richard Henderson
4e9ce6a2ec tcg/optimize: Use fold_masks_zs, fold_masks_s in fold_shift
Avoid the use of the OptContext slots.  Find TempOptInfo once.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
baff507e50 tcg/optimize: Use fold_masks_zs in fold_sextract
Avoid the use of the OptContext slots.  Find TempOptInfo once.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
210c70b7ac tcg/optimize: Use finish_folding in fold_cmpsel_vec
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
4d20104f9f tcg/optimize: Use finish_folding in fold_cmp_vec
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
a53502c0b4 tcg/optimize: Use fold_masks_z in fold_setcond2
Avoid the use of the OptContext slots.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
081cf08b09 tcg/optimize: Use fold_masks_s in fold_negsetcond
Avoid the use of the OptContext slots.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
2c8a28398d tcg/optimize: Use fold_masks_z in fold_setcond
Avoid the use of the OptContext slots.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
95eb229363 tcg/optimize: Distinguish simplification in fold_setcond_zmask
Change return from bool to int; distinguish between
complete folding, simplification, and no change.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
f9e3934903 tcg/optimize: Use finish_folding in fold_remainder
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
082b3ef919 tcg/optimize: Return true from fold_qemu_st, fold_tcg_st
Stores have no output operands, and so need no further work.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
6813be9b9b tcg/optimize: Use fold_masks_zs in fold_qemu_ld
Avoid the use of the OptContext slots.

Be careful not to call fold_masks_zs when the memory operation
is wide enough to require multiple outputs, so split into two
functions: fold_qemu_ld_1reg and fold_qemu_ld_2reg.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
54e26b292b tcg/optimize: Use fold_masks_zs in fold_orc
Avoid the use of the OptContext slots.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00
Richard Henderson
83b1ba3696 tcg/optimize: Use fold_masks_zs in fold_or
Avoid the use of the OptContext slots.  Find TempOptInfo once.

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2024-12-24 08:32:15 -08:00