On the same emulated RISC-V host, the 'host' KVM CPU takes 4 times
longer to boot than the 'rv64' KVM CPU.
The reason is an unintended behavior of riscv_cpu_satp_mode_finalize()
when satp_mode.supported = 0, i.e. when cpu_init() does not call
set_satp_mode_max_supported(). satp_mode_max_from_map(map) does:

    31 - __builtin_clz(map)
This means that, if satp_mode.supported = 0, satp_mode_supported_max
will be '31 - 32'. But this is C, so the unsigned result wraps around
and satp_mode_supported_max is gladly set to UINT_MAX (4294967295).
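As a standalone illustration of that wraparound (a minimal sketch, not
the QEMU code; note that __builtin_clz(0) is formally undefined
behavior, and the '32' assumed here is merely what common targets
produce):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Same computation as described above, for an empty map. */
    static uint32_t satp_mode_max_from_map(uint32_t map)
    {
        return 31 - __builtin_clz(map); /* clz(0) is UB; 32 in practice */
    }

    int main(void)
    {
        /* 31 - 32 = -1, converted to unsigned: 4294967295 (UINT_MAX) */
        printf("%u vs UINT_MAX %u\n", satp_mode_max_from_map(0), UINT_MAX);
        return 0;
    }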
After that, if the user didn't set a satp_mode,
set_satp_mode_default_map(cpu) will make

    cfg.satp_mode.map = cfg.satp_mode.supported

so satp_mode.map = 0. And then satp_mode_map_max will be set to
satp_mode_max_from_map(cpu->cfg.satp_mode.map), i.e. also UINT_MAX. The
guard "satp_mode_map_max > satp_mode_supported_max" doesn't protect us
here since both are UINT_MAX.
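Plugging the values in makes it obvious why the check never fires
(hedged sketch; the variable names follow this description, not
necessarily the exact code in target/riscv/cpu.c):

    uint32_t satp_mode_supported_max = UINT_MAX; /* 31 - clz(0), wrapped */
    uint32_t satp_mode_map_max = UINT_MAX;       /* same empty map, same wrap */

    /* UINT_MAX > UINT_MAX is false, so no error is reported and
     * finalization proceeds with the bogus values. */
    if (satp_mode_map_max > satp_mode_supported_max) {
        /* never reached */
    }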
And finally we have 2 loops:

    for (int i = satp_mode_map_max - 1; i >= 0; --i) {

which are, in fact, 2 loops from UINT_MAX - 1 to -1. This is where the
extra delay when booting the 'host' CPU comes from.
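A minimal sketch of a fix along the lines this analysis suggests: bail
out of riscv_cpu_satp_mode_finalize() before any of the above runs when
satp_mode.supported was never populated (the signature below is an
assumption based on QEMU's finalize helpers; the actual patch may
differ):

    static void riscv_cpu_satp_mode_finalize(RISCVCPU *cpu, Error **errp)
    {
        /* cpu_init() never called set_satp_mode_max_supported(), e.g.
         * for the KVM 'host' CPU: there is nothing to finalize, and
         * running the code below would compute 31 - clz(0). */
        if (cpu->cfg.satp_mode.supported == 0) {
            return;
        }

        /* ... existing finalization logic ... */
    }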