qsp: QEMU's Synchronization Profiler
The goal of this module is to profile synchronization primitives (i.e.
mutexes, recursive mutexes and condition variables) so that scalability
issues can be quickly diagnosed.

Sync primitives are profiled by QSP based on the vaddr of the object
accessed as well as the call site (file:line_nr). That means the same
object called from two different call sites will be tracked in separate
entries, which might be reported together or separately (see subsequent
commit on call site coalescing).

Some perf numbers:

Host: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Command: taskset -c 0 tests/atomic_add-bench -d 5 -m

- Before: 54.80 Mops/s
- After:  54.75 Mops/s

That is, a negligible slowdown due to the now indirect call to
qemu_mutex_lock. Note that using a branch instead of an indirect call
introduces a more severe slowdown (53.65 Mops/s, i.e. 2% slowdown).

Enabling the profiler (with -p, added in this series) is more interesting:

- No profiling: 54.75 Mops/s
- W/ profiling: 12.53 Mops/s

That is, a 4.36X slowdown.

We can break down this slowdown by removing the get_clock calls or the
entry lookup:

- No profiling:     54.75 Mops/s
- W/o get_clock:    25.37 Mops/s
- W/o entry lookup: 19.30 Mops/s
- W/ profiling:     12.53 Mops/s

Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
commit fe9959a275 (parent c04649eeea)
9 changed files with 759 additions and 23 deletions
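As a rough illustration of the keying scheme described in the commit message, each profiled object would get one counter per call site, with the key combining the object's address and the caller's file:line. The structures below are a hypothetical sketch of that idea, not QSP's actual data layout:

    /* Hypothetical sketch of a per-call-site profiling entry; names and
     * fields are illustrative, not QSP's real structures. */
    #include <stdint.h>

    typedef struct QSPEntryKey {
        const void *obj;    /* vaddr of the mutex/condvar being profiled */
        const char *file;   /* call site: file name */
        int line;           /* call site: line number */
    } QSPEntryKey;

    typedef struct QSPEntry {
        QSPEntryKey key;    /* hash-table key: object + call site */
        uint64_t n_acqs;    /* acquisitions from this call site */
        int64_t ns;         /* total wait time, measured with get_clock() */
    } QSPEntry;

With such a key, two call sites that lock the same mutex land in two different entries, which is what allows the report to show them separately or coalesced.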
@@ -6,8 +6,8 @@

 typedef QemuMutex QemuRecMutex;
 #define qemu_rec_mutex_destroy qemu_mutex_destroy
-#define qemu_rec_mutex_lock qemu_mutex_lock
-#define qemu_rec_mutex_trylock qemu_mutex_trylock
+#define qemu_rec_mutex_lock_impl qemu_mutex_lock_impl
+#define qemu_rec_mutex_trylock_impl qemu_mutex_trylock_impl
 #define qemu_rec_mutex_unlock qemu_mutex_unlock

 struct QemuMutex {
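The hunk above only re-points the recursive-mutex wrappers at the _impl entry points; the purpose of the _impl indirection is that the public lock macros can pass __FILE__ and __LINE__ down to a single implementation function, which is exactly the call-site information the profiler keys on. The following is a minimal self-contained sketch of that pattern, with names chosen for illustration rather than taken from the commit:

    #include <stdio.h>
    #include <pthread.h>

    /* Illustrative sketch (hypothetical names): the _impl function receives
     * the caller's file:line, which lets a profiler key its entries on
     * (object address, call site). */
    typedef struct {
        pthread_mutex_t m;
    } QemuMutex;

    static void qemu_mutex_lock_impl(QemuMutex *mu, const char *file, int line)
    {
        /* A real profiler would look up/update an entry keyed on
         * (mu, file, line) and accumulate the wait time here;
         * this sketch just reports the call site. */
        printf("lock %p from %s:%d\n", (void *)mu, file, line);
        pthread_mutex_lock(&mu->m);
    }

    /* The public macro captures the call site transparently for callers. */
    #define qemu_mutex_lock(mu) qemu_mutex_lock_impl(mu, __FILE__, __LINE__)

    int main(void)
    {
        QemuMutex mu;

        pthread_mutex_init(&mu.m, NULL);
        qemu_mutex_lock(&mu);       /* recorded under this file and line */
        pthread_mutex_unlock(&mu.m);
        pthread_mutex_destroy(&mu.m);
        return 0;
    }

Because the indirection lives behind the macro, existing callers keep writing qemu_mutex_lock(&m) and still get per-call-site accounting for free, which matches the negligible overhead reported in the commit message when profiling is disabled.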