migration: Add tracepoints for downtime checkpoints

This patch is inspired by Joao Martins's patch here:

https://lore.kernel.org/r/20230926161841.98464-1-joao.m.martins@oracle.com

Add tracepoints for major downtime checkpoints on both src and dst.  They
all share one tracepoint, with a string argument identifying the stage.
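
The shared tracepoint takes one string argument naming the checkpoint; in
QEMU's trace-events syntax the declaration (presumably living in
migration/trace-events alongside the other migration tracepoints) looks
like:

  vmstate_downtime_checkpoint(const char *checkpoint) "%s"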

Besides the checkpoints introduced in the previous patch, this patch also
adds destination checkpoints.

On src, we have these checkpoints added (see the sketch after this list):

  - src-downtime-start: right before vm stops on src
  - src-vm-stopped: after vm is fully stopped
  - src-iterable-saved: after all iterables are saved (END sections)
  - src-non-iterable-saved: after all non-iterables are saved (FULL sections)
  - src-downtime-stop: migration fully completed
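
To illustrate the ordering, below is a condensed sketch of the src side.
It is not the literal patch: the vm_stop_force_state() placement is
inferred from the checkpoint descriptions, while the two *-saved calls are
visible in the savevm.c hunks further down.

  /* Condensed sketch of the src-side ordering, not the literal patch */
  trace_vmstate_downtime_checkpoint("src-downtime-start");
  vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);    /* stop the VM */
  trace_vmstate_downtime_checkpoint("src-vm-stopped");
  /* ... save all iterable devices (END sections) ... */
  trace_vmstate_downtime_checkpoint("src-iterable-saved");
  /* ... save all non-iterable devices (FULL sections) ... */
  trace_vmstate_downtime_checkpoint("src-non-iterable-saved");
  /* ... wrap up; migration fully completed ... */
  trace_vmstate_downtime_checkpoint("src-downtime-stop");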

On dst, we have these checkpoints added (see the sketch after this list):

  - dst-precopy-loadvm-completes: after loadvm is fully done for precopy
  - dst-precopy-bh-*: record BH steps to resume the VM for precopy
  - dst-postcopy-bh-*: record BH steps to resume the VM for postcopy
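
The precopy BH hunks are not shown below, so here is a condensed sketch of
the dst precopy side; the intermediate dst-precopy-bh-* step names are
assumed to mirror the postcopy BH steps visible in the hunks:

  /* Condensed sketch of the dst-side precopy ordering, step names assumed */
  trace_vmstate_downtime_checkpoint("dst-precopy-loadvm-completes");
  /* ... BH scheduled to resume the VM ... */
  trace_vmstate_downtime_checkpoint("dst-precopy-bh-enter");
  /* ... cpu sync, announce, cache invalidate, vm_start() ... */
  trace_vmstate_downtime_checkpoint("dst-precopy-bh-vm-started");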

On the dst side, we don't have a good way to trace the total time consumed
by iterable or non-iterable vmstates for now.  We could mark it by the
first time a FULL / END section is received, but rather than that let's
just rely on the other tracepoints added for vmstates to provide that
information.

With this patch, one can enable the "vmstate_downtime*" tracepoints and
that will turn on all the tracepoints necessary for downtime measurements.
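
For example, with a trace backend such as "log" compiled in, the whole set
can be enabled with a glob pattern on the command line (standard -trace
usage; the rest of the VM command line is elided):

  qemu-system-x86_64 -trace 'vmstate_downtime*' ...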

Drop the loadvm_postcopy_handle_run_bh() tracepoint alongside, because it
served the same purpose, only for postcopy.  We then have a unified prefix
for all downtime-relevant tracepoints.

Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231030163346.765724-6-peterx@redhat.com>

@@ -1526,6 +1526,8 @@ int qemu_savevm_state_complete_precopy_iterable(QEMUFile *f, bool in_postcopy)
                                     end_ts_each - start_ts_each);
     }
 
+    trace_vmstate_downtime_checkpoint("src-iterable-saved");
+
     return 0;
 }
@@ -1592,6 +1594,8 @@ int qemu_savevm_state_complete_precopy_non_iterable(QEMUFile *f,
     json_writer_free(vmdesc);
     ms->vmdesc = NULL;
 
+    trace_vmstate_downtime_checkpoint("src-non-iterable-saved");
+
     return 0;
 }
@@ -2133,18 +2137,18 @@ static void loadvm_postcopy_handle_run_bh(void *opaque)
     Error *local_err = NULL;
     MigrationIncomingState *mis = opaque;
 
-    trace_loadvm_postcopy_handle_run_bh("enter");
+    trace_vmstate_downtime_checkpoint("dst-postcopy-bh-enter");
 
     /* TODO we should move all of this lot into postcopy_ram.c or a shared code
      * in migration.c
      */
     cpu_synchronize_all_post_init();
 
-    trace_loadvm_postcopy_handle_run_bh("after cpu sync");
+    trace_vmstate_downtime_checkpoint("dst-postcopy-bh-cpu-synced");
 
     qemu_announce_self(&mis->announce_timer, migrate_announce_params());
 
-    trace_loadvm_postcopy_handle_run_bh("after announce");
+    trace_vmstate_downtime_checkpoint("dst-postcopy-bh-announced");
 
     /* Make sure all file formats throw away their mutable metadata.
      * If we get an error here, just don't restart the VM yet. */
@@ -2155,7 +2159,7 @@ static void loadvm_postcopy_handle_run_bh(void *opaque)
         autostart = false;
     }
 
-    trace_loadvm_postcopy_handle_run_bh("after invalidate cache");
+    trace_vmstate_downtime_checkpoint("dst-postcopy-bh-cache-invalidated");
 
     dirty_bitmap_mig_before_vm_start();
@@ -2169,7 +2173,7 @@ static void loadvm_postcopy_handle_run_bh(void *opaque)
     qemu_bh_delete(mis->bh);
 
-    trace_loadvm_postcopy_handle_run_bh("return");
+    trace_vmstate_downtime_checkpoint("dst-postcopy-bh-vm-started");
 }
 
 /* After all discards we can start running and asking for pages */