jobs: group together API calls under the same job lock

Now that the API also offers _locked() functions, take advantage
of them and give the caller control to take the lock and call the
_locked() functions itself.

This is especially useful for loops, because it makes
no sense to have:

for(job = job_next(); ...)

where each call to job_next() takes the lock internally.
Instead we want:

JOB_LOCK_GUARD();
for(job = job_next_locked(); ...)

In addition, also protect direct field accesses by either
creating a new critical section or widening an existing one.

Note: at this stage, job_lock()/job_unlock() and the job lock
guard macros are no-ops.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20220926093214.506243-12-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Author: Emanuele Giuseppe Esposito, 2022-09-26 05:32:04 -04:00
Committed by: Kevin Wolf
parent: 279ac06e55
commit: 880eeec613
6 changed files with 63 additions and 34 deletions


@@ -912,9 +912,11 @@ static void run_block_job(BlockJob *job, Error **errp)
     int ret = 0;
 
     aio_context_acquire(aio_context);
-    job_ref(&job->job);
+    job_lock();
+    job_ref_locked(&job->job);
     do {
         float progress = 0.0f;
+        job_unlock();
         aio_poll(aio_context, true);
         progress_get_snapshot(&job->job.progress, &progress_current,
@@ -923,14 +925,17 @@ static void run_block_job(BlockJob *job, Error **errp)
             progress = (float)progress_current / progress_total * 100.f;
         }
         qemu_progress_print(progress, 0);
-    } while (!job_is_ready(&job->job) && !job_is_completed(&job->job));
+        job_lock();
+    } while (!job_is_ready_locked(&job->job) &&
+             !job_is_completed_locked(&job->job));
 
-    if (!job_is_completed(&job->job)) {
-        ret = job_complete_sync(&job->job, errp);
+    if (!job_is_completed_locked(&job->job)) {
+        ret = job_complete_sync_locked(&job->job, errp);
     } else {
         ret = job->job.ret;
     }
-    job_unref(&job->job);
+    job_unref_locked(&job->job);
+    job_unlock();
     aio_context_release(aio_context);
 
     /* publish completion progress only when success */