block/export: wait for vhost-user-blk requests when draining

Each vhost-user-blk request runs in a coroutine. When the BlockBackend
enters a drained section we need to enter a quiescent state. Currently
any in-flight requests race with bdrv_drained_begin() because it is
unaware of vhost-user-blk requests.

When blk_co_preadv(), blk_co_pwritev(), etc. return, they wake the
bdrv_drained_begin() thread, but vhost-user-blk request processing has
not yet finished. The request coroutine continues executing while the
main loop thread thinks it is in a drained section.

One example where this is unsafe is for blk_set_aio_context() where
bdrv_drained_begin() is called before .aio_context_detached() and
.aio_context_attach(). If request coroutines are still running after
bdrv_drained_begin(), then the AioContext could change underneath them
and they race with new requests processed in the new AioContext. This
could lead to virtqueue corruption, for example.

(This example is theoretical; I came across it while reading the code
and have not tried to reproduce it.)

It's easy to make bdrv_drained_begin() wait for in-flight requests: add
a .drained_poll() callback that checks the VuServer's in-flight counter.
VuServer just needs an API that returns true when there are requests in
flight. The in-flight counter must be accessed atomically because
.drained_poll() can run in the main loop thread while request coroutines
update the counter in the export's AioContext.
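
As an illustration of the export-side hookup, the callback can look
something like this (a sketch only; VuBlkExport, vu_server, and the
vu_blk_* names are assumed here and are not necessarily the exact code
in this commit):

    /* Hypothetical sketch of the export's .drained_poll callback */
    static bool vu_blk_drained_poll(void *opaque)
    {
        VuBlkExport *vexp = opaque;

        /* Returning true tells bdrv_drained_begin() to keep polling */
        return vhost_user_server_has_in_flight(&vexp->vu_server);
    }

    static const BlockDevOps vu_blk_dev_ops = {
        /* ... other callbacks ... */
        .drained_poll = vu_blk_drained_poll,
    };

Registered via blk_set_dev_ops(), this keeps bdrv_drained_begin()
polling until the VuServer's in-flight counter drops to zero.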

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20230516190238.8401-7-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

--- a/util/vhost-user-server.c
+++ b/util/vhost-user-server.c
@@ -78,17 +78,23 @@ static void panic_cb(VuDev *vu_dev, const char *buf)
 void vhost_user_server_inc_in_flight(VuServer *server)
 {
     assert(!server->wait_idle);
-    server->in_flight++;
+    qatomic_inc(&server->in_flight);
 }
 
 void vhost_user_server_dec_in_flight(VuServer *server)
 {
-    server->in_flight--;
-    if (server->wait_idle && !server->in_flight) {
-        aio_co_wake(server->co_trip);
+    if (qatomic_fetch_dec(&server->in_flight) == 1) {
+        if (server->wait_idle) {
+            aio_co_wake(server->co_trip);
+        }
     }
 }
 
+bool vhost_user_server_has_in_flight(VuServer *server)
+{
+    return qatomic_load_acquire(&server->in_flight) > 0;
+}
+
 static bool coroutine_fn
 vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
 {
@@ -192,13 +198,13 @@ static coroutine_fn void vu_client_trip(void *opaque)
         /* Keep running */
     }
 
-    if (server->in_flight) {
+    if (vhost_user_server_has_in_flight(server)) {
         /* Wait for requests to complete before we can unmap the memory */
         server->wait_idle = true;
         qemu_coroutine_yield();
         server->wait_idle = false;
     }
 
-    assert(server->in_flight == 0);
+    assert(!vhost_user_server_has_in_flight(server));
 
     vu_deinit(vu_dev);
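
The hunks above only touch util/vhost-user-server.c. The new helper also
needs a prototype visible to the export code; presumably something along
these lines in the VuServer header (a sketch, the header change is not
shown here):

    /* Assumed declaration in the VuServer header (not shown above) */
    bool vhost_user_server_has_in_flight(VuServer *server);

The qatomic_load_acquire() pairs with the atomic decrement in
vhost_user_server_dec_in_flight(), so a thread that sees the counter
drop to zero also sees the writes the request coroutines made before
decrementing it.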