Mirror of https://github.com/Motorhead1991/qemu.git (synced 2025-08-05 08:43:55 -06:00)
Merge remote-tracking branch 'remotes/famz/tags/for-upstream' into staging

# gpg: Signature made Fri 28 Oct 2016 15:47:39 BST
# gpg:                using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6

* remotes/famz/tags/for-upstream:
  aio: convert from RFifoLock to QemuRecMutex
  qemu-thread: introduce QemuRecMutex
  iothread: release AioContext around aio_poll
  block: only call aio_poll on the current thread's AioContext
  qemu-img: call aio_context_acquire/release around block job
  qemu-io: acquire AioContext
  block: prepare bdrv_reopen_multiple to release AioContext
  replication: pass BlockDriverState to reopen_backing_file
  iothread: detach all block devices before stopping them
  aio: introduce qemu_get_current_aio_context
  sheepdog: use BDRV_POLL_WHILE
  nfs: use BDRV_POLL_WHILE
  nfs: move nfs_set_events out of the while loops
  block: introduce BDRV_POLL_WHILE
  qed: Implement .bdrv_drain
  block: change drain to look only at one child at a time
  block: add BDS field to count in-flight requests
  mirror: use bdrv_drained_begin/bdrv_drained_end
  blockjob: introduce .drain callback for jobs
  replication: interrupt failover if the main device is closed

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This commit is contained in: commit 5273a45e75

36 changed files with 520 additions and 481 deletions
docs/multiple-iothreads.txt:

@@ -105,13 +105,10 @@ a BH in the target AioContext beforehand and then call qemu_bh_schedule(). No
 acquire/release or locking is needed for the qemu_bh_schedule() call. But be
 sure to acquire the AioContext for aio_bh_new() if necessary.
 
-The relationship between AioContext and the block layer
--------------------------------------------------------
-The AioContext originates from the QEMU block layer because it provides a
-scoped way of running event loop iterations until all work is done. This
-feature is used to complete all in-flight block I/O requests (see
-bdrv_drain_all()). Nowadays AioContext is a generic event loop that can be
-used by any QEMU subsystem.
+AioContext and the block layer
+------------------------------
+The AioContext originates from the QEMU block layer, even though nowadays
+AioContext is a generic event loop that can be used by any QEMU subsystem.
 
 The block layer has support for AioContext integrated. Each BlockDriverState
 is associated with an AioContext using bdrv_set_aio_context() and
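The context lines of the hunk above describe the bottom-half handoff pattern:
acquire the AioContext only around aio_bh_new(), then schedule the BH
lock-free. A minimal sketch of that pattern; the names my_cb, my_data and
schedule_in_iothread are illustrative, not part of this patch:

    #include "qemu/osdep.h"
    #include "block/aio.h"

    static void my_cb(void *opaque)
    {
        /* Runs in the thread that owns the target AioContext. */
    }

    static void schedule_in_iothread(AioContext *ctx, void *my_data)
    {
        QEMUBH *bh;

        /* aio_bh_new() may need the AioContext held... */
        aio_context_acquire(ctx);
        bh = aio_bh_new(ctx, my_cb, my_data);
        aio_context_release(ctx);

        /* ...but qemu_bh_schedule() is thread-safe and needs no lock. */
        qemu_bh_schedule(bh);
    }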
@@ -122,13 +119,22 @@ Block layer code must therefore expect to run in an IOThread and avoid using
 old APIs that implicitly use the main loop. See the "How to program for
 IOThreads" above for information on how to do that.
 
-If main loop code such as a QMP function wishes to access a BlockDriverState it
-must first call aio_context_acquire(bdrv_get_aio_context(bs)) to ensure the
-IOThread does not run in parallel.
+If main loop code such as a QMP function wishes to access a BlockDriverState
+it must first call aio_context_acquire(bdrv_get_aio_context(bs)) to ensure
+that callbacks in the IOThread do not run in parallel.
+
+Code running in the monitor typically needs to ensure that past
+requests from the guest are completed. When a block device is running
+in an IOThread, the IOThread can also process requests from the guest
+(via ioeventfd). To achieve both objectives, wrap the code between
+bdrv_drained_begin() and bdrv_drained_end(), thus creating a "drained
+section". The functions must be called between aio_context_acquire()
+and aio_context_release(). You can freely release and re-acquire the
+AioContext within a drained section.
 
-Long-running jobs (usually in the form of coroutines) are best scheduled in the
-BlockDriverState's AioContext to avoid the need to acquire/release around each
-bdrv_*() call. Be aware that there is currently no mechanism to get notified
-when bdrv_set_aio_context() moves this BlockDriverState to a different
-AioContext (see bdrv_detach_aio_context()/bdrv_attach_aio_context()), so you
-may need to add this if you want to support long-running jobs.
+Long-running jobs (usually in the form of coroutines) are best scheduled in
+the BlockDriverState's AioContext to avoid the need to acquire/release around
+each bdrv_*() call. The functions bdrv_add/remove_aio_context_notifier,
+or alternatively blk_add/remove_aio_context_notifier if you use BlockBackends,
+can be used to get a notification whenever bdrv_set_aio_context() moves a
+BlockDriverState to a different AioContext.
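The new "drained section" paragraph maps onto a small amount of code. A hedged
sketch of a monitor-path function, assuming a BlockDriverState obtained
elsewhere; qmp_do_something is an invented name and its body a placeholder:

    #include "qemu/osdep.h"
    #include "block/block.h"

    static void qmp_do_something(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);

        /* The drained section must sit inside acquire/release. */
        aio_context_acquire(ctx);
        bdrv_drained_begin(bs);   /* completes past guest requests */

        /* ... inspect or reconfigure bs without racing the IOThread ... */

        bdrv_drained_end(bs);
        aio_context_release(ctx);
    }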
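Likewise, a long-running job can follow bdrv_set_aio_context() moves with the
notifier API named in the final paragraph. A sketch under the assumption of a
hypothetical MyJob state struct; only bdrv_add_aio_context_notifier() and its
callback signatures come from the tree:

    #include "qemu/osdep.h"
    #include "block/block_int.h"

    typedef struct MyJob {
        BlockDriverState *bs;
        AioContext *ctx;          /* AioContext the job currently uses */
    } MyJob;

    static void my_attached(AioContext *new_context, void *opaque)
    {
        MyJob *job = opaque;
        job->ctx = new_context;   /* resume the job's work in the new context */
    }

    static void my_detach(void *opaque)
    {
        MyJob *job = opaque;
        job->ctx = NULL;          /* quiesce: bs is leaving its AioContext */
    }

    static void my_job_start(MyJob *job)
    {
        bdrv_add_aio_context_notifier(job->bs, my_attached, my_detach, job);
    }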