mirror of
https://github.com/Motorhead1991/qemu.git
synced 2025-08-02 23:33:54 -06:00
memory-device,vhost: Support automatic decision on the number of memslots
We want to support memory devices that can automatically decide how many memslots they will use. In the worst case, they have to use a single memslot. The target use cases are virtio-mem and the hyper-v balloon.

Let's calculate a reasonable limit such a memory device may use, and instruct the device to make a decision based on that limit. Use a simple heuristic that considers:
* A memslot soft-limit of 256 for all memory devices, so we don't consume too many memslots -- which could harm performance.
* The memslots that are actually still free and unreserved.
* The percentage of the remaining device memory region that the memory device will occupy.

Further, while we properly check before plugging a memory device whether there still are free memslots, we have other memslot consumers (such as boot memory, PCI BARs) that don't perform any checks and might dynamically consume memslots without any prior reservation. So we might succeed in plugging a memory device, but once we dynamically map a PCI BAR we would be in trouble. Doing accounting / reservation / checks for all such users is problematic (e.g., sometimes we might temporarily split boot memory into two memslots, triggered by the BIOS).

We use the historic magic memslot number of 509 as orientation: supporting 256 memory-device memslots (leaving 253 for boot memory and other devices) has been proven to work reliably. We'll fall back to suggesting a single memslot if we don't have at least 509 total memslots.

Plugging vhost devices with less than 509 memslots available, while we have memory devices plugged that consume multiple memslots due to automatic decisions, can be problematic. Most configurations might just fail due to "limit < used + reserved", however, it can also happen that these memory devices would suddenly consume memslots that would actually be required by other memslot consumers (boot, PCI BARs) later.

Note that this has always been sketchy with vhost devices that support only a small number of memslots; but we don't want to make it any worse. So let's keep it simple and simply reject plugging such vhost devices in such a configuration. Eventually, all vhost devices that want to be fully compatible with such memory devices should support a decent number of memslots (>= 509).

Message-ID: <20230926185738.277351-13-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
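The heuristic and the vhost compatibility check described above can be sketched as a small stand-alone C model. All names here (`memslot_limit_sketch`, `vhost_plug_allowed_sketch`, the parameter list) are invented for illustration and are not QEMU's actual helpers; the sketch only assumes the numbers from the commit message (soft-limit 256, safe total 509).

```c
#include <assert.h>
#include <stdint.h>

#define MEMSLOT_SOFT_LIMIT 256  /* soft-limit for all memory devices together */
#define SAFE_MAX_MEMSLOTS  509  /* historic KVM/vhost memslot count */

/*
 * Rough model of the heuristic: cap a device's memslots by (a) the soft-limit
 * minus what other memory devices already use, (b) the memslots still free,
 * and (c) the fraction of the remaining device memory region this device
 * occupies. Assumes used_by_devices < MEMSLOT_SOFT_LIMIT.
 */
static unsigned int memslot_limit_sketch(unsigned int total_memslots,
                                         unsigned int free_memslots,
                                         unsigned int used_by_devices,
                                         uint64_t device_size,
                                         uint64_t remaining_region_size)
{
    unsigned int limit;

    /* Without the historic 509 memslots, suggest a single memslot only. */
    if (total_memslots < SAFE_MAX_MEMSLOTS) {
        return 1;
    }
    limit = MEMSLOT_SOFT_LIMIT - used_by_devices;
    if (free_memslots < limit) {
        limit = free_memslots;
    }
    /* Scale by the share of the remaining region this device will occupy. */
    limit = (unsigned int)(((uint64_t)limit * device_size) /
                           remaining_region_size);
    return limit ? limit : 1;
}

/*
 * Reject plugging a vhost device that supports fewer than 509 memslots while
 * some memory device has made an automatic multi-memslot decision.
 */
static int vhost_plug_allowed_sketch(unsigned int vhost_max_memslots,
                                     int auto_decision_active)
{
    return !(auto_decision_active && vhost_max_memslots < SAFE_MAX_MEMSLOTS);
}
```

For example, with 509 total memslots, 300 free, none yet used by memory devices, and a device covering half of the remaining region, the sketch suggests 256 / 2 = 128 memslots; with fewer than 509 total it always suggests 1.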
This commit is contained in:
parent cd89c065b0
commit a2335113ae

5 changed files with 147 additions and 4 deletions
```diff
@@ -14,6 +14,7 @@
 #define MEMORY_DEVICE_H
 
 #include "hw/qdev-core.h"
+#include "qemu/typedefs.h"
 #include "qapi/qapi-types-machine.h"
 #include "qom/object.h"
 
@@ -99,6 +100,15 @@ struct MemoryDeviceClass {
      */
     MemoryRegion *(*get_memory_region)(MemoryDeviceState *md, Error **errp);
 
+    /*
+     * Optional: Instruct the memory device to decide how many memory slots
+     * it requires, not exceeding the given limit.
+     *
+     * Called exactly once when pre-plugging the memory device, before
+     * querying the number of memslots using @get_memslots the first time.
+     */
+    void (*decide_memslots)(MemoryDeviceState *md, unsigned int limit);
+
     /*
      * Optional for memory devices that require only a single memslot,
      * required for all other memory devices: Return the number of memslots
@@ -129,9 +139,31 @@ struct MemoryDeviceClass {
     MemoryDeviceInfo *info);
 };
 
+/*
+ * Traditionally, KVM/vhost in many setups supported 509 memslots, whereby
+ * 253 memslots were "reserved" for boot memory and other devices (such
+ * as PCI BARs, which can get mapped dynamically) and 256 memslots were
+ * dedicated for DIMMs. These magic numbers worked reliably in the past.
+ *
+ * Further, using many memslots can negatively affect performance, so setting
+ * the soft-limit of memslots used by memory devices to the traditional
+ * DIMM limit of 256 sounds reasonable.
+ *
+ * If we have less than 509 memslots, we will instruct memory devices that
+ * support automatically deciding how many memslots to use to only use a single
+ * one.
+ *
+ * Hotplugging vhost devices with at least 509 memslots is not expected to
+ * cause problems, not even when devices automatically decided how many memslots
+ * to use.
+ */
+#define MEMORY_DEVICES_SOFT_MEMSLOT_LIMIT 256
+#define MEMORY_DEVICES_SAFE_MAX_MEMSLOTS 509
+
 MemoryDeviceInfoList *qmp_memory_device_list(void);
 uint64_t get_plugged_memory_size(void);
+unsigned int memory_devices_get_reserved_memslots(void);
+bool memory_devices_memslot_auto_decision_active(void);
 void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
                             const uint64_t *legacy_align, Error **errp);
 void memory_device_plug(MemoryDeviceState *md, MachineState *ms);
```
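The `decide_memslots` / `get_memslots` contract documented in the header can be exercised with a toy stand-alone model. The `Toy*` names below are invented for illustration; a real device implements these as `MemoryDeviceClass` callbacks inside QEMU.

```c
#include <assert.h>

/* Toy stand-in for a memory device using the decide_memslots/get_memslots
 * contract (names invented for illustration, not QEMU code). */
typedef struct ToyDevice {
    unsigned int nr_memslots;   /* 0 until toy_decide_memslots() has run */
} ToyDevice;

/* Optional callback: pick a memslot count, not exceeding the given limit. */
static void toy_decide_memslots(ToyDevice *md, unsigned int limit)
{
    unsigned int wanted = 8;    /* device-specific preference */
    md->nr_memslots = wanted < limit ? wanted : limit;
}

/* Must return a fixed value once the decision has been made; devices that
 * never decide fall back to a single memslot. */
static unsigned int toy_get_memslots(ToyDevice *md)
{
    return md->nr_memslots ? md->nr_memslots : 1;
}

/* Pre-plug calls decide_memslots() exactly once, before the first
 * get_memslots() query, mirroring the ordering the header documents. */
static unsigned int toy_pre_plug(ToyDevice *md, unsigned int limit)
{
    toy_decide_memslots(md, limit);
    return toy_get_memslots(md);
}
```

The ordering matters: once `get_memslots` has been queried, the device's answer must stay stable, which is why the header specifies that `decide_memslots` is called exactly once during pre-plug.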