Problem Description
For Harvester to successfully migrate a virtual machine from one node to another, the source and target nodes must have compatible CPU models and features.
If the CPU model of a virtual machine isn't specified, KubeVirt assigns it the default host-model configuration so that the virtual machine has the CPU model closest to the one used on the host node.
KubeVirt automatically adjusts the node selectors of the associated virt-launcher Pod based on this configuration. If the CPU models and features of the source and target nodes do not match, the live migration may fail.
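For reference, this is a minimal sketch of what the host-model configuration looks like when written out explicitly in a virtual machine spec (omitting the model field has the same effect, since host-model is the default):

spec:
  template:
    spec:
      domain:
        cpu:
          model: host-model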
Let's examine an example.
For example, when a virtual machine running on a node with the SierraForest CPU model is migrated for the first time, the following key-value pairs are added to the spec.nodeSelector field in the Pod spec.
spec:
  nodeSelector:
    cpu-model-migration.node.kubevirt.io/SierraForest: "true"
    cpu-feature.node.kubevirt.io/fpu: "true"
    cpu-feature.node.kubevirt.io/vme: "true"
The above nodeSelector configuration is retained for subsequent migrations, which may fail if the new target node doesn't have the corresponding features or model.
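You can inspect the node selector a running virtual machine is currently pinned to by reading it from its virt-launcher Pod. A sketch, assuming a virtual machine named my-vm in the default namespace; the kubevirt.io/domain label is one KubeVirt applies to virt-launcher Pods, but verify it on your cluster:

kubectl get pod -n default -l kubevirt.io/domain=my-vm \
  -o jsonpath='{.items[0].spec.nodeSelector}'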
For example, compare the CPU model and feature labels added by KubeVirt to the following two nodes:
# Node A
labels:
  cpu-model-migration.node.kubevirt.io/SierraForest: "true"
  cpu-feature.node.kubevirt.io/fpu: "true"
  cpu-feature.node.kubevirt.io/vme: "true"

# Node B
labels:
  cpu-model-migration.node.kubevirt.io/SierraForest: "true"
  cpu-feature.node.kubevirt.io/vme: "true"
This virtual machine fails to migrate to Node B because the fpu feature label is missing, even if the virtual machine doesn't actually use that feature. Setting up a common CPU model resolves this issue.
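To check which of these labels are present on your own nodes, you can filter the node object for the KubeVirt CPU label prefixes. A sketch, assuming a node named node-a:

kubectl get node node-a -o yaml | grep -E 'cpu-(model|feature)'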
How to Set Up a Common CPU Model
You can define a custom CPU model to ensure that the spec.nodeSelector configuration in the Pod spec is assigned a CPU model that is compatible and common to all nodes in the cluster.
Consider this example.
We have the following node information:
# Node A
labels:
  cpu-model.node.kubevirt.io/IvyBridge: "true"
  cpu-feature.node.kubevirt.io/fpu: "true"
  cpu-feature.node.kubevirt.io/vme: "true"

# Node B
labels:
  cpu-model.node.kubevirt.io/IvyBridge: "true"
  cpu-feature.node.kubevirt.io/vme: "true"
If we set IvyBridge as the CPU model in the virtual machine spec, KubeVirt adds only cpu-model.node.kubevirt.io/IvyBridge under spec.nodeSelector in the Pod spec.
# Virtual Machine Spec
spec:
  template:
    spec:
      domain:
        cpu:
          model: IvyBridge

# Pod spec
spec:
  nodeSelector:
    cpu-model.node.kubevirt.io/IvyBridge: "true"
With this configuration, your virtual machine can be migrated to any node that has the label cpu-model.node.kubevirt.io/IvyBridge.
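If you'd rather apply the model without opening an editor, the same virtual machine spec change can be made with a merge patch. A minimal sketch, assuming a VirtualMachine named my-vm in the default namespace (a running virtual machine must be restarted for the change to take effect):

kubectl patch vm my-vm -n default --type merge \
  -p '{"spec":{"template":{"spec":{"domain":{"cpu":{"model":"IvyBridge"}}}}}}'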
Set Up Cluster-Wide Configuration
If your virtual machines run only on a specific CPU model, you can set up a cluster-wide CPU model in the kubevirt resource.
You can edit it with kubectl edit kubevirt kubevirt -n harvester-system, then set the CPU model you want under spec.configuration:
spec:
  configuration:
    cpuModel: IvyBridge
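Equivalently, you can apply the same setting non-interactively with a merge patch (a sketch; the resource and namespace match the kubectl edit command above):

kubectl patch kubevirt kubevirt -n harvester-system --type merge \
  -p '{"spec":{"configuration":{"cpuModel":"IvyBridge"}}}'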
Then, when a new virtual machine starts or an existing virtual machine restarts, the cluster-wide setting is applied (one way to trigger a restart is shown after this list). If you configure CPU models in both locations, the system resolves them in this order of precedence:
- CPU model in the virtual machine spec (takes precedence).
- CPU model in the KubeVirt spec.
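One way to restart an existing virtual machine so it picks up the new setting, assuming the virtctl CLI is installed and a virtual machine named my-vm in the default namespace:

virtctl restart my-vm -n default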
