
· 4 min read
Vicente Cheng

In earlier versions of Harvester (v1.0.3 and prior), Longhorn volumes may get corrupted during the replica rebuilding process (reference: Analysis: Potential Data/Filesystem Corruption). In Harvester v1.1.0 and later versions, the Longhorn team has fixed this issue. This article covers manual steps you can take to scan the VM's filesystem and repair it if needed.

Stop the VM and Back Up the Volume

Before you scan the filesystem, it is recommended that you back up the volume first. As an example, refer to the following steps to stop the VM and back up the volume.

  • Find the target VM.


  • Stop the target VM.


The target VM is stopped, and the related volumes are detached. Now go to the Longhorn UI to back up this volume.

  • Enable Developer Tools & Features (Preferences -> Enable Developer Tools & Features).


  • Click the button and select Edit Config to open the VM's config page.


  • Go to the Volumes tab and select Check volume details.


  • Click the dropdown menu on the right side and select Attach to attach the volume again.


  • Select the attached node.


  • Check the volume attached under Volume Details and select Take Snapshot on this volume page.


  • Confirm that the snapshot is ready.


Now that you have completed the volume backup, you can scan and repair the root filesystem.

Scan and Repair the Root Filesystem

This section describes how to scan and repair the filesystem (e.g., XFS, EXT4) using the related tools.

Before scanning, you need to know the filesystem's device/partition.

  • Identify the filesystem's device by checking the major and minor numbers of that device.
  1. Obtain the major and minor numbers from the listed volume information.

    In the following example, the volume name is pvc-ea7536c0-301f-479e-b2a2-e40ddc864b58.

    harvester-node-0:~ # ls /dev/longhorn/pvc-ea7536c0-301f-479e-b2a2-e40ddc864b58 -al
    brw-rw---- 1 root root 8, 0 Oct 23 14:43 /dev/longhorn/pvc-ea7536c0-301f-479e-b2a2-e40ddc864b58

    The output indicates that the major and minor numbers are 8:0.

  2. Obtain the device name from the output of the lsblk command.

    harvester-node-0:~ # lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
    loop0 7:0 0 3G 1 loop /
    sda 8:0 0 40G 0 disk
    ├─sda1 8:1 0 2M 0 part
    ├─sda2 8:2 0 20M 0 part
    └─sda3 8:3 0 40G 0 part

    The output indicates that 8:0 are the major and minor numbers of the device named sda. Therefore, /dev/sda is related to the volume named pvc-ea7536c0-301f-479e-b2a2-e40ddc864b58.

  • You should now know the filesystem's partition. In this example, sda3 is the filesystem's partition.
  • Use the Filesystem toolbox image to scan and repair.
# docker run -it --rm --privileged registry.opensuse.org/isv/rancher/harvester/toolbox/main/fs-toolbox:latest -- bash

Then, scan the target device from inside the toolbox container.
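If you want to double-check the volume-to-device mapping before scanning, the two lookups above can be combined into a short shell snippet run as root on the host. This is a minimal sketch; VOLUME is the example PVC name from above and should be replaced with your own volume name.

VOLUME=pvc-ea7536c0-301f-479e-b2a2-e40ddc864b58
# stat prints the device's major and minor numbers in hex; convert them to decimal
MAJMIN=$(stat -c '%t:%T' /dev/longhorn/$VOLUME)
printf -v MAJMIN '%d:%d' "0x${MAJMIN%%:*}" "0x${MAJMIN##*:}"
# find the disk with the same MAJ:MIN pair in the lsblk output
lsblk | grep -w "$MAJMIN"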

XFS

When scanning an XFS filesystem, use the xfs_repair command and specify the problematic partition of the device.

In the following example, /dev/sda3 is the problematic partition.

# xfs_repair -n /dev/sda3

To repair the corrupted partition, run the following command.

# xfs_repair /dev/sda3

EXT4

When scanning an EXT4 filesystem, use the e2fsck command as follows, where /dev/sde1 is the problematic partition of the device.

# e2fsck -f /dev/sde1

To repair the corrupted partition, run the following command.

# e2fsck -fp /dev/sde1

After running the e2fsck command, you should also see logs related to scanning and repairing the partition. The scan and repair are successful if these logs contain no errors.

Detach the Volume and Start the VM Again

After the corrupted partition is scanned and repaired, detach the volume and try to start the related VM again.

  • Detach the volume from the Longhorn UI.


  • Start the related VM again from the Harvester UI.


Your VM should now work normally.

· 2 min read
Kiefer Chang

Harvester replicates volume data across disks in a cluster. Before removing a disk, the user needs to evict the replicas on that disk to other disks to preserve the volumes' configured availability. For more information about eviction in Longhorn, please check Evicting Replicas on Disabled Disks or Nodes.

Preparation

This document describes how to evict Longhorn disks using the kubectl command. Before that, users must ensure the environment is set up correctly. There are two recommended ways to do this:

  1. Log in to any management node and switch to root (sudo -i).
  2. Download the Kubeconfig file and use it locally:
    • Install the kubectl and yq programs manually.
    • Open the Harvester GUI, click support at the bottom left of the page, and click Download KubeConfig to download the Kubeconfig file.
    • Set the Kubeconfig file's path in the KUBECONFIG environment variable, for example, export KUBECONFIG=/path/to/kubeconfig.
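For example, after downloading the Kubeconfig file, a quick check that the tools and cluster access are working could look like this (the path is a placeholder):

export KUBECONFIG=/path/to/kubeconfig
kubectl get nodes   # should list the Harvester nodes
yq --version        # confirm yq is available for the commands below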

Evicting replicas from a disk

  1. List Longhorn nodes (names are identical to Kubernetes nodes):

    kubectl get -n longhorn-system nodes.longhorn.io

    Sample output:

    NAME    READY   ALLOWSCHEDULING   SCHEDULABLE   AGE
    node1   True    true              True          24d
    node2   True    true              True          24d
    node3   True    true              True          24d
  2. List disks on a node. Assume we want to evict replicas of a disk on node1:

    kubectl get -n longhorn-system nodes.longhorn.io node1 -o yaml | yq e '.spec.disks'

    Sample output:

    default-disk-ed7af10f5b8356be:
      allowScheduling: true
      evictionRequested: false
      path: /var/lib/harvester/defaultdisk
      storageReserved: 36900254515
      tags: []
  3. Assume disk default-disk-ed7af10f5b8356be is the target we want to evict replicas out of.

    Edit the node:

    kubectl edit -n longhorn-system nodes.longhorn.io node1 

    Update these two fields and save (a non-interactive kubectl patch alternative is sketched after this list):

    • spec.disks.<disk_name>.allowScheduling to false
    • spec.disks.<disk_name>.evictionRequested to true

    Sample editing:

    default-disk-ed7af10f5b8356be:
      allowScheduling: false
      evictionRequested: true
      path: /var/lib/harvester/defaultdisk
      storageReserved: 36900254515
      tags: []
  4. Wait for all replicas on the disk to be evicted.

    Get current scheduled replicas on the disk:

    kubectl get -n longhorn-system nodes.longhorn.io node1 -o yaml | yq e '.status.diskStatus.default-disk-ed7af10f5b8356be.scheduledReplica'

    Sample output:

    pvc-86d3d212-d674-4c64-b69b-4a2eb1df2272-r-7b422db7: 5368709120
    pvc-b06f0b09-f30c-4936-8a2a-425b993dd6cb-r-bb0fa6b3: 2147483648
    pvc-b844bcc6-3b06-4367-a136-3909251cb560-r-08d1ab3c: 53687091200
    pvc-ea6e0dff-f446-4a38-916a-b3bea522f51c-r-193ca5c6: 10737418240

    Run the command repeatedly, and the output should eventually become an empty map:

    {}

    This means Longhorn has evicted all replicas on the disk to other disks.

    note

    If a replica never gets evicted from the disk, open the Longhorn GUI and check whether there is enough free space on other disks.
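As an alternative to the kubectl edit command in step 3, the same two fields can be updated non-interactively with kubectl patch. The following is a minimal sketch that reuses the node and disk names from the example above; a JSON merge patch only touches the listed fields and leaves the rest of the disk configuration intact.

kubectl patch -n longhorn-system nodes.longhorn.io node1 --type merge \
  -p '{"spec":{"disks":{"default-disk-ed7af10f5b8356be":{"allowScheduling":false,"evictionRequested":true}}}}'

# then repeat the check from step 4 until the map of scheduled replicas is empty
watch -n 10 "kubectl get -n longhorn-system nodes.longhorn.io node1 -o yaml | yq e '.status.diskStatus.default-disk-ed7af10f5b8356be.scheduledReplica'"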

· 2 min read
Date Huang

NIC Naming Scheme changed after upgrading to v1.0.1

systemd in openSUSE Leap 15.3, which is the base OS of Harvester, was upgraded to 246.16-150300.7.39.1. This version enables an additional naming scheme, sle15-sp3, which is v238 with bridge_no_slot. When a NIC is associated with a PCI bridge, systemd no longer generates ID_NET_NAME_SLOT, and the naming policy in /usr/lib/systemd/network/99-default.link falls back to ID_NET_NAME_PATH. Because of this change, NIC names might change on your Harvester nodes during the upgrade from v1.0.0 to v1.0.1-rc1 or above, which can cause network issues associated with NIC names.
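If you want to see which names udev derives for a given interface, and whether a slot-based name is available at all, you can query the net_id builtin directly. This is a diagnostic sketch; eth0 is a placeholder for the interface you are inspecting.

# print the ID_NET_NAME_* properties udev computes for this interface
udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null | grep ID_NET_NAME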

Affected Settings and Workaround

Startup Network Configuration

When NIC names change, you need to update the names in /oem/99_custom.yaml. You can use the migration script to change the NIC names that are associated with a PCI bridge.

tip

You can find an identical machine to test the naming changes before applying the configuration to production machines.

You can simply execute the script as root in v1.0.0 via

# python3 udev_v238_sle15-sp3.py

It will output the patched configuration to the screen, and you can compare it to the original one to ensure there are no exceptions (e.g., use vimdiff to check the configuration):

# python3 udev_v238_sle15-sp3.py > /oem/test
# vimdiff /oem/test /oem/99_custom.yaml

After checking the result, you can execute the script with --really-want-to-do to overwrite the configuration. It will also back up the original configuration file with a timestamp before patching it.

# python3 udev_v238_sle15-sp3.py --really-want-to-do

Harvester VLAN Network Configuration

If your VLAN network is associated with a NIC name directly, without bonding, you will also need to migrate the ClusterNetwork and NodeNetwork configurations as described below, in addition to the previous section.

note

If your VLAN network is associated with the bonding name in /oem/99_custom.yaml, you can skip this section.

Modify ClusterNetworks

You need to modify ClusterNetworks via

$ kubectl edit clusternetworks vlan

Search for this pattern:

config:
  defaultPhysicalNIC: <Your NIC name>

and change it to the new NIC name.

Modify NodeNetworks

You need to modify NodeNetworks via

$ kubectl edit nodenetworks <Node name>-vlan

Search for this pattern:

spec:
  nic: <Your NIC name>

and change it to the new NIC name.
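Before and after editing, you can confirm which NIC names are currently referenced with read-only queries such as the following. This is a sketch; the resource names and field paths follow the examples shown above.

kubectl get clusternetworks vlan -o yaml | yq e '.config.defaultPhysicalNIC'
kubectl get nodenetworks -o yaml | yq e '.items[].spec.nic'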

· 4 min read
Date Huang

What is the default behavior of a VM with multiple NICs

In some scenarios, you'll set up two or more NICs in your VM to serve different networking purposes. If all networks are set up with DHCP by default, you might get random connectivity issues. While they might seem fixed after rebooting the VM, the VM will still lose its connection randomly after some time.

How to identify connectivity issues

In a Linux VM, you can use commands from the iproute2 package to identify the default route.

In your VM, execute the following command:

ip route show default
tip

If you get an access denied error, run the command using sudo.

The output of this command will only show the default route with the gateway and VM IP of the primary network interface (eth0 in the example below).

default via <Gateway IP> dev eth0 proto dhcp src <VM IP> metric 100

Here is the full example:

$ ip route show default
default via 192.168.0.254 dev eth0 proto dhcp src 192.168.0.100 metric 100

However, if the issue covered in this KB occurs, you'll only be able to connect to the VM via the VNC or serial console.

Once connected, you can run the same command again:

$ ip route show default

However, this time you'll get a default route with an incorrect gateway IP. For example:

default via <Incorrect Gateway IP> dev eth0 proto dhcp src <VM's IP> metric 100
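When this happens, it also helps to look beyond the default route. The following read-only commands show which NIC owns which IP and subnet and every route the DHCP clients have installed, which makes it easier to spot the wrong gateway.

ip -4 addr show   # which IP and subnet is configured on which NIC
ip route show     # all routes, including those installed by each DHCP client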

Why do connectivity issues occur randomly

In a standard setup, cloud-based VMs typically use DHCP to configure their NICs. DHCP sets an IP and a gateway for each NIC, and a default route to the gateway IP is also added, so you can use the VM's IP to connect to it.

However, Linux distributions start multiple DHCP clients at the same time and do not have a priority system. This means that if you have two or more NICs configured with DHCP, the clients will race to configure the default route, and depending on the DHCP script of the running Linux distribution, there is no guarantee which default route will be configured.

As the default route might change in every DHCP renewing process or after every OS reboot, this will create network connectivity issues.

How to avoid the random connectivity issues

You can easily avoid these connectivity issues by having only one NIC attached to the VM and having only one IP and one gateway configured.

However, for VMs in more complex infrastructures, it is often not possible to use just one NIC. For example, your infrastructure might have a storage network and a service network. For security reasons, the storage network is isolated from the service network and uses a separate subnet. In this case, you must have two NICs to connect to both the service and storage networks.

You can choose a solution below that meets your requirements and security policy.

Disable DHCP on secondary NIC

As mentioned above, the problem is caused by a race condition between two DHCP clients. One solution is to disable DHCP for all NICs and configure them with static IPs only. Alternatively, you can configure the secondary NIC with a static IP and keep the primary NIC enabled with DHCP.

  1. To configure the primary NIC with a static IP (eth0 in this example), you can edit the file /etc/sysconfig/network/ifcfg-eth0 with the following values:
BOOTPROTO='static'
IPADDR='192.168.0.100'
NETMASK='255.255.255.0'

Alternatively, if you want to keep the primary NIC on DHCP (eth0 in this example), use the following values instead:

BOOTPROTO='dhcp'
DHCLIENT_SET_DEFAULT_ROUTE='yes'
  2. You need to configure the default route by editing the file /etc/sysconfig/network/ifroute-eth0 (if you configured the primary NIC using DHCP, skip this step):
# Destination  Dummy/Gateway  Netmask  Interface
default 192.168.0.254 - eth0
warning

Do not add another default route for your secondary NIC.

  3. Finally, configure a static IP for the secondary NIC by editing the file /etc/sysconfig/network/ifcfg-eth1:
BOOTPROTO='static'
IPADDR='10.0.0.100'
NETMASK='255.255.255.0'

Cloud-Init config

network:
  version: 1
  config:
  - type: physical
    name: eth0
    subnets:
    - type: dhcp
  - type: physical
    name: eth1
    subnets:
    - type: static
      address: 10.0.0.100/24

Disable secondary NIC default route from DHCP

If your secondary NIC needs to get its IP from DHCP, you'll need to disable its default route configuration.

  1. Confirm that the primary NIC configures its default route in the file /etc/sysconfig/network/ifcfg-eth0:
BOOTPROTO='dhcp'
DHCLIENT_SET_DEFAULT_ROUTE='yes'
  2. Disable the secondary NIC default route configuration by editing the file /etc/sysconfig/network/ifcfg-eth1:
BOOTPROTO='dhcp'
DHCLIENT_SET_DEFAULT_ROUTE='no'
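Depending on the guest distribution, you may need to reload the network configuration for the changes above to take effect. On SUSE-based guests that use wicked (which matches the /etc/sysconfig/network layout used above), a minimal sketch would be:

# re-read the ifcfg/ifroute files and re-apply the configuration
wicked ifreload all
# verify that only the intended default route remains
ip route show default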

Cloud-Init config

This solution is not available in Cloud-Init; Cloud-Init does not expose any option to change this DHCP behavior.

· 16 min read
PoAn Yang

How does Harvester schedule a VM?

Harvester doesn't directly schedule a VM in Kubernetes; it relies on KubeVirt and the VirtualMachine custom resource. When the request to create a new VM is sent, a VirtualMachineInstance object is created, which in turn creates the corresponding Pod.

The whole VM creation process leverages kube-scheduler, which allows Harvester to use nodeSelector, affinity, and resource requests/limits to influence where a VM will be deployed.

How does kube-scheduler decide where to deploy a VM?

First, kube-scheduler finds Nodes available to run a pod. After that, kube-scheduler scores each available Node by a list of plugins like ImageLocality, InterPodAffinity, NodeAffinity, etc.

Finally, kube-scheduler sums the plugin scores for each Node and selects the Node with the highest score to deploy the Pod.

For example, let's say we have a three-node Harvester cluster with 6 CPU cores and 16 GB of RAM each, and we want to deploy a VM with 1 CPU and 1 GB of RAM (without resource overcommit).

kube-scheduler will summarize the scores, as displayed in Table 1 below, and will select the node with the highest score, harvester-node-2 in this case, to deploy the VM.

kube-scheduler logs
virt-launcher-vm-without-overcommit-75q9b -> harvester-node-0: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9960 memory:15166603264] ,score 0,
virt-launcher-vm-without-overcommit-75q9b -> harvester-node-1: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5560 memory:6352273408] ,score 45,
virt-launcher-vm-without-overcommit-75q9b -> harvester-node-2: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5350 memory:5941231616] ,score 46,

virt-launcher-vm-without-overcommit-75q9b -> harvester-node-0: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9960 memory:15166603264] ,score 4,
virt-launcher-vm-without-overcommit-75q9b -> harvester-node-1: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5560 memory:6352273408] ,score 34,
virt-launcher-vm-without-overcommit-75q9b -> harvester-node-2: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5350 memory:5941231616] ,score 37,

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="ImageLocality" node="harvester-node-0" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="ImageLocality" node="harvester-node-1" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="ImageLocality" node="harvester-node-2" score=54

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="InterPodAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="InterPodAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="InterPodAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeResourcesLeastAllocated" node="harvester-node-0" score=4
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeResourcesLeastAllocated" node="harvester-node-1" score=34
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeResourcesLeastAllocated" node="harvester-node-2" score=37

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodePreferAvoidPods" node="harvester-node-0" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodePreferAvoidPods" node="harvester-node-2" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodePreferAvoidPods" node="harvester-node-1" score=1000000

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="PodTopologySpread" node="harvester-node-0" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="PodTopologySpread" node="harvester-node-1" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="PodTopologySpread" node="harvester-node-2" score=200

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="TaintToleration" node="harvester-node-0" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="TaintToleration" node="harvester-node-1" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="TaintToleration" node="harvester-node-2" score=100

"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeResourcesBalancedAllocation" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeResourcesBalancedAllocation" node="harvester-node-1" score=45
"Plugin scored node for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" plugin="NodeResourcesBalancedAllocation" node="harvester-node-2" score=46

"Calculated node's final score for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" node="harvester-node-0" score=1000358
"Calculated node's final score for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" node="harvester-node-1" score=1000433
"Calculated node's final score for pod" pod="default/virt-launcher-vm-without-overcommit-75q9b" node="harvester-node-2" score=1000437

AssumePodVolumes for pod "default/virt-launcher-vm-without-overcommit-75q9b", node "harvester-node-2"
AssumePodVolumes for pod "default/virt-launcher-vm-without-overcommit-75q9b", node "harvester-node-2": all PVCs bound and nothing to do
"Attempting to bind pod to node" pod="default/virt-launcher-vm-without-overcommit-75q9b" node="harvester-node-2"

Table 1 - kube-scheduler scores example

                                   harvester-node-0   harvester-node-1   harvester-node-2
ImageLocality                      54                 54                 54
InterPodAffinity                   0                  0                  0
NodeResourcesLeastAllocated        4                  34                 37
NodeAffinity                       0                  0                  0
NodePreferAvoidPods                1000000            1000000            1000000
PodTopologySpread                  200                200                200
TaintToleration                    100                100                100
NodeResourcesBalancedAllocation    0                  45                 46
Total                              1000358            1000433            1000437

Why are VMs distributed unevenly with overcommit?

With resource overcommit, Harvester modifies the resource requests. By default, the overcommit configuration is {"cpu": 1600, "memory": 150, "storage": 200}. This means that if we request a VM with 1 CPU and 1 GB of RAM, its resources.requests.cpu will become 62m.

note

The unit suffix m stands for "thousandth of a core."

To explain this, let's take the case of CPU overcommit. 1 CPU is equal to 1000m CPU, and with the default overcommit configuration of "cpu": 1600, the CPU request becomes 16 times smaller than the limit. Here is the calculation: 1000m * 100 / 1600 = 62m.
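The memory request follows the same formula: with the default "memory": 150 setting, a VM configured with 1 Gi (1024 Mi) of RAM would request roughly 1024Mi * 100 / 150 ≈ 682Mi. This is an illustrative calculation based on the formula above, not a value taken from the scheduler logs below.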

Now, we can see how overcommitting influences kube-scheduler scores.

In this example, we use a three-node Harvester cluster with 6 CPU cores and 16 GB of RAM each. We will deploy two VMs with 1 CPU and 1 GB of RAM, and we will compare the scores for both the "with-overcommit" and "without-overcommit" cases.

The results in Table 2 and Table 3 below can be explained as follows:

In the "with-overcommit" case, both VMs are deployed on harvester-node-2, however in the "without-overcommit" case, the VM1 is deployed on harvester-node-2, and VM2 is deployed on harvester-node-1.

If we look at the detailed scores, we'll see the Total Score of harvester-node-2 vary from 1000459 to 1000461 in the "with-overcommit" case, and from 1000437 to 1000382 in the "without-overcommit" case. This is because resource overcommit influences request-cpu and request-memory.

In the "with-overcommit" case, the request-cpu changes from 4412m to 4474m. The difference between the two numbers is 62m, which is what we calculated above. However, in the "without-overcommit" case, we send real requests to kube-scheduler, so the request-cpu changes from 5350m to 6350m.

Finally, since all plugins except NodeResourcesBalancedAllocation and NodeResourcesLeastAllocated give the same score to each node, the difference between nodes comes down to these two scores.

From the results, we can see that the overcommit feature influences the final score of each Node, so VMs are distributed unevenly. Although the harvester-node-2 score for VM 2 is higher than for VM 1, the score doesn't always increase. In Table 4, we keep deploying VMs with 1 CPU and 1 GB of RAM, and we can see the score of harvester-node-2 start decreasing from the 11th VM. The behavior of kube-scheduler depends on your cluster resources and the workloads you have deployed.

kube-scheduler logs for vm1-with-overcommit
virt-launcher-vm1-with-overcommit-ljlmq -> harvester-node-0: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9022 memory:14807289856] ,score 0,
virt-launcher-vm1-with-overcommit-ljlmq -> harvester-node-1: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4622 memory:5992960000] ,score 58,
virt-launcher-vm1-with-overcommit-ljlmq -> harvester-node-2: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4412 memory:5581918208] ,score 59,

virt-launcher-vm1-with-overcommit-ljlmq -> harvester-node-0: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9022 memory:14807289856] ,score 5,
virt-launcher-vm1-with-overcommit-ljlmq -> harvester-node-1: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4622 memory:5992960000] ,score 43,
virt-launcher-vm1-with-overcommit-ljlmq -> harvester-node-2: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4412 memory:5581918208] ,score 46,

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="InterPodAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="InterPodAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="InterPodAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeResourcesLeastAllocated" node="harvester-node-0" score=5
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeResourcesLeastAllocated" node="harvester-node-1" score=43
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeResourcesLeastAllocated" node="harvester-node-2" score=46

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodePreferAvoidPods" node="harvester-node-0" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodePreferAvoidPods" node="harvester-node-1" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodePreferAvoidPods" node="harvester-node-2" score=1000000

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="PodTopologySpread" node="harvester-node-0" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="PodTopologySpread" node="harvester-node-1" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="PodTopologySpread" node="harvester-node-2" score=200

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="TaintToleration" node="harvester-node-0" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="TaintToleration" node="harvester-node-1" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="TaintToleration" node="harvester-node-2" score=100

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeResourcesBalancedAllocation" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeResourcesBalancedAllocation" node="harvester-node-1" score=58
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="NodeResourcesBalancedAllocation" node="harvester-node-2" score=59

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="ImageLocality" node="harvester-node-0" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="ImageLocality" node="harvester-node-1" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" plugin="ImageLocality" node="harvester-node-2" score=54

"Calculated node's final score for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" node="harvester-node-0" score=1000359
"Calculated node's final score for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" node="harvester-node-1" score=1000455
"Calculated node's final score for pod" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" node="harvester-node-2" score=1000459

AssumePodVolumes for pod "default/virt-launcher-vm1-with-overcommit-ljlmq", node "harvester-node-2"
AssumePodVolumes for pod "default/virt-launcher-vm1-with-overcommit-ljlmq", node "harvester-node-2": all PVCs bound and nothing to do
"Attempting to bind pod to node" pod="default/virt-launcher-vm1-with-overcommit-ljlmq" node="harvester-node-2"
kube-scheduler logs for vm2-with-overcommit
virt-launcher-vm2-with-overcommit-pwrx4 -> harvester-node-0: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9022 memory:14807289856] ,score 0,
virt-launcher-vm2-with-overcommit-pwrx4 -> harvester-node-1: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4622 memory:5992960000] ,score 58,
virt-launcher-vm2-with-overcommit-pwrx4 -> harvester-node-2: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4474 memory:6476701696] ,score 64,

virt-launcher-vm2-with-overcommit-pwrx4 -> harvester-node-0: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9022 memory:14807289856] ,score 5,
virt-launcher-vm2-with-overcommit-pwrx4 -> harvester-node-1: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4622 memory:5992960000] ,score 43,
virt-launcher-vm2-with-overcommit-pwrx4 -> harvester-node-2: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:4474 memory:6476701696] ,score 43,

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodePreferAvoidPods" node="harvester-node-0" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodePreferAvoidPods" node="harvester-node-1" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodePreferAvoidPods" node="harvester-node-2" score=1000000

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="PodTopologySpread" node="harvester-node-0" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="PodTopologySpread" node="harvester-node-1" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="PodTopologySpread" node="harvester-node-2" score=200

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="TaintToleration" node="harvester-node-0" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="TaintToleration" node="harvester-node-1" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="TaintToleration" node="harvester-node-2" score=100

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeResourcesBalancedAllocation" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeResourcesBalancedAllocation" node="harvester-node-1" score=58
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeResourcesBalancedAllocation" node="harvester-node-2" score=64

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="ImageLocality" node="harvester-node-0" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="ImageLocality" node="harvester-node-1" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="ImageLocality" node="harvester-node-2" score=54

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="InterPodAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="InterPodAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="InterPodAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeResourcesLeastAllocated" node="harvester-node-0" score=5
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeResourcesLeastAllocated" node="harvester-node-1" score=43
"Plugin scored node for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" plugin="NodeResourcesLeastAllocated" node="harvester-node-2" score=43

"Calculated node's final score for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" node="harvester-node-0" score=1000359
"Calculated node's final score for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" node="harvester-node-1" score=1000455
"Calculated node's final score for pod" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" node="harvester-node-2" score=1000461

AssumePodVolumes for pod "default/virt-launcher-vm2-with-overcommit-pwrx4", node "harvester-node-2"
AssumePodVolumes for pod "default/virt-launcher-vm2-with-overcommit-pwrx4", node "harvester-node-2": all PVCs bound and nothing to do
"Attempting to bind pod to node" pod="default/virt-launcher-vm2-with-overcommit-pwrx4" node="harvester-node-2"
kube-scheduler logs for vm1-without-overcommit
virt-launcher-vm1-with-overcommit-6xqmq -> harvester-node-0: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9960 memory:15166603264] ,score 0,
virt-launcher-vm1-with-overcommit-6xqmq -> harvester-node-1: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5560 memory:6352273408] ,score 45,
virt-launcher-vm1-with-overcommit-6xqmq -> harvester-node-2: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5350 memory:5941231616] ,score 46,

virt-launcher-vm1-with-overcommit-6xqmq -> harvester-node-0: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9960 memory:15166603264] ,score 4,
virt-launcher-vm1-with-overcommit-6xqmq -> harvester-node-1: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5560 memory:6352273408] ,score 34,
virt-launcher-vm1-with-overcommit-6xqmq -> harvester-node-2: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5350 memory:5941231616] ,score 37,

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="InterPodAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="InterPodAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="InterPodAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeResourcesLeastAllocated" node="harvester-node-0" score=4
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeResourcesLeastAllocated" node="harvester-node-1" score=34
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeResourcesLeastAllocated" node="harvester-node-2" score=37

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodePreferAvoidPods" node="harvester-node-0" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodePreferAvoidPods" node="harvester-node-1" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodePreferAvoidPods" node="harvester-node-2" score=1000000

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="PodTopologySpread" node="harvester-node-0" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="PodTopologySpread" node="harvester-node-1" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="PodTopologySpread" node="harvester-node-2" score=200

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="TaintToleration" node="harvester-node-0" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="TaintToleration" node="harvester-node-1" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="TaintToleration" node="harvester-node-2" score=100

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeResourcesBalancedAllocation" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeResourcesBalancedAllocation" node="harvester-node-1" score=45
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="NodeResourcesBalancedAllocation" node="harvester-node-2" score=46

"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="ImageLocality" node="harvester-node-0" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="ImageLocality" node="harvester-node-1" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" plugin="ImageLocality" node="harvester-node-2" score=54

"Calculated node's final score for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" node="harvester-node-0" score=1000358
"Calculated node's final score for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" node="harvester-node-1" score=1000433
"Calculated node's final score for pod" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" node="harvester-node-2" score=1000437

AssumePodVolumes for pod "default/virt-launcher-vm1-with-overcommit-6xqmq", node "harvester-node-2"
AssumePodVolumes for pod "default/virt-launcher-vm1-with-overcommit-6xqmq", node "harvester-node-2": all PVCs bound and nothing to do
"Attempting to bind pod to node" pod="default/virt-launcher-vm1-with-overcommit-6xqmq" node="harvester-node-2"
kube-scheduler logs for vm2-without-overcommit
virt-launcher-vm2-without-overcommit-mf5vk -> harvester-node-0: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9960 memory:15166603264] ,score 0,
virt-launcher-vm2-without-overcommit-mf5vk -> harvester-node-1: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5560 memory:6352273408] ,score 45,
virt-launcher-vm2-without-overcommit-mf5vk -> harvester-node-2: NodeResourcesBalancedAllocation, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:6350 memory:7195328512] ,score 0,

virt-launcher-vm2-without-overcommit-mf5vk -> harvester-node-0: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:9960 memory:15166603264] ,score 4,
virt-launcher-vm2-without-overcommit-mf5vk -> harvester-node-1: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:5560 memory:6352273408] ,score 34,
virt-launcher-vm2-without-overcommit-mf5vk -> harvester-node-2: NodeResourcesLeastAllocated, map of allocatable resources map[cpu:6000 memory:16776437760], map of requested resources map[cpu:6350 memory:7195328512] ,score 28,

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="PodTopologySpread" node="harvester-node-0" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="PodTopologySpread" node="harvester-node-1" score=200
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="PodTopologySpread" node="harvester-node-2" score=200

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="TaintToleration" node="harvester-node-0" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="TaintToleration" node="harvester-node-1" score=100
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="TaintToleration" node="harvester-node-2" score=100

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeResourcesBalancedAllocation" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeResourcesBalancedAllocation" node="harvester-node-1" score=45
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeResourcesBalancedAllocation" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="ImageLocality" node="harvester-node-0" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="ImageLocality" node="harvester-node-1" score=54
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="ImageLocality" node="harvester-node-2" score=54

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="InterPodAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="InterPodAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="InterPodAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeResourcesLeastAllocated" node="harvester-node-0" score=4
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeResourcesLeastAllocated" node="harvester-node-1" score=34
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeResourcesLeastAllocated" node="harvester-node-2" score=28

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeAffinity" node="harvester-node-0" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeAffinity" node="harvester-node-1" score=0
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodeAffinity" node="harvester-node-2" score=0

"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodePreferAvoidPods" node="harvester-node-0" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodePreferAvoidPods" node="harvester-node-1" score=1000000
"Plugin scored node for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" plugin="NodePreferAvoidPods" node="harvester-node-2" score=1000000

"Calculated node's final score for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" node="harvester-node-0" score=1000358
"Calculated node's final score for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" node="harvester-node-1" score=1000433
"Calculated node's final score for pod" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" node="harvester-node-2" score=1000382

AssumePodVolumes for pod "default/virt-launcher-vm2-without-overcommit-mf5vk", node "harvester-node-1"
AssumePodVolumes for pod "default/virt-launcher-vm2-without-overcommit-mf5vk", node "harvester-node-1": all PVCs bound and nothing to do
"Attempting to bind pod to node" pod="default/virt-launcher-vm2-without-overcommit-mf5vk" node="harvester-node-1"

Table 2 - With Overcommit

VM 1 / VM 2                             harvester-node-0               harvester-node-1               harvester-node-2
request-cpu (m)                         9022 / 9022                    4622 / 4622                    4412 / 4474
request-memory                          14807289856 / 14807289856      5992960000 / 5992960000        5581918208 / 6476701696
NodeResourcesBalancedAllocation Score   0 / 0                          58 / 58                        59 / 64
NodeResourcesLeastAllocated Score       5 / 5                          43 / 43                        46 / 43
Other Scores                            1000354 / 1000354              1000354 / 1000354              1000354 / 1000354
Total Score                             1000359 / 1000359              1000455 / 1000455              1000459 / 1000461

Table 3 - Without Overcommit

VM 1 / VM 2                             harvester-node-0               harvester-node-1               harvester-node-2
request-cpu (m)                         9960 / 9960                    5560 / 5560                    5350 / 6350
request-memory                          15166603264 / 15166603264      6352273408 / 6352273408        5941231616 / 7195328512
NodeResourcesBalancedAllocation Score   0 / 0                          45 / 45                        46 / 0
NodeResourcesLeastAllocated Score       4 / 4                          34 / 34                        37 / 28
Other Scores                            1000354 / 1000354              1000354 / 1000354              1000354 / 1000354
Total Score                             1000358 / 1000358              1000433 / 1000433              1000437 / 1000382

Table 4

Score    harvester-node-0   harvester-node-1   harvester-node-2
VM 1     1000359            1000455            1000459
VM 2     1000359            1000455            1000461
VM 3     1000359            1000455            1000462
VM 4     1000359            1000455            1000462
VM 5     1000359            1000455            1000463
VM 6     1000359            1000455            1000465
VM 7     1000359            1000455            1000466
VM 8     1000359            1000455            1000467
VM 9     1000359            1000455            1000469
VM 10    1000359            1000455            1000469
VM 11    1000359            1000455            1000465
VM 12    1000359            1000455            1000457

How to avoid uneven distribution of VMs?

There are many plugins in kube-scheduler that we can use to influence the scores. For example, we can add a podAntiAffinity rule to avoid VMs with the same labels being deployed on the same node.

  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: harvesterhci.io/creator
              operator: Exists
          topologyKey: kubernetes.io/hostname
        weight: 100
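In a Harvester/KubeVirt VirtualMachine object, this affinity block belongs in the pod template of the VM spec. A minimal sketch of where to place it when editing an existing VM (the VM name and namespace are placeholders):

kubectl edit virtualmachine my-vm -n default
# then add the affinity block shown above under:
#   spec.template.spec.affinity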

How to see scores in kube-scheduler?

kube-scheduler is deployed as a static pod in Harvester. The manifest is located at /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml on each management node. We can add - --v=10 to the kube-scheduler container's command to show score logs.

kind: Pod
metadata:
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    # ...
    - --v=10
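After the scheduler restarts with the higher verbosity, the score lines can be pulled out of its logs. A sketch (the static pod is named after the management node it runs on, so replace the placeholder accordingly):

kubectl -n kube-system logs kube-scheduler-<management-node-name> | grep "Plugin scored node for pod"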