Node Pool Manifest Reference

A node pool manifest is a Kubernetes manifest that defines a set of worker nodes and how they are provisioned.

Node Pool Manifest

To create a node pool manifest, you need to create a YAML file following this structure:

Node Pool Manifest Structure
apiVersion: oks.dev/v1beta2
kind: NodePool
metadata:
  name: application-pool2-a
spec:
  desiredNodes: 2
  nodeType: tinav6.c2r4p2
  zones:
    - eu-west-2a
  upgradeStrategy:
    maxUnavailable: 1
    maxSurge: 0
    autoUpgradeEnabled: true
    autoUpgradeMaintenance:
      durationHours: 1
      startHour: 12
      weekDay: Tue
  autoHealing: true

Volumes Manifest

You can specify volumes for your worker nodes, under the volumes section of the node pool manifest.

If the volumes are valid, they are created along with the VM of each worker node managed by the node pool, mounted on the system, and formatted with a filesystem. If a volume is not valid, it causes an error and no VM is created.

This setting is limited by your current OKS plan and OUTSCALE account quotas.

In this example, we add a dedicated 300-GiB gp2 volume to use as a Longhorn volume, with synchronization between nodes/Subregions, to store application (or any other) data:

Volumes Manifest Sample: App Data
...
spec:
  volumes:
  - size: 300
    type: "gp2"
    dir: "/var/lib/longhorn"

You can use a small dedicated standard volume for Filebeat logs:

Volumes Manifest Sample: Logs
...
spec:
  volumes:
  - device: xvdl
    size: 2
    type: "standard"
    dir: /var/spool/filebeat

You can improve the boot performance of your pods by switching the root disk of the worker node to an io1 volume with 3000 IOPS:

Volumes Manifest Sample: Root
...
spec:
  volumes:
  - device: root
    size: 100
    iops: 3000
    type: "io1"

Or you can place Kubernetes-related data on a dedicated fast volume:

Volumes Manifest Sample: kubelet
...
spec:
  volumes:
  - device: xvdl
    size: 100
    iops: 3000
    type: "io1"
    dir: /var/lib/kubelet

This file contains the following parameters that you need to specify:

  • (optional) device: The /dev/XXX device where the volume will be located. It can be either root, xvdX or xvdXY, where "X" is a letter between B and Z, and "Y" is a letter between A and Z. If not specified, it will be generated automatically.

  • type: The type of volume, among the volume types supported by OUTSCALE (standard = default, gp2 or io1).

  • size: The size of the volume, in gibibytes (GiB). This parameter is required, except for the root volume, for which it defaults to 100.

  • (optional) iops: The number of I/O operations per second (IOPS). This parameter only needs to be specified if you create an io1 volume. The maximum number of IOPS allowed for io1 volumes is 13000 with a maximum performance ratio of 300 IOPS per gibibyte.

  • dir: The mount path for the volume. This parameter is required for all volumes except the root volume, for which it is not applicable.
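
Putting these parameters together, a single pool can combine a tuned root disk with an additional data volume. The device name and mount path below are illustrative values, not requirements:

Volumes Manifest Sample: Combined
...
spec:
  volumes:
  - device: root
    size: 100
    iops: 3000
    type: "io1"
  - device: xvdb
    size: 300
    type: "gp2"
    dir: "/var/lib/data"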

For more information about volume types and parameters, see About Volumes > Volume Types and IOPS.

Zones Manifest

You can specify multiple worker node zones for higher availability, under the zones section of the node pool manifest.

When multiple zones are specified, the desiredNodes parameter controls the number of VMs created in each specified zone.

Zones Manifest Sample
...
spec:
  zones:
  - eu-west-2a
  - eu-west-2b
...
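
Combined with desiredNodes, this spreads capacity across Subregions. The example below assumes desiredNodes applies to each zone individually, as described above:

Zones Manifest Sample: Two Zones with desiredNodes
...
spec:
  desiredNodes: 2    # 2 VMs created in each zone, i.e. 4 worker nodes in total
  zones:
  - eu-west-2a
  - eu-west-2b
...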

Physical Placement Manifest

Node pools support physical placement options. These options are applied on a best-effort basis (not strict) and are mutually exclusive. They are listed below.

  • nodeRepulseServer: Places nodes with the same value on different servers.

  • nodeAttractServer: Places nodes with the same value on the same server.

  • nodeRepulseCluster: Places nodes with the same value on different Cisco UCS clusters.

  • nodeAttractCluster: Places nodes with the same value on the same Cisco UCS cluster.

The YAML file must follow this structure:

Placement Manifest Sample
...
spec:
  physicalPlacement:
    nodeRepulseServer: application_2
...
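
Conversely, you can co-locate all nodes sharing a value on the same server. The value itself is an arbitrary label; application_2 below is just an illustration:

Placement Manifest Sample: Attraction
...
spec:
  physicalPlacement:
    nodeAttractServer: application_2
...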

Autohealing Manifest

Autohealing (autoHealing: VALUE) controls the management of NotReady nodes.

  • If true, the VM is force rebooted 5 minutes after the last heartbeat of the node. If the node is still not in the Ready state 5 minutes after the reboot, it is replaced.

  • If false, no automatic action is taken.
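
For example, to enable autohealing on a pool (the field takes a plain boolean, as shown in the full manifest structure above):

Autohealing Manifest Sample
...
spec:
  autoHealing: true
...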

Upgrade Strategy Manifest

You can control the behavior of worker nodes when setting changes affect the type, volumes, or placement of the VMs, under the upgradeStrategy section of the node pool manifest. upgradeStrategy serves the same function as the rolling update strategy of the Deployment resource. The default value for both parameters is 25%.

UpgradeStrategy Manifest Sample
...
spec:
  upgradeStrategy:
    maxUnavailable: 1
    maxSurge: 0
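
Alternatively, a surge-based strategy keeps capacity constant during a rollout by creating a replacement node before removing an old one. This is a sketch, assuming the fields behave like their Deployment rolling update counterparts:

UpgradeStrategy Manifest Sample: Surge
...
spec:
  upgradeStrategy:
    maxUnavailable: 0
    maxSurge: 1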

The Auto Upgrades feature is not currently in service; all related settings are therefore ignored.

GPU Support Manifest

Node pools allow worker nodes to allocate, attach, and use flexible GPUs. Currently, OKS supports attaching one GPU per node. For more information about GPUs, see About Flexible GPUs.

The current CUDA driver version used with GPUs is cuda_12.6.2_560.35.03.

You can configure GPU support under the fgpu section of the node pool manifest.

GPU Manifest Sample
spec:
  fgpu:
    model: "nvidia-p6"
    k8s-operator: true

This file contains the following parameters that you need to specify:

  • model: The GPU model to allocate.

  • k8s-operator: Whether to deploy the official NVIDIA GPU operator in the gpu-operator namespace. Deleting the node pool does not uninstall the operator.
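
With the operator deployed, workloads request the GPU through the standard nvidia.com/gpu extended resource (NVIDIA's usual device plugin convention). The pod name and image tag below are illustrative:

GPU Pod Sample
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.6.2-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1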

Related Pages