Tutorial: Autoscaling Your Node Pools (Beta)
You can now enable the OKS cluster autoscaler to automatically manage and scale your node pools based on resource usage. Autoscaling ensures your cluster scales dynamically and cost-effectively.
This tutorial is divided into two parts:

- Updating existing clusters to support autoscaling.
- Creating node pools that can be managed by the autoscaler.

Note: This feature is currently in beta. For more information about beta services, see the Glossary and our General Terms and Conditions of Sale.
Updating Existing Clusters to Support Autoscaling Using the OKS CLI
1. Run the cluster update command to manually enable autoscaling support:

   $ oks-cli cluster update \
       --profile NAME_OF_PROFILE \
       --project-name NAME_OF_PROJECT \
       --cluster-name NAME_OF_CLUSTER

   This command contains the following options that you need to specify:

   - `profile`: The named profile you used to create the cluster you want to update.
   - `project-name`: The name of the project containing the cluster you want to update.
   - `cluster-name`: The name of the cluster you want to update.
2. Run the cluster upgrade command to deploy the autoscaler components on your cluster:

   $ oks-cli cluster upgrade \
       --profile NAME_OF_PROFILE \
       --project-name NAME_OF_PROJECT \
       --cluster-name NAME_OF_CLUSTER

   This command contains the following options that you need to specify:

   - `profile`: The named profile you used to create the cluster you want to upgrade.
   - `project-name`: The name of the project containing the cluster you want to upgrade.
   - `cluster-name`: The name of the cluster you want to upgrade.

   You are then prompted to confirm the cluster upgrade.
Creating Node Pools Managed by the Autoscaler
Once your cluster supports the autoscaler, you can configure node pools to be automatically scaled based on workload demand.
Updating Your Node Pool Manifest
To create a node pool that can be managed by the autoscaler, you need to set the required fields in your node pool manifest, including:

- Setting a minimum and maximum number of nodes.
- Setting the autoscaling flag to `true`.
The following manifest sample defines a node pool named `autoscaling-pool` with autoscaling enabled between 2 and 10 nodes. It uses the `tinav6.c2r4p3` node type in the `eu-west-2a` Subregion, starts with 2 nodes, supports auto-upgrade and auto-healing, and limits upgrades to 1 unavailable node at a time with no surge:
apiVersion: oks.dev/v1beta2
kind: NodePool
metadata:
  name: autoscaling-pool
spec:
  minNodes: 2
  maxNodes: 10
  desiredNodes: 2
  nodeType: tinav6.c2r4p3
  autoscaling: true
  zones:
    - eu-west-2a
  upgradeStrategy:
    maxUnavailable: 1
    maxSurge: 0
  autoUpgradeEnabled: true
  autoHealing: true
For more information on how to write a manifest, see Node Pool Manifest Reference.
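For contrast, a pool that should keep a fixed size can leave autoscaling off and omit the node bounds. The following is a minimal sketch assuming the same `oks.dev/v1beta2` schema as the sample above; the pool name `fixed-pool` and the node count are illustrative, not part of the OKS documentation:

```yaml
apiVersion: oks.dev/v1beta2
kind: NodePool
metadata:
  name: fixed-pool          # illustrative name
spec:
  desiredNodes: 3           # the pool stays at 3 nodes
  nodeType: tinav6.c2r4p3
  autoscaling: false        # the autoscaler does not manage this pool
  zones:
    - eu-west-2a
```

With `autoscaling: false`, the node count only changes when you edit `desiredNodes` and re-apply the manifest yourself.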
Applying the Node Pool Using Kubectl
Once your manifest is ready, you can apply it to the cluster using the following command:
$ kubectl apply -f NAME_OF_MANIFEST.yaml
The autoscaler now manages the number of nodes based on workload demands, scaling up or down within the `minNodes` and `maxNodes` limits defined in your manifest.
Once the node pool is active, the autoscaler continuously monitors the cluster:

- If a pod remains in the `Pending` state due to a lack of available resources, the autoscaler adds nodes to accommodate it.
- If a node is underutilized, the autoscaler marks it for removal. After a 10-minute delay, the node is removed to reduce unused capacity.
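Scale-up is driven by pod resource requests: the scheduler can only leave a pod `Pending` for lack of resources if the pod declares what it needs. The following Deployment is a minimal sketch of a workload that can trigger a scale-up; the name `load-test`, the replica count, and the request values are illustrative and not part of OKS:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: load-test              # illustrative name
spec:
  replicas: 20                 # enough replicas to exceed current capacity
  selector:
    matchLabels:
      app: load-test
  template:
    metadata:
      labels:
        app: load-test
    spec:
      containers:
        - name: busybox
          image: busybox
          command: ["sleep", "3600"]
          resources:
            requests:
              cpu: "500m"      # the scheduler sums these requests; pods that
              memory: "256Mi"  # cannot fit stay Pending, prompting scale-up
```

If these pods cannot all be scheduled on the pool's current nodes, the autoscaler adds nodes up to `maxNodes`. Deleting the Deployment later leaves nodes underutilized, so they are removed after the 10-minute delay described above.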
Related Pages