Edit Autoscaling Settings
This section explains how to edit autoscaling settings for workloads.
The autoscaling feature automatically adjusts the number of pod replicas in a workload so that the actual CPU and memory usage of its replicas stays close to the target values you set.
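Under the hood, this corresponds to a Kubernetes Horizontal Pod Autoscaler (HPA) attached to the workload. As an illustration only, the following command sketches CPU-based autoscaling for a hypothetical Deployment named demo-app in a hypothetical demo-project namespace (both names are placeholders, not part of this guide):

```bash
# Hypothetical example: keep between 1 and 10 replicas of the Deployment demo-app,
# targeting 60% CPU usage across its pods.
kubectl -n demo-project autoscale deployment demo-app --min=1 --max=10 --cpu-percent=60
```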
Prerequisites
You must be a member of a cluster and have the Application Workload Management permission in that cluster. For more information, refer to "Cluster Members" and "Cluster Roles".
Steps
- Log in to the KubeSphere web console as a user who has the Application Workload Management permission, and access your cluster.
- Click Application Workloads > Workloads in the left navigation pane.
- On the Workloads page, click Deployments, StatefulSets, or DaemonSets, then click the name of a workload in the list to open its details page.
- In the top-left corner of the workload details page, select More > Edit Autoscaling.
- In the Autoscaling dialog, set the autoscaling parameters for the workload, then click OK.
The parameters in the dialog are described below.

- Target CPU Usage: The target CPU usage for all pod replicas in the workload, expressed as a percentage. When the actual CPU usage rises above the target value, the system automatically increases the replica count; when it falls below the target value, the system decreases the replica count.
- Target Memory Usage: The target memory usage for all pod replicas in the workload, in MiB. When the actual memory usage rises above the target value, the system automatically increases the replica count; when it falls below the target value, the system decreases the replica count.
- Minimum Replicas: The minimum allowed number of pod replicas. The default value is 1.
- Maximum Replicas: The maximum allowed number of pod replicas. The default value is 1.
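For reference, these parameters map onto the fields of a Kubernetes HorizontalPodAutoscaler. The following manifest is a minimal sketch, assuming a Deployment named demo-app in a demo-project namespace (hypothetical names) with a 60% CPU target, a 512 MiB memory target, and 1 to 10 replicas:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app            # hypothetical autoscaler name
  namespace: demo-project   # hypothetical project/namespace
spec:
  scaleTargetRef:           # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 1            # Minimum Replicas
  maxReplicas: 10           # Maximum Replicas
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # Target CPU Usage (%)
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 512Mi      # Target Memory Usage (MiB)
```

If you have kubectl access to the cluster, you can inspect the resulting autoscaler with a command such as `kubectl -n demo-project get hpa demo-app`.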