Add Cluster Nodes
This section describes how to add KubeSphere cluster nodes.
The open-source tool KubeKey will be used during the process. For more information about KubeKey, please visit the GitHub KubeKey repository.
Note
The node addition method described in this section applies only to Kubernetes clusters installed through KubeKey. If your Kubernetes cluster was not installed via KubeKey, please refer to the Kubernetes documentation to add nodes.
Prerequisites
-
The operating system and version of the cluster nodes must be Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 22.04, Debian 9, Debian 10, CentOS 7, CentOS Stream, RHEL 7, RHEL 8, SLES 15, or openSUSE Leap 15. Different servers in the cluster can run different operating systems. For support of other operating systems and versions, please contact KubeSphere technical support.
-
In a production environment, to ensure the cluster has sufficient computing and storage resources, it is recommended that each cluster node be configured with at least 8 CPU cores, 16 GB of memory, and 200 GB of disk space. In addition, it is recommended to mount an additional 200 GB of disk space in the /var/lib/docker (for Docker) or /var/lib/containerd (for containerd) directory of each cluster node for storing container runtime data.
-
In a production environment, it is recommended to configure high availability for the KubeSphere cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, please refer to Configure High Availability.
-
Obtain the installation configuration file config-sample.yaml and transfer it to the cluster node on which you will perform this operation. For more information, refer to Install Kubernetes and KubeSphere.
Steps
-
If you are accessing GitHub/Googleapis from a restricted location, please log in to any cluster node and run the following command to set the download region:
export KKZONE=cn
-
Run the following command to download the latest version of KubeKey:
curl -sfL https://get-kk.kubesphere.io | sh -
After the download is complete, a KubeKey binary file kk will be generated in the current directory.
Note
If the cluster node used to perform these operations cannot connect to the internet, you can manually download KubeKey on a device with internet access and then transfer it to the cluster node.
-
Add execute permission to the KubeKey binary file kk:
sudo chmod +x kk
-
Transfer the installation configuration file config-sample.yaml to the current directory.
-
Execute the following command to edit the installation configuration file config-sample.yaml:
vi config-sample.yaml
-
Configure the information of the new node under the hosts parameter in config-sample.yaml, as illustrated in the sketch after the parameter descriptions below.
Parameter descriptions:
name
User-defined server name.
address
The SSH login IP address of the server.
internalAddress
The IP address of the server within the subnet.
port
The SSH port number of the server. This parameter does not need to be set if using the default port 22.
user
The SSH login user name of the server, which must be the root user or another user with sudo permissions. If you use the root user, you do not need to set this parameter.
password
The server’s SSH login password. This parameter does not need to be set if privateKeyPath has been set.
privateKeyPath
The path to the server’s SSH login key. This parameter does not need to be set if password has been set.
arch
The server architecture. If the server’s hardware architecture is Arm64, please set this parameter to arm64, otherwise do not set this parameter. By default, the installation package only supports scenarios where all cluster nodes are x86_64 or arm64 architecture. If the hardware architecture of each cluster node is not exactly the same, please contact the KubeSphere technical support team.
Warning
Do not modify the information of the original nodes. Otherwise, the cluster may encounter errors after nodes are added.
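For illustration only, a hosts section with one new node appended might look like the following sketch. The node names, IP addresses, user, and password here are assumptions, not values from your environment; keep the entries for the original nodes exactly as they are and only append the new node.
spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: root, password: "YourPassword"}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: "YourPassword"}
  # New node appended below; the original entries above are unchanged.
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: "YourPassword"}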
-
Configure the role of the new node in the cluster under the roleGroups parameter in config-sample.yaml, as illustrated in the sketch after the parameter descriptions below.
Parameter descriptions:
etcd
Nodes for installing the etcd database. Set the cluster control plane nodes under this parameter.
control-plane
Cluster control plane nodes. If you have configured high availability for the cluster, you can set multiple control plane nodes.
worker
Cluster worker nodes.
registry
Server used for creating a private image registry. This server is not used as a cluster node. During the installation or upgrade of KubeSphere, if the cluster nodes cannot connect to the Internet, you need to set the server used for creating a private image registry under this parameter. Otherwise, you can comment out this parameter.
Warning
Do not modify the roles of the original nodes. Otherwise, the cluster may encounter errors after nodes are added.
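Continuing the sketch above (node names are assumptions), a roleGroups section in which the new node node2 joins the cluster as a worker might look like this; the role assignments of the original nodes are left untouched:
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    # New node added as a worker; the original assignments are unchanged.
    - node2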
-
If a new control plane node is added and the current cluster is not yet configured for high availability, configure the high availability information under the controlPlaneEndpoint parameter in config-sample.yaml, as illustrated in the sketch after the parameter descriptions below.
Parameter descriptions:
internalLoadBalancer
Type of internal load balancer. If using local load balancer configuration, set this parameter to haproxy. Otherwise, you can comment out this parameter.
domain
Internal access domain for the load balancer. Set this parameter to lb.kubesphere.local.
address
IP address of the load balancer. If using local load balancer configuration, leave this parameter empty. If using a dedicated load balancer, set this parameter to the IP address of the load balancer. If using a generic server as the load balancer, set this parameter to the floating IP address of the load balancer.
port
Port number that the load balancer listens on, which is the port number of the apiserver service. Set this parameter to 6443.
Warning
-
If the current cluster has already been configured for high availability, do not modify the high availability information in config-sample.yaml. Otherwise, the cluster may encounter errors after nodes are added.
-
If the current cluster uses local load balancing for high availability, no additional high availability configuration is needed. If the current cluster uses a dedicated load balancer for high availability, you only need to configure the load balancer to listen on port 6443 of all control plane nodes. For more information, see Configure High Availability.
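As a sketch only, a controlPlaneEndpoint section that uses the local load balancer option described above might look like the following; with a dedicated load balancer you would instead comment out internalLoadBalancer and fill in its IP address under address:
  controlPlaneEndpoint:
    # Local load balancer option; leave address empty in this case.
    internalLoadBalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443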
-
Save the configuration file and execute the following command to start adding nodes:
./kk add nodes -f config-sample.yaml
-
Execute the following command to view the nodes of the current cluster:
kubectl get node
If the information about the new node is displayed, the node has been added successfully.
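For illustration only (the node names, ages, and version numbers below are assumptions), the output after a successful addition might look similar to:
NAME     STATUS   ROLES           AGE    VERSION
master   Ready    control-plane   100d   v1.26.5
node1    Ready    worker          100d   v1.26.5
node2    Ready    worker          2m     v1.26.5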