Containerization has become the gold standard for modern application development and deployment, and Kubernetes (K8s) is the leading platform for automating the deployment, scaling, and management of containerized applications. If you're already using a VPS or planning to rent a server for your project, deploying a Kubernetes cluster on your own VPS is a powerful and flexible alternative to managed cloud solutions.
In this guide, we’ll walk you through how to set up a Kubernetes cluster on VPS servers — which services to choose, what resources are required, and what to keep in mind during the process.
What Is Kubernetes and Why Use It?
Kubernetes is an open-source platform that orchestrates containers, automating resource management, updates, failover recovery, and more. Its main purpose is to make microservice deployment and scaling fast, efficient, and reliable.
Key features of Kubernetes:
- Automatic scaling of application components.
- Zero-downtime deployments.
- Precise resource allocation (CPU, RAM) for each pod.
- Self-healing in case of failures.
- Seamless updates without service interruption.
This makes Kubernetes ideal for SaaS platforms, CI/CD pipelines, test environments, startups, and microservice hosting. And you don’t need to use a public cloud — a VPS from Server.ua can run your cluster just as well.
Required Resources for Running Kubernetes on a VPS
Before setting up Kubernetes, determine how many nodes your cluster will have and what roles they will perform:
- 1 node (single VPS): suitable for local development or testing.
- 3+ VPS: minimum setup for production — 1 master node + at least 2 worker nodes.
Minimum recommended specs per node:
- CPU: at least 2 cores.
- RAM: 2 GB minimum (4 GB or more recommended).
- SSD storage: 20 GB or more.
- OS: Ubuntu 20.04+ (also supports Debian, CentOS).
Choose a VPS with scalable resources to add new nodes later without migrations.
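Once the VPS instances are provisioned, a quick way to confirm each node meets these requirements:
```bash
nproc               # CPU cores
free -h             # RAM
df -h /             # free disk space
cat /etc/os-release # OS version
```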
Preparing VPS Nodes for Kubernetes Installation
Each VPS node requires some basic setup:
- Update the system:
```bash
sudo apt update && sudo apt upgrade -y
```
- Set a unique hostname on each node (here, the master node):
```bash
sudo hostnamectl set-hostname master-node
```
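Give each worker its own name (the names below are just examples), and optionally map all nodes in /etc/hosts so they can reach each other by hostname; the IPs here are placeholders for your nodes' private addresses:
```bash
# On the first worker (worker-node-2, worker-node-3, ... on the others)
sudo hostnamectl set-hostname worker-node-1

# Optional: run on every node, using your own private IPs
echo "10.0.0.10 master-node"   | sudo tee -a /etc/hosts
echo "10.0.0.11 worker-node-1" | sudo tee -a /etc/hosts
echo "10.0.0.12 worker-node-2" | sudo tee -a /etc/hosts
```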
- Install Docker (the docker.io package also pulls in containerd, which kubeadm can use as the container runtime):
```bash
sudo apt install docker.io -y
sudo systemctl enable docker
sudo systemctl start docker
```
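A quick check that the runtime is up before moving on:
```bash
sudo systemctl status docker --no-pager
sudo docker run --rm hello-world
```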
- Install kubeadm, kubelet, and kubectl from the official Kubernetes package repository (adjust v1.30 to the minor version you want to track):
```bash
sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
```
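It's also worth pinning these packages so a routine apt upgrade doesn't move cluster components unexpectedly, and confirming what was installed:
```bash
sudo apt-mark hold kubelet kubeadm kubectl
kubeadm version
kubectl version --client
```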
- Disable swap (the kubelet will not run with swap enabled by default):
```bash
sudo swapoff -a
```
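To keep swap disabled after a reboot, comment out the swap entry in /etc/fstab. Depending on your VPS image, you may also need to load the br_netfilter module and enable IP forwarding so that kubeadm's preflight checks pass; a minimal sketch:
```bash
# Keep swap off across reboots
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Kernel module and sysctl settings commonly required by kubeadm
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```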
Master Node Initialization
On the master node, initialize the Kubernetes cluster:
```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```
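If your VPS has more than one network interface, it can also help to tell kubeadm explicitly which address the API server should advertise. A sketch with a placeholder IP:
```bash
# Replace <master-private-ip> with the address other nodes will use to reach this one
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=<master-private-ip>
```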
After initialization, you’ll see a kubeadm join command to connect worker nodes. Copy it — you’ll need it later.
Set up kubectl access:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
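A quick check that kubectl can now reach the API server:
```bash
kubectl cluster-info
kubectl get nodes
```
At this stage the master node will usually report NotReady; that is expected until the pod network plugin is installed below.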
Adding Worker Nodes
On each additional VPS, repeat the following steps:
- Install Docker, kubeadm, kubelet, kubectl.
- Disable swap.
- Run the kubeadm join command received from the master node (an example of its shape is shown below).
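The join command looks roughly like this; the address, token, and hash are unique to your cluster, so use the values from your own kubeadm init output:
```bash
# Shape of the join command printed by kubeadm init
sudo kubeadm join <master-ip>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# If you've lost it, regenerate it on the master node:
kubeadm token create --print-join-command
```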
Installing the Pod Network
To enable communication between pods, install a pod network plugin, for example Flannel, whose default network matches the 10.244.0.0/16 range passed to kubeadm init earlier:
```bash
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```
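Give the Flannel DaemonSet a minute to roll out, then confirm its pods are running (the namespace varies between manifest versions, so a broad filter is simplest):
```bash
kubectl get pods -A | grep -i flannel
```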
Checking Cluster Status
Use the following commands:
- View nodes:
```bash
kubectl get nodes
```
- View pod status:
```bash
kubectl get pods --all-namespaces
```
- System component status (the componentstatuses API is deprecated in recent Kubernetes releases, but still gives a quick overview):
```bash
kubectl get componentstatuses
```
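If a node stays NotReady, describing it usually points at the cause, most often a missing pod network or a kubelet issue (replace the placeholder with a name from kubectl get nodes):
```bash
kubectl describe node <node-name>
```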
Deploying a Test Application
To verify everything is working, deploy a simple nginx service:
```bash
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc
```
You can now access nginx via the IP address of any node and the assigned port.
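For example, assuming the assigned NodePort turned out to be 30080 (yours will differ, check the kubectl get svc output):
```bash
curl http://<node-ip>:30080
```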
Conclusion
Deploying Kubernetes on a VPS is very achievable if you follow a structured approach. It’s a great way to build flexible container infrastructure without relying on large cloud providers. This method gives you more control over your resources, helps manage costs, and allows you to scale your project efficiently.
With Server.ua virtual servers, you can launch your own orchestration platform and manage CI/CD pipelines, microservices, and application deployments on your terms.
Start building your Kubernetes cluster today — and take your project to the next level.