Deploying K3s on Linode
Updated by Linode. Written by Rajakavitha Kodhandapani.
K3s is a lightweight, easy-to-install Kubernetes distribution. Built for the edge, K3s includes an embedded SQLite database as the default datastore and supports external datastores such as PostgreSQL, MySQL, and etcd. K3s includes a command-line cluster controller, a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller. It also automates and manages complex cluster operations such as distributing certificates. With K3s, you can run a highly available, certified Kubernetes distribution designed for production workloads on resource-light machines like Nanodes.
Note
- While you can deploy a K3s cluster on just about any flavor of Linux, K3s is officially tested on Ubuntu 16.04 and Ubuntu 18.04. If you are deploying K3s on CentOS where SELinux is enabled by default, then you must ensure that proper SELinux policies are installed. For more information, see Rancher’s documentation on SELinux support.
- Nanode instances are suitable for low-duty workloads where performance isn’t critical. Depending on your requirements, you can choose to use Linodes with greater resources for your K3s cluster.
Before You Begin
Familiarize yourself with our Getting Started guide.
Create two Linodes in the same region that are running Ubuntu 18.04.
Complete the steps for setting the hostname and timezone for both Linodes. When setting hostnames, it may be helpful to identify one Linode as a server and the other as an agent.
Follow our Securing Your Server guide to create a standard user account, harden SSH access, remove unnecessary network services, and create firewall rules to allow all outgoing traffic and deny all incoming traffic except SSH traffic on both Linodes.
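The firewall policy described above can be sketched with ufw, which the later steps in this guide assume. This is a minimal sketch of one way to set that baseline; adjust it if you use a different firewall tool:

```shell
# Minimal ufw baseline matching the policy above; run on both Linodes.
# Assumes ufw is available (it ships with Ubuntu 18.04).
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
```

Run sudo ufw status verbose afterward to confirm the default policies and the SSH rule are in place.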
Note
This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you're not familiar with the sudo command, visit our Users and Groups guide.
All configuration files should be edited with elevated privileges. Remember to include sudo before running your text editor.
Ensure that your Linodes are up to date:
sudo apt update && sudo apt upgrade
Install K3s Server
First, you will install the K3s server on a Linode, from which you will manage your K3s cluster.
Connect to the Linode where you want to install the K3s server.
Open port 6443/tcp on your firewall to make it accessible by other nodes in your cluster:
sudo ufw allow 6443/tcp
Open port 8472/udp on your firewall to enable Flannel VXLAN:
Note
Replace 192.0.2.1 with the IP address of your K3s Agent Linode. As detailed in Rancher's Installation Requirements, port 8472 should not be accessible outside of your cluster for security reasons.
sudo ufw allow from 192.0.2.1 to any port 8472 proto udp
(Optional) Open port 10250/tcp on your firewall to utilize the metrics server:
sudo ufw allow 10250/tcp
Set environment variables used for installing the K3s server:
export K3S_KUBECONFIG_MODE="644"
export K3S_NODE_NAME="k3s-server-1"
Execute the following command to install K3s server:
curl -sfL https://get.k3s.io | sh -
Verify the status of the K3s server:
sudo systemctl status k3s
Retrieve the access token to connect a K3s Agent Linode to your K3s Server Linode:
sudo cat /var/lib/rancher/k3s/server/node-token
The expected output is similar to:
abcdefABCDEF0123456789::server:abcdefABCDEF0123456789
Copy the access token and save it in a secure location.
Install K3s Agent
Next, you will install the K3s agent on the second Linode.
Connect to the Linode where you want to install the K3s agent.
Open port 8472/udp on your firewall to enable Flannel VXLAN:
Note
Replace 192.0.2.0 with the IP address of your K3s Server Linode. As detailed in Rancher's Installation Requirements, port 8472 should not be accessible outside of your cluster for security reasons.
sudo ufw allow from 192.0.2.0 to any port 8472 proto udp
(Optional) Open port 10250/tcp on your firewall to utilize the metrics server:
sudo ufw allow 10250/tcp
Set environment variables used for installing the K3s agent:
Note
Replace 192.0.2.0 with the IP address of your K3s Server Linode and abcdefABCDEF0123456789::server:abcdefABCDEF0123456789 with its access token.
export K3S_KUBECONFIG_MODE="644"
export K3S_NODE_NAME="k3s-agent-1"
export K3S_URL="https://192.0.2.0:6443"
export K3S_TOKEN="abcdefABCDEF0123456789::server:abcdefABCDEF0123456789"
Execute the following command to install the K3s agent:
curl -sfL https://get.k3s.io | sh -
Verify the status of the K3s agent:
sudo systemctl status k3s-agent
Manage K3s
Your K3s installation includes kubectl, a command-line interface for managing Kubernetes clusters.
From your K3s Server Linode, use kubectl to get the details of the nodes in your K3s cluster:
kubectl get nodes
The expected output is similar to:
NAME STATUS ROLES AGE VERSION
k3s-server-1 Ready master 95s v1.18.2+k3s1
k3s-agent-1 Ready <none> 21s v1.18.2+k3s1
Note
To manage K3s from outside the cluster, copy the contents of /etc/rancher/k3s/k3s.yaml from your K3s Server Linode to ~/.kube/config on an external machine where you have installed kubectl, replacing 127.0.0.1 with the IP address of your K3s Server Linode.
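For example, the copy-and-edit step described in the note above might look like the following from your local machine. The user name and the 192.0.2.0 address are placeholders for your own SSH user and K3s Server Linode IP:

```shell
# Copy the kubeconfig from the server (placeholder user and IP address):
#   scp user@192.0.2.0:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# Then replace the loopback address with the server's IP, for example:
#   sed -i 's/127.0.0.1/192.0.2.0/' ~/.kube/config
# The substitution transforms the server line like so:
echo 'server: https://127.0.0.1:6443' | sed 's/127.0.0.1/192.0.2.0/'
```

After the edit, kubectl get nodes from the external machine should list the same nodes as it does on the server.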
Test K3s
Here, you will test your K3s cluster with a simple NGINX website deployment.
On your K3s Server Linode, create a manifest file named nginx.yaml, open it with a text editor, and add the following text, which describes a single-instance deployment of NGINX exposed to the public using a K3s service load balancer:

nginx.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Save and close the nginx.yaml file.
Deploy the NGINX website on your K3s cluster:
kubectl apply -f ./nginx.yaml
The expected output is similar to:
deployment.apps/nginx created
service/nginx created
Verify that the pods are running:
kubectl get pods
The expected output is similar to:
NAME                    READY   STATUS    RESTARTS   AGE
svclb-nginx-c6rvg       1/1     Running   0          21s
svclb-nginx-742gb       1/1     Running   0          21s
nginx-cc7df4f8f-2q7vf   1/1     Running   0          22s
Verify that your deployment is ready:
kubectl get deployments
The expected output is similar to:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           57s
Verify that the load balancer service is running:
kubectl get services nginx
The expected output is similar to:
NAME    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
nginx   LoadBalancer   10.0.0.89    192.0.2.1     8081:31809/TCP   33m
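You can also check the service from the command line instead of a browser. The values below are placeholders; substitute the EXTERNAL-IP and port from your own kubectl get services nginx output:

```shell
# Placeholder values taken from the example output above:
EXTERNAL_IP="192.0.2.1"
PORT="8081"
echo "http://${EXTERNAL_IP}:${PORT}"
# Fetch the page once the service is up, for example:
#   curl -s "http://${EXTERNAL_IP}:${PORT}" | grep -i "welcome to nginx"
```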
In a web browser, enter the IP address listed under EXTERNAL-IP in your output and append the port number :8081 to reach the default NGINX welcome page.
Delete your test NGINX deployment:
kubectl delete -f ./nginx.yaml
Tear Down K3s
To uninstall your K3s cluster:
Connect to your K3s Agent Linode and run the following commands:
sudo /usr/local/bin/k3s-agent-uninstall.sh
sudo rm -rf /var/lib/rancher
Connect to your K3s Server Linode and run the following commands:
sudo /usr/local/bin/k3s-uninstall.sh
sudo rm -rf /var/lib/rancher
More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
This guide is published under a CC BY-ND 4.0 license.