
Install kubectl

macOS:

Install via Homebrew:

brew install kubernetes-cli

Linux:

  1. Download the latest kubectl release:

    curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  2. Make the downloaded file executable:

    chmod +x ./kubectl
  3. Move the command into your PATH:

    sudo mv ./kubectl /usr/local/bin/kubectl

Windows:

Visit the Kubernetes documentation for a link to the most recent Windows release.
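Regardless of how you installed it, you can confirm that kubectl is available on your PATH by printing the client version:

    kubectl version --client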

Create an LKE Cluster

  1. Log into your Linode Cloud Manager account.

  2. From the Linode dashboard, click the Create button in the top right-hand side of the screen and select Kubernetes from the dropdown menu.

  3. The Create a Kubernetes Cluster page appears. At the top of the page, you are required to select the following options:

    • In the Cluster Label field, provide a name for your cluster. The name must be unique among all of the clusters on your account. This name is how you identify your cluster in the Cloud Manager’s Dashboard.

    • From the Region dropdown menu, select the Region where you would like your cluster to reside.

    • From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.

  4. In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster. To the right of each plan, use the plus (+) and minus (-) buttons to add or remove Linodes from a node pool one at a time.

  5. Once you’re satisfied with the number of nodes in a node pool, select Add to include it in your configuration. If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.

  6. Once a pool has been added to your configuration, it appears in the Cluster Summary on the right-hand side of the Cloud Manager, which details your cluster’s hardware resources and monthly cost. Additional pools can be added before finalizing the cluster creation process by repeating the previous step for each additional pool.

  7. When you are satisfied with the configuration of your cluster, click the Create Cluster button on the right-hand side of the screen. Your cluster’s detail page appears, and your Node Pools are listed on this page. From this page, you can edit your existing Node Pools, access your Kubeconfig file, and view an overview of your cluster’s resource details.
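If you prefer to work from the command line, a cluster with the same properties can also be created with the Linode CLI. The label, region, Kubernetes version, and node pool plan and count below are placeholder values; substitute your own, and check linode-cli lke cluster-create --help since the available flags can vary by CLI version:

    linode-cli lke cluster-create \
      --label example-cluster \
      --region us-central \
      --k8s_version 1.28 \
      --node_pools.type g6-standard-2 \
      --node_pools.count 3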

Access and Download your kubeconfig

  1. To access your cluster’s kubeconfig, log in to your Cloud Manager account and navigate to the Kubernetes section.

  2. From the Kubernetes listing page, click on your cluster’s more options ellipsis and select Download kubeconfig. The file is saved to your computer’s Downloads folder.

  3. Open a terminal shell and save your kubeconfig file’s path to the $KUBECONFIG environment variable. In the example command, the kubeconfig file is located in the Downloads folder; alter the path to match the file’s location on your computer (see the note after these steps for making this setting persistent):

    export KUBECONFIG=~/Downloads/kubeconfig.yaml
  4. View your cluster’s nodes using kubectl.

    kubectl get nodes
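The export command above only applies to your current shell session. One common approach, sketched below, is to merge the downloaded file into the default ~/.kube/config so that kubectl finds the cluster automatically in every session; the paths shown assume the Downloads location from the example above:

    # Merge the downloaded kubeconfig into the default config (back it up first if it already exists)
    KUBECONFIG=~/.kube/config:~/Downloads/kubeconfig.yaml kubectl config view --flatten > /tmp/merged-kubeconfig
    mv /tmp/merged-kubeconfig ~/.kube/config

    # Confirm that the cluster's context is now available
    kubectl config get-contexts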

General Network and Firewall Information

In an LKE cluster, some entities and services are only accessible from within that cluster while others are publicly accessible (reachable from the internet).

Private (accessible only within the cluster)

  • Pod IPs, which use a per-cluster virtual network in the range 10.2.0.0/16
  • ClusterIP Services, which use a per-cluster virtual network in the range 10.128.0.0/16
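You can see both kinds of private addresses with kubectl: Pod IPs (drawn from 10.2.0.0/16) appear in the IP column of a wide Pod listing, and ClusterIP Services (drawn from 10.128.0.0/16) appear in the CLUSTER-IP column of a Service listing:

    kubectl get pods --all-namespaces -o wide
    kubectl get services --all-namespaces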

Public (accessible over the internet)

  • NodePort Services, which listen on all Nodes with ports in the range 30000-32767.
  • LoadBalancer Services, which automatically deploy and configure a NodeBalancer.
  • Any manifest which uses hostNetwork: true and specifies a port.
  • Most manifests which use hostPort and specify a port.
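As a minimal sketch of the LoadBalancer case, the commands below assume you already have a Deployment named my-app listening on container port 8080; the names and ports are placeholders. Exposing it with a LoadBalancer Service causes LKE to provision a NodeBalancer, whose public address appears in the EXTERNAL-IP column once it is ready:

    # Expose an existing Deployment (assumed name and ports) through a NodeBalancer
    kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

    # Watch for the NodeBalancer's public IP under EXTERNAL-IP
    kubectl get service my-app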

Exposing workloads to the public internet through the above methods can be convenient, but it can also carry a security risk. You may wish to manually install firewall rules on your cluster nodes; an illustrative rule set is sketched after the following list. These policies are needed to allow communication between the node pools and the control plane, and to block unwanted traffic:

  • Allow kubelet health checks: TCP port 10250 from 192.168.128.0/17, Accept
  • Allow Wireguard tunneling for kubectl proxy: UDP port 51820 from 192.168.128.0/17, Accept
  • Allow Calico BGP traffic: TCP port 179 from 192.168.128.0/17, Accept
  • Allow NodePorts for workload services: TCP/UDP ports 30000-32767 from 192.168.128.0/17, Accept
  • Allow IPENCAP traffic from 192.168.128.0/17 for internal communication between node pools and the control plane, Accept
  • Block all other TCP traffic: TCP, All Ports, All IPv4/All IPv6, Drop
  • Block all other UDP traffic: UDP, All Ports, All IPv4/All IPv6, Drop
  • Block all ICMP traffic: ICMP, All Ports, All IPv4/All IPv6, Drop
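As an illustration only, the following iptables sketch mirrors the policies above. It assumes a default-accept OUTPUT chain, omits rules you likely also need (for example SSH access), and is not persistent across reboots unless you save it with a tool such as iptables-persistent:

    # Keep loopback and established/related traffic working before adding drop rules
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # Allow kubelet health checks, Wireguard tunneling, and Calico BGP from the control plane range
    iptables -A INPUT -p tcp --dport 10250 -s 192.168.128.0/17 -j ACCEPT
    iptables -A INPUT -p udp --dport 51820 -s 192.168.128.0/17 -j ACCEPT
    iptables -A INPUT -p tcp --dport 179 -s 192.168.128.0/17 -j ACCEPT

    # Allow NodePorts for workload services
    iptables -A INPUT -p tcp --dport 30000:32767 -s 192.168.128.0/17 -j ACCEPT
    iptables -A INPUT -p udp --dport 30000:32767 -s 192.168.128.0/17 -j ACCEPT

    # Allow IPENCAP (IP protocol 4) for node pool and control plane communication
    iptables -A INPUT -p 4 -s 192.168.128.0/17 -j ACCEPT

    # Drop all other TCP, UDP, and ICMP traffic
    iptables -A INPUT -p tcp -j DROP
    iptables -A INPUT -p udp -j DROP
    iptables -A INPUT -p icmp -j DROP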

For additional information, please see this community post. Future LKE releases may allow greater flexibility for the network endpoints of these types of workloads.

Please note that, at this time, nodes should be removed from the Cloud Firewall configuration before removing or recycling node pools within the Kubernetes configuration. Likewise, when adding node pools to the Kubernetes cluster, the Cloud Firewall must be updated with the new node pool(s). Failure to add the new nodes creates a security risk.

Note
All new LKE clusters create a service named kubernetes in the default namespace, designed to ease interactions with the control plane. This is a standard service for LKE clusters.
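You can view this service with kubectl; it fronts the cluster’s API server endpoint:

    kubectl get service kubernetes -n default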

Next Steps

Now that you have a running LKE cluster, you can start deploying workloads to it. Refer to our other guides to learn more.
