K0s vs K3s vs K8s: Comparing Kubernetes Distributions

k0s and k3s are CNCF-certified, lightweight Kubernetes distributions. Let’s look at how they compare to each other, and where traditional k8s comes into play.

[Cover image: Comparing k0s, k3s, and k8s]

Right now, k0s and k3s are two of the most popular lightweight Kubernetes distributions, and for good reason. Some projects need less complex configuration and orchestration than traditional k8s requires, which makes k0s and k3s especially appealing for pre-production and rapid prototyping. Let’s look at how they compare to each other, and how they stack up against traditional “stock” Kubernetes.

What is k0s?

k0s is a lightweight Kubernetes distribution from the team behind Lens. The “zero” in k0s aptly represents the distro’s promise of zero dependencies, zero compromises, and zero downtime.

k0s is easy to run anywhere – bare metal, on-prem, locally, and on any cloud provider. It doesn’t have any dependencies and is distributed in a single binary. With k0s, you don’t have to worry excessively about config (unlike many k8s options), and can get a cluster spun up within minutes — all important considerations for local dev or other lightweight use cases.
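
For a sense of how fast that spin-up is, the single-node quick start boils down to a few commands. This is a sketch per the k0s docs; it requires root privileges and network access:

```shell
curl -sSLf https://get.k0s.sh | sudo sh   # install the k0s binary
sudo k0s install controller --single      # register a single-node controller
sudo k0s start                            # start the k0s service
sudo k0s kubectl get nodes                # verify the node comes up Ready
```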

What is k3s?

Rancher’s k3s is a lightweight yet highly configurable Kubernetes distribution. Its name reflects its status as the smaller cousin of traditional k8s: Kubernetes is a ten-letter word (hence k8s), so a distro half the size gets a five-letter name, k3s. Unlike k8s, however, there is no “unabbreviated” word form of k3s.

k3s is also distributed as a dependency-free, single binary. It helps engineers closely approximate production infrastructure with only a fraction of the compute, configuration, and complexity, which translates into faster spin-up and iteration.
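
k3s installs are similarly compact; a minimal single-node setup, sketched per the k3s docs, requires root privileges and network access:

```shell
curl -sfL https://get.k3s.io | sh -   # install and start the k3s server
sudo k3s kubectl get nodes            # verify the node comes up Ready
```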

K0s vs K3s

k0s and k3s are both CNCF-certified k8s distributions, and meet all the conformance requirements of standard k8s clusters. They’re both good options for teams looking for lighter-weight, easy-to-configure cluster solutions.

Cluster architecture

k3s supports both single- and multi-node clusters. Its control plane defaults to SQLite as its embedded datastore, and multi-node clusters can be configured to use embedded etcd or an external MySQL, PostgreSQL, or etcd datastore.

k0s also accommodates single- and multi-node clusters. Its datastore defaults to SQLite for single-node clusters, and to etcd for multi-node clusters. The datastore can also be configured to use PostgreSQL or MySQL.

Like standard k8s, k0s has a distinct separation between worker and control planes, which can be distributed across multiple nodes.

Both distros use containerd for their container runtimes. k0s ships without a built-in ingress controller; stock k3s comes with Traefik.

Configuration

Both k0s and k3s can operate without any external dependencies.

k3s clusters can be configured via environment variables and flags passed to the installation script or the server command. The same options can also be set in a config.yaml file, and users can define additional YAML configuration files alongside it.

An example k3s config.yaml from the docs:

write-kubeconfig-mode: "0644"
tls-san:
  - "foo.local"
node-label:
  - "foo=bar"
  - "something=amazing"
cluster-init: true
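
The same options can instead be passed as server flags at install time. Here is a hedged sketch of the equivalent install command, using the documented INSTALL_K3S_EXEC variable:

```shell
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --write-kubeconfig-mode 0644 \
  --tls-san foo.local \
  --node-label foo=bar \
  --node-label something=amazing \
  --cluster-init" sh -
# add --disable traefik above to skip the bundled ingress controller
```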

k0s clusters can be easily bootstrapped via k0sctl: running the init command generates a YAML file with the required options preconfigured, and users customize the SSH connection details and host IPs at minimum. The cluster configuration itself lives in a k0s.yaml spec.
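
A minimal k0sctl session might look like the following sketch; the file names are the defaults, and the host details are whatever your environment requires:

```shell
k0sctl init > k0sctl.yaml           # generate a starter config
# edit k0sctl.yaml: set each host's SSH address, user, and role
k0sctl apply --config k0sctl.yaml   # provision the cluster over SSH
k0sctl kubeconfig > kubeconfig      # fetch the admin kubeconfig
```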

An example k0s.yaml from the docs:

apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.68.106
    sans:
    - my-k0s-control.my-domain.com
  network:
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
extensions:
  helm:
    repositories:
    - name: prometheus-community
      url: https://prometheus-community.github.io/helm-charts
    charts:
    - name: prometheus-stack
      chartname: prometheus-community/prometheus
      version: "11.16.8"
      namespace: default
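
To tie the spec to a running cluster, one low-risk pattern is to write the file and sanity-check its required keys before installing. This is a sketch using an abbreviated version of the spec above; the file path is illustrative:

```shell
# Write an abbreviated version of the example spec to a local file.
cat > k0s.yaml <<'EOF'
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  api:
    address: 192.168.68.106
EOF

# Sanity-check the keys k0s expects before handing the file over.
grep -q '^apiVersion: k0s.k0sproject.io' k0s.yaml
grep -q '^kind: Cluster' k0s.yaml

# On a real host (requires the k0s binary and root) you would then run:
#   sudo k0s install controller --config "$(pwd)/k0s.yaml"
#   sudo k0s start
echo "k0s.yaml passed basic checks"
```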

Both distros require relatively minimal configuration versus traditional k8s, which is often a primary reason that users favor them: they can get clusters up and running with a shorter lead time.

Resource usage

k3s weighs in at 50-100 MB (depending on the release version) and ships as a single binary.

k0s also ships as a single binary, coming in somewhat larger at 160-300 MB.

Both distros are described as incredibly lightweight (when compared to traditional k8s), and are often used for IoT and edge, as well as traditional non-production deployments.

Here are k3s’ ballpark vCPU and memory requirements for deployments on the smaller side:

Nodes       vCPUs   RAM
Up to 10    2       4 GB
Up to 100   4       8 GB


And for comparison, here are the same ballpark stats for k0s:

Nodes       vCPUs   RAM
Up to 10    1-2     1-2 GB
Up to 100   2-4     4-8 GB


Benchmarking efforts have shown that the two distros have very similar compute requirements, at least for single-node clusters.
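
If you want to spot-check these numbers on your own hardware, and assuming both binaries sit at their default install paths, a quick look on a host running each server process:

```shell
# Binary sizes on disk (default install locations for both distros)
ls -lh /usr/local/bin/k3s /usr/local/bin/k0s

# Resident memory (KB) of the running server processes, if any
ps -o rss=,comm= -C k3s,k0s
```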

Use cases

As lightweight Kubernetes distros go, k0s and k3s are pretty similar. One of their biggest distinctions is design emphasis: k0s is designed with ease of use and simplicity first, while k3s is designed with a lighter footprint in mind.

k0s and k3s are both recommended for use cases like CI clusters, IoT devices, bare metal, and edge deployments. However, as fully-certified CNCF distributions, they can also substitute for traditional k8s for pre-production and sometimes even production deployments.

k3s offers more tooling and versatility than k0s, which in turn means more configuration. For many medium-to-large deployments, k3s is the better option because of its extensibility. In practice, k0s tends to be favored for tasks that need a quick lead time and simple config, while k3s acts as a resource-efficient alternative to stock k8s for more traditional orchestration needs.

Community

When choosing a distro, the size and activity of a community can be the difference between a smooth and a frustrating experience. This includes availability of forum answers and active maintainers to help with setup and troubleshooting.

k3s is actively maintained, with pushes to main landing daily to weekly. More than 50 contributors have made at least three contributions each. Questions are encouraged in k3s’ GitHub Discussions, and users can get support in the #k3s channel of the Rancher Slack (over 5.5k members).

k0s sees multiple pushes to main daily, and more than 25 contributors have made at least three contributions each. The Lens team asks that users visit the Lens Forum for k0s-related questions, and hosts regular k0s community office hours.

Where does k8s come in?

Traditional Kubernetes is leveraged for complex production applications. k8s handles scale efficiently, and can support clusters with hundreds or even thousands of nodes. Because of its modularity, individual k8s components can be scaled and optimized independently for each application’s needs.

A tradeoff of its ability to handle scale and complex integrations is that stock k8s is very resource-intensive, and not the best option for smaller deployments. It notoriously requires extensive configuration, so defining and deploying clusters takes longer than with distros like k0s or k3s.

Many production instances will use a managed Kubernetes service, like EKS or GKE, which will automate some time-intensive cluster maintenance, operations, and management tasks.

Traditional k8s is typically overkill for local development (or even pre-production/testing). Since all CNCF-certified distributions can fulfill standard Kubernetes cluster tasks and workflows, traditional k8s is often reserved for production or highly-complex workloads.

Conclusion

When you have a non-production or lower-complexity Kubernetes use case, k3s and k0s serve as highly capable, fully CNCF-certified Kubernetes distributions fit for the task. They excel at quick onboarding and require significantly less compute, making them optimal for local dev, pre-production, and CI use cases.

Want automated full-stack preview environments without the k8s config? Shipyard takes your Docker Compose file, transpiles it to k8s, and deploys isolated copies of your app on every PR.
