Kubernetes vs Docker Swarm – Comparing Containerization Platforms

Kubernetes vs Docker Swarm. There are people who will tell you that the community has made up its mind when it comes to container orchestration. Nothing could be further from the truth.

A recent survey of over 500 respondents, addressing questions about DevOps, microservices, and the public cloud, revealed a three-way orchestration race between Docker Swarm, Google Kubernetes, and Amazon EC2 Container Service (ECS).

Kubernetes vs Docker Swarm

  • Maturity: Kubernetes is the most mature solution in the market; Docker Swarm offers good features but is limited by its API.
  • Popularity: Kubernetes is also the most popular solution in the market; Docker Swarm’s market share is relatively weaker.
  • Setup: Kubernetes is hard to set up and configure; Docker Swarm’s setup and installation is easy.
  • Logging and monitoring: Kubernetes offers inbuilt logging and monitoring tools; Docker Swarm supports only third-party monitoring and logging tools.
  • Scaling: in Kubernetes, CPU utilization is a big factor in autoscaling; in Docker Swarm, services can be scaled manually.

When you think about which orchestration tool is right for your environment, we believe the following three key things must be considered:

  • Performance: How fast can I get containers up and running at scale? How responsive is the system when under load?
  • Simplicity: What’s the learning curve to set up and ongoing burden to maintain? How many moving parts are there?
  • Flexibility: Does it integrate with my current environment and workflows? Will my applications seamlessly move from dev to test to production? Will I be locked into a specific platform?

Docker Swarm leads in all three areas.

Kubernetes vs Docker – Performance at Scale

We released the first beta of Swarm just over a year ago, and since then we’ve made remarkable progress. In less than a year, we introduced Swarm 1.0 (November 2015) and made clear that Swarm can scale to 1,000 nodes running in a production environment, a claim our internal testing supports.

The Kubernetes team previously published a blog post detailing performance testing on a 100-node cluster. The problem for customers is that there was no way to meaningfully compare the results of these two efforts, as the test methodologies were fundamentally different.

In order to accurately assess performance across orchestration tools, there needs to be a unified framework.

To that end, Docker engaged Jeff Nickoloff, an independent technology consultant, to help create this framework and make it available to the larger container community for use in their own evaluations.

Today Jeff released the results of his independent study comparing the performance of Docker Swarm to Google Kubernetes at scale. The study and article, commissioned by Docker, tested the performance of both platforms while running 30,000 containers across 1,000 node clusters.

The tests were designed to measure two things:

  1. Container startup time: How quickly a new container can actually be brought online, versus simply being scheduled to start.
  2. System responsiveness under load: How quickly the system responds to operational requests under load (in this case, listing all the running containers).

The test harness looks at both of these measurements as the cluster is built. A fully loaded cluster is 1,000 nodes running 30,000 containers (30 containers per node).

As nodes are added to the cluster, the harness stops and measures container startup time and system responsiveness. These breakpoints occur when the cluster is 10%, 50%, 90%, 99%, and 100% full. At each of these load levels, 1,000 test iterations are executed.

What this means is that, for instance, when the cluster is 10% full (100 nodes and 3,000 containers), the harness pauses adding new nodes. It instead measures the time it takes to start up a new container (in this case the 3,001st) and how long it takes to list all the running containers. It repeats this sequence 1,000 times: the 3,001st container is created, the startup and list times are measured, and the container is removed.
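
To make the sequence concrete, a single iteration against a Swarm manager might look roughly like the following. This is an illustrative sketch only, not Jeff’s actual harness; the container name probe and the nginx image are placeholders:

$ time docker run -d --name probe nginx     # measure container startup time
$ time docker ps --no-trunc                 # measure responsiveness: list all running containers
$ docker rm -f probe                        # clean up before the next iteration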

The results show that Swarm is on average 5X faster in terms of container startup time and 7X faster in delivering operational insights necessary to run a cluster at scale in production.

Looking more closely at the results for container startup time, there is a clear performance advantage for Swarm regardless of cluster load level.

From Jeff’s blog:

Half the time Swarm will start a container in less than .5 seconds as long as the cluster is not more than 90% full. Kubernetes will start a container in over 2 seconds half of the time if the cluster is 50% full or more.

Kubernetes vs Docker – Which is faster?

Kubernetes vs Docker Speed Test

One important thing to note is that this test isn’t about container scheduling, it’s about getting containers running and doing work.

The reality is nobody cares if a container was “scheduled” to run, what they care about is that the container is actually running. I think about it like this: If I go out to eat, taking my order and handing it off to the kitchen is great, but what’s really important is how long it takes to actually get my meal prepared and delivered to my table.

One of the promises of containers is agility and responsiveness. A 5X delay in container startup time absolutely wreaks havoc on distributed applications that need near real-time responsiveness. Even in cases where real-time responsiveness isn’t needed, taking all that extra time to bring up infrastructure is painful – think about using orchestration as part of a continuous integration workflow, longer container startup times directly correspond to longer test cycle times.

It’s one thing to scale a cluster to 30,000 containers, and a completely different thing to be able to efficiently manage that environment. System responsiveness under load is critical to effective management. In a world where containers may only live for a few minutes, a significant delay in gathering real-time insight into the state of the environment means you never really know what’s happening in your infrastructure at any particular moment in time.

In order to gauge system responsiveness under load, the test harness measured the time it took to list out all the running containers at various levels of cluster load.

The result: Compared to Swarm, Kubernetes took up to 7X longer to list all the running containers as the cluster approached full load – taking over 2 minutes to list the running containers. Furthermore, Kubernetes had a 98X increase in response time (that’s not a typo; it’s 98X, not 98%) as the cluster went from 10% to 100% full.

Comparing Kubernetes vs Docker Speed Test

Kubernetes vs Docker – Simplicity

So why exactly is Kubernetes so much slower and less responsive than Swarm? It really comes down to system architecture. A quick glance at the diagrams from Jeff’s testing environments shows that Swarm has fewer moving parts than Kubernetes.

Kubernetes vs Docker Swarm

All of these components introduce a high degree of complexity to the setup process, inject latency into command execution, and make troubleshooting and remediation difficult. The diagram below depicts the number of component-level interactions in Kubernetes compared to Swarm.

Completing a command like run or list requires 8X more “hops,” which adds latency and results in a system that is 7X slower for critical orchestration functions. Another impact of these many interactions is that when a command fails to complete, it is difficult to deduce at which point the failure occurred.

Kubernetes vs Docker Swarm

Kubernetes was born out of Google’s internal Borg project, so people assume it’s designed to perform well at “cloud scale”. The test results are one proof point that Kubernetes is fairly divergent from Borg. However, it does share one thing in common with Borg: being overly complex and needing teams of cloud engineers to implement and manage it day to day.

Swarm, on the other hand, shares in a core Docker discipline of democratizing complex cloud technologies. Swarm has been built from day one with the intent of being the best way to orchestrate containers for organizations of all sizes without requiring an army of engineers.

The experience is easy to use and the same whether you are testing a small cluster on your laptop, setting up test servers in a datacenter, or running your production cloud infrastructure.

As Jeff said, “Docker Swarm is quantitatively easier to adopt and support than Kubernetes clustering components.”

Some might argue that Kubernetes is more complicated because it does more. But “doing more” does not bring any value to the table if the “more” isn’t anything you care about. And, in reality, it can actually end up being a detriment as “more” can introduce additional points of failure, increased support costs, and unnecessary infrastructure investments.

Or as Jeff describes it:

“…Kubernetes is a larger project, with more moving parts, more facets to learn, and more opportunities for failure. Even though the architecture implemented by Kubernetes can prevent a few known weaknesses in the Swarm architecture it creates opportunities for more esoteric problems and nuances.”

Kubernetes vs Docker Swarm – Flexibility

As I stated at the outset of this post, performance and simplicity are only two factors when considering an orchestration tool. The third critical element is flexibility, and flexibility itself means many things.

The previously mentioned survey results show that there are three main orchestration tools companies are using or considering: Docker Swarm, Google Kubernetes, and Amazon EC2 Container Service (ECS).

Of those three, only Docker is fully committed to ensuring that your application runs unfettered across the full gamut of infrastructure: from your developers’ machines, to your test environment, to a production deployment on the platform of your choosing, whether that is a laptop, your private datacenter, or a public cloud provider. Docker Swarm allows you to cluster hosts and orchestrate containers anywhere.

Beyond offering true portability of your workloads across public and private infrastructure, Docker features a plugin-based architecture. These plugins ensure that your Dockerized applications will work with your existing technology investments across networking, storage, and compute, and can be moved to a different network or storage provider without any change to your application code.

In the end a compelling orchestration tool is a necessary part of any Container as a Service (CaaS) environment. The reality is that orchestration is not the platform but only one piece of a much larger technology stack.

We know this because the same survey also tells us that users want tools that address the full application lifecycle, feature integrated tooling for both their developers and operations engineers, and support the widest range of developer tools.

Kubernetes vs Docker Swarm - Platform Requirements

Docker Swarm allows organizations to leverage the full power of the native Docker CLI and APIs. It allows developers to work in a consistent way, regardless of where their applications are developed or where they will run. Docker works with the infrastructure investments you have today and smooths your transition to different providers. Our design philosophy puts you – the user – and your applications first.

Kubernetes Container Environment Variables Tutorial

Here Coding Compiler is sharing a tutorial on Kubernetes Container Environment Variables. In this tutorial, we will discuss Kubernetes containers, Container Environment Variables, and Kubernetes Container Lifecycle Hooks. Let’s start learning about Kubernetes. Happy learning.

Container Environment Variables

The Kubernetes Container environment provides several important resources to Containers:

  • A filesystem, which is a combination of an image and one or more volumes.
  • Information about the Container itself.
  • Information about other objects in the cluster.

Container information

The hostname of a Container is the name of the Pod in which the Container is running. It is available through the hostname command or the gethostname function call in libc.

The Pod name and namespace are available as environment variables through the downward API.

User defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the Docker image.
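
As a minimal sketch of how this looks in a Pod definition (the Pod name env-demo and the variable GREETING below are illustrative, not from this tutorial), user-defined variables and downward API variables are declared under env:

apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: GREETING                 # user-defined variable from the Pod definition
      value: "hello"
    - name: MY_POD_NAME              # Pod name exposed via the downward API
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE         # Pod namespace exposed via the downward API
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace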

Cluster information

A list of all services that were running when a Container was created is available to that Container as environment variables. Those environment variables match the syntax of Docker links.

For a service named foo that maps to a container named bar, the following variables are defined:

FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>

Services have dedicated IP addresses and are available to the Container via DNS, if the DNS add-on is enabled.

Container Lifecycle Hooks Overview

Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular, Kubernetes provides Containers with lifecycle hooks. The hooks enable Containers to be aware of events in their management lifecycle and run code implemented in a handler when the corresponding lifecycle hook is executed.

Container hooks

There are two hooks that are exposed to Containers:

PostStart

This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.

PreStop

This hook is called immediately before a container is terminated. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler.

A more detailed description of the termination behavior can be found in Termination of Pods.

Hook handler implementations

Containers can access a hook by implementing and registering a handler for that hook. There are two types of hook handlers that can be implemented for Containers:

  • Exec – Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container.
  • HTTP – Executes an HTTP request against a specific endpoint on the Container.
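
As a minimal sketch of how both handler types are registered in a Pod spec (the Pod name lifecycle-demo, the echo command, and the /shutdown path below are illustrative assumptions), lifecycle hooks are declared under the container’s lifecycle field:

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:                        # Exec handler: runs a command inside the container
          command: ["/bin/sh", "-c", "echo started > /usr/share/message"]
      preStop:
        httpGet:                     # HTTP handler: sends a request to an endpoint on the container
          path: /shutdown
          port: 80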

Hook handler execution

When a Container lifecycle management hook is called, the Kubernetes management system executes the handler in the Container registered for that hook.

Hook handler calls are synchronous within the context of the Pod containing the Container. This means that for a PostStart hook, the Container ENTRYPOINT and hook fire asynchronously. However, if the hook takes too long to run or hangs, the Container cannot reach a running state.

The behavior is similar for a PreStop hook. If the hook hangs during execution, the Pod remains in a Terminating state and is killed after its terminationGracePeriodSeconds expires. If a PostStart or PreStop hook fails, the Container is killed.

Users should make their hook handlers as lightweight as possible. There are cases, however, when long running commands make sense, such as when saving state prior to stopping a Container.

Hook delivery guarantees

Hook delivery is intended to be at least once, which means that a hook may be called multiple times for any given event, such as for PostStart or PreStop. It is up to the hook implementation to handle this correctly.

Generally, only single deliveries are made. If, for example, an HTTP hook receiver is down and is unable to take traffic, there is no attempt to resend. In some rare cases, however, double delivery may occur. For instance, if a kubelet restarts in the middle of sending a hook, the hook might be resent after the kubelet comes back up.

Debugging Hook handlers

The logs for a Hook handler are not exposed in Pod events. If a handler fails for some reason, it broadcasts an event. For PostStart, this is the FailedPostStartHook event, and for PreStop, this is the FailedPreStopHook event. You can see these events by running kubectl describe pod <pod_name>.

Kubernetes Images – Kubernetes Tutorial

Here Coding Compiler is sharing a tutorial on Kubernetes Images. In this tutorial, we will discuss updating images, using a private registry, Google Container Registry, AWS EC2 Container Registry, and Azure Container Registry (ACR). Let’s start learning about Kubernetes. Happy learning.

Kubernetes Images

You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.

The image property of a container supports the same syntax as the docker command does, including private registries and tags.

Updating Images

The default pull policy is IfNotPresent, which causes the kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:

  • set the imagePullPolicy of the container to Always;
  • use :latest as the tag for the image to use;
  • enable the AlwaysPullImages admission controller.

If you do not specify a tag for your image, it is assumed to be :latest, and the pull policy correspondingly defaults to Always.
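
As a minimal sketch (the Pod name always-pull-demo and the image reference below are illustrative), forcing a pull on every container start looks like this in a Pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: always-pull-demo
spec:
  containers:
  - name: app
    image: myregistry.example.com/myapp:1.4   # illustrative image reference
    imagePullPolicy: Always                   # the kubelet pulls the image on every start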

Kubernetes Images – Using a Private Registry

Private registries may require keys to read images from them. Credentials can be provided in several ways:

  • Using Google Container Registry
    • Per-cluster
    • automatically configured on Google Compute Engine or Google Kubernetes Engine
    • all pods can read the project’s private registry
  • Using AWS EC2 Container Registry (ECR)
    • use IAM roles and policies to control access to ECR repositories
    • automatically refreshes ECR login credentials
  • Using Azure Container Registry (ACR)
  • Configuring Nodes to Authenticate to a Private Registry
    • all pods can read any configured private registries
    • requires node configuration by the cluster administrator
  • Pre-pulling Images
    • all pods can use any images cached on a node
    • requires root access to all nodes to setup
  • Specifying ImagePullSecrets on a Pod
    • only pods which provide their own keys can access the private registry

Each option is described in more detail below.
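
For the last option, a minimal sketch of a Pod that supplies its own registry key looks like the following; the Secret name regcred and the image reference are illustrative and assume a docker-registry Secret has already been created:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg-demo
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # illustrative private image
  imagePullSecrets:
  - name: regcred                              # pre-created docker-registry Secret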

Using Google Container Registry

Kubernetes has native support for the Google Container Registry (GCR), when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag).

All pods in a cluster will have read access to images in this registry.

The kubelet will authenticate to GCR using the instance’s Google service account. The service account on the instance will have the https://www.googleapis.com/auth/devstorage.read_only scope, so it can pull from the project’s GCR, but not push.

Using AWS EC2 Container Registry

Kubernetes has native support for the AWS EC2 Container Registry, when nodes are AWS EC2 instances.

Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition.

All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.

The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this:

  1. ecr:GetAuthorizationToken
  2. ecr:BatchCheckLayerAvailability
  3. ecr:GetDownloadUrlForLayer
  4. ecr:GetRepositoryPolicy
  5. ecr:DescribeRepositories
  6. ecr:ListImages
  7. ecr:BatchGetImage

Requirements:

  • You must be using kubelet version v1.2.0 or newer. (e.g. run /usr/bin/kubelet --version=true).
  • If your nodes are in region A and your registry is in a different region B, you need version v1.3.0 or newer.
  • ECR must be offered in your region

Troubleshooting:

  • Verify all requirements above.
  • Get $REGION (e.g. us-west-2) credentials on your workstation. SSH into the host and run Docker manually with those creds. Does it work?
  • Verify kubelet is running with --cloud-provider=aws.
  • Check kubelet logs (e.g. journalctl -u kubelet) for log lines like:
    • plugins.go:56] Registering credential provider: aws-ecr-key
    • provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider

Using Azure Container Registry (ACR)

When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the azure-cli command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure Container Registry documentation.

Once you have created your container registry, you will use the following credentials to login:

  • DOCKER_USER: service principal, or admin username
  • DOCKER_PASSWORD: service principal password, or admin user password
  • DOCKER_REGISTRY_SERVER: ${some-registry-name}.azurecr.io
  • DOCKER_EMAIL: ${some-email-address}

Once you have those variables filled in you can configure a Kubernetes Secret and use it to deploy a Pod.
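
As a minimal sketch (the Secret name acr-secret is illustrative), the Secret can be created with kubectl and then referenced from the Pod’s imagePullSecrets field, as in the sketch shown earlier under Specifying ImagePullSecrets on a Pod:

$ kubectl create secret docker-registry acr-secret \
    --docker-server=${some-registry-name}.azurecr.io \
    --docker-username=$DOCKER_USER \
    --docker-password=$DOCKER_PASSWORD \
    --docker-email=$DOCKER_EMAIL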

Configuring Nodes to Authenticate to a Private Repository

Note: if you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.

Note: if you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.

Note: this approach is suitable if you can control node configuration. It will not work reliably on GCE, or on any other cloud provider that does automatic node replacement.

Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put this in the $HOME of user root on a kubelet, then docker will use it.

Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:

  1. Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
  2. View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
  3. Get a list of your nodes, for example:
    • if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
    • if you want to get the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
  4. Copy your local .docker/config.json to the home directory of root on each node.
    • for example: for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done

Verify by creating a pod that uses a private image, e.g.:

$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pod "private-image-test-1" created
$

If everything is working, then, after a few moments, you should see:

$ kubectl logs private-image-test-1
SUCCESS

If it failed, then you will see:

$ kubectl describe pods/private-image-test-1 | grep "Failed"
  Fri, 26 Jun 2015 15:36:13 -0700    Fri, 26 Jun 2015 15:39:13 -0700    19    {kubelet node-i2hq}    spec.containers{uses-private-image}    failed        Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found

You must ensure all nodes in the cluster have the same .docker/config.json. Otherwise, pods will run on some nodes and fail to run on others. For example, if you use node autoscaling, then each instance template needs to include the .docker/config.json or mount a drive that contains it.

All pods will have read access to images in any private registry once private registry keys are added to the .docker/config.json.

Kubernetes API Extensions – Kubernetes Tutorial

Here Coding Compiler is sharing a tutorial on Kubernetes API extensions. Let’s start learning about Kubernetes. Happy learning.

Kubernetes API Extensions

There are several types of Kubernetes API extensions. They are:

User-Defined Types

Consider adding a Custom Resource to Kubernetes if you want to define new controllers, application configuration objects or other declarative APIs, and to manage them using Kubernetes tools, such as kubectl.

Do not use a Custom Resource as data storage for application, user, or monitoring data.
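
As a minimal sketch of how a Custom Resource is declared (the group example.com and the kind CronTab are illustrative, not from this tutorial), a CustomResourceDefinition might look like this:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com     # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab

Once the definition is registered, CronTab objects can be created, listed, and deleted with kubectl just like built-in resources.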

Combining New APIs with Automation

Often, when you add a new API, you also add a control loop that reads and/or writes the new APIs. When the combination of a Custom API and a control loop is used to manage a specific, usually stateful, application, this is called the Operator pattern. Custom APIs and control loops can also be used to control other resources, such as storage, policies, and so on.

Changing Built-in Resources

When you extend the Kubernetes API by adding custom resources, the added resources always fall into a new API Group. You cannot replace or change existing API groups. Adding an API does not directly let you affect the behavior of existing APIs (e.g. Pods), but API Access Extensions do.

API Access Extensions

When a request reaches the Kubernetes API Server, it is first Authenticated, then Authorized, and then subject to various types of Admission Control. See Accessing the API for more on this flow.

Each of these steps offers extension points.

Kubernetes has several built-in authentication methods that it supports. It can also sit behind an authenticating proxy, and it can send a token from an Authorization header to a remote service for verification (a webhook). All of these methods are covered in the Authentication documentation.

Authentication

Authentication maps headers or certificates in all requests to a username for the client making the request.

Kubernetes provides several built-in authentication methods, and an Authentication webhook method if those don’t meet your needs.

Authorization

Authorization determines whether specific users can read, write, and do other operations on API resources. It works at the level of whole resources – it doesn’t discriminate based on arbitrary object fields. If the built-in authorization options don’t meet your needs, an Authorization webhook allows calling out to user-provided code to make the authorization decision.

Dynamic Admission Control

After a request is authorized, if it is a write operation, it also goes through Admission Control steps. In addition to the built-in steps, there are several extensions:

  • The Image Policy webhook restricts what images can be run in containers.
  • To make arbitrary admission control decisions, a general Admission webhook can be used. Admission Webhooks can reject creations or updates.
  • Initializers are controllers that can modify objects before they are created. Initializers can modify initial object creations but cannot affect updates to objects. Initializers can also reject objects.
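
As a rough, hedged sketch of how a general Admission webhook is registered (the name policy-check.example.com, the policy namespace, the policy-webhook Service, and the /validate path are assumptions, not from this tutorial), a ValidatingWebhookConfiguration tells the apiserver to call a remote service before persisting matching objects:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: policy-check.example.com
webhooks:
- name: policy-check.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: policy          # hypothetical namespace running the webhook service
      name: policy-webhook       # hypothetical Service fronting the webhook backend
      path: /validate
    caBundle: <base64-encoded CA certificate>   # placeholder
  failurePolicy: Fail            # reject requests if the webhook is unreachable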

Infrastructure Extensions

There are several infrastructure extension points in Kubernetes. They are:

Storage Plugins

Flex Volumes allow users to mount volume types without built-in support by having the Kubelet call a Binary Plugin to mount the volume.

Device Plugins

Device Plugins allow a node to discover new node resources, in addition to the built-in ones like cpu and memory.

Network Plugins

Different networking fabrics can be supported via node-level Network Plugins.

Scheduler Extensions

The scheduler is a special type of controller that watches pods and assigns pods to nodes. The default scheduler can be replaced entirely while continuing to use other Kubernetes components, or multiple schedulers can run at the same time.

This is a significant undertaking, and almost all Kubernetes users find they do not need to modify the scheduler.

The scheduler also supports a webhook that permits a webhook backend (scheduler extension) to filter and prioritize the nodes chosen for a pod.

Kubernetes Cluster – Kubernetes Tutorial

Here Coding Compiler is sharing a tutorial on the Kubernetes cluster: a cluster overview, cluster configuration, and Kubernetes extensions. Let’s start learning about Kubernetes. Happy learning.

Kubernetes Cluster

Kubernetes is highly configurable and extensible. As a result, there is rarely a need to fork or submit patches to the Kubernetes project code.

This guide describes the options for customizing a Kubernetes cluster. It is aimed at Cluster Operators who want to understand how to adapt their Kubernetes cluster to the needs of their work environment.

Developers who are prospective Platform Developers or Kubernetes Project Contributors will also find it useful as an introduction to what extension points and patterns exist, and their trade-offs and limitations.

Kubernetes Cluster Overview

Customization approaches can be broadly divided into configuration, which only involves changing flags, local configuration files, or API resources; and extensions, which involve running additional programs or services. This document is primarily about extensions.

Kubernetes Cluster Configuration

Configuration files and flags are documented in the Reference section of the online documentation, under each binary:

  • kubelet
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

Flags and configuration files may not always be changeable in a hosted Kubernetes service or a distribution with managed installation. When they are changeable, they are usually only changeable by the cluster administrator.

Also, they are subject to change in future Kubernetes versions, and setting them may require restarting processes. For those reasons, they should be used only when there are no other options.

Kubernetes Cluster Extensions

Extensions are software components that extend and deeply integrate with Kubernetes. They adapt it to support new types and new kinds of hardware.

Most cluster administrators will use a hosted or distribution instance of Kubernetes. As a result, most Kubernetes users will need to install extensions and fewer will need to author new ones.

Kubernetes Cluster Extension Patterns

Kubernetes is designed to be automated by writing client programs. Any program that reads and/or writes to the Kubernetes API can provide useful automation. Automation can run on the cluster or off it.

By following the guidance in this doc you can write highly available and robust automation. Automation generally works with any Kubernetes cluster, including hosted clusters and managed installations.

There is a specific pattern for writing client programs that work well with Kubernetes called the Controller pattern. Controllers typically read an object’s .spec, possibly do things, and then update the object’s .status.

A controller is a client of Kubernetes. When Kubernetes is the client and calls out to a remote service, it is called a Webhook. The remote service is called a Webhook Backend. Like Controllers, Webhooks do add a point of failure.

In the webhook model, Kubernetes makes a network request to a remote service. In the Binary Plugin model, Kubernetes executes a binary (program). Binary plugins are used by the kubelet.

Below is a diagram showing how the extension points interact with the Kubernetes control plane.

Kubernetes control plane

Kubernetes Cluster Extension Points

This diagram shows the extension points in a Kubernetes system.

Kubernetes Extension Points
  1. Users often interact with the Kubernetes API using kubectl. Kubectl plugins extend the kubectl binary. They only affect the individual user’s local environment, and so cannot enforce site-wide policies.
  2. The apiserver handles all requests. Several types of extension points in the apiserver allow authenticating requests, or blocking them based on their content, editing content, and handling deletion. These are described in the API Access Extensions section.
  3. The apiserver serves various kinds of resources. Built-in resource kinds, like pods, are defined by the Kubernetes project and can’t be changed. You can also add resources that you define, or that other projects have defined, called Custom Resources, as explained in the Custom Resources section. Custom Resources are often used with API Access Extensions.
  4. The Kubernetes scheduler decides which nodes to place pods on. There are several ways to extend scheduling. These are described in the Scheduler Extensions section.
  5. Much of the behavior of Kubernetes is implemented by programs called Controllers which are clients of the API-Server. Controllers are often used in conjunction with Custom Resources.
  6. The kubelet runs on servers, and helps pods appear like virtual servers with their own IPs on the cluster network. Network Plugins allow for different implementations of pod networking.
  7. The kubelet also mounts and unmounts volumes for containers. New types of storage can be supported via Storage Plugins.

If you are unsure where to start, this flowchart can help. Note that some solutions may involve several types of extensions.

Types of Kubernetes Extensions – Flowchart
