Kubernetes Images – Kubernetes Tutorial. Here Coding compiler is sharing a tutorial on Kubernetes Images. In this tutorial, we will discuss Updating Images, Using a Private Registry, Using Google Container Registry, Using AWS EC2 Container Registry, and Using Azure Container Registry (ACR). Let’s start learning about Kubernetes. Happy learning.
Kubernetes Images
You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
The image property of a container supports the same syntax as the docker command does, including private registries and tags.
Updating Images
The default pull policy is IfNotPresent which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
- set the imagePullPolicy of the container to Always;
- use :latest as the tag for the image to use;
- enable the AlwaysPullImages admission controller.
If you do not specify a tag for your image, it is assumed to be :latest, and the pull policy correspondingly defaults to Always.
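As an illustration, a minimal Pod spec that forces a pull on every container start might look like the sketch below; the pod name and the image name my-registry.example.com/app:1.0 are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: always-pull-example
spec:
  containers:
    - name: app
      # explicit tag; if the tag were omitted, :latest would be assumed
      image: my-registry.example.com/app:1.0
      # force the kubelet to pull the image on every container start
      imagePullPolicy: Always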
Kubernetes Images – Using a Private Registry
Private registries may require keys to read images from them. Credentials can be provided in several ways:
- Using Google Container Registry
- Per-cluster
- automatically configured on Google Compute Engine or Google Kubernetes Engine
- all pods can read the project’s private registry
- Using AWS EC2 Container Registry (ECR)
- use IAM roles and policies to control access to ECR repositories
- automatically refreshes ECR login credentials
- Using Azure Container Registry (ACR)
- Configuring Nodes to Authenticate to a Private Registry
- all pods can read any configured private registries
- requires node configuration by the cluster administrator
- Pre-pulling Images
- all pods can use any images cached on a node
- requires root access to all nodes to set up
- Specifying ImagePullSecrets on a Pod
- only pods which provide their own keys can access the private registry

Each option is described in more detail below.
Using Google Container Registry
Kubernetes has native support for the Google Container Registry (GCR), when running on Google Compute Engine (GCE). If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag).
All pods in a cluster will have read access to images in this registry.
The kubelet will authenticate to GCR using the instance’s Google service account. The service account on the instance will have the https://www.googleapis.com/auth/devstorage.read_only scope, so it can pull from the project’s GCR, but not push.
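As a sketch, a Pod running on GCE or Google Kubernetes Engine simply references the image by its full GCR name; my_project and image:tag below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: gcr-image-example
spec:
  containers:
    - name: app
      # full GCR image name; the kubelet authenticates using the instance's service account
      image: gcr.io/my_project/image:tag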
Using AWS EC2 Container Registry
Kubernetes has native support for the AWS EC2 Container Registry, when nodes are AWS EC2 instances.
Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition.
All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.
The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this:
- ecr:GetAuthorizationToken
- ecr:BatchCheckLayerAvailability
- ecr:GetDownloadUrlForLayer
- ecr:GetRepositoryPolicy
- ecr:DescribeRepositories
- ecr:ListImages
- ecr:BatchGetImage
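As a sketch, these actions can be granted to the nodes’ IAM role with an inline policy along the following lines (this example does not scope the Resource down to specific repositories; adjust as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}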
Requirements:
- You must be using kubelet version v1.2.0 or newer (e.g. run /usr/bin/kubelet --version=true).
- If your nodes are in region A and your registry is in a different region B, you need version v1.3.0 or newer.
- ECR must be offered in your region.
Troubleshooting:
- Verify all requirements above.
- Get $REGION (e.g. us-west-2) credentials on your workstation. SSH into the host and run Docker manually with those creds. Does it work? (A sketch of this check follows this list.)
- Verify kubelet is running with --cloud-provider=aws.
- Check kubelet logs (e.g. journalctl -u kubelet) for log lines like:
plugins.go:56] Registering credential provider: aws-ecr-key
provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider
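A rough sketch of the manual check above, assuming the v1 AWS CLI (which provides aws ecr get-login) and the placeholder image name ACCOUNT.dkr.ecr.us-west-2.amazonaws.com/imagename:tag:

$ # get-login prints a docker login command for the registry; run it directly
$ $(aws ecr get-login --region us-west-2)
$ # then try pulling the image with those credentials
$ docker pull ACCOUNT.dkr.ecr.us-west-2.amazonaws.com/imagename:tag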
Using Azure Container Registry (ACR)
When using Azure Container Registry you can authenticate using either an admin user or a service principal. In either case, authentication is done via standard Docker authentication. These instructions assume the azure-cli command line tool.
You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure container registry documentation.
Once you have created your container registry, you will use the following credentials to login:
- DOCKER_USER: service principal, or admin username
- DOCKER_PASSWORD: service principal password, or admin user password
- DOCKER_REGISTRY_SERVER: ${some-registry-name}.azurecr.io
- DOCKER_EMAIL: ${some-email-address}
Once you have those variables filled in you can configure a Kubernetes Secret and use it to deploy a Pod.
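A sketch of that step with the variables above and a hypothetical secret name acr-secret (kubectl create secret docker-registry packages Docker registry credentials into a Secret):

$ kubectl create secret docker-registry acr-secret \
    --docker-server=$DOCKER_REGISTRY_SERVER \
    --docker-username=$DOCKER_USER \
    --docker-password=$DOCKER_PASSWORD \
    --docker-email=$DOCKER_EMAIL

The Pod then references the Secret via imagePullSecrets, for example:

apiVersion: v1
kind: Pod
metadata:
  name: acr-image-example
spec:
  containers:
    - name: app
      # placeholder image hosted in the ACR registry
      image: ${some-registry-name}.azurecr.io/image:tag
  imagePullSecrets:
    - name: acr-secret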
Configuring Nodes to Authenticate to a Private Registry
Note: if you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.
Note: if you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.
Note: this approach is suitable if you can control node configuration. It will not work reliably on GCE, or any other cloud provider that does automatic node replacement.
Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put this in the $HOME of user root on a kubelet, then docker will use it.
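For reference, a config.json written by docker login typically looks roughly like this; the registry address is a placeholder and the auth value is simply base64 of my-user:my-password:

{
  "auths": {
    "my-registry.example.com": {
      "auth": "bXktdXNlcjpteS1wYXNzd29yZA==",
      "email": "user@example.com"
    }
  }
}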
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
- Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
- View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
- Get a list of your nodes, for example:
  - if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
  - if you want to get the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
- Copy your local .docker/config.json to the home directory of root on each node, for example:
  for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done
Verify by creating a pod that uses a private image, e.g.:
$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pod "private-image-test-1" created
$
If everything is working, then, after a few moments, you should see:
$ kubectl logs private-image-test-1
SUCCESS
If it failed, then you will see:
$ kubectl describe pods/private-image-test-1 | grep "Failed"
Fri, 26 Jun 2015 15:36:13 -0700 Fri, 26 Jun 2015 15:39:13 -0700 19 {kubelet node-i2hq} spec.containers{uses-private-image} failed Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found
You must ensure all nodes in the cluster have the same .docker/config.json. Otherwise, pods will run on some nodes and fail to run on others. For example, if you use node autoscaling, then each instance template needs to include the .docker/config.json or mount a drive that contains it.

All pods will have read access to images in any private registry once private registry keys are added to the .docker/config.json.
RELATED KUBERNETES TUTORIALS
Kubernetes Names And Namespaces
Kubernetes Interview Questions