Kubernetes: Containers

Images

You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.

The image property of a container supports the same syntax as the docker command does, including private registries and tags.
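
For instance, a minimal pod spec (the registry host, repository, and tag below are placeholders) can reference an image in a private registry with an explicit port and tag:

apiVersion: v1
kind: Pod
metadata:
  name: image-syntax-demo           # hypothetical pod name
spec:
  containers:
    - name: app
      # REGISTRY[:PORT]/REPOSITORY[:TAG], the same syntax the docker CLI accepts
      image: registry.example.com:5000/team/app:v1.0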

Updating Images

The default pull policy is IfNotPresent, which causes the kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following (see the example after the list):

  • set the imagePullPolicy of the container to Always;
  • use :latest as the tag for the image to use;
  • enable the AlwaysPullImages admission controller.
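
As a minimal sketch (the pod name and image are placeholders), forcing a pull on every container start looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: always-pull-demo
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:v1.0
      imagePullPolicy: Always     # pull the image every time the container starts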

If you do not specify a tag for your image, it is assumed to be :latest, and the pull policy correspondingly defaults to Always.

Note that you should avoid using the :latest tag; see Best Practices for Configuration for more information.

Using a Private Registry

Private registries may require keys to read images from them. Credentials can be provided in several ways:

  • Using Google Container Registry

    • Per-cluster
    • automatically configured on Google Compute Engine or Google Kubernetes Engine
    • all pods can read the project’s private registry
  • Using AWS EC2 Container Registry (ECR)
    • use IAM roles and policies to control access to ECR repositories
    • automatically refreshes ECR login credentials
  • Using Azure Container Registry (ACR)
  • Configuring Nodes to Authenticate to a Private Registry
    • all pods can read any configured private registries
    • requires node configuration by cluster administrator
  • Pre-pulling Images
    • all pods can use any images cached on a node
    • requires root access to all nodes to setup
  • Specifying ImagePullSecrets on a Pod
    • only pods which provide their own keys can access the private registry

Each option is described in more detail below.

Using Google Container Registry

Kubernetes has native support for the Google Container Registry (GCR), when running on Google Compute Engine (GCE).

If you are running your cluster on GCE or Google Kubernetes Engine, simply use the full image name (e.g. gcr.io/my_project/image:tag).

All pods in a cluster will have read access to images in this registry.

The kubelet will authenticate to GCR using the instance's Google service account. The service account on the instance will have the https://www.googleapis.com/auth/devstorage.read_only scope, so it can pull from the project's GCR, but not push.

Using AWS EC2 Container Registry

Kubernetes has native support for the AWS EC2 Container Registry, when nodes are AWS EC2 instances.

Simply use the full image name (e.g. ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod definition.

All users of the cluster who can create pods will be able to run pods that use any of the images in the ECR registry.

The kubelet will fetch and periodically refresh ECR credentials. It needs the following permissions to do this (an example IAM policy follows the list):

  • ecr:GetAuthorizationToken
  • ecr:BatchCheckLayerAvailability
  • ecr:GetDownloadUrlForLayer
  • ecr:GetRepositoryPolicy
  • ecr:DescribeRepositories
  • ecr:ListImages
  • ecr:BatchGetImage
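
A minimal IAM policy sketch granting these permissions to the node instance role might look like the following; scoping "Resource" more narrowly is possible for the repository-level actions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}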

Requirements:

  • You must be using kubelet version v1.2.0 or newer. (e.g. run /usr/bin/kubelet --version=true).
  • If your nodes are in region A and your registry is in a different region B, you need version v1.3.0 or newer.
  • ECR must be offered in your region

Troubleshooting:

  • Verify all requirements above.
  • Get $REGION (e.g. us-west-2) credentials on your workstation. SSH into the host and run Docker manually with those creds. Does it work?
  • Verify kubelet is running with --cloud-provider=aws.
  • Check kubelet logs (e.g. journalctl -u kubelet) for log lines like:
    • plugins.go:56] Registering credential provider: aws-ecr-key
    • provider.go:91] Refreshing cache for provider: *aws_credentials.ecrProvider

Using Azure Container Registry (ACR)

When using Azure Container Registry you can authenticate using either an admin user or a service principal.

In either case, authentication is done via standard Docker authentication.

These instructions assume the azure-cli command line tool.

You first need to create a registry and generate credentials; complete documentation for this can be found in the Azure Container Registry documentation.

Once you have created your container registry, you will use the following credentials to login:

  • DOCKER_USER : service principal, or admin username
  • DOCKER_PASSWORD: service principal password, or admin user password
  • DOCKER_REGISTRY_SERVER: ${some-registry-name}.azurecr.io
  • DOCKER_EMAIL: ${some-email-address}

Once you have those variables filled in you can configure a Kubernetes Secret and use it to deploy a Pod.
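
For example, assuming the variables above are set in your shell (the secret name acr-secret is a placeholder), you could create the secret like this and then reference it from a pod's imagePullSecrets:

$ kubectl create secret docker-registry acr-secret \
    --docker-server=${some-registry-name}.azurecr.io \
    --docker-username=$DOCKER_USER \
    --docker-password=$DOCKER_PASSWORD \
    --docker-email=$DOCKER_EMAIL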

Configuring Nodes to Authenticate to a Private Registry

Note: if you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.

Note: if you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.

Note: this approach is suitable if you can control node configuration. It will not work reliably on GCE or any other cloud provider that does automatic node replacement.

Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put this in the $HOME of user root on a kubelet, then docker will use it.

Here are the recommended steps to configure your nodes to use a private registry.

In this example, run these on your desktop/laptop:

  1. Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
  2. View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
  3. Get a list of your nodes, for example:
    • if you want the names: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}')
    • if you want to get the IPs: nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
  4. Copy your local .docker/config.json to the home directory of root on each node.
    • for example: for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done

Verify by creating a pod that uses a private image, e.g.:

$ cat <<EOF > /tmp/private-image-test-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test-1
spec:
  containers:
    - name: uses-private-image
      image: $PRIVATE_IMAGE_NAME
      imagePullPolicy: Always
      command: [ "echo", "SUCCESS" ]
EOF
$ kubectl create -f /tmp/private-image-test-1.yaml
pod "private-image-test-1" created
$

If everything is working, then, after a few moments, you should see:

$ kubectl logs private-image-test-1
SUCCESS

If it failed, then you will see:

$ kubectl describe pods/private-image-test-1 | grep "Failed"
  Fri, 26 Jun 2015 15:36:13 -0700    Fri, 26 Jun 2015 15:39:13 -0700    19    {kubelet node-i2hq}    spec.containers{uses-private-image}    failed        Failed to pull image "user/privaterepo:v1": Error: image user/privaterepo:v1 not found

You must ensure all nodes in the cluster have the same .docker/config.json.

Otherwise, pods will run on some nodes and fail to run on others.

For example, if you use node autoscaling, then each instance template needs to include the .docker/config.json or mount a drive that contains it.

All pods will have read access to images in any private registry once private registry keys are added to the .docker/config.json.

This was tested with a private docker repository as of 26 June with Kubernetes version v0.19.3.

It should also work for a private registry such as quay.io, but that has not been tested.

Pre-pulling Images

Note: if you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.

Note: this approach is suitable if you can control node configuration. It will not work reliably on GCE or any other cloud provider that does automatic node replacement.

By default, the kubelet will try to pull each image from the specified registry.

However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
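
For example, a pod that must only ever use a locally cached image (the pod name and image are placeholders) would be declared as:

apiVersion: v1
kind: Pod
metadata:
  name: prepulled-demo
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:v1.0
      imagePullPolicy: Never      # never pull; fail if the image is not already on the node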

If you want to rely on pre-pulled images as a substitute for registry authentication, you must ensure all nodes in the cluster have the same pre-pulled images.

This can be used to preload certain images for speed or as an alternative to authenticating to a private registry.

All pods will have read access to any pre-pulled images.

Specifying ImagePullSecrets on a Pod

Note: This approach is currently the recommended approach for Google Kubernetes Engine, GCE, and any cloud-providers where node creation is automated.

Kubernetes supports specifying registry keys on a pod.

Creating a Secret with a Docker Config

Run the following command, substituting the appropriate uppercase values:

$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.

If you need access to multiple registries, you can create one secret for each registry.

Kubelet will merge any imagePullSecrets into a single virtual .docker/config.json when pulling images for your Pods.

Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace.
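
For example, to create the same secret in the awesomeapps namespace used later in this document, you could run:

$ kubectl create secret docker-registry myregistrykey \
    --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASSWORD \
    --docker-email=DOCKER_EMAIL \
    --namespace=awesomeapps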

Bypassing kubectl create secrets

If for some reason you need multiple items in a single .docker/config.json, or need control not given by the above command, then you can create a secret using JSON or YAML.

Be sure to:

  • set the name of the data item to .dockerconfigjson
  • base64 encode the Docker config file and paste that string, unbroken, as the value of the data[".dockerconfigjson"] field
  • set type to kubernetes.io/dockerconfigjson

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: awesomeapps
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
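
One way to produce the value for data[".dockerconfigjson"] is to base64 encode your existing Docker config file and strip any line breaks the encoder inserts, for example:

$ base64 ~/.docker/config.json | tr -d '\n'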

If you get the error message error: no objects passed to create, it may mean the base64 encoded string is invalid.

If you get an error message like Secret "myregistrykey" is invalid: data[.dockerconfigjson]: invalid value ..., it means the data was successfully base64-decoded but could not be parsed as a .docker/config.json file.

Referring to imagePullSecrets on a Pod

Now, you can create pods which reference that secret by adding an imagePullSecrets section to a pod definition.

apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
    - name: foo
      image: janedoe/awesomeapp:v1
  imagePullSecrets:
    - name: myregistrykey

This needs to be done for each pod that is using a private registry.

However, the setting of this field can be automated by setting imagePullSecrets in a ServiceAccount resource.
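
For instance, patching the default service account in a namespace makes new pods in that namespace pick up the secret automatically (a sketch, assuming the secret myregistrykey already exists there):

$ kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'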

You can use this in conjunction with a per-node .docker/config.json. The credentials will be merged. This approach will work on Google Kubernetes Engine.

Use Cases

There are a number of solutions for configuring private registries. Here are some common use cases and suggested solutions.

  1. Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.

    • Use public images on the Docker hub.

      • No configuration required.
      • On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.
  2. Cluster running some proprietary images which should be hidden from those outside the company, but visible to all cluster users.
    • Use a hosted private Docker registry.

      • It may be hosted on the Docker Hub, or elsewhere.
      • Manually configure .docker/config.json on each node as described above.
    • Or, run an internal private registry behind your firewall with open read access.
      • No Kubernetes configuration is required.
    • Or, when on GCE/Google Kubernetes Engine, use the project’s Google Container Registry.
      • It will work better with cluster autoscaling than manual node configuration.
    • Or, on a cluster where changing the node configuration is inconvenient, use imagePullSecrets.
  3. Cluster with proprietary images, a few of which require stricter access control.
    • Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.
    • Move sensitive data into a "Secret" resource, instead of packaging it in an image.
  4. A multi-tenant cluster where each tenant needs its own private registry.
    • Ensure the AlwaysPullImages admission controller is active. Otherwise, all Pods of all tenants potentially have access to all images.
    • Run a private registry with authorization required.
    • Generate registry credentials for each tenant, put them into a secret, and populate the secret into each tenant's namespace.
    • The tenant then adds that secret to the imagePullSecrets of each pod in its namespace.

Container Environment Variables

This page describes the resources available to Containers in the Container environment.

Container environment

The Kubernetes Container environment provides several important resources to Containers:

  • A filesystem, which is a combination of an image and one or more volumes.
  • Information about the Container itself.
  • Information about other objects in the cluster.

Container information

The hostname of a Container is the name of the Pod in which the Container is running. It is available through the hostname command or the gethostname function call in libc.

The Pod name and namespace are available as environment variables through the downward API.

User defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the Docker image.
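
A minimal sketch combining a user-defined variable with the downward API (the pod and variable names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "env"]     # print the environment and exit
      env:
        - name: GREETING               # user-defined variable
          value: "hello"
        - name: MY_POD_NAME            # filled in by the downward API
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace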

Cluster information

A list of all services that were running when a Container was created is available to that Container as environment variables. Those environment variables match the syntax of Docker links.

For a service named foo that maps to a container port named bar, the following variables are defined:

FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>

Services have dedicated IP addresses and are available to the Container via DNS, if the DNS add-on is enabled.

Container Lifecycle Hooks

This page describes how kubelet-managed Containers can use the Container lifecycle hook framework to run code triggered by events during their management lifecycle.

Overview

Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular, Kubernetes provides Containers with lifecycle hooks.

The hooks enable Containers to be aware of events in their management lifecycle and run code implemented in a handler when the corresponding lifecycle hook is executed.

Container hooks

There are two hooks that are exposed to Containers:

PostStart

This hook executes immediately after a container is created.

However, there is no guarantee that the hook will execute before the container ENTRYPOINT.

No parameters are passed to the handler.

PreStop

This hook is called immediately before a container is terminated.

It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent.

No parameters are passed to the handler.

A more detailed description of the termination behavior can be found in Termination of Pods.

Hook handler implementations

Containers can access a hook by implementing and registering a handler for that hook.

There are two types of hook handlers that can be implemented for Containers (an example combining both follows the list):

  • Exec - Executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container.
  • HTTP - Executes an HTTP request against a specific endpoint on the Container.
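
Here is a sketch of a pod that registers both kinds of handlers; the commands and the HTTP endpoint are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
    - name: app
      image: nginx
      lifecycle:
        postStart:
          exec:                        # Exec handler: runs inside the container
            command: ["/bin/sh", "-c", "echo started > /tmp/started"]
        preStop:
          httpGet:                     # HTTP handler: request against the container
            path: /shutdown
            port: 8080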

Hook handler execution

When a Container lifecycle management hook is called, the Kubernetes management system executes the handler in the Container registered for that hook.

Hook handler calls are synchronous within the context of the Pod containing the Container.

This means that for a PostStart hook, the Container ENTRYPOINT and hook fire asynchronously.

However, if the hook takes too long to run or hangs, the Container cannot reach a running state.

The behavior is similar for a PreStop hook. If the hook hangs during execution, the Pod phase stays Terminating and the container is killed after the pod's terminationGracePeriodSeconds ends. If a PostStart or PreStop hook fails, the Container is killed.

Users should make their hook handlers as lightweight as possible.

There are cases, however, when long running commands make sense, such as when saving state prior to stopping a Container.

Hook delivery guarantees

Hook delivery is intended to be at least once, which means that a hook may be called multiple times for any given event, such as for PostStart or PreStop. It is up to the hook implementation to handle this correctly.

Generally, only single deliveries are made. If, for example, an HTTP hook receiver is down and is unable to take traffic, there is no attempt to resend.

In some rare cases, however, double delivery may occur. For instance, if a kubelet restarts in the middle of sending a hook, the hook might be resent after the kubelet comes back up.

Debugging Hook handlers

The logs for a Hook handler are not exposed in Pod events.

If a handler fails for some reason, it broadcasts an event.

For PostStart, this is the FailedPostStartHook event, and for PreStop, this is the FailedPreStopHook event.

You can see these events by running kubectl describe pod <pod_name>.

Here is some example output of events from running this command:

Events:
  FirstSeen    LastSeen    Count    From                            SubobjectPath        Type        Reason        Message
  ---------    --------    -----    ----                            -------------        --------    ------        -------
  1m        1m        1    {default-scheduler }                                Normal        Scheduled    Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd
  1m        1m        1    {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}    spec.containers{main}    Normal        Pulling        pulling image "test:1.0"
  1m        1m        1    {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}    spec.containers{main}    Normal        Created        Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined]
  1m        1m        1    {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}    spec.containers{main}    Normal        Pulled        Successfully pulled image "test:1.0"
  1m        1m        1    {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}    spec.containers{main}    Normal        Started        Started container with docker id 5c6a256a2567
  38s        38s        1    {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}    spec.containers{main}    Normal        Killing        Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1
  37s        37s        1    {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}    spec.containers{main}    Normal        Killing        Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1
  38s        37s        2    {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}                Warning        FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1"
  1m         22s         2     {kubelet gke-test-cluster-default-pool-a07e5d30-siqd}    spec.containers{main}    Warning        FailedPostStartHook

  
