Kubernetes pods stuck in ContainerCreating: finding the cause and fixing it

I had just started with Kubernetes when I hit my first snag: the pods I created stayed stuck in the ContainerCreating state. Here is how I tracked down the problem and fixed it.

Run the container:

[root@node-149 ~]# kubectl run my-alpine --image=alpine --replicas=2 ping www.baidu.com

Check the pod status:

[root@node-149 ~]# kubectl get pods
NAME                         READY     STATUS              RESTARTS   AGE
my-alpine-2150523991-knzcx   0/1       ContainerCreating   0          6m
my-alpine-2150523991-lmvv5   0/1       ContainerCreating   0          6m

Both pods are stuck in ContainerCreating, so let's find out why. Describe them:

[root@node-149 ~]# kubectl describe pod my-alpine
Name:        my-alpine-2150523991-knzcx
Namespace:    default
Node:        node-150/192.168.10.150
Start Time:    Sat, 19 Nov 2016 18:20:52 +0800
Labels:        pod-template-hash=2150523991,run=my-alpine
Status:        Pending
IP:
Controllers:    ReplicaSet/my-alpine-2150523991
Containers:
  my-alpine:
    Container ID:
    Image:        alpine
    Image ID:
    Port:
    Args:
      ping
      www.baidu.com
    QoS Tier:
      cpu:        BestEffort
      memory:        BestEffort
    State:        Waiting
      Reason:        ContainerCreating
    Ready:        False
    Restart Count:    0
    Environment Variables:
Conditions:
  Type        Status
  Ready     False
No volumes.
Events:
  FirstSeen    LastSeen    Count    From            SubobjectPath    Type        Reason        Message
  ---------    --------    -----    ----            -------------    --------    ------        -------
  7m        7m        1    {default-scheduler }            Normal        Scheduled    Successfully assigned my-alpine-2150523991-knzcx to node-150
  6m        6m        1    {kubelet node-150}            Warning        FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause:2.0, this may be because there are no credentials on this request.  details: (unable to ping registry endpoint https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 64.233.189.82:443: getsockopt: connection refused\n v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp 64.233.189.82:443: getsockopt: connection refused)"

  4m    47s    3    {kubelet node-150}        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause:2.0, this may be because there are no credentials on this request.  details: (unable to ping registry endpoint https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 74.125.204.82:443: getsockopt: connection refused\n v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused)"

  4m    8s    6    {kubelet node-150}        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause:2.0\""


Name:        my-alpine-2150523991-lmvv5
Namespace:    default
Node:        node-150/192.168.10.150
Start Time:    Sat, 19 Nov 2016 18:20:52 +0800
Labels:        pod-template-hash=2150523991,run=my-alpine
Status:        Pending
IP:
Controllers:    ReplicaSet/my-alpine-2150523991
Containers:
  my-alpine:
    Container ID:
    Image:        alpine
    Image ID:
    Port:
    Args:
      ping
      www.baidu.com
    QoS Tier:
      cpu:        BestEffort
      memory:        BestEffort
    State:        Waiting
      Reason:        ContainerCreating
    Ready:        False
    Restart Count:    0
    Environment Variables:
Conditions:
  Type        Status
  Ready     False
No volumes.
Events:
  FirstSeen    LastSeen    Count    From            SubobjectPath    Type        Reason        Message
  ---------    --------    -----    ----            -------------    --------    ------        -------
  7m        7m        1    {default-scheduler }            Normal        Scheduled    Successfully assigned my-alpine-2150523991-lmvv5 to node-150
  5m        1m        3    {kubelet node-150}            Warning        FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause:2.0, this may be because there are no credentials on this request.  details: (unable to ping registry endpoint https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 74.125.204.82:443: getsockopt: connection refused\n v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused)"

  3m    1m    4    {kubelet node-150}        Warning    FailedSync    Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause:2.0\""

The key error is:

Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause:2.0, this may be because there are no credentials on this request.  details: (unable to ping registry endpoint https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 74.125.204.82:443: getsockopt: connection refused\n v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused)

It's easy to see what's wrong: the node cannot reach gcr.io, which hosts the pause image that the kubelet needs before it can start any pod's containers.
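You can confirm the diagnosis from the node itself with a quick probe (a sketch; it assumes `curl` is installed on the node):

```shell
#!/bin/sh
# Probe a registry's /v2/ endpoint. A connection refused or timeout
# means the node cannot reach it at the network level; any HTTP
# response (even 401 Unauthorized) means the path is fine and the
# problem lies elsewhere.
check_registry() {
    curl -sS --connect-timeout 5 "https://$1/v2/" -o /dev/null 2>/dev/null
}

if check_registry gcr.io; then
    echo "gcr.io answered -- network path is fine"
else
    echo "gcr.io unreachable from this node"
fi
```

Note that without `-f`, curl exits 0 on any HTTP response, so only genuine connection failures take the `else` branch.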
There are a few ways to fix it:

    1. Use a proxy/VPN so the node can reach gcr.io directly.
    2. Add a hosts-file entry for gcr.io (I used "61.91.161.217  gcr.io", but the IP may stop working).
    3. Pull the "pause:2.0" image from another registry, then tag it as "gcr.io/google_containers/pause:2.0".
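Option 3 can be sketched as below. The mirror registry used here (`registry.cn-hangzhou.aliyuncs.com/google_containers`) is an assumption — substitute any mirror of google_containers that your nodes can actually reach; what matters is that the final tag matches exactly the name the kubelet asks for.

```shell
#!/bin/sh
# Pull the pause image from a mirror and re-tag it under the gcr.io
# name that kubelet expects. MIRROR is an assumption -- replace it
# with a registry reachable from your nodes.
MIRROR="registry.cn-hangzhou.aliyuncs.com/google_containers"

# Rewrite a gcr.io/google_containers/... reference to the mirror.
mirror_image() {
    echo "$1" | sed "s#^gcr.io/google_containers#${MIRROR}#"
}

IMG="gcr.io/google_containers/pause:2.0"
SRC=$(mirror_image "$IMG")
echo "pulling $SRC"

# Only invoke docker where it is installed (i.e. on the k8s node).
if command -v docker >/dev/null 2>&1; then
    docker pull "$SRC"
    # Re-tag so the kubelet finds the image under the expected name.
    docker tag "$SRC" "$IMG"
fi
```

Run this on every node that schedules pods: the kubelet looks for the image in the node-local Docker cache, so the `gcr.io/google_containers/pause:2.0` tag must exist on each node.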
Posted: 2024-10-23 12:07:14
