[kubernetes/k8s concepts] A roundup of common k8s pitfalls


1. Pod stuck in Pending state

  1. A Pod that stays in `Pending` cannot be scheduled onto any node, usually because some system resource cannot satisfy the Pod's requirements (a describe/events sketch follows below).
  • Not enough resources: all CPU or memory in the cluster is already in use. Clean up Pods that are no longer needed, lower the resources they request, or add new nodes to the cluster.
  • A hostPort is specified: hostPort exposes the service on a fixed host port, which restricts the set of nodes the Pod can be scheduled onto.
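
A first check worth making (a sketch, not from the original post): the scheduler records the reason in the Pod's events, which usually tells you whether it is a resource or a hostPort problem.

    # scheduling events for a pending Pod (replace <pod>/<ns> with real names)
    kubectl describe pod <pod> -n <ns>

    # or only the warnings, e.g. "Insufficient cpu", "node(s) didn't have free ports"
    kubectl get events -n <ns> --field-selector involvedObject.name=<pod>,type=Warning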

2. Pod stuck in Waiting state

  1. A Pod in the `Waiting` state has been scheduled to a worker node but cannot run there. kubectl describe shows more detailed error information. The most common reason for a Pod stuck in `Waiting` is a failure to pull its image.

3. Pod in CrashLoopBackOff state

  1. CrashLoopBackOff means the container did start but then exited abnormally, so the Pod's RestartCount is usually greater than 0. Typical causes (see the log sketch below):
  • the container process exited on its own
  • it was killed after a failed health check
  • OOMKilled
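
The logs of the previous (crashed) container instance are usually more telling than the current one; a quick sketch with placeholder names:

    # logs of the last terminated instance
    kubectl logs <pod> -c <container> --previous

    # exit code and reason (OOMKilled, Error, ...) of the last termination
    kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'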

5. Pod keeps crashing or misbehaving

  1. Use kubectl describe and kubectl logs to investigate, although these alone do not always pinpoint the cause.
  2. Typical causes: failed health checks, OOM kills, or the container process simply finishing its lifecycle.

6. Cluster avalanche: reserve resources for the kubelet

  1. [Lessons from a cluster avalanche: reserving resources for the kubelet properly - WaltonWang's Blog - OSCHINA][Kubelet_ - WaltonWang_s Blog - OSCHINA -]
  • Node Allocatable Resource = Node Capacity - Kube-reserved - system-reserved - eviction-threshold

    --eviction-hard=memory.available<1024Mi,nodefs.available<10%,nodefs.inodesFree<5% \
    --system-reserved=cpu=0.5,memory=1G \
    --kube-reserved=cpu=0.5,memory=1G \
    --kube-reserved-cgroup=/system.slice/kubelet.service \
    --system-reserved-cgroup=/system.slice \
    --enforce-node-allocatable=pods,kube-reserved,system-reserved \
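
After restarting the kubelet with these flags, the reservation can be verified on the node object; a small check sketch (the node name is a placeholder):

    # Capacity and Allocatable should now differ by kube-reserved + system-reserved + the eviction threshold
    kubectl describe node <node-name> | grep -A 6 -E 'Capacity|Allocatable'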

7. NFS mount error: wrong fs type, bad option, bad superblock

As the error hints, check the /sbin/mount.* helpers: /sbin/mount.nfs was indeed missing, and installing nfs-utils fixes it (sketch below).
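
A minimal sketch of the check and fix on a CentOS/RHEL node (on Debian/Ubuntu the package is nfs-common):

    # is the NFS mount helper present?
    ls -l /sbin/mount.nfs

    # install it if missing
    yum install -y nfs-utils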

8. kube-apiserver accept4: too many open files

  1. http: Accept error: accept tcp 0.0.0.0:6443: accept4: too many open files; retrying in 1s
  2. Inspecting the apiserver process with lsof -p $pid showed 65540 open file descriptors, while cat /proc/$pid/limits showed a limit of 65536. Most of the descriptors were connections to port 10250 on one particular kubelet, whose log contained the following errors (see the fd-check sketch at the end of this section):
  1. operationExecutor.UnmountVolume started for volume "makepool1-web3" (UniqueName: "kubernetes.io/nfs/7be05590-3a46-11e9-906c-20040fedf0bc-makepool1-web3") pod "7be05590-3a46-11e9-906c-20040fedf0bc" (UID: "7be05590-3a46-11e9-906c-20040fedf0bc")
  2. nestedpendingoperations.go:263] Operation for "\"kubernetes.io/nfs/7be05590-3a46-11e9-906c-20040fedf0bc-makepool1-web3\" (\"7be05590-3a46-11e9-906c-20040fedf0bc\")" failed. No retries permitted until 2019-03-07 12:31:28.78976568 +0800 CST m=+7328011.532812666 (durationBeforeRetry 2m2s). Error: "UnmountVolume.TearDown failed for volume \"makepool1-web3\" (UniqueName: \"kubernetes.io/nfs/7be05590-3a46-11e9-906c-20040fedf0bc-makepool1-web3\") pod \"7be05590-3a46-11e9-906c-20040fedf0bc\" (UID: \"7be05590-3a46-11e9-906c-20040fedf0bc\") : Unmount failed: exit status 16\nUnmounting arguments: /var/lib/kubelet/pods/7be05590-3a46-11e9-906c-20040fedf0bc/volumes/kubernetes.io~nfs/makepool1-web3\nOutput: umount.nfs: /var/lib/kubelet/pods/7be05590-3a46-11e9-906c-20040fedf0bc/volumes/kubernetes.io~nfs/makepool1-web3: device is busy\n\n"
  1. Current workaround:
  2. kubectl delete --grace-period=0 --force
  3. [https://github.com/kubernetes/kubernetes/issues/51835][https_github.com_kubernetes_kubernetes_issues_51835]
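
A small sketch for confirming the descriptor exhaustion from /proc (assumes a single kube-apiserver process on the host; pidof and lsof are not part of the original post):

    pid=$(pidof kube-apiserver)

    # open descriptors vs. the process limit
    ls /proc/$pid/fd | wc -l
    grep 'open files' /proc/$pid/limits

    # which remote endpoints hold most of them (e.g. kubelets on :10250)
    lsof -Pan -p $pid -i | awk 'NR>1 {print $9}' | sort | uniq -c | sort -rn | head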

9. Kubernetes Pod cannot be deleted: Docker "Device is busy"

Reference: https://fav.snadn.cn/article/snapshot?id=131#问题发现

Check certificate expiration time

  1. openssl x509 -in xxx.crt -text -noout

Certificate:
Data:
Version: 3 (0x2)
Serial Number: 2 (0x2)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=master-node-ca@1574650458
Validity
Not Before: Nov 25 01:54:18 2019 GMT
Not After : Nov 24 01:54:18 2020 GMT

10. k8s certificates expire after one year

  1. openssl x509 -in /etc/kubernetes/ssl/kubernetes.csr -noout -text |grep ' Not '
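
To check every kubeadm-managed certificate at once, a small loop is enough (a sketch assuming the default /etc/kubernetes/pki layout; adjust the path for clusters that keep certs under /etc/kubernetes/ssl):

    for c in /etc/kubernetes/pki/*.crt; do
        printf '%-60s ' "$c"
        openssl x509 -in "$c" -noout -enddate
    done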

2. Automatic kubelet certificate rotation

Note: kubelet certificates come in two flavors, server and client. k8s 1.9 enables automatic rotation of the client certificate by default, but rotation of the server certificate must be enabled by the user, as follows:

2.1 Add a kubelet flag

--feature-gates=RotateKubeletServerCertificate=true

2.2 Add controller-manager flags

--experimental-cluster-signing-duration=87600h0m0s
--feature-gates=RotateKubeletServerCertificate=true

2.3 Create RBAC objects

Create RBAC objects that allow nodes to rotate their kubelet server certificates:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
    rules:
    - apiGroups:
      - certificates.k8s.io
      resources:
      - certificatesigningrequests/selfnodeserver
      verbs:
      - create
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubeadm:node-autoapprove-certificate-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:nodes
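
With the flags and RBAC in place, newly created kubelet serving CSRs should be approved and issued without manual action; a quick way to check (sketch):

    # CSRs should show "Approved,Issued" shortly after a kubelet restarts
    kubectl get csr --sort-by=.metadata.creationTimestamp
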
  1. Second approach: patch the kubeadm source and rebuild.
  2. For kubernetes < 1.14, edit staging/src/k8s.io/client-go/util/cert/cert.go
  3. For kubernetes >= 1.14, edit cmd/kubeadm/app/util/pkiutil/pki_helpers.go

    // NewSignedCert creates a signed certificate using the given CA certificate and key
    func NewSignedCert(cfg *certutil.Config, key crypto.Signer, caCert *x509.Certificate, caKey crypto.Signer) (*x509.Certificate, error) {

        // extend the validity to roughly ten years (355*10 days) instead of the default one year
        const durationTen = time.Hour * 24 * 355 * 10

        // ... serial number generation unchanged ...

        certTmpl := x509.Certificate{
            Subject: pkix.Name{
                CommonName:   cfg.CommonName,
                Organization: cfg.Organization,
            },
            DNSNames:     cfg.AltNames.DNSNames,
            IPAddresses:  cfg.AltNames.IPs,
            SerialNumber: serial,
            NotBefore:    caCert.NotBefore,
            // NotAfter: time.Now().Add(kubeadmconstants.CertificateValidity).UTC(),
            NotAfter:     time.Now().Add(durationTen).UTC(),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  cfg.Usages,
        }
        // ... signing with the CA key and the return statement are unchanged ...
    }

  1. Build: make WHAT=cmd/kubeadm GOFLAGS=-v
  2. Renew the certificates: kubeadm alpha certs renew all --config=/usr/local/install-k8s/core/kubeadm-config.yaml

11. Namespace cannot be deleted and stays in Terminating

Solution:

kubectl get ns ns-xxx-zhangzhonglin-444c6833 -o json > ns-delete.json

Remove the spec.finalizers field from the file so that it reads:

    "spec": {
    },

Note: before running the command below, open a new terminal session and run kubectl proxy --port=8081

curl -k -H "Content-Type: application/json" -X PUT --data-binary @ns-delete.json http://127.0.0.1:8081/api/v1/namespaces/ns-xxx-zhangzhonglin-444c6833/finalize
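
If jq is available, the edit and the PUT against the finalize subresource can be combined into one pipeline (a sketch; the namespace name is a placeholder and kubectl proxy must already be listening on 8081):

    kubectl get ns <namespace> -o json \
      | jq '.spec.finalizers = []' \
      | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- \
          http://127.0.0.1:8081/api/v1/namespaces/<namespace>/finalize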

12. Kubernetes: No Route to Host

  1. Error getting server version: Get https://10.200.0.1:443/version?timeout=32s: dial tcp 10.200.0.1:443: connect: no route to host
  2. Workaround: iptables -F (this flushes all existing iptables rules; kube-proxy and the CNI plugin will recreate theirs afterwards)

13. kubeadm kube-controller-manager does not have ceph rbd binary anymore

  1. Error: "failed to create rbd image: executable file not found in $PATH, command output: "

https://github.com/kubernetes/kubernetes/issues/56990

yum install -y ceph-common

14. monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2019-05-06 02:00:09.601676)

  1. A Ceph OSD issue; the root cause is usually unsynchronized clocks between nodes (see the sketch below).
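
A minimal remediation sketch, assuming the nodes use chrony (commands differ for ntpd, and the OSD unit name depends on the deployment):

    # check and, if necessary, step the clock on the drifting node
    chronyc sources -v
    chronyc makestep

    # restart the affected OSD so it picks up fresh rotating keys
    systemctl restart ceph-osd@<id>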

15. Helm: Error: no available release name found

  1. Tiller is missing the required RBAC permissions; grant them as follows:

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

16. attachdetach-controller Multi-Attach error for volume "pvc-d0fde86c-8661-11e9-b873-0800271c9f15": Volume is already used by pod

  1. [The controller-managed attachment and detachment is not able to detach a rbd volume from a lost node \#62061][The controller-managed attachment and detachment is not able to detach a rbd volume from a lost node_62061]
  2. [https://github.com/kubernetes/kubernetes/issues/70349][https_github.com_kubernetes_kubernetes_issues_70349]
  3. [https://github.com/kubernetes/kubernetes/pull/45346][https_github.com_kubernetes_kubernetes_pull_45346]
  4. [https://github.com/kubernetes/kubernetes/issues/53059][https_github.com_kubernetes_kubernetes_issues_53059]
  5. [https://github.com/kubernetes/kubernetes/pull/40148][https_github.com_kubernetes_kubernetes_pull_40148]

Vsphere Cloud Provider: failed to detach volume from shutdown node #75342

Don’t try to attach volumes which are already attached to other nodes #45346

Pods with volumes stuck in ContainerCreating after cluster node is deleted from OpenStack #50200

Don’t try to attach volumes which are already attached to other nodes#40148

Pods with volumes stuck in ContainerCreating after cluster node is powered off in vCenter #50944

Pod mount Ceph RDB volumn failed due to timeout. "timeout expired waiting for volumes to attach or mount for pod" #75492 (no replies in the thread)

  1. In our case the kubelet had died while the csi-rbdplugin pod (a statefulset) was still alive.

18. PV cannot be deleted

The PV stays in "Terminating" and cannot be deleted.

Clear the finalizers recorded on the object: `kubectl patch pv xxx -p '{"metadata":{"finalizers":null}}'`

19. Volumes fail to clean up when kubelet restart due to race between actual and desired state #75345

Fix race condition between actual and desired state in kublet volume manager #75458

Pod is stuck in Terminating status forever after Kubelet restart #72604

20. When using a ValidatingWebhookConfiguration on the deployment scale subresource: Internal error occurred: converting (extensions.Deployment).Replicas to (v1beta1.Scale).Replicas: Selector not present in src

This has been fixed in v1.15:

https://github.com/kubernetes/kubernetes/pull/76849/commits


21. Error from server: Get https://master-node:10250/containerLogs/default/csi-hostpathplugin-0/node-driver-registrar: dial tcp: lookup master-node on 114.114.114.114:53: no such host

  1. Solution: add "192.168.X.X master-node" to /etc/hosts on the machine running kubectl.
22. calico/node is not ready: BIRD is not ready: BGP not established with

  1. The root cause is that calico did not auto-detect the right network interface. Tell it explicitly which interface to use by modifying calico.yaml to include (a one-line alternative follows below):

    - name: IP_AUTODETECTION_METHOD
      value: "interface=ens.*"
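
The same setting can also be applied to a running cluster without re-applying the manifest; a sketch using kubectl set env against the standard calico-node DaemonSet in kube-system:

    kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD='interface=ens.*'
    # calico-node pods restart and should pick an ens* interface for BGP peering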

23. client-go@v11.0.0+incompatible/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher

A workaround is to manually replace k8s.io/apimachinery@v0.17.0 with k8s.io/apimachinery@release-1.14. In the terminal, run: go mod download -json k8s.io/apimachinery@release-1.14 (see the sketch below).
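
One way to pin that branch in a consuming project, sketched with plain go tooling (go resolves the branch name to a pseudo-version in go.mod):

    go get k8s.io/apimachinery@release-1.14
    go mod tidy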

24. rbd image cannot be deleted: "rbd: error: image still has watchers"

  1. Reference: [Fixing "rbd: error: image still has watchers" when an image cannot be deleted][image_rbd_ error_ image still has watchers]

How to approach it:

In day-to-day Ceph operations, an administrator may occasionally find that an image cannot be deleted. There are two common causes:
1) The image still has snapshots: remove the snapshots first, then delete the image.
2) The image is still being accessed by a client, i.e. the image has a watcher. If that client went away abnormally, the image cannot be deleted.

The first case is easy to handle; the rest of this section covers the second. First, some background on watchers:
Ceph has a watch/notify mechanism (at object granularity) used to send notifications between clients so that their state stays consistent; every client watching an object is, from the cluster's point of view, a watcher.

Solution:

1. List the watchers on the image

Method 1:

    [root@node3 ~]# rbd status foo
    watcher=192.168.197.157:0/1135656048 client.4172 cookie=1

This method is quick and simple, and is the one to prefer.

Method 2:

1) First find the image's header object

    [root@node3 ~]# rbd info foo
    rbd image 'foo':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.1041643c9869
        format: 2
        features: layering
        flags:
        create_timestamp: Tue Oct 17 10:20:50 2017

Since the image's block_name_prefix is rbd_data.1041643c9869, its header object is rbd_header.1041643c9869. With the header object known, list its watchers.

2) List the watchers on the image's header object

    [root@node3 ~]# rados -p rbd listwatchers rbd_header.1041643c9869
    watcher=192.168.197.157:0/1135656048 client.4172 cookie=1

2. Remove the watcher from the image

2.1 Add the watcher to the blacklist:

    [root@node3 ~]# ceph osd blacklist add 192.168.197.157:0/1135656048
    blacklisting 192.168.197.157:0/1135656048 until 2017-10-18 12:04:19.103313 (3600 sec)

2.2 Check the watchers on the image again:

    [root@node3 ~]# rados -p rbd listwatchers rbd_header.1041643c9869
    [root@node3 ~]#

The stale client's watcher is gone, so the image can now be deleted.

2.3 Delete the image:

    [root@node3 ~]# rbd rm foo
    Removing image: 100% complete...done.

3. Follow-up

The steps above already solve the problem, but it is good practice to remove the blacklisted client afterwards. The relevant blacklist operations:

3.1 List the blacklist entries:

    [root@node3 ~]# ceph osd blacklist ls
    listed 1 entries
    192.168.197.157:0/1135656048 2017-10-18 12:04:19.103313

3.2 Remove a client from the blacklist:

    [root@node3 ~]# ceph osd blacklist rm 192.168.197.157:0/1135656048
    un-blacklisting 192.168.197.157:0/1135656048

3.3 Clear the blacklist:

    [root@node3 ~]# ceph osd blacklist clear
    removed all blacklist entries

References

Deleting a Ceph image fails with "rbd: error: image still has watchers"

25. x509: subject with cn=metrics-client is not in the allowed list: [aggregator]

  1. The request-header identity is not in the allowed list; add the following to the kube-apiserver configuration:
  2. --requestheader-allowed-names=aggregator,metrics-client

26. metrics-server installation problems

  1. unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:master-node: unable to fetch metrics from Kubelet master-node (master-node): Get https://master-node:10250/stats/summary/: dial tcp: lookup master-node on 10.200.254.254:53: no such host
  2. Fix: adjust the metrics-server Deployment spec as follows:

      volumes:
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: zhangzhonglin/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
        # the two added flags that fix the problem:
        - --kubelet-preferred-address-types=InternalIP,Hostname
        - --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443

  1. The metrics-server container cannot resolve the nodes' hostnames through CoreDNS (10.200.254.254), and by default it connects to nodes by hostname, so --kubelet-preferred-address-types=InternalIP,Hostname makes it use the node IP instead. Port 10250 is HTTPS and requires a certificate, so --kubelet-insecure-tls is added to skip verification of the kubelet's serving certificate.
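
A quick check once the Deployment has rolled out (sketch):

    # node metrics should appear within a minute or two of the pod becoming Ready
    kubectl top nodes
    kubectl -n kube-system logs deploy/metrics-server --tail=20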

27. Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy)

# kubectl logs aws-node-rvz95 -nkube-system
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log aws-node-rvz95)

Solution:

# kubectl create clusterrolebinding kubernetes --clusterrole=cluster-admin --user=kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kubernetes created

28. etcd报错:failed to send out heartbeat on time etcdserver: server is likely overloaded

  1. Heartbeat failures are mainly related to disk speed, CPU, and the network.

Slow disk: the leader attaches some metadata to its heartbeat messages and must persist that data to disk before sending them. The disk writes may be competing with other applications, or the disk may be virtual or SATA-class and simply too slow; in that case only faster disk hardware really helps.

Insufficient CPU capacity.

Slow network: tune heartbeat-interval to match the RTT between the machine rooms, and set election-timeout to at least 5x heartbeat-interval (see the flag sketch below).
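
A hedged example of the corresponding etcd flags (values are illustrative only; both are in milliseconds and must be identical across all members):

    # e.g. with ~100 ms RTT between rooms: raise the heartbeat, keep election-timeout >= 5x heartbeat
    etcd --heartbeat-interval=500 --election-timeout=2500
    # or via environment variables: ETCD_HEARTBEAT_INTERVAL=500 ETCD_ELECTION_TIMEOUT=2500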

29. etcdserver: applying raft message exceeded backend quota

  1. etcdserver: mvcc: database space exceeded
  2. took (489ns) to execute, err is etcdserver: no space
  3. The default etcd space quota is 2 GB; once it is exceeded, etcd refuses further writes. Increase the quota with the --quota-backend-bytes flag (maximum supported: 8 GB), e.g.:

    --quota-backend-bytes=8589934592

    # get current revision
    $ rev=$(ETCDCTL_API=3 etcdctl --endpoints=:2379 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9]*')
    # compact away all old revisions
    $ ETCDCTL_API=3 etcdctl compact $rev
    compacted revision 1516
    # defragment away excessive space
    $ ETCDCTL_API=3 etcdctl defrag
    Finished defragmenting etcd member[127.0.0.1:2379]
    # disarm alarm
    $ ETCDCTL_API=3 etcdctl alarm disarm
    memberID:13803658152347727308 alarm:NOSPACE
    # test puts are allowed again
    $ ETCDCTL_API=3 etcdctl put newkey 123
    OK

30. etcdserver: too many requests

  1. If the index committed by the Raft module (committed index) is more than 5000 ahead of the index applied to the state machine (applied index), etcd returns "etcdserver: too many requests" to the client.
  2. Every request submitted to the Raft module goes through this simple rate-limit check first.

31. gcr.io is unreachable

  • Mirror 1: registry.aliyuncs.com/google_containers
  • Mirror 2: registry.cn-hangzhou.aliyuncs.com/google_containers
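
For kubeadm clusters the mirror can be used directly when pre-pulling control-plane images; a sketch using one of the mirrors above:

    kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers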

32. (as uid:107, gid:107): Permission denied

libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: Cannot access storage file '/root/-image-build/-image.qcow2' (as uid:107, gid:107): Permission denied [code=38 int1=13]

Fix: set user = "root" in /etc/libvirt/qemu.conf and restart libvirtd (see the sketch below).
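
A sketch of both workarounds mentioned above (running libvirt's qemu as root has security implications, so prefer the first where possible):

    # Option 1: let libguestfs run qemu directly, bypassing libvirt
    export LIBGUESTFS_BACKEND=direct

    # Option 2: in /etc/libvirt/qemu.conf set
    #     user = "root"
    #     group = "root"
    # then restart libvirtd
    systemctl restart libvirtd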

33. metrics-server reports "error: metrics not available yet"
