Continuing from the previous part.
11、Deploying kubelet on the master nodes
This part describes the steps to deploy kubelet. Note: kube-apiserver high availability is provided through a static Pod started by kubelet (a local load balancer that the other components use to reach the kube-apiserver API), so kubelet must be deployed on the master nodes first. Unless otherwise specified, all operations in this part are performed on the qist node.
11.1 Generating the kubelet bootstrap token
Generate the bootstrap token:
export TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9 | cut -c 3-8)
export TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
export BOOTSTRAP_TOKEN=${TOKEN_ID}.${TOKEN_SECRET}
cat > /opt/k8s/work/yaml/bootstrap-secret.yaml << EOF
---
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubelet'."
  # Token ID and secret. Required.
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
EOF
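As a quick sanity check (my addition, assuming the exports above ran in the current shell), the token must have the form <6 chars>.<16 chars> drawn from [a-z0-9], otherwise kube-apiserver will reject the Secret:
# Print the generated values and verify the token format
echo "TOKEN_ID=${TOKEN_ID}"              # expect 6 characters
echo "BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN}"
echo "${BOOTSTRAP_TOKEN}" | grep -E '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format OK"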
Create the bootstrap RBAC authorization file:
cat > /opt/k8s/work/yaml/kubelet-bootstrap-rbac.yaml << EOF
---
# Allow users in the system:bootstrappers group to create CSRs
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
---
# Automatically approve the first TLS bootstrapping CSR from the system:bootstrappers group
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-client-auto-approve-csr
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers
---
# Automatically approve CSRs from the system:nodes group that renew the kubelet client certificate used to talk to the apiserver
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-client-auto-renew-crt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
# Automatically approve CSRs from the system:nodes group that renew the kubelet serving certificate for the 10250 API port
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-server-auto-renew-crt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
EOF
Generate the RBAC bindings for the cluster components:
cat > /opt/k8s/work/yaml/kube-api-rbac.yaml << EOF
---
# Binding for kube-controller-manager
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: controller-node-clusterrolebing
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-controller-manager
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
---
# Binding for kube-scheduler
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scheduler-node-clusterrolebing
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-scheduler
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-scheduler
---
# Bind kube-controller-manager to system:auth-delegator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: controller-manager:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-controller-manager
---
# Grant the kube-system default ServiceAccount cluster-admin
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-system-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:serviceaccount:kube-system:default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-node-clusterbinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
# Grant the kubernetes certificate access to the kubelet API
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-apiserver:kubelet-apis
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
EOF
11.2 Creating the kubelet bootstrap kubeconfig
cd /opt/k8s/
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
# Set client credentials
kubectl config set-credentials system:bootstrap:${TOKEN_ID} \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:bootstrap:${TOKEN_ID} \
  --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
Because kube-apiserver runs as a static Pod behind the local load balancer started by kubelet, the server address uses 127.0.0.1 to reach the API.
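To confirm the file was assembled correctly (an optional check, not in the original text):
kubectl config view --kubeconfig=/opt/k8s/kubeconfig/bootstrap.kubeconfig
# Expect the cluster server to be https://127.0.0.1:6443 and the user to be system:bootstrap:<TOKEN_ID> (token shown as REDACTED)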
Distribute it to all master nodes:
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.0.12:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.0.13:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.0.14:/apps/k8s/conf/
11.3 Applying the bootstrap YAML to the cluster
# Apply the bootstrap secret to the cluster
kubectl apply -f /opt/k8s/work/yaml/bootstrap-secret.yaml
# Apply the bootstrap RBAC
kubectl apply -f /opt/k8s/work/yaml/kubelet-bootstrap-rbac.yaml
# Apply the component RBAC
kubectl apply -f /opt/k8s/work/yaml/kube-api-rbac.yaml
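An optional verification (my addition) that the objects now exist:
kubectl -n kube-system get secret "bootstrap-token-${TOKEN_ID}"
kubectl get clusterrolebinding kubelet-bootstrap node-client-auto-approve-csr node-client-auto-renew-crt node-server-auto-renew-crt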
11.4 Generating the kubelet configuration files
192.168.0.12 (execute on the k8s-master-1 node):
cat > /apps/k8s/conf/kubelet <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--kubeconfig=/apps/k8s/conf/kubelet.kubeconfig \
--node-ip=192.168.0.12 \
--hostname-override=k8s-master-1 \
--cert-dir=/apps/k8s/ssl \
--runtime-cgroups=/systemd/system.slice \
--root-dir=/var/lib/kubelet \
--log-dir=/apps/k8s/log \
--alsologtostderr=true \
--config=/apps/k8s/conf/kubelet.yaml \
--logtostderr=true \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/crio/crio.sock \
--containerd=unix:///var/run/crio/crio.sock \
--pod-infra-container-image=docker.io/juestnow/pause:3.5 \
--v=2 \
--image-pull-progress-deadline=30s"
EOF
Notes:
1. If --hostname-override is set, kube-proxy must use the same value, otherwise the Node will not be found;
2. --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in this file to send the TLS Bootstrapping request to kube-apiserver;
3. After K8S approves the kubelet CSR, the certificate and private key are written to the --cert-dir directory and then referenced from the --kubeconfig file;
4. --pod-infra-container-image: do not use Red Hat's pod-infrastructure:latest image, it cannot reap zombie processes in containers.
Generate the kubelet config file.
Starting with v1.10, some kubelet parameters must be set in a configuration file; kubelet --help reports:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
Create the kubelet configuration file (see the comments in the source code for the available options):
cat > /apps/k8s/conf/kubelet.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/apps/work/kubernetes/manifests"
syncFrequency: 30s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 192.168.0.12
port: 10250
readOnlyPort: 0
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
rotateCertificates: true
authentication:
  x509:
    clientCAFile: "/apps/k8s/ssl/k8s/k8s-ca.pem"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 15
eventBurst: 30
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 192.168.0.12
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.66.0.2
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
volumeStatsAggPeriod: 1m0s
kubeletCgroups: "/systemd/system.slice"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
topologyManagerPolicy: none
runtimeRequestTimeout: 2m0s
hairpinMode: hairpin-veth
maxPods: 55
podsPerCore: 0
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 15
kubeAPIBurst: 30
serializeImagePulls: false
evictionHard:
  imagefs.available: 10%
  memory.available: 500Mi
  nodefs.available: 10%
evictionSoft:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
evictionSoftGracePeriod:
  imagefs.available: 2m
  memory.available: 2m
  nodefs.available: 2m
evictionPressureTransitionPeriod: 20s
evictionMinimumReclaim:
  imagefs.available: 500Mi
  memory.available: 0Mi
  nodefs.available: 500Mi
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: false
containerLogMaxSize: 100Mi
containerLogMaxFiles: 10
configMapAndSecretChangeDetectionStrategy: Watch
systemReserved:
  cpu: 1000m
  ephemeral-storage: 1Gi
  memory: 1024Mi
kubeReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 512Mi
systemReservedCgroup: "/systemd/system.slice"
kubeReservedCgroup: "/systemd/system.slice"
enforceNodeAllocatable:
- pods
allowedUnsafeSysctls:
- kernel.msg*
- kernel.shm*
- kernel.sem
- fs.mqueue.*
- net.*
EOF
Explanation:
1. address: the address the kubelet secure port (https, 10250) listens on; it must not be 127.0.0.1, otherwise kube-apiserver, heapster and similar clients cannot call the kubelet API;
2. readOnlyPort=0: disables the read-only port (default 10255);
3. authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
4. authentication.x509.clientCAFile: the CA certificate that signed client certificates, enabling HTTPS certificate authentication;
5. authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;
6. requests (from kube-apiserver or other clients) that pass neither x509 nor webhook authentication are rejected with Unauthorized;
7. authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a given user or group is allowed to operate on a resource (RBAC);
8. kubelet must run as root.
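A quick way to see these settings in effect (my addition, assuming kubelet is already running on 192.168.0.12):
# Anonymous access to the secure port is rejected because anonymous auth is disabled
curl -sk https://192.168.0.12:10250/metrics | head -n 1     # expect an Unauthorized message
# The read-only port 10255 should not be listening at all (readOnlyPort: 0)
curl -s --connect-timeout 2 http://192.168.0.12:10255/metrics || echo "read-only port closed"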
192.168.0.13 (execute on the k8s-master-2 node):
cat > /apps/k8s/conf/kubelet <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--kubeconfig=/apps/k8s/conf/kubelet.kubeconfig \
--node-ip=192.168.0.13 \
--hostname-override=k8s-master-2 \
--cert-dir=/apps/k8s/ssl \
--runtime-cgroups=/systemd/system.slice \
--root-dir=/var/lib/kubelet \
--log-dir=/apps/k8s/log \
--alsologtostderr=true \
--config=/apps/k8s/conf/kubelet.yaml \
--logtostderr=true \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/crio/crio.sock \
--containerd=unix:///var/run/crio/crio.sock \
--pod-infra-container-image=docker.io/juestnow/pause:3.5 \
--v=2 \
--image-pull-progress-deadline=30s"
EOF
Generate the kubelet config file:
cat > /apps/k8s/conf/kubelet.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/apps/work/kubernetes/manifests"
syncFrequency: 30s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 192.168.0.13
port: 10250
readOnlyPort: 0
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
rotateCertificates: true
authentication:
  x509:
    clientCAFile: "/apps/k8s/ssl/k8s/k8s-ca.pem"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 15
eventBurst: 30
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 192.168.0.13
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.66.0.2
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
volumeStatsAggPeriod: 1m0s
kubeletCgroups: "/systemd/system.slice"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
topologyManagerPolicy: none
runtimeRequestTimeout: 2m0s
hairpinMode: hairpin-veth
maxPods: 55
podsPerCore: 0
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 15
kubeAPIBurst: 30
serializeImagePulls: false
evictionHard:
  imagefs.available: 10%
  memory.available: 500Mi
  nodefs.available: 10%
evictionSoft:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
evictionSoftGracePeriod:
  imagefs.available: 2m
  memory.available: 2m
  nodefs.available: 2m
evictionPressureTransitionPeriod: 20s
evictionMinimumReclaim:
  imagefs.available: 500Mi
  memory.available: 0Mi
  nodefs.available: 500Mi
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: false
containerLogMaxSize: 100Mi
containerLogMaxFiles: 10
configMapAndSecretChangeDetectionStrategy: Watch
systemReserved:
  cpu: 1000m
  ephemeral-storage: 1Gi
  memory: 1024Mi
kubeReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 512Mi
systemReservedCgroup: "/systemd/system.slice"
kubeReservedCgroup: "/systemd/system.slice"
enforceNodeAllocatable:
- pods
allowedUnsafeSysctls:
- kernel.msg*
- kernel.shm*
- kernel.sem
- fs.mqueue.*
- net.*
EOF
192.168.0.14 (execute on the k8s-master-3 node):
cat > /apps/k8s/conf/kubelet <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--kubeconfig=/apps/k8s/conf/kubelet.kubeconfig \
--node-ip=192.168.0.14 \
--hostname-override=k8s-master-3 \
--cert-dir=/apps/k8s/ssl \
--runtime-cgroups=/systemd/system.slice \
--root-dir=/var/lib/kubelet \
--log-dir=/apps/k8s/log \
--alsologtostderr=true \
--config=/apps/k8s/conf/kubelet.yaml \
--logtostderr=true \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/crio/crio.sock \
--containerd=unix:///var/run/crio/crio.sock \
--pod-infra-container-image=docker.io/juestnow/pause:3.5 \
--v=2 \
--image-pull-progress-deadline=30s"
EOF
Generate the kubelet config file:
cat > /apps/k8s/conf/kubelet.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/apps/work/kubernetes/manifests"
syncFrequency: 30s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 192.168.0.14
port: 10250
readOnlyPort: 0
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
rotateCertificates: true
authentication:
  x509:
    clientCAFile: "/apps/k8s/ssl/k8s/k8s-ca.pem"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 15
eventBurst: 30
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 192.168.0.14
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.66.0.2
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
volumeStatsAggPeriod: 1m0s
kubeletCgroups: "/systemd/system.slice"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
topologyManagerPolicy: none
runtimeRequestTimeout: 2m0s
hairpinMode: hairpin-veth
maxPods: 55
podsPerCore: 0
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 15
kubeAPIBurst: 30
serializeImagePulls: false
evictionHard:
  imagefs.available: 10%
  memory.available: 500Mi
  nodefs.available: 10%
evictionSoft:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
evictionSoftGracePeriod:
  imagefs.available: 2m
  memory.available: 2m
  nodefs.available: 2m
evictionPressureTransitionPeriod: 20s
evictionMinimumReclaim:
  imagefs.available: 500Mi
  memory.available: 0Mi
  nodefs.available: 500Mi
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: false
containerLogMaxSize: 100Mi
containerLogMaxFiles: 10
configMapAndSecretChangeDetectionStrategy: Watch
systemReserved:
  cpu: 1000m
  ephemeral-storage: 1Gi
  memory: 1024Mi
kubeReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 512Mi
systemReservedCgroup: "/systemd/system.slice"
kubeReservedCgroup: "/systemd/system.slice"
enforceNodeAllocatable:
- pods
allowedUnsafeSysctls:
- kernel.msg*
- kernel.shm*
- kernel.sem
- fs.mqueue.*
- net.*
EOF
11.5 kubelet systemd unit configuration
cd /opt/k8s/work/
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Wants=network-online.target
After=network-online.target

[Service]
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/hugetlb/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/blkio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpuset/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/devices/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/net_cls,net_prio/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/perf_event/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/cpu,cpuacct/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/freezer/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/memory/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/pids/systemd/system.slice
ExecStartPre=-/bin/mkdir -p /sys/fs/cgroup/systemd/systemd/system.slice
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kubelet
ExecStart=/apps/k8s/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
Distribute it to all master nodes:
cd /opt/k8s/work
scp kubelet.service root@192.168.0.12:/usr/lib/systemd/system/
scp kubelet.service root@192.168.0.13:/usr/lib/systemd/system/
scp kubelet.service root@192.168.0.14:/usr/lib/systemd/system/
11.6 Starting the kubelet service
Execute on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:
# Reload systemd units
systemctl daemon-reload
# Enable kubelet at boot
systemctl enable kubelet
# Start kubelet
systemctl start kubelet
# Restart kubelet
systemctl restart kubelet
11.7 Checking the startup result
Execute on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:
[root@k8s-master-1 work]# systemctl status kubelet|grep Active
   Active: active (running) since Thu 2022-02-17 17:23:02 CST; 5s ago
[root@k8s-master-2 ~]# systemctl status kubelet|grep Active
   Active: active (running) since Thu 2022-02-17 17:25:40 CST; 6s ago
[root@k8s-master-3 ~]# systemctl status kubelet|grep Active
   Active: active (running) since Thu 2022-02-17 17:23:02 CST; 5s ago
Make sure the status is active (running); otherwise check the logs to find the cause:
journalctl -u kubelet
11.8 Checking that the static Pods have started
[root@k8s-master-3 ~]# crictl ps
f0e4c083dcce1  ad393d6a4d1b1ccbce65c2a4db6064635a8aac883d9dd66a38e14ce925e93b7f  2 hours ago  Running  kube-rbac-proxy  14  f7ba3e70afb06
[root@k8s-master-3 ~]# netstat -tnlp| grep 6443
tcp   0  0 0.0.0.0:6443   0.0.0.0:*   LISTEN   1790/nginx: master
tcp6  0  0 :::6443        :::*        LISTEN   1790/nginx: master
[root@k8s-master-3 ~]# curl -k https://127.0.0.1:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
# The load balancer works; continue deploying the other components
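Because the bootstrap RBAC above auto-approves the CSRs, the masters should register themselves shortly after kubelet starts; a quick check from the qist node (my addition):
# CSRs created during TLS bootstrapping should show up as Approved,Issued
kubectl get csr
# The master nodes should appear (they stay NotReady until the CNI plugin is deployed in section 16)
kubectl get nodes -o wide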
12、Deploying the highly available kube-controller-manager cluster
This part describes the steps to deploy a highly available kube-controller-manager cluster.
The cluster has 3 nodes. After startup a leader is elected through competing elections while the other nodes block; when the leader becomes unavailable, the blocked nodes elect a new leader, which keeps the service available.
For secure communication, an x509 certificate and private key are generated first. kube-controller-manager uses the certificate in two cases:
- communicating with the kube-apiserver secure port;
- serving Prometheus-format metrics on its own secure port (https, 10257).
Note: unless otherwise specified, all operations in this part are performed on the qist node.
12.1 Creating the kube-controller-manager certificate and private key
Create the certificate signing request:
cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-controller-manager.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "$CERT_ST",
      "L": "$CERT_L",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
- The hosts list should contain the IPs of all kube-controller-manager nodes;
- CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
Generate the certificate and private key:
cd /opt/k8s/work
cfssl gencert \
  -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
  -config=/opt/k8s/cfssl/ca-config.json \
  -profile=kubernetes \
  /opt/k8s/cfssl/k8s/k8s-controller-manager.json | \
  cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-controller-manager

ll /opt/k8s/cfssl/pki/k8s/k8s-controller-manager*
Distribute the generated certificate and private key to all master nodes:
cd /opt/k8s/work
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager* root@192.168.0.12:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager* root@192.168.0.13:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-controller-manager* root@192.168.0.14:/apps/k8s/ssl/k8s
12.2 Creating and distributing the kubeconfig file
kube-controller-manager uses a kubeconfig file to access the apiserver; it contains the apiserver address, the embedded CA certificate and the kube-controller-manager certificate:
cd /opt/k8s/kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/opt/k8s/cfssl/pki/k8s/k8s-controller-manager.pem \
  --embed-certs=true \
  --client-key=/opt/k8s/cfssl/pki/k8s/k8s-controller-manager-key.pem \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context kubernetes --kubeconfig=kube-controller-manager.kubeconfig
- kube-controller-manager is co-located with kube-apiserver, so kube-apiserver is reached through the local load balancer address (127.0.0.1:6443).
Distribute the kubeconfig to all master nodes:
cd /opt/k8s/kubeconfig
scp kube-controller-manager.kubeconfig root@192.168.0.12:/apps/k8s/config/
scp kube-controller-manager.kubeconfig root@192.168.0.13:/apps/k8s/config/
scp kube-controller-manager.kubeconfig root@192.168.0.14:/apps/k8s/config/
12.3 Creating the kube-controller-manager startup configuration
cd /opt/k8s/work
cat >kube-controller-manager <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--profiling \
--concurrent-service-syncs=2 \
--concurrent-deployment-syncs=10 \
--concurrent-gc-syncs=30 \
--leader-elect=true \
--bind-address=0.0.0.0 \
--service-cluster-ip-range=10.66.0.0/16 \
--cluster-cidr=10.80.0.0/12 \
--node-cidr-mask-size=24 \
--cluster-name=kubernetes \
--allocate-node-cidrs=true \
--kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--authentication-kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--authorization-kubeconfig=/apps/k8s/config/kube-controller-manager.kubeconfig \
--use-service-account-credentials=true \
--client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-allowed-names=aggregator \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--node-monitor-grace-period=30s \
--node-monitor-period=5s \
--pod-eviction-timeout=1m0s \
--node-startup-grace-period=20s \
--terminated-pod-gc-threshold=50 \
--alsologtostderr=true \
--cluster-signing-cert-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--cluster-signing-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
--deployment-controller-sync-period=10s \
--experimental-cluster-signing-duration=876000h0m0s \
--root-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--service-account-private-key-file=/apps/k8s/ssl/k8s/k8s-ca-key.pem \
--enable-garbage-collector=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/apps/k8s/ssl/k8s/k8s-controller-manager.pem \
--tls-private-key-file=/apps/k8s/ssl/k8s/k8s-controller-manager-key.pem \
--kube-api-qps=100 \
--kube-api-burst=100 \
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
--log-dir=/apps/k8s/log \
--v=2"
EOF
(The original listed --requestheader-client-ca-file twice; the duplicate flag has been removed.)
Parameter explanation:
--port=0: disables the insecure (http) port; --address is then ignored and only --bind-address takes effect;
--secure-port=10257: the https port that serves /metrics;
--kubeconfig: path to the kubeconfig that kube-controller-manager uses to connect to and authenticate against kube-apiserver;
--authentication-kubeconfig and --authorization-kubeconfig: used by kube-controller-manager to authenticate and authorize client requests to its own https port; it no longer uses --tls-ca-file to verify client certificates for https metrics requests. Without these two kubeconfig parameters, client requests to the kube-controller-manager https port are rejected (insufficient permission);
--cluster-signing-*-file: sign the certificates created by TLS Bootstrap;
--experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;
--root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
--service-account-private-key-file: private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
--service-cluster-ip-range: Service cluster IP range; must match the kube-apiserver parameter of the same name;
--leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;
--controllers=*,bootstrapsigner,tokencleaner: list of enabled controllers; tokencleaner automatically cleans up expired Bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics related parameters, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller inside kube-controller-manager uses its own ServiceAccount to access kube-apiserver.
Distribute the kube-controller-manager configuration file to all master nodes:
cd /opt/k8s/work
scp kube-controller-manager root@192.168.0.12:/apps/k8s/conf/
scp kube-controller-manager root@192.168.0.13:/apps/k8s/conf/
scp kube-controller-manager root@192.168.0.14:/apps/k8s/conf/
12.4 Creating the kube-controller-manager systemd unit file
cd /opt/k8s/work
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-controller-manager
ExecStart=/apps/k8s/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
12.5 Distributing the kube-controller-manager systemd unit file to each node
Distribute it to all master nodes:
cd /opt/k8s/work
scp kube-controller-manager.service root@192.168.0.12:/usr/lib/systemd/system/
scp kube-controller-manager.service root@192.168.0.13:/usr/lib/systemd/system/
scp kube-controller-manager.service root@192.168.0.14:/usr/lib/systemd/system/
12.6 Starting the kube-controller-manager service
Execute on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:
# Reload systemd units
systemctl daemon-reload
# Enable kube-controller-manager at boot
systemctl enable kube-controller-manager
# Start kube-controller-manager
systemctl start kube-controller-manager
# Restart kube-controller-manager
systemctl restart kube-controller-manager
12.7 Checking the service status
Execute on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:
systemctl status kube-controller-manager|grep Active
Make sure the status is active (running); otherwise check the logs to find the cause:
journalctl -u kube-controller-manager
kube-controller-manager listens on port 10257 for https requests:
[root@k8s-master-1 conf]# netstat -lnpt | grep kube-cont
tcp6  0  0 :::10257  :::*  LISTEN  3430/kube-controlle
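An optional check (my addition): the secure port should answer health checks, while unauthenticated metrics requests are rejected by the webhook authorization configured above:
# /healthz does not require authentication
curl -sk https://127.0.0.1:10257/healthz
# /metrics requires an authenticated and authorized client, so an anonymous request should be refused
curl -sk https://127.0.0.1:10257/metrics | head -n 1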
12.8 Viewing the current leader
kubectl -n kube-system get leases kube-controller-manager
The output shows that the current leader is the k8s-master-1 node.
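The holder can also be read directly from the lease (a small sketch, my addition):
kubectl -n kube-system get leases kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'
# The output has the form <node name>_<random suffix>, e.g. k8s-master-1_...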
12.9 Testing kube-controller-manager high availability
Stop the kube-controller-manager service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires the leader role.
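A minimal sketch of such a test (commands are my addition):
# On the current leader (e.g. k8s-master-1):
systemctl stop kube-controller-manager
# On the qist node, watch the lease move to another master within roughly the lease duration:
kubectl -n kube-system get leases kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'
# Restore the stopped service afterwards:
systemctl start kube-controller-manager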
13、Deploying the highly available kube-scheduler cluster
This part describes the steps to deploy a highly available kube-scheduler cluster.
The cluster has 3 nodes. After startup a leader is elected through competing elections while the other nodes block; when the leader becomes unavailable, the remaining nodes elect a new leader, which keeps the service available.
For secure communication, an x509 certificate and private key are generated first. kube-scheduler uses the certificate in two cases:
- communicating with the kube-apiserver secure port;
- serving Prometheus-format metrics on its own secure port (https, 10259).
Note: unless otherwise specified, all operations in this part are performed on the qist node.
13.1 Creating the kube-scheduler certificate and private key
Create the certificate signing request:
cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/k8s-scheduler.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
- The hosts list should contain the IPs of all kube-scheduler nodes;
- CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
Generate the certificate and private key:
cd /opt/k8s/work
cfssl gencert \
  -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
  -config=/opt/k8s/cfssl/ca-config.json \
  -profile=kubernetes \
  /opt/k8s/cfssl/k8s/k8s-scheduler.json | \
  cfssljson -bare /opt/k8s/cfssl/pki/k8s/k8s-scheduler

ls -al /opt/k8s/cfssl/pki/k8s/k8s-scheduler*
Distribute the generated certificate and private key to all master nodes:
cd /opt/k8s/work
scp -r /opt/k8s/cfssl/pki/k8s/k8s-scheduler* root@192.168.0.12:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-scheduler* root@192.168.0.13:/apps/k8s/ssl/k8s
scp -r /opt/k8s/cfssl/pki/k8s/k8s-scheduler* root@192.168.0.14:/apps/k8s/ssl/k8s
13.2 Creating and distributing the kubeconfig file
kube-scheduler uses a kubeconfig file to access the apiserver; it contains the apiserver address, the embedded CA certificate and the kube-scheduler certificate:
cd /opt/k8s/kubeconfig
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig
# Set client credentials
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/opt/k8s/cfssl/pki/k8s/k8s-scheduler.pem \
  --embed-certs=true \
  --client-key=/opt/k8s/cfssl/pki/k8s/k8s-scheduler-key.pem \
  --kubeconfig=kube-scheduler.kubeconfig
# Set the context
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
# Use the default context
kubectl config use-context kubernetes --kubeconfig=kube-scheduler.kubeconfig
Distribute the kubeconfig to all master nodes:
cd /opt/k8s/kubeconfig
scp kube-scheduler.kubeconfig root@192.168.0.12:/apps/k8s/config/
scp kube-scheduler.kubeconfig root@192.168.0.13:/apps/k8s/config/
scp kube-scheduler.kubeconfig root@192.168.0.14:/apps/k8s/config/
13.3 Creating the kube-scheduler configuration file
cd /opt/k8s/work
cat >kube-scheduler <<EOF
KUBE_SCHEDULER_OPTS=" \
--logtostderr=true \
--bind-address=0.0.0.0 \
--leader-elect=true \
--kubeconfig=/apps/k8s/config/kube-scheduler.kubeconfig \
--authentication-kubeconfig=/apps/k8s/config/kube-scheduler.kubeconfig \
--authorization-kubeconfig=/apps/k8s/config/kube-scheduler.kubeconfig \
--tls-cert-file=/apps/k8s/ssl/k8s/k8s-scheduler.pem \
--tls-private-key-file=/apps/k8s/ssl/k8s/k8s-scheduler-key.pem \
--client-ca-file=/apps/k8s/ssl/k8s/k8s-ca.pem \
--requestheader-allowed-names= \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--alsologtostderr=true \
--kube-api-qps=100 \
--authentication-tolerate-lookup-failure=false \
--kube-api-burst=100 \
--log-dir=/apps/k8s/log \
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256 \
--v=2"
EOF
- --kubeconfig: path to the kubeconfig that kube-scheduler uses to connect to and authenticate against kube-apiserver;
- --leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block.
Distribute the kube-scheduler configuration file to all master nodes:
cd /opt/k8s/work
scp kube-scheduler root@192.168.0.12:/apps/k8s/conf/
scp kube-scheduler root@192.168.0.13:/apps/k8s/conf/
scp kube-scheduler root@192.168.0.14:/apps/k8s/conf/
13.4 Creating the kube-scheduler systemd unit file
cd /opt/k8s/work
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-scheduler
ExecStart=/apps/k8s/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
13.5 Distributing the kube-scheduler systemd unit file to each node
Distribute it to all master nodes:
cd /opt/k8s/work
scp kube-scheduler.service root@192.168.0.12:/usr/lib/systemd/system/
scp kube-scheduler.service root@192.168.0.13:/usr/lib/systemd/system/
scp kube-scheduler.service root@192.168.0.14:/usr/lib/systemd/system/
13.6 Starting the kube-scheduler service
Execute on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:
# Reload systemd units
systemctl daemon-reload
# Enable kube-scheduler at boot
systemctl enable kube-scheduler
# Start kube-scheduler
systemctl start kube-scheduler
# Restart kube-scheduler
systemctl restart kube-scheduler
13.7 Checking the service status
Execute on the k8s-master-1, k8s-master-2 and k8s-master-3 nodes:
systemctl status kube-scheduler|grep Active
Make sure the status is active (running); otherwise check the logs to find the cause:
journalctl -u kube-scheduler
kube-scheduler listens on port 10259 for https requests:
[root@k8s-master-1 work]# netstat -tnlp| grep kube-sc
tcp6  0  0 :::10259  :::*  LISTEN  3981/kube-scheduler
13.8 Viewing the current leader
kubectl -n kube-system get leases kube-scheduler
The output shows that the current leader is the k8s-master-1 node.
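As with kube-controller-manager, the holder can be read directly from the lease (my addition):
kubectl -n kube-system get leases kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'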
13.9 Testing kube-scheduler high availability
Pick one or two master nodes, stop the kube-scheduler service on them, and check whether another node acquires the leader role.
14、Deploying kubelet on the worker (node) nodes
The configuration reuses the master kubelet configuration.
14.1 Distributing the bootstrap kubeconfig
Distribute it to all worker nodes:
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.0.x:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.0.x:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.3.62:/apps/k8s/conf/
scp /opt/k8s/kubeconfig/bootstrap.kubeconfig root@192.168.3.70:/apps/k8s/conf/
14.2 Generating the configuration files
Generate each node's configuration by following 11.4 "Generating the kubelet configuration files"; you mainly need to change the following parameters to the node's own values (a scripted variant is sketched after the list).
Changes:
# Changes in /apps/k8s/conf/kubelet
--node-ip=192.168.0.12              # change to your node's IP
--hostname-override=k8s-master-3    # change to your node's name
# Changes in /apps/k8s/conf/kubelet.yaml
address: 192.168.0.12               # change to your node's IP
healthzBindAddress: 192.168.0.12    # change to your node's IP
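If you have many nodes, the per-node files can be derived from the template below instead of editing them by hand; a minimal sketch (NODE_IP and NODE_NAME are placeholders you must set per node, and the substitution assumes the template still contains 192.168.0.12 / k8s-master-1):
# Run on each worker node after writing the template files there
NODE_IP=192.168.3.62      # this node's IP (placeholder)
NODE_NAME=k8s-node-1      # this node's name (placeholder)
sed -i "s/192.168.0.12/${NODE_IP}/g;s/k8s-master-1/${NODE_NAME}/g" /apps/k8s/conf/kubelet /apps/k8s/conf/kubelet.yaml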
Configuration template (modify it and execute on each node):
cat > /apps/k8s/conf/kubelet <<EOF
KUBELET_OPTS="--bootstrap-kubeconfig=/apps/k8s/conf/bootstrap.kubeconfig \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--kubeconfig=/apps/k8s/conf/kubelet.kubeconfig \
--node-ip=192.168.0.12 \
--hostname-override=k8s-master-1 \
--cert-dir=/apps/k8s/ssl \
--runtime-cgroups=/systemd/system.slice \
--root-dir=/var/lib/kubelet \
--log-dir=/apps/k8s/log \
--alsologtostderr=true \
--config=/apps/k8s/conf/kubelet.yaml \
--logtostderr=true \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/crio/crio.sock \
--containerd=unix:///var/run/crio/crio.sock \
--pod-infra-container-image=docker.io/juestnow/pause:3.5 \
--v=2 \
--image-pull-progress-deadline=30s"
EOF
cat > /apps/k8s/conf/kubelet.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: "/apps/work/kubernetes/manifests"
syncFrequency: 30s
fileCheckFrequency: 20s
httpCheckFrequency: 20s
address: 192.168.0.12
port: 10250
readOnlyPort: 0
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
rotateCertificates: true
authentication:
  x509:
    clientCAFile: "/apps/k8s/ssl/k8s/k8s-ca.pem"
  webhook:
    enabled: true
    cacheTTL: 2m0s
  anonymous:
    enabled: false
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 5
registryBurst: 10
eventRecordQPS: 15
eventBurst: 30
enableDebuggingHandlers: true
healthzPort: 10248
healthzBindAddress: 192.168.0.12
oomScoreAdj: -999
clusterDomain: cluster.local
clusterDNS:
- 10.66.0.2
streamingConnectionIdleTimeout: 4h0m0s
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 5m0s
nodeLeaseDurationSeconds: 40
imageMinimumGCAge: 2m0s
imageGCHighThresholdPercent: 70
imageGCLowThresholdPercent: 50
volumeStatsAggPeriod: 1m0s
kubeletCgroups: "/systemd/system.slice"
cgroupsPerQOS: true
cgroupDriver: systemd
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
topologyManagerPolicy: none
runtimeRequestTimeout: 2m0s
hairpinMode: hairpin-veth
maxPods: 55
podsPerCore: 0
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
maxOpenFiles: 1000000
contentType: application/vnd.kubernetes.protobuf
kubeAPIQPS: 15
kubeAPIBurst: 30
serializeImagePulls: false
evictionHard:
  imagefs.available: 10%
  memory.available: 500Mi
  nodefs.available: 10%
evictionSoft:
  imagefs.available: 15%
  memory.available: 500Mi
  nodefs.available: 15%
evictionSoftGracePeriod:
  imagefs.available: 2m
  memory.available: 2m
  nodefs.available: 2m
evictionPressureTransitionPeriod: 20s
evictionMinimumReclaim:
  imagefs.available: 500Mi
  memory.available: 0Mi
  nodefs.available: 500Mi
enableControllerAttachDetach: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
failSwapOn: false
containerLogMaxSize: 100Mi
containerLogMaxFiles: 10
configMapAndSecretChangeDetectionStrategy: Watch
systemReserved:
  cpu: 1000m
  ephemeral-storage: 1Gi
  memory: 1024Mi
kubeReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 512Mi
systemReservedCgroup: "/systemd/system.slice"
kubeReservedCgroup: "/systemd/system.slice"
enforceNodeAllocatable:
- pods
allowedUnsafeSysctls:
- kernel.msg*
- kernel.shm*
- kernel.sem
- fs.mqueue.*
- net.*
EOF
Distribute the systemd unit file to all worker nodes:
cd /opt/k8s/work
scp kubelet.service root@192.168.0.x:/usr/lib/systemd/system/
scp kubelet.service root@192.168.0.x:/usr/lib/systemd/system/
scp kubelet.service root@192.168.3.x:/usr/lib/systemd/system/
scp kubelet.service root@192.168.3.x:/usr/lib/systemd/system/
14.3 Starting the kubelet service
Execute on the k8s-node-1, k8s-node-2, k8s-node-3 and k8s-node-4 nodes:
# Reload systemd units
systemctl daemon-reload
# Enable kubelet at boot
systemctl enable kubelet
# Start kubelet
systemctl start kubelet
# Restart kubelet
systemctl restart kubelet
14.4 Checking the startup result
Execute on the k8s-node-1, k8s-node-2, k8s-node-3 and k8s-node-4 nodes:
systemctl status kubelet|grep Active
[root@k8s-node-1 ~]# systemctl status kubelet|grep Active
   Active: active (running) since Fri 2022-02-11 04:48:17 CST; 3 days ago
Make sure the status is active (running); otherwise check the logs to find the cause:
journalctl -u kubelet
14.5 Checking that the static Pods have started
[root@k8s-node-1 ~]# crictl ps
e180c4244b490  ad393d6a4d1b1ccbce65c2a4db6064635a8aac883d9dd66a38e14ce925e93b7f  3 days ago  Running  kube-rbac-proxy  4  94f09ded0f736
[root@k8s-node-1 ~]# netstat -tnlp| grep 6443
tcp   0  0 0.0.0.0:6443   0.0.0.0:*   LISTEN   1790/nginx: master
tcp6  0  0 :::6443        :::*        LISTEN   1790/nginx: master
[root@k8s-node-1 ~]# curl -k https://127.0.0.1:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
# The load balancer works; continue deploying the other components
15、Deploying the kube-proxy component
kube-proxy runs on all worker nodes. It watches the apiserver for changes to Services and Endpoints and creates routing rules that provide Service IPs and load balancing.
This part deploys kube-proxy in ipvs mode.
Note: unless otherwise specified, all operations in this part are performed on the qist node, with files distributed and commands executed remotely from there.
15.1 Creating the kube-proxy certificate
Create the certificate signing request:
cd /opt/k8s/work
cat > /opt/k8s/cfssl/k8s/kube-proxy.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:node-proxier",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
- CN: sets the certificate User to system:kube-proxy;
- the predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the kube-apiserver proxy-related APIs;
- the certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.
Generate the certificate and private key:
cd /opt/k8s/work
cfssl gencert \
  -ca=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  -ca-key=/opt/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
  -config=/opt/k8s/cfssl/ca-config.json \
  -profile=kubernetes \
  /opt/k8s/cfssl/k8s/kube-proxy.json | \
  cfssljson -bare /opt/k8s/cfssl/pki/k8s/kube-proxy

ls -al /opt/k8s/cfssl/pki/k8s/kube-proxy*
15.2 Creating and distributing the kubeconfig file
cd /opt/k8s/kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/cfssl/pki/k8s/k8s-ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=/opt/k8s/cfssl/pki/k8s/kube-proxy.pem \
  --client-key=/opt/k8s/cfssl/pki/k8s/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Distribute the kubeconfig file:
cd /opt/k8s/kubeconfig
scp kube-proxy.kubeconfig root@192.168.0.12:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.0.13:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.0.14:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.0.x:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.0.x:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.3.x:/apps/k8s/conf
scp kube-proxy.kubeconfig root@192.168.3.x:/apps/k8s/conf
15.3 Creating the kube-proxy configuration file
Starting with v1.10, some kube-proxy parameters can be set in a configuration file. You can generate such a file with the --write-config-to option, or refer to the comments in the source code.
Create the kube-proxy configuration.
Using k8s-master-1 as an example; on the other nodes change this parameter to match the kubelet setting and run the same commands.
Change the following parameter to the node name, consistent with kubelet:
--hostname-override=k8s-master-1
Modify and execute on all nodes:
cat > /apps/k8s/conf/kube-proxy <<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=2 \
--masquerade-all=true \
--proxy-mode=ipvs \
--profiling=true \
--ipvs-min-sync-period=5s \
--ipvs-sync-period=5s \
--ipvs-scheduler=rr \
--conntrack-max-per-core=0 \
--cluster-cidr=10.80.0.0/12 \
--log-dir=/apps/k8s/log \
--metrics-bind-address=0.0.0.0 \
--alsologtostderr=true \
--hostname-override=k8s-master-1 \
--kubeconfig=/apps/k8s/conf/kube-proxy.kubeconfig"
EOF
- bindAddress: the listening address;
- clientConnection.kubeconfig: the kubeconfig used to connect to the apiserver;
- clusterCIDR: kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; SNAT is applied to requests to Service IPs only when --cluster-cidr or --masquerade-all is set;
- hostnameOverride: must match the kubelet value, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules;
- mode: use ipvs mode (a quick check of the active mode is sketched below).
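Once kube-proxy is running (section 15.5), the active proxy mode can be read from the metrics port; a hedged check (my addition):
curl -s http://127.0.0.1:10249/proxyMode
# Expected output: ipvs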
15.4 Creating and distributing the kube-proxy systemd unit file
cd /opt/k8s/work
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
LimitNOFILE=655350
LimitNPROC=655350
LimitCORE=infinity
LimitMEMLOCK=infinity
EnvironmentFile=-/apps/k8s/conf/kube-proxy
ExecStart=/apps/k8s/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Distribute the kube-proxy systemd unit file:
cd /opt/k8s/work
scp kube-proxy.service root@192.168.0.12:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.0.13:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.0.14:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.0.x:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.0.x:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.3.x:/usr/lib/systemd/system/
scp kube-proxy.service root@192.168.3.x:/usr/lib/systemd/system/
15.5 Starting the kube-proxy service
Execute on all nodes:
# Reload systemd units
systemctl daemon-reload
# Enable kube-proxy at boot
systemctl enable kube-proxy
# Start kube-proxy
systemctl start kube-proxy
# Restart kube-proxy
systemctl restart kube-proxy
15.6 Checking the startup result
Execute on all nodes:
systemctl status kube-proxy|grep Active
Make sure the status is active (running); otherwise check the logs to find the cause:
journalctl -u kube-proxy
15.7 Viewing the listening ports
[root@k8s-master-1 work]# netstat -lnpt|grep kube-prox
tcp6  0  0 :::10249  :::*  LISTEN  8840/kube-proxy
tcp6  0  0 :::10256  :::*  LISTEN  8840/kube-proxy
- 10249:http prometheus metrics port;
- 10256:http healthz port;
15.8 Viewing the ipvs routing rules
Execute on any node:
/usr/sbin/ipvsadm -ln
Expected output:
[root@k8s-master-1 work]# /usr/sbin/ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.66.0.1:443 rr
  -> 192.168.0.12:5443            Masq    1      0          0
  -> 192.168.0.13:5443            Masq    1      0          0
  -> 192.168.0.14:5443            Masq    1      0          0
All https requests to the kubernetes Service VIP are forwarded to port 5443 on the kube-apiserver nodes.
16、Deploying the flannel plugin
16.1 Generating the deployment yaml
Change "Network": "10.80.0.0/12" to the network range set by the kube-controller-manager --cluster-cidr parameter.
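Before generating the manifest, it may help to double-check that value against the running configuration (my addition; run on a master node, where the file from section 12.3 lives):
# The flannel Network must equal the kube-controller-manager --cluster-cidr value
grep -o 'cluster-cidr=[^ ]*' /apps/k8s/conf/kube-controller-manager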
# Generate the yaml
cd /opt/k8s/yaml
cat > kube-flannel.yaml <<EOF
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name":"cni0",
      "cniVersion":"0.3.1",
      "plugins":[
        {
          "type":"flannel",
          "delegate":{
            "forceAddress":false,
            "hairpinMode": true,
            "isDefaultGateway":true
          }
        },
        {
          "type":"portmap",
          "capabilities":{
            "portMappings":true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.80.0.0/12",
      "Backend": {
        "Type": "VXLAN"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      priorityClassName: system-node-critical
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoExecute
        operator: Exists
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
EOF
16.2 Deploying kube-flannel
cd /opt/k8s/yaml
kubectl apply -f kube-flannel.yaml
16.3 Checking the Pod status
[root@k8s-master-1 work]# kubectl get pod -n kube-system | grep kube-flannel
kube-flannel-ds-amd64-h8nxx   1/1   Running   0   1d
kube-flannel-ds-amd64-psnrb   1/1   Running   0   1d
kube-flannel-ds-amd64-rxnml   1/1   Running   0   1d
kube-flannel-ds-amd64-s7r4b   1/1   Running   0   1d
kube-flannel-ds-amd64-t5lss   1/1   Running   0   1d
kube-flannel-ds-amd64-v79t9   1/1   Running   0   1d
kube-flannel-ds-amd64-z7btq   1/1   Running   0   1d
16.4 Checking the node network interfaces
ip a
16.5 Checking the cluster node status
kubectl get node
[root@k8s-master-1 work]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master-1   Ready    <none>   1d    v1.23.3
k8s-master-2   Ready    <none>   1d    v1.23.3
k8s-master-3   Ready    <none>   1d    v1.23.3
k8s-node-1     Ready    <none>   1d    v1.23.3
k8s-node-2     Ready    <none>   1d    v1.23.3
k8s-node-3     Ready    <none>   1d    v1.23.3
k8s-node-4     Ready    <none>   1d    v1.23.3
17、Deploying coredns
17.1 Generating the deployment yaml
- clusterIP: must match the clusterDNS IP in the kubelet kubelet.yaml;
- cluster.local: must match the clusterDomain in the kubelet kubelet.yaml.
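To avoid a mismatch, you can read both values straight from a node's kubelet.yaml before filling in the manifest (a quick check, my addition; run on any node):
grep -A1 -E 'clusterDomain|clusterDNS' /apps/k8s/conf/kubelet.yaml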
cd /opt/k8s/work/yaml
cat > coredns.yaml <<EOF
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods verified
            endpoint_pod_names
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns
        imagePullPolicy: Always
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.66.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
17.2 Deploying coredns
cd /opt/k8s/work/yaml
kubectl apply -f coredns.yaml
17.3 Checking the Pod status
[root@k8s-master-1 yaml]# kubectl -n kube-system get pod | grep coredns
coredns-78944585c4-br6zr   1/1   Running   0   46s
coredns-78944585c4-qv25f   1/1   Running   0   46s
17.4 Testing DNS
# Test from any K8S cluster node
ssh 192.168.0.12
# Install dig
yum install bind-utils
[root@k8s-master-1 yaml]# dig @10.66.0.2 www.qq.com

; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> @10.66.0.2 www.qq.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17237
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: b4777c31e9da1f11 (echoed)
;; QUESTION SECTION:
;www.qq.com.                    IN      A

;; ANSWER SECTION:
www.qq.com.             30      IN      CNAME   ins-r23tsuuf.ias.tencent-cloud.net.
ins-r23tsuuf.ias.tencent-cloud.net. 30 IN A     101.91.42.232
ins-r23tsuuf.ias.tencent-cloud.net. 30 IN A     101.91.22.57

;; Query time: 5 msec
;; SERVER: 10.66.0.2#53(10.66.0.2)
;; WHEN: Fri Feb 18 14:18:27 CST 2022
;; MSG SIZE  rcvd: 209
As shown, names resolve normally. (End)
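External names resolve; as an extra check (my addition), cluster-internal Service names should resolve to their Service IPs as well:
# The kubernetes Service should resolve to the cluster IP 10.66.0.1
dig @10.66.0.2 +short kubernetes.default.svc.cluster.local
# kube-dns itself should resolve to 10.66.0.2
dig @10.66.0.2 +short kube-dns.kube-system.svc.cluster.local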