1. Introduction
This article is based on v1.20.6, the latest official release at the time of writing (2021).
2. Environment Preparation
2.1 Machine Planning
2.2 Software Versions
2.3 Load Balancer VIP
The high-availability VIP is provided by keepalived + HAProxy, which you set up yourself (nginx would also work); the setup itself is not covered here.
192.168.188.220  load balancer VIP (keepalived + HAProxy must be installed on the 202, 203 and 204 machines)
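Since the HA layer is out of scope here, this is only a minimal sketch of the HAProxy part of the VIP. It assumes a frontend port of 36443 (the port used by the kubeconfig files later in this article) and that the kube-apiservers listen on 6443:

# /etc/haproxy/haproxy.cfg (fragment, sketch only)
listen k8s-apiserver
    bind *:36443
    mode tcp
    balance roundrobin
    server k8s-master-01 192.168.188.202:6443 check inter 10s
    server k8s-master-02 192.168.188.203:6443 check inter 10s
    server k8s-master-03 192.168.188.204:6443 check inter 10s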
3. Building the Cluster
3.1 Basic System Configuration
For the basic OS preparation, refer to the separate guide (link).
The following configuration is applied on all nine machines.
3.1.1 Configure the hosts File
192.168.188.202 k8s-master-01
192.168.188.203 k8s-master-02
192.168.188.204 k8s-master-03
192.168.188.205 k8s-node-01
192.168.188.206 k8s-node-02
192.168.188.207 k8s-node-03
192.168.188.208 k8s-node-04
192.168.188.209 k8s-node-05
192.168.188.210 k8s-node-06
You can create the file locally and push it to every machine with ansible (set up ansible yourself):
ansible shooter -m copy -a 'src=/etc/hosts dest=/etc/hosts' -k
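The host groups used throughout this article (shooter = all machines, master, node) are an assumption about your inventory; a minimal /etc/ansible/hosts along these lines would make the commands work as written:

# /etc/ansible/hosts (sketch; group names inferred from the commands in this article)
[master]
192.168.188.20[2:4]

[node]
192.168.188.20[5:9]
192.168.188.210

[shooter:children]
master
node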
3.2 Configure the Working Directory
Every machine needs certificate files, component configuration files, and systemd unit files for the components. We generate all of these in one place and then distribute them to the other machines with ansible. The following operations are performed on the local ansible machine.
3.3 Set Up the etcd Cluster

# 1. Create the etcd working directories (local)
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl

# 2. Create the etcd certificates (local)
cd /data/work/

# 3. Install the certificate tooling (cfssl)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl*
cp cfssl_linux-amd64 /usr/local/bin/cfssl
cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
cp cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
3.3.1 Create the CA Signing Request (local)
vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
Note:
CN (Common Name): kube-apiserver extracts this field from the certificate and uses it as the requesting user name; browsers use this field to verify whether a site is legitimate.
O (Organization): kube-apiserver extracts this field and uses it as the group the requesting user belongs to.
3.3.2 Generate the CA Certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
3.3.3 Configure the CA Signing Policy
vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
3.3.4 Create the etcd CSR File
vim etcd-csr.json

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.188.202",
    "192.168.188.203",
    "192.168.188.204"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Shanghai",
    "L": "Shanghai",
    "O": "k8s",
    "OU": "system"
  }]
}
3.3.5 Generate the Certificates
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*.pem
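Optionally, the cfssl-certinfo tool installed earlier can be used to inspect the freshly issued certificate and confirm that 127.0.0.1 and the three master IPs appear in its SAN list:

cfssl-certinfo -cert etcd.pem   # check that the "sans" list contains 127.0.0.1 and the three master IPs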
Sync the certificates to the three master nodes

ansible:
# 1. Create the etcd certificate directory
ansible master -m file -a 'name=/etc/etcd/ssl state=directory' -k

# 2. Copy the locally generated etcd certificates to the three remote masters
ansible master -m copy -a 'src=/data/work/ca.pem dest=/etc/etcd/ssl/' -k
ansible master -m copy -a 'src=/data/work/ca-key.pem dest=/etc/etcd/ssl/' -k
ansible master -m copy -a 'src=/data/work/etcd.pem dest=/etc/etcd/ssl/' -k
ansible master -m copy -a 'src=/data/work/etcd-key.pem dest=/etc/etcd/ssl/' -k
3.3.6 Deploy the etcd Cluster
Download the etcd package

# Download the etcd package
ansible master -m command -a 'chdir=/root/k8s/ wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz' -k

# Extract the etcd package
ansible master -m command -a 'chdir=/root/k8s/ tar -xf etcd-v3.4.13-linux-amd64.tar.gz' -k

# Copy the etcd binaries into place on the three masters
ansible master -m command -a 'chdir=/root/k8s/etcd-v3.4.13-linux-amd64/ cp -p etcd etcdctl /usr/local/bin/' -k
3.3.7 Create the Configuration File
vim etcd.conf

#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.188.202:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.188.202:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.188.202:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.188.202:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.188.202:2380,etcd2=https://192.168.188.203:2380,etcd3=https://192.168.188.204:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Note: on master2 and master3 the etcd name and the IP addresses in this file must be changed accordingly.
Variables:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
3.3.8 Create the Service Unit File
Option 1: start with a configuration file (this article uses option 1)
vim etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Option 2: start without a configuration file
vim etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --name=etcd1 \
  --data-dir=/var/lib/etcd/default.etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://192.168.188.202:2380 \
  --listen-client-urls=https://192.168.188.202:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.188.202:2379 \
  --initial-advertise-peer-urls=https://192.168.188.202:2380 \
  --initial-cluster=etcd1=https://192.168.188.202:2380,etcd2=https://192.168.188.203:2380,etcd3=https://192.168.188.204:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to each node

cp etcd.conf /etc/etcd/
cp etcd.service /usr/lib/systemd/system/
mkdir -p /var/lib/etcd/default.etcd

ansible: send the configuration files to the three remote masters
# 1. Create the data directory (required; etcd will not start without it)
ansible shooter -m file -a 'name=/var/lib/etcd/default.etcd state=directory' -k

# 2. Copy the configuration files to the three masters (remember to adjust the node name and IPs in etcd.conf for each machine)
ansible master -m copy -a 'src=/data/work/etcd.conf dest=/etc/etcd/' -k
ansible master -m copy -a 'src=/data/work/etcd.service dest=/usr/lib/systemd/system/' -k
3.3.9 Start the etcd Cluster

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd

ansible:
ansible master -m command -a 'systemctl daemon-reload' -k
ansible master -m command -a 'systemctl enable etcd.service' -k
ansible master -m command -a 'systemctl start etcd.service' -k
ansible master -m command -a 'systemctl status etcd' -k

Note: the first start may hang for a while, because each member waits for the other members to come up.
3.3.10 Check the Cluster Status

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.188.202:2379,https://192.168.188.203:2379,https://192.168.188.204:2379 endpoint health

# Check the current cluster leader
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.188.202:2379,https://192.168.188.203:2379,https://192.168.188.204:2379 endpoint status
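The member list subcommand (with the same TLS flags) is also handy for confirming that all three members registered:

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.188.202:2379 member list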
3.4 Deploy the Kubernetes Components
3.4.1 Download the Packages

# Download the server package
wget https://dl.k8s.io/v1.20.1/kubernetes-server-linux-amd64.tar.gz

# Send it to every node with ansible
ansible shooter -m copy -a 'src=/root/k8s1.20/kubernetes-server-linux-amd64.tar.gz dest=/root/k8s' -k

# Extract it on every node
ansible shooter -m command -a 'tar -zxvf /root/k8s/kubernetes-server-linux-amd64.tar.gz -C /root/k8s' -k

# Copy the master components into place
ansible master -m command -a 'chdir=/root/k8s/kubernetes/server/bin/ cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/' -k

# Copy the node components into place
ansible node -m command -a 'chdir=/root/k8s/kubernetes/server/bin/ cp kubelet kube-proxy /usr/local/bin/' -k
3.4.2 Create the Working Directories

mkdir -p /etc/kubernetes/        # kubernetes component configuration files
mkdir -p /etc/kubernetes/ssl     # kubernetes component certificates
mkdir /var/log/kubernetes        # kubernetes component logs

ansible:
ansible shooter -m file -a 'name=/etc/kubernetes/ssl state=directory' -k
ansible shooter -m file -a 'name=/var/log/kubernetes state=directory' -k
3.4.3 Deploy kube-apiserver
Create the CSR file
vim kube-apiserver-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.188.202",
    "192.168.188.203",
    "192.168.188.204",
    "192.168.188.205",
    "192.168.188.206",
    "192.168.188.207",
    "192.168.188.208",
    "192.168.188.209",
    "192.168.188.210",
    "192.168.188.220",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Note: if the hosts field is not empty, it must list every IP or domain name authorized to use this certificate. Because the certificate is used by the kubernetes master cluster, all node IPs must be included, together with the first IP of the Service network (the first IP of the --service-cluster-ip-range passed to kube-apiserver; 10.255.0.1 in this setup).
3.4.5 Generate the Certificate and Token File

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
3.4.6 Create the Configuration File
vim kube-apiserver.conf

KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=192.168.188.202 \
--secure-port=6443 \
--advertise-address=192.168.188.202 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \    # required on 1.20+ (remove this comment when writing the file, otherwise it will error)
--service-account-issuer=https://kubernetes.default.svc.cluster.local \    # required on 1.20+ (remove this comment when writing the file, otherwise it will error)
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.188.202:2379,https://192.168.188.203:2379,https://192.168.188.204:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
Notes:
--logtostderr: log to stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to access kubelets
--tls-xxx-file: apiserver https certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
3.4.7 Create the Service Unit File
vim kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to each node

cp ca*.pem /etc/kubernetes/ssl/
cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
cp kube-apiserver.conf /etc/kubernetes/
cp kube-apiserver.service /usr/lib/systemd/system/

ansible:
ansible master -m copy -a 'src=/data/work/ca.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/ca-key.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/kube-apiserver.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/kube-apiserver-key.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/token.csv dest=/etc/kubernetes/' -k
ansible master -m copy -a 'src=/data/work/kube-apiserver.conf dest=/etc/kubernetes/' -k
ansible master -m copy -a 'src=/data/work/kube-apiserver.service dest=/usr/lib/systemd/system/' -k

Note: in the kube-apiserver.conf on master2 and master3, change the IP addresses to each machine's own IP.
3.4.8 Start the Service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

ansible:
ansible master -m command -a 'systemctl daemon-reload' -k
ansible master -m command -a 'systemctl enable kube-apiserver' -k
ansible master -m command -a 'systemctl start kube-apiserver' -k
ansible master -m command -a 'systemctl status kube-apiserver' -k

Test:
curl --insecure https://192.168.188.202:6443/
ansible master -m command -a 'curl --insecure https://192.168.188.202:6443/' -k

Getting a response means the apiserver is up. (Because --anonymous-auth=false is set, the response is a 401 Unauthorized JSON body; that is expected.)
3.4.9 Deploy kubectl
Create the CSR file
vim admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
Notes:
kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy and Pods.
kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call every kube-apiserver API.
O sets the certificate's group to system:masters. When this certificate is used against kube-apiserver, authentication succeeds because the certificate is signed by our CA, and because the group system:masters is pre-authorized, the client is granted access to all APIs.
This admin certificate is later used to generate the administrator's kubeconfig file. RBAC is the recommended way to control permissions in kubernetes; kubernetes takes the certificate's CN field as the User and the O field as the Group.
"O": "system:masters" must be exactly system:masters, otherwise the kubectl create clusterrolebinding step later will fail.
Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Copy the certificate into place
cp admin*.pem /etc/kubernetes/ssl/

ansible:
ansible master -m copy -a 'src=/data/work/admin.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/admin-key.pem dest=/etc/kubernetes/ssl/' -k
Create the kubeconfig file
kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client certificate to use.
Run the following on any one master, then copy the generated file to the other two machines.
# Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.188.220:36443 --kubeconfig=kube.config

# Set the client credentials
kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --client-key=/etc/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=kube.config

# Set the context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

# Set the default context
kubectl config use-context kubernetes --kubeconfig=kube.config

# Create the config directory
mkdir ~/.kube

# Install the kubectl config file (copy it to the other two masters as well)
cp kube.config ~/.kube/config

# Grant the kubernetes user (the CN of the apiserver certificate) access to the kubelet API (run on any one master)
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Check the cluster component status

kubectl cluster-info
kubectl get componentstatuses
kubectl get all --all-namespaces
Configure kubectl command completion (run on all three masters)

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile
3.5.0 Deploy kube-controller-manager
Create the CSR file
vim kube-controller-manager-csr.json

{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.188.202",
    "192.168.188.203",
    "192.168.188.204"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
Notes:
The hosts list contains the IPs of all kube-controller-manager nodes.
CN and O are both system:kube-controller-manager; the kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*.pem
Create the kube-controller-manager kubeconfig

# Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.188.220:36443 --kubeconfig=kube-controller-manager.kubeconfig
# Set the client credentials
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
# Set the context
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
# Set the default context
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Create the configuration file
vim kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.255.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.0.0.0/16 \
--experimental-cluster-signing-duration=87600h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
Create the service unit file
vim kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Sync the files to each node

cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/

ansible:
ansible master -m copy -a 'src=/data/work/kube-controller-manager-key.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/kube-controller-manager.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/kube-controller-manager.kubeconfig dest=/etc/kubernetes/' -k
ansible master -m copy -a 'src=/data/work/kube-controller-manager.conf dest=/etc/kubernetes/' -k
ansible master -m copy -a 'src=/data/work/kube-controller-manager.service dest=/usr/lib/systemd/system/' -k
Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

ansible:
ansible master -m command -a 'systemctl daemon-reload' -k
ansible master -m command -a 'systemctl enable kube-controller-manager' -k
ansible master -m command -a 'systemctl start kube-controller-manager' -k
ansible master -m command -a 'systemctl status kube-controller-manager' -k
3.5.1 Deploy kube-scheduler
Create the CSR file
vim kube-scheduler-csr.json

{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.188.202",
    "192.168.188.203",
    "192.168.188.204"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
Notes:
The hosts list contains the IPs of all kube-scheduler nodes.
CN and O are both system:kube-scheduler; the kubernetes built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
ls kube-scheduler*.pem
Create the kube-scheduler kubeconfig

# Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.188.220:36443 --kubeconfig=kube-scheduler.kubeconfig
# Set the client credentials
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
# Set the context
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
# Set the default context
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Create the configuration file
vim kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
Create the service unit file
vim kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Sync the files to each node

cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/

ansible:
ansible master -m copy -a 'src=/data/work/kube-scheduler-key.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/kube-scheduler.pem dest=/etc/kubernetes/ssl/' -k
ansible master -m copy -a 'src=/data/work/kube-scheduler.kubeconfig dest=/etc/kubernetes/' -k
ansible master -m copy -a 'src=/data/work/kube-scheduler.conf dest=/etc/kubernetes/' -k
ansible master -m copy -a 'src=/data/work/kube-scheduler.service dest=/usr/lib/systemd/system/' -k
Start the service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

ansible:
ansible master -m command -a 'systemctl daemon-reload' -k
ansible master -m command -a 'systemctl enable kube-scheduler' -k
ansible master -m command -a 'systemctl start kube-scheduler' -k
ansible master -m command -a 'systemctl status kube-scheduler' -k
3.5.2 Deploy containerd (all node machines)

cd /root/k8s1.20/work
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz
wget https://github.com/opencontainers/runc/releases/download/v1.0.0-rc10/runc.amd64
wget https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz
wget https://github.com/containerd/containerd/releases/download/v1.3.3/containerd-1.3.3.linux-amd64.tar.gz

# Send everything to all worker nodes
ansible:
# Create the directory
ansible node -m file -a 'name=/root/ruanjian state=directory' -k
# Copy all the packages to every node
ansible node -m copy -a 'src=/root/k8s1.20/work/ dest=/root/ruanjian' -k
Extract:

cd /root/ruanjian
mkdir containerd
tar -xvf containerd-1.3.3.linux-amd64.tar.gz -C containerd
tar -xvf crictl-v1.17.0-linux-amd64.tar.gz
mkdir cni-plugins
sudo tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz -C cni-plugins
sudo mv runc.amd64 runc

ansible:
ansible node -m shell -a 'cd /root/ruanjian && mkdir containerd && tar -xvf containerd-1.3.3.linux-amd64.tar.gz -C containerd && tar -xvf crictl-v1.17.0-linux-amd64.tar.gz' -k
ansible node -m shell -a 'cd /root/ruanjian && mkdir cni-plugins && tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz -C cni-plugins' -k
ansible node -m shell -a 'cd /root/ruanjian && sudo mv runc.amd64 runc' -k
Distribute the binaries to all worker nodes:

# Create the node working directory
ansible node -m file -a 'name=/opt/k8s/bin state=directory' -k
# Copy the binaries into the working directory
ansible node -m shell -a 'cd /root/ruanjian && cp -rf containerd/bin/* crictl cni-plugins/* runc /opt/k8s/bin' -k
# Make everything executable and create the cni directory
ansible node -m shell -a 'chmod a+x /opt/k8s/bin/* && mkdir -p /etc/cni/net.d' -k
Create and distribute the containerd configuration file
Create containerd-config.toml

cat << EOF | sudo tee containerd-config.toml
version = 2
root = "/data/k8s/containerd/root"
state = "/data/k8s/containerd/state"
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.2"
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/k8s/bin"
      conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.runtime.v1.linux"]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
EOF
Send the configuration file to all worker nodes

# Create the directories
ansible node -m shell -a 'mkdir -p /etc/containerd/ /data/k8s/containerd/{root,state}' -k

# Distribute the configuration file
ansible node -m copy -a 'src=/data/work/containerd-config.toml dest=/etc/containerd/' -k

# Rename it
ansible node -m shell -a 'cd /etc/containerd/ && mv containerd-config.toml config.toml' -k
Create the containerd systemd unit file

cat <<EOF | sudo tee containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStartPre=/sbin/modprobe overlay
ExecStart=/opt/k8s/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
Distribute the unit file and start the containerd service

# Distribute the unit file
ansible node -m copy -a 'src=/data/work/containerd.service dest=/etc/systemd/system' -k
# Start containerd
ansible node -m command -a 'systemctl daemon-reload' -k
ansible node -m command -a 'systemctl enable containerd' -k
ansible node -m command -a 'systemctl start containerd' -k
ansible node -m command -a 'systemctl status containerd' -k
Create and distribute the crictl configuration file
crictl is a command-line tool for CRI-compatible container runtimes; it provides docker-like inspection commands. See the official documentation: https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md
Create the crictl configuration file

cat << EOF | sudo tee crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
Distribute to all worker nodes:

# Distribute the configuration file
ansible node -m copy -a 'src=/data/work/crictl.yaml dest=/etc/' -k

# Using crictl (if /opt/k8s/bin/crictl is inconvenient, symlink it into /usr/local/bin)
ansible node -m shell -a 'ln -s /opt/k8s/bin/crictl /usr/local/bin/crictl' -k

crictl images
crictl ps
# the commands are similar to docker's
3.5.3 Deploy kubelet (run these steps on a master)
Create kubelet-bootstrap.kubeconfig

export NODE_NAMES=(k8s-node-01 k8s-node-02 k8s-node-03 k8s-node-04 k8s-node-05 k8s-node-06)

# Create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:${node_name} \
  --kubeconfig ~/.kube/config)

# Use the token from token.csv (the one kube-apiserver was configured with); it supersedes the value above
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

# Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.188.220:36443 --kubeconfig=kubelet-bootstrap.kubeconfig
# Set the client credentials
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
# Set the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

# Create the role binding (run on a master; it grants the kubelet-bootstrap user the system:node-bootstrapper role so kubelets can submit CSRs — without it the kubelet bootstrap against the apiserver times out)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Create the configuration file
vim kubelet.json

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.10.1.14",          # change this IP to each node's own address
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "cgroupfs",        # if your runtime uses the systemd cgroup driver, set this to systemd; this setting matters — otherwise the node cannot join the cluster (with containerd as configured here the default is fine)
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
The final kubelet.json for k8s-node-01, with the comments removed (JSON does not allow comments):

{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.188.205",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
Create the service unit file
vim kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/cert \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
  --root-dir=/var/lib/kubelet \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --hostname-override=##NODE_NAME## \
  --image-pull-progress-deadline=15m \
  --volume-plugin-dir=/var/lib/kubelet/kubelet-plugins/volume/exec/ \
  --logtostderr=true \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
Notes:
--hostname-override: the node's display name, unique within the cluster
--network-plugin: enables CNI
--kubeconfig: an empty path; the file is generated automatically after bootstrap and is then used to talk to the apiserver
--bootstrap-kubeconfig: kubeconfig used on first start to request a certificate from the apiserver
--config: the configuration parameters file
--cert-dir: directory where the kubelet certificate is generated
--pod-infra-container-image: image for the pod infrastructure (pause) container
+ If --hostname-override is set, kube-proxy must be given the same value, otherwise the Node will not be found.
+ --bootstrap-kubeconfig points to the bootstrap kubeconfig; the kubelet uses its user name and token to send TLS Bootstrapping requests to kube-apiserver.
+ After the CSR is approved, the certificate and private key are written into the --cert-dir directory and the --kubeconfig file is generated.
+ Do not use Red Hat's pod-infrastructure:latest image for --pod-infra-container-image; it cannot reap zombie processes in containers.
Sync the files to each node

cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
cp kubelet.json /etc/kubernetes/
cp kubelet.service /usr/lib/systemd/system/
(Skip the cp commands above if the masters do not run kubelet.)

ansible:
# Distribute the files to every worker node
ansible node -m copy -a 'src=/data/work/kubelet-bootstrap.kubeconfig dest=/etc/kubernetes/' -k
ansible node -m copy -a 'src=/data/work/kubelet.json dest=/etc/kubernetes/' -k
ansible node -m copy -a 'src=/data/work/ca.pem dest=/etc/kubernetes/ssl/' -k
ansible node -m copy -a 'src=/data/work/kubelet.service dest=/usr/lib/systemd/system/' -k

Note: change the address field in kubelet.json to each node's own IP; a sketch for doing this per node is shown below.
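A minimal sketch of the per-node substitution, assuming the inventory lists the nodes by IP and kubelet.json still contains the node-01 address (the ##NODE_NAME## placeholder in kubelet.service needs the same per-node treatment):

# generate a kubelet.json per node locally and copy each one into place
for ip in 192.168.188.205 192.168.188.206 192.168.188.207 192.168.188.208 192.168.188.209 192.168.188.210; do
  sed "s/192.168.188.205/$ip/" kubelet.json > kubelet-$ip.json
  ansible $ip -m copy -a "src=/data/work/kubelet-$ip.json dest=/etc/kubernetes/kubelet.json" -k
done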
Start the service
Run on each worker node:

mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

ansible:
# Create the directories
ansible node -m shell -a 'mkdir -p /var/lib/kubelet /var/log/kubernetes' -k
# Start kubelet
ansible node -m command -a 'systemctl daemon-reload' -k
ansible node -m command -a 'systemctl enable kubelet' -k
ansible node -m command -a 'systemctl start kubelet' -k
ansible node -m command -a 'systemctl status kubelet' -k
Once the kubelet service is running, go to a master and approve the bootstrap requests. The following command shows the CSR requests sent by the six worker nodes:

kubectl get csr
Approve the certificates

kubectl certificate approve node-csr-0COEz3hT8thpsf7pG7zTuaZTFRNjZHhEG4Ar_HkgUhw
kubectl certificate approve node-csr-3XlZDNJwTHOAaGkzsMzjwJNhMkaXrK_tca-0oo6Yej0
kubectl certificate approve node-csr-5Xafwl3kR5f9OFIXJOu9tRkd4shYoava7P4lxWlM2Q0
kubectl certificate approve node-csr-M0_ZBhEen7r48VzR8jRWV3MJznDac0kFRXOkncUFp4c
kubectl certificate approve node-csr-Uf8f8BdnZLf_LylpVU8QT_nYz4kT_N_jL-txQJJlrN8
kubectl certificate approve node-csr-yVw2wKze5Tphz_6VdPh5uSOrKQqlgDVwu5g5FkdaXHg
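The CSR names differ in every cluster; instead of copying them one by one, all pending requests can be approved in a single pass:

kubectl get csr -o name | xargs kubectl certificate approve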
kubectl get csr
kubectl get nodes
3.5.4 Deploy kube-proxy (node machines)
Create the CSR file
vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*.pem
Create the kubeconfig file

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.188.220:36443 --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Create the kube-proxy configuration file
vim kube-proxy.yaml

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.188.205
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 192.168.188.205:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.188.205:10249
mode: "ipvs"
Note: in kube-proxy.yaml, change bindAddress, healthzBindAddress and metricsBindAddress to each node's actual IP.
Create the service unit file
vim kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to each node

cp kube-proxy*.pem /etc/kubernetes/ssl/
cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
cp kube-proxy.service /usr/lib/systemd/system/
(Skip the cp commands above if the masters do not run kube-proxy.)

ansible:
ansible node -m copy -a 'src=/data/work/kube-proxy-key.pem dest=/etc/kubernetes/ssl/' -k
ansible node -m copy -a 'src=/data/work/kube-proxy.pem dest=/etc/kubernetes/ssl/' -k
ansible node -m copy -a 'src=/data/work/kube-proxy.kubeconfig dest=/etc/kubernetes/' -k
# Note: change the addresses in kube-proxy.yaml to each node's actual IP before copying
ansible node -m copy -a 'src=/data/work/kube-proxy.yaml dest=/etc/kubernetes/' -k
ansible node -m copy -a 'src=/data/work/kube-proxy.service dest=/usr/lib/systemd/system/' -k
Start the service

mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy

ansible:
# Create the directory
ansible node -m shell -a 'mkdir -p /var/lib/kube-proxy' -k
# Start kube-proxy
ansible node -m command -a 'systemctl daemon-reload' -k
ansible node -m command -a 'systemctl enable kube-proxy' -k
ansible node -m command -a 'systemctl start kube-proxy' -k
ansible node -m command -a 'systemctl status kube-proxy' -k
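Since mode is set to ipvs, a quick sanity check on a node is to look at the IPVS table (this assumes the ipvsadm package is installed and the ip_vs kernel modules were loaded during the basic system configuration):

yum install -y ipvsadm
ipvsadm -Ln    # Service cluster IPs appear as virtual servers once Services exist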
3.5.6 Deploy the Network Plugin (Calico, on a master)

curl https://docs.projectcalico.org/manifests/calico.yaml -O

Edit the yaml:
Change:  value: "192.168.0.0/16"  =>  value: "172.30.0.0/16"
Add:
  - name: IP_AUTODETECTION_METHOD
    value: "interface=eth.*"
Change:  path: /opt/cni/bin  =>  path: /opt/k8s/bin
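For orientation, a sketch of how the edited env section of the calico-node DaemonSet ends up looking. The pod CIDR normally lives in the CALICO_IPV4POOL_CIDR variable (it may be commented out in the manifest you download); only the values above are changed or added:

# calico.yaml (fragment of the calico-node container env, sketch)
- name: CALICO_IPV4POOL_CIDR
  value: "172.30.0.0/16"
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"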
Modify the permissions
Change to:

rules:
- apiGroups:
  - projectcontour.io
  resources:
  - "*"
  verbs:
  - '*'
Apply:
kubectl apply -f calico.yaml
3.5.7 Deploy CoreDNS
Download the CoreDNS yaml template: https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
Modify the yaml file:
Three parameters need to be changed: the DNS zones passed to the kubernetes plugin, the image source (switch to a mirror reachable from your network if needed), and the clusterIP. Specifically:
1. Change the kubernetes plugin line to: kubernetes cluster.local in-addr.arpa ip6.arpa
2. Keep the upstream forwarder: forward . /etc/resolv.conf
3. Set clusterIP to 10.255.0.2 (the clusterDNS address from the kubelet configuration)

vim coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
kubectl apply -f coredns.yaml
3.5.8 Verify the Cluster
Deploy nginx
vim nginx.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 6
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
kubectl apply -f nginx.yaml
kubectl get svc
kubectl get pods
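To check CoreDNS and the Service network together, a temporary busybox pod can resolve the service name (a sketch; busybox:1.28 is chosen because its nslookup works well against cluster DNS):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup nginx-service-nodeport.default.svc.cluster.local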
Verify access to the nginx service

Note: the VIP needs an HAProxy listener that forwards to the NodePort on the node machines:

listen proxy_k8s
    bind :80
    mode tcp
    balance roundrobin
    server k8s-node1 192.168.188.205:30001 check inter 10s
    server k8s-node2 192.168.188.206:30001 check inter 10s
    server k8s-node3 192.168.188.207:30001 check inter 10s
    server k8s-node4 192.168.188.208:30001 check inter 10s
    server k8s-node5 192.168.188.209:30001 check inter 10s
    server k8s-node6 192.168.188.210:30001 check inter 10s
Browse to: http://192.168.188.220/

(Unfinished; still being revised.)

Reference: https://github.com/opsnull/follow-me-install-kubernetes-cluster