Kubernetes High-Availability Cluster Setup (Part 1) | Building an etcd Cluster
2019-12-21

Environment Preparation#

  • Server preparation and base system configuration: see the previous post on basic setup.

  • SSH configuration

> ssh-keygen -b 2048 -t rsa

The environment above can be prepared on a single VM; the remaining nodes can then simply be cloned from it.
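The key generation can also be made non-interactive, which is handy right before cloning the VM; a minimal sketch (the key path and the commented ssh-copy-id loop are illustrative assumptions):

```shell
#!/bin/bash
# Generate an RSA key non-interactively if one does not exist yet
KEYFILE="${KEYFILE:-$HOME/.ssh/id_rsa}"
mkdir -p "$(dirname "$KEYFILE")"
[ -f "$KEYFILE" ] || ssh-keygen -b 2048 -t rsa -N "" -f "$KEYFILE" -q

# Then distribute the public key to every node (hostnames are examples):
# for h in vm5 vm6 vm7; do ssh-copy-id -i "${KEYFILE}.pub" "$h"; done
```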

  • containerd

Here we no longer use Docker as the CRI; we use containerd instead.

> sudo apt-get update && sudo apt-get install -y containerd
> sudo mkdir -p /etc/containerd
> sudo containerd config default | sudo tee /etc/containerd/config.toml
* CNI configuration
> sudo mkdir -p /etc/cni/net.d
> cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
* Registry mirror configuration and default pause image
# /etc/containerd/config.toml, around line 91
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://wlij5bh2.mirror.aliyuncs.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://registry.cn-hangzhou.aliyuncs.com/morse_k8s"]

# Change sandbox_image from:
sandbox_image = "k8s.gcr.io/pause:3.1"
# to:
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/morse_k8s/pause:3.1"
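The sandbox_image edit can also be scripted with sed instead of a text editor. A sketch that demonstrates the substitution on a temporary copy (in practice you would run the sed line against /etc/containerd/config.toml with sudo, after checking the exact quoting in your file):

```shell
#!/bin/bash
# Demonstrated on a temp copy; point CONF at /etc/containerd/config.toml in practice
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
sandbox_image = "k8s.gcr.io/pause:3.1"
EOF

# Rewrite the pause image to the mirror
sed -i 's#k8s.gcr.io/pause:3.1#registry.cn-hangzhou.aliyuncs.com/morse_k8s/pause:3.1#' "$CONF"
grep sandbox_image "$CONF"
```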


# systemd
# To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

* Useful containerd commands
> ctr image ls
# shorthand: ctr i ls
> ctr image remove docker.io/library/hello-world:latest
> ctr container list
# shorthand: ctr c list
> ctr c remove demo
# namespaces
> sudo ctr ns ls
# dump the default configuration
> sudo containerd config default

For other information (pods, Kubernetes-managed containers, and so on), we can use the crictl tool.

-n: namespace. The Kubernetes namespace is k8s.io. Note the position of -n: it goes right after ctr, e.g. ctr -n k8s.io i ls.

* Image preparation

k8s.gcr.io/pause:3.1
# registry.cn-hangzhou.aliyuncs.com/morse_k8s/pause:3.1

k8s.gcr.io/etcd:3.4.13-0
# registry.cn-hangzhou.aliyuncs.com/morse_k8s/etcd:3.4.13-0

References:
- Container runtimes | Kubernetes
- Containerd usage tutorial – 云原生实验室
- containerd CRI config docs: https://github.com/containerd/containerd/blob/master/docs/cri/config.md

Building the etcd Cluster#

Prerequisites#

  • 2 GB or more of RAM, 2 or more CPU cores
  • Full network connectivity among all machines in the cluster (public or private network is fine)
  • No duplicate hostnames, MAC addresses, or product_uuid values among the nodes
  • Swap disabled: swap MUST be disabled for the kubelet to work properly
  • Required ports opened on each machine (or just disable the firewall)

Use this command to verify the machine ID:
sudo cat /sys/class/dmi/id/product_uuid

The product_uuid may end up duplicated when a VM is cloned; remember to clean this up.

  • Modify hosts
# On every machine, add host entries for all the nodes
192.168.10.5 vm5.fuxiao.dev  vm5
192.168.10.6 vm6.fuxiao.dev  vm6
192.168.10.7 vm7.fuxiao.dev  vm7
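Appending these entries can be scripted idempotently, so re-running it on a cloned VM never duplicates lines. A sketch, demonstrated against a temp file (point HOSTS_FILE at /etc/hosts, as root, in practice):

```shell
#!/bin/bash
# Demonstrated on a temp file; use /etc/hosts (as root) in practice
HOSTS_FILE=$(mktemp)
ENTRIES=(
  "192.168.10.5 vm5.fuxiao.dev vm5"
  "192.168.10.6 vm6.fuxiao.dev vm6"
  "192.168.10.7 vm7.fuxiao.dev vm7"
)
add_hosts() {
  local e
  for e in "${ENTRIES[@]}"; do
    # -x matches the whole line, -F disables regex interpretation
    grep -qxF "$e" "$HOSTS_FILE" || echo "$e" >> "$HOSTS_FILE"
  done
}
add_hosts
add_hosts   # the second run adds nothing
```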
  • Install kubeadm and kubelet (on every node). The kubelet will manage etcd, i.e. etcd runs as a container, so the kubelet must be installed first, along with a CRI (the containerd set up above). Since kubeadm cannot install or manage the kubelet or kubectl for you, kubectl, kubelet, and kubeadm are installed together here.
sudo apt-get install -y kubelet kubeadm kubectl
  • Configure the kubelet to be a service manager for etcd (on every node).
> cd /etc/systemd/system/kubelet.service.d
> # Edit the 10-kubeadm.conf file:
> # comment out its Environment lines
> sudo mkdir -p /var/lib/kubelet
> cat <<EOF | sudo tee /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --pod-manifest-path=/etc/kubernetes/manifests --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --runtime-request-timeout=15m"
EOF
> sudo systemctl daemon-reload
> sudo systemctl restart kubelet

When the CRI is not Docker, the following kubelet flags must be added:

--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --runtime-request-timeout=15m

The CRI socket file location differs per runtime:

Docker: /var/run/docker.sock
containerd: /run/containerd/containerd.sock
CRI-O: /var/run/crio/crio.sock

This step only creates the configuration; the kubelet cannot actually run yet at this point.

  • Configuration script
# cat create_etcd.sh

#!/bin/bash
# Replace HOST0, HOST1, and HOST2 with IPs or resolvable hostnames
export HOST0=192.168.10.5
export HOST1=192.168.10.6
export HOST2=192.168.10.7

# Create temp directories to store files that will be distributed to the other hosts
rm -rf /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("vm5" "vm6" "vm7")
# Note: the values in NAMES must match the keys in initial-cluster below

for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta2"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: vm5=https://${ETCDHOSTS[0]}:2380,vm6=https://${ETCDHOSTS[1]}:2380,vm7=https://${ETCDHOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done

Notes:
1. The NAMES values must match the keys in initial-cluster, and the comma-separated values in initial-cluster must not contain spaces.
2. Run the script with . create_etcd.sh (note the leading dot: without it, the environment variables set in the script are not exported to your shell).
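The first pitfall can be checked mechanically by extracting the keys from the initial-cluster string and comparing them with NAMES; a quick sanity-check sketch using the values from the script above:

```shell
#!/bin/bash
NAMES=("vm5" "vm6" "vm7")
INITIAL_CLUSTER="vm5=https://192.168.10.5:2380,vm6=https://192.168.10.6:2380,vm7=https://192.168.10.7:2380"

# Split on commas, then take the key of each "name=peer-url" pair
KEYS=$(echo "$INITIAL_CLUSTER" | tr ',' '\n' | cut -d= -f1)

# Compare order-sensitively against NAMES
if [ "$KEYS" = "$(printf '%s\n' "${NAMES[@]}")" ]; then
  echo "initial-cluster keys match NAMES"
else
  echo "MISMATCH between initial-cluster keys and NAMES" >&2
fi
```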

  • etcd certificates
sudo kubeadm init phase certs etcd-ca
# ls -la /etc/kubernetes/pki/etcd/
#  /etc/kubernetes/pki/etcd/ca.crt
#  /etc/kubernetes/pki/etcd/ca.key

  • Create certificates for each member
sudo kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
sudo kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
sudo cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# Clean up certificates that cannot be reused
sudo find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

sudo kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
sudo kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
sudo cp -R /etc/kubernetes/pki /tmp/${HOST1}/
sudo find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

sudo kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
sudo kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
sudo kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs, because they are for HOST0

# Clean up certs that should not be copied off this host
sudo find /tmp/${HOST2} -name ca.key -type f -delete
sudo find /tmp/${HOST1} -name ca.key -type f -delete
  • Copy the certificates and kubeadm configs
#!/bin/bash
export USER=ubuntu
export HOST1=192.168.10.6
export HOST2=192.168.10.7
ETCDHOSTS=(${HOST1} ${HOST2})
for i in "${!ETCDHOSTS[@]}"; do
	sudo scp -r /tmp/${ETCDHOSTS[$i]}/* ${USER}@${ETCDHOSTS[$i]}:
done
USER=ubuntu
HOST=${HOST1}
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
  • Make sure all the expected files exist
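Per the kubeadm external-etcd guide, each etcd host should end up with the files below under /etc/kubernetes/pki (ca.key exists only on the host where the CA was generated). A small check loop, with the directory parameterized so it can be pointed anywhere:

```shell
#!/bin/bash
PKI_DIR="${PKI_DIR:-/etc/kubernetes/pki}"
REQUIRED=(
  apiserver-etcd-client.crt apiserver-etcd-client.key
  etcd/ca.crt
  etcd/server.crt etcd/server.key
  etcd/peer.crt etcd/peer.key
  etcd/healthcheck-client.crt etcd/healthcheck-client.key
)
missing=0
for f in "${REQUIRED[@]}"; do
  [ -f "$PKI_DIR/$f" ] || { echo "missing: $PKI_DIR/$f" >&2; missing=1; }
done
if [ "$missing" -eq 0 ]; then
  echo "all expected files present"
else
  echo "some expected files are missing" >&2
fi
```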

  • Create the static Pod manifests

root@HOST0 $ sudo kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
root@HOST1 $ sudo kubeadm init phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
root@HOST2 $ sudo kubeadm init phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml

Modify the etcd static Pod manifest on each node:

# Change the image value to the following
image: registry.cn-hangzhou.aliyuncs.com/morse_k8s/etcd:3.4.13-0
  • Optional: check the cluster health
HOST=192.168.10.5
sudo etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${HOST}:2379 endpoint health --cluster
...
# Note: when healthy, you should see the full list of hosts
https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms
https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms

The etcdctl binary can be downloaded from the releases page of the etcd GitHub repository.

Extras#

  • The crictl tool

The initial configuration is as follows:

cat << EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: /run/containerd/containerd.sock
EOF
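crictl accepts a few more keys in the same file; a slightly fuller example (the values are illustrative, and the unix:// spelling of the endpoint also works):

```yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
```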

Usage is essentially the same as the corresponding docker commands.

> sudo crictl pods
> sudo crictl images
> sudo crictl ps

SAN (Subject Alternative Names): optional additional subject alternative names, used for the API Server serving certificate.

Author: Morse Hsiao
Published: 2019-12-21
License: CC BY-NC-SA 4.0
https://fuwari.vercel.app/posts/cncf/etcd-cluster/