
Kubernetes — Deploying a Highly Available Cluster with kubeadm

Date: 2021-07-07 04:14:30



Kubernetes Architecture in Production

Client layer: external users, client applications, and so on.

Service access layer: Ingress such as Traefik or Kong API Gateway. When external clients access Services inside the Kubernetes Cluster, they rely on the service discovery, load balancing, and routing rules provided by the Ingress. The Ingress itself should also be deployed in HA.

Business application layer: the enterprise business applications built and run on top of Kubernetes.

- Image management: use a private Harbor registry.
- Log management: use the ELK Stack.
- Monitoring and alerting: use Prometheus and Grafana.
- Microservice architecture: use Istio's Service Mesh, or an API Gateway.
- DevOps: use CI/CD tools such as GitLab and Jenkins.
- Monolithic applications: use a Deployment for stateless services and a StatefulSet for stateful ones; if many interrelated services are involved, use Helm.
- Plan Namespaces well: each Namespace should be dedicated to one type of application, for example a monitor namespace that holds all Pods, Services, PVCs, Ingresses, and other resources related to monitoring, alerting, and log management.

Infrastructure layer: the infrastructure services made up of Kubernetes, Calico SDN, Ceph SDS or NFS, and similar systems.

High-Availability Cluster Deployment Topology

Official documentation: https://kubernetes.io/zh/docs/setup/production-environment/

- Infrastructure: OpenStack VM cluster — 3 Masters, 2 Nodes, 2 Load Balancers
- Compute resources: x86-64 processor, 2 CPU, 2 GB RAM, 20 GB free disk space
- Operating system: CentOS 7.x+
- Version: Kubernetes 1.18.14
- Container Runtime: Docker

1. Network Proxy Configuration

Because the required packages and images have to be fetched through a proxy, the HTTP/S proxy and no_proxy settings need to be configured carefully; otherwise the software cannot be downloaded or network connectivity errors will appear.

export https_proxy=http://{proxy_ip}:7890
export http_proxy=http://{proxy_ip}:7890
export all_proxy=socks5://{proxy_ip}:7890
export no_proxy=localhost,127.0.0.1,{apiserver_endpoint_ip},{k8s_mgmt_network_ip_pool},{pod_network_ip_pool},{service_network_ip_pool}

2. Load Balancer Preparation

The HA Load Balancer is provided by OpenStack Octavia LBaaS; alternatively, keepalived and haproxy can be configured by hand (https://github.com/kubernetes/kubeadm/blob/master/docs/ha-considerations.md#options-for-software-load-balancing) — a minimal haproxy sketch is shown after the settings below.

VIP: allocated from kube-mgmt-subnet

Listener: TCP :6443 (the port kube-apiserver listens on)

Members: the 3 k8s-master nodes

Monitor: likewise a TCP :6443 health check
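For reference, if you go the manual route, a minimal haproxy configuration on the two dedicated load-balancer VMs might look like the sketch below (an assumption-laden example, not taken from the original deployment; keepalived would still be needed to float the VIP 192.168.0.100 between the two LBs):

# /etc/haproxy/haproxy.cfg — TCP passthrough to the three kube-apiservers
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k8s-master-1 192.168.0.148:6443 check
    server k8s-master-2 192.168.0.112:6443 check
    server k8s-master-3 192.168.0.193:6443 check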

Note: after creating the Load Balancer, first verify that the TCP reverse proxy works. Since the apiserver is not running yet, a connection-refused error is expected. Remember to test again after the first control-plane node has been initialized.

# nc -v LOAD_BALANCER_IP PORT
nc -v 192.168.0.100 6443

3. Kubernetes Cluster Preparation

Note: perform the following steps on all nodes.

Configure the network proxy as above, then add hostname resolution for every node.

# vi /etc/hosts
192.168.0.100 kube-apiserver-endpoint
192.168.0.148 k8s-master-1
192.168.0.112 k8s-master-2
192.168.0.193 k8s-master-3
192.168.0.208 k8s-node-1
192.168.0.174 k8s-node-2

- Enable passwordless SSH between all nodes.
- Disable the swap partition so that the kubelet works correctly (a sketch of this step follows).
- Make sure iptables is not using the nftables backend: it is incompatible with the current kubeadm packages, leads to duplicated firewall rules, and breaks kube-proxy.
- Make sure the nodes can reach each other over the network.
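A minimal sketch of the swap step and the connectivity check (assuming swap is configured in /etc/fstab; adjust to your environment):

# Turn swap off now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Verify connectivity to the other nodes, e.g.
ping -c 1 k8s-master-2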

Disable SELinux so that containers are allowed to access the host filesystem.

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

On RHEL/CentOS 7, traffic handled by kube-proxy must pass through iptables for local routing, so make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.

# Make sure the br_netfilter module is loaded
modprobe br_netfilter
lsmod | grep br_netfilter

# Configure sysctl so that bridged IPv4 traffic is passed to the iptables chains
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Install the basic dependencies:

yum install ebtables ethtool ipvsadm -y

Install a Container Runtime

Note: when Linux uses systemd, systemd creates its own cgroup hierarchy. The Container Runtime, the kubelet, and systemd must all use the same cgroup driver, otherwise unpredictable problems can occur. Therefore configure both the Container Runtime and the kubelet to use systemd as the cgroup driver, which makes the system more stable.

For Docker, this is done by setting the native.cgroupdriver=systemd option.

Installation

# Install prerequisite packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker repository
sudo yum-config-manager --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE
sudo yum update -y && sudo yum install -y \
    containerd.io-1.2.13 \
    docker-ce-19.03.11 \
    docker-ce-cli-19.03.11

Configuration

# Create the /etc/docker directory
sudo mkdir /etc/docker

# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

Restart

# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
sudo systemctl status docker

Install kubeadm, kubelet, and kubectl

Note: kubeadm is the deployment tool for a Kubernetes Cluster, but it cannot install or manage kubelet or kubectl, so we need to install them manually and make sure all three are at the same version.

Update the Kubernetes YUM repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Installation

# Check the available versions
$ yum list kubelet kubeadm kubectl --showduplicates | grep 1.18.14 | sort -r
kubelet.x86_64  1.18.14-0  kubernetes
kubectl.x86_64  1.18.14-0  kubernetes
kubeadm.x86_64  1.18.14-0  kubernetes

# Install the specified version
yum install -y kubelet-1.18.14 kubeadm-1.18.14 kubectl-1.18.14 --disableexcludes=kubernetes

# Confirm the versions match
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"-12-18T12:08:45Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"-12-18T12:11:25Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ kubelet --version
Kubernetes v1.18.14

Configuration: as mentioned above, the Container Runtime and the kubelet must both be configured to use systemd as the cgroup driver to keep the system stable.

# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

Start

$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl enable --now kubelet
$ systemctl status kubelet

Note: kubelet.service will restart every few seconds, crash-looping while it waits for instructions from kubeadm.
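If you want to see this crash loop for yourself, the kubelet's status and journal show it (the exact messages will vary):

systemctl status kubelet
journalctl -xeu kubelet | tail -n 20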

4. Initialize the First Master Control-Plane Node

The kubeadm init workflow

The kubeadm init command bootstraps a Kubernetes Master by performing the following steps:

Preflight checks: kubeadm exits when an ERROR is found, unless the problem is fixed or --ignore-preflight-errors=<list-of-errors> is explicitly passed. WARNINGs may also be printed.

Generate a self-signed CA to establish an identity for every system component: the CA directory can be set explicitly with --cert-dir (default /etc/kubernetes/pki), and the CA certificates, keys, and related files are placed there. The API Server certificate will get additional SAN entries for any --apiserver-cert-extra-sans values, lowercased where necessary.

Write kubeconfig files into /etc/kubernetes/: the kubelet, the Controller Manager, and the Scheduler use them to connect to the API Server, each with its own identity. A standalone kubeconfig named admin.conf is also generated for administrative use.

Generate static Pod manifests for the API Server, Controller Manager, and Scheduler: they are stored under /etc/kubernetes/manifests, a directory the kubelet polls and uses to create the system-component Pods when Kubernetes starts. If no external etcd service is provided, an additional static Pod manifest is generated for etcd as well.

The kubeadm init workflow only continues once the Master's static Pods are all up and running.

Apply labels and taints to the Master so that production workloads are not scheduled onto it.

Generate a token: other Nodes will later use this token to register themselves with the Master. A token string can also be provided explicitly with --token.

To allow Nodes to join the Cluster following the mechanisms described in the Bootstrap Tokens and TLS bootstrapping documents, kubeadm performs all the necessary configuration:

Create a ConfigMap that provides the information required to add a Node to the Cluster, and set up the related RBAC access rules for it; allow bootstrap tokens to access the CSR signing API; configure automatic approval of new CSR requests.

Install a DNS server (CoreDNS) and kube-proxy through the API Server. Note that although the DNS server is deployed at this point, it will not be scheduled until the CNI is installed.

Run the initialization

Note 1: since we are deploying a highly available cluster, the HA endpoint of the API Server must be specified with --control-plane-endpoint.

Note 2: kubeadm pulls the required images from k8s.gcr.io by default, so a mirror such as Alibaba Cloud's registry can be specified with --image-repository.

Note 3: if --upload-certs is not specified, you have to copy the CA certificates from the primary control-plane node to every additional control-plane node by hand when scaling out the Masters, so using this option is recommended.

Initialize

kubeadm init \
  --control-plane-endpoint "192.168.0.100" \
  --kubernetes-version "1.18.14" \
  --pod-network-cidr "10.0.0.0/8" \
  --service-cidr "172.16.0.0/16" \
  --token "abcdef.0123456789abcdef" \
  --token-ttl "0" \
  --image-repository registry.aliyuncs.com/google_containers \
  --upload-certs
W1221 00:02:43.240309 10942 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.14
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.0.148 192.168.0.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.0.148 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 00:02:47.773223 10942 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 00:02:47.774303 10942 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.117265 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
    --control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1

Check the Pods: verify that all the Master components are present.

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-fh9vb               0/1     Pending   0          23m
coredns-7ff77c879f-qmk7z               0/1     Pending   0          23m
etcd-k8s-master-1                      1/1     Running   0          24m
kube-apiserver-k8s-master-1            1/1     Running   0          24m
kube-controller-manager-k8s-master-1   1/1     Running   0          24m
kube-proxy-7hx55                       1/1     Running   0          23m
kube-scheduler-k8s-master-1            1/1     Running   0          24m

Check the images

$ docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.14   8e6bca1d4e68   2 days ago
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.14   f17e261f4c8a   2 days ago
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.14   b734a959c6fb   2 days ago
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.14   95660d582e82   2 days ago
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   10 months ago
registry.aliyuncs.com/google_containers/coredns                   1.6.7      67da37a9a360   10 months ago
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0    303ce5db0e90   14 months ago   288MB

Check the containers

$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS   NAMES
f9a068b890d7   8e6bca1d4e68   "/usr/local/bin/kube…"   2 minutes ago   Up 2 minutes           k8s_kube-proxy_kube-proxy-7hx55_kube-system_aacb0da3-16ec-414c-b138-856e2b470bb9_0
3b6adfa0b1a5   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   2 minutes ago   Up 2 minutes   k8s_POD_kube-proxy-7hx55_kube-system_aacb0da3-16ec-414c-b138-856e2b470bb9_0
dcc47de63e50   f17e261f4c8a   "kube-apiserver --ad…"   3 minutes ago   Up 3 minutes           k8s_kube-apiserver_kube-apiserver-k8s-master-1_kube-system_c693bd1fadf036d8e2e4df0afd49f062_0
53afb7fbe8c0   b734a959c6fb   "kube-controller-man…"   3 minutes ago   Up 3 minutes           k8s_kube-controller-manager_kube-controller-manager-k8s-master-1_kube-system_f75424d466cd7197fb8095b0f59ea8d9_0
a4101a231c1b   303ce5db0e90   "etcd --advertise-cl…"   3 minutes ago   Up 3 minutes           k8s_etcd_etcd-k8s-master-1_kube-system_f85e02734d6479f3bb3e468eea87fd3a_0
197f510ff6c5   95660d582e82   "kube-scheduler --au…"   3 minutes ago   Up 3 minutes           k8s_kube-scheduler_kube-scheduler-k8s-master-1_kube-system_0213a889f9350758ac9847629f75db19_0
3a4590590093   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   3 minutes ago   Up 3 minutes   k8s_POD_kube-controller-manager-k8s-master-1_kube-system_f75424d466cd7197fb8095b0f59ea8d9_0
4bbdc99a7a68   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   3 minutes ago   Up 3 minutes   k8s_POD_kube-apiserver-k8s-master-1_kube-system_c693bd1fadf036d8e2e4df0afd49f062_0
19488127c269   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   3 minutes ago   Up 3 minutes   k8s_POD_etcd-k8s-master-1_kube-system_f85e02734d6479f3bb3e468eea87fd3a_0
e67d2f7a27b0   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"   3 minutes ago   Up 3 minutes   k8s_POD_kube-scheduler-k8s-master-1_kube-system_0213a889f9350758ac9847629f75db19_0

Test whether the API Server LB is working

$ nc -v 192.168.0.100 6443
Connection to 192.168.0.100 port 6443 [tcp/sun-sr-https] succeeded!

Note: the token above expires after 24 hours. If you want to add more nodes after that, generate a new token:

# Create a new token
kubeadm token create
# output: 5didvk.d09sbcov8ph2amjw

# Compute a new --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
# output: 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
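As a shortcut, kubeadm can also print a complete, ready-to-run join command with a freshly created token and the CA cert hash:

# Prints "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:..."
kubeadm token create --print-join-command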

(Optional) Clean up or re-run the initialization

To run kubeadm init again, you must first tear down the cluster. A best-effort cleanup can be triggered on the Master:

kubeadm reset

The reset process does not clear iptables rules or IPVS tables. If you want to reset iptables or IPVS, do it manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm -C

Adjust the parameters as needed and run the initialization again:

kubeadm init <args>

Or delete the node entirely:

kubectl delete node <node name>

5. Add Redundant Master Control-Plane Nodes

Once the first Master has been initialized, we can go on to add the redundant Master nodes.

Add k8s-master-2

kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
    --control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [192.168.0.112 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-2 localhost] and IPs [192.168.0.112 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.0.112 192.168.0.100]
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W1221 00:30:18.978564 27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 00:30:18.986650 27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 00:30:18.987613 27668 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"-12-21T00:30:34.018+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.0.112:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Add k8s-master-3

kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1 \
    --control-plane --certificate-key 463868e92236803eb8fdeaa3d7b0ada67cf0f882c45974682c6ac2f20be1d544

Check the number of Master nodes

# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master-1   NotReady   master   35m     v1.18.14
k8s-master-2   NotReady   master   8m14s   v1.18.14
k8s-master-3   NotReady   master   2m30s   v1.18.14

6. Add Worker Nodes

With the highly available Master control plane in place, any number of worker Nodes can now be registered.

Add a Node

kubeadm join 192.168.0.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88dc9773b5dfc0cde6082314a1a4a9bbdb6ddfd3f1f84a7113581a3b07e839e1
W1221 00:39:36.256784 29495 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the Nodes:

$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master-1   NotReady   master   37m     v1.18.14
k8s-master-2   NotReady   master   10m     v1.18.14
k8s-master-3   NotReady   master   4m24s   v1.18.14
k8s-node-1     NotReady   <none>   51s     v1.18.14
k8s-node-2     NotReady   <none>   48s     v1.18.14

7. Install a CNI Network Plugin

We choose the Calico SDN solution. Official documentation: https://docs.projectcalico.org/about/about-calico

Notes

- The Pod network must not overlap with any host network, which is why --pod-network-cidr was specified explicitly when running kubeadm init.
- Make sure the CNI plugin supports RBAC (role-based access control).
- Make sure the CNI supports IPv6 or dual-stack IPv4/IPv6 if you need it.
- In an OpenStack environment, the "port security" feature must be disabled, otherwise the Calico IPIP tunnels cannot be established (a CLI sketch follows this list).
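A sketch of turning off port security with the OpenStack CLI (the port ID below is a placeholder; security groups usually have to be detached from the port at the same time):

# Repeat for the Neutron port of every cluster VM
openstack port set --no-security-group --disable-port-security <node-port-id>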

Installation

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Check the Calico Pods

$ watch kubectl get pod --all-namespaces
Every 2.0s: kubectl get pod --all-namespaces                          Mon Dec 21 13:12:30

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7dbc97f587-nqrxv   1/1     Running   0          9m34s
kube-system   calico-node-47xmr                          1/1     Running   0          9m34s
kube-system   calico-node-8zwbg                          1/1     Running   0          9m34s
kube-system   calico-node-dj4qt                          1/1     Running   0          9m34s
kube-system   calico-node-glqqj                          1/1     Running   0          9m34s
kube-system   calico-node-jb4t4                          1/1     Running   0          9m34s
kube-system   coredns-7ff77c879f-fh9vb                   1/1     Running   0          13h
kube-system   coredns-7ff77c879f-qmk7z                   1/1     Running   0          13h
kube-system   etcd-k8s-master-1                          1/1     Running   0          13h
kube-system   etcd-k8s-master-2                          1/1     Running   0          12h
kube-system   etcd-k8s-master-3                          1/1     Running   0          12h
kube-system   kube-apiserver-k8s-master-1                1/1     Running   0          13h
kube-system   kube-apiserver-k8s-master-2                1/1     Running   0          12h
kube-system   kube-apiserver-k8s-master-3                1/1     Running   0          12h
kube-system   kube-controller-manager-k8s-master-1       1/1     Running   1          13h
kube-system   kube-controller-manager-k8s-master-2       1/1     Running   0          12h
kube-system   kube-controller-manager-k8s-master-3       1/1     Running   0          12h
kube-system   kube-proxy-7hx55                           1/1     Running   0          13h
kube-system   kube-proxy-8dmc4                           1/1     Running   0          12h
kube-system   kube-proxy-9clqs                           1/1     Running   0          12h
kube-system   kube-proxy-cq5tq                           1/1     Running   0          12h
kube-system   kube-proxy-pm79q                           1/1     Running   0          12h
kube-system   kube-scheduler-k8s-master-1                1/1     Running   1          13h
kube-system   kube-scheduler-k8s-master-2                1/1     Running   0          12h
kube-system   kube-scheduler-k8s-master-3                1/1     Running   0          12h

Check the Cluster node status: with the CNI installed, the nodes should now be Ready.

$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master-1   Ready    master   13h   v1.18.14
k8s-master-2   Ready    master   12h   v1.18.14
k8s-master-3   Ready    master   12h   v1.18.14
k8s-node-1     Ready    <none>   12h   v1.18.14
k8s-node-2     Ready    <none>   12h   v1.18.14

8. Install the Metrics Server

Kubernetes Metrics Server

Kubernetes Metrics Server is the aggregator of the Cluster's core monitoring data; kubeadm does not deploy it by default.

The Metrics Server is used by the Dashboard and other components. It is an extension APIServer and depends on the API Aggregator, so the API Aggregator must be enabled in kube-apiserver before installing the Metrics Server.

- The Metrics API only exposes current metric values and does not store history.
- The Metrics API URI is /apis/metrics.k8s.io/ and is maintained under k8s.io/metrics.
- metrics-server must be deployed before this API can be used; it collects its data by calling the kubelet Summary API. (A raw query example is shown right below.)
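Once metrics-server is running (installed later in this section), the aggregated API can be queried directly as a quick sanity check:

# Raw queries against the aggregated Metrics API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"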

There are two prerequisites for using the Metrics Server:

The API Server must have Aggregator Routing enabled, otherwise it does not recognize the requests:

Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

The API Server must be able to reach the Metrics Server Pod IP, otherwise it cannot talk to the Metrics Server:

E1223 07:23:04.330206 1 available_controller.go:420] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.171.248.214:4443/apis/metrics.k8s.io/v1beta1: Get https://10.171.248.214:4443/apis/metrics.k8s.io/v1beta1: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Enable the API Aggregator

API Aggregation makes it possible to extend the Kubernetes API without modifying the Kubernetes core code: third-party services are registered into the Kubernetes API so that they can be accessed through it, for example the Metrics Server API.

Note: another way to extend the Kubernetes API is a CRD (Custom Resource Definition).
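For comparison, a minimal CRD sketch (the group, kind, and fields below are invented purely for illustration):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.demo.example.com        # hypothetical resource, for illustration only
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string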

Check whether the API Server has Aggregator Routing enabled: look for the --enable-aggregator-routing=true option on the API Server process.

$ ps -ef | grep apiserver
root     23896 29500  0 12:40 pts/0    00:00:00 grep --color=auto apiserver
root     28613 28551  1 12月21 ?       01:05:29 kube-apiserver --advertise-address=192.168.0.112 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=172.16.0.0/16 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

Edit kube-apiserver.yaml on every API Server to enable Aggregator Routing. Once the manifest is modified, the API Server restarts automatically and the change takes effect.

$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    ...
    - --enable-aggregator-routing=true

Install the Metrics Server

Check whether a Metrics Server is already installed in the Cluster

$ kubectl top pods
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

Deploy the Metrics Server

# Download the YAML file
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml

# Edit the metrics-server startup arguments:
#   --kubelet-insecure-tls skips TLS verification (avoids x509 errors; test environments only).
#   --kubelet-preferred-address-types=InternalIP uses the Node IP for communication.
- args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-preferred-address-types=InternalIP
  - --kubelet-use-node-status-port
  - --kubelet-insecure-tls

# Deploy
$ kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Note: if ErrImagePull appears, it means the k8s.gcr.io/metrics-server/metrics-server:v0.4.1 image could not be downloaded:

$ docker pull k8s.gcr.io/metrics-server/metrics-server:v0.4.1
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

In that case, pull the image manually on every node:

$ docker pull bitnami/metrics-server:0.4.1
$ docker tag bitnami/metrics-server:0.4.1 k8s.gcr.io/metrics-server/metrics-server:v0.4.1
$ docker images
REPOSITORY                                 TAG      IMAGE ID       CREATED       SIZE
bitnami/metrics-server                     0.4.1    4fb6df85a88d   6 hours ago   171MB
k8s.gcr.io/metrics-server/metrics-server   v0.4.1   4fb6df85a88d   6 hours ago   171MB

Then run the Metrics Server deployment command again.

Check the Metrics Server Service

$ kubectl get svc --all-namespaces | grep metrics-server
kube-system   metrics-server   ClusterIP   172.16.128.176   <none>   443/TCP   5h55m

Check whether the API Server can reach the Metrics Server

$ kubectl describe svc metrics-server -n kube-system
Name:              metrics-server
Namespace:         kube-system
Labels:            k8s-app=metrics-server
Annotations:
Selector:          k8s-app=metrics-server
Type:              ClusterIP
IP:                172.16.128.176
Port:              https  443/TCP
TargetPort:        https/TCP
Endpoints:         10.171.248.214:4443
Session Affinity:  None
Events:            <none>

# Ping from a Master Node.
$ ping 10.171.248.214
64 bytes from 10.171.248.214: icmp_seq=1 ttl=63 time=0.282 ms

Check the Metrics Server

$ kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master-1   174m         8%     1156Mi          66%
k8s-master-2   123m         6%     1134Mi          65%
k8s-master-3   104m         5%     1075Mi          61%
k8s-node-1     78m          3%     853Mi           49%
k8s-node-2     78m          3%     824Mi           47%

9. Install the Dashboard GUI

The Dashboard is not deployed by default and has to be installed manually. Before installing the Dashboard, make sure the Metrics Server is installed; otherwise errors will complain that no Metrics Server is available in the Cluster.

$ kubectl logs -f -n kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-qppkd
192.168.0.208 - - [21/Dec/:07:54:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
{"level":"error","msg":"Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)","time":"-12-21T07:54:09Z"}

$ kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-7b544877d5-p6g8t
/12/21 07:54:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.

Also, the Dashboard Service deployed by the official YAML is not of type NodePort, so we need to change that by hand.

Edit

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
$ vi recommended.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
---

Deploy: for safer access, the Dashboard is deployed with a minimal RBAC configuration by default.

$ kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Check the Dashboard Pods

$ kubectl get pods --all-namespaces | grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-ctn6h   1/1   Running   0   79s
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-sghsf        1/1   Running   0   79s

Check the Dashboard Services: verify that they match what the YAML describes.

$ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.16.165.243   <none>        8000/TCP        2m32s
kubernetes-dashboard        NodePort    172.16.219.86    <none>        443:30000/TCP   2m33s

10. Access the Dashboard UI

Because the kubernetes-dashboard Service is of type NodePort, it can be reached at any node IP plus the NodePort number.

Note: a NodePort by itself is not a VIP-based HA LB, so the Load Balancer from the HA deployment can be used to proxy it.

The Dashboard uses a self-signed HTTPS certificate by default, which browsers do not trust; just force the browser past the warning.

Logging in to the Dashboard supports both Kubeconfig and Token authentication. Since the Kubeconfig method also relies on a token field, generating a token is a mandatory step.

Create a Dashboard admin ServiceAccount

$ cat dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

$ kubectl create -f ./dashboard-admin.yaml
serviceaccount/dashboard-admin created

# CLI: kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

Grant permissions to the Dashboard admin

$ cat dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

$ kubectl create -f ./dashboard-admin-bind-cluster-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created

# CLI: kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

Generate a Token

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-bc2dh
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 3c26e908-49b2-4de8-8f08-699dba736a56

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkV3a2JIWGhCdTFTSEFpd2hxam1WM29RTzQwcXdlT2dLYlRTRFU2TGotNHMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tYmMyZGgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2MyNmU5MDgtNDliMi00ZGU4LThmMDgtNjk5ZGJhNzM2YTU2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.v24ad9kFp4_kEUo4pPVRxqHxe6rXCv4kZryR3QoWrCKTdHM2FmJPrU1q_w5wGyXVShQQGyBlQJpdAAX5RAIy4gVvLLbc7W9qE74jHVijb4pBp9j8r-lsYpGvpiXadVCqXYCXHBxQSeYKXNM6hwDGKFGxQZ2LDA8K_j590fVeozSxM1RBJxetM4KfF0KomDurOhjITu0rortufeKhkOvRcoB0EikPkwqbU0Q5Ip7m_YIYHZsEqo292GieTyfZIM0KRLGGn4GX53qTbgk1bhQYsequUjs3sKw5Ng8vSOq8NPX0vB88Mjl0fzPWtKvnvpWokg1JL_fW4-qKA7HbUcenXQ

Access: open https://<any-node-IP>:30000 (the NodePort configured above) in a browser and log in with the token generated in the previous step.

11. Persistent Storage via NFS

Install the NFS Server

$ yum install -y nfs-utils
$ yum install -y rpcbind
$ mkdir /public
$ vi /etc/exports
/public 192.168.186.*
$ systemctl start rpcbind && systemctl enable rpcbind && systemctl status rpcbind
$ systemctl start nfs-server && systemctl enable nfs-server && systemctl status nfs-server
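Note that an export line without options typically defaults to read-only; for the provisioner below you usually want something like /public 192.168.186.*(rw,sync,no_root_squash). The export can then be re-read and verified with the standard nfs-utils commands:

# Re-read /etc/exports and list what is actually exported
exportfs -rav
showmount -e localhost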

Mount on the NFS Client

$ mkdir /mnt/public
$ vi /etc/fstab
192.168.186.198:/public /mnt/public nfs defaults 0 0
$ mount -a

Deploy nfs-client-provisioner on Kubernetes

Download nfs-client

git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git

Create the ServiceAccount

$ NS=$(kubectl config get-contexts | grep -e "^\*" | awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ cd nfs-subdir-external-provisioner/deploy
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./rbac.yaml ./deployment.yaml
$ kubectl create -f ./rbac.yaml

Deploy the NFS Provisioner

$ vi deployment.yaml
...
spec:
  serviceAccountName: nfs-client-provisioner
  containers:
  - name: nfs-client-provisioner
    image: quay.io/external_storage/nfs-client-provisioner:latest
    volumeMounts:
    - name: nfs-client-root
      mountPath: /persistentvolumes
    env:
    - name: PROVISIONER_NAME
      value: fuseim.pri/ifs
    - name: NFS_SERVER
      value: 192.168.186.198    # NFS Server IP
    - name: NFS_PATH
      value: /public            # shared directory path
  volumes:
  - name: nfs-client-root
    nfs:
      server: 192.168.186.198   # NFS Server IP
      path: /public             # shared directory path

# Create the NFS Provisioner
$ kubectl apply -f deployment.yaml

Create the NFS StorageClass

kubectl apply -f class.yaml
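class.yaml ships in the same deploy directory of the repository; a minimal sketch of what it typically contains (the provisioner value must match the PROVISIONER_NAME environment variable set in deployment.yaml above; the class name here is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs     # must match PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"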
