
Installing a single-master Kubernetes cluster (v1.17.3) with kubeadm using the Alibaba Cloud mirror source


I. Environment Preparation

1. System Requirements

Three pay-as-you-go Alibaba Cloud hosts.

Requirements: CentOS 7.6–7.8. The checklist below follows the verification steps at /install/install-k8s.html#%E6%A3%80%E6%9F%A5-centos-hostname.

2. Prerequisites (all nodes)

- CentOS version is 7.6 or 7.7
- At least 2 CPU cores and at least 4 GB of memory
- hostname is not localhost and contains no underscores, dots, or uppercase letters
- Every node has a fixed private IP address (all cluster machines share one private network)
- The IP addresses of all nodes can reach each other directly (no NAT mapping needed), with no firewall or security-group isolation in between
- No node runs containers directly with docker run or docker-compose; workloads are run as Pods

A quick way to check these requirements on each node is shown below.
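The following is a minimal pre-flight check of my own, using standard Linux tools (the interface name eth0 is an assumption and may differ on your hosts):

# Pre-flight check, run on every node
hostname                           # must not be localhost; no underscores, dots, or uppercase letters
nproc                              # expect >= 2 CPU cores
free -h | grep -i mem              # expect >= 4G of memory
ip -4 addr show eth0 | grep inet   # confirm the fixed private IP (adjust the interface name if needed)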

# Disable the firewall (or open the required ports in the Alibaba Cloud security group instead)
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

# Disable swap
swapoff -a                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

# Pass bridged IPv4 traffic to the iptables chains
# Edit /etc/sysctl.conf
# If the keys already exist, modify them:
sed -i "s#^net.ipv4.ip_forward.*#net.ipv4.ip_forward=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-ip6tables.*#net.bridge.bridge-nf-call-ip6tables=1#g" /etc/sysctl.conf
sed -i "s#^net.bridge.bridge-nf-call-iptables.*#net.bridge.bridge-nf-call-iptables=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.disable_ipv6.*#net.ipv6.conf.all.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.default.disable_ipv6.*#net.ipv6.conf.default.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.lo.disable_ipv6.*#net.ipv6.conf.lo.disable_ipv6=1#g" /etc/sysctl.conf
sed -i "s#^net.ipv6.conf.all.forwarding.*#net.ipv6.conf.all.forwarding=1#g" /etc/sysctl.conf

# If the keys do not exist yet, append them:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf

# Apply the settings
sysctl -p
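A few spot checks to confirm the settings took effect (my own addition, not from the original write-up):

free -m | grep -i swap                       # the Swap total should be 0 after swapoff
getenforce                                   # should print Permissive (Disabled after a reboot)
sysctl net.ipv4.ip_forward                   # should print 1
sysctl net.bridge.bridge-nf-call-iptables    # should print 1 (requires the br_netfilter module to be loaded)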

II. Install the Docker Environment (all nodes)

# 1. Install Docker
## 1.1 Remove old versions
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine

## 1.2 Install base dependencies
yum install -y yum-utils \
               device-mapper-persistent-data \
               lvm2

## 1.3 Configure the Docker yum repository
sudo yum-config-manager \
  --add-repo \
  /docker-ce/linux/centos/docker-ce.repo

## 1.4 Install and start Docker
yum install -y docker-ce-19.03.8 docker-ce-cli-19.03.8 containerd.io
systemctl enable docker
systemctl start docker

## 1.5 Configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://t1gbabbr."]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
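A quick sanity check after the installation (a short sketch of my own):

docker --version                             # expect Docker version 19.03.8
systemctl is-active docker                   # expect "active"
docker info | grep -A 1 "Registry Mirrors"   # the mirror from daemon.json should be listed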

III. Install the Kubernetes Environment

1. Install kubelet, kubeadm, and kubectl (all nodes)

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=/kubernetes/yum/doc/yum-key.gpg /kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Remove old versions
yum remove -y kubelet kubeadm kubectl

# Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

# Enable kubelet on boot and start it
systemctl enable kubelet && systemctl start kubelet

## Note: if you check the kubelet status at this point, it restarts in a loop while waiting
## for cluster commands and initialization. This is normal.
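To confirm the tooling is in place, a short check of my own (not from the original text):

kubeadm version -o short           # expect v1.17.3
kubectl version --client --short   # expect v1.17.3
systemctl is-enabled kubelet       # expect "enabled"; the service keeps restarting until kubeadm init/join runs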

2. Initialize the master node (master node only)

# 1. Pre-pull the images the master node needs [optional]
# Create a .sh script with the following content and run it:
#!/bin/bash
images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)
for imageName in ${images[@]} ; do
  docker pull -/google_containers/$imageName
done

# 2. Initialize the master node
kubeadm init \
  --apiserver-advertise-address=172.26.165.243 \
  --image-repository -/google_containers \
  --kubernetes-version v1.17.3 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.168.0.0/16

# About the service network and the pod network:
# docker container --> gets an IP on the local docker bridge
# Pod              --> gets its own IP; Pods are reachable from anywhere in the cluster (a /16 gives 255*255 addresses)
# service          --> stable virtual IPs allocated from --service-cidr

# 3. Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 4. Save the join command printed by kubeadm init
kubeadm join 172.26.165.243:6443 --token afb6st.b7jz45ze7zpg65ii \
  --discovery-token-ca-cert-hash sha256:e5e5854508dafd04f0e9cf1f502b5165e25ff3017afd23cade0fe6acb5bc14ab

# 5. Deploy the network plugin
# Upload the network plugin manifest and apply it
# kubectl apply -f calico-3.13.1.yaml
kubectl apply -f /manifests/calico.yaml

# If your network is good, the step below is unnecessary; otherwise pre-pull the calico images:
# image: calico/cni:v3.14.0
# image: calico/pod2daemon-flexvol:v3.14.0
# image: calico/node:v3.14.0
# image: calico/kube-controllers:v3.14.0

# 6. Watch the pods and wait until everything is ready
watch kubectl get pod -n kube-system -o wide
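Once calico is up, the master should report Ready. A minimal check (my addition):

kubectl get nodes                   # the master STATUS should be Ready
kubectl get pods -n kube-system     # calico and coredns pods should all be Running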

3. Join worker nodes to the cluster

# 1. Join using the token command that the master printed earlier
kubeadm join 172.26.248.150:6443 --token ktnvuj.tgldo613ejg5a3x4 \
  --discovery-token-ca-cert-hash sha256:f66c496cf7eb8aa06e1a7cdb9b6be5b013c613cdcf5d1bbd88a6ea19a2b454ec

# 2. If the join command was lost or the token has expired, run this on the master:
kubeadm token create --print-join-command           # print a new join command
kubeadm token create --ttl 0 --print-join-command   # create a token that never expires
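Back on the master, a quick check that the workers registered (my addition):

kubectl get nodes -o wide    # all three nodes should appear and eventually report Ready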

4. Set up NFS as the default StorageClass

4.1 Configure the NFS server

yum install -y nfs-utils

# Create the /etc/exports file (the echo below is equivalent to editing it with vi) with this content:
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
# /nfs/data 172.26.248.0/20(rw,no_root_squash)

# Start the nfs service
# Create the shared directory
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r

# Check that the configuration is in effect
exportfs
# The output should list /nfs/data as exported

# Test: a Pod that mounts the NFS share directly
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data   # 1000G
      server: <your NFS server address>
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
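To try the test Pod above, save it to a file and apply it; the filename vol-nfs.yaml and the index.html content below are my own choices for illustration:

kubectl apply -f vol-nfs.yaml
kubectl get pod vol-nfs
# Write a file into the export on the NFS server; the nginx container should serve it:
echo "hello from nfs" > /nfs/data/index.html
kubectl exec vol-nfs -c myapp -- cat /usr/share/nginx/html/index.html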

4.2 Set up the NFS client

# On the server side, open TCP/UDP ports 111, 662, 875, 892, and 2049 in the firewall,
# otherwise remote clients cannot connect.

# Install the client tools
yum install -y nfs-utils

# Check which directories the NFS server exports
# showmount -e $(NFS server IP)
showmount -e 172.26.165.243
# Expected output:
# Export list for 172.26.165.243
# /nfs/data *

# Mount the shared directory from the NFS server onto the local path /root/nfsmount
mkdir /root/nfsmount
# mount -t nfs $(NFS server IP):/root/nfs_root /root/nfsmount
# (one way to keep a backup copy for high availability)
mount -t nfs 172.26.165.243:/nfs/data /root/nfsmount

# Write a test file
echo "hello nfs server" > /root/nfsmount/test.txt

# On the NFS server, verify that the file was written
cat /nfs/data/test.txt

4.3 Set up dynamic provisioning

4.3.1 Create the provisioner (the NFS environment was set up above)

# First create the RBAC authorization
# vi nfs-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
# vi nfs-deployment.yaml ; the nfs-client provisioner deployment
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME   # the provisioner's name
              value: storage.pri/nfs   # any name works, but later references must use the same value
            - name: NFS_SERVER
              value: 172.26.165.243
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.26.165.243
            path: /nfs/data

## In this image the volume mountPath defaults to /persistentvolumes and must not be changed,
## otherwise the provisioner fails at runtime.
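Assuming the two manifests are saved as nfs-rbac.yaml and nfs-deployment.yaml (the filenames suggested in the comments above), apply them and confirm the provisioner Pod comes up:

kubectl apply -f nfs-rbac.yaml
kubectl apply -f nfs-deployment.yaml
kubectl get pods -l app=nfs-client-provisioner   # expect 1/1 Running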

4.3.2 Create the StorageClass

# Create the StorageClass
# vi storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs   # must match the PROVISIONER_NAME configured above
reclaimPolicy: Delete
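Apply the StorageClass and confirm it exists (a check of my own, assuming the file is saved as storageclass-nfs.yaml):

kubectl apply -f storageclass-nfs.yaml
kubectl get storageclass   # storage-nfs should be listed with provisioner storage.pri/nfs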

"reclaim policy"有三种方式:Retain、Recycle、Deleted。

Retain

The PV released by its PVC, together with the data on it, is protected: the PV status changes to "Released" and it will not be bound by another PVC. The cluster administrator frees the storage manually: delete the PV (the backing storage resource, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists); manually wipe the data on the backing volume; then manually delete the backing volume, or reuse it by creating a new PV for it.

Delete

The PV released by its PVC is deleted together with its backing storage volume. For dynamically provisioned PVs the reclaim policy is inherited from their StorageClass and defaults to Delete. The cluster administrator should set the StorageClass's reclaim policy to whatever users expect; otherwise users have to edit the reclaim policy of each dynamically provisioned PV after it is created.

Recycle

The PV is kept but the data on it is wiped. This policy is deprecated.
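For illustration, the reclaim policy of an existing PV can be changed with a standard kubectl patch; <pv-name> below is a placeholder:

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv   # the RECLAIM POLICY column should now show Retain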

4.3.3 Change the default StorageClass

## Change the cluster's default StorageClass
## https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class
kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
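To verify the change (my addition):

kubectl get storageclass   # storage-nfs should now be marked "(default)"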

4.4 Verify NFS dynamic provisioning

4.4.1 Create a PVC

# vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
#  annotations:
#    volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs   # must match the StorageClass name exactly
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
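Apply the claim and watch it bind (my addition):

kubectl apply -f pvc.yaml
kubectl get pvc pvc-claim-01   # STATUS should turn Bound
kubectl get pv                 # a PV should have been provisioned automatically by the nfs provisioner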

4.4.2 Use the PVC

# vi testpod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
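Apply the test Pod and confirm the file landed on the NFS share (my addition; the exact subdirectory name is generated by the provisioner):

kubectl apply -f testpod.yaml
kubectl get pod test-pod   # should reach Completed
# On the NFS server, a per-PVC subdirectory under /nfs/data should contain the SUCCESS file:
ls /nfs/data/*/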

5. Install metrics-server

# 1. Install metrics-server first (the YAML below already has the image and configuration adjusted
# and can be applied as-is). This provides resource monitoring for Pods and Nodes (by default only
# CPU and memory usage; for more detailed metrics we integrate Prometheus later).
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
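Assuming the manifest above is saved as metrics-server.yaml (a filename of my choosing), apply it and give it a minute to collect the first metrics:

kubectl apply -f metrics-server.yaml
kubectl get pods -n kube-system | grep metrics-server   # expect 1/1 Running
kubectl top nodes                 # per-node CPU and memory usage
kubectl top pods -n kube-system   # per-pod usage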

Reference link:

/leifengyang/kubesphere/grw8se
