
Exploring the VMware Tanzu Kubernetes Distribution



Most of the Kubernetes cluster deployment tools I had used before rely on SSH connections to the nodes being deployed. That means the cluster nodes have to be prepared in advance, and you have to take care of things like network connectivity between them and clock synchronization yourself.

Tools like kubespray or kubekey do not handle the creation of the underlying IaaS resources; you have to provision those beforehand. But in enterprise private clouds built on virtualization platforms such as VMware vSphere[1] or OpenStack[2], K8s cluster deployment and IaaS resource creation can be unified into one step, which avoids the tedious manual creation and configuration of virtual machines.

There are already fairly mature solutions that combine IaaS resource creation with K8s cluster deployment, for example tanzu[4], which is built on the cluster-api[3] project. In this post I will take VMware Tanzu Community Edition[5] as an example and, on a single physical server, go all the way from installing the ESXi OS to deploying a Tanzu workload cluster, to get a feel for what makes this deployment approach different.

Deployment workflow

Download the dependency files

Install the govc dependency

Install the ESXi OS

Install vCenter

Configure vCenter

Create the bootstrap VM

Initialize the bootstrap node

Deploy the Tanzu management cluster

Deploy the Tanzu workload cluster

Three things that might scare you off 😂

You need a VMware account[6] to download some ISO images and VM templates;

You need a physical server; the recommended minimum spec is 8C 32G, with at least 256GB of storage;

You need a DHCP server: by default the virtual machines obtain their IPs via DHCP, so the VM Network that ESXi sits on must contain a DHCP server to hand out IPs to the VMs; a quick way to check for one is sketched below.
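If you are not sure whether a DHCP server is answering on that network, nmap's DHCP discovery script is one way to probe for it (a sketch, assuming nmap is installed on a machine in the same L2 segment; this is not part of the original setup):

# broadcasts a DHCPDISCOVER; a reply containing DHCPOFFER and a Server Identifier means a DHCP server answered
$ sudo nmap --script broadcast-dhcp-discover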

Download the dependency files

The dependency files needed for the whole deployment are listed below; download them to your local machine first so they are ready for later steps.

root@devbox:/root/tanzu# tree -sh
.
├── [ 12M]  govc_Linux_x86_64.tar.gz
├── [895M]  photon-3-kube-v1.21.2+vmware.1-tkg.2-12816990095845873721.ova
├── [225M]  photon-ova-4.0-c001795b80.ova
├── [170M]  tce-linux-amd64-v0.9.1.tar.gz
├── [9.0G]  VMware-VCSA-all-7.0.3-18778458.iso
└── [390M]  VMware-VMvisor-Installer-7.0U2a-17867351.x86_64.iso

Note that ESXi and vCenter should preferably be version 7.0 or above; I have only tested on ESXi 7.0.2 and vCenter 7.0.3, and other versions may differ slightly. Also, do not use the latest 7.0.3 for ESXi: it has fairly serious bugs, and VMware itself advises against running it in production, see vSphere 7.0 Update 3 Critical Known Issues - Workarounds & Fix (86287)[13].

Install govc and its companions

Install govc and jq on your local machine; both tools will be used later when configuring vCenter.

macOS

$ brew install govc jq

Debian/Ubuntu

$ tar -xf govc_Linux_x86_64.tar.gz -C /usr/local/bin
$ apt install jq -y

On other Linux distributions you can download the corresponding release files from the govc and jq GitHub pages, as sketched below.
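For instance, something along these lines should work on most distributions (the version numbers are just the ones used in this post; treat this as a sketch and check the release pages for current versions):

$ curl -L https://github.com/vmware/govmomi/releases/download/v0.27.3/govc_Linux_x86_64.tar.gz | tar -xz -C /usr/local/bin govc
$ curl -Lo /usr/local/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 && chmod +x /usr/local/bin/jq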

Install the ESXi OS

There are plenty of tutorials online for installing the ESXi OS and not much here worth explaining, so just follow one of the blogs out there or the official guide, VMware ESXi Installation and Setup[14]. One thing deserves attention: during installation ESXi creates a VMFSL partition that takes up a large amount of storage, so the datastore eventually created on the installation disk ends up much smaller than expected, and this VMFSL partition is hard to resize after the fact. If disk space is tight, consider how to get rid of this partition before installing ESXi (one possibility is sketched below); or do what I did and install the ESXi OS onto a 16G USB DOM, though I would not recommend that for production 😂 (personally I think installing on a USB stick is mostly fine: ESXi is loaded into memory after boot and does not do heavy reads and writes on the stick; you are only in trouble if someone in the machine room accidentally pulls the stick out).
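One option for shrinking that system partition, which I have not verified on every build: ESXi 7.0 Update 1c and later installers are said to accept a systemMediaSize boot option that caps the size of the ESX-OSData (VMFSL) volume. Press Shift+O at the installer boot prompt and append the option to the boot command, roughly like this; treat it as a hint to research rather than a tested recipe:

> runweasel systemMediaSize=min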

Set the govc environment variables

# IP of the ESXi node
export ESXI_IP="192.168.18.47"
# ESXi login username, root by default after a fresh install
export GOVC_USERNAME="root"
# the root password set during the ESXi installation
export GOVC_PASSWORD="admin@"
# allow insecure SSL connections
export GOVC_INSECURE=true
export GOVC_URL="https://${ESXI_IP}"
export GOVC_DATASTORE=datastore1

Test that govc can connect to the ESXi host:

$ govc host.info
Name:              localhost.local
Path:              /ha-datacenter/host/localhost/localhost
Manufacturer:      Dell
Logical CPUs:      20 CPUs @ 2394MHz
Processor type:    Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
CPU usage:         579 MHz (1.2%)
Memory:            261765MB
Memory usage:      16457MB (6.3%)
Boot time:         2022-02-02 11:53:59.630124 +0000 UTC
State:             connected

Install vCenter

Installing vCenter by following VMware's official document, About vCenter Server Installation and Setup[15], is really tedious. All the official ISO installer actually does is run an installer web service, let you fill in the vCenter VM parameters in a browser, and then inject the configuration you entered into the ova properties when deploying the vcsa VM.

Once you know how the installation works, you can prepare the vCenter parameters yourself and deploy the ova with govc; that is much simpler and faster than the UI approach. In the end all it takes is one config file and one command.

First mount the vCenter ISO and find the vcsa ova file, which is the template of the vCenter VM:

$ mount -o loop VMware-VCSA-all-7.0.3-18778458.iso /mnt
$ ls /mnt/vcsa/VMware-vCenter-Server-Appliance-7.0.3.00100-18778458_OVF10.ova
/mnt/vcsa/VMware-vCenter-Server-Appliance-7.0.3.00100-18778458_OVF10.ova
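Before writing the script you can also peek at which deployment options and properties this ova accepts; govc can dump the import spec (a quick sketch, assuming the ISO is still mounted):

# lists the Deployment sizes, NetworkMapping names and property keys used in the script below
$ govc import.spec /mnt/vcsa/VMware-vCenter-Server-Appliance-7.0.3.00100-18778458_OVF10.ova | jq .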

Adjust the configuration in the install script below to match your own environment:

#!/usr/bin/env bash
VCSA_OVA_FILE=$1
set -o errexit
set -o nounset
set -o pipefail
# IP address of the ESXi host
export ESXI_IP="192.168.18.47"
# ESXi username
export GOVC_USERNAME="root"
# ESXi password
export GOVC_PASSWORD="admin@"
# name of the datastore the vCenter VM is installed on
export GOVC_DATASTORE=datastore1
export GOVC_INSECURE=true
export GOVC_URL="https://${ESXI_IP}"
# vCenter login password
VM_PASSWORD="admin@"
# vCenter IP address
VM_IP=192.168.20.92
# name of the vCenter VM
VM_NAME=vCenter-Server-Appliance
# network used by the vCenter VM
VM_NETWORK="VM Network"
# DNS server
VM_DNS="223.6.6.6"
# NTP server
VM_NTP="0."

deploy_vcsa_vm() {
  config=$(govc host.info -k -json | jq -r '.HostSystems[].Config')
  gateway=$(jq -r '.Network.IpRouteConfig.DefaultGateway' <<< "$config")
  route=$(jq -r '.Network.RouteTableInfo.IpRoute[] | select(.DeviceName == "vmk0") | select(.Gateway == "0.0.0.0")' <<< "$config")
  prefix=$(jq -r '.PrefixLength' <<< "$route")

  opts=(
    cis.vmdir.password=${VM_PASSWORD}
    cis.appliance.root.passwd=${VM_PASSWORD}
    cis.appliance.root.shell=/bin/bash
    cis.deployment.node.type=embedded
    cis.vmdir.domain-name=vsphere.local
    cis.vmdir.site-name=Default-First-Site  # hypothetical value: the SSO default site name
    cis.appliance.net.addr.family=ipv4
    cis.appliance.ssh.enabled=True
    cis.ceip_enabled=False
    cis.deployment.autoconfig=True
    cis.appliance.net.addr=${VM_IP}
    cis.appliance.net.prefix=${prefix}
    cis.appliance.net.dns.servers=${VM_DNS}
    cis.appliance.net.gateway=$gateway
    cis.appliance.ntp.servers="${VM_NTP}"
    cis.appliance.net.mode=static
  )

  props=$(printf -- "guestinfo.%s\n" "${opts[@]}" | jq --slurp -R 'split("\n") | map(select(. != "")) | map(split("=")) | map({"Key": .[0], "Value": .[1]})')

  cat <<EOF | govc import.${VCSA_OVA_FILE##*.} -options - "${VCSA_OVA_FILE}"
{
  "Name": "${VM_NAME}",
  "Deployment": "tiny",
  "DiskProvisioning": "thin",
  "IPProtocol": "IPv4",
  "Annotation": "VMware vCenter Server Appliance",
  "PowerOn": false,
  "WaitForIP": false,
  "InjectOvfEnv": true,
  "NetworkMapping": [{"Name": "Network 1", "Network": "${VM_NETWORK}"}],
  "PropertyMapping": ${props}
}
EOF
}

deploy_vcsa_vm
govc vm.change -vm "${VM_NAME}" -g vmwarePhoton64Guest
govc vm.power -on "${VM_NAME}"
govc vm.ip -a "${VM_NAME}"

Install vCenter with the script, passing the absolute path of the OVA as the first argument. When it finishes, the ova has been imported automatically and the VM is powered on;

# run the script, passing the absolute path of the vcsa ova inside the vCenter ISO as the first argument
$ bash install-vcsa.sh /mnt/vcsa/VMware-vCenter-Server-Appliance-7.0.3.00100-18778458_OVF10.ova
[03-02-22 18:40:19] Uploading VMware-vCenter-Server-Appliance-7.0.3.00100-18778458_OVF10-disk1.vmdk... OK
[03-02-22 18:41:09] Uploading VMware-vCenter-Server-Appliance-7.0.3.00100-18778458_OVF10-disk2.vmdk... (29%, 52.5MiB/s)
[03-02-22 18:43:08] Uploading VMware-vCenter-Server-Appliance-7.0.3.00100-18778458_OVF10-disk2.vmdk... OK
[03-02-22 18:43:08] Injecting OVF environment...
Powering on VirtualMachine:3... OK
fe80::20c:29ff:fe03:2f80

Set the environment variables for logging into vCenter. We will use govc to configure vCenter, since configuring things through the browser web UI is rather inefficient compared with a few govc one-liners 😂

export GOVC_URL="https://192.168.20.92"
export GOVC_USERNAME="administrator@vsphere.local"
export GOVC_PASSWORD="admin@"
export GOVC_INSECURE=true
export GOVC_DATASTORE=datastore1

After the VM boots it automatically runs the vCenter installation and configuration. Wait a while for vCenter to come up, then check its information with govc about; if the info comes back correctly, vCenter is installed;

$ govc about
FullName:     VMware vCenter Server 7.0.3 build-18778458
Name:         VMware vCenter Server
Vendor:       VMware, Inc.
Version:      7.0.3
Build:        18778458
OS type:      linux-x64
API type:     VirtualCenter
API version:  7.0.3.0
Product ID:   vpx
UUID:         0b49e119-e38f-4fbc-84a8-d7a0e548027d

Configure vCenter

This step configures vCenter: create the Datacenter, cluster, folder and other resources, and add the ESXi host into the cluster;


# create the Datacenter
$ govc datacenter.create SH-IDC
# create the Cluster
$ govc cluster.create -dc=SH-IDC Tanzu-Cluster
# add the ESXi host into the Cluster
$ govc cluster.add -dc=SH-IDC -cluster=Tanzu-Cluster -hostname=192.168.18.47 -username=root -password='admin@' -noverify
# create a folder that the Tanzu node VMs will be placed under
$ govc folder.create /SH-IDC/vm/Tanzu-node
# import the ova template for the tanzu cluster nodes
$ govc import.ova -dc='SH-IDC' -ds='datastore1' photon-3-kube-v1.21.2+vmware.1-tkg.2-12816990095845873721.ova
# convert the VM into a template; the tanzu cluster will later clone its node VMs from it
$ govc vm.markastemplate photon-3-kube-v1.21.2
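A few govc one-liners to sanity-check the inventory we just created (purely a verification sketch, not a required step):

# the cluster should show up as a ClusterComputeResource
$ govc find / -type c
# the ESXi host should now sit inside the cluster
$ govc ls /SH-IDC/host/Tanzu-Cluster
# the imported template should be visible in the vm folder
$ govc ls /SH-IDC/vm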

Initialize the bootstrap node

The bootstrap node is the node that runs the tanzu deployment tool. Linux/macOS/Windows are all officially supported, but with some fairly strict requirements; judging from the official docs these include Docker[16], running Docker as a non-root user[17], kubectl[18] and cgroup v1[19] (see Check and set the cgroup[20]).

To sidestep all that fiddly configuration, I simply used VMware's official Photon OS 4.0 Rev2[21]: download the OVA image, import it into the ESXi host and boot a VM from it, which saves quite a bit of setup. A further benefit is that running the tanzu deployment tool in a dedicated VM does not pollute your local development environment.

$ wget /photon/4.0/Rev2/ova/photon-ova-4.0-c001795b80.ova
# import the OVA VM template
$ govc import.ova -ds='datastore1' -name bootstrap-node photon-ova-4.0-c001795b80.ova
# tweak the VM configuration up to 4C8G
$ govc vm.change -c 4 -m 8192 -vm bootstrap-node
# power on the VM
$ govc vm.power -on bootstrap-node
# look up the IPv4 address the VM obtained
$ govc vm.ip -a -wait 1m bootstrap-node
$ ssh root@192.168.74.10
# the default password is changeme; after entering it you are asked to type changeme once more and then set a new password
root@photon-machine [ ~ ]# cat /etc/os-release
NAME="VMware Photon OS"
VERSION="4.0"
ID=photon
VERSION_ID=4.0
PRETTY_NAME="VMware Photon OS/Linux"
ANSI_COLOR="1;34"
HOME_URL="https://vmware.github.io/photon/"
BUG_REPORT_URL="/vmware/photon/issues"

Install some tools needed during deployment (tsk, Photon OS does not even ship with a tar command 😠)

root@photon-machine [ ~ ]# tdnf install sudo tar -y
root@photon-machine [ ~ ]# curl -LO https://dl.k8s.io/release/v1.21.2/bin/linux/amd64/kubectl
root@photon-machine [ ~ ]# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Start docker: the bootstrap node runs a K8s cluster via kind, which needs docker. You can point it at an external k8s cluster instead, but that is not really recommended, because cluster-api depends on the k8s version, which can be neither too high nor too low;

root@photon-machine [ ~ ]# systemctl enable docker --now

Download the tanzu community edition package from vmware-tanzu/community-edition[22], then unpack and install it;

root@photon-machine [ ~ ]# curl -LO /vmware-tanzu/community-edition/releases/download/v0.9.1/tce-linux-amd64-v0.9.1.tar.gz
root@photon-machine [ ~ ]# tar -xf tce-linux-amd64-v0.9.1.tar.gz
root@photon-machine [ ~ ]# cd tce-linux-amd64-v0.9.1/
root@photon-machine [ ~ ]# bash install.sh

And then I promptly hit a wall: the install.sh script refuses to run as the root user

+ ALLOW_INSTALL_AS_ROOT=
+ [[ 0 -eq 0 ]]
+ [[ '' != \t\r\u\e ]]
+ echo 'Do not run this script as root'
Do not run this script as root
+ exit 1

But I insist on running it as root, so what now 😡

# use sed to strip the first exit 1 and it works
root@photon-machine [ ~ ]# sed -i.bak "s/exit 1//" install.sh
root@photon-machine [ ~ ]# bash install.sh
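Judging from the trace above, the guard compares an ALLOW_INSTALL_AS_ROOT variable against true, so exporting that variable looks like another way around it (an untested sketch inferred from the log, not something from the original run):

root@photon-machine [ ~ ]# ALLOW_INSTALL_AS_ROOT=true bash install.sh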

When it finishes it prints Installation complete! (honestly, the official install.sh output is not friendly at all; it pollutes my terminal)

+ tanzu init
| initializing ✔  successfully initialized CLI
++ tanzu plugin repo list
++ grep tce
+ TCE_REPO=
+ [[ -z '' ]]
+ tanzu plugin repo add --name tce --gcp-bucket-name tce-tanzu-cli-plugins --gcp-root-path artifacts
++ tanzu plugin repo list
++ grep core-admin
+ TCE_REPO=
+ [[ -z '' ]]
+ tanzu plugin repo add --name core-admin --gcp-bucket-name tce-tanzu-cli-framework-admin --gcp-root-path artifacts-admin
+ echo 'Installation complete!'
Installation complete!

Deploy the management cluster

First we deploy a tanzu management cluster. There are two ways to do it. One is the web UI mentioned in the official docs[23]. Right now this UI is rather underwhelming: it mainly lets you fill in some configuration parameters, calls the tanzu command in the background to deploy the cluster, and displays the deployment logs and progress. Once the deployment is done, the UI can neither manage the clusters nor deploy workload clusters.
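For completeness, the UI flow is started with the --ui flag (shown here only for reference; all the steps below use the config-file flow):

# serves the installer UI on 127.0.0.1:8080 by default; --browser none skips auto-opening a browser
$ tanzu management-cluster create --ui --browser none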

The other way is to deploy with the tanzu command and a config file, which spares you from clicking around a web page filling in parameters; you just prepare a yaml config file in advance. We will use the tanzu command below. The management cluster configuration template looks like this:

tanzu-mgt-cluster.yaml

# CIDR for cluster Pod IPs
CLUSTER_CIDR: 100.96.0.0/11
# CIDR for Services
SERVICE_CIDR: 100.64.0.0/13
# name of the cluster
CLUSTER_NAME: tanzu-control-plan
# plan of the cluster
CLUSTER_PLAN: dev
# arch of the cluster nodes
OS_ARCH: amd64
# OS name of the cluster nodes
OS_NAME: photon
# OS version of the cluster nodes
OS_VERSION: "3"
# infrastructure resource provider
INFRASTRUCTURE_PROVIDER: vsphere
# VIP of the cluster
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.75.194
# disk size of the control-plane nodes
VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
# memory size of the control-plane nodes
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
# CPU core count of the control-plane nodes
VSPHERE_CONTROL_PLANE_NUM_CPUS: "4"
# disk size of the worker nodes
VSPHERE_WORKER_DISK_GIB: "20"
# memory size of the worker nodes
VSPHERE_WORKER_MEM_MIB: "4096"
# CPU core count of the worker nodes
VSPHERE_WORKER_NUM_CPUS: "2"
# vCenter Datacenter path
VSPHERE_DATACENTER: /SH-IDC
# Datastore path the VMs are created on
VSPHERE_DATASTORE: /SH-IDC/datastore/datastore1
# folder the VMs are created in
VSPHERE_FOLDER: /SH-IDC/vm/Tanzu-node
# network used by the VMs
VSPHERE_NETWORK: /SH-IDC/network/VM Network
# resource pool the VMs are associated with
VSPHERE_RESOURCE_POOL: /SH-IDC/host/Tanzu-Cluster/Resources
# IP of vCenter
VSPHERE_SERVER: 192.168.75.110
# vCenter username
VSPHERE_USERNAME: administrator@vsphere.local
# vCenter password, base64 encoded
VSPHERE_PASSWORD: <encoded:base64password>
# vCenter certificate thumbprint; get it with govc about.cert -json | jq -r '.ThumbprintSHA1'
VSPHERE_TLS_THUMBPRINT: EB:F3:D8:7A:E8:3D:1A:59:B0:DE:73:96:DC:B9:5F:13:86:EF:B6:27
# ssh public key injected into the VMs; you need it to ssh into the cluster nodes
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa
# some default parameters
AVI_ENABLE: "false"
IDENTITY_MANAGEMENT_TYPE: none
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
TKG_HTTP_PROXY_ENABLED: "false"
DEPLOY_TKG_ON_VSPHERE7: "true"
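Two helper commands for filling in the fields above (the password is a placeholder; substitute your own):

# get the vCenter certificate thumbprint for VSPHERE_TLS_THUMBPRINT
$ govc about.cert -json | jq -r .ThumbprintSHA1
# base64-encode the vCenter password for VSPHERE_PASSWORD
$ echo -n 'your-password' | base64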

Deploy the management cluster with the tanzu CLI

$ tanzu management-cluster create --file tanzu-mgt-cluster.yaml -v 6
# if VSPHERE_TLS_THUMBPRINT is not set, there is an interactive prompt to confirm the vSphere thumbprint; just enter y
Validating the pre-requisites...
Do you want to continue with the vSphere thumbprint EB:F3:D8:7A:E8:3D:1A:59:B0:DE:73:96:DC:B9:5F:13:86:EF:B6:27 [y/N]: y

Deployment log

root@photon-machine [ ~ ]# tanzu management-cluster create --file tanzu-mgt-cluster.yaml -v 6
compatibility file (/root/.config/tanzu/tkg/compatibility/tkg-compatibility.yaml) already exists, skipping download
BOM files inside /root/.config/tanzu/tkg/bom already exists, skipping download
CEIP Opt-in status: false
Validating the pre-requisites...
vSphere 7.0 Environment Detected.
You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client.
Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0.
Deploying TKG management cluster on vSphere 7.0 ...
Identity Provider not configured. Some authentication features won't work.
Checking if VSPHERE_CONTROL_PLANE_ENDPOINT 192.168.20.94 is already in use
Setting up management cluster...
Validating configuration...
Using infrastructure provider vsphere:v0.7.10
Generating cluster configuration...
Setting up bootstrapper...
Fetching configuration for kind node image...
kindConfig: &{{Cluster kind.x-k8s.io/v1alpha4} [{map[] [{/var/run/docker.sock /var/run/docker.sock false false}] [] [] []}] {0 100.96.0.0/11 100.64.0.0/13 false} map[] map[] [apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration imageRepository: projects./tkg etcd: local: imageRepository: projects./tkg imageTag: v3.4.13_vmware.15 dns: type: CoreDNS imageRepository: projects./tkg imageTag: v1.8.0_vmware.5] [] [] []}
Creating kind cluster: tkg-kind-c7vj6kds0a6sf43e6210
Creating cluster "tkg-kind-c7vj6kds0a6sf43e6210" ...
Ensuring node image (projects./tkg/kind/node:v1.21.2_vmware.1) ...
Pulling image: projects./tkg/kind/node:v1.21.2_vmware.1 ...
Preparing nodes ...
Writing configuration ...
Starting control-plane ...
Installing CNI ...
Installing StorageClass ...
Waiting 2m0s for control-plane = Ready ...
Ready after 19s
Bootstrapper created. Kubeconfig: /root/.kube-tkg/tmp/config_3fkzTCOL
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.10" TargetNamespace="capv-system"
installed Component=="cluster-api" Type=="CoreProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="BootstrapProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="ControlPlaneProvider" Version=="v0.3.23"
installed Component=="vsphere" Type=="InfrastructureProvider" Version=="v0.7.10"
Waiting for provider infrastructure-vsphere
Waiting for provider control-plane-kubeadm
Waiting for provider cluster-api
Waiting for provider bootstrap-kubeadm
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
Passed waiting on provider bootstrap-kubeadm after 25.205820854s
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-webhook-system', retrying
Passed waiting on provider infrastructure-vsphere after 30.185406332s
Passed waiting on provider cluster-api after 30.213216243s
Success waiting on all providers.
Start creating management cluster...
patch cluster object with operation status: {"metadata":{"annotations":{"TKGOperationInfo":"{\"Operation\":\"Create\",\"OperationStartTimestamp\":\"2022-02-06 02:35:34.30219421 +0000 UTC\",\"OperationTimeout\":1800}","TKGOperationLastObservedTimestamp":"2022-02-06 02:35:34.30219421 +0000 UTC"}}}
cluster control plane is still being initialized, retrying
Getting secret for cluster
Waiting for resource tanzu-control-plan-kubeconfig of type *v1.Secret to be up and running
Saving management cluster kubeconfig into /root/.kube/config
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.10" TargetNamespace="capv-system"
installed Component=="cluster-api" Type=="CoreProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="BootstrapProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="ControlPlaneProvider" Version=="v0.3.23"
installed Component=="vsphere" Type=="InfrastructureProvider" Version=="v0.7.10"
Waiting for provider control-plane-kubeadm
Waiting for provider bootstrap-kubeadm
Waiting for provider infrastructure-vsphere
Waiting for provider cluster-api
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider control-plane-kubeadm after 10.046865402s
Waiting for resource antrea-controller of type *v1.Deployment to be up and running
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Waiting for additional components to be up and running...
Waiting for packages to be up and running...
Waiting for package: antrea
Waiting for package: metrics-server
Waiting for package: tanzu-addons-manager
Waiting for package: vsphere-cpi
Waiting for package: vsphere-csi
Waiting for resource antrea of type *v1alpha1.PackageInstall to be up and running
Waiting for resource vsphere-cpi of type *v1alpha1.PackageInstall to be up and running
Waiting for resource vsphere-csi of type *v1alpha1.PackageInstall to be up and running
Waiting for resource metrics-server of type *v1alpha1.PackageInstall to be up and running
Waiting for resource tanzu-addons-manager of type *v1alpha1.PackageInstall to be up and running
Successfully reconciled package: antrea
Successfully reconciled package: vsphere-csi
Successfully reconciled package: metrics-server
Context set for management cluster tanzu-control-plan as 'tanzu-control-plan-admin@tanzu-control-plan'.
Deleting kind cluster: tkg-kind-c7vj6kds0a6sf43e6210
Management cluster created!
You can now create your first workload cluster by running the following:
  tanzu cluster create [name] -f [file]
Some addons might be getting installed! Check their status by running the following:
  kubectl get apps -A

After the deployment completes, copy the management cluster's kubeconfig into kubectl's default location

root@photon-machine [ ~ ]# cp ${HOME}/.kube-tkg/config ${HOME}/.kube/config

Check the cluster status

# cluster resources of the management cluster; its CRs are stored in the tkg-system namespace by default
root@photon-machine [ ~ ]# kubectl get cluster -A
NAMESPACE    NAME                 PHASE
tkg-system   tanzu-control-plan   Provisioned
# machine resources of the management cluster
root@photon-machine [ ~ ]# kubectl get machine -A
NAMESPACE    NAME                                       PROVIDERID                                       PHASE         VERSION
tkg-system   tanzu-control-plan-control-plane-gs4bl     vsphere://4239c450-f621-d78e-3c44-4ac8890c0cd3   Running       v1.21.2+vmware.1
tkg-system   tanzu-control-plan-md-0-7cdc97c7c6-kxcnx   vsphere://4239d776-c04c-aacc-db12-3380542a6d03   Provisioned   v1.21.2+vmware.1
# status of the running components
root@photon-machine [ ~ ]# kubectl get pod -A
NAMESPACE                           NAME                                                              READY   STATUS    RESTARTS   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-6494884869-wlzhx       2/2     Running   0          8m37s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-857d687b9d-tpznv   2/2     Running   0          8m35s
capi-system                         capi-controller-manager-778bd4dfb9-tkvwg                          2/2     Running   0          8m41s
capi-webhook-system                 capi-controller-manager-9995bdc94-svjm2                           2/2     Running   0          8m41s
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-68845b65f8-sllgv        2/2     Running   0          8m38s
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-9847c6747-vvz6g     2/2     Running   0          8m35s
capi-webhook-system                 capv-controller-manager-55bf67fbd5-4t46v                          2/2     Running   0          8m31s
capv-system                         capv-controller-manager-587fbf697f-bbzs9                          2/2     Running   0          8m31s
cert-manager                        cert-manager-77f6fb8fd5-8tq6n                                     1/1     Running   0          11m
cert-manager                        cert-manager-cainjector-6bd4cff7bb-6vlzx                          1/1     Running   0          11m
cert-manager                        cert-manager-webhook-fbfcb9d6c-qpkbc                              1/1     Running   0          11m
kube-system                         antrea-agent-5m9d4                                                2/2     Running   0          6m
kube-system                         antrea-agent-8mpr7                                                2/2     Running   0          5m40s
kube-system                         antrea-controller-5bbcb98667-hklss                                1/1     Running   0          5m50s
kube-system                         coredns-8dcb5c56b-ckvb7                                           1/1     Running   0          12m
kube-system                         coredns-8dcb5c56b-d98hf                                           1/1     Running   0          12m
kube-system                         etcd-tanzu-control-plan-control-plane-gs4bl                       1/1     Running   0          12m
kube-system                         kube-apiserver-tanzu-control-plan-control-plane-gs4bl             1/1     Running   0          12m
kube-system                         kube-controller-manager-tanzu-control-plan-control-plane-gs4bl   1/1     Running   0          12m
kube-system                         kube-proxy-d4wq4                                                  1/1     Running   0          12m
kube-system                         kube-proxy-nhkgg                                                  1/1     Running   0          11m
kube-system                         kube-scheduler-tanzu-control-plan-control-plane-gs4bl             1/1     Running   0          12m
kube-system                         kube-vip-tanzu-control-plan-control-plane-gs4bl                   1/1     Running   0          12m
kube-system                         metrics-server-59fcb9fcf-xjznj                                    1/1     Running   0          6m29s
kube-system                         vsphere-cloud-controller-manager-kzffm                            1/1     Running   0          5m50s
kube-system                         vsphere-csi-controller-74675c9488-q9h5c                           6/6     Running   0          6m31s
kube-system                         vsphere-csi-node-dmvvr                                            3/3     Running   0          6m31s
kube-system                         vsphere-csi-node-k6x98                                            3/3     Running   0          6m31s
tkg-system                          kapp-controller-6499b8866-xnql7                                   1/1     Running   0          10m
tkg-system                          tanzu-addons-controller-manager-657c587556-rpbjm                  1/1     Running   0          7m58s
tkg-system                          tanzu-capabilities-controller-manager-6ff97656b8-cq7m7            1/1     Running   0          11m
tkr-system                          tkr-controller-manager-6bc455b5d4-wm98s                           1/1     Running   0          10m
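The tanzu CLI also has its own status views, handy for a quick look (commands available in TCE v0.9.x; output omitted here):

root@photon-machine [ ~ ]# tanzu management-cluster get
root@photon-machine [ ~ ]# tanzu cluster list --include-management-cluster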

How the deployment works

Combining the tanzu source code[24] with the deployment logs, we can see that management cluster deployment roughly breaks down into the following steps:

// /vmware-tanzu/tanzu-framework/blob/main/pkg/v1/tkg/client/init.go
// management cluster init step constants
const (
	StepConfigPrerequisite                 = "Configure prerequisite"
	StepValidateConfiguration              = "Validate configuration"
	StepGenerateClusterConfiguration       = "Generate cluster configuration"
	StepSetupBootstrapCluster              = "Setup bootstrap cluster"
	StepInstallProvidersOnBootstrapCluster = "Install providers on bootstrap cluster"
	StepCreateManagementCluster            = "Create management cluster"
	StepInstallProvidersOnRegionalCluster  = "Install providers on management cluster"
	StepMoveClusterAPIObjects              = "Move cluster-api objects from bootstrap cluster to management cluster"
)

// InitRegionSteps management cluster init step sequence
var InitRegionSteps = []string{
	StepConfigPrerequisite,
	StepValidateConfiguration,
	StepGenerateClusterConfiguration,
	StepSetupBootstrapCluster,
	StepInstallProvidersOnBootstrapCluster,
	StepCreateManagementCluster,
	StepInstallProvidersOnRegionalCluster,
	StepMoveClusterAPIObjects,
}

ConfigPrerequisite: the preparation phase downloads the tkg-compatibility and tkg-bom images, which are used to check environment compatibility;

Downloading TKG compatibility file from 'projects./tkg/framework-zshippable/tkg-compatibility'
Downloading the TKG Bill of Materials (BOM) file from 'projects./tkg/tkg-bom:v1.4.0'
Downloading the TKr Bill of Materials (BOM) file from 'projects./tkg/tkr-bom:v1.21.2_vmware.1-tkg.1'
ERROR 2022/02/06 02:24:46 svType != tvType; key=release, st=map[string]interface{}, tt=<nil>, sv=map[version:], tv=<nil>
CEIP Opt-in status: false

ValidateConfiguration: validates the config file, checking that the supplied parameters are correct and that a matching VM template exists in vCenter;

Validating the pre-requisites...
vSphere 7.0 Environment Detected.
You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client.
Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0.
Deploying TKG management cluster on vSphere 7.0 ...
Identity Provider not configured. Some authentication features won't work.
Checking if VSPHERE_CONTROL_PLANE_ENDPOINT 192.168.20.94 is already in use
Setting up management cluster...
Validating configuration...
Using infrastructure provider vsphere:v0.7.10

GenerateClusterConfiguration: generates the cluster configuration;

Generating cluster configuration...

SetupBootstrapCluster: sets up the bootstrap cluster, currently kind by default. It runs a docker container with a k8s cluster nested inside. This bootstrap k8s cluster exists only to run cluster-api temporarily in order to deploy the management cluster; once that is done, the bootstrap cluster has served its purpose and is deleted automatically;

Setting up bootstrapper...
Fetching configuration for kind node image...
kindConfig: &{{Cluster kind.x-k8s.io/v1alpha4} [{map[] [{/var/run/docker.sock /var/run/docker.sock false false}] [] [] []}] {0 100.96.0.0/11 100.64.0.0/13 false} map[] map[] [apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration imageRepository: projects./tkg etcd: local: imageRepository: projects./tkg imageTag: v3.4.13_vmware.15 dns: type: CoreDNS imageRepository: projects./tkg imageTag: v1.8.0_vmware.5] [] [] []}
Creating kind cluster: tkg-kind-c7vj6kds0a6sf43e6210
Creating cluster "tkg-kind-c7vj6kds0a6sf43e6210" ...
Ensuring node image (projects./tkg/kind/node:v1.21.2_vmware.1) ...
Pulling image: projects./tkg/kind/node:v1.21.2_vmware.1 ...
Preparing nodes ...
Writing configuration ...
Starting control-plane ...
Installing CNI ...
Installing StorageClass ...
Waiting 2m0s for control-plane = Ready ...
Ready after 19s
Bootstrapper created. Kubeconfig: /root/.kube-tkg/tmp/config_3fkzTCOL
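While this step is running you can watch the nested kind container on the bootstrap node (a small aside, assuming the default tkg-kind- name prefix seen in the log):

root@photon-machine [ ~ ]# docker ps --filter name=tkg-kind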

InstallProvidersOnBootstrapCluster: installs the cluster-api components on the bootstrap cluster;

Installing providers on bootstrapper...
Fetching providers
# cert-manager is installed mainly to generate the pile of certificates the k8s cluster deployment depends on
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.10" TargetNamespace="capv-system"
installed Component=="cluster-api" Type=="CoreProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="BootstrapProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="ControlPlaneProvider" Version=="v0.3.23"
installed Component=="vsphere" Type=="InfrastructureProvider" Version=="v0.7.10"
Waiting for provider infrastructure-vsphere
Waiting for provider control-plane-kubeadm
Waiting for provider cluster-api
Waiting for provider bootstrap-kubeadm
Passed waiting on provider infrastructure-vsphere after 30.185406332s
Passed waiting on provider cluster-api after 30.213216243s
Success waiting on all providers.

CreateManagementCluster: creates the management cluster. This step creates the VMs, initializes the nodes, and runs kubeadm to deploy the k8s cluster;

Start creating management cluster...
patch cluster object with operation status: {"metadata":{"annotations":{"TKGOperationInfo":"{\"Operation\":\"Create\",\"OperationStartTimestamp\":\"2022-02-06 02:35:34.30219421 +0000 UTC\",\"OperationTimeout\":1800}","TKGOperationLastObservedTimestamp":"2022-02-06 02:35:34.30219421 +0000 UTC"}}}
cluster control plane is still being initialized, retrying
Getting secret for cluster
Waiting for resource tanzu-control-plan-kubeconfig of type *v1.Secret to be up and running
Saving management cluster kubeconfig into /root/.kube/config

InstallProvidersOnRegionalCluster: installs the cluster-api components on the management cluster;

Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.10" TargetNamespace="capv-system"
installed Component=="cluster-api" Type=="CoreProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="BootstrapProvider" Version=="v0.3.23"
installed Component=="kubeadm" Type=="ControlPlaneProvider" Version=="v0.3.23"
installed Component=="vsphere" Type=="InfrastructureProvider" Version=="v0.7.10"
Waiting for provider control-plane-kubeadm
Waiting for provider bootstrap-kubeadm
Waiting for provider infrastructure-vsphere
Waiting for provider cluster-api
Waiting for resource capv-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider infrastructure-vsphere after 20.091935635s
Passed waiting on provider cluster-api after 20.109419304s
Success waiting on all providers.
Waiting for the management cluster to get ready for move...
Waiting for resource tanzu-control-plan of type *v1alpha3.Cluster to be up and running
Waiting for resources type *v1alpha3.MachineDeploymentList to be up and running
Waiting for resources type *v1alpha3.MachineList to be up and running
Waiting for addons installation...
Waiting for resources type *v1alpha3.ClusterResourceSetList to be up and running
Waiting for resource antrea-controller of type *v1.Deployment to be up and running

MoveClusterAPIObjects: moves the cluster-api resources from the bootstrap cluster to the management cluster. The goal is self-hosting: scaling of the management cluster itself is also handled through cluster-api, so the earlier bootstrap cluster is no longer needed;

Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster tanzu-control-plan as 'tanzu-control-plan-admin@tanzu-control-plan'.
Deleting kind cluster: tkg-kind-c7vj6kds0a6sf43e6210
Management cluster created!
You can now create your first workload cluster by running the following:
  tanzu cluster create [name] -f [file]
Some addons might be getting installed! Check their status by running the following:
  kubectl get apps -A
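Under the hood this is cluster-api's move operation; a standalone clusterctl equivalent would look roughly like the sketch below (the temporary kubeconfig path is a placeholder; tanzu performs this for you, so this is just to illustrate the mechanism):

clusterctl move --kubeconfig /root/.kube-tkg/tmp/config_XXXX --to-kubeconfig /root/.kube/config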

After the deployment finishes, the bootstrap cluster is deleted: its resources have already been moved into the management cluster, so there is little point in keeping it around.

Deploy the workload cluster

So far we have only deployed a tanzu management cluster, and our real workloads are not meant to run on it. We still need to deploy a workload cluster, which plays a role similar to the worker nodes of a k8s cluster. Deploying the workload cluster no longer relies on the bootstrap cluster; it uses the management cluster instead.

Create a configuration file based on the template given in the official document vSphere Workload Cluster Template[25], then deploy it with the tanzu command. The configuration file looks like this:

# CIDR for cluster Pod IPs
CLUSTER_CIDR: 100.96.0.0/11
# CIDR for Services
SERVICE_CIDR: 100.64.0.0/13
# name of the cluster
CLUSTER_NAME: tanzu-workload-cluster
# plan of the cluster
CLUSTER_PLAN: dev
# arch of the cluster nodes
OS_ARCH: amd64
# OS name of the cluster nodes
OS_NAME: photon
# OS version of the cluster nodes
OS_VERSION: "3"
# infrastructure resource provider
INFRASTRUCTURE_PROVIDER: vsphere
# namespace the cluster, machine and other custom resources are created in
NAMESPACE: default
# CNI choice; currently apparently only VMware's own antrea is supported
CNI: antrea
# VIP of the cluster
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.20.95
# disk size of the control-plane nodes
VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
# memory size of the control-plane nodes
VSPHERE_CONTROL_PLANE_MEM_MIB: "8192"
# CPU core count of the control-plane nodes
VSPHERE_CONTROL_PLANE_NUM_CPUS: "4"
# disk size of the worker nodes
VSPHERE_WORKER_DISK_GIB: "20"
# memory size of the worker nodes
VSPHERE_WORKER_MEM_MIB: "4096"
# CPU core count of the worker nodes
VSPHERE_WORKER_NUM_CPUS: "2"
# vCenter Datacenter path
VSPHERE_DATACENTER: /SH-IDC
# Datastore path the VMs are created on
VSPHERE_DATASTORE: /SH-IDC/datastore/datastore1
# folder the VMs are created in
VSPHERE_FOLDER: /SH-IDC/vm/Tanzu-node
# network used by the VMs
VSPHERE_NETWORK: /SH-IDC/network/VM Network
# resource pool the VMs are associated with
VSPHERE_RESOURCE_POOL: /SH-IDC/host/Tanzu-Cluster/Resources
# IP of vCenter
VSPHERE_SERVER: 192.168.20.92
# vCenter username
VSPHERE_USERNAME: administrator@vsphere.local
# vCenter password, base64 encoded
VSPHERE_PASSWORD: <encoded:YWRtaW5AMjAyMA==>
# vCenter certificate thumbprint; get it with govc about.cert -json | jq -r '.ThumbprintSHA1'
VSPHERE_TLS_THUMBPRINT: CB:23:48:E8:93:34:AD:27:D8:FD:88:1C:D7:08:4B:47:9B:12:F4:E0
# ssh public key injected into the VMs; you need it to ssh into the cluster nodes
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa
# some default parameters
AVI_ENABLE: "false"
IDENTITY_MANAGEMENT_TYPE: none
ENABLE_AUDIT_LOGGING: "false"
ENABLE_CEIP_PARTICIPATION: "false"
TKG_HTTP_PROXY_ENABLED: "false"
DEPLOY_TKG_ON_VSPHERE7: "true"
# whether to enable machine health checks
ENABLE_MHC: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m
# whether to deploy the vSphere CSI component (default StorageClass)
ENABLE_DEFAULT_STORAGE_CLASS: true
# whether to enable cluster autoscaling
ENABLE_AUTOSCALER: false

Deploy the workload cluster with the tanzu command

root@photon-machine [ ~ ]# tanzu cluster create tanzu-workload-cluster --file tanzu-workload-cluster.yaml
Validating configuration...
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'tanzu-workload-cluster'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
Waiting for cluster autoscaler to be available...
Unable to wait for autoscaler deployment to be ready. reason: deployments.apps "tanzu-workload-cluster-cluster-autoscaler" not found
Waiting for addons installation...
Waiting for packages to be up and running...
Workload cluster 'tanzu-workload-cluster' created

After the deployment completes, check the cluster's CR information

root@photon-machine [ ~ ]# kubectl get cluster
NAME                     PHASE
tanzu-workload-cluster   Provisioned
# machines in Running phase mean the nodes are up and running normally
root@photon-machine [ ~ ]# kubectl get machine
NAME                                          PROVIDERID                                       PHASE     VERSION
tanzu-workload-cluster-control-plane-4tdwq    vsphere://423950ac-1c6d-e5ef-3132-77b6a53cf626   Running   v1.21.2+vmware.1
tanzu-workload-cluster-md-0-8555bbbfc-74vdg   vsphere://4239b83b-6003-d990-4555-a72ac4dec484   Running   v1.21.2+vmware.1
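To actually use the new cluster, pull its kubeconfig down from the management cluster and switch context (a quick sketch; the context name follows the usual <cluster>-admin@<cluster> pattern):

root@photon-machine [ ~ ]# tanzu cluster kubeconfig get tanzu-workload-cluster --admin
root@photon-machine [ ~ ]# kubectl config use-context tanzu-workload-cluster-admin@tanzu-workload-cluster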

Scale the cluster

Once the cluster is deployed, scaling its nodes in or out works much like scaling a deployment: we only need to modify a few CRs. The cluster-api components watch these CRs for changes and run a series of reconciliation steps based on their spec. If the current number of cluster nodes is below the declared replica count, the corresponding provider is automatically called to create VMs, initialize them, and turn each one into a k8s node resource; the tanzu CLI also wraps this, as sketched below.
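A one-command sketch using tanzu cluster scale, which simply patches the same replica counts as the kubectl commands in the next two sections:

$ tanzu cluster scale tanzu-workload-cluster --controlplane-machine-count 3 --worker-machine-count 3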

Scale the control-plane nodes

That is, scale out the master nodes, by changing the replicas count in the KubeadmControlPlane CR:

root@photon-machine [ ~ ]# kubectl scale kcp tanzu-workload-cluster-control-plane --replicas=3
# the new machine is in Provisioning phase, meaning the VM backing the cluster node is being created
root@photon-machine [ ~ ]# kubectl get machine
NAME                                          PROVIDERID                                       PHASE          VERSION
tanzu-workload-cluster-control-plane-4tdwq    vsphere://423950ac-1c6d-e5ef-3132-77b6a53cf626   Running        v1.21.2+vmware.1
tanzu-workload-cluster-control-plane-mkmd2                                                     Provisioning   v1.21.2+vmware.1
tanzu-workload-cluster-md-0-8555bbbfc-74vdg   vsphere://4239b83b-6003-d990-4555-a72ac4dec484   Running        v1.21.2+vmware.1

Scale the worker nodes

To scale out the worker nodes, change the replicas count in the MachineDeployment CR:

root@photon-machine [ ~ ]# kubectl scale md tanzu-workload-cluster-md-0 --replicas=3
root@photon-machine [ ~ ]# kubectl get machine
NAME                                          PROVIDERID                                       PHASE     VERSION
tanzu-workload-cluster-control-plane-4tdwq    vsphere://423950ac-1c6d-e5ef-3132-77b6a53cf626   Running   v1.21.2+vmware.1
tanzu-workload-cluster-control-plane-mkmd2    vsphere://4239278c-0503-f03a-08b8-df92286bcdd7   Running   v1.21.2+vmware.1
tanzu-workload-cluster-control-plane-rt5mb    vsphere://4239c882-2fe5-a394-60c0-616941a6363e   Running   v1.21.2+vmware.1
tanzu-workload-cluster-md-0-8555bbbfc-4hlqk   vsphere://42395deb-e706-8b4b-a44f-c755c222575c   Running   v1.21.2+vmware.1
tanzu-workload-cluster-md-0-8555bbbfc-74vdg   vsphere://4239b83b-6003-d990-4555-a72ac4dec484   Running   v1.21.2+vmware.1
tanzu-workload-cluster-md-0-8555bbbfc-ftmlp   vsphere://42399640-8e94-85e5-c4bd-8436d84966e0   Running   v1.21.2+vmware.1

What's next

This post only covers the overall flow of deploying tanzu clusters. It touches on cluster-api concepts without analyzing them in depth, because they are honestly quite complex 😂; even now I have not fully grasped some of the internals. I plan to write a separate post on cluster-api later, and this one should be much easier to understand when read alongside it.

References

community-edition[26]

vmware/photon[27]

tanzu-framework[28]

cluster-api-provider-vsphere[29]

Deploying a workload cluster[30]

Examine the Management Cluster Deployment[31]

Prepare to Deploy a Management or Standalone Clusters to vSphere[32]

Reference links

[1] VMware vSphere: /cn/VMware-vSphere/index.html
[2] OpenStack: /
[3] cluster-api: /kubernetes-sigs/cluster-api
[4] tanzu: /vmware-tanzu
[5] VMware Tanzu Community Edition: /vmware-tanzu/community-edition
[6] VMware account: /login
[7] VMware-VMvisor-Installer-7.0U2a-17867351.x86_64.iso: /downloads/details?downloadGroup=ESXI70U2A&productId=974&rPId=46384
[8] VMware-VCSA-all-7.0.3-19234570.iso: /downloads/details?downloadGroup=VC70U3C&productId=974&rPId=83853
[9] photon-ova-4.0-c001795b80.ova: /photon/4.0/Rev2/ova/photon-ova-4.0-c001795b80.ova
[10] photon-3-kube-v1.21.2+vmware.1-tkg.2-12816990095845873721.ova: /downloads/get-download?downloadGroup=TCE-090
[11] tce-linux-amd64-v0.9.1.tar.gz: /vmware-tanzu/community-edition/releases/download/v0.9.1/tce-linux-amd64-v0.9.1.tar.gz
[12] govc_Linux_x86_64.tar.gz: /vmware/govmomi/releases/download/v0.27.3/govc_Linux_x86_64.tar.gz
[13] vSphere 7.0 Update 3 Critical Known Issues - Workarounds & Fix (86287): /s/article/86287
[14] VMware ESXi Installation and Setup: /cn/VMware-vSphere/7.0/vsphere-esxi-701-installation-setup-guide.pdf
[15] About vCenter Server Installation and Setup: /cn/VMware-vSphere/7.0/com.vmware.vcenter.install.doc/GUID-8DC3866D-5087-40A2-8067-1361A2AF95BD.html
[16] Docker: /engine/install/
[17] Manage Docker as a non-root user: /engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
[18] Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
[19] cgroup v1: /linux/man-pages/man7/cgroups.7.html
[20] Check and set the cgroup: https://tanzucommunityedition.io/docs/latest/support-matrix/#check-and-set-the-cgroup
[21] Photon OS 4.0 Rev2: /vmware/photon/wiki/Downloading-Photon-OS#photon-os-40-rev2-binaries
[22] vmware-tanzu/community-edition: /vmware-tanzu/community-edition/releases/tag/v0.9.1
[23] Official docs (getting started): https://tanzucommunityedition.io/docs/latest/getting-started/
[24] tanzu source code: /vmware-tanzu/tanzu-framework/blob/main/pkg/v1/tkg/client/init.go
[25] vSphere Workload Cluster Template: https://tanzucommunityedition.io/docs/latest/vsphere-wl-template/
[26] community-edition: /vmware-tanzu/community-edition
[27] vmware/photon: /vmware/photon
[28] tanzu-framework: /vmware-tanzu/tanzu-framework/blob/main/pkg/v1/tkg/client/init.go
[29] cluster-api-provider-vsphere: /kubernetes-sigs/cluster-api-provider-vsphere
[30] Deploying a workload cluster: https://tanzucommunityedition.io/docs/latest/workload-clusters/
[31] Examine the Management Cluster Deployment: https://tanzucommunityedition.io/docs/latest/verify-deployment/
[32] Prepare to Deploy a Management or Standalone Clusters to vSphere: https://tanzucommunityedition.io/docs/latest/vsphere/

Original post: https://blog.k8s.li/deploy-tanzu-k8s-cluster.html
