Table of Contents
- 1. System Environment
- 2. Introduction
- 3. Kubernetes
- 3.1 Overview
- 3.2 Kubernetes Components
- 3.2.1 Control Plane Components
- 3.2.2 Node Components
- 4. Installing and Deploying a Kubernetes Cluster
- 4.1 Environment Overview
- 4.2 Configure the Basic Environment on Each Node
- 4.3 Install and Configure Docker on Each Node
- 4.4 Install kubelet, kubeadm, and kubectl
- 4.5 Initialize the Cluster with kubeadm
- 4.6 Join the Worker Nodes to the Cluster
- 4.7 Deploy the Calico CNI Network Plugin
- 4.8 Configure kubectl Tab Completion
1. System Environment

Server OS Version | Docker Version | CPU Architecture
---|---|---
CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | x86_64
2. Introduction

The figure below describes how software deployment has evolved through three stages: the traditional deployment era, the virtualized deployment era, and the container deployment era.

The traditional deployment era:

Early on, organizations ran applications on physical servers. There was no way to limit the resources an application could consume on a physical server, which led to resource allocation problems. For example, if several applications ran on the same physical server, one application might consume most of the resources and starve the others, degrading their performance. One solution was to run each application on its own physical server, but resources were wasted whenever an application under-utilized its server, and maintaining a large number of physical servers was expensive.

The virtualized deployment era:

Virtualization was introduced to address this. It allows you to run multiple virtual machines (VMs) on a single physical server's CPU. Virtualization isolates applications from one another inside separate VMs and provides a degree of security, because one application's data cannot be freely accessed by another application.

Virtualization makes better use of a physical server's resources and improves scalability, because applications can be added or updated easily; it also lowers hardware costs, among other benefits. With virtualization, you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a complete computer, running all of its own components, including its own operating system, on top of virtualized hardware.
The container deployment era:

Containers are similar to VMs, but they have relaxed isolation properties and share the host operating system (OS). Containers are therefore considered more lightweight than VMs. Like a VM, each container has its own filesystem, share of CPU, memory, process space, and more. Because containers are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide many benefits, for example:

- Agile application creation and deployment: container images are easier and more efficient to create than VM images.
- Continuous development, integration, and deployment: supports reliable and frequent container image builds and deployments, with quick and easy rollbacks thanks to image immutability.
- Separation of development and operations concerns: application container images are created at build/release time rather than at deployment time, decoupling applications from the infrastructure.
- Observability: surfaces not only OS-level information and metrics, but also application health and other signals.
- Environmental consistency across development, testing, and production: an application runs the same on a laptop as it does in the cloud.
- Portability across clouds and OS distributions: runs on Ubuntu, RHEL, CoreOS, on-premises, Google Kubernetes Engine, and anywhere else.
- Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
- Loosely coupled, distributed, elastic, liberated microservices: applications are broken into smaller, independent pieces that can be deployed and managed dynamically, rather than running as one monolith on a single large machine.
- Resource isolation: predictable application performance.
- Resource utilization: high efficiency and density.
3. Kubernetes

3.1 Overview

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, and it facilitates both declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem, and its services, support, and tools are widely available.

The name Kubernetes comes from Greek and means "helmsman" or "pilot". The abbreviation k8s comes from the eight letters between the "k" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes builds on more than a decade of Google's experience running production workloads at scale, combined with the best ideas and practices from the community.

Kubernetes provides you with the following capabilities:
- Service discovery and load balancing: Kubernetes can expose containers using DNS names or their own IP addresses. If traffic to a container is high, Kubernetes can load-balance and distribute the network traffic so the deployment remains stable.
- Storage orchestration: Kubernetes lets you automatically mount a storage system of your choice, such as local storage or a public cloud provider.
- Automated rollouts and rollbacks: you describe the desired state of your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for a deployment, remove existing containers, and adopt all their resources into the new containers (a small illustration follows this list).
- Automatic bin packing: you provide Kubernetes with a cluster of nodes it can use to run containerized tasks, and tell it how much CPU and memory (RAM) each container needs. Kubernetes fits the containers onto your nodes to make the best use of your resources.
- Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that do not respond to your user-defined health checks, and does not advertise them to clients until they are ready to serve.
- Secret and configuration management: Kubernetes lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding container images and without exposing secrets in your stack configuration.
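As a small illustration of the declarative model behind automated rollouts and self-healing, the sketch below applies a minimal Deployment that asks Kubernetes to keep two replicas of an nginx Pod running; if a Pod dies, the controller recreates it to restore the desired state. The name nginx-demo and the nginx:1.21 image are illustrative choices, not part of the cluster built later in this article.

```
# Minimal illustrative example: declare a desired state of 2 nginx replicas.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
EOF

# Kubernetes reconciles the actual state toward the declared state:
kubectl get deployment nginx-demo
kubectl get pods -l app=nginx-demo
```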
3.2 Kubernetes Components

The Kubernetes cluster architecture is shown in the figure below.

The Kubernetes cluster components are as follows:

Kubernetes has two types of nodes: master nodes and worker nodes. The master node is also called the control plane. The control plane consists of several components that make global decisions for the cluster, such as scheduling, and that detect and respond to cluster events, for example starting a new Pod when a Deployment's replicas field is not satisfied.

Control plane components can run on any node in the cluster. For simplicity, however, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine.
3.2.1 Control Plane Components

The control plane components are:

- kube-apiserver: the API server is the control plane component that exposes the Kubernetes API and handles incoming requests. It is the front end of the Kubernetes control plane. The main implementation of the Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, by deploying more instances; you can run several instances of kube-apiserver and balance traffic between them.
- etcd: a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data. Make sure you have a backup plan for your cluster's etcd data.
- kube-scheduler: the control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on. Scheduling decisions take into account the resource requirements of individual Pods and of Pod collections, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
- kube-controller-manager: the control plane component that runs the controller processes. Logically, each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run in a single process.
  These controllers include:
  - Node controller: responsible for noticing and responding when nodes go down.
  - Job controller: watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
  - Endpoints controller: populates Endpoints objects (that is, joins Services and Pods).
  - Service account and token controllers: create default accounts and API access tokens for new namespaces.
- cloud-controller-manager: a Kubernetes control plane component that embeds cloud-provider-specific control logic. The cloud controller manager lets you link your cluster to your cloud provider's API, and it separates the components that interact with that cloud platform from the components that only interact with your cluster. cloud-controller-manager runs only controllers that are specific to your cloud provider. If you run Kubernetes in your own environment, or in a learning environment on your local machine, the cluster does not need a cloud controller manager.
  Like kube-controller-manager, cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale it horizontally (run more than one copy) to improve performance or fault tolerance.
  The following controllers all have cloud provider dependencies:
  - Node controller: checks the cloud provider to determine whether a node has been deleted after it stops responding.
  - Route controller: sets up routes in the underlying cloud infrastructure.
  - Service controller: creates, updates, and deletes cloud provider load balancers.
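Once a cluster like the one built later in this article is up, a quick way to see these control plane components in practice is to look at the kube-system namespace on the master node. A hedged sketch (the exact Pod names carry your node's hostname as a suffix):

```
# Control plane components run as static Pods in the kube-system namespace.
kubectl get pods -n kube-system -o wide

# On a kubeadm cluster their manifests live on the master node:
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```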
3.2.2 Node Components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

The node components are:

- kubelet: an agent that runs on every node in the cluster. It makes sure that containers are running in a Pod. The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.
- kube-proxy: a network proxy that runs on every node in the cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on the node; these rules allow network communication to your Pods from network sessions inside or outside of the cluster. kube-proxy uses the operating system's packet filtering layer if one is available; otherwise it forwards the traffic itself.
4. Installing and Deploying a Kubernetes Cluster

4.1 Environment Overview

Cluster architecture: k8scloude1 serves as the master node; k8scloude2 and k8scloude3 serve as worker nodes.

Server | OS Version | CPU Architecture | Processes | Role
---|---|---|---|---
k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico | k8s master node
k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node
k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node
4.2 Configure the Basic Environment on Each Node

First, configure the basic environment. This must be done on all three nodes; k8scloude1 is used as the example below.

Set the hostname:
```
[root@localhost ~]# vim /etc/hostname
[root@localhost ~]# cat /etc/hostname
k8scloude1
```
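Since the same change is needed on every node, an equivalent approach (a sketch; the hostnames follow the table in section 4.1) is to set each hostname with hostnamectl, which takes effect immediately without editing the file by hand:

```
# Run the matching command on each machine.
hostnamectl set-hostname k8scloude1   # on 192.168.110.130
hostnamectl set-hostname k8scloude2   # on 192.168.110.129
hostnamectl set-hostname k8scloude3   # on 192.168.110.128
```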
Configure the node's IP address (optional):
```
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens32
[root@k8scloude1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
DNS1=114.114.114.114
IPADDR=192.168.110.130
NETMASK=255.255.255.0
GATEWAY=192.168.110.2
ZONE=trusted
```
Restart the network:
```
[root@localhost ~]# service network restart
Restarting network (via systemctl):                        [  確定  ]
[root@localhost ~]# systemctl restart NetworkManager
```
After rebooting, the hostname becomes k8scloude1. Test that the machine can reach the network:
```
[root@k8scloude1 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=25.9 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=26.7 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=128 time=26.4 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 25.960/26.393/26.724/0.320 ms
```
Configure the IP-to-hostname mappings:
```
[root@k8scloude1 ~]# vim /etc/hosts
[root@k8scloude1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.110.130 k8scloude1
192.168.110.129 k8scloude2
192.168.110.128 k8scloude3

# Copy /etc/hosts to the other two nodes
[root@k8scloude1 ~]# scp /etc/hosts 192.168.110.129:/etc/hosts
[root@k8scloude1 ~]# scp /etc/hosts 192.168.110.128:/etc/hosts

# If the other two nodes can be pinged by name, the mapping works
[root@k8scloude1 ~]# ping k8scloude1
PING k8scloude1 (192.168.110.130) 56(84) bytes of data.
64 bytes from k8scloude1 (192.168.110.130): icmp_seq=1 ttl=64 time=0.044 ms
64 bytes from k8scloude1 (192.168.110.130): icmp_seq=2 ttl=64 time=0.053 ms
^C
--- k8scloude1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.044/0.048/0.053/0.008 ms
[root@k8scloude1 ~]# ping k8scloude2
PING k8scloude2 (192.168.110.129) 56(84) bytes of data.
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=1 ttl=64 time=0.297 ms
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from k8scloude2 (192.168.110.129): icmp_seq=3 ttl=64 time=0.254 ms
^C
--- k8scloude2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.254/0.536/1.057/0.368 ms
[root@k8scloude1 ~]# ping k8scloude3
PING k8scloude3 (192.168.110.128) 56(84) bytes of data.
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=1 ttl=64 time=0.285 ms
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=2 ttl=64 time=0.513 ms
64 bytes from k8scloude3 (192.168.110.128): icmp_seq=3 ttl=64 time=0.390 ms
^C
--- k8scloude3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.285/0.396/0.513/0.093 ms
```
Disable console screen blanking (optional):
```
[root@k8scloude1 ~]# setterm -blank 0
```
Download new yum repository files:
```
[root@k8scloude1 ~]# rm -rf /etc/yum.repos.d/* ;wget ftp://ftp.rhce.cc/k8s/* -P /etc/yum.repos.d/
--2022-01-07 17:07:28--  ftp://ftp.rhce.cc/k8s/*
           => “/etc/yum.repos.d/.listing”
正在解析主機 ftp.rhce.cc (ftp.rhce.cc)... 101.37.152.41
正在連接 ftp.rhce.cc (ftp.rhce.cc)|101.37.152.41|:21... 已連接。
正在以 anonymous 登錄 ... 登錄成功!
==> SYST ... 完成。 ==> PWD ... 完成。
......
100%[==========================================================>] 276  --.-K/s 用時 0s
2022-01-07 17:07:29 (81.9 MB/s) - “/etc/yum.repos.d/k8s.repo” 已保存 [276]

# The new repo files are:
[root@k8scloude1 ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  docker-ce.repo  epel.repo  k8s.repo
```
Disable SELinux by setting SELINUX=disabled:
```
[root@k8scloude1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

[root@k8scloude1 ~]# getenforce
Disabled
[root@k8scloude1 ~]# setenforce 0
setenforce: SELinux is disabled
```
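If SELinux is still enforcing on your machine, a one-liner along these lines (a sketch of the usual approach) disables it immediately and makes the change persistent; a reboot is needed for the config change to fully take effect:

```
# Disable SELinux now and on subsequent boots.
setenforce 0 2>/dev/null || true
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```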
Configure the firewall to allow all packets through:
```
[root@k8scloude1 ~]# firewall-cmd --set-default-zone=trusted
Warning: ZONE_ALREADY_SET: trusted
success
[root@k8scloude1 ~]# firewall-cmd --get-default-zone
trusted
```
The swapoff command disables the system swap area.

Note: if swap is not disabled, kubeadm initialization will fail with the error "[ERROR Swap]: running with swap on is not supported. Please disable swap".
```
[root@k8scloude1 ~]# swapoff -a ;sed -i '/swap/d' /etc/fstab
[root@k8scloude1 ~]# cat /etc/fstab

# /etc/fstab
# Created by anaconda on Thu Oct 18 23:09:54 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=9875fa5e-2eea-4fcc-a83e-5528c7d0f6a5 /   xfs   defaults   0 0
```
4.3 Install and Configure Docker on Each Node

Kubernetes is a container orchestrator, so it needs a container runtime underneath. Install Docker on all three nodes; k8scloude1 is again used as the example.

Install Docker:
```
[root@k8scloude1 ~]# yum -y install docker-ce
已加載插件:fastestmirror
base                | 3.6 kB  00:00:00
......
已安裝:
  docker-ce.x86_64 3:20.10.12-3.el7
......
完畢!
```
Enable Docker to start on boot and start it now:
```
[root@k8scloude1 ~]# systemctl enable docker --now
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 六 2022-01-08 22:10:38 CST; 18s ago
     Docs: https://docs.docker.com
 Main PID: 1377 (dockerd)
   Memory: 30.8M
   CGroup: /system.slice/docker.service
           └─1377 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```
Check the Docker version:
```
[root@k8scloude1 ~]# docker --version
Docker version 20.10.12, build e91ed57
```
Configure a Docker registry mirror (pull accelerator):
```
[root@k8scloude1 ~]# cat > /etc/docker/daemon.json <<EOF
> {
>  "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
> }
> EOF
[root@k8scloude1 ~]# cat /etc/docker/daemon.json
{
 "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"]
}
```
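Optionally, the same daemon.json can also set Docker's cgroup driver to systemd, which silences the "[WARNING IsDockerSystemdCheck]: detected cgroupfs as the Docker cgroup driver" message that kubeadm prints later in this article. A hedged sketch (keep your own mirror URL; the cluster also works with the default cgroupfs driver as shown in this walkthrough):

```
# Optional: combine the registry mirror with the systemd cgroup driver
# recommended by kubeadm. Adjust the mirror URL to your environment.
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# After the restart in the next step, `docker info | grep -i cgroup`
# should report: Cgroup Driver: systemd
```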
Restart Docker:
```
[root@k8scloude1 ~]# systemctl restart docker
[root@k8scloude1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 六 2022-01-08 22:17:45 CST; 8s ago
     Docs: https://docs.docker.com
 Main PID: 1529 (dockerd)
   Memory: 32.4M
   CGroup: /system.slice/docker.service
           └─1529 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
```
Configure the kernel so that bridged traffic is passed to iptables, and enable IP forwarding:
```
[root@k8scloude1 ~]# cat <<EOF> /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> EOF

# Apply the settings
[root@k8scloude1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```
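If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a sketch of the usual fix:

```
# Load the module now and make sure it is loaded on every boot.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf
```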
4.4 Install kubelet, kubeadm, and kubectl

Install kubelet, kubeadm, and kubectl on all three nodes:

- kubelet is the node agent of Kubernetes; it runs on every node.
- kubeadm is a tool for quickly bootstrapping a Kubernetes (k8s) cluster. It provides the kubeadm init and kubeadm join commands for creating a cluster and performs the actions necessary to get a minimum viable cluster up and running.
- kubectl is the Kubernetes command-line tool. With kubectl you can manage the cluster itself and install and deploy containerized applications on it.
```
# --disableexcludes=kubernetes: ignore the exclude directives defined for the kubernetes repo
[root@k8scloude1 ~]# yum -y install kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0 --disableexcludes=kubernetes
已加載插件:fastestmirror
Loading mirror speeds from cached hostfile
正在解決依賴關系
--> 正在檢查事務
---> 軟件包 kubeadm.x86_64.0.1.21.0-0 將被 安裝
......
已安裝:
  kubeadm.x86_64 0:1.21.0-0   kubectl.x86_64 0:1.21.0-0   kubelet.x86_64 0:1.21.0-0
......
完畢!
```
Enable kubelet to start on boot and start it now:
```
[root@k8scloude1 ~]# systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

# kubelet cannot actually start yet at this point
[root@k8scloude1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 六 2022-01-08 22:35:33 CST; 3s ago
     Docs: https://kubernetes.io/docs/
  Process: 1722 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 1722 (code=exited, status=1/FAILURE)

1月 08 22:35:33 k8scloude1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
1月 08 22:35:33 k8scloude1 systemd[1]: Unit kubelet.service entered failed state.
1月 08 22:35:33 k8scloude1 systemd[1]: kubelet.service failed.
```
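This crash loop is expected: kubelet keeps restarting until kubeadm init (or kubeadm join) writes its configuration. To confirm that this is the only problem, the recent log lines can be checked like this (a sketch):

```
# Before kubeadm init, kubelet typically complains about a missing
# /var/lib/kubelet/config.yaml, which kubeadm creates later.
journalctl -u kubelet --no-pager -n 20
```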
4.5 Initialize the Cluster with kubeadm

Check which kubeadm versions are available:
```
[root@k8scloude2 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
已加載插件:fastestmirror
Loading mirror speeds from cached hostfile
已安裝的軟件包
kubeadm.x86_64  1.21.0-0   @kubernetes
可安裝的軟件包
kubeadm.x86_64  1.6.0-0    kubernetes
kubeadm.x86_64  1.6.1-0    kubernetes
kubeadm.x86_64  1.6.2-0    kubernetes
......
kubeadm.x86_64  1.23.0-0   kubernetes
kubeadm.x86_64  1.23.1-0
```
kubeadm init: initialize the Kubernetes control plane on the master node k8scloude1.
```
# Run kubeadm init
# --image-repository registry.aliyuncs.com/google_containers: use the Aliyun mirror registry, otherwise some images cannot be pulled
# --kubernetes-version=v1.21.0: specify the Kubernetes version
# --pod-network-cidr=10.244.0.0/16: specify the Pod network CIDR
# The run below fails: registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 cannot be pulled,
# because the image path changed to coredns/coredns; pulling coredns manually fixes it
# CoreDNS is an open-source DNS server written in Go
[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0: output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```
Manually pull the coredns image:
```
[root@k8scloude1 ~]# docker pull coredns/coredns:1.8.0
1.8.0: Pulling from coredns/coredns
c6568d217a00: Pull complete
5984b6d55edf: Pull complete
Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Status: Downloaded newer image for coredns/coredns:1.8.0
docker.io/coredns/coredns:1.8.0
```
The coredns image must be re-tagged, otherwise kubeadm will not recognize it:
```
[root@k8scloude1 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

# Remove the original coredns/coredns:1.8.0 tag
[root@k8scloude1 ~]# docker rmi coredns/coredns:1.8.0
```
k8scloude1 now has all 7 required images; if even one image is missing, kubeadm initialization cannot succeed:
```
[root@k8scloude1 ~]# docker images
REPOSITORY                                                         TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver             v1.21.0    4d217480042e   9 months ago    126MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.21.0    38ddd85fe90e   9 months ago    122MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.21.0    09708983cc37   9 months ago    120MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.21.0    62ad3129eca8   9 months ago    50.6MB
registry.aliyuncs.com/google_containers/pause                      3.4.1      0f8457a4c2ec   12 months ago   683kB
registry.aliyuncs.com/google_containers/coredns/coredns            v1.8.0     296a6d5035e2   14 months ago   42.5MB
registry.aliyuncs.com/google_containers/etcd                       3.4.13-0   0369cf4303ff   16 months ago   253MB
```
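As an alternative to pulling images one by one, kubeadm can list and pre-pull everything it needs. A hedged sketch (with this repository mirror, the coredns image may still require the manual pull and re-tag shown above):

```
# Show exactly which images this kubeadm version expects
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.0

# Pre-pull them before running kubeadm init
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.0
```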
Run kubeadm init again:
```
[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 65.002757 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nta3x4.3e54l2dqtmj9tlry
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry \
	--discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
```
Create the kubeconfig directory and file as instructed in the output:
```
[root@k8scloude1 ~]# mkdir -p $HOME/.kube
[root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
The master node is now visible:
```
[root@k8scloude1 ~]# kubectl get node
NAME         STATUS     ROLES                  AGE     VERSION
k8scloude1   NotReady   control-plane,master   5m54s   v1.21.0
```
4.6 Join the Worker Nodes to the Cluster

Next, join the two worker nodes to the cluster.

kubeadm init printed the following line at the end of its output:
```
kubeadm join 192.168.110.130:6443 --token nta3x4.3e54l2dqtmj9tlry --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
```
Running this command on the two worker nodes joins them to the cluster.

If you have lost the join token, you can generate a fresh join command with:
```
[root@k8scloude1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
```
Run the join command on the other two nodes:
```
[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8scloude3 ~]# kubeadm join 192.168.110.130:6443 --token 8e3haz.m1wrpuf357g72k1u --discovery-token-ca-cert-hash sha256:9add1314177ac5660d9674dab8c13aa996520028514246c4cd103cf08a211cc8
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
Check the node status on k8scloude1; both worker nodes have joined the cluster:
```
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8scloude1   NotReady   control-plane,master   8m43s   v1.21.0
k8scloude2   NotReady   <none>                 28s     v1.21.0
k8scloude3   NotReady   <none>                 25s     v1.21.0
```
After joining the cluster, each worker node has pulled two additional images:
```
[root@k8scloude2 ~]# docker images
REPOSITORY                                            TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy    v1.21.0   38ddd85fe90e   9 months ago    122MB
registry.aliyuncs.com/google_containers/pause         3.4.1     0f8457a4c2ec   12 months ago   683kB

[root@k8scloude3 ~]# docker images
REPOSITORY                                            TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy    v1.21.0   38ddd85fe90e   9 months ago    122MB
registry.aliyuncs.com/google_containers/pause         3.4.1     0f8457a4c2ec   12 months ago   683kB
```
4.7 Deploy the Calico CNI Network Plugin

The cluster now has one master node and two worker nodes, but all three nodes are in the NotReady state because no CNI network plugin is installed. A CNI plugin is required for Pod networking across nodes. The most common CNI plugins are Calico and Flannel; the key difference is that Flannel does not support network policies, while Calico does. Since Kubernetes NetworkPolicy will be configured later in this series, Calico is the CNI plugin used here.

Download the calico.yaml file from the official site:

Official site: https://projectcalico.docs.tigera.io/about/about-calico

Search for calico.yaml in the site's search box to find the download command for calico.yaml.

Download calico.yaml:
```
[root@k8scloude1 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  212k  100  212k    0     0  44222      0  0:00:04  0:00:04 --:--:-- 55704
[root@k8scloude1 ~]# ls
calico.yaml
```
Check which Calico images are required; all four images must be pulled on every node. k8scloude1 is used as the example:
```
[root@k8scloude1 ~]# grep image calico.yaml
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/pod2daemon-flexvol:v3.21.2
          image: docker.io/calico/node:v3.21.2
          image: docker.io/calico/kube-controllers:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/cni:v3.21.2
v3.21.2: Pulling from calico/cni
Digest: sha256:ce618d26e7976c40958ea92d40666946d5c997cd2f084b6a794916dc9e28061b
Status: Image is up to date for calico/cni:v3.21.2
docker.io/calico/cni:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/pod2daemon-flexvol:v3.21.2
v3.21.2: Pulling from calico/pod2daemon-flexvol
Digest: sha256:b034c7c886e697735a5f24e52940d6d19e5f0cb5bf7caafd92ddbc7745cfd01e
Status: Image is up to date for calico/pod2daemon-flexvol:v3.21.2
docker.io/calico/pod2daemon-flexvol:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/node:v3.21.2
v3.21.2: Pulling from calico/node
Digest: sha256:6912fe45eb85f166de65e2c56937ffb58c935187a84e794fe21e06de6322a4d0
Status: Image is up to date for calico/node:v3.21.2
docker.io/calico/node:v3.21.2
[root@k8scloude1 ~]# docker pull docker.io/calico/kube-controllers:v3.21.2
v3.21.2: Pulling from calico/kube-controllers
d6a693444ed1: Pull complete
a5399680e995: Pull complete
8f0eb4c2bcba: Pull complete
52fe18e41b06: Pull complete
2f8d3f9f1a40: Pull complete
bc94a7e3e934: Pull complete
55bf7cf53020: Pull complete
Digest: sha256:1f4fcdcd9d295342775977b574c3124530a4b8adf4782f3603a46272125f01bf
Status: Downloaded newer image for calico/kube-controllers:v3.21.2
docker.io/calico/kube-controllers:v3.21.2

# The four key images:
[root@k8scloude1 ~]# docker images
REPOSITORY                   TAG       IMAGE ID       CREATED       SIZE
calico/node                  v3.21.2   f1bca4d4ced2   4 weeks ago   214MB
calico/pod2daemon-flexvol    v3.21.2   7778dd57e506   5 weeks ago   21.3MB
calico/cni                   v3.21.2   4c5c32530391   5 weeks ago   239MB
calico/kube-controllers      v3.21.2   b20652406028   5 weeks ago   132MB
```
Edit calico.yaml so that CALICO_IPV4POOL_CIDR matches the Pod CIDR passed to kubeadm init. Keep the YAML indentation aligned, or the apply will fail:
```
[root@k8scloude1 ~]# vim calico.yaml
[root@k8scloude1 ~]# cat calico.yaml | egrep "CALICO_IPV4POOL_CIDR|10.244"
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
```
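If you prefer not to edit the file by hand, a sed sketch along these lines uncomments the setting and points it at the kubeadm Pod CIDR. This assumes the manifest ships the setting commented out with the default value "192.168.0.0/16"; the default may differ between calico.yaml versions, so verify the result with the egrep command above:

```
# Uncomment CALICO_IPV4POOL_CIDR and set it to the kubeadm Pod CIDR.
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
```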
Apply the calico.yaml file:
```
[root@k8scloude1 ~]# kubectl apply -f calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
```
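Calico can take a minute or two to start on every node. One way to watch the rollout before checking node status (a sketch; the exact Pod names will differ):

```
# Wait until the calico-node DaemonSet Pods and calico-kube-controllers are Running
kubectl get pods -n kube-system -o wide | grep calico
kubectl get daemonset calico-node -n kube-system
```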
All three nodes are now in the Ready state:
```
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   53m   v1.21.0
k8scloude2   Ready    <none>                 45m   v1.21.0
k8scloude3   Ready    <none>                 45m   v1.21.0
```
4.8 Configure kubectl Tab Completion

Look up the kubectl completion command:
```
[root@k8scloude1 ~]# kubectl --help | grep bash
  completion    Output shell completion code for the specified shell (bash or zsh)
```
Add source <(kubectl completion bash) to /etc/profile and reload it:
```
[root@k8scloude1 ~]# cat /etc/profile | head -2
# /etc/profile
source <(kubectl completion bash)
[root@k8scloude1 ~]# source /etc/profile
```
kubectl commands can now be completed with the Tab key:
```
[root@k8scloude1 ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   59m   v1.21.0
k8scloude2   Ready    <none>                 51m   v1.21.0
k8scloude3   Ready    <none>                 51m   v1.21.0

# Note: the bash-completion package (here bash-completion-2.1-6.el7.noarch) is required, otherwise Tab completion will not work
[root@k8scloude1 ~]# rpm -qa | grep bash
bash-completion-2.1-6.el7.noarch
bash-4.2.46-30.el7.x86_64
bash-doc-4.2.46-30.el7.x86_64
```
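If the bash-completion package is missing on your system, it can be installed from the base repository and the profile reloaded (a sketch; a new login shell also works):

```
yum -y install bash-completion
source /etc/profile
```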
With that, the Kubernetes (k8s) cluster deployment is complete!

For more material on installing and deploying Kubernetes on CentOS 7, see the other related articles.