I. Why we collect logs:
• The business keeps growing and so does the number of servers
• Access logs, application logs, and error logs keep piling up
• Developers have to log in to servers to read logs when troubleshooting, which is slow and makes permissions hard to control
• Operations needs a real-time view of how the business is being accessed
II. Container characteristics that make log collection harder:
• K8s elasticity: autoscaling means the collection targets cannot be determined in advance
• Container isolation: a container's filesystem is isolated from the host, so a log collector cannot read the log files directly
III. Application logging falls into two categories (both can be inspected as shown below):
• Standard output: written to the console and visible with kubectl logs (Nginx, for example, logs to the console)
• Log files: written to files on the container's filesystem (Tomcat, for example, writes its logs to files inside the container)
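For example, with placeholder names to fill in (the Tomcat log path shown is the official image's default):

# Standard output: view directly with kubectl
kubectl logs <pod-name> -c <container-name>
# Log file inside the container: only reachable through the container's own filesystem
kubectl exec <pod-name> -c <container-name> -- tail -n 20 /usr/local/tomcat/logs/catalina.out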
IV. Collecting application logs in Kubernetes
For standard output: deploy a log collector on every Node as a DaemonSet and harvest all container logs under /var/lib/docker/containers/
For log files inside containers: add a sidecar container running a log collector to the Pod and share the log directory through an emptyDir volume so the collector can read the log files (see the sketch below)
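To make the emptyDir pattern concrete, here is a minimal sketch of such a Pod; all names and mount paths are illustrative only, and a real Filebeat sidecar would also need its own configuration mounted in:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar        # hypothetical name
spec:
  containers:
  - name: app
    image: tomcat
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs   # the app writes its log files here
  - name: log-collector
    image: elastic/filebeat:7.9.2
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app             # the sidecar reads the same files
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}                          # shared volume, lives as long as the Pod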
V. The EFK Stack logging system
EFK is an acronym for three open-source projects that together form a complete, enterprise-grade logging platform:
• Elasticsearch: searches, analyzes, and stores the data
• Filebeat: a lightweight shipper for local log files; it monitors log directories or specific log files and forwards them to Elasticsearch or Logstash for indexing, or to Kafka, etc.
• Kibana: data visualization
VI. Deploying single-node ES + Filebeat + Kibana to collect Kubernetes application logs
1、集群信息
主機名 |
IP地址 |
節點信息 |
Master |
192.168.31.61 |
master 節點 8核8G |
Node-1 |
192.168.31.63 |
node 節點 8核12G |
Node-2 |
192.168.31.66 |
node 節點 8核12G |
Node-3 |
192.168.31.67 |
node 節點 8核12G |
NFS |
192.168.31.100 |
nfs 存儲節點 8核12G |
2. Software versions
Software               | Version | Notes
kubernetes             | v1.18.6 |
Elasticsearch          | v7.9.2  | single node
Filebeat               | v7.9.2  |
Kibana                 | v7.9.2  |
nfs-client-provisioner | v1.2.8  | dynamic PV
3. Deploy the NFS service
# Create the NFS storage directory
mkdir -p /home/elk
# Install the NFS packages
yum -y install nfs-utils rpcbind
# Add the export
echo "/home/elk *(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
# Start the services
systemctl start nfs-server && systemctl start rpcbind
# Enable them at boot
systemctl enable nfs-server && systemctl enable rpcbind
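You can verify the export from any other machine that has nfs-utils installed; the export list it prints should include /home/elk:

showmount -e 192.168.31.100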
4. Install nfs-utils on all cluster nodes
yum -y install nfs-utils
# Remember: every node needs nfs-utils, otherwise the PVs cannot be mounted
5. Deploy dynamic PVs
5.1 Create a dedicated namespace for the NFS dynamic provisioner
[root@master-1 ~]# kubectl create ns nfs
namespace/nfs created
5.2 Deploy nfs-client-provisioner with Helm
Notes:
(1) nfs-client-provisioner is deployed into the nfs namespace created above
(2) storageClass.name sets the storageClassName that PVCs reference to bind to this dynamic provisioner
(3) the NFS server IP (192.168.31.100) and the exported path (/home/elk) must be specified
# Add the helm charts repo
[root@master-1 es-single-node]# helm repo add helm-stable https://charts.helm.sh/stable
[root@master-1 es-single-node]# helm repo update
cat > elastic-client-nfs.yaml << EOF
# NFS settings
nfs:
  server: 192.168.31.100
  path: /home/elk
storageClass:
  # This name is what PVCs reference to bind to this dynamic provisioner
  name: elastic-nfs-client
  # Reclaim policy: what happens to the data written on the storage backend
  # after the bound PVC is deleted and the resource is released.
  #   Retain:  keep the data on the PV after the PVC is deleted
  #   Recycle: scrub the volume with a simple file deletion (NFS and HostPath only)
  #   Delete:  delete the data on the backing storage (AWS EBS, Azure Disk, Cinder volumes, GCE PD)
  reclaimPolicy: Retain
# Image to use
image:
  repository: kubesphere/nfs-client-provisioner
# Number of replicas
replicaCount: 3
EOF
# Deploy nfs-client-provisioner with helm
[root@master-1 es-single-node]# helm install elastic-nfs-storage -n nfs --values elastic-client-nfs.yaml helm-stable/nfs-client-provisioner --version 1.2.8
5.3 Check that the nfs-client-provisioner pods are running
[root@master-1 es-single-node]# kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
elastic-nfs-storage-nfs-client-provisioner-78c7754777-8kvlg 1/1 Running 0 28m
elastic-nfs-storage-nfs-client-provisioner-78c7754777-vtpn8 1/1 Running 0 28m
elastic-nfs-storage-nfs-client-provisioner-78c7754777-zbx8s 1/1 Running 0 28m
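You can also confirm the StorageClass the chart created (the PROVISIONER string in the output comes from the chart, so yours may differ):

kubectl get storageclass elastic-nfs-client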
6. Deploy the single-node Elasticsearch database
6.1 Create a dedicated namespace for EFK
[root@master-1 es-single-node]# kubectl create ns ops
namespace/ops created
6.2 Create elasticsearch.yaml
cat > elasticsearch.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: ops
  labels:
    k8s-app: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.9.2
        name: elasticsearch
        resources:
          limits:
            cpu: 2
            memory: 3Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: "discovery.type"
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-data
        persistentVolumeClaim:
          claimName: es-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pvc
  namespace: ops
spec:
  # Name of the dynamic-PV StorageClass
  storageClassName: "elastic-nfs-client"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: ops
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    k8s-app: elasticsearch
EOF
[root@master-1 es-single-node]# kubectl apply -f elasticsearch.yaml
deployment.apps/elasticsearch created
persistentvolumeclaim/es-pvc created
service/elasticsearch created
6.3 Check the elasticsearch pod and service status
[root@master-1 es-single-node]# kubectl get pod -n ops -l k8s-app=elasticsearch
NAME READY STATUS RESTARTS AGE
elasticsearch-97f7d74f5-qr6d4 1/1 Running 0 2m41s
[root@master-1 es-single-node]# kubectl get service -n ops
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP 10.0.0.126 <none> 9200/TCP 2m41s
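As a quick sanity check, you can query ES from any cluster node (use your own CLUSTER-IP; a healthy single node replies with a JSON banner containing "cluster_name", "version", and the tagline "You Know, for Search"):

curl -s http://10.0.0.126:9200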
7. Deploy Kibana for visualization
7.1 Create kibana.yaml
cat > kibana.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: ops
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.9.2
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          # The Elasticsearch Service name; remember to append the namespace (.ops)
          value: http://elasticsearch.ops:9200
        - name: I18N_LOCALE
          value: zh-CN
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: ops
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30601
  selector:
    k8s-app: kibana
EOF
[root@master-1 es-single-node]# kubectl apply -f kibana.yaml
deployment.apps/kibana created
service/kibana created
7.2 Check the kibana pod and service status
[root@master-1 es-single-node]# kubectl get pod -n ops -l k8s-app=kibana
NAME READY STATUS RESTARTS AGE
kibana-5c96d89b65-zgphp 1/1 Running 0 7m
[root@master-1 es-single-node]# kubectl get service -n ops
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kibana NodePort 10.0.0.164 <none> 5601:30601/TCP 7m
7.3 Open the Kibana dashboard
Kibana address: http://nodeIP:30601
8. Log collection
8.1 Collecting container stdout logs
The general idea:
• Deploy a Filebeat log-collector Pod on every Node as a DaemonSet, and mount /var/lib/docker/containers into the Filebeat container via hostPath; that directory holds the stdout logs of every container on the node
8.1.1 Create filebeat-kubernetes.yaml
# Quote the heredoc delimiter so the shell does not expand ${path.config} below
cat > filebeat-kubernetes.yaml << 'EOF'
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: ops
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    output.elasticsearch:
      hosts: ['elasticsearch.ops:9200']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: ops
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: ops
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.9.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # The data folder stores a registry of read status for all files,
      # so we don't resend everything when a Filebeat pod restarts.
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: ops
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: ops
  labels:
    k8s-app: filebeat
EOF
[root@master-1 es-single-node]# kubectl apply -f filebeat-kubernetes.yaml
configmap/filebeat-config created
configmap/filebeat-inputs created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
8.1.2 Check that the Filebeat pods are running
[root@master-1 es-single-node]# kubectl get pods -n ops -l k8s-app=filebeat
NAME             READY   STATUS    RESTARTS   AGE
filebeat-j72cb   1/1     Running   0          2m
filebeat-k6d7v   1/1     Running   0          2m
filebeat-vhgns   1/1     Running   0          2m
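To confirm Filebeat actually connected to Elasticsearch, check its logs; a line similar to "Connection to backoff(elasticsearch(http://elasticsearch.ops:9200)) established" means the output is working:

kubectl logs -n ops -l k8s-app=filebeat --tail=100 | grep -i established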
8.1.3 Log in to Kibana to manage indices and add an index pattern
Index management:
(As soon as data lands in ES an index should appear; if it doesn't, try hitting the business service so it produces some log output for ES.)
Under Stack Management on the left, open Index Management; you should see an index named filebeat-7.9.2-2021.03.01-000001 with status open.
Adding an index pattern:
Under Stack Management on the left, open Index Patterns and create an index pattern.
Enter the index pattern name: filebeat-7.9.2-*
This matches the index above, filebeat-7.9.2-2021.03.01-000001.
Select @timestamp as the time field.
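The same check can also be done against the ES API directly (ClusterIP from step 6.3; yours will differ); look for a filebeat-7.9.2-* index whose docs.count is greater than 0:

curl -s http://10.0.0.126:9200/_cat/indices?v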
8.1.4 Launch an nginx Pod to verify the log data
cat > app-log-stdout.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-log-stdout
spec:
  replicas: 3
  selector:
    matchLabels:
      project: stdout-test
      app: nginx-stdout
  template:
    metadata:
      labels:
        project: stdout-test
        app: nginx-stdout
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: app-log-stdout
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    project: stdout-test
    app: nginx-stdout
EOF
[root@master-1 es-single-node]# kubectl apply -f app-log-stdout.yaml
deployment.apps/app-log-stdout created
service/app-log-stdout created
8.1.5 Check the nginx pod and service status
[root@master-1 es-single-node]# kubectl get pods -l app=nginx-stdout
NAME READY STATUS RESTARTS AGE
app-log-stdout-76fb86fcf6-cjch4 1/1 Running 0 2m34s
app-log-stdout-76fb86fcf6-wcfqm 1/1 Running 0 2m34s
app-log-stdout-76fb86fcf6-zgzcc 1/1 Running 0 2m34s
[root@master-1 es-single-node]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-log-stdout ClusterIP 10.0.0.167 <none> 80/TCP 2m41s
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 63d
8.1.6 Hit the nginx Pod to generate access logs
[root@node-1 ~]# curl 10.0.0.167
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
8.1.7 Log in to the Kibana dashboard and search for the nginx logs
Search query: kubernetes.namespace : "default" and message : "curl"
You should see 1 log entry matched.
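If you want more data to explore in Discover, generate a small burst of requests against the Service IP from step 8.1.5; each curl request becomes one matching log entry:

for i in $(seq 1 20); do curl -s 10.0.0.167 > /dev/null; done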
That's all for this installment. Next time: using the EFK stack to collect K8s logs written to files inside containers.