First, we use EFK to collect the logs of a Kubernetes cluster. This walkthrough starts an Elasticsearch cluster inside Kubernetes; if your organization already runs an Elasticsearch cluster, you can send the logs directly to that existing cluster instead.
Deploy Elasticsearch
Download the required deployment files:
git clone https://gitee.com/qfxcoffee/shield.git
cd shield/sh/k8s/efk
Create the namespace used by EFK:
kubectl create -f create-logging-namespace.yaml
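The repository's create-logging-namespace.yaml presumably just defines the namespace; a minimal sketch, assuming it is named logging (check the actual file, the name may differ):

# Hypothetical sketch of create-logging-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logging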
Create the Elasticsearch cluster:
[root@control-plane efk-7.10.2]# kubectl create -f es-service.yaml -f es-statefulset.yaml
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
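Before continuing, it is worth confirming that the Elasticsearch pods come up. A hedged example, assuming the manifests use the logging namespace and the upstream k8s-app=elasticsearch-logging label:

kubectl get pods -n logging -l k8s-app=elasticsearch-logging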
Create Kibana:
[root@control-plane efk-7.10.2]# kubectl create -f kibana-deployment.yaml -f kibana-service.yaml
deployment.apps/kibana-logging created
service/kibana-logging created
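To check that Kibana is running and see how its Service is exposed (again assuming the logging namespace and upstream labels):

kubectl get pods,svc -n logging -l k8s-app=kibana-logging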
Create Fluentd:
In a Kubernetes cluster you may not need to collect logs from every node, so you can modify the Fluentd deployment file as sketched below, adding a nodeSelector so that Fluentd is deployed only to the hosts whose logs should be collected:
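A sketch of the change in fluentd-es-ds.yaml, assuming the label key/value fluentd=true that is applied in the commands below; only the nodeSelector lines are new:

# Excerpt from the DaemonSet pod template in fluentd-es-ds.yaml
spec:
  template:
    spec:
      nodeSelector:
        fluentd: "true"  # schedule Fluentd only on nodes labeled fluentd=true
      # (rest of the pod spec unchanged)

List the nodes, then label the one that should run Fluentd and verify the label: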
[root@control-plane efk-7.10.2]# kubectl get node
NAME STATUS ROLES AGE VERSION
control-plane.minikube.internal Ready control-plane,master 18d v1.23.7
[root@control-plane efk-7.10.2]# kubectl label node control-plane.minikube.internal fluentd=true
[root@control-plane efk-7.10.2]# kubectl get node -l fluentd=true --show-labels
NAME STATUS ROLES AGE VERSION LABELS
control-plane.minikube.internal Ready control-plane,master 18d v1.23.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,fluentd=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane.minikube.internal,kubernetes.io/os=linux,minikube.k8s.io/commit=210b148df93a80eb872ecbeb7e35281b3c582c61,minikube.k8s.io/name=minikube,minikube.k8s.io/primary=true,minikube.k8s.io/updated_at=2024_11_05T14_44_37_0700,minikube.k8s.io/version=v1.34.0,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
With the label in place, create Fluentd:
[root@control-plane efk-7.10.2]# kubectl create -f fluentd-es-ds.yaml -f fluentd-es-configmap.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v3.1.1 created
configmap/fluentd-es-config-v0.2.1 created
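You can verify that the DaemonSet pods landed only on the labeled node; a hedged example, assuming the logging namespace and the upstream k8s-app=fluentd-es label:

kubectl get pods -n logging -l k8s-app=fluentd-es -o wide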
One part of the Fluentd ConfigMap deserves attention: at the end of fluentd-es-configmap.yaml there is an output.conf section, which determines where Fluentd ships the collected logs.
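A sketch of that block, based on the upstream v7.10.2 addon defaults (verify against the file in the repository, details may differ):

output.conf: |-
  <match **>
    @id elasticsearch
    @type elasticsearch
    @log_level info
    include_tag_key true
    host elasticsearch-logging   # point this at an existing ES cluster if you have one
    port 9200
    logstash_format true
    <buffer>
      @type file
      path /var/log/fluentd-buffers/kubernetes.system.buffer
      flush_mode interval
      flush_interval 5s
      retry_forever
    </buffer>
  </match>

Changing host and port here is how you would send the logs to an Elasticsearch cluster that already exists outside Kubernetes, as mentioned at the beginning.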
View Kibana:
http://192.168.56.115:32678/kibana/app/home#/
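Here 192.168.56.115 is the node's IP and 32678 is the NodePort of the kibana-logging Service; if your environment differs, they can be looked up with something like:

kubectl get svc kibana-logging -n logging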
You can see that the entire pipeline is now working end to end.