🚀 This article: how to create a chart of your own.
⭐ Detailed steps:
- Use helm create mychart to create a chart named mychart (the scaffold defaults to nginx)
- Modify the chart to match your actual requirements
- Use helm lint mychart to check the chart format
- Use helm install to dry-run / actually run the chart and verify it behaves as required
- Package the chart and generate index.yaml
The example below creates an Nginx chart; the chart generated by default already uses nginx as its example.
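For quick reference, the whole workflow comes down to a handful of commands; this is only a sketch of the steps above, and the repository URL is a placeholder to replace with your own:
helm create mychart                                              # scaffold the chart
helm lint mychart/                                               # check the format
helm install --debug --dry-run demo ./mychart                    # simulated run
helm package mychart/                                            # produces mychart-0.1.0.tgz
helm repo index mychart/ --url http://your-repo.example.com/helm-repo/   # generate index.yaml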
Step 1: Create the chart
Using helm create mychart
Create the chart:
helm create mychart
# Creating mychart
Inspect the generated directories and files:
mychart/
  Chart.yaml
  values.yaml
  templates/
    deployment.yaml
    hpa.yaml
    ingress.yaml
    serviceaccount.yaml
    service.yaml
    _helpers.tpl        # reusable template helpers for the chart
    NOTES.txt
    tests/
      test-connection.yaml
  charts/
Chart.yaml
Its content is short:
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
# chart version
version: 0.1.0
# application (nginx) version
appVersion: "1.16.0"
values.yaml
As you can see, many settings already exist by default:
- replicaCount: number of replicas
- image: container image settings
- serviceAccount: the service account
- podAnnotations, podSecurityContext: Pod-level settings
- securityContext: container security context
- service, ingress
- resources, autoscaling
- nodeSelector, tolerations, affinity
# Default values for mychart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
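Any of these values can be overridden at install time without editing the file. A minimal sketch (the release name demo and the file my-values.yaml are placeholders):
# override individual values on the command line
helm install demo ./mychart --set replicaCount=2 --set image.tag=1.17.0
# or collect overrides in a separate file and pass it with -f
helm install demo ./mychart -f my-values.yaml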
templates: the template files
_helpers.tpl: template helpers
_helpers.tpl is a file of reusable template helpers; the most common example is the handling of the chart name.
{{/*
Expand the name of the chart.
*/}}
{{- define "mychart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mychart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mychart.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "mychart.labels" -}}
helm.sh/chart: {{ include "mychart.chart" . }}
{{ include "mychart.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "mychart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "mychart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "mychart.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "mychart.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
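To see how nameOverride and fullnameOverride flow through these helpers, you can render the chart locally. A small sketch (the release name demo is arbitrary):
# by default the fullname is <release>-<chart>, i.e. demo-mychart
helm template demo ./mychart | grep "name: demo"
# with fullnameOverride, every resource is named my-nginx instead
helm template demo ./mychart --set fullnameOverride=my-nginx | grep "name: my-nginx"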
serviceaccount.yaml
Creates the ServiceAccount:
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "mychart.serviceAccountName" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}
service.yaml
Creates the Service:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "mychart.selectorLabels" . | nindent 4 }}
deployment.yaml
Creates the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
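Note how the image line falls back to .Chart.AppVersion when image.tag is empty. A quick local check of that behaviour (the tag 1.25.3 is just an arbitrary example):
# with the tag unset, the rendered image is nginx:<appVersion>, i.e. nginx:1.16.0
helm template demo ./mychart -s templates/deployment.yaml | grep image:
# overriding the tag switches it to nginx:1.25.3
helm template demo ./mychart -s templates/deployment.yaml --set image.tag=1.25.3 | grep image: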
hpa.yaml
Creates the HorizontalPodAutoscaler:
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "mychart.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
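autoscaling is disabled by default, so this template normally renders nothing; enabling it also removes the fixed replicas field from the Deployment (see the "if not .Values.autoscaling.enabled" guard there). A small sketch of checking this locally; note that autoscaling/v2beta1 was removed in Kubernetes 1.25, so on newer clusters you would switch this template to autoscaling/v2:
# render the HPA with autoscaling enabled
helm template demo ./mychart -s templates/hpa.yaml --set autoscaling.enabled=true
# confirm the Deployment no longer sets replicas (this grep prints nothing)
helm template demo ./mychart -s templates/deployment.yaml --set autoscaling.enabled=true | grep replicas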
ingress.yaml
Creates the Ingress:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "mychart.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
  {{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
              {{- else }}
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}
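The Ingress is also disabled by default. To preview it, enable it and point it at a host; a minimal sketch in which demo.example.com is a placeholder (the host list is passed in full because --set replaces the default hosts list):
helm template demo ./mychart -s templates/ingress.yaml \
  --set ingress.enabled=true \
  --set ingress.className=nginx \
  --set ingress.hosts[0].host=demo.example.com \
  --set ingress.hosts[0].paths[0].path=/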
NOTES.txt
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
  {{- range .paths }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
  {{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "mychart.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "mychart.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "mychart.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "mychart.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
tests/test-connection.yaml
A chart test is used to verify that the chart works as expected. Tests can also be grouped under a tests/ directory, which makes it easy to define more tests and keeps them better separated from the regular templates.
A test lives under the templates/ directory. It specifies a container that runs a particular task, and the task definition must carry the test hook annotation: "helm.sh/hook": test.
A test is essentially a Helm hook, so annotations such as helm.sh/hook-weight and helm.sh/hook-delete-policy can be applied to tests as well.
apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "mychart.fullname" . }}-test-connection"
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ include "mychart.fullname" . }}:{{ .Values.service.port }}']
  restartPolicy: Never
How to run the tests:
helm install demo ./mychart --namespace default
# run the chart tests
helm test demo
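helm test creates the test pod defined above and reports whether it succeeded; recent Helm 3 releases can also print the test pod's logs:
# run the chart tests and dump the test pod output
helm test demo --logs
# the test pod remains afterwards and can be inspected or deleted
kubectl get pods -n default | grep test-connection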
Step 2: Check the chart format
helm lint mychart/
# ==> Linting mychart/
# [INFO] Chart.yaml: icon is recommended
#
# 1 chart(s) linted, 0 chart(s) failed
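If you also want lint warnings to fail the check, a stricter mode is available:
# treat lint warnings as errors
helm lint --strict mychart/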
Step 3: Simulated run
Adding --dry-run to helm install performs a simulated run. The command:
helm install --debug --dry-run goodly-guppy ./mychart
The output looks like this:
- Highlight 1: basic release information
- Highlight 2: values data, both the user-supplied values and the final computed values
- Highlight 3: Helm hooks, here the test hook that runs the connection test
- Highlight 4: the Kubernetes manifest that would actually be applied
- Highlight 5: the rendered NOTES.txt
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /home/userlocal/mychart
#### Highlight 1: basic release information
NAME: goodly-guppy
LAST DEPLOYED: Mon May 27 14:28:22 2024
NAMESPACE: default
STATUS: pending-install
REVISION: 1
#### Highlight 2: values data, user-supplied values + final computed values
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: false
  maxReplicas: 100
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations: {}
  className: ""
  enabled: false
  hosts:
  - host: chart-example.local
    paths:
    - path: /
      pathType: ImplementationSpecific
  tls: []
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
tolerations: []
#### Highlight 3: Helm hooks, here the test hook that runs the connection test
HOOKS:
---
# Source: mychart/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "goodly-guppy-mychart-test-connection"
  labels:
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: goodly-guppy
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['goodly-guppy-mychart:80']
  restartPolicy: Never
#### Highlight 4: the Kubernetes manifest that would actually be applied
MANIFEST:
---
# Source: mychart/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: goodly-guppy-mychart
  labels:
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: goodly-guppy
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: goodly-guppy-mychart
  labels:
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: goodly-guppy
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: goodly-guppy
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodly-guppy-mychart
  labels:
    helm.sh/chart: mychart-0.1.0
    app.kubernetes.io/name: mychart
    app.kubernetes.io/instance: goodly-guppy
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mychart
      app.kubernetes.io/instance: goodly-guppy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mychart
        app.kubernetes.io/instance: goodly-guppy
    spec:
      serviceAccountName: goodly-guppy-mychart
      securityContext:
        {}
      containers:
        - name: mychart
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
#### Highlight 5: the rendered NOTES.txt
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=goodly-guppy" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
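If you only need the rendered manifests and not a simulated release record, helm template produces essentially the same output without requiring access to a cluster:
# render all templates locally and save them
helm template goodly-guppy ./mychart > rendered.yaml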
Once the dry run (or a real install) works as expected, the chart can be packaged and uploaded to a repository.
Step 4: Package and upload to a repository
Package and upload
Package the chart first, then manually upload mychart-0.1.0.tgz to the chart repository.
# step 1: package the chart (note: no signing here)
helm package mychart/
# Successfully packaged chart and saved it to: /home/userlocal/mychart-0.1.0.tgz
# step 2: move the package into mychart/; without this, the index.yaml generated later will be empty!
mv mychart-0.1.0.tgz mychart/
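As the comment above notes, the package is unsigned. helm package can also bump versions and sign the archive if you have a GPG key; a hedged sketch in which the key name and keyring path are placeholders:
# override the chart/app version while packaging
helm package mychart/ --version 0.2.0 --app-version 1.17.0
# sign the package (also produces a .prov provenance file next to the .tgz)
helm package mychart/ --sign --key "mykey" --keyring ~/.gnupg/secring.gpg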
Generate index.yaml and upload it
There are two ways to generate index.yaml:
- Generate it directly; the resulting index.yaml contains only this single chart
- Generate it by merging: take the repository's existing index.yaml and add the current chart to it
Generating an index.yaml containing a single chart
The commands:
# generate index.yaml
helm repo index mychart/ --url http://nexus.hengtiansoft.space/repository/helm-repo/
# inspect index.yaml
cat mychart/index.yaml
# apiVersion: v1
# entries:
#   mychart:
#   - apiVersion: v2
#     appVersion: 1.16.0
#     created: "2024-05-27T15:01:22.810201593+08:00"
#     description: A Helm chart for Kubernetes
#     digest: 0a07369207529008da67a724dda35b86b4721b3dfe9dffcabb2ce767c2267b9d
#     name: mychart
#     type: application
#     urls:
#     - http://nexus.hengtiansoft.space/repository/helm-repo/mychart-0.1.0.tgz
#     version: 0.1.0
# generated: "2024-05-27T15:01:22.808723439+08:00"
Generating index.yaml by merging
# change into the chart directory
cd mychart/
# download the repository's index.yaml
wget http://nexus.xxx.space/repository/helm-repo/index.yaml
# merge the current chart into it and regenerate index.yaml
helm repo index . --merge index.yaml
# inspect the merged index.yaml
cat index.yaml | grep -A 5 mychart
#   mychart:
#   - apiVersion: v2
#     appVersion: 1.16.0
#     created: "2024-05-27T15:09:00.3621419+08:00"
#     description: A Helm chart for Kubernetes
#     digest: 0a07369207529008da67a724dda35b86b4721b3dfe9dffcabb2ce767c2267b9d
#     name: mychart
#     type: application
#     urls:
#     - mychart-0.1.0.tgz
#     version: 0.1.0
#   mysql:
#   - apiVersion: v1
#     appVersion: 8.0.33
#     created: "2023-10-30T02:54:37.437Z"
If you use Nexus as the chart repository, simply upload the chart tgz package and then run Rebuild Index to refresh the index.yaml file.
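Once the package and index.yaml are in place, you can verify the repository end to end by adding it and installing from it; a minimal sketch in which myrepo and demo are placeholder names and the URL is the one used in the examples above:
helm repo add myrepo http://nexus.hengtiansoft.space/repository/helm-repo/
helm repo update
helm search repo mychart
helm install demo myrepo/mychart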