
Deploying MySQL on k8s with one master and two slaves


The complete scripts can be downloaded from Gitee: https://gitee.com/qiaodaimadewangcai/study-notes/tree/master/k8s/k8s部署mysql主从/yaml

I. Problems to solve when deploying MySQL master-slave on k8s

Problems

  1. Startup order matters: the master node must start before the slave nodes
  2. If a node dies, the replacement pod must reuse the original pod's resources
  3. The master and slave configurations are different
  4. After the master starts, a replication account must be created and authorized; each slave must run the change master command and the command to start replication
  5. The account name and password should be configurable by us
  6. The slaves need to know the master node's address

Solutions

  1. Use a StatefulSet so the pod replicas start in ordinal order; simply treat pod-0 as the master
  2. Use PVs and PVCs: by binding each pod to its own PVC via labels, a restarted pod keeps using its original storage
  3. Use a ConfigMap to supply the required configuration when the container is initialized
  4. Use an initContainer to run the required scripts when the container is initialized
  5. Use a Secret to keep the passwords confidential
  6. Use a headless Service + DNS so the slaves can reach the master by hostname; the hostname is fixed as podName.serviceName, e.g. with serviceName mysql the master's hostname is mysql-0.mysql (see the quick check below)
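Once everything below is deployed, the DNS side of point 6 can be verified from a throwaway pod. This is only a hypothetical check (the busybox image and pod name are arbitrary), not part of the deployment:

# Resolve the master's stable hostname from inside the cluster
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -n mysql \
  -- nslookup mysql-0.mysql.mysql.svc.cluster.local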

Deployment approach

  1. Write the namespace script to create a dedicated namespace
  2. Write the ConfigMap holding the MySQL configuration files
  3. Write the Secret script holding the required passwords
  4. Write the initContainer scripts (shown separately for reference) that decide from the hostname whether a pod is master or slave and run the corresponding commands
  5. Write the PV and PVC scripts to request disk resources (the PVs/PVCs are created automatically through a StorageClass)
  6. Write the headless Service script that wires up the network relationship between the MySQL pods
  7. Write the StatefulSet script that initializes the containers (the full apply order is summarized below)
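For reference, once the eight manifests from the following sections exist, they are applied in this order:

kubectl apply -f 01-mysql-namespace.yaml
kubectl apply -f 02-mysql-configmap.yaml
kubectl apply -f 03-mysql-secret.yaml
kubectl apply -f 04-mysql-rbac.yaml
kubectl apply -f 05-mysql-nfs-storageclass.yaml
kubectl apply -f 06-mysql-nfs-provisioner-deployment.yaml
kubectl apply -f 07-mysql-service.yaml
kubectl apply -f 08-mysql-statefulset.yaml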

II. Deployment

Deployment notes

Software     Version
MySQL        8.0.21
Kubernetes   1.23.10
Docker       20.10.17

Prerequisites

  1. A working k8s cluster
  2. The cluster has NFS or something similar integrated as its storage abstraction

1. Write the namespace script

01-mysql-namespace.yaml

apiVersion: v1
#create a resource of kind Namespace
kind: Namespace
metadata:
  #resource name
  name: mysql
  #label app: mysql
  labels:
    app: mysql

Related commands

#Apply the manifest
kubectl apply -f 01-mysql-namespace.yaml
#List namespaces
kubectl get ns

2. Write the ConfigMap script

02-mysql-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
data:
  #multiple configuration files are defined here
  master.cnf: |
    # Master configuration
    [mysqld]
    datadir=/var/lib/mysql
    pid-file=/var/run/mysqld/mysqld.pid
    socket=/var/run/mysql/mysql.sock
    log-error=/var/log/mysql/error.log
    #binlog base name; must match master_log_file in the slave's change master command
    log-bin=mysql-bin
    skip-name-resolve
    lower-case-table-names=1
    log_bin_trust_function_creators=1
  slave.cnf: |
    # Slave configuration
    [mysqld]
    datadir=/var/lib/mysql
    pid-file=/var/run/mysqld/mysqld.pid
    socket=/var/run/mysql/mysql.sock
    log-error=/var/log/mysql/error.log
    super-read-only
    skip-name-resolve
    log-bin=mysql-bin
    lower-case-table-names=1
    log_bin_trust_function_creators=1

 

Related commands

#Apply the manifest
kubectl apply -f 02-mysql-configmap.yaml
#List ConfigMaps in the mysql namespace
kubectl get cm -n mysql
#Show details of the ConfigMap named mysql in the mysql namespace
kubectl describe configmap mysql -n mysql
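As an optional sanity check (a hypothetical command, assuming the manifest above was applied unchanged), a single key can be printed to verify that the file contents survived the YAML block scalars:

#Print only the master.cnf key (note the escaped dot in the jsonpath expression)
kubectl get configmap mysql -n mysql -o jsonpath='{.data.master\.cnf}'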

3. Write the Secret script

03-mysql-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: mysql
  labels:
    app: mysql
#Opaque data is a map whose values must be base64-encoded.
type: Opaque
data:
  password: YTEyMzQ1NiE= #"a123456!" base64-encoded: echo -n "a123456!" | base64
  #account used for replication
  replicationUser: Y29weQ== #copy
  replicationPassword: YTEyMzQ1NiE= #a123456!

Related commands

#Apply the manifest
kubectl apply -f 03-mysql-secret.yaml
#List Secrets in the mysql namespace
kubectl get secret -n mysql
#Show details of the Secret named mysql-secret in the mysql namespace
kubectl describe secret mysql-secret -n mysql
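To confirm the stored values decode back to the intended plaintext (a hypothetical check; kubectl itself only shows the base64 form):

#Decode the root password straight from the cluster
kubectl get secret mysql-secret -n mysql -o jsonpath='{.data.password}' | base64 -d
#Or verify locally before applying
echo -n "a123456!" | base64      # -> YTEyMzQ1NiE=
echo "YTEyMzQ1NiE=" | base64 -d  # -> a123456!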

4. Write the initContainer scripts

Note: these scripts end up being used when the StatefulSet is created; they are shown here separately only for clarity.

1) Copy the configuration file into the corresponding container

set -ex
#Extract the ordinal from the pod's hostname (the part after the "-"); exit if it cannot be determined
ordinal=`hostname | awk -F"-" '{print $2}'` || exit 1
#Write the server-id into its own config file; the path is flexible (it just has to match the mounts used later), but the file name must not change
echo [mysqld] > /etc/mysql/conf.d/server-id.cnf
# server-id must not be 0, so add 100 to the ordinal to avoid it
echo server-id=$((100 + $ordinal)) >> /etc/mysql/conf.d/server-id.cnf
if [[ ${ordinal} -eq 0 ]]; then
  # Ordinal 0 means this pod is the master: copy the master configuration from the ConfigMap mount into /etc/mysql/conf.d
  cp /mnt/config-map/master.cnf /etc/mysql/conf.d
else
  # Otherwise copy the slave configuration from the ConfigMap
  cp /mnt/config-map/slave.cnf /etc/mysql/conf.d
fi
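A quick sanity check of the ordinal-to-server-id mapping this produces (a stand-alone sketch with a hard-coded hostname, not part of the deployment):

#For pod mysql-1 the second "-"-separated field is 1, so server-id becomes 101
ordinal=$(echo "mysql-1" | awk -F"-" '{print $2}')
echo "server-id=$((100 + ordinal))"   # prints: server-id=101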

2) Initialize MySQL replication

set -ex
cd /var/lib/mysql
# Check for a marker file named mysqlInitOk (created by us) to avoid re-initializing the replication setup
if [ ! -f mysqlInitOk ]; then
  echo "Waiting for mysqld to be ready (accepting connections)"
  # Run a trivial query to check whether mysqld has finished initializing; retry until it succeeds
  until mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "use mysql;SELECT 1;"; do sleep 1; done
  echo "Initialize ready"
  # Determine whether this pod is the master or a slave
  pod_seq=`hostname | awk -F"-" '{print $2}'`
  if [ $pod_seq -eq 0 ];then
    # Create the replication account
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "create user '${MYSQL_REPLICATION_USER}'@'%' identified by '${MYSQL_REPLICATION_PASSWORD}';"
    # Grant replication privileges
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "grant replication slave on *.* to '${MYSQL_REPLICATION_USER}'@'%' with grant option;"
    # MySQL 8: switch the account to the native password plugin
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "ALTER USER '${MYSQL_REPLICATION_USER}'@'%' IDENTIFIED WITH mysql_native_password BY '${MYSQL_REPLICATION_PASSWORD}';"
    # Flush privileges
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "flush privileges;"
    # Initialize the master
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "reset master;"
  else
    # Point the slave at the master
    # mysql-0.mysql.mysql comes from {pod-name}.{service-name}.{namespace}
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e \
    "change master to master_host='mysql-0.mysql.mysql',master_port=3306, \
    master_user='${MYSQL_REPLICATION_USER}',master_password='${MYSQL_REPLICATION_PASSWORD}', \
    master_log_file='mysql-bin.000001',master_log_pos=156;"
    # Reset the slave
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "reset slave;"
    # Start replication
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "start slave;"
    # Switch the node to read-only mode
    mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "set global read_only=1;"
  fi
  # Create the marker file so the cluster is not initialized again
  touch mysqlInitOk
fi
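Note that the change master statement hard-codes master_log_file/master_log_pos to the values a freshly reset MySQL 8.0 master typically reports. If the environment differs, the real coordinates can be read from the master first (a hypothetical manual check run on mysql-0):

#Show the master's current binlog file and position after "reset master"
mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "show master status;"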

 

5. Write the StorageClass-related scripts

StorageClass + NFS is used as the network storage; the PVCs and PVs are generated automatically later.

The NFS client must be installed on every k8s node. For setting up the NFS server itself, see the separate nfs guide; the server used here is:

IP: 192.168.56.80
Export PATH: /mnt
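Before continuing, it is worth confirming each node can actually reach the export (a hypothetical check using the server details above; /tmp/nfs-test is an arbitrary mount point):

#List the exports offered by the NFS server
showmount -e 192.168.56.80
#Optionally test-mount the export and unmount it again
mkdir -p /tmp/nfs-test
mount -t nfs 192.168.56.80:/mnt /tmp/nfs-test && umount /tmp/nfs-test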

1) Write the ServiceAccount, ClusterRole, ClusterRoleBinding, Role, and RoleBinding scripts that manage NFS

04-mysql-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: mysql
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: mysql
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: mysql
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: mysql
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

 

Related commands

#Apply the manifest
kubectl apply -f 04-mysql-rbac.yaml
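Optionally confirm the RBAC objects were created (a hypothetical check using the names from the manifest above):

kubectl get serviceaccount nfs-client-provisioner -n mysql
kubectl get clusterrole nfs-client-provisioner-runner
kubectl get clusterrolebinding run-nfs-client-provisioner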

2) Write the StorageClass script

05-mysql-nfs-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: mysql-nfs-storage #must match the PROVISIONER_NAME environment variable in the provisioner Deployment
volumeBindingMode: WaitForFirstConsumer
parameters:
  archiveOnDelete: "true" #archive the underlying data even when the PVC is deleted

Related commands

#Apply the manifest
kubectl apply -f 05-mysql-nfs-storageclass.yaml
#Show StorageClass information
kubectl get sc
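Because volumeBindingMode is WaitForFirstConsumer, PVCs created from this class stay Pending until a pod that uses them is scheduled; that is normal and should not be confused with the provisioning problems described at the end of this post. The mode can be confirmed with a hypothetical check:

kubectl describe sc managed-nfs-storage | grep -i volumebindingmode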

3) Write the Deployment script for the nfs-provisioner

The NFS client must be installed on every k8s node, otherwise this part will fail to run.

06-mysql-nfs-provisioner-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: mysql  #must match the namespace in the RBAC file
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: mysql-nfs-storage  #provisioner name; make sure it matches the provisioner in 05-mysql-nfs-storageclass.yaml
            - name: NFS_SERVER
              value: 192.168.56.80   #NFS server IP address
            - name: NFS_PATH  
              value: /mnt    #NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.56.80  #NFS server IP address
            path: /mnt     #NFS export path

 

Related commands

#Apply the manifest
kubectl apply -f 06-mysql-nfs-provisioner-deployment.yaml
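Every PVC created later depends on this provisioner, so confirm its pod is actually Running before moving on (a hypothetical check; the pod name carries a random suffix, so select it by label):

kubectl get pods -n mysql -l app=nfs-client-provisioner
kubectl logs -n mysql -l app=nfs-client-provisioner --tail=20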

6. Write the Service script

07-mysql-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
spec:
  selector:
    #match pods labeled app: mysql
    app: mysql
  clusterIP: None
  ports:
  - name: mysql
    port: 3306

Related commands

#Apply the manifest
kubectl apply -f 07-mysql-service.yaml
#Show Service information in the mysql namespace
kubectl get svc -n mysql
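Because clusterIP is None this is a headless Service: it gets no virtual IP, and DNS resolves directly to the pod addresses, which is what gives each pod its stable mysql-N.mysql name. Once the StatefulSet from the next step is running, this can be inspected (hypothetical check):

#CLUSTER-IP shows None; the endpoints list the pod IPs once the pods exist
kubectl get svc mysql -n mysql
kubectl get endpoints mysql -n mysql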

7. Write the StatefulSet script

08-mysql-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  #must match the Service name in 07-mysql-service.yaml
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:8.0.21
        command: 
        - bash
        - "-c"
        - |
          set -ex
          #Extract the ordinal from the pod's hostname (the part after the "-"); exit if it cannot be determined
          ordinal=`hostname | awk -F"-" '{print $2}'` || exit 1
          #Write the server-id into its own config file; the path is flexible (it just has to match the mounts below), but the file name must not change
          echo [mysqld] > /etc/mysql/conf.d/server-id.cnf
          # server-id must not be 0, so add 100 to the ordinal to avoid it
          echo server-id=$((100 + $ordinal)) >> /etc/mysql/conf.d/server-id.cnf
          if [[ ${ordinal} -eq 0 ]]; then
            # Ordinal 0 means this pod is the master: copy the master configuration from the ConfigMap mount into /etc/mysql/conf.d
            cp /mnt/config-map/master.cnf /etc/mysql/conf.d
          else
            # Otherwise copy the slave configuration from the ConfigMap
            cp /mnt/config-map/slave.cnf /etc/mysql/conf.d
          fi
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        - name: MYSQL_REPLICATION_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationUser
        - name: MYSQL_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationPassword
        volumeMounts:
        - name: conf
          mountPath: /etc/mysql/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      containers:
      - name: mysql
        image: mysql:8.0.21
        lifecycle:
         postStart:
          exec:
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
              # Check for a marker file named mysqlInitOk (created by us) to avoid re-initializing the replication setup
              if [ ! -f mysqlInitOk ]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                # Run a trivial query to check whether mysqld has finished initializing; retry until it succeeds
                until mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "use mysql;SELECT 1;"; do sleep 1; done
                echo "Initialize ready"
                # Determine whether this pod is the master or a slave
                pod_seq=`hostname | awk -F"-" '{print $2}'`
                if [ $pod_seq -eq 0 ];then
                  # Create the replication account
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "create user '${MYSQL_REPLICATION_USER}'@'%' identified by '${MYSQL_REPLICATION_PASSWORD}';"
                  # Grant replication privileges
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "grant replication slave on *.* to '${MYSQL_REPLICATION_USER}'@'%' with grant option;"
                  # MySQL 8: switch the account to the native password plugin
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "ALTER USER '${MYSQL_REPLICATION_USER}'@'%' IDENTIFIED WITH mysql_native_password BY '${MYSQL_REPLICATION_PASSWORD}';"
                  # Flush privileges
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "flush privileges;"
                  # Initialize the master
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "reset master;"
                else
                  # Point the slave at the master
                  # mysql-0.mysql.mysql comes from {pod-name}.{service-name}.{namespace}
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e \
                  "change master to master_host='mysql-0.mysql.mysql',master_port=3306, \
                  master_user='${MYSQL_REPLICATION_USER}',master_password='${MYSQL_REPLICATION_PASSWORD}', \
                  master_log_file='mysql-bin.000001',master_log_pos=156;"
                  # Reset the slave
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "reset slave;"
                  # Start replication
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "start slave;"
                  # Switch the node to read-only mode
                  mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "set global read_only=1;"
                fi
                # Create the marker file so the cluster is not initialized again
                touch mysqlInitOk
              fi
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: password
        - name: MYSQL_REPLICATION_USER
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationUser
        - name: MYSQL_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: replicationPassword
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        - name: run-mysql
          mountPath: /var/run/mysql
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
        #liveness probe
        livenessProbe:
          exec:
            #run through bash so the MYSQL_ROOT_PASSWORD environment variable is expanded
            command: ["bash", "-c", "mysqladmin ping -uroot -p${MYSQL_ROOT_PASSWORD}"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        #readiness probe
        readinessProbe:
          exec:
            command: ["bash", "-c", "mysqladmin ping -uroot -p${MYSQL_ROOT_PASSWORD}"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      volumes:
      - name: config-map
        #this volume is backed by the ConfigMap
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      #must match metadata.name in 05-mysql-nfs-storageclass.yaml
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 5Gi
  - metadata: 
      name: conf
    spec:
      accessModes:
      - ReadWriteOnce
      #must match metadata.name in 05-mysql-nfs-storageclass.yaml
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 100Mi
  - metadata: 
      name: run-mysql
    spec:
      accessModes:
      - ReadWriteOnce
      #must match metadata.name in 05-mysql-nfs-storageclass.yaml
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 100Mi

 

Related commands

#Apply the manifest
kubectl apply -f 08-mysql-statefulset.yaml
#Show PVC information in the mysql namespace
kubectl get pvc -n mysql
kubectl describe pvc data-mysql-0  -n mysql
#Show PV information (PVs are cluster-scoped, so no namespace is needed)
kubectl get pv
#Show the pods in the mysql namespace
kubectl get pod -n mysql
#Check the slave status of MySQL on pod mysql-1 in the mysql namespace
kubectl -n mysql exec mysql-1 -c mysql -- bash -c "mysql -uroot -pa123456! -e 'show slave status \G'"
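As a final end-to-end check, a write on the master should show up on the slaves (a hypothetical test using the root password from the Secret; test_repl is just a throwaway database name):

#Create a database on the master ...
kubectl -n mysql exec mysql-0 -c mysql -- mysql -uroot -pa123456! -e "create database if not exists test_repl;"
#... and confirm it was replicated to a slave
kubectl -n mysql exec mysql-2 -c mysql -- mysql -uroot -pa123456! -e "show databases like 'test_repl';"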

If kubectl get pvc -n mysql keeps showing the PVCs as Pending, you are most likely on Kubernetes 1.20 or later and need to edit the apiserver manifest to re-enable the SelfLink feature.

vim /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --feature-gates=RemoveSelfLink=false # add this line

If that still does not work, check whether the nfs-client-provisioner is reporting errors:

#Find the pod name
kubectl get pods -n mysql
#Tail the logs of the corresponding pod
kubectl logs -f nfs-client-provisioner-xxxxx -n mysql

If the logs show something like unable to create directory to provision new pv: mkdir /persistentvolumes/mysql-data-mysql-0-pvc-1bb47, grant write permission on the exported directory on the NFS server:

chmod -R 777 xxxx
#for example
chmod -R 777 /mnt

Reposted from: https://blog.csdn.net/qq_34886352/article/details/128064376