MySQL Distributed High Availability: PXC (Percona XtraDB Cluster)
1. Install the Percona yum repository (on all three nodes)
Install the yum repository:
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
(If the download is slow on Linux, you can download the rpm on a Windows machine first, copy it over, and install it locally:)
[root@yulong-mysql1 ~]# yum localinstall percona-release-latest.noarch.rpm
2. Install the PXC packages (on all three nodes)
[root@yulong-mysql1 ~]# yum install Percona-XtraDB-Cluster-57 -y
(Online installation can be slow; I recommend downloading the offline bundle from the official site and installing it locally.)
Offline bundle download: https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/
[root@yulong-mysql1 ~]# mkdir ./pxc
[root@yulong-mysql1 ~]# tar -xvf Percona-XtraDB-Cluster-5.7.28-31.41-r514-el7-x86_64-bundle.tar -C ./pxc
[root@yulong-mysql1 ~]# cd pxc/
[root@yulong-mysql1 pxc]# yum localinstall Percona-XtraDB-Cluster-* -y
3. Configure /etc/hosts so the nodes can resolve each other's hostnames (on all three nodes)
[root@yulong-mysql1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.136.105 yulong-mysql1
192.168.136.106 yulong-mysql2
192.168.136.107 yulong-mysql3
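The three host entries above can be added with a small idempotent helper, so re-running it on a node never duplicates lines. This is only a convenience sketch; the IPs and hostnames are the ones used in this walkthrough:

```shell
# Append each cluster host entry to the hosts file unless it is already present.
add_hosts_entries() {
    hosts_file="$1"   # normally /etc/hosts
    while read -r ip name; do
        # match on the hostname so an existing entry is never duplicated
        grep -q " $name\$" "$hosts_file" || echo "$ip $name" >> "$hosts_file"
    done <<'EOF'
192.168.136.105 yulong-mysql1
192.168.136.106 yulong-mysql2
192.168.136.107 yulong-mysql3
EOF
}

# Usage (run on every node):
# add_hosts_entries /etc/hosts
```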
4. Enable and start the service, and make sure it started successfully (on all three nodes)
[root@yulong-mysql1 ~]# systemctl enable mysql
[root@yulong-mysql1 ~]# systemctl start mysql
[root@yulong-mysql1 ~]# systemctl status mysql | grep Active
Active: active (running) since Sun 2020-03-08 22:29:34 CST; 22min ago
5. Open the firewall ports and disable SELinux (on all three nodes). PXC needs four ports: 3306 for MySQL client connections, 4444 for SST (State Snapshot Transfer), 4567 for Galera replication traffic, and 4568 for IST (Incremental State Transfer).
[root@yulong-mysql1 ~]# firewall-cmd --add-port=3306/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --add-port=4444/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --add-port=4567/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --add-port=4568/tcp --permanent
success
[root@yulong-mysql1 ~]# firewall-cmd --reload
success
[root@yulong-mysql1 ~]# setenforce 0
[root@yulong-mysql1 ~]# getenforce
Permissive
6. Change the initial database password (on all three nodes)
[root@yulong-mysql1 ~]# grep 'temporary password' /var/log/mysqld.log
2020-03-08T14:29:23.633083Z 1 [Note] A temporary password is generated for root@localhost: _OrqlE7d,uYr
[root@yulong-mysql1 ~]# mysql -uroot -p'_OrqlE7d,uYr'
mysql> alter user 'root'@'localhost' identified by 'Com.123!';
Query OK, 0 rows affected (0.14 sec)
mysql> exit
Bye
7. Create the SST sync user and grant it privileges (on all three nodes)
mysql> create user 'sstuser'@'localhost' identified by 'Com.123!';
Query OK, 0 rows affected (0.00 sec)
mysql> grant reload,lock tables, process, replication client on *.* to 'sstuser'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
8. Stop the MySQL service on node one, and configure it for write-set replication
Edit the configuration files (node one)
[root@yulong-mysql1 ~]# systemctl stop mysql
[root@yulong-mysql1 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.136.105,192.168.136.106,192.168.136.107
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.136.105
wsrep_cluster_name=pxc-cluster
wsrep_node_name=yulong-mysql1
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:Com.123!"
[root@yulong-mysql1 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
symbolic-links=0
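Across the three nodes, wsrep.cnf and mysqld.cnf differ only in wsrep_node_address, wsrep_node_name, and server-id, so the per-node files can be generated from a single template. A hypothetical templating sketch (NODE_IP, NODE_NAME, and SERVER_ID are placeholder tokens invented here, not PXC settings):

```shell
# Fill in the per-node values in a config template read from stdin.
render_node_cnf() {
    ip="$1"; name="$2"; id="$3"
    sed -e "s/NODE_IP/$ip/" \
        -e "s/NODE_NAME/$name/" \
        -e "s/SERVER_ID/$id/"
}

# Usage, e.g. for node two (wsrep.cnf.tmpl is a hypothetical template file):
# render_node_cnf 192.168.136.106 yulong-mysql2 2 < wsrep.cnf.tmpl \
#     > /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
```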
9. Bootstrap the first node to initialize the cluster.
[root@yulong-mysql1 ~]# systemctl start mysql@bootstrap.service
#Side note: to bootstrap again, you must first kill the existing mysqld process before bootstrapping a second time
[root@yulong-mysql1 ~]# ps aux | grep ^mysql
mysql 29817 2.9 21.3 1361928 212788 ? Sl 18:15 0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --wsrep-provider=/usr/lib64/galera3/libgalera_smm.so --wsrep-new-cluster --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --wsrep_start_position=70317cc3-61d4-11ea-a388-e250f22e8ee5:4
[root@yulong-mysql1 ~]# kill 29817
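Instead of grepping ps output for the PID, the bootstrapped mysqld can be stopped through its pid-file (the pid-file path is the one set in mysqld.cnf above). A minimal sketch:

```shell
# Send SIGTERM to the process named in a pid-file for a clean shutdown.
stop_by_pidfile() {
    pidfile="$1"
    [ -f "$pidfile" ] || return 1   # nothing to stop
    kill "$(cat "$pidfile")"
}

# Usage on node one:
# stop_by_pidfile /var/run/mysqld/mysqld.pid
```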
10. View the bootstrap status (node one)
[root@yulong-mysql1 ~]# mysql -uroot -p'Com.123!'
mysql> show status like 'wsrep%';
+----------------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid | 70317cc3-61d4-11ea-a388-e250f22e8ee5 |
| wsrep_protocol_version | 9 |
| wsrep_last_applied | 4 |
| wsrep_last_committed | 4 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 2 |
| wsrep_received_bytes | 152 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.500000 |
| wsrep_local_cached_downto | 0 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 100, 100 ] |
| wsrep_flow_control_interval_low | 100 |
| wsrep_flow_control_interval_high | 100 |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 1320 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_open_transactions | 0 |
| wsrep_open_connections | 0 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.136.105:3306 |
| wsrep_cluster_weight | 1 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 666a19fd-61ec-11ea-90bc-8f6458220847 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 70317cc3-61d4-11ea-a388-e250f22e8ee5 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.41(rb3295e6) |
| wsrep_ready | ON |
+----------------------------------+--------------------------------------+
71 rows in set (0.01 sec)
11. Configure node two
Stop the MySQL service
[root@yulong-mysql2 ~]# systemctl stop mysql
Edit the configuration files
[root@yulong-mysql2 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.136.105,192.168.136.106,192.168.136.107
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.136.106
wsrep_cluster_name=pxc-cluster
wsrep_node_name=yulong-mysql2
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:Com.123!"
[root@yulong-mysql2 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
server-id=2
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
symbolic-links=0
12. Add node two to the cluster created on node one
[root@yulong-mysql2 ~]# systemctl start mysql
13. Edit the configuration files (node three)
[root@yulong-mysql3 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/wsrep.cnf
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.136.105,192.168.136.106,192.168.136.107
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_slave_threads= 8
wsrep_log_conflicts
innodb_autoinc_lock_mode=2
wsrep_node_address=192.168.136.107
wsrep_cluster_name=pxc-cluster
wsrep_node_name=yulong-mysql3
pxc_strict_mode=ENFORCING
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:Com.123!"
[root@yulong-mysql3 ~]# grep -v '^#' /etc/percona-xtradb-cluster.conf.d/mysqld.cnf
[client]
socket=/var/lib/mysql/mysql.sock
[mysqld]
server-id=3
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-bin
log_slave_updates
expire_logs_days=7
symbolic-links=0
14. Add node three to the cluster created on node one
[root@yulong-mysql3 ~]# systemctl start mysql
15. Check the cluster status from any node:
mysql> show status like 'wsrep%';
+----------------------------------+----------------------------------------------------------------+
| Variable_name | Value |
+----------------------------------+----------------------------------------------------------------+
| wsrep_local_state_uuid | 70317cc3-61d4-11ea-a388-e250f22e8ee5 |
| wsrep_protocol_version | 9 |
| wsrep_last_applied | 4 |
| wsrep_last_committed | 4 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 10 |
| wsrep_received_bytes | 846 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.100000 |
| wsrep_local_cached_downto | 0 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [ 173, 173 ] |
| wsrep_flow_control_interval_low | 173 |
| wsrep_flow_control_interval_high | 173 |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 3088 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_open_transactions | 0 |
| wsrep_open_connections | 0 |
| wsrep_ist_receive_status | |
| wsrep_ist_receive_seqno_start | 0 |
| wsrep_ist_receive_seqno_current | 0 |
| wsrep_ist_receive_seqno_end | 0 |
| wsrep_incoming_addresses | 192.168.136.105:3306,192.168.136.106:3306,192.168.136.107:3306 |
| wsrep_cluster_weight | 3 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | e12b7b36-61ee-11ea-a11c-0b10452ad3e7 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 70317cc3-61d4-11ea-a388-e250f22e8ee5 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.41(rb3295e6) |
| wsrep_ready | ON |
+----------------------------------+----------------------------------------------------------------+
71 rows in set (0.00 sec)
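Most of the 71 rows can be ignored day to day; a handful of variables tell you whether the cluster is healthy. A small filtering sketch (the variable selection here is my own choice, not an official checklist):

```shell
# Keep only the wsrep variables worth watching from a full status dump,
# fed on stdin as tab-separated "name<TAB>value" lines.
wsrep_health() {
    awk '$1 ~ /^wsrep_(cluster_size|cluster_status|local_state_comment|ready|connected)$/ {
        print $1 "=" $2
    }'
}

# Usage on any node (-N -s gives headerless, tab-separated output):
# mysql -uroot -p'Com.123!' -N -s -e "SHOW STATUS LIKE 'wsrep%'" | wsrep_health
```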
16. Test that written data is replicated
(1) Create a database on node one
mysql> create database yulong;
Query OK, 1 row affected (0.00 sec)
(2) Check on node two
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| yulong |
+--------------------+
5 rows in set (0.01 sec)
(3) Check on node three
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| yulong |
+--------------------+
5 rows in set (0.00 sec)
#As we can see, the nodes are already in sync
17. Stop node one, then restart it and let it rejoin the cluster
#First stop the MySQL service on node one
[root@yulong-mysql1 ~]# systemctl stop mysql
[root@yulong-mysql1 ~]# systemctl stop mysql@bootstrap.service
#Create a table in the yulong database on node two
mysql> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.00 sec)
#Then start the MySQL service on node one
[root@yulong-mysql1 ~]# systemctl start mysql
[root@yulong-mysql1 ~]# mysql -uroot -p'Com.123!'
mysql> use yulong;
Database changed
mysql> show tables;
+------------------+
| Tables_in_yulong |
+------------------+
| example |
+------------------+
1 row in set (0.00 sec)
#Check the database's wsrep_cluster_size
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.00 sec)
#As we can see, node one has caught up with the latest data, and the cluster is back to 3 members.
18. Power off all three servers, then bring them back up and restore the cluster
(1) Shutdown order: node1 first, node2 second, node3 last
(2) Now safe_to_bootstrap in /var/lib/mysql/grastate.dat is 0 on node1 and node2
(3) On node3, safe_to_bootstrap in /var/lib/mysql/grastate.dat is 1
(4) So we must bootstrap again from the node that was shut down last, i.e. node3:
[root@yulong-mysql3 ~]# systemctl start mysql@bootstrap.service
(5) Start the MySQL service on node1 and node2
[root@yulong-mysql1 ~]# systemctl start mysql
[root@yulong-mysql2 ~]# systemctl start mysql
(6) Check the cluster status on node3
[root@yulong-mysql3 ~]# mysql -uroot -pCom.123!
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.05 sec)
(7) If the node that was shut down last cannot be brought back up, pick any one of
the remaining nodes, set safe_to_bootstrap to 1 in its /var/lib/mysql/grastate.dat,
bootstrap that node, and then run systemctl start mysql on the other nodes to
restore the cluster.
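Step (7) can be scripted. A hypothetical helper that flips safe_to_bootstrap on the chosen surviving node (assumes the default datadir from mysqld.cnf above):

```shell
# Force safe_to_bootstrap to 1 in grastate.dat so the node may bootstrap.
mark_safe_to_bootstrap() {
    grastate="$1"   # normally /var/lib/mysql/grastate.dat
    sed -i 's/^safe_to_bootstrap:.*$/safe_to_bootstrap: 1/' "$grastate"
}

# Usage on the chosen node, then bootstrap it:
# mark_safe_to_bootstrap /var/lib/mysql/grastate.dat
# systemctl start mysql@bootstrap.service
```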
Reposted from: https://blog.csdn.net/weixin_43740875/article/details/104763057