
keepalived high availability; simple lb01 load balancing; keepalived split-brain; load balancing with Discuz; creating the database


keepalived high availability

keepalived is software used inside an enterprise to eliminate single points of failure.

What is high availability?

It generally means two machines running exactly the same business system: when one machine goes down, the other takes over quickly, and the switchover is invisible to the accessing users.

How does keepalived implement high availability?

keepalived is built on VRRP, the Virtual Router Redundancy Protocol, whose main purpose is to solve the single-point-of-failure problem.

How is automatic failover achieved? This is where VRRP comes in: through software or hardware, VRRP places a virtual MAC address (VMAC) and a virtual IP address (VIP) in front of the Master and the Backup. When a client requests the VIP, it does not matter whether the Master or the Backup answers the request; the client's ARP cache only ever records the VMAC and the VIP.

Core keepalived concepts

1. How are the master and backup nodes determined? (election, priority)
2. If the Master fails and the Backup takes over, will the recovered Master seize the VIP back? (preemptive vs. non-preemptive)
3. What happens if both servers believe they are the Master? (split-brain)
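Point 1, the election, can be sketched in shell. This is a simplification, not keepalived's actual code; the priorities (100 and 90) and node IPs match the configuration used later in this article, and the tie-break rule (highest IP wins) follows the VRRP specification.

```shell
#!/bin/sh
# Sketch of VRRP master election: the node with the highest priority
# wins; on a tie, the highest primary IP address is the tie-breaker.
elect_master() {
    prio_a=$1; ip_a=$2; prio_b=$3; ip_b=$4
    if [ "$prio_a" -gt "$prio_b" ]; then
        echo "node ${ip_a} is MASTER"
    elif [ "$prio_b" -gt "$prio_a" ]; then
        echo "node ${ip_b} is MASTER"
    else
        # tie: version-sort the two IPs and pick the highest
        higher=$(printf '%s\n%s\n' "$ip_a" "$ip_b" | sort -V | tail -n1)
        echo "node ${higher} is MASTER"
    fi
}

elect_master 100 172.16.1.5 90 172.16.1.6   # lb01 (priority 100) wins
```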

Installing and configuring keepalived

Implementing load balancing on lb01

#web servers
[root@web01 conf.d]# vim linux.com.conf
server {
    listen 80;
    server_name _;
    charset utf8;

    location / {
        root /code/node;
        index index.html;
    }
}
#create the directory and set ownership on web02 and web03 as well
[root@web01 conf.d]# mkdir /code/node -p
[root@web01 conf.d]# chown -R www.www /code

[root@web01 conf.d]# echo "This is web01......." > /code/node/index.html
[root@web02 conf.d]# echo "This is web02......." > /code/node/index.html
[root@web03 conf.d]# echo "This is web03....." > /code/node/index.html
#restart nginx (!sy recalls the last systemctl command from history)
[root@web01 conf.d]# !sy
systemctl restart nginx 

#load balancer lb01
[root@lb01 conf.d]# vim lb.conf

upstream http {
        server 172.16.1.7:80;
        server 172.16.1.8:80;
        server 172.16.1.9:80;
}
server {
        listen 443 ssl;
        server_name _;
        ssl_certificate /etc/nginx/ssl_key/server.crt;
        ssl_certificate_key /etc/nginx/ssl_key/server.key;
        location / {
                proxy_pass http://http;
        }
}
server {
        listen 80;
        server_name linux.lb.com;
        rewrite (.*) https://$server_name$request_uri;
}
#add the domain to the hosts file and test access
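The port-80 server block redirects every request to HTTPS. How nginx builds the redirect target can be sanity-checked by assembling it the same way the `rewrite (.*) https://$server_name$request_uri` line does; the helper function below is purely illustrative, using the `server_name` from this config and a sample URI.

```shell
#!/bin/sh
# Mimic nginx's `rewrite (.*) https://$server_name$request_uri`:
# the redirect target is the configured server_name plus the original URI.
redirect_target() {
    server_name=$1; request_uri=$2
    echo "https://${server_name}${request_uri}"
}

redirect_target linux.lb.com /index.html   # -> https://linux.lb.com/index.html
```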

Environment

Host   IP             Role
lb01   172.16.1.5     master
lb02   172.16.1.6     backup
VIP    192.168.15.3

Make sure the configuration on lb01 and lb02 is completely identical.

# store the nginx config for lb01 and lb02 on nfs
# create a directory to export
[root@nfs nfs]# mkdir lb
# set ownership
[root@nfs nfs]# chown www.www lb/
# add the export
[root@nfs nfs]# vim /etc/exports
/nfs/lb       172.16.1.0/20(rw,sync,all_squash,anonuid=1000,anongid=1000)
# restart the nfs services
[root@nfs nfs]# systemctl restart nfs-server rpcbind
# mount it on both lb01 and lb02
[root@lb01 ~]# mount -t nfs 172.16.1.31:/nfs/lb /etc/nginx/conf.d/
[root@lb02 ~]# mount -t nfs 172.16.1.31:/nfs/lb /etc/nginx/conf.d/
# write the config
[root@lb02 ~]# cat /etc/nginx/conf.d/http.conf 
upstream http {
	server 172.16.1.7:80;
	server 172.16.1.8:80;
	server 172.16.1.9:80;
}
server {
	listen 443 ssl;
	server_name _;
	ssl_certificate /etc/nginx/cert/server.crt;
	ssl_certificate_key /etc/nginx/cert/server.key;
	location / {
		proxy_pass http://http;
	}
}
server {
	listen 80;
	server_name 192.168.15.5;
	rewrite (.*) https://$server_name$request_uri;
}
[root@lb02 ~]# systemctl restart nginx

Installing keepalived

[root@lb01 ~]# yum install -y keepalived
[root@lb02 ~]# yum install -y keepalived

Preemptive mode

Configuring the keepalived master node

#list the package's config files
[root@lb01 ~]# rpm -qc keepalived
/etc/keepalived/keepalived.conf
/etc/sysconfig/keepalived

#edit the master node's config file
[root@lb01 ~]# vim /etc/keepalived/keepalived.conf 
global_defs {					#global settings
   router_id lb01				#node identifier
}

vrrp_instance VI_1 {
    state MASTER				#role: only MASTER or BACKUP; MASTER is primary, BACKUP is standby
    interface eth0				#interface the heartbeat is bound to
    virtual_router_id 51		#virtual router id: nodes with the same id form one master/backup group
    priority 100				#priority (the value that actually decides master vs backup; higher wins)
    advert_int 3				#advertisement interval in seconds
    authentication {			#authentication
        auth_type PASS			#authentication method
        auth_pass 1111			#authentication password
    }
    virtual_ipaddress {
       192.168.15.3				#the virtual IP (VIP)
    }
}



Configuring the keepalived backup node

[root@lb02 ~]# vim /etc/keepalived/keepalived.conf 
global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.15.3
    }
}

Configuration differences

Setting                  MASTER node   BACKUP node
router_id (unique id)    lb01          lb02
state (role)             MASTER        BACKUP
priority                 100           90

Starting keepalived and verifying

#start the backup node first
[root@lb02 ssl_key]# systemctl start keepalived.service 

#check the ip
[root@lb02 ssl_key]# ip addr | grep 192.168.15.3
inet 192.168.15.3/32 scope global eth0

#start the master node
[root@lb01 ssl_key]# systemctl start keepalived.service

#check lb01's ip
[root@lb01 ssl_key]# ip addr | grep 192.168.15.3
inet 192.168.15.3/32 scope global eth0
#192.168.15.3/32 has now disappeared from lb02

Directing keepalived logs to their own file

#configure keepalived
[root@lb01 ~]# vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -d -S 0"    #-S sets the syslog facility (0 = local0)

#configure rsyslog to capture the logs
[root@lb01 ~]# vim /etc/rsyslog.conf
local0.*		/var/log/keepalived.log

#restart the services
[root@lb01 ~]# systemctl restart keepalived
[root@lb01 ~]# systemctl restart rsyslog

By default keepalived claims the virtual VIP preemptively. Preemptive takeover of the VIP can easily make the network unstable, because every recovery of the higher-priority node triggers yet another failover.
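How long a failover takes depends on advert_int. Per the VRRP specification, a backup declares the master dead after roughly 3 * advert_int plus a priority-based skew time of (256 - priority) / 256 seconds; with the values configured above (advert_int 3, backup priority 90) that works out as follows. This is the protocol-level formula, not a keepalived-specific guarantee.

```shell
#!/bin/sh
# Master_Down_Interval = 3 * advert_int + (256 - priority) / 256  (seconds)
advert_int=3
priority=90
awk -v a="$advert_int" -v p="$priority" \
    'BEGIN { printf "master declared down after %.3f s\n", 3*a + (256-p)/256 }'
# prints: master declared down after 9.648 s
```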

Non-preemptive mode

Non-preemptive mode is what is usually deployed, because one outage-induced switchover is quite enough.

1. Change the state on both nodes: both must be BACKUP.
2. Add nopreempt on both nodes.
3. Keep the priorities different.
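The difference between the two modes boils down to what a recovered higher-priority node does. A sketch of that decision (the priorities are this tutorial's lb01/lb02 values; the function is illustrative, not keepalived's code):

```shell
#!/bin/sh
# Decide whether a recovered node takes the VIP back.
# mode is "preempt" or "nopreempt".
takes_vip_back() {
    mode=$1; recovered_prio=$2; current_master_prio=$3
    if [ "$mode" = "preempt" ] && [ "$recovered_prio" -gt "$current_master_prio" ]; then
        echo "yes: recovered node preempts the VIP"
    else
        echo "no: VIP stays where it is"
    fi
}

takes_vip_back preempt   100 90   # default mode: lb01 grabs the VIP back
takes_vip_back nopreempt 100 90   # nopreempt: lb01 waits until the current master fails
```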

#master node config
[root@lb01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id lb01
}
vrrp_instance VI_1 {
	state BACKUP 		# for a non-preemptive VIP, the state must be the same on all nodes
	interface eth0
	virtual_router_id 51
	priority 100
	nopreempt			# enable non-preemptive VIP
	advert_int 3
	authentication {
		auth_type PASS
		auth_pass 1111
	}
	virtual_ipaddress {
		192.168.15.3
	}
}
#backup node config
[root@lb02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id lb02
}
vrrp_instance VI_1 {
	state BACKUP 		# for a non-preemptive VIP, the state must be the same on all nodes
	interface eth0
	virtual_router_id 51
	priority 50
	nopreempt			# enable non-preemptive VIP
	advert_int 3
	authentication {
		auth_type PASS
		auth_pass 1111
	}
	virtual_ipaddress {
		192.168.15.3
	}
}

Split-brain in keepalived

For some reason, the two keepalived servers cannot detect each other's liveness within the expected interval, so each claims the resources and starts serving on its own, while both servers are in fact alive and working.

How does the backup node know the master is down?

The backup node keeps pinging the VIP held by the master node. As long as the master replies, it has evidently not gone down; if the master stops replying, the backup brings up the VIP on itself.

Causes of split-brain

1. A loose network cable or other network failure on a server.
2. Hardware damage on a server.
3. A firewall enabled between the master and backup servers.

Enabling the firewall

[root@lb01 ~]# systemctl start firewalld
[root@lb02 ~]# systemctl start firewalld

#with the firewall up the site is unreachable from a browser; the http and https services must be allowed through
[root@lb02 ~]# firewall-cmd --add-service=http
[root@lb02 ~]# firewall-cmd --add-service=https

How to resolve keepalived split-brain




-eq		equal to
-ne		not equal to
-ge		greater than or equal to
-gt		greater than
-le		less than or equal to
-lt		less than
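A quick demo of these numeric test operators, which the detection scripts in this section rely on:

```shell
#!/bin/sh
# Numeric test operators in action
a=0; b=2
[ "$a" -eq 0 ] && echo "a equals 0"
[ "$b" -ne 0 ] && echo "b is non-zero"
[ "$b" -ge "$a" ] && echo "b >= a"
[ "$a" -lt "$b" ] && echo "a < b"
```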


#stop one of the services
[root@lb02 ~]# systemctl stop keepalived

#check for split-brain

set up passwordless ssh first
[root@lb01 ~]# ssh-keygen -t rsa
[root@lb01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.6

[root@lb02 ~]# ssh-keygen -t rsa 
[root@lb02 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.5
# while both the master and the backup are serving (script-based detection)

[root@lb01 ~]# systemctl start firewalld
[root@lb02 ~]# systemctl start firewalld

#split-brain appears once the firewalls are up
#deploy the script on lb01

[root@lb01 ~]# cat check_vrrp.sh
#!/bin/bash
VIP="192.168.15.3"
MASTERIP="172.16.1.5"
BACKUPIP="172.16.1.6"

while true; do
    # double quotes so ${VIP} expands when the probe command is built
    PROBE="ip a | grep ${VIP}"
    ssh ${MASTERIP}  "${PROBE}" > /dev/null
    MASTER_STATUS=$?
    ssh ${BACKUPIP}  "${PROBE}" > /dev/null
    BACKUP_STATUS=$?
    # both nodes hold the VIP -> split-brain: stop keepalived on the backup
    if [[ $MASTER_STATUS -eq 0 && $BACKUP_STATUS -eq 0 ]];then
        ssh ${BACKUPIP}  "systemctl stop keepalived.service"
    fi
    sleep 2
done
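Leaving the ssh probes aside, the decision inside the loop reduces to: if both nodes report that they hold the VIP, stop keepalived on the backup. That logic can be tested in isolation (the function below is a distillation, not part of the original script; 0 means "VIP present", matching the grep exit status):

```shell
#!/bin/sh
# Pure decision logic of the split-brain monitor, given the exit
# status of the VIP probe on the master and on the backup.
split_brain_action() {
    master_status=$1; backup_status=$2
    if [ "$master_status" -eq 0 ] && [ "$backup_status" -eq 0 ]; then
        echo "split-brain: stop keepalived on backup"
    else
        echo "ok: at most one node holds the VIP"
    fi
}

split_brain_action 0 0   # both hold the VIP -> split-brain
split_brain_action 0 1   # only the master holds it -> ok
```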

nginx failover script

The domain resolves to the VIP.

nginx listens on all IPs by default.

Deploy on both machines.

#if nginx itself dies, user requests fail, but keepalived is still running, so the VIP stays on the machine whose nginx is dead and the business is affected;
#we need a script that checks nginx's state: if nginx is down, first try to restart it; if it will not start, stop keepalived so the VIP can move

# deploy the nginx check script on both machines
[root@lb01 ~]# cat web_check.sh 
#!/bin/bash

# count running nginx processes ([n]ginx keeps grep from matching itself)
nginxnum=`ps -ef | grep [n]ginx | wc -l`

if [ $nginxnum -eq 0 ];then
  systemctl start nginx
  sleep 3
  nginxnum=`ps -ef | grep [n]ginx | wc -l`

  # still down after the restart attempt: release the VIP
  if [ $nginxnum -eq 0 ];then
    systemctl stop keepalived.service
  fi
fi
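The script's decision path can be traced in isolation from ps and systemctl; in this sketch (which only mirrors the logic above, it is not the deployed script) the restart outcome is simulated by a parameter.

```shell
#!/bin/sh
# Decision logic of web_check.sh: if nginx has no processes, try a
# restart; if it is still down afterwards, keepalived must be stopped
# so the VIP can move. restart_works simulates systemctl's outcome.
nginx_failover_decision() {
    nginx_procs=$1; restart_works=$2
    if [ "$nginx_procs" -eq 0 ]; then
        if [ "$restart_works" = "yes" ]; then
            echo "nginx restarted, keep the VIP"
        else
            echo "stop keepalived, let the VIP float away"
        fi
    else
        echo "nginx healthy, nothing to do"
    fi
}

nginx_failover_decision 3 yes   # nginx running normally
nginx_failover_decision 0 no    # dead and unrecoverable -> release the VIP
```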

#make the script executable
[root@lb01 ~]# chmod +x web_check.sh
#stop nginx and run the script to test
[root@lb01 ~]# systemctl stop nginx
[root@lb01 ~]# ./web_check.sh
[root@lb01 ~]# systemctl restart keepalived.service 
[root@lb02 ~]# systemctl restart keepalived.service 


#conclusion: the script checks whether nginx is broken and restarts it; if the restart fails it kills keepalived, so the VIP floats away without users noticing


#hook the script into keepalived

[root@lb01 ~]# vim /etc/keepalived/keepalived.conf 
global_defs {
   router_id lb01
}

#run the script every 5 seconds; it must finish within 5 seconds, otherwise it is launched again and loops forever
vrrp_script check_web {
    script "/root/web_check.sh"
    interval 5
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.15.3
    }
    #invoke the check script
	track_script {
    	check_web
	}
}
[root@lb02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
   router_id lb02
}
vrrp_script check_web {
    script "/root/web_check.sh"
    interval 5
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 50
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.15.3
    }
    track_script {
    	check_web
    }
}


#make the script executable (it can also be run in the background by appending &)
[root@lb01 ~]# chmod +x web_check.sh

======================================================================

Load balancing: Discuz

Web servers (web01, web02, web03)

#mount
[root@web01 conf.d]# mount -t nfs 172.16.1.31:/nfs/web /www/
[root@web01 conf.d]# mount -t nfs 172.16.1.31:/nfs/conf /etc/nginx/conf.d/
#create a certificate and push it out
[root@web01 ~]# mkdir /etc/nginx/ssl_key
[root@web01 ~]# cd /etc/nginx/ssl_key/

[root@web01 ssl_key]# openssl genrsa -idea -out server.key 2048
[root@web01 ssl_key]# openssl req -days 36500 -x509 -sha256 -nodes -newkey rsa:2048 -keyout server.key -out server.crt

[root@web01 conf.d]# scp -r /etc/nginx/ssl_key/ 172.16.1.5:/etc/nginx/
Note: push to web02, web03, and both load balancers

#use the same www user everywhere; configure web01

[root@web01 conf.d]# vim discuz.conf 

server {
    listen 80;
    server_name linux.discuz.com;

    location / {
        root /www/upload;
        index index.php;
    }

    location ~* \.php$ {
        root /www/upload;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS on;
        include fastcgi_params;
    }
}

#configure load balancing on lb01
[root@lb01 conf.d]# vim lb.conf 

upstream web {
        server 172.16.1.7:80;
        server 172.16.1.8:80;
        server 172.16.1.9:80;
}

server {
        listen 80;
        server_name linux.discuz.com;
        rewrite (.*) https://$server_name$1;
}

server {
        listen 443 ssl;
        server_name linux.discuz.com;

        ssl_certificate /etc/nginx/ssl_key/server.crt;
        ssl_certificate_key /etc/nginx/ssl_key/server.key;

        location / {
                proxy_pass http://web;
                include proxy_params;
        }
}


#add the hosts entry and open the site
192.168.15.5  linux.discuz.com

Creating the database

[root@db01 ~]# yum install -y mariadb-server
[root@db01 ~]# systemctl enable --now mariadb
[root@db01 ~]# useradd www
[root@db01 ~]# mkdir /databases
[root@db01 ~]# chown -R www.www /databases/
[root@db01 ~]# mount -t nfs 172.16.1.31:/nfs/database /databases/
[root@db01 ~]# vim mysql_dump.sh
#!/bin/bash
DATE=`date +%F`
BACKUP="/databases"
cd $BACKUP
mysqldump -uroot -p123 --all-databases --single-transaction > mysql-all-${DATE}.sql
tar -czf mysql-all-${DATE}.tar.gz mysql-all-${DATE}.sql
rm -rf mysql-all-${DATE}.sql

[root@db01 ~]# chmod +x mysql_dump.sh
[root@db01 ~]# ./mysql_dump.sh
[root@db01 ~]# crontab -e 
01 00 * * * /databases/mysql_dump.sh
[root@db01 ~]# mv mysql_dump.sh /databases/
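The backup script names its archive after the current day via `date +%F`. That naming convention can be checked without touching MySQL; the helper below reproduces only the filename logic (the mysqldump and tar steps are deliberately left out).

```shell
#!/bin/sh
# Reproduce the archive name mysql_dump.sh generates for today:
# mysql-all-YYYY-MM-DD.tar.gz, since `date +%F` prints YYYY-MM-DD.
backup_name() {
    echo "mysql-all-$(date +%F).tar.gz"
}

backup_name   # e.g. mysql-all-2021-05-11.tar.gz
```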

Create the database
[root@db01 ~]# mysqladmin -u root password '123'
Log in to the database
[root@db01 ~]# mysql -uroot -p123

MariaDB [(none)]> show databases;
MariaDB [(none)]> create database discuz;
MariaDB [(none)]> grant all privileges on discuz.* to www@'%' identified by '123';

MariaDB [(none)]> use mysql
MariaDB [(none)]> select host,user from user;
MariaDB [(none)]> drop database discuz;


[root@db01 ~]# systemctl restart mariadb

Reposted from: https://blog.csdn.net/yangenguang/article/details/116648687