
Kubernetes: Deploying a Multi-Master Binary Cluster


Contents

1. Analysis of the multi-master binary cluster

2. Lab environment

3. Deployment steps

Building the single-master k8s cluster

Setting up the master2 node

Deploying the load balancers

Pointing the nodes at the VIP and creating a pod

Deploying the k8s Dashboard


1. Analysis of the multi-master binary cluster

  • Unlike the single-master binary cluster, a multi-master cluster makes the control plane highly available: if master1 goes down, the load balancer fails the VIP over to master2, which keeps the masters reliable.
  • The core of a multi-node setup is a single well-known address. When building the single-master cluster we already wrote the VIP into the k8s-cert.sh script (192.168.18.100 in that walkthrough; this lab uses 192.168.43.100). The VIP fronts the apiserver, and both masters open their apiserver ports to accept requests forwarded from the nodes. When a new node joins, it does not contact a master directly; it sends its apiserver request to the VIP, which dispatches it to one of the masters, and that master then issues the certificate to the node (see the quick check below).
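In concrete terms, "a single well-known address" means that every node-side kubeconfig carries the VIP as its server endpoint. A quick way to confirm this on any node once the rewiring in section 3 is done (a sketch, assuming the /opt/kubernetes/cfg layout used throughout this lab):

  ## every file should report server: https://192.168.43.100:6443
  grep "server:" /opt/kubernetes/cfg/*.kubeconfig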

2. Lab environment

Role        IP address          OS & resources       Components
master1     192.168.43.101/24   CentOS 7.4 (2C/2G)   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master2     192.168.43.104/24   CentOS 7.4 (2C/2G)   kube-apiserver, kube-controller-manager, kube-scheduler
node1       192.168.43.102/24   CentOS 7.4 (2C/2G)   kubelet, kube-proxy, docker, flannel, etcd
node2       192.168.43.103/24   CentOS 7.4 (2C/2G)   kubelet, kube-proxy, docker, flannel, etcd
nginx_lbm   192.168.43.105/24   CentOS 7.4 (2C/2G)   nginx, keepalived
nginx_lbb   192.168.43.106/24   CentOS 7.4 (2C/2G)   nginx, keepalived
VIP         192.168.43.100/24   -                    -
  • This lab builds on the single-master deployment and adds a second master, master2.
  • nginx provides the load balancing; keepalived makes the load balancers themselves highly available.

Note: since version 1.9, nginx supports layer-4 (TCP) forwarding through the added stream module, which is what provides the load balancing here.
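To confirm that an installed nginx build actually ships the stream module, you can check its configure arguments (a quick check, not part of the original procedure; the official nginx.org packages are built with --with-stream):

  nginx -V 2>&1 | grep -o -- '--with-stream'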

  • keepalived provides the virtual IP address that fronts the masters, and the nodes access the cluster through it.

3. Deployment steps

Building the single-master k8s cluster

This lab starts from an existing single-master deployment; those steps are covered in the earlier article and are not repeated here.

Setting up the master2 node

Operations on master1

  • Copy the relevant files and scripts to master2

  
  ## Recursively copy everything under /opt/kubernetes and /opt/etcd to master2
  [root@master ~]# scp -r /opt/kubernetes/ root@192.168.43.104:/opt/
  The authenticity of host '192.168.43.104 (192.168.43.104)' can't be established.
  ECDSA key fingerprint is SHA256:AJdR3BBN9kCSEk3AVfaZuyrxhNMoDnzGMOMWlP1gUaQ.
  ECDSA key fingerprint is MD5:d4:ab:7b:82:c3:99:b8:5d:61:f2:dc:af:06:38:e7:6c.
  Are you sure you want to continue connecting (yes/no)? yes
  Warning: Permanently added '192.168.43.104' (ECDSA) to the list of known hosts.
  root@192.168.43.104's password:
  token.csv                          100%   84     5.2KB/s   00:00
  kube-apiserver                     100%  934   353.2KB/s   00:00
  kube-scheduler                     100%   94    41.2KB/s   00:00
  kube-controller-manager            100%  483   231.5KB/s   00:00
  kube-apiserver                     100%  184MB  19.4MB/s   00:09
  kubectl                            100%   55MB  24.4MB/s   00:02
  kube-controller-manager            100%  155MB  26.7MB/s   00:05
  kube-scheduler                     100%   55MB  31.1MB/s   00:01
  ca-key.pem                         100% 1679   126.0KB/s   00:00
  ca.pem                             100% 1359   514.8KB/s   00:00
  server-key.pem                     100% 1675   501.4KB/s   00:00
  server.pem                         100% 1643   649.4KB/s   00:00
  ## master2 also needs the etcd certificates; without them its apiserver cannot start
  [root@master ~]# scp -r /opt/etcd/ root@192.168.43.104:/opt/
  root@192.168.43.104's password:
  etcd                               100%  516    64.2KB/s   00:00
  etcd                               100%   18MB  25.7MB/s   00:00
  etcdctl                            100%   15MB  25.9MB/s   00:00
  ca-key.pem                         100% 1675   118.8KB/s   00:00
  ca.pem                             100% 1265   603.2KB/s   00:00
  server-key.pem                     100% 1675   675.3KB/s   00:00
  server.pem                         100% 1338   251.5KB/s   00:00
  [root@master ~]#
  ## Copy the systemd unit files to master2
  [root@master ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.43.104:/usr/lib/systemd/system/
  root@192.168.43.104's password:
  kube-apiserver.service             100%  282   30.3KB/s   00:00
  kube-controller-manager.service    100%  317   45.9KB/s   00:00
  kube-scheduler.service             100%  281  151.7KB/s   00:00

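If you would rather not retype the password for every scp, you can push an SSH key to master2 first (an optional convenience, not part of the original procedure):

  [root@master ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
  [root@master ~]# ssh-copy-id root@192.168.43.104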
Operations on master2

  • Basic environment setup

  
  ## Change the hostname
  [root@localhost ~]# hostnamectl set-hostname master2
  [root@localhost ~]# su
  ## Permanently disable the firewall and SELinux
  [root@master2 ~]# systemctl stop firewalld
  [root@master2 ~]# systemctl disable firewalld
  Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
  Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
  [root@master2 ~]# setenforce 0
  [root@master2 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
  • Change the IP addresses in the kube-apiserver configuration (a sed alternative follows the listing)

  
  [root@master2 ~]# cd /opt/kubernetes/cfg/
  [root@master2 cfg]# ls
  kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
  [root@master2 cfg]# vi kube-apiserver
  KUBE_APISERVER_OPTS="--logtostderr=true \
  --v=4 \
  --etcd-servers=https://192.168.43.101:2379,https://192.168.43.102:2379,https://192.168.43.103:2379 \
  ## change the bind address to master2's own address
  --bind-address=192.168.43.104 \
  --secure-port=6443 \
  ## change the advertised address as well
  --advertise-address=192.168.43.104 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.0.0.0/24 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
  --authorization-mode=RBAC,Node \
  --kubelet-https=true \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/opt/kubernetes/ssl/server.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/etcd/ssl/ca.pem \
  --etcd-certfile=/opt/etcd/ssl/server.pem \
  --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
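The same two edits can be made non-interactively with sed (a sketch; it assumes the file copied from master1 still carries master1's address, 192.168.43.101, in both flags):

  ## rewrite the bind and advertise addresses to master2's own address
  [root@master2 cfg]# sed -i \
      -e 's/--bind-address=192.168.43.101/--bind-address=192.168.43.104/' \
      -e 's/--advertise-address=192.168.43.101/--advertise-address=192.168.43.104/' \
      /opt/kubernetes/cfg/kube-apiserver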
  • Start the services and verify

  
  ## Start the apiserver
  [root@master2 cfg]# systemctl start kube-apiserver.service
  [root@master2 cfg]# systemctl enable kube-apiserver.service
  Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
  ## Start the controller manager
  [root@master2 cfg]# systemctl start kube-controller-manager.service
  [root@master2 cfg]# systemctl enable kube-controller-manager.service
  Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
  ## Start the scheduler
  [root@master2 cfg]# systemctl start kube-scheduler.service
  [root@master2 cfg]# systemctl enable kube-scheduler.service
  Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
  ## Add the k8s binaries to PATH
  [root@master2 cfg]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
  [root@master2 cfg]# source /etc/profile
  ## List the cluster nodes; seeing them means master2 joined successfully
  [root@master2 cfg]# kubectl get node
  NAME             STATUS   ROLES    AGE   VERSION
  192.168.43.102   Ready    <none>   26h   v1.12.3
  192.168.43.103   Ready    <none>   26h   v1.12.3
  [root@master2 cfg]#

Note: a master can only be added this way if its address was listed in server-csr.json when the single-master cluster was deployed; otherwise the apiserver certificate does not cover it and the new master cannot join. See the illustrative excerpt below.
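For reference, a hosts list in server-csr.json that covers this lab would look roughly as follows (illustrative only; the exact list must match what was used when the apiserver certificate was generated, and conventionally also carries the first service IP, loopback, and the standard kubernetes service names):

  "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.43.100",
      "192.168.43.101",
      "192.168.43.104",
      "192.168.43.105",
      "192.168.43.106",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
  ]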

Deploying the load balancers

Perform the following on both nginx_lbm and nginx_lbb; nginx_lbm is shown as the example.

  • Prepare the keepalived configuration template

  
  ## keepalived.conf has been uploaded to both nginx_lbm and nginx_lbb
  [root@nginx_lbm ~]# ls
  anaconda-ks.cfg  initial-setup-ks.cfg  keepalived.conf  公共 模板 视频 图片 文档 下载 音乐 桌面
  [root@nginx_lbm ~]# cat keepalived.conf
  ! Configuration File for keepalived
  global_defs {
     # address that receives notification mail
     notification_email {
       acassen@firewall.loc
       failover@firewall.loc
       sysadmin@firewall.loc
     }
     # sender address for notification mail
     notification_email_from Alexandre.Cassen@firewall.loc
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     router_id NGINX_MASTER
  }
  vrrp_script check_nginx {
      script "/usr/local/nginx/sbin/check_nginx.sh"
  }
  vrrp_instance VI_1 {
      state MASTER
      interface eth0
      virtual_router_id 51    # VRRP route ID; must be unique per instance
      priority 100            # priority; set to 90 on the backup server
      advert_int 1            # VRRP advertisement interval, default 1 second
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          10.0.0.188/24
      }
      track_script {
          check_nginx
      }
  }
  [root@nginx_lbm ~]#
  • Disable the firewall and SELinux

  
  systemctl stop firewalld.service
  setenforce 0
  • Add the nginx yum repository and install nginx

  
  [root@nginx_lbm ~]# cat /etc/yum.repos.d/nginx.repo
  [nginx]
  name=nginx repo
  baseurl=http://nginx.org/packages/centos/7/$basearch/
  gpgcheck=0
  ## Reload the yum repo metadata
  [root@nginx_lbm ~]# yum list
  ## Install nginx
  [root@nginx_lbm ~]# yum install nginx -y
  • Add the load-balancing block to the nginx configuration and start the service

  
  ## Configure the load-balancing function
  [root@nginx_lbm ~]# vi /etc/nginx/nginx.conf
  ## Insert the following below line 12 (between the events block and the http block)
  12 stream {
  13
  14     log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
  15     access_log /var/log/nginx/k8s-access.log main;
  16
  17     upstream k8s-apiserver {
  18         # master1's address
  19         server 192.168.43.101:6443;
  20         # master2's address
  21         server 192.168.43.104:6443;
  22     }
  23     server {
  24         listen 6443;
  25         proxy_pass k8s-apiserver;
  26     }
  27 }
  ## Check the configuration file
  [root@nginx_lbm ~]# nginx -t
  nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
  nginx: configuration file /etc/nginx/nginx.conf test is successful
  ## Edit the index page so the master and backup LBs are distinguishable
  [root@nginx_lbm ~]# cd /usr/share/nginx/html/
  [root@nginx_lbm html]# ls
  50x.html  index.html
  [root@nginx_lbm html]# vi index.html
  <h1>master</h1>    (use <h1>backup</h1> on nginx_lbb)
  ## Start nginx
  [root@nginx_lbm html]# systemctl start nginx
  ## Install keepalived
  [root@nginx_lbm html]# yum install keepalived -y
  ## Overwrite the stock keepalived configuration with the template
  [root@nginx_lbm ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
  cp: overwrite "/etc/keepalived/keepalived.conf"? yes
Operations on nginx_lbm

  • Configure the keepalived service

  
  [root@nginx_lbm ~]# cat /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived
  global_defs {
     # address that receives notification mail
     notification_email {
       acassen@firewall.loc
       failover@firewall.loc
       sysadmin@firewall.loc
     }
     # sender address for notification mail
     notification_email_from Alexandre.Cassen@firewall.loc
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     router_id NGINX_MASTER
  }
  ## points at the health-check script that stops keepalived when nginx dies
  vrrp_script check_nginx {
      script "/etc/nginx/check_nginx.sh"
  }
  vrrp_instance VI_1 {
      state MASTER            ## MASTER on nginx_lbm
      interface ens33         ## NIC name
      virtual_router_id 51    ## VRRP route ID; must be unique per instance
      priority 100            ## 100 on the master, 90 on the backup
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          192.168.43.100/24   ## the VIP
      }
      track_script {
          check_nginx
      }
  }
  [root@nginx_lbm ~]# vi /etc/nginx/check_nginx.sh    ## this health-check script has to be created by hand
  ## count running nginx processes; if none remain, stop keepalived so the VIP moves to the backup
  count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
  if [ "$count" -eq 0 ];then
      systemctl stop keepalived
  fi
  [root@nginx_lbm ~]# chmod +x /etc/nginx/check_nginx.sh    ## make it executable
  ## Start keepalived
  [root@nginx_lbm ~]# systemctl start keepalived.service
  • Check the VIP

  
  [root@nginx_lbm ~]# ip addr
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
      inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
         valid_lft forever preferred_lft forever
      inet 192.168.43.100/24 scope global secondary ens33
         valid_lft forever preferred_lft forever
      inet6 fe80::ba5a:8436:895c:4285/64 scope link
         valid_lft forever preferred_lft forever
  3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
      link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
         valid_lft forever preferred_lft forever
  4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
      link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
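With the VIP now held by nginx_lbm, the whole chain node → VIP → nginx → apiserver can be smoke-tested from any machine on the subnet (a sketch, not from the original article; as above, a 401/403 response still proves connectivity):

  curl -k https://192.168.43.100:6443/version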

Operations on nginx_lbb

  • Configure the keepalived service

  
  ## Edit the keepalived configuration
  [root@nginx_lbb ~]# vi /etc/keepalived/keepalived.conf
  ! Configuration File for keepalived
  global_defs {
     # address that receives notification mail
     notification_email {
       acassen@firewall.loc
       failover@firewall.loc
       sysadmin@firewall.loc
     }
     # sender address for notification mail
     notification_email_from Alexandre.Cassen@firewall.loc
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     router_id NGINX_MASTER
  }
  vrrp_script check_nginx {
      script "/etc/nginx/check_nginx.sh"
  }
  vrrp_instance VI_1 {
      state BACKUP            ## unlike nginx_lbm, the state here is BACKUP
      interface ens33
      virtual_router_id 51
      priority 90             ## priority 90, lower than nginx_lbm
      advert_int 1
      authentication {
          auth_type PASS
          auth_pass 1111
      }
      virtual_ipaddress {
          192.168.43.100/24
      }
      track_script {
          check_nginx
      }
  }
  [root@nginx_lbb ~]# vi /etc/nginx/check_nginx.sh
  count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
  if [ "$count" -eq 0 ];then
      systemctl stop keepalived
  fi
  [root@nginx_lbb ~]# chmod +x /etc/nginx/check_nginx.sh
  ## Start the service
  [root@nginx_lbb ~]# systemctl start keepalived.service
  • Check the VIP (while nginx_lbm is healthy, the VIP should not appear on nginx_lbb)

Verifying load-balancer failover

  • Stop the nginx service on nginx_lbm

  
  ## Kill nginx deliberately
  [root@nginx_lbm ~]# pkill nginx
  ## Check the nginx and keepalived status
  [root@nginx_lbm ~]# systemctl status nginx
  ● nginx.service - nginx - high performance web server
     Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Wed 2020-04-29 08:40:27 CST; 6s ago
       Docs: http://nginx.org/en/docs/
    Process: 4085 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=1/FAILURE)
   Main PID: 1939 (code=exited, status=0/SUCCESS)
  ## keepalived was stopped automatically by the check_nginx.sh health check
  [root@nginx_lbm ~]# systemctl status keepalived.service
  ● keepalived.service - LVS and VRRP High Availability Monitor
     Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
     Active: inactive (dead)
  Apr 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) Send...
  Apr 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
  Apr 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
  Apr 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
  Apr 29 08:35:44 nginx_lbm Keepalived_vrrp[2204]: Sending gratuitous ARP o...
  Apr 29 08:40:27 nginx_lbm Keepalived[2202]: Stopping
  Apr 29 08:40:27 nginx_lbm systemd[1]: Stopping LVS and VRRP High Availab....
  Apr 29 08:40:27 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) sent...
  Apr 29 08:40:27 nginx_lbm Keepalived_vrrp[2204]: VRRP_Instance(VI_1) remo...
  Apr 29 08:40:28 nginx_lbm systemd[1]: Stopped LVS and VRRP High Availabi....
  Hint: Some lines were ellipsized, use -l to show in full.
  ## The VIP is gone from the address list
  [root@nginx_lbm ~]# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
      inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
         valid_lft forever preferred_lft forever
      inet6 fe80::ba5a:8436:895c:4285/64 scope link
         valid_lft forever preferred_lft forever
  3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
      link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
         valid_lft forever preferred_lft forever
  4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
      link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
  • Check the VIP on nginx_lbm and nginx_lbb

If the VIP has disappeared from nginx_lbm and now appears on nginx_lbb, active/standby failover is working.
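One way to watch a failover live is to poll the demo index pages through the VIP from a third machine; the <h1> text flips from master to backup the moment the VIP moves (a sketch using the index.html pages edited earlier):

  while true; do curl -s http://192.168.43.100/ | grep -o '<h1>.*</h1>'; sleep 1; done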

  • Restore the VIP

  
  ## Start nginx and keepalived again on nginx_lbm
  [root@nginx_lbm ~]# systemctl start nginx
  [root@nginx_lbm ~]# systemctl start keepalived
  ## Check the addresses again: the VIP is back on nginx_lbm
  ## (keepalived preempts by default, so the higher-priority node reclaims the VIP)
  [root@nginx_lbm ~]# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 00:0c:29:92:43:7a brd ff:ff:ff:ff:ff:ff
      inet 192.168.43.105/24 brd 192.168.43.255 scope global ens33
         valid_lft forever preferred_lft forever
      inet 192.168.43.100/24 scope global secondary ens33
         valid_lft forever preferred_lft forever
      inet6 fe80::ba5a:8436:895c:4285/64 scope link
         valid_lft forever preferred_lft forever
  3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
      link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
         valid_lft forever preferred_lft forever
  4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
      link/ether 52:54:00:72:80:f5 brd ff:ff:ff:ff:ff:ff

Pointing the nodes at the VIP and creating a pod

Modifying the node1 and node2 configuration files

  • node2 is shown as the example (a scripted alternative follows the listing)

  
  [root@node2 ~]# cd /opt/kubernetes/cfg/
  [root@node2 cfg]# ls
  bootstrap.kubeconfig  flanneld  kubelet  kubelet.config  kubelet.kubeconfig  kube-proxy  kube-proxy.kubeconfig
  [root@node2 cfg]# vi bootstrap.kubeconfig
  server: https://192.168.142.20:6443
  ## change this to the VIP address
  [root@node2 cfg]# vi kubelet.kubeconfig
  server: https://192.168.142.20:6443
  ## change this to the VIP address
  [root@node2 cfg]# vi kube-proxy.kubeconfig
  server: https://192.168.142.20:6443
  ## change this to the VIP address
  [root@node2 cfg]# grep 100 *
  bootstrap.kubeconfig:    server: https://192.168.43.100:6443
  kubelet.kubeconfig:      server: https://192.168.43.100:6443
  kube-proxy.kubeconfig:   server: https://192.168.43.100:6443
  ## Restart the services
  [root@node2 cfg]# systemctl restart kubelet.service
  [root@node2 cfg]# systemctl restart kube-proxy.service
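The three files can also be rewritten in one shot (a sketch; it assumes they still contain the old address shown above, 192.168.142.20):

  cd /opt/kubernetes/cfg
  ## point every kubeconfig at the VIP, then restart the node components
  sed -i 's#server: https://192.168.142.20:6443#server: https://192.168.43.100:6443#' \
      bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
  systemctl restart kubelet.service kube-proxy.service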
  • Check the nginx log on nginx_lbm to confirm that the nodes are connecting through the load balancer

  
  [root@nginx_lbm ~]# cd /var/log/nginx/
  [root@nginx_lbm nginx]# ls
  access.log  error.log  k8s-access.log
  [root@nginx_lbm nginx]# tail -f k8s-access.log
  192.168.43.102 192.168.43.101:6443 - [29/Apr/2020:08:49:41 +0800] 200 1119
  192.168.43.102 192.168.43.102:6443, 192.168.43.101:6443 - [29/Apr/2020:08:49:41 +0800] 200 0, 1119
  192.168.43.103 192.168.43.102:6443, 192.168.43.101:6443 - [29/Apr/2020:08:50:08 +0800] 200 0, 1120
  192.168.43.103 192.168.43.101:6443 - [29/Apr/2020:08:50:08 +0800] 200 1121

Entries that list two upstream addresses show nginx retrying the next upstream after the first one failed to answer.

Create a pod on the master and test it

  • Create the pod

  
  [root@master ~]# kubectl run nginx --image=nginx
  kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
  deployment.apps/nginx created
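As the deprecation warning itself suggests, the same deployment can be created explicitly; on this v1.12 cluster the following should be equivalent:

  [root@master ~]# kubectl create deployment nginx --image=nginx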
  • Check the pod status

  
  [root@master ~]# kubectl get pods
  NAME                    READY   STATUS    RESTARTS   AGE
  nginx-dbddb74b8-8qt6q   1/1     Running   0          24m
  • Bind the cluster's anonymous user to the cluster-admin role (this fixes the "logs not viewable" problem)

  [root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
  • View the pod's log

  
  ## curl from node1, because the pod was scheduled onto node1
  [root@node1 ~]# curl 172.17.36.2
  <!DOCTYPE html>
  <html>
  <head>
  <title>Welcome to nginx!</title>
  <style>
      body {
          width: 35em;
          margin: 0 auto;
          font-family: Tahoma, Verdana, Arial, sans-serif;
      }
  </style>
  </head>
  <body>
  <h1>Welcome to nginx!</h1>
  <p>If you see this page, the nginx web server is successfully installed and
  working. Further configuration is required.</p>
  <p>For online documentation and support please refer to
  <a href="http://nginx.org/">nginx.org</a>.<br/>
  Commercial support is available at
  <a href="http://nginx.com/">nginx.com</a>.</p>
  <p><em>Thank you for using nginx.</em></p>
  </body>
  </html>
  [root@node1 ~]#
  ## View the pod's log on the master
  [root@master ~]# kubectl logs nginx-dbddb74b8-8qt6q
  172.17.36.1 - - [29/Apr/2020:13:37:24 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
  [root@master ~]#
  • Check the pod's network placement

  
  [root@master ~]# kubectl get pods -o wide
  NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
  nginx-dbddb74b8-8qt6q   1/1     Running   0          30m   172.17.36.2   192.168.43.102   <none>

Deploying the k8s Dashboard

Operations on master1

  • Upload the yaml files

  
  ## Create the dashboard directory
  [root@master ~]# mkdir dashboard
  [root@master ~]# cd dashboard/
  ## Place the yaml files here
  [root@master dashboard]# ls
  dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
  dashboard-controller.yaml  dashboard-secret.yaml  k8s-admin.yaml
  • Create the resources; the order below matters (a one-loop equivalent follows the listing):

  
  ## Run these in /root/dashboard
  # authorize access to the API
  kubectl create -f dashboard-rbac.yaml
  # secrets
  kubectl create -f dashboard-secret.yaml
  # application configuration
  kubectl create -f dashboard-configmap.yaml
  # the controller
  kubectl create -f dashboard-controller.yaml
  # the service that exposes the dashboard
  kubectl create -f dashboard-service.yaml
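Since the file names encode the order, the five commands can also be issued as one loop (an equivalent convenience, not from the original article):

  for f in rbac secret configmap controller service; do
      kubectl create -f dashboard-$f.yaml
  done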
  • Check the pod created in the kube-system namespace

  
  [root@master dashboard]# kubectl get pods -n kube-system
  NAME                                    READY   STATUS    RESTARTS   AGE
  kubernetes-dashboard-65f974f565-bwmlx   1/1     Running   0          47s
  ## Find out how the dashboard is exposed
  [root@master dashboard]# kubectl get pods,svc -n kube-system
  NAME                                        READY   STATUS    RESTARTS   AGE
  pod/kubernetes-dashboard-65f974f565-bwmlx   1/1     Running   0          82s
  NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
  service/kubernetes-dashboard   NodePort   10.0.0.199   <none>        443:30001/TCP   70s

As shown above, the dashboard is reachable at:

https://<node IP>:30001/

for example: https://192.168.43.102:30001/

Logging in, however, requires a token, so one still has to be generated, as follows.

  • Generate a self-signed certificate

  
  ## The certificate script (the heredoc writes out dashboard-csr.json)
  [root@master dashboard]# vi dashboard-cert.sh
  cat > dashboard-csr.json <<EOF
  {
      "CN": "Dashboard",
      "hosts": [],
      "key": {
          "algo": "rsa",
          "size": 2048
      },
      "names": [
          {
              "C": "CN",
              "L": "BeiJing",
              "ST": "BeiJing"
          }
      ]
  }
  EOF
  K8S_CA=$1
  cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
  kubectl delete secret kubernetes-dashboard-certs -n kube-system
  kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
  ## Run the script, passing the directory that holds the cluster CA
  [root@master dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
  [root@master dashboard]# ls
  dashboard-cert.sh          dashboard.csr        dashboard.pem          dashboard-service.yaml
  dashboard-configmap.yaml   dashboard-csr.json   dashboard-rbac.yaml    k8s-admin.yaml
  dashboard-controller.yaml  dashboard-key.pem    dashboard-secret.yaml
  ## Add the certificate arguments to the controller yaml; mind the yaml format and indent with spaces
  [root@master dashboard]# vi dashboard-controller.yaml
  ## append the following below line 47 (in the container's args list)
  - --tls-key-file=dashboard-key.pem
  - --tls-cert-file=dashboard.pem
  ## Redeploy the controller
  [root@master dashboard]# kubectl apply -f dashboard-controller.yaml
  • Generate the login token

  
  ## Generate the token
  [root@master dashboard]# kubectl create -f k8s-admin.yaml
  ## Locate the secret that stores the token
  [root@master dashboard]# kubectl get secret -n kube-system
  NAME                               TYPE                                  DATA   AGE
  dashboard-admin-token-4zpgd        kubernetes.io/service-account-token   3      66s
  default-token-pdn6p                kubernetes.io/service-account-token   3      39h
  kubernetes-dashboard-certs         Opaque                                11     11m
  kubernetes-dashboard-key-holder    Opaque                                2      15m
  kubernetes-dashboard-token-4whmf   kubernetes.io/service-account-token   3      15m
  ## Show the token and copy it
  [root@master dashboard]# kubectl describe secret dashboard-admin-token-4zpgd -n kube-system
  Name:         dashboard-admin-token-4zpgd
  Namespace:    kube-system
  Labels:       <none>
  Annotations:  kubernetes.io/service-account.name: dashboard-admin
                kubernetes.io/service-account.uid: 36095d9f-89bd-11ea-bb1a-000c29ce5f24
  Type:  kubernetes.io/service-account-token
  Data
  ====
  ca.crt:     1359 bytes
  namespace:  11 bytes
  token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNHpwZ2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzYwOTVkOWYtODliZC0xMWVhLWJiMWEtMDAwYzI5Y2U1ZjI0Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.wRx71hNjdAOuaG8EPEr_yWaAmw_CF-aXwVFk7XeXwW2bzDLRh0RfQV-7nyBbw-wcPVXLbpoWNSYuHFS0vXHWGezk9ssERnErDXjE164H0lR8LkD1NekUQqB8L9jqW9oAZrZ0CkAxUIuijG14BjbAIV5wXmT1aKsK2sZTC0u-IjDcIT2UhjU3LvSL0Fzi4zyEvfl5Yf0Upx6dZ7yNpUd13ziNIP4KJ5DjWesIK-34IG106Kf6y1ehmRdW1Sg0HNvopXhFJPAhp-BkEz_SCmsf89_RDNVBTBSRWCgZdQC78B2VshbJqMRZOIV2IprBFhYKK6AeOY6exCyk1HWQRKFMRw
  [root@master dashboard]#
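If you would rather not copy the secret name by hand, the token can be extracted in one step (a convenience sketch; it assumes the secret name still begins with dashboard-admin-token):

  ## find the dashboard-admin secret, then print only its token field
  kubectl -n kube-system describe secret \
      $(kubectl -n kube-system get secret | awk '/dashboard-admin-token/{print $1}') \
      | awk '/^token:/{print $2}'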

Log in to the Dashboard

Open https://192.168.43.102:30001/ (or any node's IP on port 30001) in a browser and sign in with the token copied above.


Reprinted from: https://blog.csdn.net/qq_42761527/article/details/105827311