
Kubernetes: Set Up a K8s Cluster Quickly with yum


Environment:

CentOS 7.x

master 192.168.179.104
node   192.168.179.103    192.168.179.101
etcd   192.168.179.102

Kubernetes cluster components:
– etcd: a highly available key/value store and service-discovery system
– flannel: provides cross-host container networking
– kube-apiserver: exposes the Kubernetes cluster API
– kube-controller-manager: keeps the cluster's services in their desired state
– kube-scheduler: schedules containers and assigns them to nodes
– kubelet: runs on each node and starts containers according to the pod specs it is given
– kube-proxy: provides the network proxy that connects services to pods

Stop the firewall service to avoid conflicts with the firewall rules Docker creates for its containers.

  # systemctl stop firewalld
  # systemctl disable firewalld

Disable SELinux:
Set SELINUX=disabled in /etc/selinux/config.
The change takes effect after a reboot. A merely temporary disable is not recommended, because it is lost when the machine reboots.
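One way to make that edit and confirm it, as a small sketch (reboot afterwards for it to take effect):

  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
  grep ^SELINUX= /etc/selinux/config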

 

etcd node configuration
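The original does not show installing etcd itself; it is assumed to come from the standard CentOS 7 repositories, so something like the following (a hedged addition) is needed first on the etcd host:

  yum install etcd -y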



  
  [root@localhost ~]# vim /etc/etcd/etcd.conf
  [root@localhost ~]# cd /etc/etcd/
  [root@localhost etcd]# ls
  etcd.conf
  [root@localhost etcd]# cp etcd.conf etcd.conf.bak
  [root@localhost etcd]# grep -vE "#|^$" etcd.conf
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.179.102:2379"
  ETCD_NAME="default"
  ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.179.102:2379"
  # Like "bind" in other daemons, this is the interface and port etcd listens on. This host has two
  # interfaces, ens32 (192.168.179.102) and lo (127.0.0.1), and both should listen on port 2379,
  # so both URLs are listed:
  ETCD_LISTEN_CLIENT_URLS
  # The client URLs this etcd server advertises to the outside world:
  ETCD_ADVERTISE_CLIENT_URLS
  [root@localhost etcd]# systemctl restart etcd
  [root@localhost etcd]# netstat -tpln | grep 2379
  tcp    0    0 192.168.179.102:2379    0.0.0.0:*    LISTEN    10564/etcd
  tcp    0    0 127.0.0.1:2379          0.0.0.0:*    LISTEN    10564/etcd
  # Check the etcd member list; there is only one member here
  [root@localhost ~]# etcdctl member list
  8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://127.0.0.1:2379,http://192.168.179.102:2379 isLeader=true
  # Check etcd cluster health
  [root@localhost ~]# etcdctl cluster-health
  member 8e9e05c52164694d is healthy: got healthy result from http://127.0.0.1:2379
  cluster is healthy
  # If firewalld is left enabled, open the etcd ports instead of stopping the firewall:
  firewall-cmd --zone=public --add-port=2379/tcp --permanent
  firewall-cmd --zone=public --add-port=2380/tcp --permanent
  firewall-cmd --reload
  firewall-cmd --list-all
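Optionally, a throwaway key/value round trip confirms that etcd accepts writes; the key name below is arbitrary and not part of the original steps:

  etcdctl set /test "hello"
  etcdctl get /test
  etcdctl rm /test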

 

Master node configuration: apiserver | config



  
  [root@localhost ~]# yum install kubernetes-master flannel -y
  -----------------------------------------------------------------------------------------
  # The apiserver listens on port 8080, so nothing else (Tomcat, for example) may use that port on this machine
  [root@localhost ~]# grep -vE "#|^$" /etc/kubernetes/apiserver
  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
  KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.179.102:2379"
  KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
  KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
  KUBE_API_ARGS=""
  # Address the API service listens on
  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
  # etcd backend; if etcd is a cluster, list every member, for example:
  #KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.179.102:2379,http://192.168.179.103:2379"
  KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.179.102:2379"
  # Cluster (virtual) IP range; these VIPs are later used for service load balancing
  KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
  # Admission-control modules. The package default also lists ServiceAccount; remove it,
  # because it expects username/password authentication and no authentication is used here:
  KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
  -----------------------------------------------------------------------------------------
  # /etc/kubernetes/config holds settings shared by all Kubernetes components
  [root@localhost ~]# grep -vE "#|^$" /etc/kubernetes/config
  KUBE_LOGTOSTDERR="--logtostderr=true"
  KUBE_LOG_LEVEL="--v=0"
  KUBE_ALLOW_PRIV="--allow-privileged=true"
  KUBE_MASTER="--master=http://192.168.179.104:8080"
  # Log errors to stderr; they end up in the system messages log
  KUBE_LOGTOSTDERR="--logtostderr=true"
  # Set this to the externally reachable API address and port
  KUBE_MASTER="--master=http://192.168.179.104:8080"
  # Allow privileged containers, so docker can be started with --privileged=true for extra capabilities
  KUBE_ALLOW_PRIV="--allow-privileged=true"
  -----------------------------------------------------------------------------------------
  # Start kube-apiserver first; the other two can be started in any order
  [root@localhost kubernetes]# systemctl start kube-apiserver
  [root@localhost kubernetes]# systemctl start kube-controller-manager
  [root@localhost kubernetes]# systemctl start kube-scheduler
  [root@localhost kubernetes]# ps -ef | grep kube
  kube 15584 1 4 21:55 ? 00:00:02 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://192.168.179.102:2379 --insecure-bind-address=0.0.0.0 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
  kube 15601 1 4 21:55 ? 00:00:00 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://192.168.179.104:8080
  kube 15614 1 6 21:56 ? 00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://192.168.179.104:8080
  [root@localhost kubernetes]# netstat -tpln | grep kube
  tcp6    0    0 :::10251    :::*    LISTEN    15614/kube-schedule
  tcp6    0    0 :::6443     :::*    LISTEN    15584/kube-apiserve
  tcp6    0    0 :::10252    :::*    LISTEN    15601/kube-controll
  tcp6    0    0 :::8080     :::*    LISTEN    15584/kube-apiserve
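Two optional extras, not in the original walkthrough: enable the master services at boot and ask the apiserver for the status of its fellow components (kubectl on the master talks to localhost:8080 by default in this setup):

  systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  kubectl get componentstatuses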

 

Node configuration: config | kubelet



  
  [root@localhost ~]# yum install kubernetes-node docker flannel *rhsm* -y
  -------------------------------------------------------------------------------------------
  [root@localhost ~]# grep -vE '^$|#' /etc/kubernetes/config
  KUBE_LOGTOSTDERR="--logtostderr=true"
  KUBE_LOG_LEVEL="--v=0"
  KUBE_ALLOW_PRIV="--allow-privileged=true"
  KUBE_MASTER="--master=http://192.168.179.104:8080"
  # If the apiserver listens on a different port, change the port here as well
  KUBE_MASTER="--master=http://192.168.179.104:8080"
  -------------------------------------------------------------------------------------------
  [root@localhost ~]# grep -vE '^$|#' /etc/kubernetes/kubelet
  KUBELET_ADDRESS="--address=0.0.0.0"
  KUBELET_HOSTNAME="--hostname-override=192.168.179.103"
  KUBELET_API_SERVER="--api-servers=http://192.168.179.104:8080"
  KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
  KUBELET_ARGS=""
  # Use the node's externally reachable IP here, not 127.0.0.1
  KUBELET_HOSTNAME="--hostname-override=192.168.179.103"
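kubelet needs a running Docker daemon before it can start pods. If docker is not already running on the node, start (and optionally enable) it first; this step is not shown in the original:

  systemctl start docker
  systemctl enable docker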

  
  [root@localhost ~]# systemctl start kubelet
  [root@localhost ~]# systemctl start kube-proxy
  [root@localhost ~]# ps -ef | grep kube
  root 7545 1 4 10:40 ? 00:00:01 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.179.104:8080 --address=0.0.0.0 --hostname-override=192.168.179.103 --allow-privileged=true --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
  root 7624 1 2 10:41 ? 00:00:00 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.179.104:8080
  [root@localhost ~]# netstat -tpln | grep kube
  tcp     0    0 127.0.0.1:10248    0.0.0.0:*    LISTEN    7545/kubelet
  tcp     0    0 127.0.0.1:10249    0.0.0.0:*    LISTEN    7624/kube-proxy
  tcp6    0    0 :::10255           :::*         LISTEN    7545/kubelet
  tcp6    0    0 :::4194            :::*         LISTEN    7545/kubelet
  tcp6    0    0 :::10250           :::*         LISTEN    7545/kubelet
  -----------------------------------------------------------------------------------------
  [root@localhost kubernetes]# kubectl get node
  NAME              STATUS    AGE
  192.168.179.103   Ready     36s
  # Start kubelet and kube-proxy on the second node in the same way; both nodes then show up
  [root@localhost kubernetes]# kubectl get node
  NAME              STATUS    AGE
  192.168.179.101   Ready     8s
  192.168.179.103   Ready     2m
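To survive a reboot, the node services can also be enabled at boot (optional, not part of the original steps):

  systemctl enable kubelet kube-proxy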

 

Flanneld network configuration on master and nodes


flannel provides the cross-host container network that connects the cluster hosts to each other; it has to be deployed on the master and on every node.


  
  # On the master and both nodes, edit the flanneld config so it points at the etcd node's IP
  [root@localhost ~]# grep -vE "^$|#" /etc/sysconfig/flanneld
  FLANNEL_ETCD_ENDPOINTS="http://192.168.179.102:2379"
  FLANNEL_ETCD_PREFIX="/atomic.io/network"
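Since the same two lines are needed on the master and both nodes, the edited file can simply be copied around; a minimal sketch assuming root SSH access between the hosts:

  for host in 192.168.179.104 192.168.179.103 192.168.179.101; do
      scp /etc/sysconfig/flanneld root@$host:/etc/sysconfig/flanneld
  done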

  
  # Starting flanneld hangs at this point
  [root@localhost kubernetes]# systemctl start flanneld
  ^C
  # It hangs because the /atomic.io/network key does not exist in etcd yet
  [root@localhost etcd]# etcdctl ls /
  /registry
  # Create the key in etcd; its value defines the network the docker hosts will allocate container IPs from
  [root@localhost etcd]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
  {"Network":"172.17.0.0/16"}
  [root@localhost etcd]# etcdctl get /atomic.io/network/config
  {"Network":"172.17.0.0/16"}
  [root@localhost etcd]# etcdctl member list
  8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://127.0.0.1:2379,http://192.168.179.102:2379 isLeader=true
  [root@localhost etcd]# etcdctl get /atomic.io/network/config
  [root@localhost etcd]# etcdctl cluster-health
  member 8e9e05c52164694d is healthy: got healthy result from http://127.0.0.1:2379
  cluster is healthy
  # Now start flanneld on the master and on each node, then restart docker
  [root@localhost ~]# systemctl start flanneld
  [root@localhost ~]# systemctl restart docker
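The docker restart after flanneld matters: flanneld records the subnet it leased in /run/flannel/subnet.env, and the docker service (typically via a drop-in shipped with the flannel package) picks that up so docker0 lands inside the flannel network. A quick check plus enabling both services at boot (hedged additions, output omitted):

  cat /run/flannel/subnet.env
  systemctl enable flanneld docker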

  
  # The flannel0 interface on each host gets its address from the etcd key created above. The master and
  # both nodes are now all inside 172.17.0.0/16, so the flannel network ties the whole cluster together.
  Master node:
  [root@localhost ~]# ifconfig
  ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
          inet 192.168.179.104 netmask 255.255.255.0 broadcast 192.168.179.255
          inet6 fe80::831c:6df1:a633:742a prefixlen 64 scopeid 0x20<link>
          ether 00:0c:29:a7:ff:f7 txqueuelen 1000 (Ethernet)
  flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
          inet 172.17.48.0 netmask 255.255.0.0 destination 172.17.48.0
          inet6 fe80::3402:860c:c93e:afe3 prefixlen 64 scopeid 0x20<link>
  Node1 (docker containers on this node will get IPs in 172.17.35.0/24):
  [root@localhost ~]# ifconfig
  docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
          inet 172.17.35.1 netmask 255.255.255.0 broadcast 0.0.0.0
          ether 02:42:ff:4a:3b:38 txqueuelen 0 (Ethernet)
  ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
          inet 192.168.179.103 netmask 255.255.255.0 broadcast 192.168.179.255
          inet6 fe80::f54d:5639:6237:2d0e prefixlen 64 scopeid 0x20<link>
  flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
          inet 172.17.35.0 netmask 255.255.0.0 destination 172.17.35.0
          inet6 fe80::b557:3e9f:1253:3674 prefixlen 64 scopeid 0x20<link>
  Node2 (docker containers on this node will get IPs in 172.17.14.0/24):
  [root@localhost ~]# ifconfig
  docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
          inet netmask 255.255.255.0 broadcast 0.0.0.0
          ether 02:42:5e:6d:3b:d3 txqueuelen 0 (Ethernet)
  ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
          inet 192.168.179.101 netmask 255.255.255.0 broadcast 192.168.179.255
          inet6 fe80::eb42:2f23:95cb:44b6 prefixlen 64 scopeid 0x20<link>
  flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
          inet 172.17.14.0 netmask 255.255.0.0 destination 172.17.14.0
          inet6 fe80::40fb:e70:39e5:b80c prefixlen 64 scopeid 0x20<link>
  # Each host's subnet lease is recorded in etcd
  [root@localhost etcd]# etcdctl ls /atomic.io/network/subnets
  /atomic.io/network/subnets/172.17.14.0-24
  /atomic.io/network/subnets/172.17.48.0-24
  /atomic.io/network/subnets/172.17.35.0-24
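flannel also installs kernel routes for the other hosts' subnets via flannel0; a quick way to confirm this on any host (command only, output omitted):

  ip route | grep 172.17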

  
  # Ping the other hosts' flannel addresses to confirm they can reach each other
  [root@localhost ~]# ping 172.17.14.0
  PING 172.17.14.0 (172.17.14.0) 56(84) bytes of data.
  64 bytes from 172.17.14.0: icmp_seq=1 ttl=62 time=1.49 ms
  ^C
  --- 172.17.14.0 ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 1.496/1.496/1.496/0.000 ms
  [root@localhost ~]# ping 172.17.14.1
  PING 172.17.14.1 (172.17.14.1) 56(84) bytes of data.
  64 bytes from 172.17.14.1: icmp_seq=1 ttl=62 time=0.937 ms
  ^C
  --- 172.17.14.1 ping statistics ---
  1 packets transmitted, 1 received, 0% packet loss, time 0ms
  rtt min/avg/max/mdev = 0.937/0.937/0.937/0.000 ms

At this point the whole cluster is configured.


  
  [root@localhost ~]# kubectl get pod --namespace=default
  No resources found.
  [root@localhost ~]# kubectl get pod --namespace=kube-system
  No resources found.
  [root@localhost ~]# kubectl get nodes
  NAME              STATUS    AGE
  192.168.179.101   Ready     54m
  192.168.179.103   Ready     56m
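As a final smoke test, not part of the original walkthrough, a throwaway nginx deployment shows pods being scheduled onto both nodes and picking up flannel IPs; the image and replica count are only illustrative:

  kubectl run nginx --image=nginx --replicas=2
  kubectl get pods -o wide
  kubectl delete deployment nginx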

 


Reposted from: https://blog.csdn.net/qq_34556414/article/details/108427620