Since my understanding of networking is still shallow, the logic of this post is loose: it mostly collects a few related commands. The deeper reasons are left for future study.
Cross-Host Container Networking
The NIC and routing information of a healthy 3-node container network looks like this.
Node 1: 192.168.200.204
NIC info:
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.42.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::ec2c:5ff:fed1:2900 prefixlen 64 scopeid 0x20<link>
ether ee:2c:05:d1:29:00 txqueuelen 0 (Ethernet)
RX packets 8019 bytes 1873186 (1.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8213 bytes 789044 (770.5 KiB)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
Routing table:
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens33
10.42.0.10 0.0.0.0 255.255.255.255 UH 0 0 0 cali72476272766
10.42.0.11 0.0.0.0 255.255.255.255 UH 0 0 0 cali53fb75e9ddc
10.42.0.12 0.0.0.0 255.255.255.255 UH 0 0 0 cali9238d4857d8
10.42.0.13 0.0.0.0 255.255.255.255 UH 0 0 0 cali93e550b9ebf
10.42.1.0 10.42.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.42.2.0 10.42.2.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.200.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
Node 2: 192.168.200.205
NIC info:
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.42.2.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::9c71:24ff:fe0e:7787 prefixlen 64 scopeid 0x20<link>
ether 9e:71:24:0e:77:87 txqueuelen 0 (Ethernet)
RX packets 9483 bytes 839101 (819.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11321 bytes 1165150 (1.1 MiB)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
Routing table:
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens33
10.42.0.0 10.42.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.42.1.0 10.42.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.42.2.18 0.0.0.0 255.255.255.255 UH 0 0 0 calia386a0ae7e9
10.42.2.19 0.0.0.0 255.255.255.255 UH 0 0 0 calid8146ec96f0
10.42.2.20 0.0.0.0 255.255.255.255 UH 0 0 0 calid96aa26afb9
10.42.2.21 0.0.0.0 255.255.255.255 UH 0 0 0 cali125284d5716
10.42.2.22 0.0.0.0 255.255.255.255 UH 0 0 0 cali92f020e8f12
10.42.2.23 0.0.0.0 255.255.255.255 UH 0 0 0 cali6ffdb16a4e8
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.200.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
Node 3: 192.168.200.206
NIC info:
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.42.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::80ce:7cff:fe3e:9136 prefixlen 64 scopeid 0x20<link>
ether 82:ce:7c:3e:91:36 txqueuelen 0 (Ethernet)
RX packets 12756 bytes 1068523 (1.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10074 bytes 1792199 (1.7 MiB)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0
Routing table:
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens33
10.42.0.0 10.42.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.42.1.17 0.0.0.0 255.255.255.255 UH 0 0 0 cali013edd0c23e
10.42.1.18 0.0.0.0 255.255.255.255 UH 0 0 0 cali6cf9397d176
10.42.1.19 0.0.0.0 255.255.255.255 UH 0 0 0 calid4e960f5b99
10.42.1.20 0.0.0.0 255.255.255.255 UH 0 0 0 cali121e24acd4f
10.42.1.21 0.0.0.0 255.255.255.255 UH 0 0 0 cali8fb356abfba
10.42.1.22 0.0.0.0 255.255.255.255 UH 0 0 0 cali8097f1d812c
10.42.2.0 10.42.2.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.200.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
Brief Analysis
Take 192.168.200.204 as an example. Its flannel.1 interface currently holds the IP 10.42.0.0, and it has the following routes toward the other two nodes:
Destination Gateway Genmask Flags Metric Ref Use Iface
10.42.1.0 10.42.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.42.2.0 10.42.2.0 255.255.255.0 UG 0 0 0 flannel.1
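The route lookup step can be replayed offline. The sketch below (hypothetical: the destination IP 10.42.2.18 is just an example pod address) matches a destination against the two /24 routes above to find the next hop; on a live node you would simply run `ip route get 10.42.2.18`.

```shell
# Offline sketch: given node 1's flannel routes, find the next hop for a
# destination pod IP by matching its /24 prefix. On a real host, prefer:
#   ip route get 10.42.2.18
dst="10.42.2.18"          # example pod IP on node 2's subnet
prefix=${dst%.*}          # strip the last octet -> 10.42.2
nexthop=""
outdev=""
while read -r net gw dev; do
  [ "$net" = "$prefix.0" ] && nexthop="$gw" && outdev="$dev"
done <<EOF
10.42.1.0 10.42.1.0 flannel.1
10.42.2.0 10.42.2.0 flannel.1
EOF
echo "next hop $nexthop via $outdev"   # -> next hop 10.42.2.0 via flannel.1
```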
Check the ARP entries:
[root@localhost ~]# ip neigh show dev flannel.1
10.42.2.0 lladdr 9e:71:24:0e:77:87 PERMANENT
10.42.1.0 lladdr 82:ce:7c:3e:91:36 PERMANENT
Check the bridge FDB entries for the corresponding MAC addresses:
[root@localhost ~]# bridge fdb show flannel.1
82:ce:7c:3e:91:36 dev flannel.1 dst 192.168.200.206 self permanent
9e:71:24:0e:77:87 dev flannel.1 dst 192.168.200.205 self permanent
Putting the three tables together: a packet for 10.42.2.x is routed to next hop 10.42.2.0 out of flannel.1; the ARP entry resolves 10.42.2.0 to 9e:71:24:0e:77:87; and the FDB maps that MAC to 192.168.200.205, so the VXLAN-encapsulated frame is sent to that host.
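This resolution chain can be exercised offline over the entries captured above (a sketch only; on a live node the data comes from `ip route`, `ip neigh show dev flannel.1`, and `bridge fdb show`):

```shell
# Replay flannel's VXLAN lookup chain on node 1 using the captured entries.
nexthop="10.42.2.0"                       # from the route table above
# ARP table: next-hop IP -> MAC (permanent entries installed by flanneld)
mac=$(printf '%s\n' \
  "10.42.2.0 9e:71:24:0e:77:87" \
  "10.42.1.0 82:ce:7c:3e:91:36" | awk -v ip="$nexthop" '$1==ip {print $2}')
# FDB: MAC -> remote VTEP (the peer node's host IP)
vtep=$(printf '%s\n' \
  "82:ce:7c:3e:91:36 192.168.200.206" \
  "9e:71:24:0e:77:87 192.168.200.205" | awk -v m="$mac" '$1==m {print $2}')
echo "encapsulate and send to $vtep"      # -> 192.168.200.205
```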
Inspecting a Container's Veth Pair
When container networking is broken within a single host, the commands in this section help pinpoint the problem.
The quotes below are from Zhang Lei's 深入剖析Kubernetes.
Zhang Lei: A Veth Pair device is always created as two virtual NICs (veth peers) that come as a pair. A packet sent out of one "NIC" shows up directly on the other, even when the two NICs live in different Network Namespaces.
Start a container with:
docker run --name busybox --rm -it busybox /bin/sh
Inspect the network inside the container:
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:788 (788.0 B) TX bytes:125 (125.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Open another SSH session to the host and use the following script to find the veth device paired with the container:
[root@localhost ~]# curl -sSL https://raw.githubusercontent.com/micahculpepper/dockerveth/master/dockerveth.sh | sh
CONTAINER ID VETH NAMES
2c54348455eb vethd030a3f busybox
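One common way such scripts find the pairing is via interface indexes: inside the container, /sys/class/net/eth0/iflink holds the ifindex of the host-side peer. Here is a minimal sketch of that matching, run against a mocked /sys tree (the directory names and index numbers are made up for illustration):

```shell
# Mock /sys/class/net with two fake host interfaces. On a real host you
# would read the container's /sys/class/net/eth0/iflink (e.g. via
# docker exec) and scan the real /sys/class/net/*/ifindex files.
sysnet=$(mktemp -d)
mkdir -p "$sysnet/docker0" "$sysnet/vethd030a3f"
echo 3 > "$sysnet/docker0/ifindex"
echo 5 > "$sysnet/vethd030a3f/ifindex"

iflink=5    # pretend this came from the container's eth0/iflink
match=""
for d in "$sysnet"/*; do
  [ "$(cat "$d/ifindex")" = "$iflink" ] && match=$(basename "$d")
done
echo "host-side veth: $match"   # -> host-side veth: vethd030a3f
rm -rf "$sysnet"
```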
Check that interface on the host:
vethd030a3f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::8c6c:c9ff:fefb:56aa prefixlen 64 scopeid 0x20<link>
ether 8e:6c:c9:fb:56:aa txqueuelen 0 (Ethernet)
RX packets 3 bytes 167 (167.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11 bytes 900 (900.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Check the bridge with brctl:
[root@localhost ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02422990970c no vethd030a3f
If brctl is not available, install it with yum install bridge-utils.
The output shows that the device is plugged into docker0.
Zhang Lei: Once a virtual NIC is "plugged into" a bridge, it becomes a slave device of that bridge. A slave is stripped of the right to call into the network stack to process packets and is demoted to a mere port on the bridge. That port's only job is to receive incoming packets and hand the decision over them (forward or drop) entirely to the bridge.
Reposted from: https://blog.csdn.net/isea533/article/details/100147829