# K8s: iptables Rule Setup and Verification with iptables Enabled
## 1. iptables Rules Required on the Master Node
### 1.1 INPUT chain
iptables tables, chains, and ports, with the services they correspond to:

| Protocol | iptables table | iptables chain | Port range | Purpose | Used by |
|----------|----------------|----------------|------------|---------|---------|
| UDP | filter | INPUT | 8472 | flannel network | all nodes |
| TCP | filter | INPUT | 6443 | Kubernetes API server | all nodes |
| TCP | filter | INPUT | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | filter | INPUT | 10250 | kubelet API | control plane, self, `kubectl exec` |
| TCP | filter | INPUT | 10251 | kube-scheduler | self |
| TCP | filter | INPUT | 10252 | kube-controller-manager | self |

#### 1.1.1 Pod-to-pod traffic goes through flannel's UDP port 8472

##### 1.1.1.1 Packet encapsulation diagram

(Packet encapsulation diagram; not included in this extract.)

Start iptables on node2 so that UDP port 8472 is blocked, then test from a pod on node1 whether a pod on node2 can be pinged.

```shell
# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
busybox                     1/1     Running   0          162m    10.244.1.48   xen11-196
my-nginx-75897978cd-64zj2   1/1     Running   0          4h15m   10.244.1.39   xen11-196
my-nginx-75897978cd-bbrtm   1/1     Running   0          23d     10.244.1.29   xen11-196
my-nginx-75897978cd-t7s6n   1/1     Running   0          142m    10.244.2.2    xen11-197
```

Pod IP list:

| Pod IP | Node |
|------------|-----------|
| 10.244.2.2 | xen11-197 |
| 10.244.1.48 | xen11-196 |

Start iptables:

```shell
systemctl start iptables
```

Confirm that the UDP port is unreachable:

```shell
# nc -z -v -u 172.25.11.197 8472
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 172.25.11.197:8472.
Ncat: No route to host.
```
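The same `nc` probe works for every port in the master-node table above. The sketch below just prints the probe commands for review (a dry run: pipe its output to `sh` to actually probe); the host address is this article's example node, and only port 2379 of the etcd 2379-2380 range is listed, both assumptions to adjust for your cluster.

```shell
#!/bin/sh
# Build nc probe commands for the master-node ports listed above.
# Dry run: review the output, then pipe it to `sh` to run the probes.
host=172.25.11.197                 # example node address; adjust as needed
probes=""
for spec in "udp 8472" "tcp 6443" "tcp 2379" "tcp 10250" "tcp 10251" "tcp 10252"; do
    proto=${spec% *}               # "udp" or "tcp"
    port=${spec#* }                # port number
    if [ "$proto" = "udp" ]; then
        flag="-u "                 # nc needs -u for UDP probes
    else
        flag=""
    fi
    probes="${probes}nc -z -v ${flag}-w 2 ${host} ${port}
"
done
printf '%s' "$probes"
```

Each printed line is one probe, e.g. `nc -z -v -u -w 2 172.25.11.197 8472` for the flannel port.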
Start a busybox pod for testing:

```shell
kubectl run -i --tty busybox --image=busybox --restart=Never
```

Ping can be run interactively in the busybox session, or through `kubectl exec`:

```shell
kubectl exec busybox -- ping 10.244.0.6
```

Result: the ping fails.

#### 1.1.1.3 The working case

Add the UDP rule on node2:

```shell
# flannel port, used to encapsulate IP packets for the 10.244.0.0/16 pod network (UDP 8472)
iptables -I INPUT -p udp --dport 8472 -j ACCEPT
```

#### 1.1.2 Opening the kubectl exec port

Port 10250 must be open for `kubectl exec`. If it is not, the command fails:

```shell
# kubectl exec -it my-nginx-75897978cd-t7s6n sh
Error from server: error dialing backend: dial tcp 172.25.11.197:10250: connect: no route to host
```

Open the port:

```shell
iptables -I INPUT -p tcp --dport 10250 -j ACCEPT
```

After that, `kubectl exec` works. For example, modify the nginx home page to make later tests easier:

```shell
kubectl exec -it my-nginx-75897978cd-t7s6n -- bash -c "echo 10.244.2.2 >> /usr/share/nginx/html/index.html"
```

### 1.2 FORWARD chain

See the packet encapsulation diagram: cni0 is the pod bridge address, and flannel.1 is both the route gateway and the encapsulating interface, so packets must be forwarded between them. Open FORWARD in iptables, and also enable kernel forwarding in `/etc/sysctl.conf`:

```shell
net.ipv4.ip_forward = 1
```

```shell
iptables -I FORWARD -s 10.244.0.0/16 -j ACCEPT
iptables -I FORWARD -d 10.244.0.0/16 -j ACCEPT
```

With these rules, pods on different hosts can ping each other, and a host can ping pods on other hosts.

### 1.3 OUTPUT chain

iptables does not restrict OUTPUT by default. If your environment locks OUTPUT down, add a rule allowing the pod network `10.244.0.0/16`:

```shell
iptables -I OUTPUT -s 10.244.0.0/16 -j ACCEPT
```

## 2. iptables Rules Required on Worker Nodes

### 2.1 INPUT chain

| Protocol | iptables table | iptables chain | Port range | Purpose | Used by |
|----------|----------------|----------------|------------|---------|---------|
| UDP | filter | INPUT | 8472 | flannel network | all nodes |
| TCP | filter | INPUT | 10250 | kubelet API | control plane, self, `kubectl exec` |
| TCP | filter | INPUT | 30000-3***** | NodePort Services | all |

### 2.2 FORWARD chain

As in section 1.2 (see the packet encapsulation diagram): cni0 is the pod bridge address and flannel.1 is the route gateway and encapsulating interface, so FORWARD must be opened, along with `net.ipv4.ip_forward = 1` in `/etc/sysctl.conf`:

```shell
iptables -I FORWARD -s 10.244.0.0/16 -j ACCEPT
iptables -I FORWARD -d 10.244.0.0/16 -j ACCEPT
```

With these rules, pods on different hosts can ping each other, and a host can ping pods on other hosts.

### 2.3 OUTPUT chain

As in section 1.3, allow the pod network if OUTPUT is restricted, and additionally allow traffic to the master node:

```shell
iptables -I OUTPUT -s 10.244.0.0/16 -j ACCEPT
iptables -I OUTPUT -d MasterIP/32 -j ACCEPT   # for a multi-master cluster, also add the cluster address and the other master nodes
```
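One way to confirm the worker-node rules above are actually in place is to grep an `iptables-save` dump for each required match. A sketch: the inlined `dump` here is a hypothetical sample standing in for `dump=$(iptables-save)` on a real node.

```shell
#!/bin/sh
# Hypothetical sample of `iptables-save` output from a worker node;
# on a real node use: dump=$(iptables-save)
dump='-A INPUT -p udp -m udp --dport 8472 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 10250 -j ACCEPT
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT'

status="all required rules present"
# One pattern per rule from sections 2.1 and 2.2
for want in "dport 8472" "dport 10250" "FORWARD -s 10.244.0.0/16" "FORWARD -d 10.244.0.0/16"; do
    if ! echo "$dump" | grep -q -- "$want"; then
        echo "missing: $want"
        status="some rules missing"
    fi
done
echo "$status"
```

With the sample dump this prints `all required rules present`; delete a rule from the dump and the missing pattern is reported instead.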
## 3. Viewing the Kubernetes NAT Rules in iptables

Some example NAT rules. These are set up by kube-proxy; you do not need to configure them yourself.

```shell
# iptables -S -t nat | grep 8088
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.103.228.172/32 -p tcp -m comment --comment "default/nginx-8080: cluster IP" -m tcp --dport 8088 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.103.228.172/32 -p tcp -m comment --comment "default/nginx-8080: cluster IP" -m tcp --dport 8088 -j KUBE-SVC-LTTOZPFESA6UOUYL
```

`KUBE-MARK-MASQ` and `KUBE-SERVICES` are custom chains defined by Kubernetes; `KUBE-SERVICES` looks like this:

```shell
[root@xen11-195 ~]# iptables -S KUBE-SERVICES -t nat
-N KUBE-SERVICES
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.100.43.239/32 -p tcp -m comment --comment "kube-system/traefik:http-redirect cluster IP" -m tcp --dport 1080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.100.43.239/32 -p tcp -m comment --comment "kube-system/traefik:http-redirect cluster IP" -m tcp --dport 1080 -j KUBE-SVC-DP4KP26T3K75I2A2
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.140.25/32 -p tcp -m comment --comment "monitoring/alertmanager-main:web cluster IP" -m tcp --dport 9093 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.140.25/32 -p tcp -m comment --comment "monitoring/alertmanager-main:web cluster IP" -m tcp --dport 9093 -j KUBE-SVC-NAZP4SD6XLP35COK
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.109.140.226/32 -p tcp -m comment --comment "monitoring/prometheus-k8s:web cluster IP" -m tcp --dport 9090 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.109.140.226/32 -p tcp -m comment --comment "monitoring/prometheus-k8s:web cluster IP" -m tcp --dport 9090 -j KUBE-SVC-IFO32E4YIRUTZPGJ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.97.96.87/32 -p tcp -m comment --comment "monitoring/prometheus-adapter:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.97.96.87/32 -p tcp -m comment --comment "monitoring/prometheus-adapter:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-GRVIJZ6QHJZF73YT
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.108.234.116/32 -p tcp -m comment --comment "default/my-nginx: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.108.234.116/32 -p tcp -m comment --comment "default/my-nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-BEPXDJBUHFCSYIC3
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m
```

(output truncated)

iptables implements the Service load balancing with the `statistic` module:

```shell
[root@xen11-195 ~]# iptables -S KUBE-SVC-LTTOZPFESA6UOUYL -t nat
-N KUBE-SVC-LTTOZPFESA6UOUYL
-A KUBE-SVC-LTTOZPFESA6UOUYL -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-2YRI2HTTSM2JTT2G
-A KUBE-SVC-LTTOZPFESA6UOUYL -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7TB3S7FLLR7UHYH4
-A KUBE-SVC-LTTOZPFESA6UOUYL -j KUBE-SEP-3RIUDCBRFGYQJL4F
```

Following the chain finally reaches the pod IP addresses:

```shell
[root@xen11-195 ~]# iptables -S KUBE-SEP-2YRI2HTTSM2JTT2G -t nat
-N KUBE-SEP-2YRI2HTTSM2JTT2G
-A KUBE-SEP-2YRI2HTTSM2JTT2G -s 10.244.2.13/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2YRI2HTTSM2JTT2G -p tcp -m tcp -j DNAT --to-destination 10.244.2.13:80
[root@xen11-195 ~]# iptables -S KUBE-SEP-7TB3S7FLLR7UHYH4 -t nat
-N KUBE-SEP-7TB3S7FLLR7UHYH4
-A KUBE-SEP-7TB3S7FLLR7UHYH4 -s 10.244.2.14/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7TB3S7FLLR7UHYH4 -p tcp -m tcp -j DNAT --to-destination 10.244.2.14:80
```

## 4. iptables Packet Flow (for reference)

(iptables data-flow diagram; not included in this extract.)
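A note on the `--probability` values in the `KUBE-SVC-...` chain of section 3: for n endpoints, kube-proxy emits n-1 `statistic` rules plus an unconditional final jump, where rule i matches the still-unclaimed traffic with probability 1/(n-i+1), so every endpoint ends up with a 1/n share. (kube-proxy prints slightly different digits, e.g. 0.33332999982 instead of 1/3, because it formats a 32-bit float.) A sketch of the pattern:

```shell
#!/bin/sh
# For n endpoints, print the probability each statistic rule needs so
# that all endpoints receive an equal 1/n share of connections.
prob_rules() {
    n=$1
    i=1
    while [ "$i" -lt "$n" ]; do
        # rule i sees traffic not claimed by rules 1..i-1, so it must
        # match with probability 1/(n-i+1) to take exactly a 1/n share
        awk -v i="$i" -v n="$n" \
            'BEGIN { printf "rule %d: --probability %.5f\n", i, 1 / (n - i + 1) }'
        i=$((i + 1))
    done
    echo "rule $n: unconditional jump (takes whatever remains)"
}
prob_rules 3
```

With `prob_rules 3` this prints probabilities 0.33333 and 0.50000 followed by the unconditional rule, matching the three-endpoint chain shown above.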