More simply, PureLB either uses the load-balancing functionality provided natively by k8s, or combines k8s load balancing with the router's Equal Cost Multipath (ECMP) load balancing.
Instead of thinking of PureLB as advertising services, think of it as attracting packets to the allocated addresses, with kube-proxy then forwarding those packets between nodes within the cluster over the Container Network Interface network (the Pod network).
PureLB uses a library called Memberlist to provide local network address failover faster than standard k8s timeouts would allow. If you plan to use local network addresses and have applied firewalls to your nodes, you need to add a rule that allows the Memberlist election to take place. PureLB's Memberlist uses port 7934 over both TCP and UDP, so open both.
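For example, on nodes managed with firewalld the rule could look like the following sketch (applied to the default zone; adjust the zone and tool to match your own node setup, iptables/nftables users would open the same port with an equivalent rule):

# Allow PureLB's Memberlist election traffic between nodes (port 7934, TCP and UDP)
$ sudo firewall-cmd --permanent --add-port=7934/tcp
$ sudo firewall-cmd --permanent --add-port=7934/udp
$ sudo firewall-cmd --reload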
$ kubectl apply -f purelb/purelb-complete.yaml
namespace/purelb created
customresourcedefinition.apiextensions.k8s.io/lbnodeagents.purelb.io created
customresourcedefinition.apiextensions.k8s.io/servicegroups.purelb.io created
serviceaccount/allocator created
serviceaccount/lbnodeagent created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/allocator created
podsecuritypolicy.policy/lbnodeagent created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/purelb:allocator created
clusterrole.rbac.authorization.k8s.io/purelb:lbnodeagent created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/purelb:allocator created
clusterrolebinding.rbac.authorization.k8s.io/purelb:lbnodeagent created
deployment.apps/allocator created
daemonset.apps/lbnodeagent created
error: unable to recognize "purelb/purelb-complete.yaml": no matches for kind "LBNodeAgent" in version "purelb.io/v1"
Please note that due to Kubernetes' eventually-consistent architecture, the first application of this manifest can fail. This happens because the manifest both defines a Custom Resource Definition and creates a resource using that definition. If this happens, apply the manifest again and it should succeed because Kubernetes will have processed the definition in the meantime.
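In practice this just means running the same apply a second time; optionally, you can first confirm that the two CRDs created above are now registered:

# The CRDs from the first apply should now exist
$ kubectl get crd lbnodeagents.purelb.io servicegroups.purelb.io
# Re-apply the same manifest; this time the LBNodeAgent resource should be created successfully
$ kubectl apply -f purelb/purelb-complete.yaml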
$ kubectl get deploy -n purelb
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
allocator   1/1     1            1           10m

[root@tiny-cilium-master-188-1 purelb]# kubectl get ds -n purelb
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
lbnodeagent   3         3         3       3            3           kubernetes.io/os=linux   10m
$ kubectl get --namespace=purelb servicegroups.purelb.io
No resources found in purelb namespace.

$ kubectl get --namespace=purelb lbnodeagent.purelb.io
NAME      AGE
default   55m
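No ServiceGroups exist yet, but the Service manifests below reference address pools named layer2-ippool and bgp-ippool through the purelb.io/service-group annotation. As a rough sketch, a layer-2 ServiceGroup handing out the 10.31.188.64+ addresses seen below might look like this (the pool range and aggregation value are assumptions; check the field layout against the ServiceGroup CRD shipped with your PureLB version):

apiVersion: purelb.io/v1
kind: ServiceGroup
metadata:
  name: layer2-ippool
  namespace: purelb
spec:
  local:
    v4pool:
      # Addresses allocated from this pool are added to the node's local interface (eth0 here)
      subnet: 10.31.0.0/16
      pool: 10.31.188.64-10.31.188.126
      aggregation: default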
$ ip addr show kube-lb0
15: kube-lb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 12:27:b1:48:4e:3a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1027:b1ff:fe48:4e3a/64 scope link
       valid_lft forever preferred_lft forever
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80        # match for service access port
    targetPort: 80  # match for pod access port
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb2-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80        # match for service access port
    targetPort: 80  # match for pod access port
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb3-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80        # match for service access port
    targetPort: 80  # match for pod access port
  type: LoadBalancer
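All three Services select app: nginx-lb, which points at the nginx-lb Deployment created by the same nginx-quic-lb.yaml applied below. That Deployment is not reproduced here; a minimal sketch along these lines would satisfy the selector (the image and replica count are assumptions, the original manifest may use a custom nginx-quic build):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-lb
  namespace: nginx-quic
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-lb
  template:
    metadata:
      labels:
        app: nginx-lb
    spec:
      containers:
      - name: nginx-lb
        image: nginx:latest   # assumed image
        ports:
        - containerPort: 80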
$ kubectl apply -f nginx-quic-lb.yaml
namespace/nginx-quic unchanged
deployment.apps/nginx-lb created
service/nginx-lb-service created
service/nginx-lb2-service created
service/nginx-lb3-service created
$ kubectl get svc -n nginx-quic
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)   AGE
nginx-lb-service    LoadBalancer   10.188.54.81    10.31.188.64   80/TCP    101s
nginx-lb2-service   LoadBalancer   10.188.34.171   10.31.188.65   80/TCP    101s
nginx-lb3-service   LoadBalancer   10.188.6.24     10.31.188.66   80/TCP    101s
$ ansible cilium -m command -a "ip addr show eth0"
10.31.188.11 | CHANGED | rc=0 >>
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:3c:88:cb brd ff:ff:ff:ff:ff:ff
    inet 10.31.188.11/16 brd 10.31.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.31.188.64/16 brd 10.31.255.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.31.188.66/16 brd 10.31.255.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe3c:88cb/64 scope link
       valid_lft forever preferred_lft forever
10.31.188.12 | CHANGED | rc=0 >>
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:32:a7:42 brd ff:ff:ff:ff:ff:ff
    inet 10.31.188.12/16 brd 10.31.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe32:a742/64 scope link
       valid_lft forever preferred_lft forever
10.31.188.1 | CHANGED | rc=0 >>
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:69:0a:ab brd ff:ff:ff:ff:ff:ff
    inet 10.31.188.1/16 brd 10.31.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.31.188.65/16 brd 10.31.255.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe69:aab/64 scope link
       valid_lft forever preferred_lft forever
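Each external IP is now held as a secondary address on exactly one node, so a quick check from any machine on the 10.31.0.0/16 network should get an answer from nginx on all three VIPs (the exact response depends on the nginx configuration; this is just an illustrative verification step):

# Verify the three LoadBalancer VIPs from a host that can reach 10.31.0.0/16
$ curl -I http://10.31.188.64
$ curl -I http://10.31.188.65
$ curl -I http://10.31.188.66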
2.5 Specifying a VIP
Similarly, if we need a specific IP, we can add the spec.loadBalancerIP field to specify the VIP.
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: layer2-ippool
  name: nginx-lb4-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80        # match for service access port
    targetPort: 80  # match for pod access port
  type: LoadBalancer
  loadBalancerIP: 10.31.188.100
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: bgp-ippool
  name: nginx-lb5-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80        # match for service access port
    targetPort: 80  # match for pod access port
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    purelb.io/service-group: bgp-ippool
  name: nginx-lb6-service
  namespace: nginx-quic
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  selector:
    app: nginx-lb
  ports:
  - protocol: TCP
    port: 80        # match for service access port
    targetPort: 80  # match for pod access port
  type: LoadBalancer
  loadBalancerIP: 10.189.100.100
[tinychen /root/ansible]# ansible cilium -m command -a "ip addr show kube-lb0"
10.31.188.11 | CHANGED | rc=0 >>
19: kube-lb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d6:65:b8:31:18:ce brd ff:ff:ff:ff:ff:ff
    inet 10.189.0.0/32 scope global kube-lb0
       valid_lft forever preferred_lft forever
    inet 10.189.100.100/32 scope global kube-lb0
       valid_lft forever preferred_lft forever
    inet6 fe80::d465:b8ff:fe31:18ce/64 scope link
       valid_lft forever preferred_lft forever
10.31.188.12 | CHANGED | rc=0 >>
21: kube-lb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether aa:10:d5:cd:2b:98 brd ff:ff:ff:ff:ff:ff
    inet 10.189.0.0/32 scope global kube-lb0
       valid_lft forever preferred_lft forever
    inet 10.189.100.100/32 scope global kube-lb0
       valid_lft forever preferred_lft forever
    inet6 fe80::a810:d5ff:fecd:2b98/64 scope link
       valid_lft forever preferred_lft forever
10.31.188.1 | CHANGED | rc=0 >>
15: kube-lb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 12:27:b1:48:4e:3a brd ff:ff:ff:ff:ff:ff
    inet 10.189.0.0/32 scope global kube-lb0
       valid_lft forever preferred_lft forever
    inet 10.189.100.100/32 scope global kube-lb0
       valid_lft forever preferred_lft forever
    inet6 fe80::1027:b1ff:fe48:4e3a/64 scope link
       valid_lft forever preferred_lft forever
Finally, we check the routing table on the router and can confirm that ECMP is working.
tiny-openwrt-plus# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

K>* 0.0.0.0/0 [0/0] via 10.31.254.254, eth0, 00:08:51
C>* 10.31.0.0/16 is directly connected, eth0, 00:08:51
B>* 10.189.0.0/32 [20/0] via 10.31.188.11, eth0, weight 1, 00:00:19
  *                      via 10.31.188.12, eth0, weight 1, 00:00:19
B>* 10.189.100.100/32 [20/0] via 10.31.188.11, eth0, weight 1, 00:00:19
  *                          via 10.31.188.12, eth0, weight 1, 00:00:19
Depending on the router and its configuration, the load-balancing technique varies, but it is generally based on a 4-tuple hash of source IP, source port, destination IP, and destination port. The router will also have a limit on the number of ECMP paths it can use; in modern TOR switches this can be set larger than a /24 subnet, while on old routers the count can be fewer than 10. This needs to be considered in the infrastructure design, and PureLB combined with routing software can help create a design that avoids this limitation. Another important consideration is how the router's load-balancer cache is populated and updated when paths are removed; again, modern devices behave better.
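As a rough illustration, on an FRR-based router like the OpenWrt box above, the number of ECMP next-hops installed for these BGP routes is bounded by maximum-paths under the IPv4 address family (the AS numbers below are placeholders, not the values used in this lab):

router bgp 64512
 ! One session per cluster node that announces the PureLB bgp-ippool prefixes
 neighbor 10.31.188.11 remote-as 64513
 neighbor 10.31.188.12 remote-as 64513
 !
 address-family ipv4 unicast
  ! Install up to 8 equal-cost next-hops for the same prefix
  maximum-paths 8
 exit-address-family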
PureLB's community is not as active as MetalLB's or OpenELB's, and it has not joined the CNCF; the project only notes that the CNCF has provided a Slack channel for users to communicate ("The CNCF have generously provided the PureLB community a Slack Channel in the Kubernetes workspace.").