Load balancer topology design (Cisco ACE, F5 BIG-IP LTM)

Posted by ltlnetworker on April 12, 2014


Adding a load balancer to an existing network is easy. You just open the vendor’s quick start guide, connect some cables to the server segment and maybe some to the core network. Load balancer configuration consists of assigning IP addresses, defining virtual servers and adding server pools. Practically you are done; all that remains is adding some static routes to some servers or tweaking a NAT setting on the load balancer.
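
On an F5 BIG-IP, for example, that quick-start configuration boils down to a couple of tmsh commands. A minimal sketch; the pool name, virtual server name and all addresses are hypothetical:

    # hypothetical addresses: two web servers in 10.1.20.0/24, VIP 198.51.100.10
    tmsh create ltm pool web_pool members add { 10.1.20.11:80 10.1.20.12:80 } monitor http
    tmsh create ltm virtual web_vip destination 198.51.100.10:80 \
        ip-protocol tcp profiles add { tcp http } pool web_pool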

Actually, I don’t say this is evil. Such setups can work for a long time with moderate risk, and the operating principles may be well defined and documented. Even though it can cause problems for network redesign or firewall projects, and I estimate slightly higher opex because the tricky load balancer topology must be considered at every change, I can still accept such a method in some cases.

However, I am a networker and I prefer creating a design that reflects the general best practices of networking.
<arspoetica>If we have a dedicated appliance (or virtual appliance) for load balancing, I regard it as a Layer 2 or Layer 3 hop in the packet flow (or a vertex in the graph, if you like). In addition, I expect flexibility and scalability. If a second or third segment also wants to access the service on the virtual IP (VIP), that must not imply creating new VLANs, subinterfaces or new VIPs on the load balancer. If the web server admins, who have been totally unconcerned about client IP addresses (even when asked), suddenly start to request that the web application see the real client IP addresses, then the load balancer topology must support that without redesigning the whole structure.</arspoetica> Actually, I tend to recommend designs where even the real servers have the chance to access other servers’ load-balanced services on the same subnet (just in case). That’s why you should ask questions and propose a design phase if you overhear some talk of a load balancer to be installed next week by WeBalance Ltd.
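
(For the client IP demand above, the usual answer on F5 BIG-IP, assuming an HTTP service, is an X-Forwarded-For header. A sketch reusing the hypothetical virtual server from before:)

    # custom HTTP profile that inserts the client IP into an X-Forwarded-For header
    tmsh create ltm profile http http_xff insert-xforwarded-for enabled
    tmsh modify ltm virtual web_vip profiles replace-all-with { tcp http_xff }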

I have worked with Cisco ACE and F5 BIG-IP load balancers. Both are capable of virtualization: contexts (ACE) or routing domains (F5) can be created on the physical device; let’s call them LB instances in this article. The concept is not identical at the two vendors: an F5 routing domain is composed of a subset of VLAN interfaces and an isolated routing table (much like a Cisco VRF), while each Cisco ACE context has its own configuration, management plane and allocated resources.
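
To illustrate the F5 side, here is a minimal tmsh sketch of a routing domain. The VLAN names LB_DMZEXT and Server-11 are the ones that appear later in this post; the routing domain name, ID and address are hypothetical (the %2 suffix selects routing domain 2, a notation we will meet again below):

    # group two existing VLANs into routing domain 2 with its own routing table
    tmsh create net route-domain rd2 id 2 vlans add { LB_DMZEXT Server-11 }
    # a self IP inside the routing domain; %2 pins the address to route domain 2
    tmsh create net self self_dmzext address 192.0.2.1%2/24 vlan LB_DMZEXT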

Although there are plenty of possible topology designs, I will focus on two main categories, which are the most common and suit most scenarios:

  1. Parallel or standaside mode
  2. Inline mode

(I use these terms although they are not commonly used yet. This article could be the reference point if they gain popularity. (-: )

Parallel or standaside mode:
[Figure: Layer 3, no NAT, parallel mode]

Inline mode:
[Figure: Layer 3, no NAT, inline mode]

Internal clients usually cannot access the load-balanced services in parallel mode because the return traffic bypasses the load balancer. Load balancers usually do not tolerate asymmetric traffic: the TCP engine cannot do its job, and the various Layer 7 functions cannot work either, if the server-to-client packets do not pass through the load balancer.

Both parallel and inline mode can be implemented as routed (Layer 3) or transparent (Layer 2, bridged). Routed mode means the load balancer has different subnets on its external and internal interfaces. In transparent mode the load balancer splits a subnet into two VLANs and bridges between them, so it is easy to insert the device without modifying the IP addressing.
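
On F5 BIG-IP the bridging construct for transparent mode is a VLAN group (also discussed in the comments below). A minimal tmsh sketch; the VLAN group and VLAN names are hypothetical, and I am assuming the transparency setting is the mode property:

    # bridge the two VLANs of one split server subnet
    # (translucent is the default transparency mode)
    tmsh create net vlan-group lb_bridge vlans add { dmz_ext dmz_int } mode translucent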

This is a comparison of the two modes:

                                  Parallel or standaside mode         Inline mode
routing                           clumsy                              straightforward
real servers’ default gateway     load balancer                       Layer 3 switch or firewall
real servers’ static routes       routes covering the enterprise      not necessary
                                  subnets, pointing to the Layer 3
                                  switch or firewall, are needed
                                  on each server
Internet clients’ access to VIP   possible                            possible
internal clients’ access to VIP   only with source NAT                possible
real servers’ access to a         only with source NAT                only with source NAT
same-segment VIP
transparent (bridged) mode        N/A                                 possible
routed mode                       mandatory                           possible
traffic traversing the            only the load-balanced traffic      all server traffic entering or
load balancer                                                         exiting the load-balanced segment
real servers’ normal traffic      flows via the Layer 3 switch        flows via the load balancer
                                  or firewall
load balancer scaling             a smaller model may be sufficient   must be scaled to cope with both
                                                                      normal and load-balanced traffic
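
To make the parallel-mode routing rows concrete, here is what the routing could look like on a Linux real server; the addressing is hypothetical. The default route points at the load balancer so the return traffic of load-balanced flows stays symmetric, while enterprise destinations bypass it:

    # hypothetical addressing: server subnet 10.1.20.0/24,
    # load balancer self IP 10.1.20.1, Layer 3 switch/firewall 10.1.20.254
    ip route add default via 10.1.20.1          # return traffic of VIP flows goes via the LB
    ip route add 10.0.0.0/8 via 10.1.20.254     # normal enterprise traffic bypasses the LB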

If we choose inline mode combined with routed mode, the load balancer (instance) has to route the real servers’ normal traffic (e.g. management traffic to the servers), that is, to forward all packets between the server subnet and the core subnets just like a router. Cisco ACE performs this function without any extra settings. On F5 BIG-IP we have to add wildcard virtual servers because only load-balanced traffic is forwarded by default. I was advised to create one wildcard virtual server for the inbound and one for the outbound direction.

The virtual server called rd2-router-inbound is enabled on the LB_DMZEXT (external) side; its destination is the server subnet and its source is 0.0.0.0. The virtual server called rd2-router-outbound is enabled on the Server-11 (internal) side; its destination is the outside world and its source is the server subnet. Note the elegant way the routing domain is configured, via the %2 suffix. :-/ (A single router VS with destination 0.0.0.0, enabled on multiple VLANs, would probably also work.) The load balancer has its own IP address on either side (called a self IP address in F5 BIG-IP).

[Screenshot: rd2-router-inbound virtual server]
[Screenshot: rd2-router-outbound virtual server]
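
In tmsh, the two wildcard (forwarding) virtual servers shown on the screenshots could look roughly like this. The server subnet 10.2.11.0/24 is a hypothetical stand-in; the VLAN names and the %2 routing domain suffix come from the setup above:

    # inbound: enabled on the external VLAN, toward the server subnet
    tmsh create ltm virtual rd2-router-inbound destination 10.2.11.0%2:0 \
        mask 255.255.255.0 source 0.0.0.0%2/0 ip-forward ip-protocol any \
        vlans-enabled vlans add { LB_DMZEXT }
    # outbound: enabled on the server-side VLAN, toward everything else
    tmsh create ltm virtual rd2-router-outbound destination 0.0.0.0%2:0 \
        mask any source 10.2.11.0%2/24 ip-forward ip-protocol any \
        vlans-enabled vlans add { Server-11 }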

To enable load balancing for internal clients in parallel mode, source NAT must be applied. I would say this is a clear disadvantage of this setup.

[Figure: Layer 3, source NAT, parallel mode]
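
On F5 BIG-IP the simplest form of this is SNAT automap on the virtual server, which rewrites the client source address to a self IP of the load balancer so the servers’ replies come back through it (the virtual server name is the hypothetical one from above):

    # source NAT so that server-to-client return traffic returns to the LB
    tmsh modify ltm virtual web_vip source-address-translation { type automap }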

Everything is simple and possible in inline mode combined with transparent mode:

[Figure: Layer 2, no NAT, inline mode]

If we have multiple server segments, we can decide whether to use a single device/instance with multiple segments or to create a context for each segment. The figure shows an example of parallel mode with multiple segments. Its drawback is that it does not allow load-balanced VIP access from one server segment to another, due to asymmetric traffic. For such a demand you had better use a single instance in inline mode or dedicated instances in transparent mode.

[Figure: Layer 3, no NAT, parallel mode with multiple segments]

If parallel mode is used, the Internet clients’ load-balanced traffic and the internal normal traffic can be depicted as flowing on two separate traffic planes. The traffic of the two planes cannot mix; that is why the load balancer cannot serve internal clients without source NAT.

[Figure: Layer 3, no NAT, parallel mode, 3D view of the two traffic planes]


2 Responses to “Load balancer topology design (Cisco ACE, F5 BIG-IP LTM)”

  1. Eric said

    What’s your opinion of using VLAN group (“bridged mode”) in both of your scenarios?

  2. Actually, ‘VLAN group’ is the correct term for connecting VLANs using identical subnets with F5 BIG-IP LTM and for performing Layer 2 forwarding between them. If the VLAN group is translucent (default setting) or transparent, it is more or less the same as bridged mode with Cisco ACE.

    As you can see in the table, VLAN group (bridged mode) is not an option with parallel or standaside mode. (It could be set up with a few tricks but it would be no use at all.) For inline mode with F5, VLAN group is what I meant when I wrote bridged mode.
