LTLnetworker | IT networks, security, Cisco

Management network topology and asymmetric routing

Posted by ltlnetworker on August 16, 2015


We all want a management network or at least a management VLAN. Those who say they have none actually do have a VLAN used for management; it is probably just shared with ordinary users (i.e. not dedicated). But most IT people prefer a dedicated VLAN that carries no other kind of traffic and is preferably not reachable by users.

In this article we use this definition:
a management VLAN or management network is a dedicated segment for network management traffic which can be used for:

  • administering your network devices (aka device access: switches, routers, firewalls via telnet, ssh, https etc.)
  • collecting monitoring information (syslog, SNMP etc.)
  • hosting syslog, monitoring and management servers (Nagios, Tivoli, Cisco Prime etc.)
  • AAA traffic (RADIUS or TACACS+ to Cisco ACS/ISE)

A similar approach may be used for defining a server management network/VLAN, which can be used for:

  • administering your servers (RDP, telnet, ssh etc.) on a dedicated NIC (other than the application NIC)
  • managing the hypervisor under your virtual servers (e. g. VMware VMkernel interfaces)
  • collecting monitoring information (syslog, SNMP etc.)
  • accessing the out-of-band port such as iLO, CIMC, IMM port

We focus on the network management VLAN, but we will also see an example for the placement of a server management network. The server management network is usually much simpler to design: most servers residing in that VLAN do not act as a layer3 gateway, so they do not create additional routing paths. We will see that most challenges and troubles originate from the fact that some network devices do act as a layer3 gateway (especially L3 switches, routers and firewalls). From now on, the term management VLAN or management network refers to the network management segment, excluding the server management segment. In some places we will discuss a physically distinct out-of-band management network with dedicated switches, but the focus is on the management VLAN.

There are some basic characteristics that make a network easy to operate:

  • no asymmetric paths due to routing
  • no need for static routes on hosts, a single default gateway is sufficient for each server or PC
  • no need for dynamic routing protocols on hosts (e. g. RIP listener)

Note: the second item restricts the use of multiple NICs (multiple IP addresses) in a server. We will see to what extent.

Where is the management VLAN located in a typical enterprise network? Let’s see some examples:

m10-typical-LAN

Everything is simple until things are simple. (-: In this network the layer2 switches and some servers are located in the management VLAN. The only layer3 device residing in this VLAN is the core switch, which is the default gateway of the management VLAN. There are no routing problems as the single layer3 exit point is the core switch. The firewall is managed via its inside interface, so management traffic and user traffic are mixed on transit VLAN 9.

What is the difference if the Cisco ASA’s management interface is connected to the management VLAN?

m11-asymm-internal

The management workstations and servers can now access all network devices within the segment, including the ASA. But now we have multiple layer3 devices connected to the management VLAN, and this causes routing issues. It is not difficult to find cases where the return traffic chooses a different path. While layer3 switches allow this kind of asymmetric traffic, a firewall does not. If the workstation tries to access the firewall's management address 192.168.6.11, the return traffic takes a different path backwards, so the connection will be blocked by the firewall.

The next figure shows what happens if a server in the management network tries to communicate with an Internet server or you try to reach the management station from a remote access VPN (RAVPN) client. The red inbound traffic would choose a different path on the ASA so the connection is broken:

m13-asymm-external

The root cause of the problem is that multiple layer3 devices have the management subnet as a connected route in their routing tables so there is a shortcut when return packets would leave the firewall. It does not fix the problem if the management-only command is set on the ASA’s Management0/0 interface because it does not remove the connected route from the ASA’s routing table. Actually the command blocks the transit traffic between Man0/0 and other interfaces but this is not what we need. Return packets are still hijacked due to the connected 192.168.6.0 route and not forwarded on the symmetric path.
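For reference, the interface setup under discussion looks roughly like this on the ASA (a sketch only; the nameif, security-level and address are illustrative):

```
interface Management0/0
 management-only
 nameif management
 security-level 100
 ip address 192.168.6.11 255.255.255.0
```

Even with management-only set, 192.168.6.0/24 remains a connected route in the ASA's routing table, which is exactly the problem.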

It does not solve the problem if you change the default gateway to the firewall's address (192.168.6.11). Then another direction becomes asymmetric: sessions towards internal subnets will be blocked by the firewall due to the asymmetric routing. (No figure, it is homework for the reader.) This time it is not the return traffic that is dropped on the firewall but all the forward packets, as the ASA does not see all three packets of the TCP 3-way handshake.

I consider this a general rule: multiple layer3 exit points (or entry points) in a subnet/VLAN are always a bad design, as they result in asymmetric routing or redirects, or require static routes on hosts. (As long as there is no firewall among these devices, e.g. you just define a management VLAN interface on multiple layer3 switches, the result is similar asymmetric flows, but at least switches will not block the traffic for that reason. I still don't think it is a good idea.)

What are the possible solutions for this routing anomaly?

  1. Tweak the routing tables on the hosts in management VLAN. With careful tuning, you can add static routes of internal subnets towards the switch and a default route towards the firewall (external subnets are reachable via the firewall). This does not scale well and it is rather clumsy.
  2. You may decide to avoid any routing and perform all management tasks on a jump server. One NIC is in the management VLAN and the other NIC is in a routed server VLAN. Probably sooner or later some servers will still need to receive or send traffic outside the management VLAN (forwarding syslog, downloading patches, authentication against an AD server etc.), so this solution cannot fulfil all requirements.
  3. Avoid such asymmetric loops by restricting the number of layer3 exit points to one. For the most secure and flexible result I suggest this exit point should be an interface (or VLAN subinterface) on the firewall. For best results, do not choose ASA’s Management0/0 interface as it is often restricted to 100 Mbit/s and it cannot be the default gateway for the internal CX and FirePOWER module. Another interface is just as good and performs all functions perfectly. This is my preferred alternative so let’s discuss it further.

So in option 3 the management VLAN is routed exclusively via the firewall, and security isolation is solved at the same time. Ordinary users and servers are not able to access the switches or the management servers.
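Such a single exit point can be sketched on the ASA as a VLAN subinterface (VLAN number, nameif, security level and address are illustrative assumptions; the switchport towards the ASA must be a trunk carrying the management VLAN):

```
interface GigabitEthernet0/0.6
 vlan 6
 nameif mgmt
 security-level 90
 ip address 192.168.6.1 255.255.255.0
```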

Management VRF in layer3 devices

Layer2 switches (e. g. Catalyst 2960) or layer3 switches in layer2 mode (e. g. Catalyst 3750-X with no ip routing command) are easy to place into the management network:

interface Vlan2
 no shutdown
 ip address …
!
ip default-gateway …

But how should we manage our layer3 devices (switches and routers)? The answer I prefer: use a management VRF in them to avoid the connected route appearing in the global routing table. If you assign the management IP address to a dedicated VRF, nothing interconnects the management address with any user subnets in the global VRF, so there is no shortcut. I usually choose the name nm (it stands for network management) for this VRF as it is short and easy to type (you will soon regret using uppercase letters or long names in VRFs). The figure shows the switch with such a VRF created:

m25-switch-vrf-nm

So a VLAN interface (SVI) is created and assigned to vrf nm. It behaves like a separate host inside the switch, connected to the firewall-separated management VLAN. The default gateway in this VRF is the firewall, of course. Additionally, you should block telnet and ssh from any subnets except the management subnet so that no one can log in via the switch's global VRF IP addresses.

access-list 4 permit 192.168.6.0 0.0.0.255
line vty 0 4
 access-class 4 in vrf-also

(Actually we cannot restrict the interfaces on which the switch accepts ssh, only the client subnets. The management plane is accessible through all the switch's own IP addresses: all SVIs and layer3 interfaces, including those in the nm VRF, the factory management VRF and the global VRF.)
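Putting the pieces together, a minimal sketch of the vrf nm setup on an IOS switch (the VLAN number and addresses are illustrative; the older ip vrf syntax is used here to match the commands shown later in this article, while newer releases use vrf definition):

```
ip vrf nm
!
interface Vlan6
 ip vrf forwarding nm
 ip address 192.168.6.10 255.255.255.0
!
ip route vrf nm 0.0.0.0 0.0.0.0 192.168.6.1
```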

Note that if this is the same switch that routes the management VLAN then interface VLAN2 must be an SVI in the global routing table and you cannot connect vrf nm to VLAN2 inside the chassis. This figure illustrates the impossible setup:

m08b-switch-vrf-nm

However, there is already a factory management VRF in most Cisco switches with a dedicated management Ethernet port. Why not use that port and VRF? Unfortunately, the vrf mgmtVrf (the name varies among Cisco switch families) is restricted to that out-of-band (OOB) port: a VLAN interface cannot be assigned to it. So if you really want to use that VRF, you need a cable connecting the OOB port to a regular switchport. It can even be a port on the same switch; you may feel uneasy about it, but it does not actually matter. Remember, this is not real OOB network access, just a method to connect the management VRF:

m08c-switch-OOB

As the switch now has two layer3 interfaces in the same VLAN with the same built-in MAC address, one of them must be changed to avoid a conflict.
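One way to resolve the conflict is to assign a locally administered MAC address to one of the two interfaces, for example the SVI (the address below is purely illustrative):

```
interface Vlan6
 mac-address 0200.0000.0001
```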
If you prefer your own VRF, you can assign a front switchport to it and set up a pseudo-OOB port like this:

m08d-switch-pseudoOOB
Usually it is preferred to protect the management network with a firewall. Let’s create a routed ASA (sub)interface for the management VLAN but not the factory Management0/0 interface:

m06-ASA-switch-vrf-nm


This is the setup I regard as universal.
There is no asymmetric routing, the management network has a single exit point (ASA G0/0), and it is firewall-protected.

As the switch is now managed in vrf nm, the common management protocols (SNMP, syslog, RADIUS etc.) need to be configured in their VRF-aware forms:

aaa group server radius ISE
 server-private 192.168.6.54 auth-port 1812 acct-port 1813 timeout 6 key key123
 ip vrf forwarding nm
!
logging source-interface Vlan6 vrf nm
logging host 192.168.6.60 vrf nm
!
snmp-server user admin netmanRO vrf nm v3 auth sha Cisco123 priv aes 128 Cisco123
snmp-server trap-source Vlan6
snmp-server host 192.168.6.71 vrf nm version 3 priv admin

NTP can be an exception: the layer3 switch is often used as an NTP server by application servers, so we may choose to run NTP in the global VRF (use a loopback address as the NTP server address for clients).
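A sketch of that arrangement (the loopback address is illustrative; ntp master makes the switch serve time even without an upstream reference, so in practice you would also configure upstream ntp server entries):

```
interface Loopback0
 ip address 10.255.255.1 255.255.255.255
!
ntp source Loopback0
ntp master
```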

Of course, you can build an out-of-band management network with one or more dedicated switches. This is the intended way to use the OOB port:

m07-ASA-OOBdedicated-switch

Caution: the OOB management port of the Catalyst 3560/3750 switch is not assigned to a separate VRF. It has a built-in filter blocking traffic between the front ports and the rear management port, but it still affects routing.

I still don’t suggest managing the ASA on M0/0 even in this case. Even if the management switch is out-of-band you will find it convenient to route the subnet via the firewall. (Sooner or later there will be a demand to access that network from another subnet. It is not easy to keep it really out-of-band.)

So connecting the ASA's management interface to the network can be separated from the decision of how you manage your ASA firewall. For that decision it is important to keep in mind that you always have to initiate a management connection (ssh, https) from the zone of the targeted IP address of the ASA. Management access to an interface other than the one from which you entered the ASA is not supported. (9.4 CLI config guide) [Cisco online docs are very informative so I'm quoting some sentences word for word, shown in green] For example, if you chose G0/0 as your ASA interface in the management VLAN, your management host cannot be located behind the G0/2 interface; you can only initiate a management connection from the right direction (from the G0/0 zone). The only exception to this rule is through a VPN connection.

Management interface in Cisco ASA

Non-X models' (ASA 5510, 5520, 5540, 5580) management interface is a Fast Ethernet interface designed for management traffic only, and is specified as Management0/0. (7.0 config guide) Unfortunately it does not have a dedicated VRF. You can, however, use it for through traffic if desired by removing the management-only command. The 9.0 CLI config guide adds a bit more information:
By default, the Management 0/0 interface is configured for management-only traffic (the management-only command). For supported models in routed mode, you can remove the limitation and pass through traffic. If your model includes additional Management interfaces, you can use them for through traffic as well. The Management interfaces might not be optimized for through-traffic, however.

Actually, you may start wondering what advantage the Management interface has compared to using a VLAN subinterface on a normal Gigabit interface to manage the firewall…

ASA X-models' (5506-X .. 5555-X) management interfaces are not capable of through traffic, i.e. the management-only command cannot be removed. However, you can use any interface as a dedicated management-only interface by configuring it for management traffic, including an EtherChannel interface. (CLI 9.4 config guide) The Management interface has the following characteristics:

  • No through traffic support
  • No subinterface support
  • No priority queue support
  • No multicast MAC support
  • The software module shares the Management interface. Separate MAC addresses and IP addresses are supported for the ASA and module. You must perform configuration of the module IP address within the module operating system. However, physical characteristics (such as enabling the interface) are configured on the ASA.

There is a matrix on management interface numbering in these guides as not all models use the Management0/0 interface name.

UPDATE:
Cisco ASA software version 9.5 has introduced a separate routing table for management-only interfaces. However, testing 9.5.1 shows that IP addresses from the same subnet still cannot be assigned to a management and a data interface.

ASA CX, IPS and FirePOWER (SFR) software modules

These modules share the management interface with the ASA. In other words, the module's management interface is placed behind the physical ASA management interface, and the traffic must flow through that physical interface. The Cisco CX and FirePOWER configuration guides contain some basic advice on how to plan the management connections. The 9.1 guide's CX chapter shows two setups: If you have an inside router and If you do not have an inside router:
m26-CCO-inside-router   m27-CCO-do-not-have-inside-router

The first scenario is effectively the same as in figure m11 above. The Cisco guide states that you can overcome the asymmetric routing issue by adding a static route on the ASA to reach the management network through the inside router. That could only work if the connected route could be overridden by the static route, and according to my lab test the device is not capable of that. However, we could exploit the longest match rule as a dirty trick and add the subnet in smaller parts (e.g. two /25 routes), but I haven't tested that.
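If you want to experiment with it, the untested trick would look something like this on the ASA (assuming a 192.168.6.0/24 management subnet; the inside router's address is an illustrative assumption):

```
route inside 192.168.6.0 255.255.255.128 192.168.9.2
route inside 192.168.6.128 255.255.255.128 192.168.9.2
```

The two /25 static routes are more specific than the connected /24, so longest match would prefer them for forwarding.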

As for the second scenario (if you do not have an inside router), there is no separate management network, yet it resembles my preferred choice shown above. You can use it even if you do have a separate management network:

  • ASA is managed from a normal interface instead of the Management0/0 interface
  • ASA’s Management0/0 is not used for the firewall (no nameif, no ip address), it is used by the module only.
  • the software module’s default gateway (and everyone else’s default gateway in that VLAN) is an ASA normal (sub)interface

In the ASA 9.4 CLI guide's FirePOWER chapter this is already the only recommended network deployment. I am glad to see that Cisco engineers finally came to a similar conclusion as I did. For the ASA 5506-X, 5508-X, and 5516-X, the default configuration contains no nameif and no ip address under the management interface.
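In this deployment the ASA side of the management interface is left unconfigured so that the physical port serves the module only, something like:

```
interface Management0/0
 management-only
 no nameif
 no security-level
 no ip address
```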

ASA CX, IPS hardware modules

If you use a hardware module (such as in ASA 5585-X) it has an external interface for management. Assuming your management network is set up in a good topology the module interface can be simply connected to the management VLAN and it will be accessible from the necessary places.

Server management

This chapter describes those interfaces on a server that are regarded as dedicated management interfaces. Today's servers almost always have an iLO/CIMC/IMM port providing all the physical console functions remotely (monitor, booting, keyboard, virtual CD/DVD media, power button etc.). While these ports could be placed into a dedicated out-of-band server management network, the server management network in this article additionally contains VMware vSphere ESXi host (vmk0) and other hypervisor management interfaces. This network should be in a firewall DMZ too, and the complete picture looks like this:

m14-ASA-switch-vrf-nm-sm

Out-of-band management network

As we mentioned, a real out-of-band management network consists of dedicated OOB switches. It is confined to a layer2 domain: a single site or multiple layer2-connected sites. If you have multiple layer3-connected sites then a separate OOB network is needed on each site. It is not likely that you create a dedicated OOB WAN network (dedicated OOB routers and telco lines) to interconnect these OOB networks.

Accessing your devices

You need to log on to network devices even if your management network is not routed or not accessible from any point of the network. As long as you stay in the management network (either OOB or just a VLAN) it is trivial. But most of the time you are not in the management network. What solutions do you have?

  • Setting up a dedicated physical management station at a convenient place that is easy and quick to reach. The PC is placed into the management network. But even if it is close, it is not convenient to move data between your workstation and this PC.
  • Setting up a dedicated virtual machine in the management network. By connecting to the virtual console it is possible to reach the devices. If the hypervisor supports cut and paste on the console, data transfer can be accomplished.
  • Setting up a (physical or virtual) jump server (terminal server) with two NICs. One NIC is in the management network, the other NIC connects to the company network. Only the second NIC has a default gateway. The jump server provides access within the management VLAN. This is a good solution for login access but not sufficient for other kinds of management traffic. If syslog or AD traffic needs to reach servers outside the management network, then the management subnet still needs to be routed (symmetrically), even though the jump server does not depend on this routing. The preferred 'single exit on firewall' method described above can be combined with a jump server. The jump server is not a layer3 device, no one uses it as a next hop, so its connected subnet is not a problem and does not cause asymmetric routing. This setup is shown in the next figure:

m28-jump

The analysis so far can be reversed: a suboptimal / badly designed / shortsighted topology may support the IT staff’s needs for a long time, provided that

  • login is performed only from a dedicated workstation
  • that specific workstation has no IP communications needs to other or external subnets
  • a jump server is available with two NICs or has an out-of-band access method (or the management station itself is a jump server)

As the main point is to separate the data plane VRF from the management traffic's VRF, there is an alternative: move all your data plane traffic from the global routing table into another VRF. The global VRF is then dedicated to management traffic and everything else uses the data VRF. The obvious advantage is the simple non-VRF format of the syslog, snmp-server, access-class etc. commands. But you must use vrf data on all other routed interfaces, as well as in all show commands during troubleshooting. You no longer type ping 10.1.1.1; instead, you type ping vrf data 10.1.1.1. (Uppercase and long VRF names are a big mistake, have I mentioned that already?)

About the isolated/floating/out-of-band/nonrouted management network concept

As I mentioned in some places above, the starting concept often targets an idealistic management network that is neatly isolated from practically everything. Let me give some examples when this concept will encounter challenges:

  • Cisco Secure ACS needs to communicate with AD servers if it uses external Active Directory database for AAA
  • Cisco ISE: same situation
  • syslog servers or generally SIEM systems may be located outside of management VLAN as their collection scope is larger than just the network devices

In most cases you will be forced to give up the isolated/nonrouted way sooner or later so you had better be prepared and create a flexible structure from the beginning.

Multiple firewalls on the network

Things get complicated as soon as you have an additional firewall that needs to be managed somehow. The firewall-isolated management network concept shown above cannot simply be extended. Unlike layer3 switches, some firewalls (such as pre-9.5 Cisco ASA and Check Point) cannot separate their management port into a VRF. Multiple firewalls connected to the same management network (or zone or DMZ) cause layer3 loops and asymmetric routing as I already described. The result is similar asymmetric flows, which the Cisco ASA blocks:

m15-2fw-asymm-internal

The same setup with Check Point firewalls:

m15cp-2fw-asymm-internal

fw21 blocks the connection because of Anti-Spoofing on eth0: an IP packet with a 192.168.60.x source address cannot arrive on the eth0 management interface. It is recorded in the logs:

m34-Tracker-m15-2fw-asymm-internal

But if Anti-Spoofing is disabled on the interfaces (strongly discouraged), the connection succeeds: asymmetric traffic is allowed on Check Point because the connection table does not record the ingress and egress interfaces of the initial packet. So Check Point firewalls block asymmetric traffic only as long as Anti-Spoofing is properly configured.

Most Check Point deployments use a dedicated interface for management so that the Security Management Server accesses all security gateways directly. A gateway located behind another gateway would impose a risk that policy installation traffic or logs cannot traverse the intermediate firewall, so the gateway could become unmanageable in a critical situation:

m16-2CPfw-notdirect

Should we create a separate Check Point management zone for each gateway and a separate NIC in the management server for each zone? Well, this would be quite a good solution to all the problems described so far, but Check Point products are not designed to leverage a multi-interface SMS. While such an SMS could reach all the gateways via the correct NIC, logging traffic probably would not work. The Logs page in the gateway properties lists a management server and its IP address; there is no mention of a second, third etc. IP address, all you can see is the primary address:

m18-CPsg-LogServers

The gateway probably would not be able to send the logs to the closest IP address of the server, so this setup does not work. (However, it is possible to have a dedicated interface for firewall management and another interface for jumping.)

A Check Point firewall can be managed on any regular interface used for other traffic, but usually there is a dedicated firewall management zone where the Security Management Server (SmartCenter) is located. We place the management server in the CPmgmt zone. Now we decide to use a jump server to access the management server in order to avoid asymmetric routing. The jump server can even run the SmartConsole client:

m19-2CP-SmartConsole-jump

The result is that the purple CPD traffic will indeed be symmetric (subnet-local in fact), but the management server's non-local traffic (towards checkpoint.com for example) will still be asymmetric. (Remember, adding static routes to servers is forbidden in this article.)

But the jump server could be the Check Point management server itself by adding a second NIC as in this figure:

m20-2CP-SMSjump

All the Check Point management traffic stays within the CPmgmt subnet, so the default gateway can be on the second interface. So we can accomplish a reasonable setup with Check Point gateways even though they do not have a separate VRF for management traffic. But this is only true as long as the security gateway communicates exclusively with the management server. That is, you log in to the gateways via ssh or WebUI/https only from the management server. (WebUI needs a browser, so this is possible only with a management server on a Windows platform.) As soon as you need to access the gateways from another subnet (or you set up direct syslog sending from the gateway to a syslog server), you have to determine which firewall interface points towards the target and use its IP address instead of the CPmgmt IP address. So you cannot use the firewall's CPmgmt address as a universal management address.

So a typical Check Point security gateway is not capable of separating the management interface into a dedicated VRF. With VSX, however, it is possible to use a dedicated virtual system for management and handle all the other zones in other virtual systems, so all of them can be managed via the management virtual system. (Note: VSX needs extra licenses.) A similar workaround with Cisco ASA is to use multiple contexts.

Check Point ClusterXL

At this point you might think a simple Check Point cluster inherently contains the possibility of an asymmetric routing problem, as multiple layer3 routing devices are attached to the same segments. In fact, you might have experienced that you can log in to the active firewall but not to the standby firewall. The problem is that you target the management interface rather than the closest interface. In other words, a connection from one side of the ClusterXL destined to the physical IP address of a non-active cluster member on the other side of the ClusterXL fails:

m23-CP-cluster-noasymm

But this issue has a different cause: the cluster is treated as one layer3 entity and the active member does not forward such an IP packet via a regular interface. As per sk42733, the packets can be forwarded to the standby member via the sync link by tuning a certain kernel parameter, if the standby firewall needs to be accessible from the other side.
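A heavily hedged sketch of what such tuning could look like on a cluster member (the parameter name is my recollection of sk42733; verify the exact name and the persistent configuration method in the SK before touching a production cluster):

```
fw ctl set int fwha_forw_packet_to_not_active 1
```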

Cisco Web Security Appliance

Cisco WSA can be configured to use a dedicated management interface. In this case, separate routing tables are used: a Management routing table and a Data routing table. You have to add a default route to each. Thanks to this architecture, WSA will not cause asymmetric traffic.

Cisco Email Security Appliance

Cisco ESA has a management interface and one or more data interfaces. But there is no separate routing table (VRF) for management; a single system-wide default route is configured. Consequently, the asymmetric routing problems described above should be considered.

F5 BIG-IP LTM

BIG-IP's management interface routing is mostly separated from the data plane (TMM routes). It is not an entirely separate routing table but a subtable that takes priority for management traffic. As the F5 documentation states:

Management routes are routes that the BIG-IP system uses to forward traffic through the management interface. For traffic sourced from the management address, the system prefers management routes over TMM routes, and uses the most specific matching management route. If no management route is defined or matched, the system uses the most specific matching TMM route.
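On the BIG-IP such a management route can be created in tmsh, for example (the gateway address is illustrative):

```
create /sys management-route default gateway 192.168.6.1
```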

Some other informative F5 docs:

https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/tmos_management_guide_10_1/tmos_routes.html
https://support.f5.com/kb/en-us/solutions/public/10000/200/sol10239.html

FortiGate firewall

FortiOS supports multiple VDOMs, and there is a built-in feature to choose which VDOM is the management VDOM. Note that the management VDOM needs to have Internet access for FortiGuard services. The VDOMs can even be interconnected inside the chassis by virtual inter-VDOM links. It is worth examining the asymmetric routing risks in this complex structure.

With a single Root VDOM the same asymmetric routing problems can occur as we saw in the figures with two Cisco ASAs or two Check Point firewalls.

It is possible to create other VDOMs in addition to the Root VDOM. Let’s examine the routing if Root VDOM is the management VDOM (with Internet access) and all other interfaces are placed into a Data VDOM. There is an inter-VDOM link to provide Internet access to the zones:

m15fg-2fw-asymm-internal

Again, we have a management segment with multiple connected layer3 routing devices (the Root VDOMs are routing entities), so there is no surprise in the similar asymmetric routing result. FortiGate firewalls do not tolerate asymmetric traffic by default, although there is a command to permit it. But it is more advisable to create a good design.

Fortinet helps you do that easily. They have created a function very similar to having an out-of-band management port. It is called dedicated management port:

This port is in the hidden VDOM dmgmt-vdom, which cannot be made the management VDOM. Therefore, the dedicated management port supports CLI access for configuration but does not permit management traffic such as firmware update or unit registration.

This figure shows that management traffic handling is shared between the Root (management) VDOM and the dedicated management VDOM:

m24-2fg-dmgmt

You need one firewall (or layer3 switch if no protection is required) which routes the management network and which is the default gateway for all nodes in the segment. So we leave this central FortiGate with a single VDOM. All the other FortiGate firewalls should be set to use the dedicated management port.

This is the config for the dedicated management port and VDOM:

#global_vdom=1
#dedicated-management=dmgmt-vdom
config system dedicated-mgmt
 set status enable
 set interface "mgmt"
 set default-gateway 192.168.6.1
end
config system interface
 edit "mgmt"
  set vdom "dmgmt-vdom"
  set ip 192.168.6.21 255.255.255.0
  set allowaccess ping https ssh
  set type physical
  set dedicated-to management
 next
end

Takeaways

To sum it up, the key takeaways are the following principles and I recommend these as the best practice:

  • Each layer3 segment should have a single exit point (routing device). If not, problems arise; the simplest form is described in this article by Network Guy. An exit point is any layer3 forwarding device with an interface in this segment that has at least one other interface in another subnet in the same VRF.
  • Each layer3 routing device should be connected to the management network on a dedicated VRF (or OOB management port). A single exception is the device that routes the management network. Cisco ASA and Check Point firewalls are not very suitable for that.
  • Do not use multiple layer3 network interfaces in a server. (NIC teaming is OK, it does not mean multiple layer3 subnets.) All challenges can be solved by connecting the server to a single subnet. A multihomed server would need special handling of routing.
  • If a multihomed (multi-IP-address) server is absolutely inevitable, it should have only one interface that wants to communicate beyond the local subnet. (I. e. it should only have a single static route in the routing table and that is the default gateway.)
  • If there is a layer3 segment with multiple exit points, it causes no asymmetric routing problems as long as the interfaces connected there communicate only within the local subnet. (That's why a single-homed Check Point Security Management Server in such a segment will fail and its symmetric traffic cannot be guaranteed: it wants to connect to checkpoint.com as well as an internal SIEM server etc.)
  • The management network should have a single exit point (preferably a normal firewall interface); all other layer3 devices' management ports connecting to this VLAN should have a dedicated routing instance (VRF, VDOM).
  • UPDATE: As Cisco ASA 9.5 now supports a separate routing table for management interface, Check Point remains the only one among the major firewall vendors lacking this feature.

UPDATE:
Das Blinken Lichten has posted a valuable article on the same topic from the server standpoint. As far as I understand, VRFs are called namespaces in Linux.

 
