In many respects, a dual firewall topology is similar to the "one out of two" (1oo2) protection schemes used in industrial process control systems. In a 1oo2 scheme, either of two sensors identifying a failure will trigger an alert; conversely, both must identify a condition as safe for the process to proceed. This protection scheme has been used effectively to mitigate risk in industrial process control systems for many years.
Network security can benefit from the lessons learned in the evolution of process control systems. In an industrial process control system, it was recognized long ago that the failure of a single critical sensor signaling an unsafe condition could have catastrophic results. To mitigate this risk, designers devised a scheme in which, instead of relying on a single sensor measuring a process variable, two separate sensors were used and each sensor had a "vote" on whether conditions were safe. The voting logic would consider the vote of each sensor and, if both sensors did not agree that conditions were safe, the system would initiate a safe shutdown to prevent a catastrophic failure. Hence, to continue normal operations, both sensors must agree conditions are safe.
A dual firewall topology is similar to an industrial process control system 1oo2 voting scheme in that both firewalls must agree that a received packet does not pose a security risk (conditions are safe) or the packet is denied and not permitted to be passed to the protected network. Hence, to continue normal operations (allowing packets to pass through the firewall), both of the two firewalls (sensors) must agree conditions are safe.
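The analogy can be sketched in a few lines of Python. This is an illustrative model of the voting logic only, not firewall code:

```python
def permit_packet(fw1_allows: bool, fw2_allows: bool) -> bool:
    """1oo2 voting applied to firewalls: a packet is forwarded only
    if BOTH firewalls vote that it is safe; either firewall alone
    can deny it, just as either sensor alone can halt the process."""
    return fw1_allows and fw2_allows

assert permit_packet(True, True) is True    # both agree: forward
assert permit_packet(True, False) is False  # second denies: drop
assert permit_packet(False, True) is False  # first denies: drop
```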
I have seen a clear increase in the use of dual firewall topologies in the enterprise network security environment. Unfortunately, many of the deployments I have seen include a critical error that eliminates most, if not all, of the risk mitigation a properly designed topology provides. While they have, indeed, used two firewalls in series, the system designer has placed a packet filtering firewall in front of an application proxy firewall under the mistaken assumption that this dual firewall topology will increase risk mitigation. The bottom line in this topology is that all that has been accomplished is a decrease in reliability and manageability with no increase in risk mitigation.
Let me explain why I believe this topology is incorrect and why many are now unknowingly living with a false sense of security derived from relying on it. Hackers have largely exhausted the available "protocol level" attacks up through layer 4; today, the majority of attacks launched against private enterprise networks via the Internet are application level attacks. In a dual firewall topology where a packet filtering firewall sits in front of an application proxy firewall, an application level attack passes through the first firewall completely unchecked, and the second firewall is your only defense. There is no increased risk mitigation when the first firewall never inspects the packet's payload and you are relying entirely on the second firewall.
Some might argue that this topology increases security because the attacker has to break through the first firewall and is then confronted by a second layer of defense at the application level firewall. This argument fails because, in fact, the attacker does not have to "break" through the first firewall to deliver an application level attack. The attack simply passes through the ports the packet filter leaves open; the first firewall never detects it, as if the application level attack did not exist. The only potential for risk mitigation is in the second firewall's application proxy. As far as the attacker is concerned, during an application level attack in this topology the first firewall does not exist.
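A toy sketch makes the blindness concrete. The port set, rule, and payload below are all hypothetical, chosen only to illustrate header-level versus payload-level inspection:

```python
# Illustrative only: why a layer-4 packet filter cannot see an
# application level attack carried on an open port.
OPEN_PORTS = {80, 443}  # services the packet filter permits

def packet_filter_allows(dst_port: int) -> bool:
    # Inspects only header fields (up to layer 4): open port = pass.
    return dst_port in OPEN_PORTS

def proxy_allows(payload: str) -> bool:
    # A (toy) application proxy check that inspects the payload,
    # e.g. rejecting an obvious path traversal attempt.
    return "../" not in payload

attack_port = 80
attack_payload = "GET /../../etc/passwd HTTP/1.0"

# The packet filter passes the attack untouched; only the proxy,
# which actually reads the payload, can stop it.
assert packet_filter_allows(attack_port) is True
assert proxy_allows(attack_payload) is False
```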
The only possible benefit to the enterprise in the topology described above would be that, by screening packets, the first firewall may enhance the performance of the second firewall. If you only allow those services supported by the application proxy firewall (second in line) to be passed by the packet filtering firewall (first in line) you eliminate the CPU load of having to screen all of the packets on the second firewall. Personally, I believe your money would be better spent purchasing a faster hardware platform for the application level firewall than spending money on a packet filtering firewall in order to reduce the load on the application level firewall.
Reliability is also a consideration in a dual firewall topology. With two firewalls in series, you are reducing your overall reliability. A failure of either firewall, whether of the firewall hardware or due to an attack against a vulnerability in the firewall software or underlying operating system, can shut down your Internet connectivity. A firewall is not necessarily the "holy grail." Firewalls themselves are not immune to vulnerabilities, and the risk increases when a firewall runs on top of a commercial operating system, because of the vulnerabilities regularly found in those operating systems.
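A quick, illustrative calculation shows why series placement degrades availability. The 99% figure per firewall is an assumption for the sake of arithmetic, not vendor data:

```python
# Two firewalls in series: BOTH must be up for traffic to flow,
# so their availabilities multiply and the chain is less reliable
# than either device alone. Figures are assumed, not measured.
fw1_availability = 0.99
fw2_availability = 0.99

series_availability = fw1_availability * fw2_availability
print(round(series_availability, 4))  # 0.9801, worse than either alone
```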
Multiple firewalls in a 1oo2 topology – Getting it right
Increased risk mitigation is clearly attainable in a 1oo2 topology through the use of a multiple firewall topology between the public Internet and private networks. However, in order to attain this higher risk mitigation there are some simple rules that must be followed.
- Both firewalls must inspect all seven layers of the OSI model. Using a packet filtering firewall that inspects only up to layer 4 as your first firewall, with a firewall that inspects all seven layers as your second, effectively eliminates any added risk mitigation while decreasing overall reliability and manageability compared to a single standalone firewall.
- The inspection methodologies must use disparate technology. Two firewalls that both inspect all seven layers of the OSI model but rely on the same software and inspection methodology provide little, if any, additional risk mitigation while decreasing overall reliability compared to a standalone firewall.
- The firewalls must operate on top of disparate operating systems. Using the same operating system on both firewalls reduces risk mitigation, since a single exploit of the operating system can take out both firewalls.
With current technology, industrial process control system designers have gone further in increasing risk mitigation and have effectively solved the reduced reliability of the one out of two (1oo2) voting scheme by developing two out of three (2oo3) voting schemes that add redundancy to the voting logic. Inherently, 2oo3 voting schemes offer measurably higher risk mitigation while increasing overall reliability through redundancy at key failure points in the system. Here the process is halted only if at least two of the three sensors report a failure, so a single faulty sensor can neither trigger a spurious shutdown nor mask a real one.
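The 2oo3 logic can be sketched the same way as the 1oo2 example; again, an illustrative model rather than real safety-system code:

```python
def process_safe(votes: list[bool]) -> bool:
    """2oo3 voting: the process continues only while at least two of
    the three sensors vote 'safe'; any two 'unsafe' votes halt it."""
    assert len(votes) == 3
    return sum(votes) >= 2

# A single failed or faulty sensor neither halts the process nor
# outvotes two healthy sensors reporting a real failure.
assert process_safe([True, True, True]) is True
assert process_safe([True, True, False]) is True    # one bad sensor tolerated
assert process_safe([True, False, False]) is False  # two agree: shut down
```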
In network security, a 2oo3 firewall topology is likely to be too complex and expensive to deploy and manage. At a minimum, however, we can learn from the designers of the industrial 2oo3 scheme and obtain a cost-effective increase in reliability, at least with respect to the firewall hardware, through the use of redundancy in 1oo2 multiple firewall topologies.
By using high availability technology to create two pairs of redundant firewalls in a 1oo2 voting scheme, you can mitigate a majority of the reliability issues related to firewall hardware while providing higher risk mitigation.
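The arithmetic behind that claim, again assuming an illustrative 99% availability per firewall, looks like this:

```python
# Availability sketch for a 1oo2 topology built from two
# high-availability (active/standby) firewall pairs in series.
r = 0.99                   # availability of one firewall (assumed figure)

pair = 1 - (1 - r) ** 2    # a pair is down only if BOTH members fail
chain = pair * pair        # the two redundant pairs still sit in series

print(round(pair, 4))      # 0.9999
print(round(chain, 4))     # 0.9998, versus 0.9801 for two single
                           # firewalls in series with no redundancy
```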
The ability to easily manage your 1oo2 firewall topology is critical to its long term success. You need to be able to manage both firewalls together, as if they were one, to minimize configuration issues and errors. With a centralized management scheme on the 1oo2 firewall topology, the administrator has to learn only a single GUI and manage a single security policy. If a change is made to the "single policy" on the central manager, it is automatically published to both firewalls in their respective native data formats.
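The single-policy idea can be sketched as follows. Both rule syntaxes here are hypothetical stand-ins for each device's native format; no real vendor product is implied:

```python
# One abstract policy, published to two firewalls in their own
# (hypothetical) rule formats: edit once, both devices stay in sync.
policy = [
    {"action": "allow", "proto": "tcp", "port": 443},
    {"action": "deny",  "proto": "tcp", "port": 23},
]

def to_packet_filter(rules):
    # Hypothetical syntax for the first firewall.
    return [f"{r['action']} {r['proto']} dport {r['port']}" for r in rules]

def to_proxy_config(rules):
    # Hypothetical syntax for the second firewall.
    return [f"service {r['proto']}/{r['port']} => {r['action']}" for r in rules]

# A single change to `policy` regenerates both configurations,
# eliminating drift between the two firewalls.
for line in to_packet_filter(policy) + to_proxy_config(policy):
    print(line)
```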
In order to meet the requirements for disparity of the filtering methodology and disparity of the underlying operating system, network security system designers have typically had to source firewalls from separate firewall vendors. Managing firewalls from different vendors can be problematic, since most commercial product vendors are not willing to share their intellectual property with competing application level firewall vendors. Historically this has resulted in the inability of most application firewall vendors to offer a centralized management product that was capable of managing products from multiple vendors. However, this is now beginning to change. Industry consolidation and the development of next generation firewall technologies have led some vendors to develop management capabilities that could handle their existing products and their next generation products as well as products acquired through consolidation. These vendors are now able to offer 1oo2 firewall topology solutions that meet the guidelines for disparity in the firewall technology and disparity in the operating system along with comprehensive centralized management.
Using two firewalls in a multi-layer dual firewall topology (1oo2) can afford a beneficial increase in risk mitigation without negatively impacting reliability and manageability. However, unless done properly, there will be no appreciable increase in risk mitigation and, further, it will cause a decrease in reliability and manageability.
Due to consolidation in the firewall industry as well as development of next-generation firewalls, it is possible today for the network security designer to acquire a bundled multi-layer dual firewall topology (1oo2) system from a single vendor that will meet all of the requirements of a properly configured topology, including redundancy, while providing a single management interface to reduce the management burden.
Paul A. Henry MCP+I, MCSE, CFSA, CFSO, CCSA, CCSE, CISM, CISSP, CISA, ISSAP, is Senior Vice President of CyberGuard Corp.