Firewalls are humming along, intrusion detection systems are installed and the incident response team is ready for action. This will probably go a long way towards creating a more secure enterprise computing environment.
Let's look at it through the prevention-detection-response model. Prevention is most likely handled by the organization's firewalls and hardened hosts and applications. Intrusion detection systems (IDS) seek to provide detection, while a team of experts armed with forensic and other security tools provides response. Admittedly, the above picture is a grand simplification, and the separation between prevention, detection and response is itself artificial to a large degree. Firewalls greatly help detection by providing logs of allowed and denied connections, an IDS can be configured to respond to incidents automatically, and security professionals are at the core of all three components.
The complex interplay between prevention, detection and response is further complicated by the continuous decision-making process: "what to respond to?", "how to prevent an event?", and so on. Such decisions are based on the information provided by the security infrastructure components. Paradoxically, the more security devices one deploys - the more connections the firewalls block, the more alerts the detection systems send - the harder it becomes to make the right decisions about how to react.
What are the common options for optimizing the security decisions made by the company IT executives? The security information flow needs to be converted into a decision. Attempts to create a fully automated solution for making such a decision, some even based on artificial intelligence, have not yet reached a commercially viable stage. The problem is thus to create a system to reduce the information flow sufficiently and then to provide some guidance to the system's human operators in order to make the right security decision.
In addition to facilitating decision-making in case of a security event (defined as a single communication instance from a security device) or an incident (defined as a confirmed attempted intrusion or other attack), reducing the information flow is required for implementing security benchmarks. Assessing the effectiveness of deployed security controls is an extremely valuable part of an organization's security program. Such an assessment can be used to calculate a security return on investment (ROI) and to enable other methods for marrying security and business needs.
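The ROI idea above can be made concrete with a back-of-envelope calculation. This is only an illustrative sketch: the formula is the standard (benefit - cost) / cost ratio, and the dollar figures are hypothetical assumptions, not numbers from the article.

```python
# Illustrative security ROI sketch; all dollar figures are hypothetical.
def security_roi(loss_prevented, solution_cost):
    """Simple ROI: (benefit - cost) / cost, expressed as a percentage."""
    return 100.0 * (loss_prevented - solution_cost) / solution_cost

# Assumed inputs: $400k in expected incident losses avoided,
# $150k spent on controls and monitoring.
print(round(security_roi(400_000, 150_000)))  # prints 167 (i.e., ~167% ROI)
```

In practice the hard part is estimating the loss-prevented figure, which is exactly why the reduced, correlated information flow the article argues for is needed as an input.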
The commonly utilized scenarios can be loosely categorized into install-and-forget (unfortunately, all too common), manual data reduction (or, reliance on a particular person to extract and analyze the meaningful audit records) and in-house automation tools (such as scripts and utilities aimed at processing the information flow). Let us briefly look at advantages and disadvantages of the above methods.
Is there a chance that the first approach - deploying and leaving the security infrastructure unsupervised - has a business justification? Indeed, some people do drive their cars without mandatory car insurance, but companies are unlikely to be moved by the same reasons that motivate reckless drivers. Most SC Magazine readers have probably heard "Having a firewall does not provide 100 percent security" many times. In fact, previously unknown exploits and new vulnerabilities are less of a threat to firewalls than company employees. Technology solutions are rarely effective against social and human problems. Advanced firewalls can probably be made to mitigate the threat from new exploits, but not from the firewall administrator's mistakes or deliberate tampering from inside the protected perimeter. In addition, a total lack of feedback on security technology performance will prevent a company from taking a proactive stance against new threats and from adjusting its defenses against the flood of attacks hitting its bastions.
Does relying on human experts to understand your security information and to provide effective response guidelines based on the gathered evidence constitute a viable alternative to doing nothing? Two approaches to the problem are possible. First, a security professional can study the evidence after the security incident. Careful examination of evidence collected by various security devices will certainly shed light on the incident and will likely help to prevent a recurrence. However, if extensive damage has already been done, it is too late: preventing future incidents of the same kind will not return stolen intellectual property or win back disappointed business partners. Expert response after the fact has a good chance of being delayed in the age of fast, automated attack tools.
The second option is to review the accumulated audit trail data periodically. A simple calculation is in order. A single border router will produce several messages per second on a busy network, and so will the firewall. Adding host messages from several servers will increase the flow to possibly dozens per second. Now, if one is to scale this to an average company network infrastructure, the information flow will increase a hundredfold. No human expert or team will be able to review, let alone analyze, the incoming flood of signals.
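The calculation above can be sketched in a few lines. The per-device rates are assumptions chosen to match the article's "several per second" and "dozens per second" figures, not measurements.

```python
# Back-of-envelope event volume; all per-device rates are assumed examples.
router_eps = 5           # border router events/second (assumed)
firewall_eps = 5         # firewall events/second (assumed)
hosts = 10               # number of servers (assumed)
host_eps = 2             # events/second per host (assumed)

site_eps = router_eps + firewall_eps + hosts * host_eps  # "dozens per second"
daily_events = site_eps * 86_400                         # seconds in a day
print(site_eps, daily_events)  # 30 events/s, 2,592,000 events/day
```

Even before the hundredfold scale-up to a full enterprise, a single site at these modest rates generates millions of records a day - far beyond what manual periodic review can cover.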
But what if a security professional chooses to automate the task by writing a script or a program to alert him or her of the significant events? Such a program may help with data collection (a centralized syslog server or a database) and alerting (email, pager, voice mail). However, a series of important questions arises. Collected data will greatly help with an incident investigation, but what about the timeliness of the response? Separating meaningful events from mere chaff is not a trivial task, especially in a large multi-vendor environment. Moreover, even devices sold by a single vendor might have different event prioritization schemes. Thus, designing the right data reduction and analysis scheme to optimize the security decision process might require significant time and capital investment and still fall short of its goals for lack of specific analysis expertise.
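The kind of in-house script described above might look like the following minimal sketch. The log patterns, sample lines and alert action are all hypothetical; a real script would read from a syslog file or socket and page or email someone instead of printing.

```python
# Minimal sketch of an in-house log-alerting script; the regex patterns
# and sample log lines below are hypothetical assumptions.
import re

ALERT_PATTERNS = [re.compile(p) for p in (
    r"Deny .* tcp",          # hypothetical firewall deny message
    r"IDS: .* priority=1",   # hypothetical high-priority IDS alert
)]

def scan(lines):
    """Return the log lines matching any alert pattern."""
    return [ln for ln in lines if any(p.search(ln) for p in ALERT_PATTERNS)]

sample = [
    "Oct 10 10:00:01 fw1 Deny 10.1.1.5 tcp 445",
    "Oct 10 10:00:02 web1 sshd: session opened for user admin",
    "Oct 10 10:00:03 ids1 IDS: SHELLCODE x86 NOOP priority=1",
]
for hit in scan(sample):
    print("ALERT:", hit)  # in practice: send email, page, etc.
```

Notice what the sketch omits: normalizing formats across vendors, deduplication, prioritization and cross-device correlation - exactly the gaps the paragraph identifies.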
In addition, alerting on raw event data (such as "if you see a specific IDS signature, send an email") will quickly turn into the "boy who cried wolf" story, with pagers screaming for attention and not getting it. In light of the above problems with prioritization, simply alerting on 'high-priority' events is not a solution. Indeed, an IDS can be tuned to produce fewer alerts, but effective tuning requires access to the full feedback provided by the security infrastructure, not just to raw IDS logs. For example, outside and inside firewall logs are very useful for tuning an IDS deployed in the demilitarized zone (DMZ).
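One simple way to avoid the "boy who cried wolf" effect is to alert only when the same event keeps recurring, rather than on every raw match. The sketch below shows the idea with a repetition threshold; the event keys, threshold value and sample data are assumptions for illustration.

```python
# Sketch: suppress one-off noise by alerting only when an event key
# repeats `threshold` or more times. Keys and sample data are assumed.
from collections import Counter

def threshold_alerts(events, threshold=3):
    """Return (signature, source) keys seen at least `threshold` times."""
    counts = Counter((e["sig"], e["src"]) for e in events)
    return [key for key, n in counts.items() if n >= threshold]

events = [{"sig": "portscan", "src": "10.0.0.9"}] * 4 + \
         [{"sig": "ping", "src": "10.0.0.7"}]
print(threshold_alerts(events))  # [('portscan', '10.0.0.9')]
```

This is only one axis of tuning; as the paragraph notes, real prioritization also needs corroborating context from other devices, such as whether the firewall actually let the suspicious connection through.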
Overall, it appears that simply investing in more and more security devices does not create more security. One needs to keep in close touch with the deployed devices, and the only way to do that is with special-purpose automated tools that analyze all the information they produce and draw meaningful conclusions aimed at optimizing the effectiveness of the IT defenses. While having internal staff write code to accumulate and map the data might be acceptable as a stopgap in small environments, maintaining, scaling and continuing to justify such systems likely yields a very low ROI. This gap has driven the birth of security information management (SIM) products whose sole focus is the collection and correlation of this data. It all comes down to prioritizing time to respond and the tolerance for headcount additions as volume and data complexity grow.
Anton Chuvakin is a senior security analyst with netForensics (www.netforensics.com), a security information management software company that provides real-time network security monitoring solutions.