Security tools are essential in the information security framework defined by the ISO 15408 standard, known as the Common Criteria (CC) for Information Technology Security Evaluation. The ISO 15408 standard provides 'common criteria' for evaluating the security of IT applications, systems and products. The objective is to prevent unauthorized disclosure, modification or loss of use, mirroring the classic security triad of confidentiality, integrity and availability. This framework can be used to set objectives, assess risks and design controls.
In this article we approach information security from the inside out. We will discuss security tools used to protect data integrity and the related business processes within an organization. Then, we look at security tools used to protect the confidentiality of systems and networks internally and at the perimeter of an organization's environment. The security tools discussed here address the following from the inside out: the business process and the supporting applications which are installed on a platform (e.g. web server), an underlying operating system which is connected to an internal network (LAN) and to the outside through a network perimeter. Each layer of IT infrastructure requires different security, controls and tools.
This article is not a comprehensive survey of security tools. Our aim in this article is to discuss how available security tools can be used within the Common Criteria framework. We chose a select list of useful tools that we felt could best illustrate these concepts.
ISO 15408 presents two categories of security requirements, functional and assurance requirements, which are used to develop a security strategy. An application, system or network can be designed with minimum or maximum functional security, with a corresponding assurance level.
Security functional requirements are grouped into 11 classes with a specific security objective and a desired behavior. The 11 classes are: security audit, identification and authentication, resource utilization, cryptographic support, security management, "target of evaluation" access, communication, privacy, trusted path/channels, user data protection and protection of security functions.
Assurance requirements are based on the effectiveness of the security functions. The eight assurance requirements are configuration management, guidance documents, vulnerability assessment, delivery and operation, life cycle support, assurance maintenance, development, and testing.
Evaluation assurance levels (EAL) are predefined packages of the above assurance requirements. Each level has an incremental security capability and indicates increased confidence in the IT component. EAL1 indicates that a product was functionally tested only. EAL7 indicates that the product has been subjected to maximum testing and the design has been thoroughly validated. This level is associated with critical information and high risk.
The Common Criteria are elaborated on by the Open Web Application Security Project (OWASP) in the area of application security. OWASP is an industry group started in 2001 that works to develop secure application practices. It focuses specifically on application vulnerabilities, many of which correspond to the Common Criteria. Both the Common Criteria requirements and the OWASP classifications will be used in our discussion of security tools.
Application security and data integrity tools
Data integrity is relevant to software applications and data as well as the supported business process. The integrity of operating system and network device configurations is also critical to a secure environment. Heavy dependence on software applications and data integrity can be found in the banking and brokerage industries, hospitals and medical practice, and the military. The consequences of any compromise in these areas could be the loss of life, money or physical security.
Recent statistics published in the 2002 CSI/FBI Annual Computer Crime and Security Survey show a high rate of attacks against web sites and the transaction processing that enables e-commerce. Of the organizations surveyed, 38 percent suffered unauthorized access or misuse on their web sites within the last twelve months. Another 21 percent did not know whether there had been unauthorized access or misuse. Theft of transaction information was reported by 12 percent.
According to the Gartner Group, 75 percent of cyberattacks are at the application level. Sanctum, a vendor discussed below, reports that 97 percent of web sites studied were found to be vulnerable.
Today, applications are spread across networks, systems and protocols with many interfaces and access points. Users and data input may not be secure. This applies to internally developed applications as well as off-the-shelf software. The general conclusion is that applications, and especially web applications, are vulnerable.
There are a number of specific risks to data integrity. Proprietary and confidential data may be modified, destroyed or disclosed. Configurations of systems, applications and networks may be modified resulting in vulnerability. The threats to mission-critical applications could be malicious outsiders or insiders as well as unintentional or accidental events. Most application vulnerabilities are preventable with secure design, development, deployment and testing.
In addition to the Common Criteria and the ISO 15408 standard, OWASP has further defined application vulnerabilities. A useful derivative of the OWASP vulnerability classes has been developed by @Stake. Nine major categories of application vulnerabilities have been identified: administrative interfaces, authentication and access control, configuration management, cryptographic algorithms, information gathering, input validation, parameter manipulation, sensitive data handling and session management. Each category consists of numerous specific vulnerabilities to be prevented, detected and corrected by the appropriate security controls.
Application-level threat detection and response tools let an organization monitor how users are interacting with applications in order to detect any suspicious activity and stop it before it escalates. Suspicious user activity may include successive authentication failures, which might indicate password cracking; successive access denials which might indicate site probing; or transaction authorization denials which might indicate fraud.
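The kind of monitoring described above can be sketched as a simple threshold-based detector. This is an illustrative example, not any vendor's implementation; the event names and threshold values are assumptions chosen for the sketch.

```python
# Hypothetical sketch: a threshold-based monitor for the suspicious
# activity patterns described above. Event names and thresholds are
# illustrative, not taken from any real product.
from collections import defaultdict

THRESHOLDS = {
    "auth_failure": 3,       # successive failures: possible password cracking
    "access_denied": 5,      # successive denials: possible site probing
    "txn_auth_denied": 2,    # authorization denials: possible fraud
}

class ActivityMonitor:
    def __init__(self):
        # (user, event) -> count of consecutive occurrences
        self.counts = defaultdict(int)

    def record(self, user, event):
        """Record an event; return an alert string once a threshold is hit."""
        if event == "success":
            # A successful action resets that user's failure streaks.
            for key in [k for k in self.counts if k[0] == user]:
                self.counts[key] = 0
            return None
        self.counts[(user, event)] += 1
        if self.counts[(user, event)] >= THRESHOLDS.get(event, float("inf")):
            return f"ALERT: {user} exceeded {event} threshold"
        return None

monitor = ActivityMonitor()
alerts = [monitor.record("alice", "auth_failure") for _ in range(3)]
```

A real tool would add time windows (so that three failures a week apart do not alert) and an escalation path, but the core logic is this accumulate-and-compare loop.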
The discussion of data integrity, application security and confidentiality that follows uses the standards and classification schemes described above; tools are selected for the specific risks and vulnerabilities they address.
Application vulnerability scanners
Application vulnerability scanners can identify application vulnerabilities relating to almost any of the categories defined by OWASP. These tools address many of the Common Criteria functional and assurance requirements including security audit, vulnerability assessment and assurance maintenance.
WebSleuth is a free open source tool that can be used to audit the security of web applications and code. It consists of a set of Visual Basic routines that analyze the vulnerabilities of web sites. Many of the specific application and web risks described by OWASP can be controlled, including parameter manipulation, information leakage and input validation.
The tool can be used to test for parameter manipulation, including cookies, form fields and URL query strings. It can also test for sensitive information leaked in comment lines or metatags, where it is easily accessible to an attacker. Vulnerabilities associated with input validation can be tested as well, including cross-site scripting and reliance on client-side validation.
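A minimal parameter-manipulation probe of the sort described above can be written with the standard library alone. This sketch is not WebSleuth; the URL and parameter names are made up for illustration. The idea is to rewrite one client-supplied value and observe whether the server trusts it.

```python
# Illustrative sketch (not WebSleuth itself): tamper with one URL
# query-string parameter to probe whether the server blindly trusts
# client-supplied values.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def tamper_param(url, name, value):
    """Return the URL with one query-string parameter replaced."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query[name] = value
    return urlunsplit(parts._replace(query=urlencode(query)))

# Hypothetical example: does the application accept a client-set price?
original = "http://example.com/buy?item=42&price=100"
probe = tamper_param(original, "price", "1")
```

A tester would request both URLs and compare responses; if the tampered price is honored, the application fails input validation in exactly the way OWASP's parameter-manipulation category describes.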
A built-in reporting utility compiles the links, images, scripts, comments and metatags of a web page. The actual coding is presented. WebSleuth can be a very useful set of tools for auditing the security and vulnerabilities of web sites and applications.
Figure 1: WebSleuth
Some of the commercial vulnerability scanners available today can do a great job in identifying security weaknesses associated with web servers, applications and sites. Two scanners which have similar functionality are N-Stealth and Qualysguard.
N-Stealth is a vulnerability assessment product that scans web servers to identify security problems and weaknesses that might allow an attacker to gain privileged access. The software comes with an extensive vulnerability database against which it matches information from the web server. The IP address of a local or remote web server is entered and the scanning is fully automated.
A similar product for assessing vulnerabilities of web servers and applications is the Qualysguard scanner from Qualys. This is also a fully automated scanner into which an IP address and a few basic parameters are entered. The interface is quite easy to use, masking the power of its backend vulnerability database, which is updated daily.
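At their core, such scanners match information gathered from a server against a database of known weaknesses. The sketch below shows only that matching step, with a tiny illustrative database; real products ship thousands of checks, and the banner strings and issue names here are assumptions for the example.

```python
# Simplified sketch of a scanner's matching step: compare a server
# banner against a (tiny, illustrative) vulnerability database.
# Real scanners such as N-Stealth maintain far larger databases.
VULN_DB = {
    "Apache/1.3.20": ["mod_ssl buffer overflow"],
    "Microsoft-IIS/5.0": ["Unicode directory traversal"],
}

def match_banner(server_header):
    """Return known issues for an exact banner match, else an empty list."""
    return VULN_DB.get(server_header.strip(), [])

findings = match_banner("Microsoft-IIS/5.0")
```

The hard part of a commercial scanner is not this lookup but the breadth and freshness of the database behind it, which is why daily updates matter.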
A different class of tools functions as a shield, inspecting traffic between the web browser and server. Suspicious activity directed at a web server and its associated applications is detected and blocked.
Specific Common Criteria requirements addressed by these tools include resource utilization, target of evaluation access, trusted path/channels, user data protection and protection of security functions. These tools address OWASP-defined vulnerabilities relating to administrative interfaces, authentication and access control, configuration management, input validation, parameter manipulation and session management.
Sanctum's AppShield monitors web traffic between browser and web server. Sanctum has developed a proprietary dynamic policy recognition engine (DPRE) that defines operational policy for web sites not with signatures, as intrusion detection systems do, but by recognizing 'normal' behavior and rejecting all other use of the system. AppShield prevents, detects and alerts administrators of an attack based on the DPRE policy. The concept is to define proper behavior and then enforce it.
The DMZ/Shield from Ubizen uses a different strategy. This product can be considered an application-layer firewall. All traffic between browser and web server is filtered. The DMZ/Shield uses a two-step process. The first step is to examine all incoming HTTP requests for validity and correct formation. Known patterns of malicious activity are blocked as well as unknown formats. The second step is a policy check, where only recognized and needed HTTP requests are passed to the web server. Only certain parameters or certain formatted data may be allowed or denied. Policies and rules can be set to allow or deny any component of the HTTP protocol including headers, post data and method.
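The two-step process described above can be sketched in a few lines: first reject anything malformed, then pass only requests that a policy explicitly allows. This is a hedged illustration of the strategy, not Ubizen's code; the allowed method/path pairs are assumptions.

```python
# Hedged sketch of the two-step filtering strategy described above:
# step 1 rejects malformed requests, step 2 applies an allowlist policy.
# The allowed methods and paths are illustrative.
import re

ALLOWED = {("GET", "/index.html"), ("POST", "/login")}
REQUEST_LINE = re.compile(r"^(GET|POST|HEAD) (/\S*) HTTP/1\.[01]$")

def filter_request(request_line):
    """Return True only if the request is well formed AND policy-approved."""
    m = REQUEST_LINE.match(request_line)
    if not m:                          # step 1: malformed or unknown format
        return False
    method, path = m.group(1), m.group(2)
    return (method, path) in ALLOWED   # step 2: allowlist policy check

ok = filter_request("GET /index.html HTTP/1.0")
blocked = filter_request("GET /../../etc/passwd HTTP/1.0")
```

Note the second request is syntactically valid HTTP and would pass step 1; it is the policy check in step 2 that blocks it, which is why the default-deny allowlist is the stronger of the two steps.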
Configuration and file integrity
Threats to the configuration of an application or the underlying platform are very real. Specific Common Criteria requirements addressed by these tools include security audit, resource utilization, user data protection, protection of security functions, configuration management and assurance maintenance. Some of the vulnerabilities described by OWASP that are addressed by tools below include compromised administrative interfaces, access controls and parameter settings.
'Kernel-level rootkits' can be used maliciously to alter the web platform system files and launch a range of malicious code. Trojan horses such as sniffers, hidden processes and files can be inserted into a system. A system may appear to be operating normally but in fact may have been modified and compromised. Such malicious code may impact processing or may reveal confidential information. A solution can be found in file-integrity checking tools such as Tripwire and Pitbull.
Tripwire for Web Pages is used to monitor and detect changes to web pages. The integrity of each web page is guarded by a calculation using the MD5 algorithm, resulting in a 'hash value' that identifies the page. Before a web page is served to a requesting browser, the original hash value, stored in a secure database, is compared with the freshly calculated value. If the contents of a file remain the same, the MD5 hash value is the same each time it is calculated; if the file has changed, no matter how subtly, the hash value also changes. If the integrity of the web page is intact, the page is served.
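The compare-before-serving step can be shown with Python's standard hashlib. This is a minimal sketch of the approach, not Tripwire's implementation; the page content is made up, and a real deployment must protect the stored baseline hashes as carefully as the pages themselves.

```python
# Minimal sketch of the hash-comparison approach described above,
# using the standard library's MD5 (the algorithm named in the text).
import hashlib

def md5_of(content: bytes) -> str:
    """Return the hex MD5 digest identifying a page's contents."""
    return hashlib.md5(content).hexdigest()

# At publication time, compute and securely store the baseline hash.
baseline = md5_of(b"<html><body>Welcome</body></html>")

def page_intact(current: bytes, stored_hash: str) -> bool:
    """Recompute the hash before serving and compare with the baseline."""
    return md5_of(current) == stored_hash

intact = page_intact(b"<html><body>Welcome</body></html>", baseline)
tampered = page_intact(b"<html><body>Hacked</body></html>", baseline)
```

Any single-byte change flips the comparison to False, so a defaced page is caught before it reaches a browser.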
Tripwire for Servers and Network Devices monitors for any changes to configurations at the application or system level. A 'hash value' is calculated from a configuration of a server or network device. This hash value is then stored securely. Any subsequent unauthorized change to the configuration will cause an alert and an immediate reconfiguration back to the original configuration.
Pitbull from Argus Systems uses a different strategy for configuration and file integrity. System elements such as processes, files, users and network interfaces are each isolated from other elements on the same machine. Each distinct element is referred to as a domain, so an application can be confined to itself: if it is compromised, no other area of the system is. A machine can hold multiple domains, each secure in its own 'secure application environment.' This is a strategy of containment. The domain model used by Pitbull can protect mission-critical applications through containment, with access restricted to authorized users and processes.
Application development quality assurance
The development of an application using a methodology such as the system development life cycle should incorporate security considerations from the very beginning. The vast majority of application defects and vulnerabilities are design related and can be prevented with better application design. This contrasts with network vulnerabilities, which are mostly configuration related.
Many of the Common Criteria assurance requirements are naturally addressed with good quality assurance practices, including configuration management, vulnerability assessment, assurance maintenance, development and testing. Almost all of the OWASP vulnerabilities can be avoided with good planning, design and testing.
Cenzic's Hailstorm is a quality assurance-testing tool which uses a unique methodology of 'fault injection.' Based on an inventory of applications, services, systems and networks, pre-defined faults are 'injected' into the testing environment to expose real and potential failures. Applications are exposed to hostile scenarios and the performance is observed.
Some of the faults that are injected into the test environment include unfiltered user input, command injection, information leakage, invalid application states, cross-site scripting, improper configuration and evasion of firewall, packet filter and IDS systems. These faults correspond to many of the OWASP categories of application vulnerabilities.
Hailstorm essentially simulates an attack and attempts to exploit various vulnerabilities. This product is useful in application development and the associated QA testing. After such rigorous development and testing, specific vulnerabilities can be assessed accurately.
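A fault-injection harness of the kind described can be sketched simply: feed an entry point a list of hostile inputs and record which ones it accepts without complaint. This is an illustrative toy, not Hailstorm; the handler, its validation rules and the fault strings are all made up for the example.

```python
# Illustrative fault-injection harness (not Hailstorm itself): feed an
# application entry point hostile inputs and record which it mishandles.
# The faults mirror vulnerability categories mentioned above.
FAULTS = [
    "<script>alert(1)</script>",    # cross-site scripting
    "'; DROP TABLE users; --",      # injection of commands into input
    "A" * 10000,                    # oversized, unfiltered user input
    "../../etc/passwd",             # path traversal
]

def handle_username(value):
    """A toy input handler with deliberately weak validation."""
    if len(value) > 64:
        raise ValueError("input too long")
    return value  # note: no escaping -- still vulnerable to injected markup

def inject_faults(handler, faults):
    """Return the faults that the handler accepted without raising."""
    accepted = []
    for fault in faults:
        try:
            handler(fault)
            accepted.append(fault)
        except Exception:
            pass
    return accepted

accepted = inject_faults(handle_username, FAULTS)
```

Here the harness shows the length check works but every injection string sails through, which is exactly the kind of finding fault injection is meant to surface during QA testing.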
Fredric Greene, CISSP, CPA, MCSE, CCNA is the president of Greene Security & Audit (www.greenesecurity.com). Richard Rabinowitz is president of BitSavvy LLC, an IT Security and Infrastructure consulting firm (www.bitsavvy.com)