In the beginning, software vendors thought they could handle security vulnerabilities the way they handle ordinary software bugs: through the regular support process. Unfortunately, it's not so easy. Security vulnerabilities are not like other software defects. Vulnerabilities have a timeline, and they are not simply triggered by random user events.
Once attackers know how to exploit a vulnerability, they will actively attack vulnerable computers until it is patched. This “window of vulnerability” has gotten smaller with auto-update and patch-management solutions, but attackers have also gotten quicker at delivering new exploits with toolkits and bot networks. Having a window of vulnerability at all, however, is the problem.
The software industry's manufacturing process is very different from that of other high-tech industries such as semiconductors or aircraft engines, which strive for six sigma, or 3.4 defects per one million opportunities. The software industry's profitability has benefited in the past from its ability to quickly release imperfect software and fix the biggest problems later in the field. Now everything has changed, with the knowledge that motivated attackers can find and exploit security vulnerabilities before they are patched. “Ship, pray and patch,” as a development methodology, is dead.
Michael Howard, a security expert at Microsoft, asks the company's development teams the following question: "What are you doing to reduce the chance an engineer will add a new security bug to the system?" This attitude is important: unless you are proactively trying to reduce security defects, they will inevitably appear.
Security defects are part of software development just as much as contaminants in silicon wafers are part of semiconductor manufacturing and cracks in titanium castings are part of aircraft manufacturing. All software developers -- ISVs, outsourcers and internal development teams -- need to take a cue from these other industries.
They need to implement techniques that create fewer defects at the point a piece of code is written, and that detect and correct the defects that arise as individual code components are assembled into a complete software package.
The cost of software security processes
Cost is always a concern for software developed as part of a business, so we need to implement these processes in the most cost-effective way. One factor affecting cost is the stage of the development lifecycle at which a prevention or detection technique is applied. It is cheapest to detect and correct defects as early in the process as practical.
For instance, if a design flaw such as choosing a poor authorisation scheme is not detected until final testing, there will likely be many hundreds of lines of code that will need to be changed and retested. If this design flaw is detected during a threat-modeling exercise after the design is complete but before the code is written, there is no code to test and change.
On the other hand, you don't want a process that detects flaws in the design or the code while it is undergoing a high rate of modification, or “churn.” There is no point wasting time detecting and correcting flaws in code that will be eliminated or changed radically; wait until the design (or a major component of it) is complete. The same goes for the implementation, or coding, stage. The best policy is a gated process in which certain analyses take place at the optimal time and flaws are remediated (or the risk accepted) before the next stage of development begins.
The major processes that reduce security defects are secure design, threat modeling, static code analysis and security testing.
Secure design requires the designers of the software to be educated in application security principles. This teaches them how to implement security features properly, minimise and harden the attack surface, and layer multiple security mechanisms into the design.
After the design is complete, threat modeling is performed to determine whether the design mitigates the risks posed by the software's functionality. Threat modeling allows the designers to understand the attacker's point of view: What parts of the software are easiest to compromise? What are the impacts? This enables the threats to be enumerated and prioritised. A check is then made to see whether each threat is mitigated by a security control. If it isn't properly mitigated, the design must be changed or the risk accepted.
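The enumerate-and-check step can be sketched in code. This is a minimal illustration, not a real threat-modeling tool: the threats, components and mitigations below are invented, and the categories loosely follow the STRIDE naming often used in this kind of exercise.

```python
# Minimal sketch of a threat-model record: each enumerated threat names
# the component it targets, a STRIDE-style category, and its mitigating
# control (None if no control addresses it yet). All entries are
# hypothetical examples.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Threat:
    component: str                    # part of the design the threat targets
    category: str                     # e.g. "Spoofing", "Information disclosure"
    description: str
    mitigation: Optional[str] = None  # security control, if any

threats = [
    Threat("login form", "Spoofing", "credential guessing",
           mitigation="account lockout after 5 failures"),
    Threat("session cookie", "Information disclosure",
           "cookie sent over plain HTTP"),                    # unmitigated
    Threat("report export", "Elevation of privilege",
           "any user can request another user's report"),     # unmitigated
]

def unmitigated(threats):
    """Return threats with no security control -- the ones that force a
    design change or an explicit risk-acceptance decision."""
    return [t for t in threats if t.mitigation is None]

for t in unmitigated(threats):
    print(f"UNMITIGATED [{t.category}] {t.component}: {t.description}")
```

The output of this check is exactly the decision point the process describes: every unmitigated threat either changes the design or is formally accepted as residual risk.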
Static code analysis
In the past, static code analysis was performed manually and called “secure code review.” A secure code review performed by experts gives great results, but it is very costly because of the time and expertise involved. Thankfully, there are now static analysis tools that can find most categories of coding errors with an accuracy approaching that of a manual review, at a small fraction of the cost.
Static analysis tools are best used when functionality for the modules of the software is complete, and they are integrated and linked together into a whole. A milestone build of the software is selected and analysed using static analysis. The results of the static analysis are triaged for security flaws that will truly impact the security of the software. The triaged flaws then should be entered into the defect-tracking system and treated as “must-fix” bugs. A future build of the software, such as the beta build, should be re-analysed to verify that the original set of security flaws has been fixed. Inevitably new security flaws will be discovered and they too should be triaged, fixed and verified.
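To make the idea concrete, here is a toy static analyser. Commercial tools track tainted data flow across a whole build; this sketch only pattern-matches call sites against a list of dangerous function names, and the scanned sample code is invented, but it shows the essential property of static analysis: flaws are found by inspecting source, without running it.

```python
import ast

# Function names whose call sites a reviewer would always inspect.
# A real tool does far more (taint tracking, interprocedural analysis);
# this name-based pass is only an illustration of the technique.
DANGEROUS_CALLS = {"eval", "exec", "system", "popen"}

def find_dangerous_calls(source: str):
    """Return (line, name) pairs for calls to known-dangerous functions."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # plain name: eval(...)  /  attribute: os.system(...)
            name = getattr(func, "id", None) or getattr(func, "attr", None)
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = '''import os
cmd = "ls " + user_input
os.system(cmd)
result = eval(expr)
'''

for line, name in find_dangerous_calls(sample):
    print(f"line {line}: call to {name}()")
```

Each finding would then be triaged as the article describes: confirmed flaws go into the defect tracker as must-fix bugs, and a later build is re-analysed to verify the fixes.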
It would be great if all flaws could be found with full automation using static techniques, but that isn't the case. Security testing uses dynamic techniques to exercise the software much as an attacker would; it is sometimes called application (or product) penetration testing. A great benefit of security testing is that it takes into account the environment in which the software is installed.
Security testing is similar to standard software testing except that, instead of verifying the program's functionality works as intended, the tester is attempting to get the software to perform functions it wasn't designed to do. This takes a mental shift on the part of a traditional software tester. This “negative testing,” as opposed to functional testing, requires a prioritised approach. With functional testing you know when you are done because you have tested each feature, but how do you test for all the things the software is not supposed to do?
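The contrast between functional and negative testing is easy to show in code. The function under test below is hypothetical, and the hostile inputs are stock attack patterns (path traversal, oversized input, an embedded null byte): note that the functional side is a closed list of features while the negative side is an open-ended list of abuses.

```python
import re

def safe_filename(name: str) -> str:
    """Accept a plain filename for download; reject anything that could
    escape the download directory. (Hypothetical function under test.)"""
    if not re.fullmatch(r"[A-Za-z0-9_.-]{1,64}", name) or ".." in name:
        raise ValueError(f"rejected: {name!r}")
    return name

# Functional test: verify each supported feature -- you know when you're done.
assert safe_filename("report.pdf") == "report.pdf"

# Negative tests: probe things the software must NOT do. The list is
# open-ended, so it is driven by known attack patterns, not by features.
hostile = ["../../etc/passwd", "a" * 200, "report.pdf\x00.exe", ""]
for attempt in hostile:
    try:
        safe_filename(attempt)
        print(f"FAIL: accepted {attempt!r}")
    except ValueError:
        print(f"ok: rejected {attempt!r}")
```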
Security testing can be prioritised using the output of the threat modeling. Threat modeling can rank the areas of the program where an exploit would be easiest -- areas closest to the attack surface and areas that anonymous users can attack. It also ranks the highest impacts to the program. For example, loss of confidential data is a higher impact than a DoS. Plotting these two priorities on a quadrant chart gives us a prioritised list of areas of the software to test.
Web applications should be tested with web application scanning, which can find the majority of the OWASP Top 10 most important web vulnerabilities. A few issues in the OWASP Top 10 related to authorisation require some manual testing. If you aren't building a web application, you will certainly have to perform manual security testing, because full automation isn't available.
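One check that web application scanners automate is worth seeing in miniature: inject a marker payload into a parameter and test whether it comes back unencoded in the HTML, a symptom of reflected cross-site scripting. The "responses" below are canned strings standing in for real HTTP traffic, so this runs offline.

```python
import html

# A marker payload a scanner might inject into a query parameter.
PAYLOAD = '<script>probe()</script>'

def is_reflected(payload: str, response_body: str) -> bool:
    """True if the payload appears verbatim (unencoded) in the response,
    suggesting the application echoes input without output encoding."""
    return payload in response_body

# Canned stand-ins for two servers' responses to the injected parameter.
vulnerable_page = f"<p>Search results for {PAYLOAD}</p>"
safe_page = f"<p>Search results for {html.escape(PAYLOAD)}</p>"

print("vulnerable page reflects payload:", is_reflected(PAYLOAD, vulnerable_page))
print("safe page reflects payload:", is_reflected(PAYLOAD, safe_page))
```

A real scanner repeats this across every parameter of every page and uses many payload variants, which is why it covers the injection-style entries of the OWASP Top 10 well while authorisation flaws still need a human.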
Assurance level should dictate effort
Not all software requires the same amount of security process during development. High-assurance applications such as flight-control systems, banking and trading applications or operating systems demand all of the security processes listed above.
Medium-assurance software, such as back-office applications or low-volume consumer software, doesn't require as much security process because it is lower risk. If you are building medium- or low-risk software, automated static analysis and web application scanning (if applicable) are adequate.
If you are not doing anything to reduce security flaws during your development cycle, your software certainly contains them. Introducing any security process will help secure your code. It is usually easiest to begin with testing towards the end of the development cycle and work your way earlier, adding static code analysis, threat modeling and then secure design.
- Chris Wysopal is chief technology officer at Veracode.
See original article on SC Magazine US
Building security into your software-development lifecycle
By Chris Wysopal on Feb 18, 2008 4:08PM