This has left applications vulnerable and under constant threat of manipulation, especially since, according to Gartner, 70 percent of successful attacks occur through the application rather than through the network or operating system. Cyber intruders often succeed in breaking into enterprise applications, which can contain sensitive customer information such as credit card details or financial data. This means that large parts of companies' corporate IT infrastructures are exposed to massive potential damage, in both reputation and cost.
Many organizations are unaware that their web applications in particular are vulnerable to attack. Users with malicious intent don't need much time or knowledge to vandalize websites or use applications as a gateway into an enterprise's infrastructure. Legislation aimed at motivating enterprises to secure their web applications has raised the stakes further. Under the U.K. Data Protection Act, for example, companies could face legal action as a result of problems with application security because, unbeknown to them, they are exposing confidential data.
This isn't about phishing or identity fraud; it's about the way web applications are developed. Far too often they are built without security being considered, or given priority, from the start. Building security into the design of applications isn't as hard as it sounds: there are documented application architectures that promote security best practices. The real challenge comes when the code is being written. Application security rarely ranks as a high priority in development; functionality and delivery come first. But it is possible to build secure applications without compromising usability.
Applications consist of thousands of lines of code, so the coding process is the obvious place to start addressing security. Security must be built into the code from the beginning, which means that testing for security loopholes becomes a priority. The emphasis needs to shift from just building functionality to building secure functionality. There are coding techniques that produce perfectly functional results yet offer intruders a way to use the application, or the underlying operating system, to gain unauthorized access. One example: most operating systems offer multiple ways to designate the same logical location in the file system. A developer may write code that closes one path to a sensitive location, but this only stops hackers who try to get in using that same path; there are many other paths they could use. The only way to protect against this type of weakness is to identify all possible routes to the location and protect each of them individually.
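The file-system aliasing problem above can be sketched in Python (the directory and function names here are hypothetical, chosen for illustration). Canonicalising a requested path collapses all of its spellings into one form, which can then be compared against the allowed location; a naive string check blocks only one spelling of the forbidden path.

```python
import os

ALLOWED_ROOT = "/var/app/uploads"  # hypothetical directory the app may serve

def is_safe_path(requested: str) -> bool:
    """Resolve the path to canonical form before checking it, so aliases
    such as '..', doubled separators, or symlinks cannot slip past a
    comparison against one particular spelling of the location."""
    root = os.path.realpath(ALLOWED_ROOT)
    canonical = os.path.realpath(os.path.join(root, requested))
    return canonical == root or canonical.startswith(root + os.sep)

def naive_block(requested: str) -> bool:
    """The flawed approach: block one literal spelling of the target."""
    return "/etc/passwd" not in requested  # misses '/etc//passwd' and friends

# The same logical location, written two ways:
print(naive_block("/etc//passwd"))   # True  - the naive check lets it through
print(is_safe_path("/etc//passwd"))  # False - the canonical check stops it
```

The canonical check catches every alias because all spellings normalise to the same string, whereas the naive check must enumerate each spelling individually.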
Insecure coding practices can also creep in when developers are trying to work around persistent errors in applications. For example, the ValidateRequest attribute on pages at the front end of a web application is often set to false by a developer to stop the page crashing when request validation is enabled. Although this does not change the functionality of the application, it introduces a security vulnerability: an intruder can then use data entry fields to send commands to the application, database or operating system that could give them access to corporate data they shouldn't see.
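As a minimal sketch of the kind of command an unvalidated data entry field makes possible, here is a classic SQL injection in Python with the standard-library sqlite3 module (the table and values are invented for the example). Concatenating user input into the query lets an attacker rewrite its logic; binding the input as a parameter does not.

```python
import sqlite3

# A toy database standing in for the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")  # fake card

user_input = "x' OR '1'='1"  # attacker-supplied value from a form field

# Vulnerable: the input is spliced into the SQL text, so the OR clause
# becomes part of the query and every row matches.
rows = conn.execute(
    "SELECT card FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the input is bound as a parameter and treated purely as data,
# so it matches no user and returns nothing.
rows_safe = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)
).fetchall()

print(rows)       # leaks the card number
print(rows_safe)  # empty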
These are just a few examples of how the coding practices of today are resulting in insecure web-based applications. There are hundreds of different practices for dozens of different application scenarios, and they are constantly changing and evolving as hackers become more sophisticated and new vulnerabilities are found. It is impractical to think developers can keep abreast of all of these. Even if they could, most organizations cannot afford to re-train all their developers in writing secure code. What can be done, however, is to arm developers with tools and processes that enable them to produce better-quality code. They should be given tools that check their code against known security errors and that analyse internal calls and data transfers within the application, as well as the operating system and software environment in which it operates.
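At its simplest, a tool that checks code against known security errors is pattern matching over source text. The sketch below, with a deliberately tiny and hypothetical rule set, shows the shape of such a check; real analysers maintain far richer, continually updated rule bases, which is what spares individual developers from memorising every pitfall.

```python
import re

# Hypothetical rules mapping a pattern to a finding message.
RULES = {
    r"\beval\(": "use of eval() on potentially untrusted input",
    r"\bos\.system\(": "shell command built from application data",
    r"ValidateRequest\s*=\s*[\"']?false": "request validation disabled",
}

def scan(source: str):
    """Return (line number, message) pairs for every rule hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

sample = "x = eval(data)\nos.system(cmd)"
for lineno, message in scan(sample):
    print(f"line {lineno}: {message}")
```

Because the rules live in data rather than in developers' heads, the rule set can grow as new vulnerability classes are discovered, without re-training anyone.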
Improving the quality of code from a security perspective will undoubtedly make your applications more secure. However, you can't stop there; you also need to test for security loopholes. It's not unheard of for organizations to hire professional hackers to test an application by trying to break into it. However, this is not always possible from a cost and resource point of view, and such one-off exercises are rarely repeated over the lifetime of an application. Security vulnerabilities can arise at any point in an application's life-cycle, so organizations need ways to test for loopholes continually. One way to do this is automated attack simulation: software that attacks the application with known paths of intrusion. This software can be updated and re-run across the application's lifecycle to ensure you are constantly testing against known attacks.
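The core of such attack simulation can be sketched in a few lines: replay a library of known intrusion payloads against the application and flag suspicious responses. The handler and payload list below are hypothetical stand-ins; a real tool would drive live HTTP endpoints and ship a continually updated payload library.

```python
# Hypothetical handler standing in for the application under test:
# it naively echoes user input back into an HTML page.
def handle_request(field: str) -> str:
    return "<html><body>Results for " + field + "</body></html>"

# A tiny, illustrative library of known intrusion payloads.
KNOWN_PAYLOADS = [
    "<script>alert(1)</script>",  # reflected cross-site scripting probe
    "' OR '1'='1",                # SQL injection probe
    "../../etc/passwd",           # path traversal probe
]

def simulate_attacks(handler):
    """Replay known payloads and flag responses that echo them back
    unmodified - a sign the input was not sanitised."""
    failures = []
    for payload in KNOWN_PAYLOADS:
        if payload in handler(payload):
            failures.append(payload)
    return failures

for payload in simulate_attacks(handle_request):
    print("unsanitised input reflected:", payload)
```

Because the payload list is data, updating it and re-running the simulation at each release keeps the application tested against newly discovered attacks throughout its lifecycle.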
Although it's virtually impossible to create a 100 percent secure application, by changing development and testing practices, organizations can significantly reduce the number of security vulnerabilities that exist in many of the applications developed today. Organizations should take note, otherwise they'll soon find that security holes in their applications impact their ability to do business and potentially land them in legal hot water.
The author is technology support manager, Compuware.