Perimeter security is changing fast

Most security solutions today are built to protect vulnerable PCs and servers by attempting to keep "bad" things outside the network security perimeter. But with that perimeter changing and disappearing, security now needs to be intrinsic to every system and every user.

With the perimeter changing and disappearing, the model must shift from the blacklist approach, which tries to exclude everything that may be harmful to your network, to a more proactive whitelist, allowing secure access from anywhere. As today's security problems demonstrate, blacklist approaches simply do not work.

If you want to secure your home, you change the locks and only give keys and access codes to your family and friends. You don't try to list all of the undesirable people that could possibly want to get into your house and try to get the keys back from them. However, many of the current information security solutions attempt to list and block all of the potential threats to the corporate network. This will never work because the threat is always changing and you're never going to have a complete list.

The whitelist approach identifies what is safe and permitted and stops everything else, which is far easier than trying to identify every potential risk to your system. Applying the concept of 'default deny' has strong appeal: by default, any unused port or service should be disabled, and enabled only when required.
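The 'default deny' idea can be sketched in a few lines. This is an illustrative example, not any vendor's implementation; the allowed port numbers are placeholders chosen for the sketch.

```python
# Sketch of a default-deny policy: every port is blocked unless it
# appears on an explicit allow list. Ports here are illustrative only.
ALLOWED_PORTS = {22, 443}  # e.g. SSH and HTTPS

def is_connection_permitted(port: int) -> bool:
    """Default deny: permit only ports on the allow list."""
    return port in ALLOWED_PORTS

print(is_connection_permitted(443))  # True: explicitly allowed
print(is_connection_permitted(23))   # False: telnet denied by default
```

The point of the design is that nothing needs to be known about the attacker: anything not explicitly enabled is refused.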

It is much simpler to eliminate the vulnerability of the PC or server by placing security within the computer itself: allow only what is known and authorised, and deny everything else.

In this model, only specific applications may be run by certain users, nothing else; only specific external devices can connect and store data, nothing else!

End users can't be expected to disable unused ports and services; most wouldn't have a clue how many TCP and UDP ports are available on their machines. By running seemingly innocuous applications, users can unwittingly open the door for crackers and viruses. Applying a whitelist prevents users from launching unauthorised software and prohibits the running of executable files that may carry viruses, trojans and worms, or "backdoor" programs such as spyware.
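One common way to enforce an executable whitelist is by file hash: a launch is permitted only if the binary's hash matches a known-good entry. The sketch below is hypothetical; a real product would maintain the hash list centrally and hook into the operating system's execution path.

```python
import hashlib

# Hypothetical whitelist of SHA-256 hashes of approved executables.
# The entry here is a placeholder derived from example bytes.
APPROVED_HASHES = {
    hashlib.sha256(b"trusted-binary-contents").hexdigest(),
}

def may_execute(file_bytes: bytes) -> bool:
    """Allow launch only if the file's hash is on the whitelist."""
    return hashlib.sha256(file_bytes).hexdigest() in APPROVED_HASHES

print(may_execute(b"trusted-binary-contents"))  # True: known-good binary
print(may_execute(b"unknown-worm-payload"))     # False: default deny
```

Because any modification to a binary changes its hash, a trojanised copy of an approved application is refused just like an entirely unknown one.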

Backing such a solution with an audit of attempts to do what is not allowed (such as trying to launch spyware or a backdoor program), as well as a log of what is allowed (including data copied to external devices), completes the approach to eliminating the vulnerability of today's PCs and servers in the business environment.
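The audit side described above amounts to recording both outcomes, permitted and denied. A minimal sketch, using Python's standard logging module (the user names and actions are invented for illustration):

```python
import logging

# Audit trail: record allowed actions at INFO, denied attempts at WARNING.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
audit = logging.getLogger("whitelist-audit")

def check_and_log(user: str, action: str, permitted: bool) -> bool:
    """Enforce a decision and leave an audit record either way."""
    if permitted:
        audit.info("ALLOWED: %s by %s", action, user)
    else:
        audit.warning("DENIED: %s by %s", action, user)
    return permitted

check_and_log("alice", "copy report.doc to USB drive", True)
check_and_log("bob", "launch unknown.exe", False)
```

Logging the denials is what surfaces infection attempts; logging the allowed actions, such as copies to external devices, is what supports later forensics.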

The problem with current anti-virus solutions is that they act as an insurance policy against the last known attack, rather than providing a proactive solution for protecting the corporate network.

A 2002 Gartner study, "Dissolution of the Security Perimeter," noted that even with anti-virus software and firewalls in place, worms and viruses all required the execution of an application or the use of I/O devices to infect the system. Indeed, the Sasser worm turned off personal firewalls. We therefore need to focus on controlling which executables and I/O devices are allowed, and prevent any other code from executing on host machines within the corporate environment.

Taking a whitelist approach is a much simpler and more effective solution. Its key benefits are that it prevents intrusion, internal or external, and is simpler, cheaper and more effective than the detect-and-repair approach required of anti-virus and other blacklist solutions. It doesn't need immediate and constant updates. By applying digital signatures to authorised software, your systems are intrinsically isolated from the threat of the next "Sasser."
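The signature idea is that software is verified against a cryptographic tag before it is allowed to run. Real code signing uses public-key signatures (e.g. RSA or ECDSA); the sketch below substitutes an HMAC with a placeholder shared key purely to illustrate verify-before-run, and is not how any particular product implements it.

```python
import hashlib
import hmac

# Placeholder signing key for illustration only; real code signing would
# use a public/private key pair, not a shared secret.
SIGNING_KEY = b"example-secret-key"

def sign(software: bytes) -> bytes:
    """Produce an integrity tag over the software's bytes."""
    return hmac.new(SIGNING_KEY, software, hashlib.sha256).digest()

def is_authorised(software: bytes, signature: bytes) -> bool:
    """Permit execution only if the tag verifies (constant-time compare)."""
    return hmac.compare_digest(sign(software), signature)

binary = b"approved application bytes"
tag = sign(binary)
print(is_authorised(binary, tag))             # True: signature verifies
print(is_authorised(b"tampered bytes", tag))  # False: refuse to run
```

A new worm carries no valid signature, so it is refused without any update to the protected machine, which is the contrast with blacklist-based anti-virus the article draws.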

Bob Johnson is chief operating officer at SecureWave

www.securewave.com

Copyright © SC Magazine, US edition