Harvard study probes Denial of Service attacks


Offers advice for smaller organisations.

Distributed Denial of Service (DDoS) attacks have emerged as one of the more vexing issues network administrators face, according to a study by the Berkman Center for Internet & Society at Harvard University.

Major websites such as Amazon, Facebook and Google have the resources to reduce outages to less than a few hours. But the study focused on the far larger number of government, smaller independent media and human rights sites, to draw out a broad range of solutions for keeping a site live.

Part of a year-long study on DDoS attacks, the Berkman report included interviews with administrators from Australia, China, Vietnam and Russia on their experiences and lessons learned.

It found that most DDoS attacks are perpetrated via botnets. But it also noted a trend towards "volunteer" attacks, in which multiple attackers cooperate and combine forces - such as Anonymous' attack on Australian Government sites in February this year.

DDoS was seen as an efficient means of cyber-protest. While many attacks could be linked to dissidents, the study also found attacks that suggested government involvement.

The study also notes a variety of DDoS tactics, such as "amplification". This technique tricks other computers into turning the attacker's single stream of traffic into a flood of thousands or millions of streams. It cites the example of DNS amplification, in which the attacker's request to a DNS server appears to come from the target web server.

"The DNS server does what it's supposed to do and delivers a chunk of domain name information to the computer that (putatively) requested it: the target computer. The information delivered is much larger than the size of the request. Some attacks can leverage DNS servers to amplify their traffic by a factor of 76:1," the study noted.
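The size asymmetry the study describes can be seen by constructing a DNS query at the wire level. The sketch below builds a minimal DNS ANY query in Python (the packet format follows RFC 1035); the response size is an illustrative assumption based on the study's 76:1 figure, and nothing is actually sent over the network.

```python
import struct

def dns_any_query(name: str) -> bytes:
    """Build a minimal DNS query packet (wire format) for an ANY lookup."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID
                         0x0100,   # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question; no answer/authority/additional
    # Encode the name as length-prefixed labels ending in a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 255, 1)  # QTYPE=ANY(255), QCLASS=IN(1)
    return header + question

query = dns_any_query("example.com")
print(f"request size: {len(query)} bytes")

# The attacker's request is tiny; at the study's worst-case 76:1 ratio,
# the reply directed at the victim would be roughly this large:
print(f"a 76:1 amplification would return ~{len(query) * 76} bytes")
```

Because the source address in the request is spoofed to be the victim's, the DNS server's much larger reply lands on the target rather than the attacker.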

Another technique is to exploit slow-loading pages, such as resource-intensive search queries. As few as five machines executing simultaneous searches have crippled websites that otherwise served almost a million page views a day.
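A back-of-the-envelope model shows why so few machines suffice: each slow request occupies a server worker for its full duration, so a handful of clients holding open expensive queries can exhaust the pool. All numbers below are illustrative assumptions, not figures from the study.

```python
# Illustrative model of the "slow page" attack: expensive requests
# monopolise the web server's limited pool of concurrent workers.

WORKERS = 50                 # concurrent request slots on the server (assumed)
ATTACKERS = 5                # the study's "as few as five machines"
REQUESTS_PER_ATTACKER = 10   # concurrent slow searches each keeps open (assumed)

occupied = ATTACKERS * REQUESTS_PER_ATTACKER
free = max(WORKERS - occupied, 0)
print(f"{occupied}/{WORKERS} workers tied up; {free} left for real visitors")
```

With every worker blocked on a multi-second query, legitimate visitors queue behind the attack traffic even though total bandwidth use is trivial.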

The study also pointed to the importance of understanding how content management applications should be set up. CMS applications such as WordPress and Drupal can be vulnerable to DDoS attacks in their default configurations.

DDoS attacks are often accompanied by intrusions, defacements, filtering, and even off-line attacks, the study said.

Most network attacks involved the use of botnets to generate sufficient incoming traffic. Other indicators suggested that the botnets used were rented. Two interview subjects reported that attacks began and ended at the top of an hour, suggesting that a botnet had been rented for a specific duration.

The study found that many of the most effective DDoS mitigation strategies, such as packet filtering, rate limiting, "scrubbing"* and dynamic rerouting, are unavailable to inexperienced administrators or to those using shared hosting solutions.

The Berkman study makes a range of recommendations including:

  1. Consider in advance whether your website may be the target of a DDoS attack, and how you will respond.
  2. Organisations with little technical expertise and few financial resources should host their websites on Blogger, WordPress or other major hosting platforms, which provide high levels of DDoS resistance at no financial cost to the organisation.
  3. Organisations with expertise constraints but fewer financial constraints should consider using existing DDoS-resistant hosts, with the caveat that few can guarantee uptime in the face of high-volume, sustained DDoS.
  4. Organisations should ensure that their site is mirrored on platforms like Blogger or WordPress and configured so that it can fail over to these more resistant (likely lower-functionality) backups in the face of an attack.
  5. Organisations with high levels of in-house expertise should consider using customised publishing platforms that allow for graceful degradation in response to high load and automated failover to mirror sites. They should implement caching strategies that minimise the impact of application attacks.

* Scrubbing involves setting up a large server farm capable of accepting many incoming connections and using a combination of automated and manual techniques to drop illegitimate traffic and pass through legitimate traffic.

Copyright © iTnews.com.au. All rights reserved.
