Facebook makes 30 petabyte Hadoop migration

 

Replicates system that supports Ad Network to new data centre.

Facebook has revealed it developed a replication system to move a 30 petabyte (PB) file system to a new data centre in Oregon.

Facebook’s data warehouse Hadoop cluster grew by 10 PB over the year to March 2010, hitting 30 PB and forcing the data centre move.

Hadoop is an open source distributed storage and processing framework developed by the Apache Software Foundation; it includes a distributed file system (HDFS) on which the warehouse data is stored.

Facebook used the Hive data warehousing framework and its massive Hadoop cluster for internal analysis and to support products such as the Facebook Ad Network.

Engineer Paul Yang said that physically moving the systems to the new location was "not a viable option as our users and analysts depend on the data 24/7, and the downtime would be too long".

The data warehouse hardware comprised 2000 machines with 12 terabytes of disk each, split into 1200 eight-core machines and 800 16-core machines, each with 32GB of RAM.

To support the migration, Facebook’s engineering team developed a replication system to mirror changes from the old cluster onto a new, larger one, allowing Facebook to “redirect everything” at switchover time.

“This approach is more complex as the source is a live file system, with files being created and deleted continuously,” said Yang. 

The first cab off the rank was a bulk copy of the existing data to the new destination, using Hadoop tools such as DistCp.
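
A minimal sketch of how such a bulk copy could be driven is shown below; the cluster hostnames and warehouse path are hypothetical, not Facebook's own.

    import subprocess

    # Hypothetical source and destination clusters and warehouse path.
    SOURCE = "hdfs://old-cluster:8020/warehouse"
    DESTINATION = "hdfs://new-cluster:8020/warehouse"

    # DistCp runs as a MapReduce job, so the copy is spread across the
    # cluster's nodes rather than funnelled through a single machine.
    subprocess.run(["hadoop", "distcp", SOURCE, DESTINATION], check=True)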

Then, using its new replication system, Facebook dealt with file and metadata changes that occurred after the bulk copy process was started, explained Yang. 

“File changes were detected through a custom Hive plug-in that recorded the changes to an audit log. The replication system continuously polled the audit log and copied modified files so that the destination would never be more than a couple of hours behind," he said.
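
A rough sketch of such a polling loop is given below; the audit log location and format, the poll interval and the per-file copy command are all assumptions for illustration, not Facebook's implementation.

    import subprocess
    import time

    AUDIT_LOG = "/var/log/hive/file_changes.log"  # hypothetical log written by the Hive plug-in
    POLL_INTERVAL = 60                            # seconds between polls (assumed value)

    def new_audit_entries(position):
        """Return changed-file paths appended to the audit log since the last poll."""
        entries = []
        with open(AUDIT_LOG) as log:
            log.seek(position)
            for line in log:
                entries.append(line.strip())  # each line names a created or modified file
            position = log.tell()
        return entries, position

    position = 0
    while True:
        changed, position = new_audit_entries(position)
        for path in changed:
            # Re-copy each modified file so the destination never falls far behind.
            subprocess.run(
                ["hadoop", "distcp",
                 "hdfs://old-cluster:8020" + path,
                 "hdfs://new-cluster:8020" + path],
                check=True,
            )
        time.sleep(POLL_INTERVAL)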

When the engineers were ready to begin the switchover, they “set up camp in a war room”, shut down the old JobTracker - the Hadoop service that assigns tasks to compute nodes - and fired it up at the new location.

“Once replication was caught up, both clusters were identical, and we changed the DNS entries so that the hostnames referenced by Hadoop jobs pointed to the servers in the new cluster,” said Yang. 
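
The cutover Yang describes boils down to an ordered sequence; a sketch of that sequence follows, using the Hadoop 1.x daemon control script and hypothetical host names and helper scripts.

    import subprocess

    def run(cmd):
        """Run a shell command and raise if it fails."""
        subprocess.run(cmd, shell=True, check=True)

    # 1. Stop the old JobTracker so no new jobs land on the source cluster.
    run("ssh old-jobtracker 'hadoop-daemon.sh stop jobtracker'")

    # 2. Let the replication system drain its backlog until both clusters
    #    are identical (hypothetical helper script).
    run("/usr/local/bin/wait_for_replication_catchup.sh")

    # 3. Repoint the hostnames referenced by Hadoop jobs at the new cluster
    #    (hypothetical DNS update script), then start the JobTracker there.
    run("/usr/local/bin/update_warehouse_dns.sh new-cluster")
    run("ssh new-jobtracker 'hadoop-daemon.sh start jobtracker'")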

Yang believed Facebook's successful replication of the Hadoop cluster could improve the appeal of Hadoop and Hive to the enterprise.

Copyright © iTnews.com.au. All rights reserved.

