Poor data classification costing companies dear


Companies are wasting up to £1m per terabyte of data they store because of poor or non-existent data classification.

The organisers of Storage Expo, to be held in London next week, commissioned a survey into data classification which found that the main reasons companies classify data are access control (67 per cent), retention control (21 per cent) and retrieval and discovery (12 per cent).

However, Alan Pelz-Sharpe, principal analyst at CMS Watch, reckons that these priorities are in the wrong order and the resulting data duplication is costing companies dear.

"Typically 80 per cent of mail data consists of duplication," he said. "Yet any search tool has to treat each piece of data equally, thus slowing down the process and pushing discovery costs through the roof."

Pelz-Sharpe estimates that storage currently costs around 10p per gigabyte, but the cost of legal discovery on 1GB of storage would be at least £1,000.

"So storing everything may seem cheap on the one hand, but can become very expensive should something go wrong," he said.

As well as increasing costs, unstructured and incorrectly classified data is a headache for many companies: it can hurt productivity and security, and raises a raft of compliance and regulatory issues.

"Increasingly, data classification is determined based on intended use of data, rather than simply its subject matter or source," added Theresa Regali, another principal analyst at CMS Watch.

"Classification is vital to ensure that data does not fall into the wrong hands and security protocols are met and to facilitate enterprise-wide search, retrieval and discovery."

Pelz-Sharpe and Regali will be chairing keynote sessions at next week's event to discuss the issues surrounding this topic.
Copyright © v3.co.uk
