What to say when things go wrong

By Juha Saarinen

Is crisis comms part of your disaster recovery plan?

Disaster strikes: something goes wrong and it turns out to be a security incident, a serious one. Systems go down and the situation is a mess.

Customers are screaming at your company on social media, staff are working around the clock to limit the damage, and you're not even sure how far it extends.

Staying quiet and hoping things will die down sooner rather than later isn’t an option. Your paying customers demand an explanation. How do you communicate when something bad happens?

A good example of what not to do can be found across the ditch, where listed telco incumbent Spark (née Telecom NZ) was hit by a mysterious fault or attack that took out its Domain Name System (DNS) service for 600,000 customers over the past weekend.

The fault meant Spark customers lost DNS resolution intermittently over three days, which effectively killed internet access for many people unless they swapped to a third-party public DNS resolver such as Google's.
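For readers wondering what that workaround amounts to in practice, here is a minimal sketch of resolving a name via Google's public DNS instead of the ISP's default resolvers. It assumes the third-party dnspython library and uses example.com purely as an illustrative query; it is not anything Spark supplied.

    # Minimal sketch: resolve a name via Google's public DNS (8.8.8.8)
    # rather than the ISP's default resolvers. Assumes the third-party
    # "dnspython" package (pip install dnspython).
    import dns.resolver

    def resolve_via(nameserver: str, name: str = "example.com"):
        resolver = dns.resolver.Resolver(configure=False)  # ignore system resolver config
        resolver.nameservers = [nameserver]
        resolver.lifetime = 5.0  # give up after five seconds
        answer = resolver.resolve(name, "A")
        return [rr.address for rr in answer]

    if __name__ == "__main__":
        print(resolve_via("8.8.8.8"))  # bypasses the ISP's resolvers entirely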

At first, the fault was blamed on Spark customers clicking on nude pics stolen from celebrities in the recent iCloud hack and, in the process, downloading malware that compromised their computers and shanghaied them into a denial-of-service botnet.

There were Eastern European cyber criminals in the mix too, plus DNS amplification attacks, but according to the latest missive from Spark, the whole thing was caused by 135 customer modems running open, abusable DNS resolvers.
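An "open" resolver in this sense is one that will answer recursive queries from anyone on the internet, which is what makes it usable in amplification attacks. As a rough illustration only, again assuming dnspython and a device you are authorised to probe (the target address below is from the TEST-NET range, not a real modem), a check might look like this:

    # Rough check for an open recursive resolver: send a recursive query
    # for a name the device is not authoritative for; if it answers with
    # recursion available, it will resolve for strangers too.
    # Assumes the third-party "dnspython" package; the target IP is illustrative.
    import dns.flags
    import dns.message
    import dns.query

    def looks_like_open_resolver(ip: str, timeout: float = 3.0) -> bool:
        query = dns.message.make_query("example.com.", "A")  # recursion desired is set by default
        try:
            response = dns.query.udp(query, ip, timeout=timeout)
        except Exception:
            return False  # no answer at all: not acting as an open resolver
        recursion_available = bool(response.flags & dns.flags.RA)
        return recursion_available and len(response.answer) > 0

    print(looks_like_open_resolver("192.0.2.1"))  # TEST-NET address, illustrative only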

Instead of blowing over, the whole thing has now become reminiscent of the XT mobile network launch fiasco in 2010, which hurt the company's reputation badly.

The net result of Spark's mixed messaging is confusion all round, including unhappy customers and questions about the company's overall technical competence. How did a relatively small number of customers manage to take an entire national network down? So far there has been no good answer to this rather crucial question.

Meanwhile, overseas media picked up on the "users clicking on nude pics take down the NZ internet" angle and that piece of news is now everywhere. It's a safe bet that already angry Spark customers won't like being labelled malware-infested pervs by overseas media.

It's easy to see how Spark got its disaster comms wrong. To start with, blaming customers who pay for a service they depend on is a total no-no. It doesn't matter how annoying the customers are: if you want them to stay with you, fix their security and technical problems and don't blame them publicly.

Shifting the blame onto a problem that customers can't solve themselves is guaranteed to make matters even worse. Spark provided no fix for the bad modems the company blamed for the downtime.

That's probably because the telco knows that most retail broadband modems, including the ones it distributes, do not receive regular and timely security updates. In other words, there is no easy fix for most customers.

To avoid a mess like this, you need a plan in place and a clear idea of how to present the information to customers. Generally speaking, it's always simpler to stick to the truth.

Be ready to act fast with support where needed to solve the issue. 

Trying to hide foul-ups like infrastructure upgrades gone wrong is also a mistake, as the truth will come out sooner or later and your organisation will look twice as bad.

It is also important to have people available that can speak knowledgeably about what happened (or who realise they’re out of their depth and say “I don’t know” when needed).

Should you get techies to explain the situation? That depends.

There are techies who can be trusted to speak to customers and explain the issue in a calm and clear manner. They are rare, so unless your enterprise is lucky enough to have one to hand, it's best to avoid putting forward the tired and harassed techies working to fix the problem; they could make things worse, not better.

Don’t forget that in a crisis situation, techies have a vested interest in avoiding blame and sometimes even shifting it. Many issues are difficult to understand unless you have direct access to what is causing the problems, and it can be tempting for techies to hide or obfuscate information out of self preservation.

Either way, ensure that technical explanations make sense, or your company will look incompetent at best or, worse, like calculating liars who treat paying customers with contempt.

If a workaround is provided while the problem is being sorted out, make doubly sure that it won’t come back and bite you later on. You need to weigh up whether immediate relief is worth long-term pain in terms of support and other issues later on, when things are back to normal.

All of this needs to be part of your organisation’s disaster recovery comms plan, and understood by everyone involved. Comms need to be included in disaster drills too, and not left to when the proverbial hits the fan. There's too much at stake not to get it right.

Juha Saarinen
Juha Saarinen has been covering the technology sector since the mid-1990s for publications around the world. He has been writing for iTnews since 2010 and also contributes to the New Zealand Herald, the Guardian and Wired's Threat Level section.

He is based in Auckland, New Zealand.
