Top 10 technology mistakes

By Shaun Nichols and Iain Thomson

Cock-ups of a technical bent, old and new.

It was a week of mistakes, kicking off with the news of an Apple prototype iPhone being left in a Bay Area bar, and ending with McAfee admitting to a major mistake.

Mistakes are part and parcel of human nature. It's a story as old as time. But this is not something to be scorned. There have been many glorious mistakes that have advanced human knowledge immensely.

Research into one topic has led to accidental discoveries that have changed history. Penicillin, X-rays, the electron and the Bakewell Pudding were all discoveries that came about through mistakes.

But the mistakes we're dealing with on this list aren't so much the glorious mistakes of times past, but more recent errors that had a concrete result. We've kept this as technical as possible, but business, as ever, intrudes. Have a look and see what you think.

Honourable Mention: Mars Climate Orbiter


Shaun Nichols: As any computer programmer will tell you, some of the most confusing and complex issues can stem from the simplest of errors.

If you're writing any kind of code for a game or application, this can be quite annoying. If you're writing the code for a NASA space probe, it can be a catastrophe that costs hundreds of millions of dollars. This was the case with the Mars Climate Orbiter.

A simple error in the development process caused the destruction of a US$327m space system. It turned out that, while the mission's navigation software worked in metric units, a contractor-supplied piece of ground software reported the orbiter's thruster firings in the Imperial units still used in the US, figures the navigation software then read as metric.

The result was the space equivalent of jumping off a 100ft bridge with a 50m bungee cord. The orbiter went far too deep into the Martian atmosphere and was promptly torn apart by atmospheric forces.
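To make the failure mode concrete, here's a minimal sketch of how this class of unit bug arises. This is illustrative Python, emphatically not NASA's code; the function names and figures are invented:

# A unit mix-up of the Mars Climate Orbiter kind: one module reports
# thruster impulse in pound-force seconds, another consumes the figure
# as if it were newton-seconds. (Illustrative only; not NASA's code.)

LBF_S_TO_N_S = 4.448222  # one pound-force second expressed in newton-seconds

def ground_software_impulse():
    """Hypothetical contractor module: returns impulse in lbf*s."""
    return 100.0

def navigation_update(impulse_n_s):
    """Hypothetical consumer that silently assumes SI units."""
    print("Applying %.1f N*s to the trajectory model" % impulse_n_s)

raw = ground_software_impulse()
navigation_update(raw)                 # bug: lbf*s read as N*s, thrust understated ~4.45x
navigation_update(raw * LBF_S_TO_N_S)  # fix: convert at the module boundary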

Iain Thomson: The Mars Climate Orbiter failed due to a simple engineering error: a failure to convert Imperial measurements into metric. Imperial measurements are a curious thing for a British person to consider. We invented the damn things and they keep coming back to bite us on the backside.

Despite efforts to keep Imperial measurements in the UK, the far more rational metric system has prevailed, as it has pretty much everywhere else in the world.

To complicate matters the US still keeps to the Imperial measurements that were in place when the original colonies were founded, meaning that I have to drink more pints over here to consume the same amount of alcohol as I would with fewer pints in London. That's my excuse and I'm sticking to it.

The Mars Climate Orbiter, and its sister craft the Mars Polar Lander, would have advanced our knowledge of the Red Planet immensely. The two probes would have monitored the Martian climate and the composition of its soil to determine the presence of usable water for possible manned missions. The timing was also less than ideal, pre-millennial angst being what it was.

Honourable Mention: CIA pipeline bug


Iain Thomson: OK, this one has never been officially confirmed but it's an interesting tale that has some relevance today.

In the early 1980s French intelligence persuaded a disaffected Soviet colonel to hand over something called the Farewell Dossier. In it were the names of Soviet spies who had infiltrated Western companies with the aim of stealing technology.

The dossier was shared with the Americans, who discovered that the Soviets had infiltrated a Canadian company to steal pipeline control software, something the Soviets needed if the Siberian gas pipeline intended to supply western European markets was to be completed.

According to National Security Council staffer Thomas C. Reed, the CIA, keen to disrupt trade between Europe and the Soviet Union, introduced malware into the software before it was stolen. Once triggered, the doctored code would have catastrophic results, far beyond anything its Soviet operators could have envisaged.

"The pipeline software that was to run the pumps, turbines and valves was programmed to go haywire, to reset pump speeds and valve settings to produce pressures far beyond those acceptable to the pipeline joints and welds," Reed recounted.

"The result was the most monumental non-nuclear explosion and fire ever seen from space."

Commercial spying has always been with us, I suspect right back to some distant ancestor being tortured to reveal the secret of fire. But it has never been more rampant than it is today. Technology has aided this process immensely.

When Mossad stole the blueprints for the Mirage 5 it had to shift nearly three tons of paper. The same information would now fit on a couple of hard drives, and there are plenty of countries, and companies, actively seeking to steal such data.

Shaun Nichols: Whether real or not, it behoves the CIA to remain silent on the issue. By not confirming, they avoid the ire of the US public and world community, and by not denying they keep other governments thinking that they just might be able to make pipelines explode with the push of a button.

Iain brings up a very good point in relating the alleged incident to current events. We all know that securing power and fuel infrastructure has become a major worry for all governments, and if something like this could have been done in the early 1980s, imagine what sort of havoc could be wreaked these days.

It just underlines the importance of securing infrastructure. The code that would have caused such an explosion is likely to have been small and simple, yet capable of causing absolute chaos when properly deployed.

10. Windows Millennium Edition


Shaun Nichols: Windows Millennium Edition (ME) has been a favourite punch bag of Top 10 lists past, so we decided to go easy on Microsoft's 'Mistake Edition' this time around.

Windows XP received a very warm reception when it first came out, due in no small part to the fact that it allowed many people to dump Windows ME. The final member of the Windows 9x family suffered from so many bugs and limitations that many people opted to downgrade to earlier versions.

Of the many bugs in Windows ME, my favourite concerned the System Restore feature. Users who had suffered severe malware infections would often clean up the infection and then use a restore point to repair damaged system components. Unfortunately, Windows ME had a slightly bothersome tendency to restore the malware as well.

Many will argue that this was because it was so hard to tell the difference between Windows ME and an actual computer virus.

Iain Thomson: There are so many things to hate about ME one hardly knows where to start. That said, the last time we savaged the operating system several readers wrote in to tell us we were mistaken, so I took a look. Yes, there were happy ME users, but they are few and far between.

ME suits a very particular niche market: the conservative small business. If you were running a small business of, let's say 100 people, and you had a fairly static set of applications to maintain, then life wasn't too bad.

A monoculture Microsoft network would give you few problems, provided you didn't let users do anything stupid, and a lot of the major security bugs in Windows had been worked out by the ME version.

But for everyone else the system was a dog. Consumers hated the frequent crashes that came as a result of downloading material from the internet, and corporations didn't like the software conflicts and dodgy recovery software. XKCD has it right: ME is best left as a warning in history.

9. Brain virus

Iain Thomson: The Brain virus is recognised as the first malware for the MS-DOS operating system, but if you believe its creators the whole thing was a case of copyright protection gone wrong.

The virus is thought to have been developed in 1986 by two brothers in Pakistan named Basit and Amjad Farooq Alvi, who were looking to protect some medical software they had written from disc copying.

They had found some suitable code on a bulletin board system and adapted it so that, if someone copied the software, the malware would be installed. Someone else adapted this for MS-DOS and the stage was set.

Once installed, the code wrote itself to the boot sector of any floppy disc used in the machine, planting a message warning that the PC was infected and giving a phone number to call to sort out the problem.

Unfortunately every time that disc was used in a new machine the virus would spread. The brothers soon found themselves deluged with angry callers and eventually had their phone lines cut off, but the damage was done. A classic example of why a little knowledge is a dangerous thing.

Shaun Nichols: The original case of digital rights management run amok, Brain underscores the fine line that companies can often find themselves walking when looking to secure their products.

Too often companies become so enamoured with securing their software that they lose sight of what they are doing to their legitimate customers. Brain wasn't originally designed with malicious intent; the idea was to notify users when a pirated copy of the software was run.

Unfortunately, the Alvi brothers didn't consider that the mechanism they used could be easily modified and adapted for malicious use. By employing viral techniques to manage their products, they soon found themselves connected to a virus outbreak.

I would like to say that vendors learned from this and were more responsible with their anti-piracy approaches in later years but, as we will see later on, that was not the case.


8. Facebook Beacon

Shaun Nichols: When making this list we tried to include actual software cock-ups rather than business decisions, but this one has a heavy dose of both.

Back in 2007 someone convinced executives at Facebook that they had to concern themselves with petty things like revenues. The solution was to construct a new system called Beacon that would combine advertising with traditional social networking features.

Dozens of e-commerce sites signed up to the service and began sharing purchase data with the site. For some reason Facebook just couldn't quite anticipate that people might have a problem with having all their purchases broadcast over Facebook.

Not surprisingly, a major protest erupted and Facebook was eventually forced to kill off the ill-conceived and ill-deployed platform. Facebook is still struggling to win back user trust from the incident.

Iain Thomson: It's one of the major problems with start-ups in the internet age: you have a brilliant idea and lots of users, so how do you start making money out of them?

Facebook faced just such a quandary. It was the social networking flavour of the day (still is, for that matter) but, mindful of the empty husks of The Well, Geocities and Friendster, Facebook co-founder Mark Zuckerberg decided to start monetising the site. Facebook is all about sharing data, so he decided that its users should share their purchase choices online.

Advertisers loved the idea. Such a system would allow for a whole new range of integrated marketing campaigns. Facebook updates could start all manner of viral sales pitches that were a marketer's wet dream, and companies would be willing to pay through the nose for such data. Users, on the other hand, were less than enthralled.

Now, there's an old argument when it comes to data privacy that if you've done nothing wrong you've got nothing to hide. This may be true, but when it came to purchasing decisions a lot of people were less than happy at the prospect of seeing their data spread across the web.

Statistical probability suggests that only a tiny minority were concerned about what my granny would call "unsavoury" items, so the Beacon case was important for showing how many of us really do value our privacy.

7. Sony rootkit

Iain Thomson: In terms of the sheer anger this one raised I would like to have seen it higher up the list, but those are the breaks sometimes.

In 2000, when the music industry was really panicking about Napster and music piracy, someone at Sony came up with a bright idea. Why not introduce some digital rights management software that let Sony know every time a disc was copied? The software to do this could be developed simply, and pirates could be stopped in their tracks. Five years later the plan went into effect.

On one level the plan worked perfectly, and the software did exactly what it said on the tin. But Sony management, despite obviously thinking this was such a wonderful idea themselves, neglected to tell consumers about the code, and it was discovered by Windows internals expert Mark Russinovich, who published a blog post pointing out that this wasn't the best idea in the world after all.

The problem was that the code was a rootkit, and anyone who works in security hates rootkits because they bury themselves in the operating system and hide files and processes from it, giving any malware that follows a ready-made hiding place.

When the news leaked, Sony tried to brazen it out and the firm's global digital business president, Thomas Hesse, was famously quoted as saying: "Most people, I think, don't even know what a rootkit is, so why should they care about it?"

One security firm had t-shirts printed with that quote, and I took great pleasure in wearing one to Sony press conferences thereafter.

Corporate hubris isn't usually news, but the breathtaking arrogance, coupled with growing consumer fears about the security of their online bank accounts, made the rootkit big news.

Within a week virus writers were using it to break into systems, and Sony was forced into an embarrassing climbdown and had to pay compensation.

Shaun Nichols: Sony's rootkit fiasco was the 21st century version of the Brain virus. As Iain touched on, the problem was that the company became so obsessed with preventing piracy that the customer became the enemy, and any sense of respect for the people who buy and use its products flew out the door.

In Sony's case, the company decided that its customers didn't need to know what was on the disc and what it would do to their systems.

It never seemed to dawn on the company that there was something wrong with lacing its CDs with code that would not only automatically install itself onto your system, but embed itself at the kernel level.

Rootkits are understandably a huge worry in the security community, as they run at a level that normal anti-virus tools can't reach.

One would think that, if McAfee and Symantec are spending tens of millions of dollars in R&D to eliminate something, you probably shouldn't be tossing it into your product without telling anyone.
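For what it's worth, the test that did the rounds at the time was charmingly simple: the cloaking driver hid anything whose name began with '$sys$'. Here's a minimal sketch of that check in Python (the file name and location are arbitrary; on a clean machine it simply reports that nothing is hidden):

import os
import tempfile

# The XCP driver hid any file, process or registry key whose name began
# with "$sys$", so a freshly created probe file vanishing from a directory
# listing was the tell-tale sign.
probe_dir = tempfile.mkdtemp()
probe_name = "$sys$probe.txt"
probe_path = os.path.join(probe_dir, probe_name)

with open(probe_path, "w") as f:
    f.write("rootkit probe")

if probe_name in os.listdir(probe_dir):
    print("Probe file is visible: no $sys$-style cloaking in effect.")
else:
    print("Probe file vanished from listings: something is hiding $sys$ names.")

os.remove(probe_path)
os.rmdir(probe_dir)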

6. Apple III

Shaun Nichols: Not so much a programming error as an engineering gaffe, the ill-conceived Apple III flopped in the market and helped to lock Apple out of much of the business space.

Designed to succeed the wildly successful Apple II and appeal to the growing enterprise workstation sector, the 1980 Apple III was built to be rugged and professional, with a stylish metal casing, while remaining quiet by eschewing fans.

The idea was that the casing would act as a natural heat sink, drawing heat away from the components and keeping the system cool. Unfortunately for Apple the case design also meant that the chips on the motherboard had to be positioned close to one another.

With inadequate heat sinking and no fan to cool the board, the system was prone to overheating and all the problems that came with it. Floppy discs were often damaged by the internal drives, and heat-warped chips worked loose from the motherboard.

Apple's solution for the loose chip issue? Lift the computer a few inches off the desk and drop it, reseating the chips. Not surprisingly, the Apple III sold poorly and was discontinued in 1984.

Iain Thomson: Apple was there at the birth of the personal computer, but it had penetrated only a few key vertical business markets when IBM steamrolled in with the PC, and Apple's fate as a niche system was sealed.

The Apple III was the firm's last attempt to stem the tide of history. With that in mind, you'd have thought they'd have turned out a better system than this piece of junk. To me it typifies everything that was worst about Apple at the time: the rule of style over function, closed systems and a half-arsed attitude to build quality.

The majority of the business world took one look at the Apple III and walked away laughing quietly. Yes, there are some design professionals and accountants who wax lyrical about it, but the majority of users think the system is best forgotten.

Only now is Apple making serious inroads into the corporate computing sphere and it's doing it not because of the quality of its computers but the excellence of its smartphones.

5. IBM Personal System/2

Iain Thomson: IBM's decision to get into the PC market really kick-started the idea that a computer could be on every desk and legitimised the mass computerisation of the workplace. The unofficial slogan of the company was 'No-one ever got fired for buying IBM,' and the company aimed to keep it that way in the personal computer sphere.

But by the third generation of the PC, the company was losing its grip on the market. Clever reverse engineering by Compaq and others had spawned a growing PC clone market, and businesses were proving distressingly keen to buy a working PC without an IBM logo on it at a significant discount, rather than paying whatever Big Blue told them to.

So IBM introduced the PS/2, a completely new PC with a closed Micro Channel architecture that would force the cloners to start again from scratch. Unfortunately customers would have to do the same, since the compatibility problems were immense, but IBM figured it had enough clout to force the market to change. It was wrong.

Don't let it be said that the PS/2 wasn't innovative. It standardised the industry on 3.5in floppy drives for a start, and the round PS/2 plugs (named after the system) on the end of old keyboards and mice lasted over a decade as the default standard.

But the fundamental mistake IBM made was in not realising that the days of hardware margins were gone. Once anyone could build a computer the money was in the software, and IBM had cheerfully handed that part of the PC industry to a bright young man in Seattle with personal grooming issues and big plans for the computer industry.

Shaun Nichols: PS/2 was an interesting idea that IBM came up with too late. As is often the issue with larger companies, IBM simply wasn't agile enough to keep up with the industry, and it took a huge hit when it tried to introduce the PS/2.

While it might have been a great idea in the early 1980s, by the time IBM tried to introduce the new platform Microsoft was already taking charge in the market and doing so in a manner that welcomed software developers and hardware vendors.

While the PS/2 platform was a failure for IBM, in the long term it was arguably a very good thing for the company. Witnessing the crash and burn of PS/2 showed IBM that the market was changing and that, if it wanted to maintain its position, it had to rethink its approach.

Big Blue spent much of the 1990s dumping many of its hardware operations and focusing on the enterprise space with software and services. As a result IBM was able to keep its status as a pillar of the industry and arguably the most trusted name in the business world.


4. Iridium

Shaun Nichols: Anyone who has ever had to deal with patchy mobile phone coverage and dropped calls can see the appeal of satellite phone networks that offer global coverage.

While the idea of Iridium was a dream for users, the reality was a nightmare for everyone involved, particularly the investors who pumped billions of dollars into the company.

Launched with great promise in 1998, Iridium was a mobile network that would cover the entire globe. Just nine months later it had to file for bankruptcy protection. The problem lay in the constellation of orbital satellites on which the network relied: 77 were planned, giving the company its name (iridium is element 77), though 66 were ultimately flown.

As you're probably aware (and Iridium seemingly wasn't), launching a satellite into space is very expensive. Launching 66 of them is very expensive times 66. The company had nowhere near enough capital to deploy the satellites while still building a user base, and its debts soon became far too much to handle.

The operation was eventually resurrected and carries on today as a specialist service for remote applications such as ocean vessels and rescue operations, but Iridium was never able to recover from its huge debts.

Iain Thomson: Ever since Arthur C Clarke proposed satellite communications, the promise of a united world has been marred by one fact: the cost of getting the damn things up there in the first place. Escaping the gravity well is an expensive business.

Iridium was a brilliant idea, spawned at the heart of dotcom optimism about how the internet and communications would change the entire structure of human society. Motorola, Al Gore and a host of big name backers got behind the scheme, and before long the rockets' red tails were burning.

However, like many brilliant ideas, it glossed over some inconvenient facts: such a system would be hugely expensive, and the world's wealth is concentrated in regions that already had phone coverage, so from a capitalist point of view what was the point of serving the whole planet?

Also the cost of maintaining a network that large in low Earth orbit, with a high risk of collision or malfunction, seems not to have been considered. Billions of dollars were spent before the company declared bankruptcy and the entire operation was bought by private investors for just US$25m.

Iridium is still running today, principally as a US military communications system, but I suspect we're 20 years and a hell of a lot of engineering away from an affordable satellite communications system.

3. Itanium

Iain Thomson: I can remember sitting in the keynote at the Intel Developer Forum when the company launched Itanium in 2001. It was big news - Intel's first 64-bit chip - and my fingers were furiously typing as Craig Barrett extolled the future of the chip: it would power everything under the sun and the world would be a better, faster place.

But sitting at the back of my mind was a red flag. No-one was writing 64-bit code yet, and what IT manager in his right mind would want to do a complete hardware and software refresh just for a bit of extra speed? We were in the middle of the dotcom bust, and funds were so tight people were posting job ads reading 'Will code for food.'

At the end of the presentation I sat down with a good friend (who now runs a rival news site) and we rehashed the details before the conversation petered out. "It's never going to work," he said. "Not in the short or medium term."

"You're right," I replied, "But it's going to be fun trying to see them talk their way out of this one. Intel's put billions behind this. It's too big to fail."

Sure enough the market wasn't ready to make such a drastic change, and AMD's Opteron chip, which combined 32-bit and 64-bit operations, showed how it should be done. Opteron beat Itanium, and Intel was forced to rush its own 64-bit extensions into the Xeon line to make up ground.

Shaun Nichols: Intel refuses to give up, but it is increasingly looking like the window for Itanium has closed. Developers still haven't really jumped onboard, and advances in x64 technologies have begun to bring the chips into the areas that Itanium was banking on for its success.

Much like IBM with the PS/2, Intel learned the hard way that you can't simply shift the entire industry onto a new platform by mandate alone. Intel tried to tell everyone that we're all going to move to 64-bit now, and developers ignored the call.

Without software support, Itanium had little appeal to the market. By the time the industry was ready to move to 64-bit, x86 processors had caught up.

One of the biggest problems the tech industry has is telling the difference between 'Can we do this?' and 'Should we do this?' Itanium was a textbook case of engineering optimism outweighing business sense.

2. Sony battery recall

Shaun Nichols: Most of the cock-ups on our list led to user frustration and, in the worst case, losses of large amounts of money. But Sony's 2006 and 2007 battery fiasco was a mistake that put lives in danger.

The issue stemmed from manufacturing flaws in the lithium-ion battery packs Sony made for companies such as Dell, Acer and Apple. Microscopic metal particles left inside the cells during manufacture could cause internal short circuits, heating the cells to the point of violent combustion.

In other words, the battery packs had a nasty habit of exploding into flames. With the help of news reports and circulating internet videos, the otherwise rare condition became a major issue.

One by one, vendors began to demand recalls and Sony eventually ended up taking the hit for replacing 4.3 million battery packs. The recall dealt a major financial blow to Sony and was the second largest recall in the history of the computing industry.

Iain Thomson: The IT world had been talking about exploding batteries for years before anyone took it seriously. Anyone who's used laptops knows how hot they get and there's always something at the back of your mind that suggests you're one malfunction away from a skin graft. But then pictures and video surfaced online.

When you look at what's actually inside a lithium-ion battery it can be rather a shock. Basically you've got a lot of flammable electrolyte and electrodes packed very close together, and if a short circuit sparks thermal runaway you'll have burning liquid poured over very sensitive parts of the human anatomy. It's no wonder there was such a panic.

That said, the scare also served a useful purpose in reminding people how much the industry is homogenising these days. The Sony battery recall didn't just cause Sony to pull back its batteries; a host of other companies Sony supplies had to do the same.

We forget that behind every 'individual' laptop are three or four companies, using multiple manufacturers' components, to build a computer that has a brand stamp on it. A flaw in one manufacturing process affects us all.

1. Intel Pentium floating point

Iain Thomson: The Intel floating point fiasco was a perfect storm of cock-ups. A technology flaw met an engineering and PR mindset that couldn't cope and turned the whole thing into a pointless mass panic. It is now a textbook case of how not to do things in the industry.

In 1994 things were looking good for Intel. Its Pentium processor was riding high, with the latest chips capable of an astonishing 66MHz clock speed. Then in October a mathematics professor, Thomas Nicely, contacted Intel about some problems he was having. He'd installed a few Pentiums in a system being used to compute reciprocals of prime numbers, but had been getting very dodgy results back ever since. Was it possible that the chips were faulty?

It turned out Intel already knew the answer. There was an error in the chip's floating point unit and the engineers had already spotted it, but had decided that, as the problem wasn't an issue unless you were performing really high-precision mathematical functions, they'd sort it out in the next revision rather than doing a recall.
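The flaw was easy to demonstrate once you knew where to look. The check below is the classic one that circulated at the time, shown here in Python; on correct hardware the quotient round-trips and the residue is zero, give or take a rounding ulp:

# The classic post-bug sanity check. An affected Pentium's divider returned
# 1.333739... instead of the correct 1.333820... for x / y, which multiplied
# back out leaves a residue of roughly 256 rather than ~0.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)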

It's a classic example of the engineering mindset, focusing on the practical rather than seeing the whole picture. There was no real problem for real world use, was the thinking, and if this was explained then people would take a rational viewpoint on it. However this neglected one crucial point: people are emotional and that makes us less than rational sometimes.

CNN got hold of the story and the scoop went mainstream. These days Intel's PR operation is a well-oiled fighting machine (in both senses of the word, at least at the end of a good launch party) but back then the engineers still called the shots, and engineers make lousy PR people. Intel said that it would replace any Pentium if the owner could show that they needed to use floating point functions.

Mass panic ensued. People who wouldn't know a floating point if it bit them on the backside became convinced that their processor wasn't reliable and raised a ruckus. The stock market panicked too, Intel's share price took a hit, and a whole new series of jokes did the rounds.

Q: How many Pentium designers does it take to screw in a light bulb?

A: 0.99904274017, but that's close enough for non-technical people.

Eventually Intel backed down and offered no-questions-asked replacements, but the whole fiasco cost the company US$475m in direct costs, a battering on the world's stock exchanges and a huge black mark on its reputation.

Shaun Nichols: Every time the tech sector heats up and we get a new wave of hot start-ups, those involved wonder why anyone would want a stuffed-shirt business type to run the show.

Inevitably you get an incident like Facebook Beacon or Intel's floating point crisis, and everyone realises that tech smarts don't translate into business smarts.

I think it also shows why the tech sector will continue to experience these sorts of cock-ups over and over again. Engineers and developers love rational solutions, but the rational solution isn't always the best one.

Intel was right in that very few systems would ever encounter any problems with the floating point issue, just like Sony was right in that very few systems would ever be at risk of a battery meltdown.

The problem is that the market does not appreciate statistical probability. If you were to tell people that there was a one in 10,000 chance of something happening to their computer, almost all of them would worry about it, even though they know the odds of a failure are extremely low.
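To be fair to the worriers, scale is part of it too. A back-of-the-envelope sketch in Python, reusing the hypothetical one in 10,000 figure above and the 4.3 million packs from the Sony recall:

# Illustrative arithmetic only: a "reassuringly" rare per-unit fault still
# produces real incidents once you ship at recall scale.
units_shipped = 4_300_000   # the size of the Sony battery recall above
failure_odds = 1 / 10_000   # the hypothetical figure quoted above
print(units_shipped * failure_odds)  # ~430 expected incidents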

Techies may not like to work with, and under, the career suit types, but for a company to succeed in the long term it's vital that true businessmen and women take the reins.
