Gartner’s “real-time” enterprises to lay off millions

More efficient IT use, with an emphasis on grids of shared computing resources, "will lay off millions," according to Gartner vice president and research director Tom Bittman.

Bittman, formerly the software chief for IBM's AS/400 midrange systems, said some companies will use the efficiencies Gartner describes as "the real-time enterprise" to redeploy people onto other projects, but most "should be able to cut costs".

According to Gartner, "the Real-Time Enterprise is an organisation that competes by using up-to-date information to progressively remove delays to management and execution of critical business processes".

It demands "the death of the accidental enterprise architecture and will force companies to focus on strategic application integration".

Bittman said the total cost of IT ownership is now driven largely by skills requirements. Optical networking efficiency is improving at around 100 percent per year, he said, while computing power improves at about 60 percent per year. The cost of human labour, however, stays about the same.
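Compounding the growth rates Bittman quotes shows how quickly the balance tips against labour. A rough illustration (the five-year horizon is an arbitrary choice, not Gartner's figure):

```python
# Rough illustration of the growth rates quoted above, compounded over
# five years. The horizon is arbitrary; the rates are from the article.

def compound(initial, annual_growth, years):
    """Capability after compounding annual_growth for the given years."""
    return initial * (1 + annual_growth) ** years

years = 5
bandwidth = compound(1.0, 1.00, years)  # optical networking: ~100%/yr
computing = compound(1.0, 0.60, years)  # computing power: ~60%/yr
labour = compound(1.0, 0.00, years)     # human labour: roughly flat

print(f"After {years} years (relative to today):")
print(f"  bandwidth: {bandwidth:.1f}x")  # 32.0x
print(f"  computing: {computing:.1f}x")  # ~10.5x
print(f"  labour:    {labour:.1f}x")     # 1.0x
```

Hardware and bandwidth capability per dollar races ahead while the human slice of the budget stands still, which is why labour becomes the dominant cost.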

"The real focus is not to get more efficiency out of the hardware or software, but to get people out of the equation," he said.

Bittman said when computing is cheap and bandwidth expensive, organisations tend to keep their computing close to home. "You buy what you need - you buy boxes," he said.

This leads to over-provisioning of computing resources, with most servers using only five to 15 percent of their capacity. "Your core competency must be managing those things," he said.

"As bandwidth becomes cheaper, it makes more sense to push computing out."
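The consolidation arithmetic behind those utilisation figures can be sketched; the 60 percent target utilisation below is an assumed figure for illustration, not one Bittman quoted:

```python
# Illustrative arithmetic only: how pooling under-utilised servers shrinks
# the fleet. The 60% target utilisation is an assumption, not Gartner's.

def servers_needed(current_servers, current_util_pct, target_util_pct):
    """Servers required to carry the same total load at higher utilisation."""
    total_load = current_servers * current_util_pct
    # Ceiling division: you cannot run a fraction of a server.
    return -(-total_load // target_util_pct)

for util_pct in (5, 15):
    pooled = servers_needed(100, util_pct, 60)
    print(f"100 servers at {util_pct}% load -> {pooled} pooled at 60%")
```

At five percent utilisation, 100 boxes collapse to nine; at 15 percent, to 25. That headroom is what pooled grid resources reclaim.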

Bittman said that by using a combination of grid services (pooled computing resources) and rules and protocols to manage systems, organisations can remove much of the human element from systems management, and therefore much of the cost.

"You manage the policies and the IT infrastructure manages everything below that," he said, adding that this is one of the key ways to exploit the growing gap between processing capability and the amount of it that software actually requires.
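The policy-driven model Bittman describes can be sketched as a toy rule engine (hypothetical names throughout, not any vendor's API): administrators declare policies, and the infrastructure decides what to do by evaluating them against live metrics.

```python
# Toy sketch of policy-driven systems management (hypothetical, not any
# vendor's API): humans write the policies; the infrastructure acts.

POLICIES = [
    # (name, condition on observed metrics, action to take)
    ("scale_out", lambda m: m["cpu_util"] > 0.80, "add a node to the pool"),
    ("scale_in", lambda m: m["cpu_util"] < 0.20, "return a node to the pool"),
]

def evaluate(metrics):
    """Return the actions the infrastructure should take, given metrics."""
    return [action for name, cond, action in POLICIES if cond(metrics)]

print(evaluate({"cpu_util": 0.90}))  # ['add a node to the pool']
print(evaluate({"cpu_util": 0.50}))  # []
```

The point of the design is that no operator watches the metrics: people maintain the policy list, and everything below it is automated.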

Bittman identified three types of grid computing. For the enterprise, massively parallel processing is the least important, he said. More important are shared unique resources, such as a shared industry archive, and most important of all are shared generic resources that can be rented out.

Grid services, Bittman predicted, will be used by large corporations internally by 2006. The following year, grid service providers will offer services based on a rental model and by 2008, 15 percent of all Fortune 100 companies will use the service provider model "as a safety net at least".

Half of all these companies will use grid services internally, he said.

Most of the innovation in management and shared resources is being done by "small vendors you haven't heard about," Bittman said.

Most of them won't survive, he said, with the successful one likely to be snapped up by the big vendors.

In the present environment, Bittman said the inability of Windows NT-based systems running on Intel architecture to run applications together is a "fundamental disabler" of more efficient use of computing resources. He said Microsoft will fix this problem in the server version released after next January's .Net server, but said the software giant is still moving "at a glacial pace" to address the problem.

"If Windows doesn't fix it, Linux will," he said. Bittman said that although Windows remains a much more mature operating system than Linux, improved use of resources and better management could de-empahise the role of the operating system, opening the door for slimmed down systems like Linux.

