Business Systems Part 2: On the starting grid

13 Nov 2003

It’s a beguiling idea: hook up a whole raft of computers and combine their power so that you have a supercomputer. Once they are linked you have a tremendous computing resource, whether for raw number-crunching power or as a strength-in-depth facility: coping with peak traffic, providing resilience in the event of failure of any element, bringing new applications on stream, running test and development work in parallel with production, and so on.

Not many people now remember or realise that this was the fundamental concept behind the internet, which really got going when US universities, under the auspices of the National Science Foundation, linked their computer systems to meet the ever more demanding needs of scientific researchers. A similar current project, the TeraGrid, is a multi-year effort to build and deploy the world’s largest and most comprehensive distributed infrastructure for open scientific research by linking government and academic supercomputers across the USA.

Back down here in the business world, grid computing is a concept being espoused and promoted by IBM, Hewlett-Packard, Oracle and many other industry leaders. The basic concept still applies – linking computers to work together as a single entity – but although physical location may not matter, we are usually talking about harnessing the resources within a single organisation. Oracle itself began the process with a dictum from on high two years ago, when CEO Larry Ellison simply ordered that there be no more local investment in servers. All computing resources are now centrally managed for its worldwide operation, which has shed over 2,000 servers in the process.

“There are several drivers for grid computing in the business market,” says John Caulfield, solutions director of Oracle in Ireland. “Relatively low-cost computing is certainly one. It used to be very expensive to build in capacity and redundancy in your computing resources – everyone remembers back-up machines that were hardly ever used. With today’s blade servers, capacity and scalability are no longer difficult issues at all. Another major factor is that open standards mean that different systems and applications can work together. It may not always be easy, but implementing major new enterprise applications seldom requires the kind of matching investment in systems and platforms that would once have been normal. On the other hand, the ways in which we have been doing things means that every large organisation has islands of computing and almost certainly too much capacity because it has been added incrementally for each application. We look at the total but not always at the whole.”

But in many respects the key driver for grid architecture is the ability to manage all of the IT resources so efficiently – and so transparently to the end users – that computing becomes a utility like electricity: “You plug it into the wall and it works. The organisation is, as it were, commoditising computing within its own perimeters.” John Caulfield points to practical examples like coping with the surges in demand by financial applications at month or period end, of order processing and logistics for peak seasons, of web applications that require 24×365 guaranteed performance. Data storage has become a major IT concern in recent years with the exponential growth in data. Balancing and provisioning all of these concerns, prioritised dynamically according to the needs and policies of the business, is where grid computing is showing huge promise.

John Scully, who heads up the IBM Global Services business in Ireland, also points to open standards as fundamental to grid computing’s connectivity. “But although now you have a greater whole, you are actually moving from the traditional, fairly monolithic architecture to a modular and more flexible structure. You can pick best of breed in platforms, components and, of course, applications – and make them all work together.” He uses the example of disk drives as classic IT commodity items: “You plug them in, add more when you need to, replace one if it fails – they all work together out of the box and ordinary users don’t even need to know.”

Grid computing is somewhat redefined or refocused in business applications, he says, because what you are aiming to do is not to gang up computers for maximum power but to use their combined capacity better for the vast and growing range of applications that a modern business needs. “That can also mean geographical dispersion, mobility and so on – whatever users require. So by managing the resource centrally, very flexibly and dynamically, what we are doing is distributing computing where it is needed, rather than computers. If I want to see a customer account, up to the minute and even with any orders currently being processed, it doesn’t matter to me where that data is held, where the application is running or anything else about the IT behind the scenes – I just want the information on the screen I’m using at the time.”

Once we start thinking that way, the platforms and the data storage and the delivery channels can be what suits, is available, is most secure, is cheapest or whatever. Some we will own, some will be services, some will be invoked as needed and paid for by the cent or megabyte. In the grid, it is the connectedness that counts. Once it is in place, any resource you need is connected somewhere and can be used. Will it “change the face of IT as we know it?” Like many apparent giant steps in technology before, the answer has to be a firm Probably.

By Leslie Faughnan