Although most multiline, multistate insurers are focusing their resources on lengthy policy management and claims systems rebuilds, the next generation of insurance infrastructures will be grid-based, according to John Parkinson, chief technologist for the Americas, Cap Gemini Ernst & Young (New York). However, since grid computing boasts the potential for sizable ROI achievable in 60 to 90 days, for some insurers it currently justifies exploration.
"Grid computing is a way of using the aggregated resources of many computers for a single computing task," Parkinson explains. "Everyone's total computing investment is under-utilizing its capacity." For example, an individual server seldom uses more than 35 to 40 percent of its capacity, relates Parkinson.
Infrastructures utilizing grid computing can either double their current computing power or perform the same amount of work with half the hardware, says Parkinson. As a result, many companies could terminate hardware leases for technology that would go unused.
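The arithmetic behind that claim is straightforward: if individual servers sit at 35 to 40 percent utilization, the idle capacity across a pool exceeds the capacity actually in use. A minimal sketch, using purely illustrative numbers:

```python
# Illustrative sketch of the utilization arithmetic behind grid computing.
# The server count and utilization figure are hypothetical examples.

def idle_capacity(num_servers: int, avg_utilization: float) -> float:
    """Unused capacity across a pool, in 'whole server' equivalents."""
    return num_servers * (1.0 - avg_utilization)

# A pool of 100 servers, each running at 40 percent of capacity,
# uses the equivalent of 40 servers...
in_use = 100 * 0.40

# ...and leaves the equivalent of 60 servers idle.
idle = idle_capacity(100, 0.40)

# Aggregating that idle capacity is what lets a grid roughly double
# the usable computing power of hardware a company already owns.
print(f"in use: {in_use:.0f}, idle: {idle:.0f} server-equivalents")
```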
"The simplest way to enable [grid computing] is to put a layer of software between the existing hardware operating system and the application layer," says Parkinson. "The [grid computing] software 'watches' the use of hardware and figures out what [applications] consume what size of storage. It then builds a model so that when the application tries to run, the grid layer figures out what it should run on and its priority."
Harnessing the Grid's Power
Such an aggregation of capacity is especially beneficial to insurers performing large tasks, says Parkinson. "If an insurer needs to scan through all of its policies to figure out which are the most actuarially likely to generate claims," a lot of computing power is used, he says. "With grid computing that task can be run in the background with the [computing power] that the company already has."
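A background scan of this kind is naturally parallel: the policy portfolio is split into chunks and each chunk is scored on a different machine's spare capacity. The sketch below uses a placeholder scoring rule and Python's standard thread pool as a stand-in for real grid nodes; all field names are illustrative:

```python
# Hedged sketch of the background portfolio scan Parkinson describes:
# chunk the policies, score each chunk on a separate worker, merge results.
from concurrent.futures import ThreadPoolExecutor

def claim_likelihood(policy: dict) -> float:
    # Placeholder actuarial score; a real model would be far richer.
    return min(1.0, policy["prior_claims"] * 0.2 + policy["risk_factor"])

def scan_chunk(policies: list) -> list:
    # One grid node scores its slice of the portfolio.
    return [(p["id"], claim_likelihood(p)) for p in policies]

def grid_scan(policies: list, workers: int = 4) -> list:
    # Stripe the portfolio across workers, then gather and rank the scores.
    chunks = [policies[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = []
        for scored in pool.map(scan_chunk, chunks):
            results.extend(scored)
    # Highest-scoring policies are the most actuarially likely to claim.
    return sorted(results, key=lambda r: r[1], reverse=True)

portfolio = [
    {"id": "P-001", "prior_claims": 3, "risk_factor": 0.3},
    {"id": "P-002", "prior_claims": 0, "risk_factor": 0.1},
    {"id": "P-003", "prior_claims": 1, "risk_factor": 0.5},
]
print(grid_scan(portfolio)[0][0])  # P-001, the riskiest policy
```

On a real grid, the chunks would be dispatched to whichever machines the grid layer finds under-utilized, letting the scan run without new hardware.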