Posted by: Ryan Arsenault
DataCenter, mainframe, mainframe capacity management
In the United Kingdom, comparing auto insurance quotes online has surged in popularity, and insurance companies’ mainframes haven’t always been up to the task, according to application performance vendor Macro 4.
Each time a consumer applies for a quote from a comparison website, the site will send out a mass of automated requests in the form of XML data streams to dozens of insurance company websites.
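The shape of such a request can be sketched as follows. This is a minimal illustration of an automated XML quote request, built with Python's standard library; the element names and schema are purely hypothetical, not an actual comparison-site format.

```python
# Minimal sketch of the kind of automated XML quote request a comparison
# site might fire at an insurer's quotation endpoint. The schema and field
# names are illustrative only, not a real comparison-site format.
import xml.etree.ElementTree as ET

def build_quote_request(driver_age: int, postcode: str, vehicle_reg: str) -> bytes:
    """Serialize one quote request as an XML payload."""
    root = ET.Element("QuoteRequest")
    ET.SubElement(root, "DriverAge").text = str(driver_age)
    ET.SubElement(root, "Postcode").text = postcode
    ET.SubElement(root, "VehicleReg").text = vehicle_reg
    return ET.tostring(root, encoding="utf-8")

payload = build_quote_request(35, "SW1A 1AA", "AB12 CDE")
print(payload.decode())
```

A comparison site would send dozens of payloads like this in parallel, one to each insurer, and discard any insurer that fails to answer within its time window.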
Any insurance site that is unable to send back a quote within a relatively short period of time – sometimes as little as a few minutes – is presented as “unable to quote” by the comparison sites, according to Philip Mann, Principal Consultant at Macro 4.
Mann said the insurers’ quotation engines are often part of older mainframe systems, embedded in processes that were designed to be used by real people, such as sales and customer service staff. The insurers’ IT teams are often forced to carve out the quotation processing element of their systems as standalone functions and repackage it so it can respond to automated requests from comparison sites – or invest in more mainframe capacity.
We spoke to Mann about the problem.
If large UK insurers are running into these problems with their mainframes at the online sales portal, has this crippled their business or customer base?
Mann: In the UK, the general insurance market has become very competitive and price-sensitive, with ever-decreasing brand and company loyalty. Price is often the key factor in people’s buying decisions, and the comparison websites are having a major business impact on many providers — from the bigger insurers to smaller ‘niche’ players.
It is the bigger, older and more established companies that are most likely to be heavily dependent on mainframe-based systems. And it is these insurers who are at risk of having problems with processing automated requests from the comparison sites.
Why are the companies experiencing such a headache with the insurance quote requests? Big companies such as banks rely on mainframes precisely because they are effective at crunching huge numbers.
Mann: Mainframe hardware and operating systems are indeed a powerful, reliable platform for high volume number crunching. The problem is not the mainframe platform, but the fact that many insurance companies are making use of legacy mainframe application code which was originally designed to provide quotations to real people — such as sales and customer service staff – to pass on to customers.
These older applications have had to be adapted to respond to the automated requests coming in from comparison sites, but are struggling to handle the greater workloads which they were never originally intended to accommodate.
Part of the challenge is that this new approach to ‘selling’ insurance means the insurance companies have to perform far more quotations for every policy sold. For example, in the past a company might have performed 3-5 quotations for every new auto insurance policy sold. Now, through the comparison websites, this has risen to 30-50 automated requests – around a tenfold increase. This puts pressure on the overall system’s ability to handle transaction loads and rapidly exposes any performance problems in the legacy quotation application code itself.
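The arithmetic above can be turned into a back-of-the-envelope load estimate. The per-sale figures are the illustrative ranges from the text; the daily sales volume is a hypothetical number chosen for the example.

```python
# Back-of-the-envelope estimate of how quote volume scales when sales move
# from human agents to comparison sites. Per-sale figures are the
# illustrative ranges quoted in the text; sales volume is hypothetical.
quotes_per_sale_before = (3, 5)    # quotes per policy sold, agent-driven sales
quotes_per_sale_after = (30, 50)   # automated requests per policy sold

policies_sold_per_day = 1_000      # hypothetical daily sales volume

low = quotes_per_sale_after[0] * policies_sold_per_day
high = quotes_per_sale_after[1] * policies_sold_per_day
increase = quotes_per_sale_after[0] / quotes_per_sale_before[0]

print(f"Daily quote transactions: {low:,} to {high:,} (~{increase:.0f}x increase)")
```

Even at modest sales volumes, the quotation workload grows by an order of magnitude while revenue per quote falls, which is exactly where latent inefficiencies in legacy code start to hurt.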
How are insurance companies rectifying the situation? Are they looking at options such as the new zEnterprise from IBM or now steering away from mainframes for the future?
Mann: Most companies are not planning to move away from mainframes to rectify the situation; there is neither the time nor the inclination to consider and implement such a drastic solution. While the mainframe programs may be old, most companies are using the latest versions of IBM’s mainframe hardware (zEnterprise), which is as up-to-date and technologically advanced as any other hardware in the marketplace.
The normal response to a problem like this would be to reluctantly buy more processing power in the form of new mainframes or mainframe upgrades – in other words, to ‘throw hardware at the problem.’ While this might help in the short term, there is no guarantee, because it does not get to the heart of the problem. And of course hardware upgrades are expensive.
What’s Macro 4’s take on how to handle the performance management issue?
Mann: When most people talk about the performance of computer systems, they are thinking of the hardware and operating systems and how they perform when running application processes. But it is also very important to look at performance from the point of view of the applications themselves: where and how they are consuming computer resources. Companies like ours specialize in this application point of view, using application performance measurement tools and methodologies.
With this approach, the resources required to run a transaction, such as generating insurance quotes, can be profiled, and any areas of poor performance or high resource utilization (within the application down to line of code level) can be highlighted for further investigation and tuning. This process has proven very productive in reducing overall processing requirements, delivering better response times and allowing much higher transaction levels to be handled without the need for expensive hardware upgrades. It is the sensible alternative to the more general approach of throwing more hardware at the problem.
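As a concrete illustration of the kind of application-level profiling Mann describes, Python's standard-library cProfile module attributes time to individual functions within a transaction, pointing at hot spots worth tuning. The quote logic below is a hypothetical stand-in for a real rating engine, not Macro 4's tooling or an insurer's actual code.

```python
# Sketch of profiling a transaction at the application level, rather than
# watching hardware utilization. cProfile attributes time to individual
# functions, highlighting hot spots worth tuning. The quote functions are
# hypothetical stand-ins for a real rating engine.
import cProfile
import io
import pstats

def rate_lookup(driver_age: int) -> float:
    # Deliberately inefficient: rebuilds the rate table on every call,
    # the sort of hidden cost profiling is meant to surface.
    table = {age: 1000.0 / (age - 16) for age in range(17, 100)}
    return table[driver_age]

def generate_quote(driver_age: int) -> float:
    base = rate_lookup(driver_age)
    return round(base * 1.2, 2)  # apply a 20% margin

profiler = cProfile.Profile()
profiler.enable()
for _ in range(1_000):           # simulate a burst of automated requests
    generate_quote(35)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Run against real transaction code, a report like this shows that nearly all the time sits in the lookup helper, so caching the rate table once would cut per-quote cost far more cheaply than a hardware upgrade.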
Are you seeing this problem crop up in the U.S. market? Weigh in on this conversation in the comments or on Twitter @DataCenterTT.