The Y2K Problem

Publication
The Empire Club of Canada Addresses (Toronto, Canada), 8 May 1998, pp. 12-24
Speaker
Taylor, Anthony, Speaker
Media Type
Text
Item Type
Speeches
Description
Talking as a computer systems amateur. An explanation of how the Y2K problem arose. Examples of what might go wrong. Our growing dependence on embedded systems. Two extreme schools of thought about the impact of Y2K on society, and the speaker's response. Some predictions. Requirements to take care of the problem. The position of the insurance industry in this environment. Lawsuits related to the Y2K problem. Problems faced by the insurance industry of enormous magnitude in recent history related to widespread losses. How insurance works. The unique situation of Y2K for the insurance industry. Whether or not there is Y2K coverage already included within the insurance market's standard products. Actions taken by the insurance markets, or proposed to be taken, with a brief discussion of each: Full exclusion; Limited exclusion; Totally ignoring the subject; Offering specific millennium products. Hard decisions to be made over the next six months. The irony of the problem. One further warning from the speaker.
Date of Original
8 May 1998
Subject(s)
Language of Item
English
Copyright Statement
The speeches are free of charge but please note that the Empire Club of Canada retains copyright. Neither the speeches themselves nor any part of their content may be used for any purpose other than personal interest or research without the explicit permission of the Empire Club of Canada.

Views and Opinions Expressed Disclaimer: The views and opinions expressed by the speakers or panelists are those of the speakers or panelists and do not necessarily reflect or represent the official views and opinions, policy or position held by The Empire Club of Canada.
Contact
Empire Club of Canada
Email: info@empireclub.org
Agency street/mail address:

Fairmont Royal York Hotel
100 Front Street West, Floor H
Toronto, ON, M5J 1E3

Full Text
Anthony Taylor, Director of Wellington Underwriting plc, Lloyd's of London
THE Y2K PROBLEM
Chairman: George L. Cooke
President, The Empire Club of Canada

Head Table Guests

Ann Curran, Third Vice-President, The Empire Club of Canada and Partner, Lewis Companies Inc.; The Reverend Dr. A. Leonard Griffith, Honorary Assistant, St. Paul's, Bloor Street; Marty Venalainen, OAC Student, Humberside Collegiate Institute and President of Toronto Association of Student Councils for 1997-98; Ken Mead, Managing Director, Guy Carpenter & Company; Don Smith, President, Canadian Insurance Consultants; Audrey Burke, Director, Year 2000 Project, The Toronto Star; John Holding, Q.C., Senior Partner, Borden & Elliot; Gareth Seltzer, Past President, The Empire Club of Canada and Vice-President, Private Wealth Management, Guardian Capital Advisors; Robert Dechert, First Vice-President, The Empire Club of Canada and Partner, Gowling, Strathy & Henderson; and Bill Laidlaw, Second Vice-President, The Empire Club of Canada and a Director, Government Relations, Glaxo Wellcome Inc.

Introduction by George L. Cooke

The Year 2000 issue, or the Millennium Bug as it is called, derives from the inability of computers or devices driven by computer software programmes to recognise dates after December 31, 1999.

While the problem is a simple one to understand, it is enormous in scope and fixing it will be complicated and very expensive. Software programmes consist of millions of lines of code; the cost of rewriting one line of code is currently approximately US$1. The magnitude of this problem is made greater in that much of this work is "old world" work, not productive, and not exciting to many of our recently skilled and very creative graduates of computer science programmes.

The issue for all of us is not just our own software and its readiness, but also the software and readiness of anyone and everyone we do business with.

I am very pleased that Tony Taylor has agreed to address our Club. Tony is a senior reinsurance underwriter in the Lloyd's market. More importantly, for us today, from what I can make out, he is the person at Lloyd's who best understands the broad business issues around the Year 2000.

Tony started his insurance career in 1963 and joined Lloyd's in 1977. In 1983 he founded the syndicate A. Taylor & Others, which has become one of the top 10 syndicates in Lloyd's, in terms of both size and financial performance. In November 1997 he assumed the additional responsibility of Underwriting Director for the Wellington Group. Wellington is one of the largest businesses operating in Lloyd's and represents an underwriting capacity of over $1.2 billion.

Outside work Tony is well known for his taste in fast horses and good wine. More amazing is his remarkable ability to play golf badly and yet profitably.

Mr. Taylor, with our sincere thanks for being here today, the microphone is yours.

Anthony Taylor

Good afternoon ladies and gentlemen.

Thank you George for your kind words and for asking me to talk today in front of such a large audience on a subject which I, along with 99.99 per cent of the world's population, knew absolutely nothing about 12 months ago.

Before getting into my presentation I would just like to explain that most of my comments will relate to examples from the U.S. rather than Canada where I am afraid there is a lack of published data.

I would like to talk to you today as a computer systems amateur. In fact most of the people in my company would regard me as a veritable luddite on the subject. My interest is purely mercenary in that as an underwriting member of Lloyd's, with unlimited liability, I am exposing my personal wealth should Y2K rebound upon the insurance community. As you are well aware personal money is significantly more important than corporate money. So my address today will be at a practical layman level and I am sure you will not be tested by my technical prowess.

Although the vast majority of you here today are no doubt fully up to speed with why we are facing this phenomenon, for the sake of those few who are not quite so advanced I would like to explain how the Y2K problem has arisen. In the 1960s and early 70s, when computer systems were advancing to the extent that they started to replace existing manual systems, it was apparent that computer memory and processing capacity were very limited and expensive, and thus any shorthand method of representing data was advantageous. Recording of calendar dates in almost all businesses is an important and repetitive function, and expressing the 8th May 1960 as 08.05.60 rather than 08.05.1960 saved two digits in recording, making available extra space for other data. I am assured it was appreciated that a problem would arise when the Year 2000 was reached, but that was 30 or 40 years away. Computer technology was developing at an accelerating rate, so by the time you got near to the Year 2000 there would not be a problem because everyone would have switched over to super-duper advanced systems. So clearly, the economic decision, consciously made, was to save time and money by shaving two digits off the date code.
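
To illustrate the saving, here is a minimal sketch in C; the record layout is purely illustrative and not drawn from any real system of the period:

```c
#include <stdio.h>

/* Illustrative only: a fixed-width record with a six-character DDMMYY
 * date against the same record with an eight-character DDMMYYYY date. */
struct record_yy   { char date[6]; char payload[18]; };
struct record_yyyy { char date[8]; char payload[18]; };

int main(void) {
    printf("two-digit year record:  %zu bytes\n", sizeof(struct record_yy));   /* 24 */
    printf("four-digit year record: %zu bytes\n", sizeof(struct record_yyyy)); /* 26 */
    /* Two bytes per date, multiplied by millions of records on
     * 1960s-era storage, was a saving worth taking at the time. */
    return 0;
}
```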

I suspect almost all non-technology-minded management since the 1970s were not made aware of this little wrinkle. When faced with the need to upgrade their systems' capacity, in the interests of economy they would often prefer to adapt an existing programme rather than replace the software with a completely new system, leaving the core criteria unaltered--a so-called legacy system. The longer they left upgrading the system as a whole, the more difficult it became to move away from their tried and tested facilities.

Now by using only two digits to indicate the year you can see a problem arises when you reach the millennium. The system cannot recognise the three-digit number which would be a natural change from 99 to 100. The Year 2000, which will show up as 00, can only be interpreted by the system as part of the existing sequence of numbers and therefore read as 1900, or possibly the date that the programme was first written, say 1970, or possibly any random date in the century. Thus, any transaction process which incorporates a date after 31st December 1999 will be incorrectly interpreted. A real-life example in the U.K. related to a computer-controlled warehouse where, in the normal course of events, stock which has reached its sell-by date is automatically rejected and sent for destruction. In this specific case new goods were delivered to the warehouse registered with an end date in the Year 2000 and thus notated as 00. Those goods were subsequently rejected as being out-of-date because the computer system recognised the end date as having expired. It does not take much imagination to realise the potential problems that can occur in financial service companies in the recording of account balances and interest earnings in these circumstances.
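
The misinterpretation is easy to reproduce. A minimal C sketch, with purely illustrative names rather than code from any real system, shows how an expiry check built on two-digit years fails exactly as in the warehouse example:

```c
#include <stdio.h>

/* Illustrative two-digit-year expiry check, as many legacy systems
 * performed it: both years are silently assumed to fall in 1900-1999. */
static int is_expired(int sellby_yy, int today_yy) {
    return sellby_yy < today_yy;
}

int main(void) {
    /* Goods with a sell-by date in 2000 are stored as year 00. Checked
     * against (19)99, the system decides the stock expired 99 years
     * ago and sends it for destruction. */
    printf("sell-by 00 vs today 99 -> expired? %d\n", is_expired(0, 99)); /* 1 */
    return 0;
}
```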

To make life more complicated it is not just 31st December 1999 which will raise problems; there are other problem dates, for instance the 29th February 2000. In the normal course of events a century year is not a leap year; however, if the year is divisible by 400, as 2000 is, then it is a leap year. Some computer programmes may well not recognise 2000 as being a leap year. An example of the problem this may cause can be seen in a situation which occurred in New Zealand in 1996. An aluminium smelting factory had a computer system which failed to recognise the 366th day of the year, and thus all processing systems automatically cut out at midnight on 30th December. Unfortunately, smelting was in progress at the time and the shutdown of systems caused the molten aluminium to solidify, thus causing terminal damage to the smelting pot.
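
The full Gregorian rule, and the truncated version that loses 29th February 2000, fit in a few lines of C; this is an illustrative sketch, not code recovered from any affected system:

```c
#include <stdio.h>

/* Full Gregorian rule: every fourth year is a leap year, except
 * century years, except century years divisible by 400. */
static int is_leap(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* Truncated rule found in some older programmes, which wrongly
 * treats 2000 as a common year of 365 days. */
static int is_leap_truncated(int year) {
    return year % 4 == 0 && year % 100 != 0;
}

int main(void) {
    printf("2000: full rule=%d truncated rule=%d\n",
           is_leap(2000), is_leap_truncated(2000)); /* 1 vs 0 */
    return 0;
}
```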

My comments so far relate to mainframe computer systems and it is in this area the world is concentrating to rectify non-compliant systems. However, as our technological age has developed we have become increasingly dependent upon embedded systems, that is software wrapped inside hardware devices, which are used to control the operation of machinery and equipment, otherwise known as microprocessors or microchips. Not only are these chips embedded in manufacturing systems but also they are endemic in office and consumer products and almost any item in our society which performs a specific electronic or movable function. These microchips operate on various bases. Some have no date control, others have operational cycles such as "do X every 100 days," whilst others operate on a continuous clock basis like regular computer systems and only those manufactured in recent years have a four-digit year designator.

Some chips are programmed to operate until a fixed period has elapsed after the date of installation. At that time the chip's internal clock will revert back to the original start date. An example is the Global Positioning System, GPS, which is used by aircraft and ships for accurate position finding. Some 132 days before the turn of the century, at midnight on 21/22 August 1999, the GPS date will revert back to 6 January 1980, as the maximum time period will have elapsed. However do not worry, because the U.S. Department of Defence asserts that the space and ground control segments will be compliant by mid-1999.
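
The rollover follows from the fact that the GPS navigation message allots only ten bits to its week counter, so the count wraps after 1,024 weeks, roughly 19.6 years after the 6 January 1980 start. A minimal sketch of the arithmetic, illustrative only:

```c
#include <stdio.h>

/* The broadcast GPS week number is a 10-bit field, so it wraps from
 * week 1023 back to week 0. Week 0 began on 6 January 1980; week
 * 1024 arrives at midnight on 21/22 August 1999. */
#define GPS_WEEK_MODULUS (1 << 10) /* 1024 */

static int broadcast_week(int true_week) {
    return true_week % GPS_WEEK_MODULUS;
}

int main(void) {
    /* A receiver that ignores the wrap maps week 1024 back to week 0,
     * i.e. back to 6 January 1980. */
    printf("true week 1023 -> broadcast %d\n", broadcast_week(1023));
    printf("true week 1024 -> broadcast %d\n", broadcast_week(1024));
    return 0;
}
```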

The world being what it is, there are of course two extreme schools of thought about the impact of Y2K on society. On the one hand there are the gloom-and-doom merchants and on the other there are those who say this is just hype.

An article by a software engineering professor of London University recently partly supported the latter view when saying: "We are being assailed by a barrage of dire warnings about this bug. Banks will fail, people will die in hospitals, household electronic goods will become unusable, cars will not start, planes will drop out of the sky. Most of these stories are the result of irresponsible scaremongering. Worse, they are scares deliberately distributed by consulting firms and others with an interest in millennial angst. Most of the extreme cases are urban myths." However, he then went on to say: "Some organisations face problems because they failed to invest in software engineering over a long period. They are seeing their chickens come home to roost. For these organisations, which treat the millennium as a malevolent trick played by a cruel deity, I feel very little sympathy." I fear governmental bodies all round the world, be they national, state, provincial or municipal and always short of long-term investment may fall under this category.

However, if you believe there really is no problem, ask yourself the following question. Why are companies such as Citicorp and General Motors, organisations which are highly sophisticated in the state of their information systems, budgeting $600 million and $500 million respectively for compliance costs? They don't spend that sort of money on a 'maybe' or chasing ghosts. That money is being spent to ensure that their information systems are as good in 2000 as they are in 1999. It will not necessarily provide any enhancement in their capabilities; if you like, it's dead money, but an awful lot of it.

They have to be sure that come the Year 2000 they will have unimpaired operating systems and not be at a serious commercial disadvantage to their competitors with all the implications that would have on their reputation and share price.

In the 1970s oil was the scarce commodity; those countries who produced it were in good economic shape; those who had to import it paid through the nose and were in trouble. In the year 2000 we can view information as being the scarce commodity. If you can access information you will be able to walk over those who failed to address the problem and the only way to address the problem is with labour, money and lots of time.

In 1996 a U.S. life company identified 150 million lines of data that needed vetting to correct date codes. The cost was $1.10 per line, i.e. $165 million. This work is labour-intensive and, with the scarcity cost of trained I.T. labour, it is estimated the price may rise to as much as $6.70 per line by 2000.

As of just over one year ago 80 per cent of Fortune 500 companies in the U.S. had yet to commence compliance efforts, and as of today only 17 per cent have completed their projects. A recent Canadian survey, I believe, identified that over 50 per cent of companies were taking no corrective action.

Moving to gloom-and-doom predictions, Gartner Group estimate the global cost of compliance at $600 billion. Merrill Lynch are at $1 trillion including legal defence costs after the event. To put this in perspective, the cost of the Vietnam War has been estimated at $500 billion for the U.S.

We have all seen the stock markets in the western world climbing to unheard of levels in the last few years. What will happen to the highly priced stock market if confidence is shaken in a company's abilities to maintain growth projections?

There is a good chance that there will be a failure in computer systems of various U.S. government departments. Each week, the federal government sends out $32 billion in social security, payroll cheques and other housekeeping payments. What happens to the economy if it is deprived of this income injection if only for a few weeks? Already there are estimates that economic growth in 1999 will slow down by 0.8 per cent because of the diversion of investment into fixing Y2K problems.

So now let's look at the position of the insurance industry in this environment. Why should insurance, designed to respond to accidental losses, be concerned about losses resulting from a 40-year-old, consciously made, economic decision to save time and money by shaving off two digits from a computer date code? Well, I hold in my hand a copy of an article published by a law firm, and it is not unique. This paper is headed: "The Y2K Time Bomb--Run for Coverage," with an accompanying quote: "Up to three-quarters of companies would have legitimate grounds for claiming compensation for Year 2000-related losses under their all-risks policies." This paper has been published two years before the predicted great event and gives a child's guide on how to pursue claims under various policy forms. Despite the lack of any factor demonstrating Y2K to be an insurable "accident" of any kind, it would appear some interested parties are lining up the insurance industry to pay for some or all of the problem.

In California legislation was recently introduced and rejected, although I fear not for long, to limit the exposure of the computer hardware and software companies to lawsuits related to Y2K. Supporting documentation included the following: "Proponents of the measure, primarily Silicon Valley companies and associations, say it will motivate computer companies to fix problems while also protecting California's technology-driven economy from a rash of crippling lawsuits." That all sounds very reasonable to me! Given the clout Silicon Valley companies wield in California, it is hardly surprising.

Meanwhile in various U.S. states there are legislative proposals to give state government additional immunity from lawsuits, presumably because they stand no chance of being compliant by the due date. No public outcry is to be expected, since taxpayers will ultimately be forced to pay in the absence of governmental immunity. So as two deep pockets run for another form of cover it looks as though insurers may be targeted to recompense Y2K sufferers. One should not expect any public outcry on behalf of insurers, until it becomes apparent that, like the taxpayers who ultimately bear the government's burden, it will inevitably fall to policyholders to bear the burden in terms of higher premiums if insurers are forced into the Y2K arena. Targeting Y2K losses against insurers does however pose one major problem for would-be gold diggers--the sum total of the capital and free reserves of the non-life insurance and reinsurance world markets is less than the lower end of the Y2K global cost which I have previously quoted. Or put simply we don't have enough money.

The insurance industry has faced problems of enormous magnitude in recent history related to widespread losses from generic causes, e.g. asbestos and pollution. These were losses which became apparent to insurers after the event, sometimes 30 or 40 years after the asbestos fibres were released into the air or the hazardous waste seeped into the ground water. Inevitably insurers were called on to pay for these losses. The effect on our industry has been substantial, including many liquidations and just a tad of impact on Lloyd's. Aggressive lawyers seeking a "deep pocket" to bear the enormous projected Y2K costs are viewing Y2K as a potentially similar generic-cause insurable event; however, there is very little about Y2K which is insurable.

Insurance is a promise to indemnify a policyholder from defined consequences of a fortuitous event, that is an unexpected accidental occurrence, in return for the payment of a premium. The manner in which insurers are able to provide adequate security to the policyholder is to create a capital base by "spreading the risk" of losses among several policyholders. Historical data is compiled so that losses can be forecast as accurately as possible. The losses of the few are then borne by the premium pool of the many. Thus, adequate security for losses is provided.

In the case of Y2K we have a unique situation where not only do we know how an event is going to happen but also, within a short time scale, when. This differs somewhat from basic insurance where numerous policyholders fund an insurance pool to cover the losses of a few, which may or may not occur during a given annual period, caused by one of the various perils insured.

The Y2K risk I believe will be so pervasive that one cannot depend upon spread-risk criteria to balance our books. Given the magnitude of the problem, the fact that it will occur roughly during the same time period and is avoidable largely by the insured's own initiative (if it invests the time and money necessary), it is easy to see how difficult it would be to attempt to create an adequate "pool of insurance" to provide security through traditional insurance. All of these factors negate underwriting this risk on a sound basis. Add to this an absolute lack of historic data on which to build a valid insurance price and you can see how difficult our position becomes. It would be deceiving our clients for us to offer a product which we cannot rate and, more importantly, for which we do not have enough capital to support our commitments.

As we have already seen, arguments are being made that Y2K coverage is already included within the insurance market's standard products. I do not believe that to be the case since we are dealing here with a totally new phenomenon which was not previously envisaged by insurer or insured, and certainly no premium has been built into the insurance price to cover the risk. All the perils we cover under insurance products conform to the test of fortuity. With Y2K, as I have said before, we know it will happen, when it will happen and how it will happen. No one would insure a burning house. Why should anyone insure Y2K?

So having listened to my philosophical ramblings let's look at what action the insurance markets have taken or are proposing to take. These actions fall under four broad headings:

a) Full exclusion.

b) Limited exclusion.

c) Totally ignoring the subject.

d) Offering specific millennium products.

Dealing first with the full exclusion. This path has been taken with respect to certain classes of business and high-risk occupations covered by standard products such as commercial fire or errors and omissions. Here the market believes the risk is so great that it cannot be underwritten under existing products, be it a failure of the computer system or a loss from other perils consequent upon that failure; for example an explosion in a petrochemical plant caused by a breakdown of computer controls which results in the release of explosive gases.

Second--the limited exclusion. Here losses are excluded as a result of the failure of a computer system but cover is given for losses caused from other defined perils which have been exacerbated by the failure of a computer system; for example, a fire loss at a factory which is exacerbated by a Y2K-related failure of the computer controls of the sprinkler system. This limited coverage indemnifies policyholders for losses caused by a standard peril, i.e., in this case fire, but it would exclude repair costs to and remedial costs of the failed electronic system and any business interruption losses directly related thereto. The Insurance Services Office in the U.S. (ISO) and the Association of British Insurers in the U.K. (ABI) have both prepared wordings of this type, as have some of our Lloyd's Associations and some Canadian insurers.

The third route, doing nothing, stems from the fear that taking affirmative action to exclude implies existing covers already include the Y2K peril. Alternatively, it may be that the insurer is more scared of the volume of business it will lose if it tries to take corrective action--the more likely reason.

Finally, we have a limited number of specific products selling millennium cover in the professional indemnity area. These come in two forms. The first is to charge a very high premium, possibly up to 80 per cent of the policy limit, but offer to return a large proportion thereof in the event of there being no claim. Strangely this has not been very attractive to clients. The other route is using more traditional principles but severely limiting the scope of the coverage given and requiring an assessment of each insured by a professional auditor to ensure Y2K compliance. The premium here is not as high as in the first example but certainly well above that which would normally be paid for standard coverage and the jury is out as to whether it will prove popular.

Time is running out not just for industry in general as they realise the implications of the Y2K phenomenon, but also for the insurance industry in deciding what underwriting approach to adopt. There will be some very hard decisions to be made during the course of the next six months balancing risk appetite against the potential risk of losing large volumes of business because of competitive issues. Each insurer I am afraid has to make up his own mind.

How ironic it is that the economic decision to delete two digits from the date code in the computer programmes, which was taken 40 years ago, in order to save time and money has now rebounded upon the world with unknown consequences. There may not now be enough time and money to fix it. Much as we secretly believe there is a silver bullet which will solve all our problems, I regret everything I have read to date suggests that this is an impossibility.

Thank you for listening to me and perhaps I can leave you with one further warning. There is a distinct possibility that if we are still using today's Windows, DOS and Unix-based systems in 40 years' time, there will be another date problem. At 3:14:08 a.m. on 19 January 2038, the number of seconds since 1 January 1970 will exceed 2,147,483,647. This is the largest number that can be stored in a signed 32-bit integer, commonly used for storing time information in the aforementioned systems. Of course, we won't be using them then... will we?
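
The wrap-around can be shown directly. This is a minimal C sketch using an explicit 32-bit counter, not any particular operating system's implementation:

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* 2,147,483,647 seconds after 1 January 1970: 03:14:07 on
     * 19 January 2038, the last second a signed 32-bit counter holds. */
    int32_t t = INT32_MAX;
    printf("last representable second: %" PRId32 "\n", t);

    /* One second later the counter wraps. The cast through uint32_t
     * makes the wrap explicit rather than relying on signed overflow. */
    t = (int32_t)((uint32_t)t + 1u);
    printf("one second later:          %" PRId32 "\n", t); /* -2,147,483,648 */
    /* A system reading that value as seconds-since-1970 lands back in
     * December 1901. */
    return 0;
}
```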

The appreciation of the meeting was expressed by Ann Curran, Third Vice-President, The Empire Club of Canada and Partner, Lewis Companies Inc.
