An urgent problem on the coast

The coming hurricane season poses exceptional risk for Texas, mostly to persons and businesses insured by the Texas Windstorm Insurance Association but also to those who will end up picking up the pieces after a major storm.  The most recent data shows that going into the 2013 hurricane season, which is less than three months away, the Texas Windstorm Insurance Association has about $180 million in cash available from which to pay claims, access to $1 billion through the issuance of Class 2 securities, and access to $500 million through Class 3 securities.  There is some possibility of additional funds if TWIA can market its Class 1 securities or obtain another “bond anticipation note” as it did in 2012. This would give it another $500 million.  And, if TWIA can afford to purchase reinsurance, it might — just might — be able to squeeze out $1 billion more on top of the stack.  Thus, the best case is that TWIA’s stack will be $3.18 billion.  A more realistic assessment is that TWIA’s stack with which to pay claims will be $2.68 billion. And a pessimistic assessment is that the stack will be a scant $1.68 billion, perhaps even less if the catastrophe fund keeps bleeding from Ike claims or the Class 2 bonds prove difficult to market.  The major bill pending in the Texas legislature, S.B. 18, has many virtues but in its present form does nothing to change these computations for most of the 2013 hurricane season.

The problem is that the risk of losses greater than this amount in 2013 is considerable. No one knows the exact probabilities, but based on my modeling, which is in turn based on TWIA’s commissioned studies from experts, the probability of losses to TWIA greater than its funding stack ranges from about 2% on the most optimistic views about the funding stack to 4% on the more pessimistic views.  That is about the same as the risk of death in the coming year for your average 67- to 75-year-old.  It’s roughly the same as the probability of flipping five heads in a row.

It could be even worse.  David Crump noted in response to an earlier version of this post that we may not even have Class 2 securities because, as a result of the 2011 legislation (section 2210.6136), if the Class 1 securities don’t sell, the first $500 million of Class 2 securities appear to rely on the same funding method as the failed Class 1 securities.  (Who thought of that?)  Only after that do we get to the more reliable method of surcharges on all coastal property insurance and an assessment on insurers.  I certainly hope David is wrong in his estimation of the Class 2 securities but, on mature reflection, he has a point. So, we need to add an additional category of gloom: “Crump gloomy.”  And, if he’s right, there is about an 8% chance that the top of the TWIA stack will be lower than the amount of the claims. That is very scary indeed.

If the losses are greater than the funding stack, TWIA policyholders are likely not to be paid in full, and certainly not in a timely way. If, for example, an average Category 4 storm were to hit Corpus Christi, the damages would be about $4 billion.  (E-mail of March 14, 2013, from Jennifer Armstrong of TWIA to David Crump.)  Policyholders in that part of the coast would thus be paid between 17 cents and 80 cents on the dollar, leaving many unable to rebuild well. If a 3% deductible is going to lead to “blue roofs,” as was suggested by opponents of such an idea at the hearing of the Senate Business and Commerce Committee earlier this week (because policyholders won’t be able to find the money to rebuild), consider what an effective 20%–83% deductible is going to do.

Even losses in 2013 smaller than the full stack are going to cause trouble for TWIA.  A smaller storm in 2013, say, a half-Ike, could wipe out the catastrophe reserve fund and the Class 2 securities.  This means there would be just a very, very small stack to protect TWIA for 2014 and beyond.  The only good news is that legislation pending in the Texas legislature does try to address those later hurricane seasons.

TWIA stacks for 2013

There are several ways the situation could be improved for the coming 2013 hurricane season.  First, TWIA could attempt to make another assessment under the pre-2009 law to cover Ike losses that have continued to drain the catastrophe reserve fund.  (Clearly TWIA does not have authority to make such assessments for post-2009 storms.) It appears, at least with the benefit of hindsight, that the $430 million assessment that followed Ike in 2008 was inadequate to cover TWIA’s responsibility for Ike once the litigation dust settled. But whether TWIA has the legal authority to do this — and don’t expect the insurance industry to take any such supplemental assessment sitting down — is still not clear. And I would not be surprised to see any litigation on this topic take considerably longer than the hurricane season to resolve.

Second, the legislature could develop an alternate funding source for the Class 1 bonds for just the coming season.  Indeed, not that I would ever suggest such a thing, but given the somewhat desperate situation that exists, the insurance industry might acquiesce to this burden in exchange for relief from some of the responsibility it is supposed to bear under S.B. 18 for hurricane risk in 2014 and beyond. The insurance industry could, for example, bear assessment risk or partial assessment risk for the Class 1 securities that now appear unmarketable because investors understandably doubt that TWIA policyholders will stick around and pay the huge surcharges that would be required to pay off the bonds.

Third, the legislature could actually raise explicit taxes [laughter] to pay for reinsurance that might reduce the risk.  Or maybe it could use some of the Texas budget surplus to pay.  While this will rightly gall Texas taxpayers, particularly once the reinsurers smell blood in the water and charge accordingly, it may still be a prettier picture than picking up the pieces after TWIA goes insolvent.

Fourth, and this may be what coastal residents are counting on, is simply to wait and see, and to try to bail out TWIA policyholders after the fact when a big hurricane strikes.  This will be galling to all.  It will be galling to those on the coast because the fight to get such relief will be slow and tough.  It will be galling to those away from the coast because the taxes that will need to be imposed, directly or indirectly, to pay for the losses will be high. Those taxes will be the engineered result of problematic legislation passed in 2009 and the steadfast refusal of some on the coast to accept financial responsibility for the true risk of hurricanes there. There is, of course, Uncle Sam, but somehow I would not count on Washington to be as generous following a 2013 Hurricane Chantal that devastates red Texas as it was to residents of the bluer Northeast following Superstorm Sandy.  Besides, with the sequestration and all, Washington does not appear eager to spend money on much of anything.

This leaves Texans with prayer as the final alternative. If, however, as many suspect, God helps those who help themselves, it might be a good investment to deal in a more secular way, right now, with the 2013 risk.

Note: My thanks to David Crump for (1) making the public records request that generated the most recent information on this point; (2) sharing it with me; and (3) pointing out that my original post may have actually been too optimistic.

Senator Carona calls for insurers to be more constructive on windstorm legislation

Far more important, frankly, than my testimony yesterday before the Texas Senate Business & Commerce Committee was the colloquy between influential members of that committee and representatives of the insurance industry, notably Beamon Floyd, director of the Texas Coalition for Affordable Insurance Solutions (big Texas insurers such as Allstate, State Farm, Farmers, USAA), and Jay Thompson of the Association of Fire and Casualty Companies of Texas.  You can watch it all here from 1:49 to 2:00 and 2:22 to 2:25 on the video of the hearing.  John Carona (R-Dallas), chair of the committee, castigated the insurance industry for acting in bad faith, dragging its heels and apparently stonewalling on the issue of TWIA reform.  While such criticism might be expected from members along the coast or from those predisposed to criticize whatever the insurance industry does, this critique came directly from Senator Carona, a man who describes himself as a friend of the insurance industry, and, indirectly, from Governor Rick Perry, likewise seldom confused with an insurance basher.

State Senator John Carona

The problem, basically, is that the insurance industry is resisting a bill that would likely compel it to shoulder more expense for risk along the Texas coast than it does now, even if it can pass many of those expenses on, but it has not yet come forward at this stage of the legislative process with support for specific solutions to the short- and long-term problems facing TWIA and its insureds. Nor has the industry publicly (or otherwise, to my knowledge) presented facts showing the extent of the burden that would be created by the assigned risk plan embodied in SB 18. This silence places legislators such as Senator Carona in a difficult position. They do not wish to create crushing burdens on the insurance industry that will make insurance in Texas yet more expensive or difficult to obtain, particularly in their districts, but they are also not willing to create a situation in which a significant storm forces an insurer for which they bear responsibility to undergo a difficult forced recapitalization or, worse, leaves it unable to pay claims promptly and fully. My sense is that Senator Carona and perhaps others felt much the way I do when confronted with a student, even one who has done well in the past, who is long on generalized rhetoric but doesn’t show that they have actually done the needed homework.

Here’s what I bet Senator Carona and others would like to see. With respect to all of these numbers, it would be best if they came from certified actuaries using contemporary storm models, and it would be helpful if the figures were provided both in absolute dollars and as a percentage of industry premium revenue.  Some of these numbers may well be difficult to develop, but if figures could be brought forth even on an order-of-magnitude basis, it might separate real threats to the Texas insurance industry from reflexive rhetoric.

Numbers Relevant to SB 18

(a) Evidence as to the expected costs of the 2210.0561 potential for assessment; this figure might be either a measure of expected losses or an explanation of why this assessment responsibility needs to be reinsured along with the costs thereof.

(b) Evidence as to the costs of the 2210.0561 assessment to help TWIA buy up to $2 billion in reinsurance. My wild guess is that we are looking at $150 million per year in the immediate future, ramping down substantially as the take-out in the assigned risk plan decreases the expected amount reinsurers would pay.

(c) Evidence as to what it will cost to set up and maintain a clearinghouse that will migrate coastal residents, and perhaps others, either into a private take-out policy or into the assigned risk pool.  Perhaps I am naive, but I believe the clearinghouse could be operated for less than $10 million per year.

(d) Evidence as to what the shortfall between “market rates” and transition premiums will cost insurers AFTER premium tax credits and recoupment are taken into account.

(e) At least an order-of-magnitude guess as to what it will cost, net of premiums, to write the riskiest policies, those for which SB 18 caps the premium at 25% above market. Such an estimate will require at least three figures: (1) an estimate of how many policies there will be in this category; (2) an estimate of actual expected losses among the purchasers; and, importantly, (3) an estimate of the incremental cost of capital that insurers need to stockpile in order to bear this correlated risk.

(f) An estimate of the cost of servicing TWIA policyholders even for windstorm claims pursuant to section 2210.5725 of the bill.

I also suspect Senator Carona and others in the legislature would like to see at least a bargaining position from the insurance industry on how much of these costs should be transferred either to TWIA policyholders or more directly to statewide insureds.

Numbers Relevant to An Alternative Plan

For any alternative plan submitted by the insurance industry, we ought to see numbers on the following:

(a) What rates will be paid for risks currently covered by TWIA policies?

(b) How will it address the 2013 hurricane season? (The Carona bill is weak here.)

(c) How does it get the stack of protection up to an amount sufficient to cover at least a 1-in-100-year storm, preferably a 1-in-500-year storm?

(d) Who bears the financial burden of such a stack?

So, I know this is a lot of work and there isn’t much time in which to do it.  But my sense is that one outcome of yesterday’s hearing is going to be a greater sense of urgency on many sides, including among those who will try to scuttle the assigned risk alternative.

P.S. For those who would rather (or also) like to see my testimony, you can find it at 1:36 to 1:44 of the hearing.

Hinojosa/Hunter file bills that buttress TWIA by forcing non-coastal property holders to pay for coastal risk

State Senator Juan “Chuy” Hinojosa (D-McAllen) and State Representative Todd Hunter (R-Corpus Christi) have filed companion bills in the State Senate (SB 1089) and State House (HB 2352) that would buttress the resources available to the Texas Windstorm Insurance Association (TWIA) to pay claims in the event of a tropical cyclone hitting the Texas coast but would do so by placing most of the burden either directly or indirectly on policyholders living away from the Texas coast.  The bill, like the current system and as heralded in recommendations of the Coastal Windstorm Task Force, would rely primarily on post-event bonding as a way of financing catastrophic risk.  But, by impelling insurers statewide and coastal policyholders to increase the size of the catastrophe reserve that pays before any bonds are issued, the bill would make it less likely that this  system of “insurance in reverse” would need to be used. The new system would come into effect in September of 2013.  It would apparently leave the current system in place for much of this hurricane season.

In a nutshell, here’s how the Hinojosa/Hunter plan works.  TWIA builds up its catastrophe reserve trust fund (a/k/a CRTF, a/k/a “cat fund”) so that it equals 1.5% of its “direct exposure” for the prior year (Section 2210.456). Since TWIA lists its current direct exposure at $72 billion, this means the catastrophe reserve fund is supposed to grow to at least $1.08 billion. Catrisk’s earlier modeling suggests that such a catastrophe reserve fund would be able to cover something like a 1-in-20-year storm.

But just because TWIA’s catastrophe reserve fund could cover a 1-in-20-year storm does not mean that TWIA’s policyholders would be paying to cover that risk.  That’s because under the Hinojosa/Hunter plan, the catastrophe fund is financed mostly with other money.  To get from the paltry $180 million that now stands in the fund to $1.08 billion, the plan would assess property insurers statewide, regardless of the extent to which they choose to do business on the Texas coast, 1/10 of the desired amount of the catastrophe reserve fund each year (Section 2210.456(c): 0.15% of direct exposure).  As it stands, this would amount to $108 million per year for many years into the future. These are real assessments, not compelled loans from the insurance industry.  The assessments are not creditable against premium taxes otherwise owed and are not supposed to be passed on — at least directly — through a premium surcharge on policyholders.  It would demean the insurance industry, however, to suggest that it will not be clever enough to find a way to pass much of this cost on to policyholders.

Coastal insureds — including non-TWIA homeowner insureds and coastal residents with automobile insurance or other forms of property insurance — also pay to protect TWIA policyholders from risk. Under the Hinojosa/Hunter plan, a 3.9% premium surcharge is imposed on all such policies. How much would this surcharge bring in?  Unclear. I don’t have the data yet, particularly on automobile policies along the coast.  But we do know how much TWIA policyholders would pay on their TWIA policies to increase the protection available to them: about $17 million (0.039 × $446 million in premiums).  And since TWIA reports that it has 62% of the coastal homeowner wind market (measured by exposure, not premiums), one can approximate that non-TWIA homeowner insureds would pay roughly $11 million.  Thus, TWIA policyholders would, at most, pay about 13% of the amount it will take to strengthen the catastrophe reserve fund that would be exclusively available to those policyholders to pay claims in the event of a tropical cyclone. If, as I suspect, non-wind homeowner premiums, automobile premiums and other property insurance premiums along the coast are at least as large as TWIA premiums, the surcharge on TWIA policies will, at least for a few years, in fact cover perhaps just 7% of the actual cost of this portion of the risk posed by such policies.
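For readers who want to check the arithmetic, here is a back-of-the-envelope version in Mathematica, using only the figures quoted above.  Treating non-TWIA coastal homeowner premiums as proportional to exposure share is my simplifying assumption, not TWIA data.

(* Back-of-the-envelope CRTF funding shares, using the figures above:
   $72B direct exposure, $446M in TWIA premiums, and a 62% TWIA share
   of the coastal homeowner wind market. Premiums proportional to
   exposure is an assumption made purely for illustration. *)
crtfTarget = 0.015*72*^9              (* target fund: $1.08 billion *)
assessment = 0.0015*72*^9             (* statewide insurer assessment: ~$108M/yr *)
twiaSurcharge = 0.039*446*^6          (* surcharge on TWIA policies: ~$17M/yr *)
nonTwiaShare = twiaSurcharge*(1 - 0.62)/0.62   (* ~$11M/yr *)
twiaSurcharge/(assessment + twiaSurcharge + nonTwiaShare)  (* ~0.13 *)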

And even this last figure of somewhere between 7 and 13% potentially overstates the degree to which TWIA policies will be funding the risk they pose.  This is because under section 2210.083 of the Hinojosa/Hunter bill, when the cat fund needs to be restocked following a disaster that wipes it out, insurers doing business anywhere in the state must promptly pay, in addition to the regular shortfall assessment and in addition to whatever else they may be paying their own policyholders, half the amount of any public securities (up to $1 billion) issued to pay TWIA policy losses and, as I read section 2210.084, the entirety (up to $900 million) of additional public securities issued to pay TWIA losses.  Thus, following a serious hurricane, even more of the money used to pay for future hurricane losses will be coming from sources other than TWIA policies. Of course, the Hinojosa/Hunter bill permits insurers to “reinsure” against these potential assessments (section 2210.088), but this just means that insurers will be paying cash for the risk imposed on them by the law rather than perhaps just making an accounting entry for contingent liabilities on their books.

Layering of Protections Under Hinojosa/Hunter Bill

The Hinojosa/Hunter bill provides for at least three heightened layers of protection in the event of a storm that pierces the catastrophe reserve fund.  Each of the layers is provided by bonds, issued after the disaster, by the Texas Public Finance Authority. The layers (Classes A, B and C) differ primarily in their amortization periods and in the source of money used to repay the debts. Up to the first $1 billion is to be provided by Class A securities with an amortization period of 10 years.  The money to repay this debt each year — probably about 1/8 of the amount borrowed — will come from TWIA itself.  If the full $1 billion were borrowed, this would likely amount to a charge of $125 million per year for 10 years, which in turn would increase existing TWIA premiums by roughly 25%. It is not clear whether the market would trust the ability of TWIA to actually obtain these funds, since some TWIA policyholders might be reluctant to renew with TWIA in the event such a hefty increase were imposed. The Texas Public Finance Authority has published grave doubts about the ability to market similar bonds authorized by the current law.

Class B bonds can be issued in an amount up to $900 million and likewise must be amortized in no more than 10 years.  The source of repayment, though, is different. Although TWIA premiums could in theory be used to repay this obligation — I rather suspect they will be tied up elsewhere — the vast bulk of the funding is likely to come from yet another surcharge: this one on all premiums on coastal property insurance, including non-TWIA wind insurance, conventional coastal homeowner insurance, automobile insurance, and other forms of property insurance. The surcharge won’t be another 25% because the base is bigger.  But since it will cost $110 million or more each year to amortize the debt, I would not be surprised to see an additional 5 to 7% surcharge.

If the catastrophe reserve fund indeed bulks up to $1.08 billion and the Class A bonds are indeed marketable, the Class B bonds should cover TWIA against the 1-in-50-year storm.  For storms bigger than that, the Hinojosa/Hunter bill provides for $2.75 billion in Class C bonds.  These have an amortization period of 14 years.  They are to be paid by a surcharge on all premiums on property insurance statewide.  The rate will be about 1/10 of the amount borrowed divided by a denominator whose value I would love to know: the amount of premiums on property insurance sold in this state. If you forced me to make an educated guess, however, I would guess that property insurance premiums in Texas are about $20 billion per year, which would put the needed surcharge at 1-2% per year for 14 years. Of course, if the amount borrowed were not the full $2.75 billion, the surcharge would be less.
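The “about 1/8” and “about 1/10” debt-service figures above are just ordinary annuity arithmetic.  A quick sketch (the 5% interest rate is my assumption for illustration; the bill specifies amortization periods, not rates):

(* Level annual payment needed to amortize principal p over n years
   at interest rate r. At an assumed 5% rate, a 10-year amortization
   costs ~13% of principal per year (roughly the "1/8" in the text)
   and a 14-year amortization ~10% (the "1/10"). *)
payment[p_, r_, n_] := p*r/(1 - (1 + r)^-n)
payment[1*^9, 0.05, 10]     (* Class A: ~$130M/yr on $1B *)
payment[900*^6, 0.05, 10]   (* Class B: ~$117M/yr on $900M *)
payment[2.75*^9, 0.05, 14]  (* Class C: ~$278M/yr on $2.75B *)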

There are two other sources of funds worth mentioning.  The Hinojosa/Hunter plan continues to permit TWIA to purchase reinsurance and imposes no price constraints on its doing so.  Such reinsurance is notoriously expensive and often difficult to obtain.  There is no explicit provision or encouragement for other forms of protection such as pre-event catastrophe bonds. There are also, in theory, Class D securities that provide an unlimited amount of protection to TWIA policyholders.  The problem: no source of funds is identified to pay back the bonds. Section 2210.639 simply mentions that these borrowings could be paid by TWIA premiums (yeah, right) or “money received from any source for the purpose of repaying Class D public securities.”  In other words, no one has a clue.

There is more in the Hunter and Hinojosa bills that Catrisk will try to address in the near future.  And there are some simulations we can run to get a better idea of the relative burdens borne throughout Texas under this bill. But this should provide an explanation of the basics.


Footnote: I bet that I am going to hear the double-dipping criticism of this post again.  The point of these critics is that TWIA policyholders also have conventional homeowner insurance and automobile insurance.  Thus, their burden is higher than I have reported because they get hit with a double or triple whammy.  There is some truth to this criticism.  My defenses are (a) I have tried to report data here as policy-based rather than policyholder-based, so the conclusions reached here should be accurate; and (b) I can’t find, and no one has volunteered, the data needed to make the needed computational adjustments; if I had them, I could and would do so. My suspicion is that, while a few numbers would change, the themes of the Hinojosa/Hunter bills would not.  They believe coastal risk should be socialized, and these bills very much reflect that philosophy.


The issues with heavy reliance on pre-event bonds

Pre-event bonds. They sound so good. And they may well be an improvement over reinsurance and other alternatives for raising money. But there is no free lunch, and it’s worth understanding some of the issues involved with relying on them. In short, pre-event bonds can work if TWIA stuffs enough money annually into the CRTF — and has the premium income and reduced expenses that permit it to do so. If TWIA lacks the will or money to keep stuffing the CRTF, however, pre-event bonds become a classic debt trap in which the principal balance grows until it becomes unmanageable. Let’s see the advantages and disadvantages of pre-event bonds by taking a look at the Crump-Norman plan for TWIA reform.

A key concept behind the Crump-Norman plan is for TWIA immediately to bulk up its catastrophe reserve trust fund (CRTF) to a far larger sum than it has today — $2 billion — and to keep its value at that amount or higher for the foreseeable future. That way, if a mid-sized tropical cyclone hits, TWIA does not need to resort to post-event bonds. It already has cash on hand. The problem, as the Zahn plan, the Crump-Norman plan and any other sensible plan would note, however, is that TWIA simply cannot snap its fingers today and bulk up its CRTF to $2 billion without asking somebody for a lot of money. Policyholders would probably have to face a 400% or 500% premium surcharge for a year in order to do so, and I can’t see the Texas legislature calling for that. But perhaps TWIA can prime the CRTF by borrowing the money from investors, promising them a reasonable rate of return (maybe 5%) and assuring them that TWIA will be able to use future premium income to repay the bonds. Each year, TWIA commits insofar as possible to stuff a certain amount of money from premium revenues — perhaps $120 million — into the CRTF, earn interest on the fund at a low rate (maybe 2%), pay the bondholders their 5% interest and amortize the bonds so that they could be paid off in, say, 20 years. If there are no major storms, the CRTF should grow and there is no need to borrow any more money. The strategy will have worked well, providing TWIA and its policyholders with security at a cost far lower than they would likely get through mechanisms such as reinsurance. If there are major storms, however, then the CRTF can shrink and TWIA can be forced to borrow more to pay off the earlier investors and restore the CRTF to the desired $2 billion level. The Outstanding Principal Balance on the bonds grows. And, of course, if there are enough storms, the Outstanding Principal Balance can continue to grow until it becomes essentially mathematically impossible for TWIA to service the debt out of premium income. And even before that point, investors are likely to insist on higher interest rates due to the risk of default. In the end, TWIA is insolvent, its policyholders left to mercy rather than contract.

On what does this risk of insolvency depend? There certainly can be a happy ending. Basically it depends on three factors: (1) the amount TWIA stuffs into the CRTF each year; (2) the spread between the interest TWIA earns on the CRTF and the interest rate it pays to bondholders; and (3) the claims TWIA has to pay due to large storms. I’ve attempted to illustrate these relationships with the several interactive elements below. Of course, you’ll need to download the free Wolfram CDF Player in order to take advantage of their interactive features. But once you do, here is what I think you will see.
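For readers who cannot run the CDF, here is a stripped-down Mathematica sketch of the dynamics the interactive elements illustrate, using the parameters above ($2 billion target, $120 million annual contribution, 2% earned, 5% paid).  The storm process, a lognormal loss in roughly one year of five, is my crude stand-in for a real catastrophe model.

(* Toy simulation of the pre-event bond dynamics described above.
   All amounts in billions. TWIA contributes $0.12/yr to the CRTF,
   earns 2% on the fund, pays 5% on the Outstanding Principal
   Balance (OPB), amortizes from any surplus over the $2B target,
   and re-borrows after storms to refill the fund. *)
simulate[years_] := Module[{crtf = 2.0, opb = 2.0, pay, history = {}},
  Do[
   crtf = crtf*1.02 + 0.12;            (* fund interest + contribution *)
   If[RandomReal[] < 0.2,              (* a storm in roughly 1 year of 5 *)
    crtf -= RandomVariate[LogNormalDistribution[Log[0.3], 1]]];
   pay = Max[0, Min[crtf - 2.0, opb*1.05]];  (* amortize from surplus *)
   opb = opb*1.05 - pay; crtf -= pay;
   If[crtf < 2.0, opb += 2.0 - crtf; crtf = 2.0];  (* re-borrow *)
   AppendTo[history, {crtf, opb}],
   {years}];
  history]

ListLinePlot[Transpose@simulate[100], PlotLegends -> {"CRTF", "OPB"}]

Run it a few times.  In lucky draws the Outstanding Principal Balance amortizes away; in unlucky ones it compounds, which is precisely the debt trap described in this post.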

(1) Pre-event bonds are risky. Different 100-year storm profiles result in wildly different trajectories for the CRTF and the Outstanding Principal Balance. That is perhaps why they are cheaper than reinsurance: the risk of adverse events is borne by the policyholder (here TWIA) rather than swallowed up by the reinsurer. If the reinsurance market is dysfunctional enough — as indeed I have suggested it may be in this instance — then self-insurance through pre-event bonds may indeed be preferable to the alternatives.

(2) Little changes in things such as the interest rate end up making a big difference in the expected trajectories of the CRTF and the Outstanding Principal Balance. For simplicity, I’ve modeled those interest rates as constants, but in reality one should expect them to change in response to macroeconomic forces as well as the perceived solvency of TWIA.

(3) Little changes in the commitment TWIA makes to the CRTF matter a lot. A few percent difference ends up having the potential for a large effect on whether the Outstanding Principal Balance on the pre-event bonds remains manageable or whether the bonds become the overused credit card of the Texas public insurance world — a debt trap. Pre-event bonds may work better where policyholders understand that they may be subject to special assessments — unfortunately, right after a costly storm — in order to prevent a deadly debt spiral. So long as we want to rely heavily on pre-event bonds, laws need to authorize this harsh medicine. Ideally, careful actuarial studies should be done — by people who make it their full-time job — to get the best possible handle on the tradeoffs between the amount put in and the risks of insolvency. The unfortunate truth, however, is that some of the underlying variables — such as storm severity and frequency — are sufficiently uncertain that I suspect no one will know the actual values with much greater certainty than I have presented.

(4) Luck helps. My interactive tool provides you with 20 different 100-year storm sets. They’re all drawn from the same underlying distribution; they are just different in the same way that poker hands are usually different even though they are all drawn from the same deck. If storms are somewhat less than predicted or the predictions are too pessimistic, pre-event bonds have a far better chance of succeeding than if one gets unlucky draws from the deck or the predictions are too optimistic. Unfortunately, as the debate over climate change shows, disentangling luck from modeling flaws is difficult when one has only a limited amount of history to examine.

[WolframCDF source=”http://catrisk.net/wp-content/uploads/2012/12/crtfopbcrumpnorman.cdf” CDFwidth=”550″ CDFheight=”590″ altimage=”file”]


Catastrophe insurance and the case for compulsory coinsurance

The effect of coinsurance

This entry presents an interactive tool by which you can study the effects of “coinsurance” on expected losses from catastrophe.  The short version is that coinsurance can, under the right circumstances, significantly reduce expected losses from tropical cyclones. As such, legislatures in coastal states, including Texas, should strongly consider prohibiting subsidized insurers such as TWIA from selling windstorm insurance policies unless there is a significant amount (say 10%) of coinsurance.  The rest of this blog entry explains why and demonstrates the tool.


It’s (close to) a Weibull — again!

You recall that in my last post, I went through an involved process of showing how one could generate storm losses for individuals over years.  That process, which underlies a project to examine the effect of legal change on the sustainability of a catastrophe insurer, involved copulas of beta distributions and a parameter mixture distribution in which the underlying distribution was also a beta distribution. It was not for the faint of heart.

One purpose of this effort was to generate a histogram that looks like the one below, showing the distribution of scaled claim sizes for non-negligible claims. This histogram was obtained by taking one draw from the copula distribution for each of the [latex]y[/latex] years in the simulation and using it to constrain the distribution of losses suffered by each of the [latex]n[/latex] policyholders in each of those [latex]y[/latex] years.  Thus, although the underlying process created a [latex]y \times n[/latex] matrix, the histogram below is for that matrix flattened into a single vector of [latex]y n[/latex] values.

Histogram of individual scaled non-negligible claim sizes

But, if we stare at that histogram for a while, we recognize the possibility that it might be approximated by a simple statistical distribution.  If that were the case, we could simply use the simple statistical distribution rather than the elaborate process for generating individual storm loss distributions. In other words, there might be a computational shortcut that approximates the elaborate process.  If so, to get the experience of all [latex]n[/latex] policyholders — including those who did not have a claim at all — we could just upsample random variates drawn from our hypothesized simple distribution and add zeros; alternatively, we could create a mixture distribution in which most of the time one draws from a distribution that is always zero and, when there is a positive claim, one draws from this hypothesized simple distribution.
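Here is a sketch of what that shortcut test looks like in Mathematica (the post’s title telegraphs the candidate family).  The variable claims is assumed to hold the flattened vector of non-negligible scaled claim sizes from the process described in the last post.

(* Fit a Weibull to the nonzero scaled claims and eyeball the fit.
   "claims" is assumed to be the flattened vector of non-negligible
   scaled claim sizes generated by the copula process. *)
fitted = EstimatedDistribution[claims, WeibullDistribution[a, b]]
DistributionFitTest[claims, fitted]   (* goodness-of-fit p-value *)
Show[Histogram[claims, Automatic, "PDF"],
 Plot[PDF[fitted, x], {x, 0, Max[claims]}, PlotStyle -> Red]]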


Copulas and insurance law reform

Storm models are crucial to law reform.  One needs them to get a sense of whether premiums are reasonable.  And, as I want to show in a series of blog posts, they can also help figure out the effect of legally mandated changes to the insurance contract.  To do that, you need to tie behavior at the level of the individual policyholder to the long-term finances of the insurer. How would, for example, changing the required deductible on windstorm policies issued by the Texas Windstorm Insurance Association affect the precautions taken by policyholders to avoid storm damage?  That’s important for many reasons, among them that it affects the sustainability of TWIA. Might the introduction of coinsurance into the insurance contract do a better job of making TWIA sustainable?  These are the kinds of questions for which a decent storm model is useful.

So, over the past few weeks I’ve been thinking again about ways in which one could, without access (yet) to gigabytes of needed data, develop approximations of the windstorm damage events likely to be suffered by policyholders.  And I’ve been thinking about ways in which one could parameterize those individual damages as a function of the level of precautions taken by policyholders to avoid damage.

What I’m going to present here is a model of storm damage that attempts to strike a reasonable balance of simplicity and fidelity. I’m afraid there’s a good bit of math involved, but I’m going to do my best here to clarify the underlying ideas and prevent your eyes from glazing over.  So, if you’ll stick with me, I’ll do my best to explain.  The reward is that, at the end of the day, we’re going to have a model that in some ways is better than what the professionals use.  It not only explains what is currently going on but can make predictions about the effect of legal change.

Let’s begin with two concepts: (1) “claim prevalence” and (2) “mean scaled claim size.”  By “claim prevalence,” which I’m going to signify with the Greek letter [latex]\nu[/latex] (nu), I mean the likelihood that, in any given year, a policyholder will file a claim based on an insured event. Thus, if in a given year 10,000 of TWIA’s 250,000 policyholders file a storm damage claim, that year’s prevalence is 0.04.  “Mean scaled claim size,” which I’m going to signify with the Greek letter [latex]\zeta[/latex] (zeta), is a little more complicated. It refers to the mean of the size of claims filed during a year divided by the value of the property insured, taken over all properties on which claims are filed during the year.  To take a simple example, if TWIA were to insure 10 houses and, in a particular year, 2 of them filed claims ([latex]\nu =0.2[/latex]) for $50,000 and $280,000, and the insured values of the properties were $150,000 and $600,000 respectively, the mean scaled claim size [latex]\zeta[/latex] would be 0.4.  That’s because: [latex]0.4=\frac{50000}{2\times 150000}+\frac{280000}{2\times 600000}[/latex].

Notice, by the way, that [latex]\zeta \times \nu[/latex] is equal to aggregate claims in a year as a fraction of total insured value.  Thus, if [latex]\zeta \times \nu = 0.005[/latex] and the total insured value is, say, $71 billion, one would expect $355 million in claims in a year. I’ll abbreviate this ratio of aggregate claims in a year to total insured value as [latex]\psi[/latex] (psi).  In this example, then,  [latex]\psi=0.005[/latex].[1]

The central idea underlying my model is that claim prevalence and mean scaled claim size are positively correlated. That’s because both are likely to correlate positively with the destructive power of the storms that occurred during that year.  The correlation won’t be perfect.  A tornado, for example, may cause very high mean scaled claim sizes (total destruction of the homes it hits) but have a narrow path and hit just a few insured properties.  And a low-grade tropical storm may cause modest levels of wind damage among a large number of insureds.  Still, most of the time, I suspect, bigger storms not only cause more claims, but they also increase the scaled mean claim size.

A copula distribution provides a relatively simple way of blending correlated random variables together.  There are lots of explanations: Wikipedia, a nice paper on the Social Science Research Network, and the Mathematica documentation on the function that creates copula distributions.   There are lots of ways of doing this blending, each with a different name.  I’m going to stick with a simple copula, however: the so-called “Binormal Copula” (a/k/a the “Gaussian Copula”) with a correlation coefficient of 0.5.[2]

To simulate the underlying distributions, I’m going to use a two-parameter beta distribution for both claim prevalence and mean scaled claim size. My experimentation suggests that, although there are probably many alternatives, both these distributions perform well in predicting the limited data available to me on these variables. They also benefit from modest analytic tractability. For people trying to recreate the math here, the distribution function of the beta distribution is [latex]I_x\left(\left(\frac{1}{\kappa ^2}-1\right) \mu ,\frac{\left(\kappa ^2-1\right) (\mu -1)}{\kappa ^2}\right)[/latex], where [latex]\mu[/latex] is the mean of the distribution and [latex]\kappa[/latex] is the fraction (0,1) of the maximum standard deviation of the distribution possible given the value of [latex]\mu[/latex]. What I have found works well is to set [latex]\mu _{\nu }=0.0244[/latex], [latex]\kappa _{\nu }=0.274[/latex] for the claim prevalence distribution and [latex]\mu _{\zeta }=0.097[/latex], [latex]\kappa _{\zeta }=0.229[/latex] for the mean scaled claim size distribution. This means that policyholders will file a claim about once every 41 years and that claims, when they occur, will on average be 9.7% of the insured value of the property.[3]
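For concreteness, here is one way to build these pieces in Mathematica.  The helper name betaMK is mine; it implements the mean-and-kappa parameterization just described.

(* A beta distribution parameterized by its mean mu and by kappa,
   the fraction of the maximum standard deviation possible given mu. *)
betaMK[mu_, kappa_] :=
 BetaDistribution[mu (1/kappa^2 - 1), (1 - mu) (1/kappa^2 - 1)]

(* Binormal (Gaussian) copula, rho = 0.5, tying claim prevalence nu
   to mean scaled claim size zeta. *)
copula = CopulaDistribution[{"Binormal", 0.5},
   {betaMK[0.0244, 0.274], betaMK[0.097, 0.229]}];

RandomVariate[copula]   (* one simulated year: {nu, zeta} *)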

We can visualize this distribution in a couple of ways.  The first is to show a probability density function of the distribution but to scale the probability logarithmically.  This is shown below.

PDF of sample copula distribution

The second is to simulate 10,000 years’ worth of experience and to place a dot for each year showing claim prevalence and mean scaled claim size.  That is done below. I’ve annotated the graphic with labels showing what might represent a year in which there was a tornado outbreak, a catastrophic hurricane, or a tropical storm, as well as the large cluster of points representing years in which there was minimal storm damage.

Claim prevalence and mean scaled claim size for 10,000 year simulation

Equipped with our copula, we can now generate losses at the individual policyholder level for any given year.  The idea is to create a “parameter mixture distribution” using the copula. As it turns out, one component of this parameter mixture distribution is itself a mixture distribution.

Dear reader, you now have a choice.  If you like details, have a little bit of a mathematical background and want to understand better how this model works, just keep reading at “A Mini-Course on Mixture and Parameter Mixture Distributions.”  If you just want the big picture, skip to “Simulating at the Policyholder Level” below.

A Mini-Course on Mixture and Parameter Mixture Distributions

To fully understand this model, we need some understanding of a mixture distribution and a parameter mixture distribution.  Let’s start with the mixture distribution, since that is easier.  Imagine a distribution in which you first randomly determine which underlying component distribution you are going to use and then take a draw from the selected underlying component distribution.  You might, for example, roll a conventional six-sided die, which is a physical representation of what statisticians call a “discrete uniform distribution.”  If the die comes up 5 or 6, you draw from a beta distribution with a mean of 0.7 and a standard deviation of 0.3 times the maximum.  But if the die comes up 1 through 4, you draw from a uniform distribution on the interval [0, 0.1].  The diagram below shows the probability density function of the resulting mixture distribution (in red) and the underlying components (in blue).
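In Mathematica, the die-roll example is nearly a one-liner (reusing the betaMK helper defined earlier):

(* Faces 1-4 (weight 2/3) select the uniform component on [0, 0.1];
   faces 5-6 (weight 1/3) select the beta component with mean 0.7
   and 0.3 of the maximum standard deviation. *)
mix = MixtureDistribution[{2/3, 1/3},
   {UniformDistribution[{0, 0.1}], betaMK[0.7, 0.3]}];

Plot[PDF[mix, x], {x, 0, 1}, PlotRange -> All]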

Mixture Distribution with beta and uniform components

The mixture distribution has a finite number of underlying component distributions and has discrete weights that you select. The parameter mixture distribution can handle infinitely many underlying component distributions and weights that are themselves draws from a statistical distribution. Suppose we create a continuous function [latex]f[/latex] that takes a parameter [latex]x[/latex] and creates a triangular distribution that has a mean of [latex]x[/latex] and extends 1/4 in each direction from the mean.  We will call this triangular distribution the underlying distribution of the parameter mixture distribution.  The particular member of the triangular distribution family used is determined by the value of the parameter. And, now, we want to create a “meta distribution” — a parameter mixture distribution — in which the probability of drawing a particular parameter [latex]x[/latex], and in turn getting that kind of triangular distribution with mean [latex]x[/latex], is itself determined by another distribution, which I will call [latex]w[/latex]. The distribution [latex]w[/latex] is the weighting distribution of the parameter mixture distribution. To make this concrete, suppose [latex]w[/latex] is a uniform distribution on the interval [0,1].

The diagram below shows the result.  The blue triangular underlying distributions represent a sample of the probability density functions of triangular distributions.  There are actually an infinite number of these triangular distributions, but obviously I can’t draw them all here. Notice that some of the density functions are more opaque than others. The opacity of each probability density function is based on the probability that such a distribution would be drawn from [latex]w[/latex].  The red line shows the probability density function of the resulting parameter mixture distribution.  It is kind of an envelope of these triangular distributions.

Parameter mixture distribution for triangular distributions where mean of triangular distributions is drawn from a uniform distribution
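Here is the same triangular construction in Mathematica:

(* Parameter mixture: a triangular distribution running 1/4 on either
   side of its mean x, with x itself drawn uniformly from [0, 1]. *)
pmd = ParameterMixtureDistribution[
   TriangularDistribution[{x - 1/4, x + 1/4}],
   x \[Distributed] UniformDistribution[{0, 1}]];

Plot[PDF[pmd, t], {t, -1/4, 5/4}]   (* the red "envelope" density *)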

We can combine mixture distributions and parameter mixture distributions.  We can have a mixture distribution in which one or more of the underlying functions is a parameter mixture distribution.  And, we can have a parameter mixture distribution in which either the underlying function and/or the weighting function is a mixture distribution.

It’s that combination — a parameter mixture distribution in which the underlying function is a mixture distribution — that we’re going to need to get a good simulation of the damages caused by storms. The weighting distribution of this parameter mixture distribution is our copula. It throws out two parameters:  (1) [latex]\nu[/latex], the likelihood that in any given year the policyholder has a non-zero claim, and (2) [latex]\zeta[/latex], the mean scaled claim size assuming that the policyholder has a non-zero claim.  Those two parameters weight members of the underlying distribution, which is a mixture distribution. The weights of the mixture distribution are the likelihood that the policyholder has no claim and the likelihood that the policyholder has a non-zero claim (claim prevalence). The component distributions of the mixture distribution are (1) a distribution that always produces zero and (2) any distribution satisfying the constraint that its mean is equal to the mean scaled claim size.  I’m going to use another beta distribution for this latter purpose, with a standard deviation equal to 0.2 of the maximum standard deviation.  I’ll denote this distribution as B. Some examination of data from Hurricane Ike is not inconsistent with the use of this distribution, and the distribution has the virtue of being analytically tractable and relatively easy to compute.
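Putting the pieces together, here is a sketch of the whole generative model in Mathematica.  I replace the always-zero component with an equivalent Bernoulli draw, which is easier to code than a degenerate distribution:

(* One simulated year at the policyholder level: draw {nu, zeta} from
   the copula, then give each of n policyholders a zero claim with
   probability 1 - nu, or a draw from betaMK[zeta, 0.2] otherwise. *)
yearClaims[n_] := Module[{nu, zeta},
  {nu, zeta} = RandomVariate[copula];
  RandomVariate[BernoulliDistribution[nu], n]*
   RandomVariate[betaMK[zeta, 0.2], n]]

claims = yearClaims[250000];
Count[claims, x_ /; x == 0]           (* policyholders with no claim *)
Histogram[Select[claims, # > 0 &]]    (* nonzero scaled claim sizes *)

With a claim prevalence of 0.03, about 97% of the 250,000 policyholders come back with a zero claim, matching the 242,500 figure used in the example below.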

This diagram may help understand what is going on.

The idea behind the parameter mixture distribution

Simulating at the Policyholder Level

So, we can now simulate a large insurance pool over the course of years by making, say, 10,000 draws from our copula.  And from each draw of the copula, we can determine the claim size for each of the policyholders insured in that sample year. Here’s an example.  Suppose our copula produces a year with some serious damage: a claim prevalence of 0.03 and a mean scaled claim size of 0.1.  If we simulate the fate of 250,000 policyholders, we find that about 242,500 have no claim.  The graphic below shows the distribution of scaled claim sizes among those who did have a non-zero claim.

Scaled claim sizes for sample year

Fortunately, however, we don’t need to sample 250,000 policyholders each year for 10,000 years to get a good picture of what is going on.  We can simulate things quite nicely by looking at the condition of just 2,500 policyholders and then multiplying aggregate losses by 100.  The graphic below shows a logarithmic plot of aggregate losses assuming a total insured value in the pool of $71 billion (which is about what TWIA has had recently).

Aggregate losses (simulated) on $71 billion of insured property

We can also show a classical “exceedance curve” for our model.  The graphic below varies the aggregate losses on $71 billion of insured property and shows, for each value, the probability (on a logarithmic scale) that losses would exceed that amount.  One can thus get a sense of the damage caused by the 100-year storm and the 1000-year storm.  The figures don’t perfectly match TWIA’s internal models, but that’s simply because our parameters have not been tweaked at this point to accomplish that goal.

Exceedance curve (logarithmic) for sample 10,000 year run
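Here is a sketch of how such an exceedance curve can be generated from the simulation; assuming an equal insured value for every policy is my simplification:

(* Empirical exceedance curve: 10,000 simulated years of aggregate
   losses on a $71B book, scaled up from 2,500 sampled policyholders.
   Equal per-policy insured value is assumed for simplicity. *)
tiv = 71*^9; perPolicyTIV = tiv/250000;
aggLosses = Table[100*perPolicyTIV*Total[yearClaims[2500]], {10000}];

ListLogPlot[
 Table[{x, Count[aggLosses, l_ /; l > x]/Length[aggLosses]},
  {x, 0, Max[aggLosses], Max[aggLosses]/100}],
 Joined -> True, AxesLabel -> {"aggregate loss ($)", "P(exceed)"}]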

The final step is to model how extra precautions by a policyholder might alter these losses.  Presumably, precautions are like most economic things: there is a diminishing marginal return on investment.  So, I can roughly model matters by saying that a precaution level of x results in the insured drawing from a new beta distribution with a mean equal to [latex]\ell \times 2^{-x}[/latex], where [latex]\ell[/latex] is the amount of damage they would have suffered had they taken no extra precautions. (I’ll keep the standard deviation of this beta distribution equal to 0.2 of its maximum possible value.) I have thus calibrated extra precautions such that each unit of extra precautions cuts the mean losses in half. That doesn’t mean that precautions won’t sometimes result in greater savings or lesser savings; it just means that on average, each unit of precautions cuts the losses in half.
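In the Mathematica sketch, that calibration is a one-line change (again using the betaMK helper; the mean stays inside (0,1) since [latex]\ell \leq 1[/latex]):

(* Scaled loss for a policyholder taking x units of extra precautions:
   each unit of precaution halves the mean loss ell; the standard
   deviation stays at 0.2 of its maximum possible value. *)
precautionLoss[ell_, x_] := RandomVariate[betaMK[ell*2^-x, 0.2]]

Mean[Table[precautionLoss[0.1, 1], {10000}]]   (* ~0.05 *)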

And, we’re done!  We’ve now got a storm model that, when combined with the model of policyholder behavior I will present in a future blog entry, should give us respectable predictions on the ability of insurance contract features such as deductibles and coinsurance to alter aggregate storm losses. Stay tuned!

Footnotes

[1] As I recognized a bit belatedly in this project, if one makes multiple draws from a copula distribution, it is not the case that the mean of the product of the two values [latex]\nu[/latex] and [latex]\zeta[/latex] drawn from the copula is equal to the product of their individual means. You can see why this might be by imagining a copula distribution in which the two values were perfectly correlated, in which case one would be drawing from a distribution transformed by squaring.  It is not the case that the mean of such a transformed distribution is equal to the square of the mean of the underlying distribution.
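A quick numerical check of this point, using the copula defined in the main text:

(* E[nu zeta] under the copula vs. the product of the marginal means;
   the positive correlation makes the first noticeably larger. *)
pairs = RandomVariate[copula, 100000];
{Mean[Times @@@ pairs], Mean[pairs[[All, 1]]]*Mean[pairs[[All, 2]]]}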

[2] Copulas got a bad name over the past 10 years for bearing some responsibility for the financial crisis.  This infamy, however, has nothing to do with the mathematics of copulas, which remains quite brilliant, but with their abuse and the fact that incorrect distributions were inserted into the copula.

[3] We thus end up with a copula distribution whose probability density function takes on this rather ghastly closed form.  (It won’t be on the exam.)

[latex]\frac{(1-\zeta )^{\frac{1-\mu _{\zeta }}{\kappa _{\zeta }^2}+\mu _{\zeta }-2} \zeta ^{\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta }-1} (1-\nu )^{\frac{1-\mu _{\nu }}{\kappa _{\nu }^2}+\mu _{\nu }-2} \nu ^{\left(\frac{1}{\kappa _{\nu }^2}-1\right) \mu_{\nu }-1} \exp \left(\frac{\left(\text{erfc}^{-1}\left(2 I_{\zeta }\left(\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta },\frac{\left(\kappa _{\zeta }^2-1\right) \left(\mu _{\zeta }-1\right)}{\kappa _{\zeta }^2}\right)\right)-\rho \text{erfc}^{-1}\left(2 I_{\nu }\left(\left(\frac{1}{\kappa _{\nu }^2}-1\right) \mu _{\nu },\frac{\left(\kappa _{\nu }^2-1\right) \left(\mu _{\nu }-1\right)}{\kappa _{\nu }^2}\right)\right)\right){}^2}{\rho ^2-1}+\text{erfc}^{-1}\left(2 I_{\zeta }\left(\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta },\frac{\left(\kappa _{\zeta }^2-1\right) \left(\mu _{\zeta }-1\right)}{\kappa _{\zeta }^2}\right)\right){}^2\right)}{\sqrt{1-\rho ^2} B\left(\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta },\frac{\left(\kappa _{\zeta }^2-1\right) \left(\mu _{\zeta }-1\right)}{\kappa _{\zeta }^2}\right) B\left(\left(\frac{1}{\kappa _{\nu }^2}-1\right) \mu _{\nu },\frac{\left(\kappa _{\nu }^2-1\right) \left(\mu _{\nu }-1\right)}{\kappa _{\nu }^2}\right)}[/latex]


Reducing Maximum Residential Policy Limits

I and many others have proposed that TWIA reduce the maximum policy limits on residential properties from the current $1.8 million to some substantially lower figure as a way of reducing the likelihood of post-event bonding and insolvency. Such a reform would also have the effect of reducing subsidization of individuals who have more expensive homes than the average Texan paying for the subsidy. As shown in the recently released Alvarez & Marsal report, it would also bring TWIA more in line with other coastal windstorm programs, such as Alabama’s $500,000 or Florida’s $1,000,000 dwelling limit.

But a legitimate question is how much value this reform would really create.  The case for the reform is somewhat stronger if it would reduce TWIA exposure by, say, 20% than if it would do so by just 1%.  One statistic advanced by Representative Craig Eiland (D-Galveston) is that only 1% of residences insured by TWIA are valued at over $1 million.  This statistic needs to be augmented, however, by an appreciation that the more valuable the residence, the more it contributes to the risks of TWIA.  All residences should not count alike.

Data provided by TWIA permits a first stab at a better answer.  I’ve presented it in the chart below. It shows that reducing limits to $1 million for residences would have only a 1% effect on TWIA’s total insured value.  Somewhat disappointing. If we’re serious about cutting TWIA exposure, we have to dig deeper.  Going to $500,000 gets about a 5% reduction in total residential insured value.  A reduction to $250,000 (the federal flood limit) creates the big gain, reducing TWIA’s residential TIV by 29%.

TWIA exposure reduction chart

A few more points.

1. Just because the reduction is smaller than ideal does not mean we should not do it.  Every little bit helps.  And, as I have said ad nauseam, the economic and moral case for subsidizing expensive beach homes seems rather small. That’s all the more so where we condition the continued reduction in maximum policy limits, as I have proposed, on a finding that excess insurance is available.

2. For actuarial math nerds only! The figures I’ve put up, although the best I can do with the data I have, are still not ultimately what one would want.  What really needs to be done, by organizations such as AIR, RMS and others that have better access to the underlying storm data, is to determine the effect of right-censoring the loss distribution on individual homes on the overall distribution of losses faced by TWIA, after a correlative premium reduction is taken into account.  One can then use the survival function of the transformed aggregate insured loss distribution to recompute the probability of TWIA needing to resort to various classes of securities and its risk of insolvency. My guess is that the relationship is somewhat sublinear because losses to the left of the censor point are more common than losses to the right. So, even though I am showing a healthy 29% reduction in TIV from a $250,000 cap, that will likely not reduce the risk of TWIA insolvency by 29%.  If you asked me for a wild guess, I’d guess 15%. That’s still good. Every little bit helps.
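For concreteness, the exercise described here is a per-policy cap followed by recomputation of exceedance probabilities.  A minimal sketch, reusing the toy yearClaims simulation from the copula post above and again assuming equal per-policy insured values (real work would use the modelers’ exposure data):

(* Right-censor each policy's dollar loss at the proposed cap, then
   estimate the probability that aggregate losses exceed a threshold.
   Reuses yearClaims and perPolicyTIV from the copula-model sketch;
   equal insured values per policy are an assumption. *)
aggLossCapped[cap_] :=
 100*Total[Clip[perPolicyTIV*yearClaims[2500], {0, cap}]]

exceedProb[threshold_, cap_] :=
 Count[Table[aggLossCapped[cap], {2000}], l_ /; l > threshold]/2000.

exceedProb[2*^9, 250000]   (* P(aggregate loss > $2B) under a $250K cap *)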

3. The data here is one reason I really like coinsurance as a way of limiting TWIA exposure.  (See suggestion # 2 of my 10 point proposal here).  It has a more direct effect than right-censoring individual loss distributions, makes it less necessary for people to purchase multiple insurance policies, and may result in greater mitigation.

4.  In the spirit of transparency, I’m posting the spreadsheet that underlies this analysis here, along with a Mathematica notebook used to conduct the analysis.

Residential Dwellings spreadsheet

PDF of Mathematica notebook showing reduction in TIV from cap on maximum policy limits

The curious case of Corpus Christi

Today’s Corpus Christi Caller has an interesting article that purports to show a special immunity of the Corpus Christi area to hurricane risk, which is said to be no greater than that facing New York City. The article is based on a report from NOAA published in 2010 and apparently brought recently to the attention of Todd Hunter, Corpus Christi’s state representative. It is based on data from 1887 forward that attempt to calibrate the comparative risk of landfall both within Texas and along the Gulf and Eastern Seaboard.

Here’s the key picture which, though not shown in the article, appears to underlie its conclusions and quotations.

Return periods of Atlantic hurricanes by county

See the blue 19 next to Corpus Christi and the blue 20 next to New York City. This is supposed to show that the hurricane risk in those two regions is similar: once every 19 or 20 years, a hurricane will strike within 50 miles. And see the orange 9s next to Galveston and Brazoria counties. Those are supposed to show that the hurricane risk in those two counties is greater: a strike once every 9 years.

The evidence gets a bit more complicated, however, if one looks at the next picture in the NOAA document, one not mentioned in the Caller article. It shows the history of major hurricanes based on historic evidence from 1887 to 2010. Although the coastal bend (33-40 years) still comes out better than the east Texas coast (25-26 years), the ratio isn’t as great as for all hurricanes. Moreover, the comparison with New York City now fails. The Big Apple gets hit by a major hurricane only once every 68 years.

Return period for major Atlantic hurricanes by county

So, what are we to make of all this? I would say not too much. What the NOAA report lacks is any notion of statistical significance that would make it particularly useful in drawing fine-grained distinctions between areas of the Texas coast. It might just be that what the pictures show is nothing more than good and bad luck. Drawn from a sample of just 130 years or so, one might expect to see distributions of return periods that vary from county to county. Perhaps some trends might be observable, such as greater strike frequency in Florida than Texas, but what the report lacks is a “p-value”: the probability that one would see variations in the data as large as those exhibited in the graphics simply as a matter of chance. I’m not faulting NOAA for this; it would be very hard to develop such a statistic, and the report was purporting to capture historic evidence only. Moreover, our climate is dynamic. Storm tracks and storm frequency can change as a result of global weather phenomena. Thus, while one should not ignore historic data, you have to be very careful about projecting it into the future or using it to make highly specific projections.

So, should the report be ignored? No. Perhaps curious atmospheric features (jet stream placement) and geographic features (the placement of Cuba) do indeed give Corpus Christi a little shield. And if Corpus Christi wants to argue on that basis for lower rates on the southwest Texas coast and higher rates on the eastern Texas coast, I wouldn’t be mightily opposed. Somehow, however, I don’t think that’s where coastal Texas wants to go in the upcoming legislative session. Recognition of large geographic differences in catastrophe risk isn’t the best basis on which to plead for risk socialization and rate uniformity. (More on that point soon!)

An idea for future TWIA finance

Although they may thoroughly disagree on the direction in which reform should go, almost everyone has come to agree with what I predicted in 2009:  TWIA finances are in serious need of reform.  This blog entry sketches out one direction in which TWIA might proceed.  The idea is that TWIA should, in a steady state, have enough cash on hand in its catastrophe reserve fund to pay insured losses and operating expenses, without having to borrow, with a high probability, say 99%.  Further, TWIA should have borrowing capacity to address the rare years (say 1%) in which its reserves would be inadequate. Those borrowings should be repaid by some combination of TWIA policyholders, persons living on the coast, and Texans generally, perhaps collected through the proxy of insurers doing business in Texas.

Although people can quarrel about the precise parameters in this abstract statement of the goal, I have some hope that people could agree on the concept. Government-sponsored insurance companies that don’t have the right to draw on the government fisc ought not to rely heavily on post-event bonding as a way of paying claims; instead, they need enough money in their piggy bank, just as we require of their private insurer counterparts. But what if TWIA’s catastrophe reserve fund does not meet this lofty goal?  What then?  Especially given the magnitude of the current reserve shortfall and the current economy, matters cannot be corrected overnight. There should, I say, be an adjustment period during which premiums are adjusted (either upwards or, at some hypothetical future time, downwards) such that, at the end of the adjustment period, things come into balance and the catastrophe reserve fund meets the goal.
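In symbols, the steady-state requirement is simply that the catastrophe reserve fund equal the 99th percentile of the coming year’s aggregate loss-and-expense distribution. A one-line sketch with an invented distribution (the Gamma parameters are placeholders, not TWIA’s actual loss model):

\[Omega] = GammaDistribution[2, 10^8];   (* hypothetical aggregate annual loss-and-expense distribution *)
Quantile[\[Omega], 0.99]                 (* the reserve level the fund should hold *)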

How do we operationalize this idea? Here is the beginning of a statutory draft. I’ve put in dummy statute section numbers for ease of reference. Obviously, the real section numbers would have to be revised by legislative counsel. Also, we’re probably going to have to develop a more comprehensive process for 2210.355A(b)(1) and reconcile this provision with the alternative process currently set forth in 2210.355A.

2210.355A

(a) Definitions

(1)  The “Exceedance Function for the catastrophe year” is a function that approximates the probability that insured losses and operating expenses in the catastrophe year will exceed a specified dollar amount. Insured losses shall be computed on a net basis after consideration of any reinsurance or other sources of recovery.

(2) The term “Loss PDF” means the probability density function mathematically associated with the Exceedance Function.

(3) The term “Century Storm Reserve Adequacy” means having a catastrophe reserve fund at the start of each catastrophe year such that this fund would be able, without additional borrowing, to fully pay insured losses and operating expenses in the following catastrophe year with a 99% probability as computed using the Exceedance Function for the catastrophe year.

(4) The term “Reserve Adjustment Period” means ten years.

(b)

(1) The Association shall, prior to the start of each catastrophe year, use the best historical and scientific modeling evidence, with consideration of standards in the business of catastrophe insurance, to determine the Exceedance Function and associated Loss PDF for the catastrophe year.

(2) If, at any time, the Association finds that its catastrophe reserve fund at the start of a catastrophe year does not achieve Century Storm Reserve Adequacy, the Association shall adjust the premiums to be charged in the following year either downwards or upwards, as appropriate, such that, were:

(A) such premiums to be charged for the Reserve Adjustment Period on the basis of the currently insured properties;

(B) insured losses and operating expenses of the Association during the Reserve Adjustment Period to run at the mean of the Loss PDF for the catastrophe year; and

(C) the Association to earn on any reserve balances during the Reserve Adjustment Period interest at the rate for reasonably safe investments then available to the Association,

the catastrophe reserve fund at the end of the Reserve Adjustment Period would achieve Century Storm Reserve Adequacy.

(c) By way of illustration, if the Exceedance Function takes on a value of 0.01 when insured losses and operating expenses equal 440 million dollars, the mean of the Loss PDF for the catastrophe year is 223 million dollars, the initial balance of the catastrophe reserve fund is 100 million dollars, and the interest rate for safe investments then available to the Association is 2% compounded continuously, then the premiums charged for the following calendar year should be equal to $614,539,421.

And what happens, by the way, if a storm hits that exceeds the size of the catastrophe reserve fund?  Stay tuned.  I’ve got an idea there too.

How do we keep premiums low under this scheme?  Likewise, stay tuned.  Hint: think about coinsurance requirements and lower maximum policy limits.  Think about carrots to get the private insurance industry writing excess policies on the coast with ever lower attachment points.

  • Footnote for math nerds only. Anyone seeing the implicit differential equations in the model and the applications of control theory?
  • Footnote for Mathematica folks only. Here’s the program to compute the premium; note the use of polymorphic functions. (A usage sketch follows the footnotes.)

(* p solves for the annual premium x such that the future value of the
   current reserve c, plus the future value of an annuity of the premium
   net of mean annual losses \[Mu], both accumulated for z years at rate r
   (compounded continuously, per EffectiveInterest[r, 0]), equals the
   q-quantile of the loss distribution \[Omega]. *)
p[\[Omega]_, \[Mu]_, q_, c_, r_, z_] :=
 x /. First@
   Solve[Quantile[\[Omega], q] ==
     TimeValue[c, EffectiveInterest[r, 0], z] +
      TimeValue[Annuity[x - \[Mu], z], EffectiveInterest[r, 0], z], x];

(* Polymorphic variant: when \[Mu] is omitted, use the mean of \[Omega]. *)
p[\[Omega]_, q_, c_, r_, z_] :=
 With[{m = NExpectation[x, x \[Distributed] \[Omega]]},
  p[\[Omega], m, q, c, r, z]]

  • Footnote for statutory drafters. Note the use of modular drafting such that one can change various parameters in the scheme (such as the 10 year adjustment period) without having to redraft the whole statute.
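Finally, a usage sketch for the footnoted premium function, with an invented loss distribution (placeholder parameters, not TWIA’s real model) and the reserve, interest rate, and adjustment period from the statutory illustration:

\[Omega] = GammaDistribution[2, 10^8];   (* hypothetical one-year loss distribution *)
(* q = 99%, reserve c = $100 million, r = 2% compounded continuously, z = 10 years *)
premium = p[\[Omega], 0.99, 100.*10^6, 0.02, 10]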