TWIA leadership further confuses House Insurance Committee

Associated Press reports are successfully repeating the message the Texas Windstorm Insurance Association leadership sought to convey at today’s special meeting of the House Insurance Committee: “Coastal Group Expects Surplus” is the headline, for example, that the Houston Chronicle attaches to the AP report.  Unfortunately for TWIA policyholders or any legislators misled by today’s presentation, the surplus scenario is essentially a picture of the best possible world in which no significant storms affect the largest windstorm insurer on the Texas coast.  Thus, while the graphic is not false, all it really does is confirm that insurance companies, even ones with premiums that do not reflect risk, make money  if they never have any large claims. It is not, however, an accurate depiction of reality.

Here's the happy picture that TWIA wants the world to see. Surplus goes in a predictable linear way from that troublesome negative (red) $183 million in the fourth quarter of 2012 to a cheerier (blue) positive $211 million by the fourth quarter of 2014. That's the picture presented by TWIA lawyer David Durden, TWIA chief actuary James Murphy and Pete Gise, TWIA's comptroller, at the special meeting of the House Insurance Committee. It is a picture that will make unquestioning TWIA policyholders breathe a sigh of relief, lessen pressure to reform TWIA, forestall efforts to place the insolvent insurer into receivership and let those who profit from the band playing on continue to do so for a time.

A misleading projection of TWIA finances

But look carefully at the fine print in the footnotes for this graphic: "Surplus amounts include operational expenses, non-catastrophe losses, projected changes in Ike reserves, and state sales tax refunds." What's not included? TWIA doesn't say in the graphic, but I can tell you. What TWIA does not include is the main thing TWIA was set up to handle and for which it needs catastrophe reserves: large losses from tropical cyclones. (I'm also not sure they are taking account of reinsurance premiums, which now consume more than 20% of TWIA premiums.) In other words, TWIA could have shown roughly the same "projected" increase in surplus in any year it chose, ranging from the year before Hurricane Ike to the year before Hurricane Alicia. And TWIA would have been equally misleading in doing so.

And what is the probability that over the next two hurricane seasons TWIA will incur no tropical cyclone expenses? Assuming we have normal hurricane seasons over the next two years — which is itself rather optimistic given the unanimous forecasts of weather experts — the probability is about 1/3. Even with the most optimistic estimates of Texas hurricane frequency, the probability that the TWIA graph accurately projects reality is less than half. So, yes, less than half the time, the graphic produced by TWIA might be accurate.
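For the skeptical, here is a quick check of that one-third figure. It is a minimal sketch that assumes the 0.54 storms-per-year frequency used in the compound Poisson model whose code appears at the bottom of this post:

(* probability of a storm-free two-year stretch, using the 0.54 annual storm frequency
   from the compound Poisson model in the code at the bottom of this post *)
Probability[k == 0, k \[Distributed] PoissonDistribution[2*0.54]]  (* = Exp[-1.08], about 0.34 *)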

The majority of the time, however, the TWIA graphic will be wrong. And some of the time it will be seriously wrong. This is exactly why every actuary who has consulted for TWIA or TDI in recent times has noted that TWIA takes in too little revenue relative to expenses to sustain a surplus. In any two-year period during which TWIA suffers a significant loss (i.e., a loss greater than $50 million), the average total loss over that period is well over $500 million. Such losses would in fact significantly increase the deficit from which TWIA now suffers. This is based on the Compound Poisson Distribution discussed on this blog as a way of modeling annual losses to TWIA and emulating the sophisticated work of state-of-the-art storm modelers such as AIR and RMS. The Mathematica code supporting this point is shown at the bottom of this post.

When we actually take possible storm losses into account, TWIA's position two years from now is likely to be no better, and quite possibly worse, than it is today.

I've tried in this blog to stay away from accusations of bad faith. People have honest disagreements and different values. And I have had respect for people doing what must be difficult work at an insurer with little money. And this graphic did, after all, have a footnote from which one knowledgeable in the area might recognize that the graphic was missing critical information. And TWIA did disclose at the hearing — after a lengthy exposition of the graphic — that their graphic assumes no storm losses. But to me it is like presenting a graphic projecting how well the Astros are likely to do this year based on how they play during their best stretches, without taking into account that they also suffer a lot of losing streaks. It is, at best, an insulting partial truth, one that I hope reporters, legislators and, tomorrow, the TWIA Board of Directors, are smart enough to see through.

The code

(* simulate 10,000 years of losses, take the overlapping two-year periods that are not storm-free,
   and average the total amount by which each year's losses exceed $50 million *)
Mean[Total /@ Map[Max[# - 50000000, 0] &,
   DeleteCases[
    Partition[RandomVariate[
      CompoundPoissonDistribution[0.54, WeibullDistribution[0.42, 177000000]], 10000],
     2, 1], {0, 0}], {2}]]

Perry says special session on windstorm is "certainly possible"

Texas Governor Rick Perry told reporters at a Hurricane Preparedness Week meeting today that it was “certainly possible” that he would add windstorm insurance reform to the agenda for a special session.  He said that “we’re not going to bring it forward until we get a little closer to what I would consider to be an agreement between the disparate groups that are out there.”

Governor Perry's desire not to waste legislators' time on a fruitless effort is understandable, but since hurricanes are unlikely to respect this delicacy, let's hope those groups will in fact move together swiftly, perhaps prodded along by a Governor who should not want to see vivid images of an unrepaired Texas coast featured in future political advertisements run against him. There have, after all, been 60 tropical cyclones that have made landfall in the United States during the month of June (in the time for which statistics have been kept). Eight have been Category 2 or higher, including Alma in 1966, which made American landfall in Florida, and Audrey in 1957, which brushed the Texas border while severely damaging Louisiana.

Addendum: 4 p.m. 5/31/2013 —  Fox 26 Houston is doing a story on Governor Perry’s statement for its 5 p.m. news.  It will likely feature interviews with Senator Larry Taylor and with me. The reporter, Greg Groogan, definitely understands the issues.

Here, by the way, is the Mathematica code that generated the above statistics.

(* Unisys Atlantic hurricane track data, imported as lines of text *)
atl = Import["http://weather.unisys.com/hurricane/atlantic/tracks.atl", "Lines"];

(* count of tropical cyclones crossing the U.S. coast (XING=1) in June *)
Length@Flatten[
  Map[StringCases[#, RegularExpression[".+6/\\d{2}/\\d{4}.+XING=1.+"]] &, atl]]

(* the June U.S. crossings at Saffir-Simpson category 2 or higher *)
Column@Flatten@
  Map[StringCases[#,
     RegularExpression[".+6/\\d{2}/\\d{4}.+XING=1.+SS=(2|3|4|5)"]] &, atl]

For other reports on this breaking item, look here and here.


Study shows Coastal Taskforce Plan requires more than 50% subsidization

The Coastal Taskforce Plan recently endorsed by several coastal politicians would require people other than TWIA policyholders to subsidize TWIA massively — perhaps paying more than 60% of expected losses from tropical cyclones. That is the result of a study I have conducted using hurricane modeling software. As the pie chart below shows, only about 38% of the payouts would come from TWIA premiums. The rest would come 26% from Texas insurers, 21% from policyholders of all sorts in 13 coastal counties and Harris County, 8% from insureds located throughout Texas, and 7% from the State of Texas itself. These figures come from a 10,000-year storm simulation built on data created by leading hurricane modeler AIR and obtained through a public records request. The figures are also based on my best understanding of the way in which the Coastal Taskforce plan would operate, although certain aspects of the plan remain unclear and additional clarification would help.

Expected Distribution of Sources for TWIA Payouts Due to Losses from Tropical Cyclones
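For readers who want to see the shape of the computation, here is a minimal sketch of how expected shares fall out of a layered funding stack. The layer sizes and their ordering below are hypothetical stand-ins rather than the actual Coastal Taskforce figures, and the loss model is the compound Poisson approximation used elsewhere on this blog rather than the AIR event set.

(* a minimal sketch: allocate each simulated annual loss up a hypothetical funding stack
   and average each layer's payout across the simulation *)
losses = RandomVariate[
   CompoundPoissonDistribution[0.54, WeibullDistribution[0.42, 177000000]], 10000];
layers = {{"TWIA premiums/CRTF", 500*10^6}, {"Texas insurers", 400*10^6},
   {"Coastal counties + Harris County", 300*10^6}, {"Statewide insureds", 200*10^6},
   {"State of Texas", Infinity}};                                   (* hypothetical layer widths *)
attach = Accumulate[Prepend[Most[layers[[All, 2]]], 0]];            (* lower attachment points *)
layerPay[loss_, lower_, width_] := Clip[loss - lower, {0, width}];  (* one layer's payout on one year's loss *)
shares = Table[Mean[layerPay[#, attach[[i]], layers[[i, 2]]] & /@ losses], {i, Length[layers]}];
Thread[layers[[All, 1]] -> shares/Total[shares]]                    (* expected share of payouts by source *)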


The issues with heavy reliance on pre-event bonds

Pre-event bonds. They sound so good. And they may well be an improvement over reinsurance and other alternatives for raising money. But there is no free lunch, and it's worth understanding some of the issues involved in relying on them. In short, pre-event bonds can work if TWIA stuffs enough money annually into the CRTF — and has the premium income and reduced expenses that permit it to do so. If TWIA lacks the will or money to keep stuffing the CRTF, however, pre-event bonds become a classic debt trap in which the principal balance grows until it becomes unmanageable. Let's look at the advantages and disadvantages of pre-event bonds by examining the Crump-Norman plan for TWIA reform.

A key concept behind the Crump-Norman plan is for TWIA immediately to bulk up its catastrophe reserve trust fund (CRTF) to a far larger sum than it has today — $2 billion — and to keep its value at that amount or higher for the foreseeable future. That way, if a mid-sized tropical cyclone hits, TWIA does not need to resort to post-event bonds. It already has cash on hand. The problem, as the Zahn plan, the Crump-Norman plan and any other sensible plan would note, is that TWIA simply cannot snap its fingers today and bulk up its CRTF to $2 billion without asking somebody for a lot of money. Policyholders would probably have to face a 400% or 500% premium surcharge for a year in order to do so, and I can't see the Texas legislature calling for that. But perhaps TWIA can prime the CRTF by borrowing the money from investors, promising them a reasonable rate of return (maybe 5%) and assuring them that TWIA will be able to use future premium income to repay the bonds.

Each year, TWIA commits, insofar as possible, to stuff a certain amount of money from premium revenues — perhaps $120 million — into the CRTF, earn interest on the fund at a low rate (maybe 2%), pay the bondholders their 5% interest and amortize the bonds so that they could be paid off in, say, 20 years. If there are no major storms, the CRTF should grow and there is no need to borrow any more money. The strategy will have worked well, providing TWIA and its policyholders with security at a cost far lower than it would likely get through mechanisms such as reinsurance. If there are major storms, however, the CRTF can shrink and TWIA can be forced to borrow more to pay off the earlier investors and restore the CRTF to the desired $2 billion level. The Outstanding Principal Balance on the bonds grows. And, of course, if there are enough storms, the Outstanding Principal Balance can continue to grow until it becomes mathematically impossible for TWIA to service the debt out of premium income. Even before that point, investors are likely to insist on higher interest rates due to the risk of default. In the end, TWIA is insolvent, its policyholders left to mercy rather than contract.
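To make the mechanics concrete, here is a minimal sketch of that year-by-year dynamic. The $2 billion target, the $120 million annual contribution, the 2% fund yield and the 5% bond rate come from the description above; the amortization scheme, the rule that TWIA re-borrows each year to restore the fund to its target, and the compound Poisson loss model are simplifying assumptions of mine, not features of the Crump-Norman plan itself.

(* a sketch of the CRTF / pre-event bond dynamics; parameters and rules as hedged above *)
simulateCRTF[losses_List, contribution_, crtfRate_, bondRate_, target_] :=
 FoldList[
  Function[{state, loss},
   Module[{crtf = state[[1]], opb = state[[2]], payment},
    crtf = crtf (1 + crtfRate) + contribution - loss;   (* fund earns interest, gets the contribution, pays claims *)
    payment = opb*bondRate + opb/20;                    (* bond interest plus rough 20-year amortization *)
    opb -= opb/20;
    crtf -= payment;                                    (* debt service comes out of the fund *)
    If[crtf < target, opb += target - crtf; crtf = target];  (* re-borrow to restore the fund to target *)
    {crtf, opb}]],
  {target, target},                                     (* start by borrowing the full $2 billion to prime the CRTF *)
  losses]

thirtyYears = RandomVariate[
   CompoundPoissonDistribution[0.54, WeibullDistribution[0.42, 177000000]], 30];
simulateCRTF[thirtyYears, 120*10^6, 0.02, 0.05, 2*10^9] // TableForm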

On what does this risk of insolvency depend? There certainly can be a happy ending. Basically, it depends on three factors: (1) the amount TWIA stuffs into the CRTF each year; (2) the spread between the interest TWIA earns on the CRTF and the interest rate it pays to bondholders; and (3) the claims TWIA has to pay due to large storms. I've attempted to illustrate these relationships with the interactive elements below. Of course, you'll need to download the free Wolfram CDF Player in order to take advantage of their interactive features. But once you do, here is what I think you will see.

(1) Pre-event bonds are risky. Different 100-year storm profiles result in wildly different trajectories for the CRTF and the Outstanding Principal Balance. That is perhaps why they are cheaper than reinsurance: the risk of adverse events is borne by the policyholder (here TWIA) rather than absorbed by the reinsurer. If the reinsurance market is dysfunctional enough — as indeed I have suggested it may be in this instance — then self-insurance through pre-event bonds may indeed be preferable to the alternatives.

(2) Small changes in things such as the interest rate end up making a big difference in the expected trajectories of the CRTF and the Outstanding Principal Balance. For simplicity, I've modeled those interest rates as constants, but in reality one should expect them to change in response to macroeconomic forces as well as the perceived solvency of TWIA.

(3) Small changes in the commitment TWIA makes to the CRTF matter a lot. A difference of a few percent can determine whether the Outstanding Principal Balance on the pre-event bonds remains manageable or whether the bonds become the overused credit card of the Texas public insurance world — a debt trap. Pre-event bonds may work better where policyholders understand that they may be subject to special assessments — unfortunately following a costly storm — in order to prevent a deadly debt spiral. So long as we want to rely heavily on pre-event bonds, laws need to authorize this harsh medicine. Ideally, careful actuarial studies should be done — by people who make it their full-time job — to get the best possible handle on the tradeoffs between the amount put in and the risk of insolvency. The unfortunate truth, however, is that some of the underlying variables — such as storm severity and frequency — are sufficiently uncertain that I suspect no one will know the actual values with much greater certainty than I have presented.

(4) Luck helps. My interactive tool provides you with 20 different 100-year storm sets. They’re all drawn from the same underlying distribution. They are just different in the same way that poker hands are usually different even though they are all drawn from the same deck. If storms are somewhat less than predicted or the predictions are too pessimistic, pre-event bonds have a far better chance at succeeding than if one gets unlucky draws from the deck or the predictions are too optimistic. Unfortunately, as the debate over climate change shows, disentangling luck from modeling flaws is difficult when one only has a limited amount of history to examine.

[WolframCDF source=”http://catrisk.net/wp-content/uploads/2012/12/crtfopbcrumpnorman.cdf” CDFwidth=”550″ CDFheight=”590″ altimage=”file”]


An interactive hurricane damage model for Texas

For most people, hurricane modeling is kind of a black box. Various experts set forth figures on the distribution of losses or statistics derived from those distributions. You pretty much have to take their word on it. I think policy discussions are better when the data is more transparent. So, a few weeks ago I sent a public information request to the Texas Windstorm Insurance Association asking for the raw data that they used to model hurricane losses. TWIA cooperated and sent back about 30 megabytes worth of data.

So, I'm now able to create an interactive tool that lets you model the losses suffered by the Texas Windstorm Insurance Association from tropical storms. To run the tool, you will need to get the free CDF plug-in from Wolfram Research. Once you have the plug-in, you can run any CDF file. CDF is basically like PDF, except that it permits interaction.

[WolframCDF source=”http://catrisk.net/wp-content/uploads/2012/12/analyzing-air-2011-data-for-catrisk.cdf” CDFwidth=”630″ CDFheight=”790″ altimage=”http://catrisk.net/wp-content/uploads/2012/12/air-data-static.png”]

Once you have the tool, you can do many things.

You can use the landfall county control to choose a county in which the storm first makes landfall. Notice that some of the counties are outside Texas because storms may first make landfall in, say, Louisiana but then pass over Texas and do damage here.

You can restrict the storms under consideration to various strength levels. I’m not sure, honestly, how AIR classifies tropical storms that don’t make hurricane strength. Perhaps they list them as Category 1. Or perhaps — and this would result in an underestimate of damage — they don’t list them at all.
You can also limit yourself to major hurricanes (category 3 or higher) or non-major hurricanes (categories 1 and 2).

You then get some fine control over the method of binning used by the histogram. If you’re not an expert in this area, I’d leave these two controls alone. In the alternative, play with them and I think you will get a feel for what they do. Or you can check out documentation on the Mathematica Histogram command here.

You then decide whether you want the vertical scale to be logarithmic or not. If some of the bin heights are very small, this control helps you see them. If you don’t remember what a logarithm is, you might leave this control alone.

Finally, you choose what kind of a histogram you want to see. Common choices might be a Count or an Exceedance Curve (Survival Function).
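If you want to experiment with these last two choices outside the CDF tool, here is a toy stand-in in Mathematica; the losses below are made-up draws from a lognormal, not the AIR data.

(* a toy version of the histogram-type control, using made-up loss data *)
toyLosses = RandomVariate[LogNormalDistribution[19, 2], 500];
Histogram[toyLosses, 20, "Count"]           (* plain counts *)
Histogram[toyLosses, 20, "SurvivalCount"]   (* exceedance curve: how many losses exceed each level *)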

The tool then produces the histogram you have requested and generates a number of useful statistics. Here’s a guide to the six rows of data.

Row 1: This is the mean loss from storms meeting your selection criteria.
Row 2: This is the mean annual loss from the types of storms you have selected. This number will be lower than the mean storm loss because Texas (and all of its subdivisions) averages less than one storm per year. Many years there are no storms.
Row 3: This is the worst loss from 100 storms. Note again, this is NOT the mean loss in 100 years. Some years have no storms; occasionally some years feature multiple storms.
Row 4: The AIR method for generating storms can be well approximated by a Poisson distribution. Here, we find the member of the Poisson family that best fits the annual frequency data for the selected storms.
Row 5: The AIR method for generating storms can be decently approximated most of the time by a LogNormal distribution. Here, we find the member of the LogNormal family that best fits the loss data for the selected storms.
Row 6: I can create a new distribution that is the product of a draw from the Poisson distribution and the LogNormal distribution. I can then take 10,000 draws from this distribution and find the size of the annual loss that is higher than 99% of all the annual losses. This lets me approximate the 1 in 100 year loss. Notice that this number will move around a bit every time you tinker with the controls. That’s because it is using an approximation method based on random draws. Every time you change a control, new random draws occur. Still, it gives you a feel for that dreaded 1 in 100 year annual loss.
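Here is a minimal sketch of the Row 6 calculation. The Poisson frequency echoes the 0.54 storms-per-year figure used elsewhere on this blog, and the lognormal parameters are hypothetical stand-ins for whatever Rows 4 and 5 report for your particular selection.

(* a sketch of the Row 6 approximation with hypothetical fitted parameters *)
freq = PoissonDistribution[0.54];            (* annual storm frequency *)
sev = LogNormalDistribution[18.5, 2.1];      (* per-storm loss severity (hypothetical fit) *)
annualDraw := RandomVariate[freq]*RandomVariate[sev];   (* the "product" approximation of an annual loss *)
Quantile[Table[annualDraw, {10000}], 0.99]   (* approximate 1-in-100-year annual loss *)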

If people have additional features they want added to this tool, please let me know. I may be able to modify it or build a new tool with related capabilities.

Catastrophe insurance and the case for compulsory coinsurance

The effect of coinsurance

This entry presents an interactive tool by which you can study the effects of "coinsurance" on expected losses from catastrophe. The short version is that coinsurance can, under the right circumstances, significantly reduce expected losses from tropical cyclones. As such, legislatures in coastal states, including Texas, should strongly consider prohibiting subsidized insurers such as TWIA from selling windstorm insurance policies unless the policies carry a significant amount (say 10%) of coinsurance. The rest of this blog entry explains why and demonstrates the tool.
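As a preview of the intuition, here is a toy sketch with made-up functional forms: the insured picks the precaution level that minimizes her own costs, and a coinsurance share makes her bear part of any loss, so she takes more precautions and expected losses fall.

(* a toy illustration of the mechanism, with made-up cost and loss functions *)
expectedLoss[a_] := 100000 Exp[-3 a];    (* expected storm loss falls as precaution a rises *)
precautionCost[a_] := 20000 a;           (* precaution is costly *)
chosenPrecaution[coins_] := NArgMin[{precautionCost[a] + coins expectedLoss[a], 0 <= a <= 1}, a];
Table[{coins, expectedLoss[chosenPrecaution[coins]]}, {coins, {0, 0.1, 0.25}}]
(* with 0% coinsurance the insured takes no precautions; 10% coinsurance already cuts expected losses substantially *)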


Fractional order distributions using beta-weighted expected quantiles

In an earlier post, I discussed how decisions under uncertainty that can be characterized by a statistical distribution might be evaluated not just using "spectral measures" but also by using an order distribution. The idea, in a nutshell, was that a risk-averse person would evaluate a situation involving uncertainty as if she were making N draws from a distribution and being asked to pick the worst outcome. "N" becomes a measure of risk aversion: the higher the "N," the more likely that one of your draws is going to be really bad. This methodology had the virtue of being perhaps more comprehensible than spectral measures. A disadvantage, however, was that it permitted only integer measures of risk aversion, in which one was restricted to discrete numbers of draws.

What I have now recognized — and I believe this may be what Wolfram Research's Oleksandr Pavlyk was trying to communicate to my less expert self a few weeks ago — is that the mean of an order distribution of some distribution F is, at least often, the expectation of the quantile function of that distribution weighted by a beta distribution in which one of the parameters takes on a value of 1. And, if this is the case, one can emulate fractional order distributions by letting the other parameter of the beta distribution be something other than an integer.
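Here is a minimal sketch of the equivalence in Mathematica. I treat the draws as losses, so the "worst" of the N draws is the largest, which corresponds to a Beta(N, 1) weighting; on a scale where worse means smaller, the roles of the two beta parameters flip. The Weibull parameters simply echo the severity distribution used elsewhere on this blog.

(* worst (largest) of n draws, computed directly from the order distribution and as a
   beta-weighted expected quantile; the two numbers should agree up to numerical error *)
dist = WeibullDistribution[0.42, 1.77*10^8];
n = 4;   (* integer risk aversion *)
viaOrderDist = NExpectation[x, x \[Distributed] OrderDistribution[{dist, n}, n]];
viaBetaWeight = NExpectation[InverseCDF[dist, u], u \[Distributed] BetaDistribution[n, 1]];
{viaOrderDist, viaBetaWeight}
(* a fractional "order": let the non-unit beta parameter be non-integer *)
NExpectation[InverseCDF[dist, u], u \[Distributed] BetaDistribution[3.5, 1]]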


Reparameterizing the beta distribution

Just as one can reparameterize the lognormal distribution, which is frequently used in the economic analysis of insurance and, derivatively, insurance regulation, so too can one reparameterize the beta distribution, which is also frequently used. If [latex]\mu[/latex] is the mean of the beta distribution and [latex]\kappa[/latex] is the fraction (from 0 to 1) of the maximum possible standard deviation of the beta distribution given its mean, then one can write a modified beta distribution as follows:

Note: corrects sign error in earlier version.

Reparameterized beta distribution
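The formula itself appears only in the image above, so here is my own reconstruction of one standard way to carry out the reparameterization. It assumes the usual convention that the largest standard deviation attainable at mean [latex]\mu[/latex] is [latex]\sqrt{\mu(1-\mu)}[/latex], and it may differ in detail from the form shown in the figure.

(* a sketch: reparameterize BetaDistribution by its mean mu and by kappa, the fraction of the
   maximum possible standard deviation Sqrt[mu (1 - mu)] attainable at that mean *)
reparamBeta[mu_, kappa_] :=
  BetaDistribution[mu (1/kappa^2 - 1), (1 - mu) (1/kappa^2 - 1)]

(* quick check: the mean and the sd/(max sd) ratio come back out *)
With[{d = reparamBeta[0.3, 0.5]}, {Mean[d], StandardDeviation[d]/Sqrt[0.3 (1 - 0.3)]}]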

Two quick ideas on the mathematics of risk aversion

En route to my forthcoming post on the effects of coinsurance and deductible regulation on optimal precautions against catastrophic risk, I've confronted some fundamental problems in comparing the value insureds ascribe to statistical distributions. I'm going to post about two of those confrontations here.


It’s (close to) a Weibull — again!

You recall that in my last post, I went through an involved process of showing how one could generate storm losses for individuals over years.  That process, which underlies a project to examine the effect of legal change on the sustainability of a catastrophe insurer, involved the copulas of beta distributions and a parameter mixture distribution in which the underlying distribution was also a beta distribution. It was not for the faint of heart.

One purpose of this effort was to generate a histogram that looks like the one below, which shows the distribution of scaled claim sizes for non-negligible claims. This histogram was obtained by taking one draw from the copula distribution for each of the [latex]y[/latex] years in the simulation and using it to constrain the distribution of losses suffered by each of the [latex]n[/latex] policyholders in each of those [latex]y[/latex] years. Thus, although the underlying process created a [latex]y \times n[/latex] matrix, the histogram below is for a single "flattened" vector of the [latex]y \times n[/latex] values.

Histogram of individual scaled non-negligible claim sizes

But, if we stare at that histogram for a while, we recognize the possibility that it might be approximated by a simple statistical distribution. If that were the case, we could simply use the simple statistical distribution rather than the elaborate process for generating individual storm loss distributions. In other words, there might be a computational shortcut that could approximate the elaborate process. To get the experience of all [latex]n[/latex] policyholders — including those who did not have a claim at all — we could just upsample random variates drawn from our hypothesized simple distribution and add zeros; alternatively, we could create a mixture distribution in which, most of the time, one draws from a distribution that is always zero and, when there is a positive claim, one draws from this hypothesized simple distribution.
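As a sketch of that shortcut (with stand-in data rather than the simulation output, and a made-up claim probability), one can fit a Weibull to the positive scaled claims and then zero-inflate it:

(* fit a Weibull to the positive scaled claims and zero-inflate it for policyholders with no claim *)
positiveClaims = RandomVariate[WeibullDistribution[0.6, 0.05], 5000];   (* stand-in for the simulated claims *)
fitted = EstimatedDistribution[positiveClaims, WeibullDistribution[k, lambda]];
pClaim = 0.1;    (* hypothetical probability that a policyholder has a non-negligible claim *)
drawClaims[m_] := Table[If[RandomReal[] < pClaim, RandomVariate[fitted], 0.], {m}];
drawClaims[10]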
