Another confusing graphic from TWIA

At yesterday's hearing of the House Insurance Committee, leadership of the Texas Windstorm Insurance Association presented a packet of three graphics to the members.  These graphics joined a crucial presentation created for today's meeting of TWIA's board of directors that falsely identified how post-event bonds would be paid off under existing law. Together, they paint a picture of an insurer less interested in forthrightly informing legislators than in trying to preserve its troubled existence.

As discussed in blog entries yesterday, the first two of those graphics were highly confusing if not downright misleading.  The first graphic, which purported to project TWIA going from insolvency to a positive surplus position over the next two years, simply assumed away the possibility of any large claims being made against TWIA. What insurer can't improve its position when no large claims are filed? There would be little need for solvency regulation if insurers never incurred large claims.

The second graphic purported to show that TWIA could pay off the $500 million it wants to borrow because its underwriting profit exceeds its debt service.  Again, however, TWIA simply assumed that it would not have any large claims that would deplete its profit.  Lots of people who borrow too much on their credit cards would have been able to pay them off if nothing else had happened.  Problem is, stuff happens.  And then the borrower can't pay.

Today, I want to tackle the third graphic submitted to the House Insurance Committee by TWIA.  It appears to continue what may be a pattern of misleading visual information.  Here’s the picture.

TWIA’s stack size with misleading information on 2008 and failure to adjust for exposure

Let's look at that left-hand bar, the one for 2008.  It would appear to show that TWIA had only a $2.1 billion stack for that year.  I believe legislators were supposed to take comfort from the fact that the stack for 2013, with or without $500 million in pre-event borrowings known as a BAN (bond anticipation note), is actually higher than that amount.

But the stack appeared to be only $2.1 billion for 2008 because TWIA staff had simply not counted the leading source of protection for TWIA that existed at that time. TWIA's graph simply deleted the key fact: TWIA had the ability to make unlimited assessments against the insurance industry.  TWIA thus had essentially a 100% guaranty of being able to pay claims. A proper picture would have had another gray bar, showing this ability to assess, extending high above the purple one.  That gray bar would be just like the $230 million one shown lower in the stack.  It would be just like the bar I am confident TWIA would have stacked on top of the 2013 column if such an ability existed today.  I can think of absolutely no good reason why this bar should have been eliminated from the graphic.

Now, it is true that TWIA disclosed in a fine-print footnote to the graphic "unlimited additional funding available via reimbursable assessments."  Do you see it?  Look after the semicolon in the first line at the bottom. But, again, that's relegating the central point to a footnote and leaving the big graphic in a highly misleading condition.

I’ve got two other problems, by the way, with the graphic.

Stack size is best measured relative to exposure

Comparisons of stack size can be misleading without taking exposure into consideration.  A stack of $2.7 billion might be just fine if TWIA had exposure of $20 billion, whereas a bigger stack of $3.5 billion might be inadequate if TWIA had exposure of $80 billion.  Once we adjust for exposure, the picture becomes a little less cheery.  In 2008, for example, when TWIA's stack was essentially unlimited, TWIA's direct exposure was $64 billion. Today, it is about $75 billion. Having a $2.1 billion stack in 2008 is thus roughly the same as having a $2.46 billion stack today.
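
To make the adjustment explicit, scaling the 2008 stack by the growth in exposure gives [latex]2.1 \times \frac{75}{64} \approx 2.46[/latex] (in billions), which is where the $2.46 billion figure comes from.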

There is no apparent reason to have ignored Hurricane Dolly payments

I do not understand why TWIA excluded money paid for Hurricane Dolly from the 2008 stack.  That would have made it a little higher.  I suppose the purpose was to contrast preparedness for Ike with preparedness for today's storms.  Again, though, that seems a peculiar choice. In assessing TWIA's ability to pay claims, the more relevant comparator is the size of the stack at the start of each hurricane season. Had TWIA done this properly, the stack in 2008 — the year of Ike — would have been much higher even without consideration of TWIA's ability at the time to make unlimited assessments.

How much will TWIA pay on losses?

How much will TWIA pay on losses? With the information about reinsurance discussed yesterday, we now have a pretty good sense of what the Texas Windstorm Insurance Association's finances are going to look like this summer. That lets us build an interactive property insurance calculator for Texas that shows how much your claim against TWIA might be worth. Yes, if there's a special session, all of this could change, but unless there is a special session and the bill that emerges out of it passes by a two-thirds vote of both Houses, it won't take effect until deep into this hurricane season anyway. So, in the meantime, what I'm about to show should be quite useful.

If you install a CDF player, which you can get for free right here, an interactive widget will appear below that produces a pie chart showing the percent of claims that TWIA will be able to pay (green) and the percent that it will not (red).  You get to vary the size of the Catastrophe Reserve Trust Fund, the amount of Class 2 (or Class 2 alternative) bonds TWIA will sell, and, if all the Class 2 bonds are sold, the amount of Class 3 bonds that will sell.  You also get to choose the size of the loss TWIA will face. Or, if you want to watch a movie that shows how the percentages are affected by varying these parameters, click the little arrow in the top right of the widget. The output is an easy-to-read pie chart that shows how much TWIA will be able to pay.

[WolframCDF source="http://catrisk.net/wp-content/uploads/2013/06/twia-cents-on-the-dollar.cdf" width="600" height="515" altimage="http://catrisk.net/wp-content/uploads/2013/06/Screenshot_6_7_13_12_24_PM.png" altimagewidth="553" altimageheight="580"]

For those who don’t want to install a CDF player, here’s a snapshot showing some sample output.

An example of how much TWIA might pay: output from the interactive property insurance calculator for Texas

S.B. 1700 in stark pictures

According to newspaper accounts here and here, S.B. 1700 is heading for a vote in the Texas Senate this week.  Before the Senate votes on the bill or the House Insurance Committee considers the matter, I hope they have some understanding of how radically it transfers wealth to TWIA/TRIP policyholders from people who do not have TWIA policies. I also hope legislators understand that although a $4 billion funding stack is definitely an improvement over the status quo, there is still a significant risk to the coast.  And I also hope they understand that the TWIA/TRIP depopulation plan, which would in theory be a good idea, has about as much chance of success without giant changes to TWIA and TRIP that will greatly anger coastal residents as a plan to depopulate Texas itself.

Here are some pictures that I hope aid understanding.

The Funding Stack

Here’s a picture of the TWIA funding stack for 2013 under S.B. 1700. For each element of the stack, I’ve shown who actually pays for that layer of responsibility.

Labeled[
 BarChart[{180, 500, 500, 500, 500, 1000, 800},
  ChartLayout -> "Stacked",
  ChartLabels -> Placed[{"Catastrophe Reserve Trust Fund (TWIA premiums)",
     "Class 1 Assessments (Texas insureds)",
     "Class 1 Securities (Coastal insured surcharges)",
     "Class 2 Assessments (Texas insureds)",
     "Class 2 Securities (Coastal insured surcharges)",
     "Baseline Reinsurance (TWIA premiums)",
     "Insurer Purchased Reinsurance (Texas insureds)"}, Center],
  BaseStyle -> {FontSize -> 11, FontFamily -> "Swiss", LineIndent -> 0},
  ChartStyle -> Map[Lighter@ColorData[61][#] &, Range[8]]],
 Style["TWIA Funding Stack for 2013\n(Numbers in Millions)",
  {FontSize -> 11, FontFamily -> "Swiss", LineIndent -> 0}]]

TWIA Funding Stack for 2013 under SB 1700

Distribution of expected responsibility

Here’s a pie chart based on a 10,000 year storm simulation showing how much each layer of responsibility would expect to pay under S.B. 1700. There are several features of this graph worth noting.  First, note that TWIA policyholders have paid only for the modest dark red wedge at the left and the orange baseline reinsurance at the bottom left.  That is less than half of the expected payments.  (Yes, they pay a modest portion of the coastal insured surcharges too, but we don’t know how much).  Also notice the large cherry red wedge of unfunded losses.  Although the stack goes up to $4 billion or so under this bill for 2013, and although insolvency now occurs in perhaps 1.5% of the years (26% over 20 years), when insolvency occurs, it is a huge amount of money that is unfunded.
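
For the curious, the 20-year figure follows directly from the annual figure: [latex]1-(1-0.015)^{20}\approx 0.26[/latex], or about a 26% chance of at least one such insolvency-producing year over two decades.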

By Layer

Framed@Labeled[
  PieChart[Mean /@ Through[funcs[rv]],
   ChartLabels -> Placed[Map[Pane[#, 144] &,
      {"Catastrophe Reserve Trust Fund and operating funds (TWIA premiums)",
       "Class 1 Assessments (Texas insureds)",
       "Class 1 Securities (Coastal insured surcharges)",
       "Class 2 Assessments (Texas insureds)",
       "Class 2 Securities (Coastal insured surcharges)",
       "Baseline Reinsurance (TWIA premiums)",
       "Insurer Purchased Reinsurance (Texas insureds)",
       "Unfunded losses"}], "RadialCallout"],
   ChartLegends -> Placed[{"Catastrophe Reserve Trust Fund and operating funds (TWIA premiums)",
      "Class 1 Assessments (Texas insureds)",
      "Class 1 Securities (Coastal insured surcharges)",
      "Class 2 Assessments (Texas insureds)",
      "Class 2 Securities (Coastal insured surcharges)",
      "Baseline Reinsurance (TWIA premiums)",
      "Insurer Purchased Reinsurance (Texas insureds)",
      "Unfunded losses"}, Bottom],
   ChartStyle -> Map[ColorData[61][#] &, Range[8]], ImageSize -> 580,
   ImagePadding -> {{90, 100}, {20, 20}},
   BaseStyle -> {FontSize -> 11, FontFamily -> "Swiss"}],
  Style["Distribution of expected loss payments by layer",
   {FontSize -> 14, FontFamily -> "Swiss", FontWeight -> Bold}]]

Expected loss payments by layer based on 2013 stack

By Source

We can group the expected payments shown above so that we simply have expected payments by source.  Here is that graph.  Notice again that TWIA policyholders pay little more under this scheme than either Texas insurers (who will surely pass the cost on to non-coastal Texas insureds) or coastal insureds, many of whom have already paid for non-TWIA wind policies. And, again, notice the large chunk of unfunded losses that exists under S.B. 1700.

With[{wedges =
    With[{t = {#[[1]] + #[[6]], #[[2]] + #[[4]] + #[[7]], #[[3]] + #[[5]], #[[8]]} &[
       Mean /@ Through[funcs[rv]]]}, t/Total[t]]},
 Framed@Labeled[
   PieChart[wedges,
    ChartLabels -> Placed[Map[Pane[#, 144] &,
       {"TWIA premiums", "Texas insurers (insureds)", "Coastal insureds",
        "Unfunded losses"}], "RadialCallout"],
    ChartStyle -> Map[ColorData[61][#] &, Range[4]], ImageSize -> 580,
    ImagePadding -> {{90, 100}, {20, 20}},
    BaseStyle -> {FontSize -> 11, FontFamily -> "Swiss"}],
   Style["Distribution of expected loss payments by layer responsibility",
    {FontSize -> 14, FontFamily -> "Swiss", FontWeight -> Bold}]]]

Distribution of expected loss payments by layer responsibility under SB 1700

By Cash Payments

There's another way to look at S.B. 1700.  Don't focus on the source of expected loss payments. Focus instead on the source of expected cash flow.  The two are not the same because large chunks of cash flow get lost in TWIA/TRIP overhead and in paying reinsurers enormous amounts to bear risk (a subject discussed elsewhere). Here's that pie chart.  Notice that TWIA policyholders now shoulder a considerably larger share of the load (about two-thirds). There is still, however, a large chunk of the load picked up by Texas insurers/insureds (14%), coastal insureds (8%) and unfunded losses (9%).  The unfunded losses are a smaller chunk because the denominator for the pie chart is now larger.

Framed@Labeled[
  With[{wedges =
     With[{t = {#[[1]], #[[2]] + #[[4]] + #[[7]], #[[3]] + #[[5]], #[[8]]} &[
        ReplacePart[Mean /@ Through[funcs[rv]], 1 -> 460000000]]}, t/Total[t]]},
   PieChart[wedges,
    ChartLabels -> Placed[Map[Pane[#, 144] &,
       {"TWIA premiums", "Texas insurers (insureds)", "Coastal insureds",
        "Unfunded losses"}], "RadialCallout"],
    ChartStyle -> Map[ColorData[61][#] &, Range[4]], ImageSize -> 580,
    ImagePadding -> {{110, 60}, {20, 20}},
    BaseStyle -> {FontSize -> 11, FontFamily -> "Swiss"}]],
  Style["Distribution of expected cash payments by source",
   {FontSize -> 14, FontFamily -> "Swiss", FontWeight -> Bold}]]

Distribution of expected cash payments for 2013 under SB 1700 by source

Political Power in TRIP

TRIP will be run by a Board of Directors appointed by the Texas Governor.  The graphic below shows the statutory composition of that board under new section 12 of S.B. 1700 (2210.102). Notice the little wedge representing non-seacoast interests.  Hopes that the board will take steps to protect non-coastal Texans from having their wealth transferred to the coast would thus seem very optimistic.  Also notice how the southern areas of the Texas coast, which have less population and less insured property than the northern areas, have equal political power on the board.  This is not a one house (or one premium dollar) / one vote system.

Labeled[
 Framed@Labeled[
   PieChart[{3, 1, 1, 1, 1, 1, 1}, ImageSize -> 200,
    ChartLegends -> Map[Pane[Style[#, {FontSize -> 11, FontFamily -> "Swiss",
          LineIndent -> 0}], 216] &,
      {"insurance industry representatives who write wind/hail in first tier coastal counties",
       "Cameron-Kenedy-Kleberg-Willacy representative",
       "Aransas-Calhoun-Nueces-Refugio-San Patricio representative",
       "Brazoria-Chambers-Galveston-Jefferson-Matagorda-Harris representative",
       "non-seacoast member",
       "engineer from second tier coastal county",
       "financial industry second tier coastal county"}],
    BaseStyle -> {FontSize -> 11, FontFamily -> "Swiss", LineIndent -> 0}],
   Map[Style[#, {FontSize -> 11, FontFamily -> "Swiss", LineIndent -> 0}] &,
    {"TRIP Board of Directors",
     "With ex-officio members: elected official from southern seacoast, elected official from northern seacoast, elected official from non-seacoast"}],
   {Bottom, Top}],
 Style["Political Power in TRIP", {FontSize -> 11, FontFamily -> "Swiss",
   LineIndent -> 0, FontWeight -> Bold}], Top]

Board of Director membership in TRIP

The Depopulation of TWIA/TRIP

One of the concepts in SB 1700 is that TWIA/TRIP should be "depopulated" by reducing its total insured exposure (currently over $75 billion).  Great. The bill does not, however, come with a magic wand with which to accomplish this task. The only tool it provides is a club that threatens the insurance industry with a collective $200 million assessment that goes into an "exposure reduction plan fund" if the 2016 target of a 20% reduction from 2013 levels is not met.  It places insurers in a bit of a prisoner's dilemma and creates a lot of litigation-fomenting administrative discretion on this point by saying that the assessment will only be levied against an insurer that "as determined by the [TRIP] board of directors, has not met the member's proportionate responsibility for reduction of the association's total insured exposure."  So, if all other insurers have started selling insurance — presumably at a major loss — on the coast using TWIA or sub-TWIA rates, an insurer left refusing to sell insurance on the coast might find itself with a very hefty bill even if it has just a modest share of the Texas property-casualty market.  And this, I take it, is the whole point behind the clever section 2210.212 of the bill.

I suspect, however, that the $200 million assessment will be unlikely to lure many insurers back to the coast.  There is going to be a first-mover problem.  If very few large insurers choose to avoid the 2210.212 club by selling on the coast, the assessment gets spread over the many insurers that held back, and no single insurer ends up paying a very large 2210.212 assessment. Question for any other lawyers (or law students) reading this entry: would it violate federal antitrust laws, as modified by the McCarran-Ferguson Act, for insurers collusively to refuse to sell; would it violate Texas law?

The other point — and this is the one to which the picture below relates — is that the reduction targets are ambitious.  Although they are stated as reductions from the 2013 status quo, they will in fact be larger.  That’s because TWIA/TRIP is likely to continue growing at significant rates.  Thus, to make a 20% cut from the 2013 status quo, one needs to make perhaps a 30% cut from the 2016 expected status quo. The graph below illustrates this point by comparing 3% TWIA growth to the depopulation targets stated in section 2210.212.
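
A rough calculation, using the same 3% growth rate assumed in the chart below, shows why. By 2016 exposure would be about [latex]1.03^3 \approx 1.09[/latex] times the 2013 level, so meeting the 80%-of-2013 target means shedding exposure equal to roughly [latex]1.09-0.8 \approx 0.29[/latex] of the 2013 book, a cut approaching 30% relative to where TWIA/TRIP would otherwise be.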

Labeled[
 Show[
  DateListPlot[{{"January 1, 2013", 1}, {"January 1, 2016", 1.03^3},
    {"January 1, 2018", 1.03^5}, {"January 1, 2020", 1.03^7},
    {"January 1, 2022", 1.03^9}, {"January 1, 2024", 1.03^11}},
   PlotRange -> {0, 1.4}, PlotMarkers -> Automatic, PlotStyle -> Green,
   FrameLabel -> {"Time", "Total Insured Exposure As Fraction of 2013"},
   BaseStyle -> {FontSize -> 11, FontFamily -> "Swiss"},
   Epilog -> {Arrow[{{3.6238320000000005*^9, 0.28615669133896926},
       {3.6578745686249995*^9, 0.7181793832820529}}],
     Inset[TextCell["Assessment of $200\nmillion if not reached",
       GeneratedCell -> False, CellAutoOverwrite -> False,
       CellBaseline -> Baseline, TextAlignment -> Left],
      {3.588546672*^9, 0.19378500614472127}, {Left, Baseline},
      Alignment -> {Left, Top}]}],
  DateListPlot[{{"January 1, 2013", 1}, {"January 1, 2016", 0.8},
    {"January 1, 2018", 0.65}, {"January 1, 2020", 0.55},
    {"January 1, 2022", 0.45}, {"January 1, 2024", 0.4}},
   PlotMarkers -> Automatic, PlotStyle -> Red]],
 Style["Natural Growth of TWIA/TRIP (green) compared to 2210.212 \"requirements\" (red)",
  {FontSize -> 11, FontFamily -> "Swiss"}]]

Natural growth of TWIA/TRIP compared to 2210.212 requirements

My final picture is of Albus Dumbledore and the most powerful wand in the universe: the Elder Wand.  I show it because, I suspect, that is what it is going to take for TRIP to actually accomplish the targets set forth in the legislation without infuriating the very political constituencies that have, with SB 1700, again kicked the fundamental problems of catastrophic risk transfer down the road.

The Elder Wand

Perhaps the only thing that will actually be able to implement the SB 1700 targets without infuriating coastal Texans

TRIP could raise premiums drastically to market rates.  That would likely reduce total insured exposure, but somehow I don't think that is the idea in the legislation. It could refuse to take on new customers. Imagine the squeals that will produce. It could do what I have suggested for years and refuse to insure beyond some basic amount and rely on market-provided excess insurance for the rest. To do so to the extent of the targets contained in SB 1700 will likely require that excess policies kick in at about $100,000.  Again, I have doubts that this is what the proponents of this legislation have in mind. Or, finally, TRIP could just realize that it's impossible to reduce total insured exposure without taking steps that are going to be extremely unpopular with the very constituencies that put forth this bill. It could, instead, giggle. It could recognize that the "must" language in the bill is basically a legislative joke — a pretext for extracting in disguise another $200 million out of Texas insureds throughout the state to subsidize, yet again, coastal property, owned by poor and wealthy alike.

Study shows Coastal Taskforce Plan requires more than 50% subsidization

The Coastal Taskforce Plan recently endorsed by several coastal politicians would require people other than TWIA policyholders massively to subsidize TWIA — perhaps paying more than 60% of expected losses from tropical cyclones. That is the result of a study I have conducted using hurricane modeling software. As shown in the pie chart below, the study shows that only about 38% of the payouts come from TWIA premiums. The rest comes from Texas insurers (26%), policyholders of all sorts in 13 coastal counties and Harris County (21%), insureds located throughout Texas (8%), and the State of Texas itself (7%). These figures are based on running a 10,000 year storm simulation based on data created by leading hurricane modeler AIR and obtained through a public records request.  The figures are also based on my best understanding of the way in which the Coastal Taskforce plan would operate, although certain aspects of the plan remain unclear and additional clarification would help.
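
(For the record, those non-premium shares sum to [latex]26\%+21\%+8\%+7\%=62\%[/latex], which is the source of the more-than-60% figure above.)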

Expected Distribution of Sources for TWIA Payouts Due to Losses from Tropical Cyclones

Expected Distribution of Sources for TWIA Payouts Due to Losses from Tropical Cyclones (Sharing)

Who pays for hurricane losses under the Coastal Windstorm Task Force plan?

The plan put forward by the Coastal Windstorm Task Force led by Charles Zahn and now endorsed by at least two Texas coastal politicians will likely cause much of the money paid out by the Texas Windstorm Insurance Association to come not from premiums paid by TWIA insureds but from subsidies forcibly exacted from insureds throughout Texas and Texas insurers. Indeed, premiums paid by TWIA insureds may end up amounting to less than half of the money used to pay losses suffered by TWIA policyholders from tropical cyclones.

The chart below is my best understanding as to how the funding structure works.

Coastal Task Force Responsibility Chart Assuming Sharing within Layers

The horizontal axis on this graph shows the size of the loss potentially suffered by TWIA policyholders as the result of a tropical cyclone. The vertical axis shows the percentage of responsibility for a loss of that size.  Thus, non-TWIA policyholders in the 13 coastal counties and Harris County, which is apparently lumped in, pay for significant portions of losses less than about $2.6 billion. Insureds throughout Texas pay via premium surcharges for all losses in excess of about $4.4 billion.  See the little blue rectangles? Those are the relatively small amounts that TWIA policyholders actually pay for tropical cyclone losses. The rest is paid for by people who are not necessarily TWIA insureds. They pay it regardless of whether they are — as will frequently be the case — significantly poorer than people owning homes on the coast and regardless of whether they own a home or not.

An interactive hurricane damage model for Texas

For most people, hurricane modeling is kind of a black box. Various experts set forth figures on the distribution of losses or statistics derived from those distributions. You pretty much have to take their word on it. I think policy discussions are better when the data is more transparent. So, a few weeks ago I sent a public information request to the Texas Windstorm Insurance Association asking for the raw data that they used to model hurricane losses. TWIA cooperated and sent back about 30 megabytes worth of data.

So, I'm now able to create an interactive tool that lets you model the losses suffered by the Texas Windstorm Insurance Association from tropical storms. To run the tool, you will need to get the free CDF plug-in from Wolfram Research. Once you have the plug-in, you can run any CDF file. CDF is basically like PDF except that it permits interaction.

[WolframCDF source="http://catrisk.net/wp-content/uploads/2012/12/analyzing-air-2011-data-for-catrisk.cdf" CDFwidth="630" CDFheight="790" altimage="http://catrisk.net/wp-content/uploads/2012/12/air-data-static.png"]

Once you have the tool, you can do many things.

You can use the landfall county control to choose a county in which the storm first makes landfall. Notice that some of the counties are outside Texas because storms may first make landfall in, say, Louisiana but then pass over Texas and do damage here.

You can restrict the storms under consideration to various strength levels. I’m not sure, honestly, how AIR classifies tropical storms that don’t make hurricane strength. Perhaps they list them as Category 1. Or perhaps — and this would result in an underestimate of damage — they don’t list them at all.
You can also limit yourself to major hurricanes (category 3 or higher) or non-major hurricanes (categories 1 and 2).

You then get some fine control over the method of binning used by the histogram. If you're not an expert in this area, I'd leave these two controls alone. Alternatively, play with them and I think you will get a feel for what they do. Or you can check out documentation on the Mathematica Histogram command here.

You then decide whether you want the vertical scale to be logarithmic or not. If some of the bin heights are very small, this control helps you see them. If you don’t remember what a logarithm is, you might leave this control alone.

Finally, you choose what kind of a histogram you want to see. Common choices might be a Count or an Exceedance Curve (Survival Function).

The tool then produces the histogram you have requested and generates a number of useful statistics. Here’s a guide to the six rows of data.

Row 1: This is the mean loss from storms meeting your selection criteria.
Row 2: This is the mean annual loss from the types of storms you have selected. This number will be lower than the mean storm loss because Texas (and all of its subdivisions) averages less than one storm per year. Many years there are no storms.
Row 3: This is the worst loss from 100 storms. Note again, this is NOT the same as the 1-in-100-year annual loss. Some years have no storms; occasionally some years feature multiple storms.
Row 4: The AIR method for generating storms can be well approximated by a Poisson distribution. Here, we find the member of the Poisson family that best fits the annual frequency data for the selected storms.
Row 5: The AIR method for generating storms can be decently approximated most of the time by a LogNormal distribution. Here, we find the member of the LogNormal family that best fits the loss data for the selected storms.
Row 6: I can create a new distribution that is the product of a draw from the Poisson distribution and the LogNormal distribution. I can then take 10,000 draws from this distribution and find the size of the annual loss that is higher than 99% of all the annual losses. This lets me approximate the 1 in 100 year loss. Notice that this number will move around a bit every time you tinker with the controls. That's because it is using an approximation method based on random draws. Every time you change a control, new random draws occur. Still, it gives you a feel for that dreaded 1 in 100 year annual loss. (A minimal sketch of this computation appears just below.)
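
Here is a minimal Mathematica sketch of the Row 4 through Row 6 computation. It is my own reconstruction, not the tool's actual code, and it assumes hypothetical inputs annualCounts (the number of selected storms in each simulated year) and stormLosses (the loss from each selected storm).

(* fit a Poisson to annual storm counts and a LogNormal to per-storm losses, then
   approximate the 1-in-100-year annual loss by the 99th percentile of 10,000 draws
   of (annual count) times (a single severity draw) *)
oneInHundred[annualCounts_, stormLosses_] :=
  Module[{pois, logn, draws},
   pois = EstimatedDistribution[annualCounts, PoissonDistribution[\[Lambda]]];
   logn = EstimatedDistribution[stormLosses, LogNormalDistribution[\[Mu], \[Sigma]]];
   draws = RandomVariate[pois, 10000]*RandomVariate[logn, 10000];
   Quantile[draws, 0.99]];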

If people have additional features they want added to this tool, please let me know. I may be able to modify it or build a new tool with related capabilities.

Copulas and insurance law reform

Storm models are crucial to law reform.  One needs them to get a sense of whether premiums are reasonable.  And, as I want to show in a series of blog posts, they can also help figure out the effect of legally mandated changes to the insurance contract.  You need to tie behavior at the level of the individual policyholder to the long-term finances of the insurer. How would, for example, changing the required deductible on windstorm policies issued by the Texas Windstorm Insurance Association affect the precautions taken by policyholders to avoid storm damage?  That's important for many reasons, among them that it affects the sustainability of TWIA. Might the imposition of coinsurance into the insurance contract do a better job of making TWIA sustainable?  These are the kind of questions for which a decent storm model is useful.

So, over the past few weeks I’ve been thinking again about ways in which one could, without access (yet) to gigabytes of needed data, develop approximations of the windstorm damage events likely to be suffered by policyholders.  And I’ve been thinking about ways in which one could parameterize those individual damages as a function of the level of precautions taken by policyholders to avoid damage.

What I’m going to present here is a model of storm damage that attempts to strike a reasonable balance of simplicity and fidelity. I’m afraid there’s a good bit of math involved, but I’m going to do my best here to clarify the underlying ideas and prevent your eyes from glazing over.  So, if you’ll stick with me, I’ll do my best to explain.  The reward is that, at the end of the day, we’re going to have a model that in some ways is better than what the professionals use.  It not only explains what is currently going on but can make predictions about the effect of legal change.

Let's begin with two concepts: (1) "claim prevalence" and (2) "mean scaled claim size."  By "claim prevalence," which I'm going to signify with the Greek letter [latex]\nu[/latex] (nu), I mean the likelihood that, in any given year, a policyholder will file a claim based on an insured event. Thus, if in a given year 10,000 of TWIA's 250,000 policyholders file a storm damage claim, that year's prevalence is 0.04.  "Mean scaled claim size," which I'm going to signify with the Greek letter [latex]\zeta[/latex] (zeta), is a little more complicated. It refers to the mean of the size of claims filed during a year divided by the value of the property insured for all properties on which claims are filed during a year.  To take a simple example, if TWIA were to insure 10 houses and, in a particular year, 2 of them filed claims ([latex]\nu =0.2[/latex]) for $50,000 and for $280,000, and the insured values of the properties were $150,000 and $600,000 respectively, the mean scaled claim size [latex]\zeta[/latex] would be 0.4.  That's because: [latex]0.4=\frac{50000}{2\times 150000}+\frac{280000}{2\times 600000}[/latex].

Notice, by the way, that [latex]\zeta \times \nu[/latex] is equal to aggregate claims in a year as a fraction of total insured value.  Thus, if [latex]\zeta \times \nu = 0.005[/latex] and the total insured value is, say, $71 billion, one would expect $355 million in claims in a year. I’ll abbreviate this ratio of aggregate claims in a year to total insured value as [latex]\psi[/latex] (psi).  In this example, then,  [latex]\psi=0.005[/latex].[1]

The central idea underlying my model is that claim prevalence and mean scaled claim size are positively correlated. That's because both are likely to correlate positively with the destructive power of the storms that occurred during that year.  The correlation won't be perfect.  A tornado, for example, may cause very high mean scaled claim sizes (total destruction of the homes it hits) but have a narrow path and hit just a few insured properties.  And a low grade tropical storm may cause modest levels of wind damage among a large number of insureds.  Still, most of the time, I suspect, bigger storms not only cause more claims, but they also increase the mean scaled claim size.

A copula distribution provides a relatively simple way of blending correlated random variables together.  There are lots of explanations: Wikipedia, a nice paper on the Social Science Research Network, and the Mathematica documentation on the function that creates copula distributions.   There are lots of ways of doing this blending, each with a different name.  I'm going to stick with a simple copula, however, the so-called "Binormal Copula" (a/k/a the "Gaussian Copula") with a correlation coefficient of 0.5.[2]

To simulate the underlying distributions, I'm going to use a two-parameter beta distribution for both claim prevalence and mean scaled claim size. My experimentation suggests that, although there are probably many alternatives, both these distributions perform well in predicting the limited data available to me on these variables. They also benefit from modest analytic tractability. For people trying to recreate the math here, the distribution function of the beta distribution is [latex]I_x\left(\left(\frac{1}{\kappa ^2}-1\right) \mu ,\frac{\left(\kappa ^2-1\right) (\mu -1)}{\kappa ^2}\right)[/latex], where [latex]\mu[/latex] is the mean of the distribution and [latex]\kappa[/latex] is the fraction (0,1) of the maximum standard deviation of the distribution possible given the value of [latex]\mu[/latex]. What I have found works well is to set [latex]\mu _{\nu }=0.0244[/latex], [latex]\kappa _{\nu }=0.274[/latex] for the claim prevalence distribution and [latex]\mu _{\zeta }=0.097[/latex], [latex]\kappa _{\zeta }=0.229[/latex] for the mean scaled claim size distribution. This means that policyholders will file a claim about every 41 years and that, in the years when they do file, claims will on average be 9.7% of the insured value of the property.[3]
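
For readers who want to experiment, here is a minimal Mathematica sketch of the copula just described. It is my own reconstruction, not the post's working code; the helper betaMK simply converts the ([latex]\mu[/latex], [latex]\kappa[/latex]) parameterization above into the usual two shape parameters of the beta distribution.

(* beta distribution specified by its mean mu and the fraction kappa of the
   maximum possible standard deviation *)
betaMK[mu_, kappa_] :=
  BetaDistribution[mu (1/kappa^2 - 1), (1 - mu) (1/kappa^2 - 1)];

(* binormal (Gaussian) copula with correlation 0.5 joining the claim prevalence
   and mean scaled claim size marginals *)
stormCopula = CopulaDistribution[{"Binormal", 0.5},
   {betaMK[0.0244, 0.274], betaMK[0.097, 0.229]}];

RandomVariate[stormCopula, 3] (* three simulated years of {prevalence, mean scaled claim size} *)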

We can visualize this distribution in a couple of ways.  The first is to show a probability density function of the distribution but to scale the probability logarithmically.  This is shown below.

PDF of sample copula distribution

The second is to simulate 10,000 years worth of experience and to place a dot for each year showing claim prevalence and mean scaled claim size.  That is done below. I’ve annotated the graphic with labels showing what might represent a year in which there was a tornado outbreak, a catastrophic hurricane, a tropical storm as well as the large cluster of points representing years in which there was minimal storm damage.

Claim prevalence and mean scaled claim size for 10,000 year simulation

Equipped with our copula, we can now generate losses at the individual policyholder level for any given year.  The idea is to create a “parameter mixture distribution” using the copula. As it turns out, one component of this parameter mixture distribution is itself a mixture distribution.

Dear reader, you now have a choice.  If you like details, have a little bit of a mathematical background and want to understand better how this model works, just keep reading at “A Mini-Course on Mixture and Parameter Mixture Distributions.”  If you just want the big picture, skip to “Simulating at the Policyholder Level” below.

A Mini-Course on Mixture and Parameter Mixture Distributions

To fully understand this model, we need some understanding of a mixture distribution and a parameter mixture distribution.  Let’s start with the mixture distribution, since that is easier.  Imagine a distribution in which you first randomly determine which underlying component distribution you are going to use and then you take a draw from the selected underlying component distribution.  You might, for example, roll a conventional six-sided die, which is a physical representation of what statisticians call a “discrete uniform distribution.”  If the die came up 5 or 6, you then draw from a beta distribution with a mean of 0.7 and a standard deviation of 0.3 times the maximum.  But if the die came up 1 through 4, you would draw from a uniform distribution on the interval [0,0.1].  The diagram below shows the probability density function of the resulting mixture distribution (in red) and the underlying components in blue.

Mixture Distribution with beta and uniform components
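
In Mathematica terms, a sketch of the die-roll mixture just described might look like this (my own illustration, not code from the original figure):

(* with probability 2/6 draw from a beta with mean 0.7 and sd 0.3 of the maximum;
   with probability 4/6 draw from a uniform distribution on [0, 0.1] *)
dieMixture = MixtureDistribution[{2/6, 4/6},
   {BetaDistribution[0.7 (1/0.3^2 - 1), (1 - 0.7) (1/0.3^2 - 1)],
    UniformDistribution[{0, 0.1}]}];

Plot[PDF[dieMixture, x], {x, 0, 1}, PlotRange -> All]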

The mixture distribution has a finite number of underlying component distributions and has discrete weights that you select. The parameter mixture distribution can handle both an infinite number of underlying component distributions and weights that are themselves draws from a statistical distribution. Suppose we create a continuous function [latex]f[/latex] that takes a parameter [latex]x[/latex] and creates a triangular distribution that has a mean of [latex]x[/latex] and extends 1/4 in each direction from the mean.  We will call this triangular distribution the underlying distribution of the parameter mixture distribution.  The particular member of the triangular distribution family used is determined by the value of the parameter. And, now, we want to create a "meta distribution" — a parameter mixture distribution — in which the probability of drawing a particular parameter [latex]x[/latex] and in turn getting that kind of triangular distribution with mean [latex]x[/latex] is itself determined by another distribution, which I will call [latex]w[/latex]. The distribution [latex]w[/latex] is the weighting distribution of the parameter mixture distribution. To make this concrete, suppose [latex]w[/latex] is a uniform distribution on the interval [0,1].

The diagram below shows the result.  The blue triangular underlying distributions represent a sample of the probability density functions of triangular distributions.  There are actually an infinite number of these triangular distributions, but obviously I can’t draw them all here. Notice that some of the density functions are more opaque than others. The opacity of each probability density function is based on the probability that such a distribution would be drawn from [latex]w[/latex].  The red line shows the probability density function of the resulting parameter mixture distribution.  It is kind of an envelope of these triangular distributions.

Parameter mixture distribution for triangular distributions where mean of triangular distributions is drawn from a uniform distribution
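
A corresponding sketch of the triangular/uniform parameter mixture described above (again my own illustration):

(* underlying: a triangular distribution extending 1/4 on either side of its mean x;
   weighting: x itself drawn uniformly from [0, 1] *)
triangularMix = ParameterMixtureDistribution[
   TriangularDistribution[{x - 1/4, x + 1/4}],
   x \[Distributed] UniformDistribution[{0, 1}]];

Plot[PDF[triangularMix, y], {y, -1/4, 5/4}, PlotRange -> All]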

We can combine mixture distributions and parameter mixture distributions.  We can have a mixture distribution in which one or more of the underlying functions is a parameter mixture distribution.  And, we can have a parameter mixture distribution in which either the underlying function and/or the weighting function is a mixture distribution.

It's that combination — a parameter mixture distribution in which the underlying function is a mixture distribution — that we're going to need to get a good simulation of the damages caused by storms. The weighting distribution of this parameter mixture distribution is our copula. It produces two parameters:  (1) [latex]\nu[/latex], the likelihood that in any given year the policyholder has a non-zero claim and (2) [latex]\zeta[/latex], the mean scaled claim size assuming that the policyholder has a non-zero claim.  Those two parameters are going to weight members of the underlying distribution, which is a mixture distribution.  The weights of the mixture distribution are the likelihood that the policyholder has no claim and the likelihood that the policyholder has a non-zero claim (claim prevalence). The component distributions of the mixture distribution are (1) a distribution that always produces zero and (2) any distribution satisfying the constraint that its mean is equal to the mean scaled claim size.  I'm going to use another beta distribution for this latter purpose with a standard deviation equal to 0.2 of the maximum standard deviation.  I'll denote this distribution as B. Some examination of data from Hurricane Ike is not inconsistent with the use of this distribution and the distribution has the virtue of being analytically tractable and relatively easy to compute.

This diagram may help in understanding what is going on.

The idea behind the parameter mixture distribution

Simulating at the Policyholder Level

So, we can now simulate a large insurance pool over the course of years by making, say, 10,000 draws from our copula.  And from each draw of the copula, we can determine the claim size for each of the policyholders insured in that sample year. Here's an example.  Suppose our copula produces a year with some serious damage: a claim prevalence of 0.03 and a mean scaled claim size of 0.1 for the year.  If we simulate the fate of 250,000 policyholders, we find that about 242,500 have no claim.  The graphic below shows the distribution of scaled claim sizes among those who did have a non-zero claim.
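
Here is a minimal sketch of how such a year can be simulated directly. The helper simulateYear is mine, not the model's actual code; it uses the beta family B described above, with a standard deviation equal to 0.2 of its maximum.

(* scaled loss for each of n policyholders in a year with claim prevalence nu and
   mean scaled claim size zeta: zero if no claim, otherwise a draw from a beta
   distribution with mean zeta *)
simulateYear[nu_, zeta_, n_] :=
  RandomVariate[BernoulliDistribution[nu], n]*
   RandomVariate[BetaDistribution[zeta (1/0.2^2 - 1), (1 - zeta) (1/0.2^2 - 1)], n];

Running simulateYear[0.03, 0.1, 250000] produces one such year; on average about 242,500 of the resulting entries will be zero.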

Scaled claim sizes for sample year

Fortunately, however, we don't need to sample 250,000 policyholders each year for 10,000 years to get a good picture of what is going on.  We can simulate things quite nicely by looking at the condition of just 2,500 policyholders and then multiplying aggregate losses by 100.  The graphic below shows a logarithmic plot of aggregate losses assuming a total insured value in the pool of $71 billion (which is about what TWIA has had recently).

Aggregate losses (simulated) on $71 billion of insured property

We can also show a classical "exceedance curve" for our model.  The graphic below varies the aggregate losses on $71 billion of insured property and shows, for each value, the probability (on a logarithmic scale) that losses would exceed that amount.  One can thus get a sense of the damage caused by the 100-year storm and the 1,000-year storm.  The figures don't perfectly match TWIA's internal models, but that's simply because our parameters have not been tweaked at this point to accomplish that goal.

Exceedance curve (logarithmic) for sample 10,000 year run

The final step is to model how extra precautions by a policyholder might alter these losses.  Presumably, precautions are like most economic things: there is a diminishing marginal return on investment.  So, I can roughly model matters by saying that a precaution level of x results in the insured drawing from a new beta distribution with a mean equal to [latex]\ell \times 2^{-x}[/latex], where [latex]\ell[/latex] is the amount of damage they would have suffered had they taken no extra precautions. (I'll keep the standard deviation of this beta distribution equal to 0.2 of its maximum possible value.) I have thus calibrated extra precautions such that each unit of extra precautions cuts the mean losses in half. It doesn't mean that sometimes precautions won't result in greater savings or that sometimes precautions won't result in lesser savings; it just means that on average, each unit of precautions cuts the losses in half.
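
Under that assumption, the precaution adjustment is a one-line variation on the beta family used above (again my own sketch, not the model's actual code):

(* conditional scaled loss distribution when a policyholder who would otherwise have
   suffered a mean scaled loss of ell takes x units of extra precaution *)
precautionDist[ell_, x_] :=
  With[{m = ell 2^-x},
   BetaDistribution[m (1/0.2^2 - 1), (1 - m) (1/0.2^2 - 1)]];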

And, we’re done!  We’ve now got a storm model that when combined with the model of policyholder behavior that I will present in a future blog entry, should give us respectable predictions on the ability of insurance contract features such as deductibles and coinsurance to alter aggregate storm losses. Stay tuned!

Footnotes

[1] As I recognized a bit belatedly in this project, if one makes multiple draws from a copula distribution, it is not the case that the mean of the product of the two values [latex]\nu[/latex] and [latex]\zeta[/latex] drawn from the copula is equal to the product of their means. You can see why this might be by imagining a copula distribution in which the two values were perfectly correlated, in which case one would be drawing from a distribution transformed by squaring.  It is not the case that the mean of such a transformed distribution is equal to the square of the mean of the underlying distribution.

[2] Copulas got a bad name over the past 10 years for bearing some responsibility for the financial crisis.  This infamy, however, has nothing to do with the mathematics of copulas, which remains quite brilliant, but with their abuse and the fact that incorrect distributions were inserted into the copula.

[3] We thus end up with a copula distribution whose probability density function takes on this rather ghastly closed form.  (It won’t be on the exam.)

[latex]\frac{(1-\zeta )^{\frac{1-\mu _{\zeta }}{\kappa _{\zeta }^2}+\mu _{\zeta }-2} \zeta ^{\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta }-1} (1-\nu )^{\frac{1-\mu _{\nu }}{\kappa _{\nu }^2}+\mu _{\nu }-2} \nu ^{\left(\frac{1}{\kappa _{\nu }^2}-1\right) \mu_{\nu }-1} \exp \left(\frac{\left(\text{erfc}^{-1}\left(2 I_{\zeta }\left(\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta },\frac{\left(\kappa _{\zeta }^2-1\right) \left(\mu _{\zeta }-1\right)}{\kappa _{\zeta }^2}\right)\right)-\rho  \text{erfc}^{-1}\left(2 I_{\nu }\left(\left(\frac{1}{\kappa _{\nu }^2}-1\right) \mu _{\nu },\frac{\left(\kappa _{\nu }^2-1\right) \left(\mu _{\nu }-1\right)}{\kappa _{\nu }^2}\right)\right)\right){}^2}{\rho ^2-1}+\text{erfc}^{-1}\left(2 I_{\zeta }\left(\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta },\frac{\left(\kappa _{\zeta }^2-1\right) \left(\mu _{\zeta }-1\right)}{\kappa _{\zeta }^2}\right)\right){}^2\right)}{\sqrt{1-\rho ^2} B\left(\left(\frac{1}{\kappa _{\zeta }^2}-1\right) \mu _{\zeta },\frac{\left(\kappa _{\zeta }^2-1\right) \left(\mu _{\zeta }-1\right)}{\kappa _{\zeta }^2}\right) B\left(\left(\frac{1}{\kappa _{\nu }^2}-1\right) \mu _{\nu },\frac{\left(\kappa _{\nu }^2-1\right) \left(\mu _{\nu }-1\right)}{\kappa _{\nu }^2}\right)}[/latex]

Interactively exploring TWIA funding reform

Here's a Wolfram CDF document that lets you explore the reforms to TWIA funding set forth in my earlier post.  To understand it, you must read this earlier blog entry. To interact with it, you will need to go to this website and download the Wolfram CDF player.  It's easy — very similar to PDF. Once you have the plugin, you will be able to explore fully this and future interactive content. Your CDF plugin will work on this and other blogs and the Wolfram Demonstrations site. [WolframCDF source="http://catrisk.net/wp-content/uploads/2012/08/s2210355a.cdf" CDFwidth="740" CDFheight="700" altimage="file"]

By the way, if you want to see how to produce interactive widgets like this one in a WordPress document, take a look at this excellent and short video. And, to compare the premiums suggested by this document with those actually in use by the Texas Windstorm Insurance Association, take a look at this document and this document.

An idea for future TWIA finance

Although they may thoroughly disagree on the direction in which reform should go, almost everyone has come to agree with what I predicted in 2009:  TWIA finances are in serious need of reform.  This blog entry sketches out one direction in which TWIA might proceed.  The idea here is that TWIA should, in a steady state, have enough cash on hand in its catastrophe reserve fund to pay for insured losses and operating expenses, without having to borrow, with a high probability, say 99%.  Further, TWIA should have borrowing capacity to address the rare situations (say 1% of years) in which its reserves would be inadequate. Those borrowings should be repaid by some percentage of TWIA policyholders, persons living on the coast, and Texans generally, perhaps collected through the proxy of insurers doing business in Texas.

Although people can quarrel about the precise parameters in this abstract statement of the goal, I have some hope that people could agree on the concept. Government-sponsored insurance companies that don't have the right to draw on the government fisc ought not to be relying heavily on post-event bonding as a way of paying claims; instead, they need enough money in their piggy bank, just as we require of their private insurer counterparts. But what if TWIA's catastrophe reserve fund does not meet this lofty goal?  What then?  Especially given the magnitude of the current reserve shortfall and the current economy, matters cannot be corrected overnight. There should, I say, be an adjustment period during which premiums are adjusted (either upwards or, at some hypothetical future time, downwards) such that, at the end of the adjustment period, things come into balance and the catastrophe reserve fund meets the goal.

How do we operationalize this idea? Here is the beginning of a statutory draft. I've put in dummy statute section numbers for ease of reference. Obviously, the real section numbers would have to be revised by legislative counsel. Also, we're probably going to have to develop a more comprehensive process for 2210.355A(b)(1) and reconcile this provision with the alternative process currently set forth in 2210.355A.

2210.355A

(a) Definitions

(1)  The "Exceedance Function for the catastrophe year" is a function that approximates the probability that insured losses and operating expenses in the catastrophe year will exceed a specified dollar amount. Insured losses shall be computed on a net basis after consideration of any reinsurance or other sources of recovery.

(2) The term “Loss PDF” means the probability distribution function mathematically associated with the Exceedance Function.

(3) The term “Century Storm Reserve Adequacy” means having a catastrophe reserve fund at the start of each catastrophe year such that this fund would be able, without additional borrowing, to fully pay insured losses and operating expenses in the following catastrophe year with a 99% probability as computed using the Exceedance Function for the catastrophe year.

(4) The term “Reserve Adjustment Period” means ten years.

(b)

(1) The Association shall, prior to the start of each catastrophe year, use the best historical and scientific modeling evidence, with consideration of standards in the business of catastrophe insurance, to determine the Exceedance Function and associated Loss PDF for the catastrophe year.

(2) If, at any time, the Association finds that its catastrophe reserve fund at the start of a catastrophe year does not achieve Century Storm Reserve Adequacy, the Association shall adjust the premiums to be charged in the following year either downwards or upwards as appropriate such that, were:


(A) such premiums to be charged for the Reserve Adjustment Period on the base of currently insured properties;

(B) insured losses and operating expenses of the Association to be for the Reserve Adjustment Period at the mean of the Loss PDF for the catastrophe year; and

(C) the Association were to earn on any reserve balances during the Reserve Adjustment Period the amount of interest for reasonably safe investments then available to the Association,

the catastrophe reserve fund at the end of Reserve Adjustment Period would achieve Century Storm Reserve Adequacy.

(c) By way of illustration, if the Exceedance Function takes on a value of 0.01 when the size of insured losses and operating expenses is equal to 440 million dollars, the mean of the Loss PDF for the catastrophe year is equal to 223 million dollars, the initial balance of the catastrophe reserve fund is 100 million dollars and the amount of interest for safe investments then available to the Association is equal to 2% compounded continuously, then the premiums charged for the following calendar year should be equal to $614,539,421.

And what happens, by the way, if a storm hits that exceeds the size of the catastrophe reserve fund?  Stay tuned.  I’ve got an idea there too.

How do we keep premiums low under this scheme?  Likewise, stay tuned.  Hint: think about coinsurance requirements and lower maximum policy limits.  Think about carrots to get the private insurance industry writing excess policies on the coast with ever lower attachment points.

  • Footnote for math nerds only. Anyone seeing the implicit differential equations in the model and the applications of control theory?
  • Footnote for Mathematica folks only. Here’s the program to compute the premium. Note the use of polymorphic functions.

(* premium that, charged for z years starting from a reserve of c earning rate r,
   with mean annual losses and expenses of mu, brings the reserve to the q-th
   quantile of the loss distribution omega by the end of the adjustment period *)
p[\[Omega]_, \[Mu]_, q_, c_, r_, z_] :=
  x /. First@
    Solve[Quantile[\[Omega], q] ==
      TimeValue[c, EffectiveInterest[r, 0], z] +
       TimeValue[Annuity[x - \[Mu], z], EffectiveInterest[r, 0], z], x];

(* polymorphic form: if the mean is not supplied, compute it from omega *)
p[\[Omega]_, q_, c_, r_, z_] :=
  With[{m = NExpectation[x, x \[Distributed] \[Omega]]},
   p[\[Omega], m, q, c, r, z]]

  • Footnote for statutory drafters. Note the use of modular drafting such that one can change various parameters in the scheme (such as the 10 year adjustment period) without having to redraft the whole statute.

The chart TWIA isn’t sure you should see

Attached to this post is a chart prepared by TWIA for its August 7, 2012, board meeting. I didn’t make this chart up.  TWIA did. At the August board meeting, it was discussed whether this chart should be made more prominently available to the public via the internet. There was an initial suggestion that it should.  At the end of a two minute discussion, TWIA apparently decided that, instead, the information contained in the chart would be subsumed into some sort of “contingency plan narrative” that would, eventually, go on the web.  (Don’t believe me?  Listen to minutes 16:30 through 18:46 on the archived recording of the meeting). And if anyone can find that contingency plan (with or without the narrative) please let me know.

Projected Funding Structure Under Different Loss Scenarios

Exhibit for the TWIA board meeting

I don’t think the chart is too difficult to understand. The “problem” with the chart is that it is too easy to understand. The chart makes clear that TWIA faces a significant risk of insolvency. It shows that TWIA does not have enough money to pay for 1 in 100 year events and does not have enough money to pay for a modest storm plus a 1 in 60 year event occurring in the same hurricane season.

The chart depicts how TWIA would pay policyholders in two different scenarios.  The scenario on the left is one giant storm.  The scenario on the right is a small storm followed by a big storm.  In the giant storm scenario, TWIA suffers $4.5 billion in losses.  This is identified as a one in one hundred year event.  If that's true, it happens about 10% of the time during any 10-year stretch. According to the chart, TWIA would be able to fund a total of $3.15 billion of the $4.5 billion loss: $300 million (green) out of premium revenue and the catastrophe reserve trust fund, $500 million (aqua) out of bond anticipation notes (later refinanced via Class 1 securities), $1 billion (turquoise) out of Class 2 securities, $500 million (gray) out of Class 3 securities, and $850 million (purple) out of reinsurance.  TWIA would have no available funds to pay the remaining $1.35 billion (yellow) of claims (and possibly loss adjustment expenses).  So, on average, policyholders would get 70 cents on each dollar that TWIA owed them.
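
The 70-cent figure is just the ratio of funded to total losses: [latex]\frac{0.3+0.5+1.0+0.5+0.85}{4.5}=\frac{3.15}{4.5}=0.70[/latex] (figures in billions).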

In the small + big scenario depicted on the right, the losses from the small ($500 million) storm are funded fully. $300 million (green) is paid out of premium revenue and the catastrophe reserve trust fund and the remaining $200 million (aqua) is paid out of bond anticipation notes (later refinanced via Class 1 securities).  But when the big $2.5 billion (once every sixty years) storm then hits in the same hurricane season, TWIA has a serious problem.  It can pay another $300 million out of bond anticipation notes, $1 billion (turquoise) out of Class 2 securities and $500 million (gray) out of Class 3 securities.  That leaves TWIA policyholders with claims from the second storm initially getting only 72 cents on every legitimate dollar of claims.  (And good luck to TWIA trying to get claimants from the first storm to pay back a portion of their claims so that both sets of policyholders are treated equally.)

The source of the remaining $700 million under the small + big scenario is unclear.  If TWIA could somehow scrape up an additional $500 million (yellow), it would then arguably trigger the obligations of the reinsurance it purchased.  TWIA policyholders would ultimately be made whole in this case. But, apparently, if TWIA could not scrape up $500 million — and no one has any idea where the missing $500 million would come from —  TWIA would not be entitled to any money from the reinsurance policy for which it paid $100 million in premiums. Thus, TWIA policyholders would be stuck with, say, being paid only $144,000 on a legitimate $200,000 claim.

Footnote: Part of the problem exists because TWIA’s reinsurance is apparently “occurrence based” rather than based on aggregate losses.  Apparently such occurrence based policies are “industry standard” for reinsurance though not for catastrophe bonds.  My bet, though, is that a sophisticated reinsurance broker could negotiate with a sophisticated reinsurer for an aggregate loss trigger on reinsurance.

So, is the chart scary?  Should it be prominently displayed on the TWIA web site and by TWIA agents? Yes.  Sure, being too scared of hurricane risk is a problem.  But being insufficiently scared is perhaps an even greater problem.  I would not want to be the homeowner with a destroyed house or a destroyed business holding a TWIA policy that provided "coverage" on paper, but that did not actually get me the money to rebuild. And I wouldn't want to be the agent that sold such a policy having not disclosed by every reasonable means — including the use of charts where appropriate — the risks involved.