# Catastrophe insurance and the case for compulsory coinsurance

This entry presents an interactive tool with which you can study the effects of “coinsurance” on expected losses from catastrophe.  The short version is that coinsurance can, under the right circumstances, significantly reduce expected losses from tropical cyclones. As such, legislatures in coastal states, including Texas, should strongly consider prohibiting subsidized insurers such as TWIA from selling windstorm insurance policies unless the policies carry a significant amount (say 10%) of coinsurance.  The rest of this blog entry explains why and demonstrates the tool.

Prior posts have discussed the possibility of using coinsurance as a way to reduce the damage caused by tropical cyclones. We can’t do much to stop the hurricane, but, with adherence to building codes and timely precautions, we can reduce the damage they cause to our built environment. One traditional tool used in insurance to reduce expected losses is coinsurance. The idea is to impel the insured to take extra precautions against loss by making their insurance incomplete. There are many ways to do this (deductibles and low policy limits, for example), but a crucial tool in the arsenal is to reduce recoveries above the deductible by a percentage. So, if there’s a $100,000 loss and the deductible is $2,000, instead of the insurer paying $98,000, the insurer pays (with 10% coinsurance) $88,200 and the insured ends up paying $11,800 out of pocket. This latter approach is known as coinsurance.

Coinsurance can seem quite cruel. It means the insured won’t be paid in full. It means they are going to have even more problems after a major storm. The idea, however, is that the insured who knows about coinsurance works really hard to reduce loss. In our Texas coastal context, the homeowner installs hurricane shutters, improves their roof, or reinforces their garage door. Thus, maybe the loss doesn’t happen at all or, maybe, when it does, it is a $50,000 loss instead of a $100,000 loss. In the end, premiums are reduced because expected losses are reduced. Moreover, to the extent that the insurer, as in Texas, has limited resources, the existence of coinsurance reduces the likelihood of insurer insolvency or a need for insurer recapitalization. Coinsurance might work better than deductibles because the homeowner who pretty much knows their losses are going to be over the deductible has little marginal incentive to take precautions. By contrast, the homeowner who knows they have coinsurance may still make marginal efforts to reduce loss even when it is expected to be greater than the deductible. Coinsurance is tough love.
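The arithmetic in the example above can be sketched in a few lines of Python. This is a minimal illustration, not part of the tool; the function name `payout_split` is mine.

```python
def payout_split(loss, deductible, coinsurance):
    """Split a covered loss between insurer and insured.

    The insured pays the deductible plus a `coinsurance` fraction of the
    loss above the deductible; the insurer pays the remainder.
    """
    excess = max(0.0, loss - deductible)
    insurer_pays = (1.0 - coinsurance) * excess
    insured_pays = loss - insurer_pays
    return insurer_pays, insured_pays

# The example from the text: $100,000 loss, $2,000 deductible, 10% coinsurance.
insurer, insured = payout_split(100_000, 2_000, 0.10)  # insurer $88,200; insured $11,800
```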

The question when considering compulsory coinsurance is the magnitudes of its two effects: the unfortunate fact of incomplete insurance balanced against the benefits of reduced expected loss. If coinsurance makes insurance incomplete but people don’t take precautions in response, there is little point. On the other hand, if people do respond to coinsurance and take significantly more precautions to reduce the risk, it may be a very good idea. Having people take optimal and individualized precautions on their own is a lot more efficient and a lot less expensive than enforcement of one-size-fits-all building codes.

Until now, however, there have been few tools available to consider the effects of coinsurance. What we’d really like is a great data set in which the insureds have varying coinsurance percentages, from which we could use statistics to tease out the effect of coinsurance rates on losses from similar disasters. Alas, I haven’t found any such public domain data set, and I don’t (yet) run an insurance company that would generate one. So, an alternative is simulation. And that’s what I’m trying here.

To use the tool, you’ll need to download the Wolfram CDF Player and install it into your browser. Here’s where you get it.  I can assure you it is no more difficult than downloading a plugin that displays PDF files.

Once you’ve installed the CDF Player, here’s how the tool works. You select a level $\rho$ of risk aversion.  Higher levels of risk aversion result in worse outcomes being weighted progressively more heavily than better ones. You select the price $\alpha$ of taking one unit of precautions, in a system calibrated such that each marginal unit of precaution reduces the expected loss from accidents by half. You select the probability $w$ that a catastrophic event such as a windstorm or earthquake will occur at all; during many policy periods it is quite likely that no catastrophic event will occur. You then select two parameters of the Weibull distribution of baseline losses if an event occurs. Baseline losses are calibrated such that a materialization of 1 means that the loss is equal to the value of the property insured.  The top graphic then shows the PDF (probability density function) of the resulting Weibull Distribution. The blue zone (often very thin) represents losses less than the deductible; the green zone represents losses in excess of the deductible. A dotted line shows the mean value of the Weibull Distribution. Finally, you select the deductible and coinsurance levels of the policy. The deductible is measured as a percentage of the value of the property insured.  Coinsurance is measured as a percentage of loss in excess of the deductible.
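For readers who want to reproduce the plotted density and its dotted mean line outside the tool, here is a standard-library Python sketch. It assumes the usual shape/scale parameterization of the Weibull; the tool’s two Weibull parameters may be labeled differently.

```python
import math

def weibull_pdf(t, shape, scale):
    """PDF of the Weibull distribution used for baseline losses,
    calibrated so that a value of 1 is a total loss of the property."""
    if t <= 0:
        return 0.0
    z = t / scale
    return (shape / scale) * z ** (shape - 1.0) * math.exp(-(z ** shape))

def weibull_mean(shape, scale):
    """Mean of the Weibull distribution (the dotted line in the top graphic)."""
    return scale * math.gamma(1.0 + 1.0 / shape)
```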

The tool responds to these control selections with two other panels of information.  A bottom graphic shows the “spectral measure” (a kind of weighted average) of the insured’s net losses as a function of the level of precautions taken by the insured.  A rational insured should be expected to select the level of precautions that minimizes this spectral measure.  A black point on the lower graphic shows this optimal precaution level.  A grid on the bottom shows a number of statistics derived from the controls and various computations. Statistics in the top gold zone largely recapitulate the controls selected by the user.  Statistics in the blue zone show optimal precautions, losses as a fraction of what would occur if no precautions were taken, and the spectral measure of losses at the optimal level of precautions.  Statistics in the red zone show the effect of changing insurance policy parameters.  Each “loss Δ” represents the reduction in loss that would be caused by the change described for the row.  Thus a row that reads “loss Δ from 20% higher coinsurance 0.393” means that increasing the coinsurance level by 20 percentage points would change optimal precautions such that losses would be reduced by 39.3% from the level induced by optimal precautions under the current insurance policy parameters. The higher these “loss Δs,” the more effective policy changes would be in reducing the losses caused by catastrophes.

Play around with this tool. I’ll provide a guided tour in a later post; this one is long enough already. In the meantime, I think what you will find is that the effects of coinsurance changes depend critically on such factors as the cost of the precautions.  When the cost of precautions is high, changes in coinsurance rates from low levels may have little or no effect on the optimal level of precautions; other regulations, such as mandatory conditioning of indemnity obligations on maintenance of certain precautions, may be necessary to reduce the risk of loss. When the cost of precautions is low, however, changes in coinsurance rates and deductibles may have dramatic effects.  Reductions of 50% or more from increases in coinsurance rates by 20 percentage points are not uncommon, nor are reductions of 10% or more from increases in deductibles by 2 percentage points (such as going from 2% of the property value to 4% of the property value).

## Some technical notes

The model underlying this tool assumes that catastrophic events, when they occur, create losses that obey a Weibull Distribution $\Omega$. Readers of this blog will understand why. This distribution is calibrated such that a materialization of 1 means that the losses equal the value of the property. A high percentage of the time, however, no losses occur at all. For payment of an amount $\alpha x$, the insured may take precautions against loss that reduce any baseline loss $\ell$ to an amount $2^{-x} \ell$, such that the size of the loss is halved for each marginal unit of precautions. Thus, if $d$ is the deductible and $c$ is the rate of coinsurance, the actual loss suffered by the insured from precautions and any loss $\ell$, after consideration of insurer payments, is as follows:

$\alpha\, x + \ell \, 2^{-x} - \left ( 1-c \right ) \max (0,\ell \, 2^{-x}-d)$
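The expression above translates directly into a small Python function; the identifiers mirror the symbols in the formula, and the function name is mine.

```python
def net_loss(x, ell, alpha, c, d):
    """Insured's total cost: precaution spending alpha*x, plus the
    mitigated loss ell * 2**-x, minus the insurer's payment of (1 - c)
    times the mitigated loss in excess of the deductible d."""
    mitigated = ell * 2.0 ** (-x)
    return alpha * x + mitigated - (1.0 - c) * max(0.0, mitigated - d)
```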

If $w$ is the probability of a catastrophic event occurring, the distribution of losses suffered by the insured may be written as a mixture distribution in which (1) the first component is always zero (no loss), and (2) the second component is a transformed Weibull Distribution conditioned on the loss being less than the deductible. The transformation is as follows:

$\alpha\,x+\ell\,2^{-x}$

(3) the third component is a transformed Weibull Distribution conditioned on the loss being greater than the deductible. The transformation is as follows:

$\alpha\,x+\ell\,2^{-x}-\left(1-c \right)\left(\ell\,2^{-x} -d\right)$

The weights on each component of this mixture distribution are, in turn, $1-w$, $w\,\mathrm{CDF}\left (\Omega,d\right )$, and $w\,\left (1-\mathrm{CDF}\left (\Omega,d\right ) \right )$.
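Assuming the standard shape/scale parameterization for $\Omega$ (the tool may label its two Weibull parameters differently), the three component weights can be computed as follows; the function names are mine.

```python
import math

def weibull_cdf(t, shape, scale):
    """CDF of a Weibull distribution with the given shape and scale."""
    return 1.0 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0

def mixture_weights(w, d, shape, scale):
    """Weights of the three mixture components: no event, an event with
    the loss below the deductible, an event with the loss above it."""
    p_below = weibull_cdf(d, shape, scale)
    return 1.0 - w, w * p_below, w * (1.0 - p_below)
```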

The insured chooses the level of precautions $x$ so as to minimize the expectation of the first order statistic for $\rho$ observations drawn from this mixture distribution. The reasoning behind this optimization principle has been set forth in prior blog entries (here and here).
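As a rough sketch of that optimization, the code below treats $\rho$ as an integer number of draws and interprets the first order statistic as the worst (largest) of those draws, consistent with the risk-aversion description earlier; both are my assumptions, not the tool's stated implementation. It estimates the expectation by Monte Carlo and grid-searches for the minimizing precaution level. All parameter values shown are illustrative.

```python
import math
import random

def simulate_spectral_measure(x, rho, w, alpha, c, d, shape, scale,
                              n_batches=2000, rng=None):
    """Monte Carlo estimate of the expected worst (largest) net loss
    among rho independent draws, at precaution level x."""
    rng = rng or random.Random(0)  # fixed seed keeps the estimate deterministic
    total = 0.0
    for _ in range(n_batches):
        worst = 0.0
        for _ in range(rho):
            if rng.random() < w:  # a catastrophic event occurs
                # Inverse-CDF sample of the Weibull baseline loss.
                ell = scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
                mitigated = ell * 2.0 ** (-x)
                cost = alpha * x + mitigated - (1.0 - c) * max(0.0, mitigated - d)
            else:  # no event: the insured only pays for precautions
                cost = alpha * x
            worst = max(worst, cost)
        total += worst
    return total / n_batches

# Grid search for the loss-minimizing precaution level (illustrative parameters).
best_x = min((i / 10 for i in range(51)),
             key=lambda x: simulate_spectral_measure(
                 x, rho=4, w=0.2, alpha=0.01, c=0.1, d=0.02,
                 shape=1.0, scale=0.3))
```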

So, essentially what one is doing with the tool is specifying $\rho$ and $w$. One then determines the underlying loss distribution $\Omega$. And then one parameterizes the insurance policy by selecting a coinsurance rate $c$ and a deductible $d$.