En route to my forthcoming post on the effects of coinsurance and deductible regulation on optimal precautions against catastrophic risk, I’ve confronted some fundamental problems in comparing the values insureds ascribe to statistical distributions. I’m going to post about two of those problems here.

1. The lognormal distribution is often used in computations involving insurance. The standard parameterization of the lognormal distribution isn’t terribly convenient for this purpose, however. What often appears more useful is to describe a lognormal distribution according to its mean and its standard deviation. Here’s the way to do that, where [latex]\mu[/latex] is the mean of the distribution and [latex]\sigma[/latex] is the standard deviation.
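Concretely, if [latex]m[/latex] and [latex]s[/latex] denote the usual log-scale parameters, matching a target mean [latex]\mu[/latex] and standard deviation [latex]\sigma[/latex] requires [latex]s^2 = \ln\left(1+\sigma^2/\mu^2\right)[/latex] and [latex]m = \ln\mu - s^2/2[/latex]. A sketch of that conversion (in Python rather than Mathematica, purely for illustration):

```python
import math

def lognormal_params(mu, sigma):
    """Return the log-scale parameters (m, s) of a lognormal
    distribution having mean mu and standard deviation sigma."""
    s2 = math.log(1 + (sigma / mu) ** 2)
    return math.log(mu) - s2 / 2, math.sqrt(s2)

# round-trip check: recover the mean and sd from (m, s)
m, s = lognormal_params(10.0, 3.0)
mean_back = math.exp(m + s ** 2 / 2)
sd_back = math.sqrt((math.exp(s ** 2) - 1) * math.exp(2 * m + s ** 2))
```

The round trip recovers the target mean and standard deviation exactly, which is the point of the reparameterization.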

2. There are two main mathematical ways of coping with risk aversion. The traditional method is “certainty equivalent wealth” (CEW). With CEW, one transforms a distribution of position (such as wealth or income) via a utility function, generally one that is non-decreasing and concave, exhibiting diminishing marginal utility. One then computes the level of wealth which, if held with 100% certainty, would yield the same utility as the expectation of the utility-transformed position distribution. This level of wealth is called certainty equivalent wealth. The method has many virtues, but it often fails to yield a closed-form solution and often has difficulty dealing with distributions whose domains include negative values.
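As a minimal illustration of CEW (the lottery and log utility here are my own hypothetical choices, not from the post): with [latex]u(w)=\ln w[/latex], the certainty equivalent of a 50/50 lottery over wealth levels 50 and 150 is [latex]\exp\left(\tfrac{1}{2}\ln 50 + \tfrac{1}{2}\ln 150\right)=\sqrt{7500}\approx 86.6[/latex], below the expected wealth of 100:

```python
import math

def cew(outcomes, probs, u=math.log, u_inv=math.exp):
    """Certainty equivalent wealth: the inverse utility of expected utility."""
    expected_utility = sum(p * u(w) for w, p in zip(outcomes, probs))
    return u_inv(expected_utility)

# 50/50 lottery over wealth of 50 or 150, log utility
ce = cew([50.0, 150.0], [0.5, 0.5])  # geometric mean under log utility
```

The gap between the certainty equivalent and the expected wealth (here roughly 13.4) is the risk premium the individual would pay to eliminate the uncertainty.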

The newer rival in this field is the “spectral measure,” which is an expectation of the quantile function of the position distribution taken with respect to a weighting distribution whose domain coincides with that of the quantile function ([0,1]) and which is non-increasing over that domain. (For negative positions such as losses, one modifies the weighting function so that it is non-decreasing over the domain.) The intuition is that this produces a weighted average of outcomes in which worse outcomes are weighted more heavily. Spectral measures cope perfectly well with negative values in the domain, often yield closed-form expressions, and have certain desirable theoretical properties. A disadvantage, however, is that sometimes the required expectation cannot be computed symbolically, so one does not obtain answers whose properties are apparent. Instead, one has to use numerical methods, simulation, and potentially emulation to gain insight into the behavior of a model.
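For a concrete sketch (my own toy example, not from the post): take a position distributed Exponential(1), whose quantile function is [latex]q(p)=-\ln(1-p)[/latex], and the non-increasing weighting density [latex]\phi(p)=2(1-p)[/latex] on [0,1]. The spectral measure [latex]\int_0^1 q(p)\,\phi(p)\,dp[/latex] works out to 1/2, below the unweighted mean of 1, as risk aversion demands:

```python
import math

def spectral_measure(quantile, weight, steps=100000):
    """Midpoint Riemann sum of q(p) * phi(p) over (0, 1)."""
    dp = 1.0 / steps
    return sum(quantile((i + 0.5) * dp) * weight((i + 0.5) * dp) * dp
               for i in range(steps))

q = lambda p: -math.log(1 - p)   # Exponential(1) quantile function
phi = lambda p: 2 * (1 - p)      # non-increasing weighting density
sm = spectral_measure(q, phi)
```

When the integral has no symbolic solution, this numerical fallback is exactly the inconvenience described above.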

So, I have a third way of coping with risk. My initial search suggests that it’s novel, but I can’t be certain yet. (If anyone has seen this before, please let me know and save me some embarrassment.) The idea is to take the mean of order distributions of the parent position distribution. The “order” used (an integer) is a measure of risk aversion. What’s an order distribution? Well, as set forth in the lucid documentation of the Mathematica software package, OrderDistribution[{dist, n}, k] represents the distribution of the kth element in the sorted list of n samples drawn from the parent distribution dist. So, if one took the distribution of the first element in the sorted list of, say, three samples drawn from the parent distribution, one would have a new distribution. Its mean would be lower than the mean of the parent distribution but greater than the minimum (if there is one) of the parent distribution. The more samples one took, the closer the mean of this order distribution would get to the minimum of the parent distribution.
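A quick way to see this numerically is a Monte Carlo sketch (in Python; the post itself uses Mathematica’s OrderDistribution, which I don’t reproduce here):

```python
import random

def order_stat_mean(sampler, n, k, trials=200000, seed=42):
    """Monte Carlo estimate of the mean of the k-th smallest
    of n draws from the parent distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sorted(sampler(rng) for _ in range(n))[k - 1]
    return total / trials

# minimum (k = 1) of three Exponential(1) draws: mean should approach 1/3
est = order_stat_mean(lambda rng: rng.expovariate(1.0), n=3, k=1)
```

The estimate sits between the parent minimum (0) and the parent mean (1), and moves toward the minimum as n grows, exactly as described above.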

Several examples may help. Suppose one’s position is modeled by an exponential distribution with parameter 1. The plot below shows the probability density function of the first three order distributions of this parent distribution. A dashed line shows the mean of each of these distributions. As the order *n* of the distribution increases, the mean (1/n) gets smaller.
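The 1/n mean follows directly: the minimum of [latex]n[/latex] independent Exponential(1) draws satisfies [latex]P(X_{(1)} > x) = \left(e^{-x}\right)^n = e^{-nx}[/latex], so [latex]X_{(1)}[/latex] is itself exponential with rate [latex]n[/latex] and hence has mean [latex]1/n[/latex].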

Here’s a similar example using a normal distribution instead of an exponential distribution.
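The normal case can also be checked by direct integration (my own numerical sketch, not the post’s plot): the minimum of two standard normal draws has density [latex]2\varphi(x)\left(1-\Phi(x)\right)[/latex] and mean [latex]-1/\sqrt{\pi}\approx -0.564[/latex]:

```python
import math

def phi(x):  # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):  # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_min_of_two(lo=-8.0, hi=8.0, steps=100000):
    """Midpoint Riemann sum of x * 2 * phi(x) * (1 - Phi(x))."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        total += x * 2 * phi(x) * (1 - Phi(x)) * dx
    return total

m = mean_min_of_two()
```

Note that the mean is pulled below the parent mean of 0, onto the “bad side,” just as in the exponential example.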

And, just to show that this method can work on more complex distributions, here’s a plot showing order distributions of a normal distribution truncated to the interval [0,1]. Below the plot, I’ve put the formula for the mean of these order distributions.
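Even without the closed form, the order-distribution means for a truncated normal can be sketched numerically from the generic order-statistic density [latex]f_{(k)}(x) = \frac{n!}{(k-1)!\,(n-k)!}\,F(x)^{k-1}\left(1-F(x)\right)^{n-k}f(x)[/latex]. A Python sketch (the standard normal truncated to [0,1] is from the post; the integration scheme is my own illustration, not the post’s formula):

```python
import math

def phi(x):  # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):  # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# standard normal truncated to [0, 1]
Z = Phi(1.0) - Phi(0.0)
f = lambda x: phi(x) / Z
F = lambda x: (Phi(x) - Phi(0.0)) / Z

def order_mean(n, k, steps=20000):
    """Mean of the k-th of n order statistics via midpoint integration."""
    c = math.factorial(n) / (math.factorial(k - 1) * math.factorial(n - k))
    dx = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x * c * F(x) ** (k - 1) * (1 - F(x)) ** (n - k) * f(x) * dx
    return total

means = [order_mean(n, 1) for n in (1, 2, 3)]  # means of the minimum
```

The means decrease with the order n, from the parent mean (about 0.46) toward the truncation point at 0.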

In the exponential, normal, and other cases, one ends up with a “nice” closed-form solution for the order distributions. (Although a definition that resorts to the OwenT special function cannot be considered very nice.) In other cases, however, one does not. What needs to be further explored, if this new method is to be practical, is how frequently one gets these closed-form solutions relative to the competing spectral measures method.

By the way, this method is kind of like spectral measures in that one is saying that the risk-adjusted position of the individual is somewhere in the domain of the original distribution and generally on the “bad side” of the mean of the original distribution. Here, though, instead of computing quantile functions and taking expectations using a separate weighting distribution, one computes an order distribution and takes its expectation. The order serves somewhat the same function as the weighting distribution.
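That connection can be made exact for the minimum: since the minimum of [latex]n[/latex] uniform draws on [0,1] has density [latex]n(1-p)^{n-1}[/latex], the mean of the first order distribution equals the spectral measure with weighting function [latex]\phi(p) = n(1-p)^{n-1}[/latex], which is non-increasing as required. A numerical check for the Exponential(1) parent (my own sketch):

```python
import math

def spectral_mean(quantile, weight, steps=100000):
    """Midpoint Riemann sum of q(p) * phi(p) over (0, 1)."""
    dp = 1.0 / steps
    return sum(quantile((i + 0.5) * dp) * weight((i + 0.5) * dp) * dp
               for i in range(steps))

n = 3
q = lambda p: -math.log(1 - p)          # Exponential(1) quantile function
phi = lambda p: n * (1 - p) ** (n - 1)  # density of the min of n uniforms
sm = spectral_mean(q, phi)              # should match E[X_(1:3)] = 1/3
```

So, at least for order k = 1, the order-distribution approach picks out a particular member of the spectral measure family rather than a wholly separate construction.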
