Confidence Interval and Retention Level Analyses

Timothy L. Coomer, Ph.D.

Blog 1: Intro to SIGMA & Scope and Data Requirements
Blog 2: Loss Development Factors
Blog 3: Reserve and Cash Flow Analyses
Blog 4: Trending, Pure Loss Rates, and Loss Projections
Blog 5: Confidence Interval and Retention Level Analyses
Blog 6: Loss Cost Analysis

In our last post, we spent some time on loss projections, so let’s now look at how they feed into confidence interval and retention level analyses. Due to the random nature of workers compensation and liability losses, an organization’s actual losses will almost never match the exact loss pick provided by its actuary. This is where confidence intervals come into play.

Confidence levels demonstrate how actual losses may vary from projected losses. This enables decision makers to assess the risk in their loss projection by determining both the mathematical probability that the projection will be exceeded and the potential size of losses when it is. Companies often budget the retained portion of their losses at a level higher than expected to minimize the impact of a bad year on the balance sheet.

A common method of projecting losses at various confidence levels is a technique known as Monte Carlo loss simulation. The process begins by selecting an average frequency (or claim count) and severity (or incurred claim value) for the projected year using traditional actuarial techniques, assuming sufficient historical data is available. Historical loss data is then reviewed to determine an appropriate distribution, along with other parameters such as the standard deviation, for both frequency and severity. A lognormal distribution is often used when simulating severity for workers compensation and general liability, since it is positively skewed. Similarly, a Poisson or negative binomial distribution may be used for frequency. Other situations call for different distributions: for a catastrophic property claim, for example, a binomial distribution might be used to simulate the frequency and a normal distribution to simulate the severity.
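As a rough illustration of the parameter-selection step, the snippet below converts an average severity and standard deviation into the parameters of a lognormal distribution using a method-of-moments fit. The dollar figures are hypothetical, chosen only for the example, not drawn from any actual engagement.

```python
import math

def lognormal_params(mean, sd):
    """Method-of-moments fit: convert a claim-severity mean and standard
    deviation into the mu/sigma parameters of a lognormal distribution."""
    sigma_sq = math.log(1 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma_sq / 2
    return mu, math.sqrt(sigma_sq)

# Hypothetical severity assumptions: $25,000 average, $60,000 standard deviation
mu, sigma = lognormal_params(25_000, 60_000)
```

Plugging the fitted parameters back into the lognormal mean formula, exp(mu + sigma²/2), recovers the original $25,000 average, which is a quick sanity check on the fit.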

After the initial parameters are determined, a software program generates a random claim count and a random severity for each claim based on the selected distributions. Simulated losses are capped at any per-occurrence or aggregate limits, and the total is recorded. The process is then repeated a large number of times (typically 5,000 to 10,000 iterations). By the law of large numbers, the average of the simulated totals over a large number of iterations (or years) closely approximates the expected losses for the projected year. The total losses for each iteration are then ranked from smallest to largest, and estimates are summarized by confidence level. The estimate at each confidence level is attached to a percentage indicating its adequacy to fund losses in a given period; for example, an amount at the 75th percentile would be adequate to fund all losses in that period 75% of the time.
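The steps above can be sketched in a few lines of code. This is a minimal, illustrative version with hypothetical parameters (Poisson frequency, lognormal severity, a per-occurrence cap); a production model would also handle aggregate limits and fitted, rather than assumed, inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs selected from historical data
expected_claims = 40            # Poisson mean for the annual claim count
sev_mu, sev_sigma = 9.2, 1.4    # lognormal parameters for claim severity
per_occ_limit = 250_000         # per-occurrence cap on each simulated claim
iterations = 10_000

totals = np.empty(iterations)
for i in range(iterations):
    n = rng.poisson(expected_claims)                      # simulated frequency
    claims = rng.lognormal(sev_mu, sev_sigma, n)          # simulated severities
    totals[i] = np.minimum(claims, per_occ_limit).sum()   # cap and total

totals.sort()                                             # rank smallest to largest
p75 = np.percentile(totals, 75)   # funds losses ~75% of the time
expected = totals.mean()          # approximates expected losses for the year
```

Reading percentiles off the sorted totals is exactly the ranking step described above; `p75` is the amount adequate to fund all losses in roughly 75 of every 100 simulated years.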

One limitation of the statistical model is that a concept known as parameter risk is often not included in the calculation of the aggregate distribution. Parameter risk is the risk that the model’s input parameters have been estimated incorrectly. A related limitation arises when sufficient historical loss data is not available: input parameters must then be selected using company information, industry data, or professional judgment. These limitations should be understood and considered when relying on a loss simulation and confidence level analysis.

The same type of process can be applied to a retention level analysis. In the past, the retention decision was often based on little or no data and analysis, making it little more than an uneducated guess. With the rise of analytics, though, organizations have begun to realize that they can use their data to ensure they are making the most cost-effective decision.

While creating a retention level analysis can be complex, it boils down to examining individual claim volatility. Through the repeated simulations described above, an actuary can determine with a good degree of credibility how often various retention levels will limit losses or allow them to grow. Analyzing these results allows the risk management team to examine the loss-premium relationship more effectively and ultimately helps ensure they are making the best financial decision.
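One simple way to sketch such a comparison: simulate the same set of annual claim outcomes once, then apply each candidate per-occurrence retention to those identical claims (a common-random-numbers design, so differences reflect the retention choice rather than sampling noise). The retention levels and distribution parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

retentions = [100_000, 250_000, 500_000]   # candidate per-occurrence retentions
expected_claims = 40                        # hypothetical Poisson frequency mean
sev_mu, sev_sigma = 9.2, 1.4                # hypothetical lognormal severity parameters
iterations = 5_000

# Simulate one set of annual claim outcomes, shared across all retentions
years = [rng.lognormal(sev_mu, sev_sigma, rng.poisson(expected_claims))
         for _ in range(iterations)]

summary = {}
for retention in retentions:
    # The insured keeps up to the retention on each claim; the excess is ceded
    retained = np.array([np.minimum(claims, retention).sum() for claims in years])
    summary[retention] = (retained.mean(), np.percentile(retained, 75))
```

As expected, both the mean and the 75th-percentile retained losses rise with the retention; weighing that increase against the premium credit offered for each retention is the heart of the loss-premium comparison.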

If you would like to further explore the intricacies of confidence level and retention level analyses, we’ve provided below a list of documents and videos from our own RISK66 library that will allow you to do so.

Note: the following links require a login to RISK66.com. Register for free educational access.

PDF Resources

Video Resources

As always, feel free to contact us with any further questions, and we’d be more than happy to discuss them. We hope you’ve been enjoying the chance to learn a bit more about SIGMA’s offerings. We’ll return next month with an extensive look at loss cost analyses.

Copyright © 2023 SIGMA Actuarial Consulting Group, Inc. All Rights Reserved.