Confidence Interval and Retention Level Analyses

Timothy L. Coomer, Ph.D.

Blog 1: Intro to SIGMA & Scope and Data Requirements
Blog 2: Loss Development Factors
Blog 3: Reserve and Cash Flow Analyses
Blog 4: Trending, Pure Loss Rates, and Loss Projections
Blog 5: Confidence Interval and Retention Level Analyses
Blog 6: Loss Cost Analysis

In our last post, we spent some time on loss projections, so let’s now focus on how they feed into confidence interval and retention level analyses. Because workers compensation and liability losses are inherently random, an organization’s actual losses will almost never match the exact loss pick provided by its actuary. This is where confidence intervals come into play.

Confidence levels show how actual losses may vary from projected losses. They enable decision makers to assess the risk in a loss projection by quantifying both the probability that the projection will be exceeded and the potential magnitude of losses when it is. For this reason, companies often budget the retained portion of their losses at a level higher than expected, minimizing the impact of a bad year on the balance sheet.

A common method for projecting losses at various confidence levels is a technique known as Monte Carlo loss simulation. The process begins by selecting an average frequency (or claim count) and severity (or incurred claim value) for the projected year using traditional actuarial techniques, assuming sufficient historical data is available. Historical loss data is then reviewed to determine an appropriate distribution, along with other parameters such as the standard deviation, for both frequency and severity. A lognormal distribution is often used to simulate severity for workers compensation and general liability, since it is positively skewed, while a Poisson or negative binomial distribution is commonly used for frequency. Other distributions may be appropriate in other cases; for a catastrophic property claim, for example, a binomial distribution might be used to simulate frequency and a normal distribution to simulate severity.
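To make the parameter-selection step concrete, here is a minimal sketch of how frequency and severity parameters might be estimated from historical data. The data values, the moment-matching approach, and the dispersion check are illustrative assumptions, not a prescribed methodology.

```python
import numpy as np

# Hypothetical historical data: annual claim counts and individual claim
# severities, assumed already developed and trended per the earlier posts.
claim_counts = np.array([42, 55, 48, 61, 50])
severities = np.array([1_200, 3_500, 15_000, 800, 42_000,
                       6_700, 2_100, 95_000, 4_400, 11_000], dtype=float)

# Frequency: a Poisson distribution assumes variance equals the mean; if the
# observed variance is materially larger, a negative binomial is a better fit.
freq_mean = claim_counts.mean()
freq_var = claim_counts.var(ddof=1)
print(f"frequency mean = {freq_mean:.1f}, variance = {freq_var:.1f}")

# Severity: fit a lognormal by matching the moments of log(severity),
# reflecting the positive skew typical of liability claims.
log_sev = np.log(severities)
sev_mu, sev_sigma = log_sev.mean(), log_sev.std(ddof=1)
print(f"lognormal mu = {sev_mu:.2f}, sigma = {sev_sigma:.2f}")
```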

After the initial parameters are determined, a software program generates a random claim count for the year and a random severity for each simulated claim, based on the selected distributions. Simulated losses are capped at any per-occurrence or aggregate limits, and the total is recorded. The process is then repeated a large number of times, typically 5,000 to 10,000 iterations. By the law of large numbers, the average of the simulated totals across many iterations closely approximates the expected losses for the projected year. The totals for each iteration are then ranked from smallest to largest and summarized by confidence level, with each estimate paired with the percentage of the time it would be adequate to fund losses in a given period. For example, an amount at the 75th percentile would be adequate to fund all losses in that period 75% of the time.
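The following sketch shows one way the iteration loop described above could be implemented. The frequency and severity parameters, the limits, and the iteration count are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

N_ITER = 10_000                 # simulated years (iterations)
FREQ_LAMBDA = 51.2              # hypothetical Poisson mean for annual claim count
SEV_MU, SEV_SIGMA = 8.6, 1.4    # hypothetical lognormal severity parameters
PER_OCC_LIMIT = 250_000         # hypothetical per-occurrence limit
AGG_LIMIT = 2_000_000           # hypothetical aggregate limit

totals = np.empty(N_ITER)
for i in range(N_ITER):
    n_claims = rng.poisson(FREQ_LAMBDA)                # random frequency
    sev = rng.lognormal(SEV_MU, SEV_SIGMA, n_claims)   # random severity per claim
    capped = np.minimum(sev, PER_OCC_LIMIT)            # apply per-occurrence limit
    totals[i] = min(capped.sum(), AGG_LIMIT)           # apply aggregate limit

# Rank the simulated totals and summarize by confidence level.
print(f"mean (approx. expected losses): {totals.mean():,.0f}")
for p in (50, 75, 90, 95):
    print(f"{p}th percentile: {np.percentile(totals, p):,.0f}")
```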

One limitation of the statistical model is that a concept known as parameter risk is often not reflected in the calculation of the aggregate distribution. Parameter risk is the risk that the model’s input parameters have been estimated incorrectly. A related limitation arises when sufficient historical loss data is not available: input parameters must then be selected using company information, industry data, or professional judgment. These limitations should be understood and considered when relying on a loss simulation and confidence level analysis.
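As a rough illustration of how parameter risk could be incorporated, the sketch below draws the Poisson mean itself from a gamma distribution on each iteration rather than fixing it, which widens the simulated frequency distribution. The gamma shape and scale values are arbitrary assumptions chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
FREQ_LAMBDA = 51.2    # hypothetical point estimate of annual claim frequency

# Draw the Poisson mean from a gamma distribution centered on the point
# estimate, so each simulated year also varies the parameter itself.
lambda_draws = rng.gamma(shape=100.0, scale=FREQ_LAMBDA / 100.0, size=10_000)
counts = rng.poisson(lambda_draws)

# The mixture has extra variance relative to a pure Poisson (variance = mean).
print(f"variance with parameter risk: {counts.var():.1f} vs. "
      f"pure Poisson variance: {FREQ_LAMBDA:.1f}")
```

Incidentally, this Poisson-gamma mixture is equivalent to a negative binomial frequency distribution, which is one reason that distribution is popular when extra dispersion is suspected.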

The same type of process can be applied to a retention level analysis. In the past, retention decisions were often based on little or no data and analysis, making the decision itself little more than an uneducated guess. With the rise of analytics, though, organizations have begun to realize that they can use their own data to ensure they are making the most cost-effective decision.

While building a retention level analysis can be complex, it boils down to examining individual claim volatility. Through the repeated simulations described above, an actuary can determine with a good degree of credibility how often a given retention level will cap losses and how often losses will pierce it. Analyzing these results allows the risk management team to examine the loss-premium relationship more effectively and ultimately helps ensure they are making the best financial decision.
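As a final sketch, here is one way retained losses at several candidate retention levels might be compared using the same simulation machinery. The retention levels and distribution parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=11)
N_ITER = 10_000
FREQ_LAMBDA = 51.2              # hypothetical annual claim frequency
SEV_MU, SEV_SIGMA = 8.6, 1.4    # hypothetical lognormal severity parameters

for retention in (100_000, 250_000, 500_000):   # candidate per-occurrence retentions
    retained = np.empty(N_ITER)
    for i in range(N_ITER):
        sev = rng.lognormal(SEV_MU, SEV_SIGMA, rng.poisson(FREQ_LAMBDA))
        retained[i] = np.minimum(sev, retention).sum()  # losses kept below retention
    print(f"retention {retention:>8,}: expected retained = {retained.mean():>11,.0f}, "
          f"95th percentile = {np.percentile(retained, 95):>11,.0f}")
```

Paired with premium quotes at each retention, output like this lets the risk management team see how much additional volatility is taken on for each dollar of premium savings.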

If you would like to further explore the intricacies of confidence level and retention level analyses, we’ve provided below a list of documents and videos from our own RISK66 library that will allow you to do so.

Note: the following links require a login to RISK66.com. Register for free educational access.

PDF Resources

Video Resources

As always, feel free to contact us with any further questions, and we’d be more than happy to discuss them. We hope you’ve been enjoying the chance to learn a bit more about SIGMA’s offerings. We’ll return next month with an extensive look at loss cost analyses.
