Confidence Interval and Retention Level Analyses

Timothy L. Coomer, Ph.D.

Blog 1: Intro to SIGMA & Scope and Data Requirements
Blog 2: Loss Development Factors
Blog 3: Reserve and Cash Flow Analyses
Blog 4: Trending, Pure Loss Rates, and Loss Projections
Blog 5: Confidence Interval and Retention Level Analyses
Blog 6: Loss Cost Analysis

In our last post, we spent some time on loss projections, so let’s now focus on how they feed into confidence interval and retention level analyses. Due to the random nature of workers compensation and liability losses, an organization’s actual losses will almost never match the exact loss pick provided by their actuary. This is where confidence intervals come into play.

Confidence levels demonstrate how actual losses may vary from projected losses. This enables decision makers to assess the risk involved with their loss projection by determining both the mathematical probability that the projection will be exceeded and the potential volume of losses in cases where it is exceeded. Often, companies budget the retained portion of their losses at a level higher than expected to minimize the impact of a bad year on their balance sheet.

A common method for projecting losses at various confidence levels is a technique known as Monte Carlo loss simulation. This process works by first selecting an average frequency (or claim count) and severity (or incurred claim value) for the projected year using traditional actuarial techniques, assuming sufficient historical data is available. Historical loss data is then reviewed to determine an appropriate distribution, as well as other parameters such as standard deviation, for both frequency and severity. A lognormal distribution is often used when simulating severity for workers compensation and general liability, since it is positively skewed. Similarly, a Poisson or negative binomial distribution may be used for frequency. Other distributions may be appropriate in other cases, such as a catastrophic property claim, where a binomial distribution might be used to simulate the frequency and a normal distribution to simulate the severity.
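To make the frequency/severity setup concrete, here is a minimal sketch of a single simulated year using a Poisson frequency and a lognormal severity. The parameter values (an expected claim count of 120 and lognormal parameters implying a mean severity of roughly $35,000) are illustrative assumptions, not figures from any actual analysis.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Assumed parameters, for illustration only: in practice these would be
# fit to historical loss data using actuarial judgment.
expected_frequency = 120                 # Poisson mean claim count
severity_mu, severity_sigma = 9.5, 1.4   # lognormal parameters (log scale)

# One simulated year: draw a claim count, then a severity for each claim.
claim_count = rng.poisson(expected_frequency)
severities = rng.lognormal(severity_mu, severity_sigma, size=claim_count)

print(f"simulated claims: {claim_count}, total losses: {severities.sum():,.0f}")
```

Repeating this draw many times, as described below, builds up the aggregate loss distribution.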

After initial parameters are determined, a software program can be used to generate a random claim count and a random severity for each claim based on the distributions selected. Simulated losses are capped at any per occurrence or aggregate limits, and the total is recorded. The process is then repeated a large number of times (typically 5,000 to 10,000 iterations). The law of large numbers implies that the average of simulated losses over a large number of iterations (or years) is a close approximation of the expected losses for the projected year. Total losses for each iteration are then ranked from smallest to largest, and estimates by confidence level are summarized. The estimates at each confidence level are then attached to a percentage indicating their adequacy to fund losses in a given period. For example, an amount at the 75th percentile would be adequate to fund all losses in that period 75% of the time.
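The full loop above can be sketched as follows. All inputs, including the $250,000 per occurrence limit and $5,000,000 aggregate limit, are hypothetical values chosen only to illustrate the capping and ranking steps.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Assumed inputs, for illustration only.
expected_frequency = 120
severity_mu, severity_sigma = 9.5, 1.4
per_occurrence_limit = 250_000
aggregate_limit = 5_000_000
iterations = 10_000

totals = np.empty(iterations)
for i in range(iterations):
    n = rng.poisson(expected_frequency)
    losses = rng.lognormal(severity_mu, severity_sigma, size=n)
    capped = np.minimum(losses, per_occurrence_limit)  # per occurrence cap
    totals[i] = min(capped.sum(), aggregate_limit)     # aggregate cap

# Rank the iteration totals and read off estimates by confidence level.
for level in (50, 75, 90, 95):
    print(f"{level}th percentile: {np.percentile(totals, level):,.0f}")
```

An amount read from the 75th percentile of `totals` corresponds to the funding-adequacy interpretation described above: it would have been sufficient in 75% of the simulated years.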

A limitation of the statistical model is that a concept known as parameter risk is often not included in the calculation of the aggregate distribution. Parameter risk is the risk associated with the possibility that the model’s input parameters have been estimated incorrectly. In addition to parameter risk, company information, industry data, or professional judgment are often used in selecting input parameters when sufficient historical loss data is not available. These limitations should be understood and considered when relying on the loss simulation and confidence level analysis.

The same type of process can be applied to the creation of a retention level analysis. In the past, the retention decision was often based on little or no data and analysis, making the decision itself little more than an educated guess. With the rise of analytics, though, organizations have begun to realize that they can utilize their data to ensure they are making the most cost-effective decision.

While creating a retention level analysis can be complex, it boils down to the examination of individual claim volatility. Through the repeated simulations described above, an actuary can determine with a good degree of credibility how often various retention levels will limit losses or allow them to grow. Analyzing these results allows the risk management team to examine the loss-premium relationship more effectively and ultimately helps ensure they are making the best financial decision.
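One simple way to sketch this comparison is to simulate ground-up losses once and then apply several candidate per occurrence retentions to the same simulated years. The retention levels and distribution parameters below are hypothetical; an actual analysis would also weigh the premium savings associated with each retention.

```python
import numpy as np

rng = np.random.default_rng(seed=11)

# Assumed parameters, for illustration only.
expected_frequency = 120
severity_mu, severity_sigma = 9.5, 1.4
retentions = [100_000, 250_000, 500_000]
iterations = 10_000

# Simulate ground-up annual losses once, then apply each retention.
annual_losses = [
    rng.lognormal(severity_mu, severity_sigma, size=rng.poisson(expected_frequency))
    for _ in range(iterations)
]

mean_retained = {}
for r in retentions:
    retained = np.array([np.minimum(year, r).sum() for year in annual_losses])
    mean_retained[r] = retained.mean()
    print(f"retention {r:>7,}: mean retained {retained.mean():,.0f}, "
          f"95th percentile {np.percentile(retained, 95):,.0f}")
```

Comparing the mean and high-percentile retained losses at each level, alongside the corresponding premium quotes, is the core of the loss-premium trade-off discussed above.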

If you would like to further explore the intricacies of confidence level and retention level analyses, we’ve provided below a list of documents and videos from our own RISK66 library that will allow you to do so.

Note: the following links require a login. Register for free educational access.

PDF Resources

Video Resources

As always, feel free to contact us with any further questions, and we’d be more than happy to discuss them. We hope you’ve been enjoying the chance to learn a bit more about SIGMA’s offerings. We’ll return next month with an extensive look at loss cost analyses.

Copyright © 2023 – 2024 SIGMA Actuarial Consulting Group, Inc. All Rights Reserved.