Confidence Interval and Retention Level Analyses

Timothy L. Coomer, Ph.D.

Blog 1: Intro to SIGMA & Scope and Data Requirements
Blog 2: Loss Development Factors
Blog 3: Reserve and Cash Flow Analyses
Blog 4: Trending, Pure Loss Rates, and Loss Projections
Blog 5: Confidence Interval and Retention Level Analyses
Blog 6: Loss Cost Analysis

In our last post, we spent some time on loss projections, so let’s now focus on how they feed into confidence interval and retention level analyses. Because workers compensation and liability losses are inherently random, an organization’s actual losses will almost never match the exact loss pick provided by its actuary. This is where confidence intervals come into play.

Confidence levels show how actual losses may vary from projected losses. They enable decision makers to assess the risk in a loss projection by quantifying both the probability that the projection will be exceeded and the potential volume of losses when it is. Companies often budget the retained portion of their losses at a level higher than expected in order to minimize the impact of a bad year on the balance sheet.

A common method for projecting losses at various confidence levels is Monte Carlo loss simulation. The process begins by selecting an average frequency (claim count) and severity (incurred claim value) for the projected year using traditional actuarial techniques, assuming sufficient historical data is available. Historical loss data is then reviewed to determine an appropriate distribution, along with other parameters such as the standard deviation, for both frequency and severity. A lognormal distribution is often used to simulate severity for workers compensation and general liability, since it is positively skewed, while a Poisson or negative binomial distribution may be used for frequency. Other distributions apply in other cases: for a catastrophic property claim, for example, a binomial distribution might be used to simulate the frequency and a normal distribution the severity.
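To make the parameterization step concrete, here is a minimal sketch of how a target mean and standard deviation for severity can be converted into lognormal parameters, and how frequency can be drawn from a Poisson distribution. All numeric inputs (a $25,000 mean severity, $75,000 standard deviation, 40 expected claims) are hypothetical illustrations, not values from the post.

```python
import numpy as np

def lognormal_params(mean, sd):
    """Convert a target mean and standard deviation into the
    mu/sigma parameters of a lognormal distribution (method of moments)."""
    sigma2 = np.log(1.0 + (sd / mean) ** 2)
    mu = np.log(mean) - sigma2 / 2.0
    return mu, np.sqrt(sigma2)

rng = np.random.default_rng(42)
# Hypothetical severity selections: mean $25,000, standard deviation $75,000.
mu, sigma = lognormal_params(mean=25_000, sd=75_000)
severities = rng.lognormal(mu, sigma, size=10)   # ten simulated claim values
claim_counts = rng.poisson(lam=40, size=10)      # ten simulated annual claim counts
```

The positive skew of the lognormal matches the shape of most casualty severity data: many small claims, a long right tail of large ones.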

After the initial parameters are determined, a software program generates a random frequency and a random severity for each claim based on the selected distributions. Simulated losses are capped at any per occurrence or aggregate limits, and the total is recorded. The process is then repeated a large number of times (typically between 5,000 and 10,000 iterations). By the law of large numbers, the average of the simulated totals over many iterations closely approximates the expected losses for the projected year. The total losses for each iteration are then ranked from smallest to largest, and estimates are summarized by confidence level. The estimate at each confidence level carries a percentage indicating its adequacy to fund losses in a given period. For example, an amount at the 75th percentile would be adequate to fund all losses in that period 75% of the time.

A limitation of the statistical model is that a concept known as parameter risk is often not included in the calculation of the aggregate distribution. Parameter risk is the risk that the model’s input parameters have been estimated incorrectly. A related limitation is that when sufficient historical loss data is not available, input parameters are often selected using company information, industry data, or professional judgment. These limitations should be understood and considered when relying on the loss simulation and confidence level analysis.

The same type of process can be applied to a retention level analysis. In the past, the retention decision was often based on little or no data and analysis, making it little more than an educated guess. With the rise of analytics, though, organizations have begun to realize that they can use their own data to ensure they are making the most cost-effective decision.

While creating a retention level analysis can be complex, it boils down to examining individual claim volatility. Through the repeated simulations described above, an actuary can determine with a good degree of credibility how often various retention levels will limit losses and how often losses will be allowed to grow. Analyzing these results lets the risk management team examine the loss-premium relationship more effectively and ultimately helps ensure they are making the best financial decision.
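The comparison across candidate retentions can be sketched by re-running the same simulated claims with each retention applied as a per-occurrence cap. Again, every parameter here (the frequency mean, severity parameters, and the three candidate retentions) is a hypothetical placeholder; in practice these would come from the organization's loss history and the quoted premium at each retention.

```python
import numpy as np

def retained_losses(retention, n_iter=10_000, freq_mean=40,
                    sev_mu=9.0, sev_sigma=1.5, seed=0):
    """Simulate annual retained losses with each claim capped at the
    given per-occurrence retention. Using a fixed seed applies every
    candidate retention to the same simulated claims."""
    rng = np.random.default_rng(seed)
    totals = np.empty(n_iter)
    for i in range(n_iter):
        claims = rng.lognormal(sev_mu, sev_sigma, size=rng.poisson(freq_mean))
        totals[i] = np.minimum(claims, retention).sum()
    return totals

for retention in (100_000, 250_000, 500_000):   # hypothetical candidates
    t = retained_losses(retention)
    print(f"retention {retention:>9,}: expected retained {t.mean():>12,.0f}  "
          f"75th percentile {np.percentile(t, 75):>12,.0f}")
```

Setting the expected retained losses (plus their volatility at a chosen confidence level) against the premium savings at each retention is what turns the simulation into a retention decision.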

If you would like to further explore the intricacies of confidence level and retention level analyses, we’ve provided below a list of documents and videos from our own RISK66 library that will allow you to do so.

Note: the following links require a login to RISK66.com. Register for free educational access.

PDF Resources

Video Resources

As always, feel free to contact us with any further questions, and we’d be more than happy to discuss them. We hope you’ve been enjoying the chance to learn a bit more about SIGMA’s offerings. We’ll return next month with an extensive look at loss cost analyses.

Copyright © 2023 – 2024 SIGMA Actuarial Consulting Group, Inc. All Rights Reserved.