Blog 1: Intro to SIGMA & Scope and Data Requirements
Blog 2: Loss Development Factors
Blog 3: Reserve and Cash Flow Analyses
Blog 4: Trending, Pure Loss Rates, and Loss Projections
Blog 5: Confidence Interval and Retention Level Analyses
Blog 6: Loss Cost Analysis
In our last post, we spent some time on loss projections, so let’s now focus on how they feed into confidence interval and retention level analyses. Because of the random nature of workers compensation and liability losses, an organization’s actual losses will almost never hit the exact loss pick provided by its actuary. This is where confidence intervals come into play.
Confidence levels show how actual losses may vary from projected losses. They enable decision makers to assess the risk in a loss projection by quantifying both the probability that the projection will be exceeded and the potential size of losses when it is. For this reason, companies often budget the retained portion of their losses at a level higher than expected in order to minimize the impact of a bad year on the balance sheet.
A common method of projecting losses at various confidence levels is a technique known as Monte Carlo loss simulation. The process begins by selecting an average frequency (claim count) and severity (incurred claim value) for the projected year using traditional actuarial techniques, assuming sufficient historical data is available. Historical loss data is then reviewed to determine an appropriate distribution, along with other parameters such as the standard deviation, for both frequency and severity. A lognormal distribution is often used to simulate severity for workers compensation and general liability because it is positively skewed, while a Poisson or negative binomial distribution may be used for frequency. Other distributions may be appropriate in other cases; for a catastrophic property claim, for example, a binomial distribution might be used to simulate frequency and a normal distribution to simulate severity.
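To make the parameter-selection step concrete, here is a minimal sketch (not SIGMA’s actual model) that fits a Poisson frequency and a lognormal severity to a small, purely hypothetical set of historical data using method-of-moments estimates. The claim counts, severities, and resulting parameters are illustrative only.

```python
import numpy as np

# Hypothetical historical data (illustrative only) -- in practice these would
# come from the organization's trended and developed loss history.
annual_claim_counts = np.array([42, 38, 51, 45, 47])          # claims per year
claim_severities = np.array([12_000, 3_500, 48_000, 9_200,    # incurred value per claim
                             27_000, 5_800, 15_500, 61_000])

# Frequency: Poisson with lambda equal to the average annual claim count.
freq_lambda = annual_claim_counts.mean()

# Severity: lognormal fitted by matching the sample mean and standard deviation.
m, s = claim_severities.mean(), claim_severities.std(ddof=1)
sev_sigma = np.sqrt(np.log(1 + (s / m) ** 2))
sev_mu = np.log(m) - 0.5 * sev_sigma ** 2

print(f"Poisson lambda: {freq_lambda:.1f}")
print(f"Lognormal mu: {sev_mu:.3f}, sigma: {sev_sigma:.3f}")
```

In practice, the choice of distribution and parameters would also reflect goodness-of-fit testing and professional judgment, not just moment matching.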
After the initial parameters are determined, a software program generates a random frequency for each simulated year and a random severity for each of the resulting claims, based on the distributions selected. Simulated losses are capped at any per-occurrence or aggregate limits, and the total is recorded. The process is then repeated a large number of times (typically 5,000 to 10,000 iterations). By the law of large numbers, the average of the simulated totals over that many iterations (or years) closely approximates the expected losses for the projected year. The totals for each iteration are then ranked from smallest to largest, and estimates by confidence level are summarized. The estimate at each confidence level corresponds to the percentage of time it would be adequate to fund losses in a given period; for example, an amount at the 75th percentile would be adequate to fund all losses in that period 75% of the time.
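The simulation loop itself can be sketched in a few lines. The parameters and limits below (a $250,000 per-occurrence limit and a $2,000,000 aggregate limit) are hypothetical and chosen only to illustrate the mechanics described above.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical parameters (e.g., from the fitting step above) -- illustrative only.
freq_lambda = 44.6              # expected annual claim count (Poisson)
sev_mu, sev_sigma = 9.4, 1.1    # lognormal severity parameters
per_occurrence_limit = 250_000
aggregate_limit = 2_000_000
n_iterations = 10_000

totals = np.empty(n_iterations)
for i in range(n_iterations):
    n_claims = rng.poisson(freq_lambda)                           # random frequency
    severities = rng.lognormal(sev_mu, sev_sigma, size=n_claims)  # random severities
    capped = np.minimum(severities, per_occurrence_limit)         # per-occurrence cap
    totals[i] = min(capped.sum(), aggregate_limit)                # aggregate cap

# Rank the simulated totals and summarize at selected confidence levels.
for pct in (50, 75, 90, 95):
    print(f"{pct}th percentile: {np.percentile(totals, pct):,.0f}")
print(f"Mean of simulated totals (expected losses): {totals.mean():,.0f}")
```

Reading the output, the 75th-percentile figure is the amount that would have been adequate to fund losses in 75% of the simulated years.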
One limitation of the statistical model is that a concept known as parameter risk is often not included in the calculation of the aggregate distribution. Parameter risk is the risk that the model’s input parameters have been estimated incorrectly. In addition, when sufficient historical loss data is not available, company information, industry data, or professional judgment are often used to select the input parameters. These limitations should be understood and considered when relying on the loss simulation and confidence level analysis.
The same type of process can be applied to a retention level analysis. In the past, retention decisions were often based on little or no data and analysis, making the decision itself little more than a guess. With the rise of analytics, however, organizations have begun to realize that they can use their data to ensure they are making the most cost-effective decision.
While building a retention level analysis can be complex, it boils down to examining individual claim volatility. Through the repeated simulations described above, an actuary can determine with a good degree of credibility just how often various retention levels will limit losses and how often losses will grow beyond them. Analyzing these results allows the risk management team to examine the loss-premium relationship more effectively and ultimately helps ensure that they are making the best financial decision.
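As a rough illustration of the idea (again using hypothetical parameters, not an actual client analysis), the same simulated claims can be run through several candidate per-occurrence retentions so the options are compared on identical simulated experience:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical frequency/severity parameters and candidate retentions (illustrative only).
freq_lambda = 44.6
sev_mu, sev_sigma = 9.4, 1.1
candidate_retentions = [100_000, 250_000, 500_000]
n_iterations = 10_000

# Simulate ground-up claims once, then apply each retention to the same claims.
simulated_years = [rng.lognormal(sev_mu, sev_sigma, size=rng.poisson(freq_lambda))
                   for _ in range(n_iterations)]

for retention in candidate_retentions:
    retained = np.array([np.minimum(year, retention).sum() for year in simulated_years])
    print(f"Retention {retention:>9,}: "
          f"expected {retained.mean():>12,.0f}, "
          f"75th pct {np.percentile(retained, 75):>12,.0f}, "
          f"95th pct {np.percentile(retained, 95):>12,.0f}")
```

Comparing the expected and higher-percentile retained losses at each retention, alongside the corresponding premium credits, is what lets the risk management team weigh the loss-premium trade-off described above.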
If you would like to explore the intricacies of confidence level and retention level analyses further, we’ve listed below documents and videos from our RISK66 library that will allow you to do so.
Note: the following links require a login to RISK66.com. Register for free educational access.
PDF Resources
- Confidence Interval Analysis Snapshot - A brief definition and description of a confidence interval analysis and how it is utilized by risk management professionals.
- Limits and Retention Analysis Snapshot - A discussion about limits and retention analysis and how actuarial analyses support making decisions related to this topic.
- Confidence Interval TechTalk - An article about how the confidence interval supports an informed decision that considers the potential for higher loss levels.
- Loss Simulation and Confidence Interval - A PDF document that illustrates how loss simulations and confidence level analyses are used when budgeting losses for future years.
Video Resources
- Utilizing a Confidence Level Analysis - This video explains how a Confidence Level Analysis is an important tool when budgeting for projected losses. It also covers the concepts of volatility of losses as well as risk margins.
- Volatility of Losses at Different Confidence Levels - This video discusses how a confidence level analysis is created around a loss pick.
As always, feel free to contact us with any further questions, and we’d be more than happy to discuss them. We hope you’ve been enjoying the chance to learn a bit more about SIGMA’s offerings. We’ll return next month with an extensive look at loss cost analyses.