As part of our new CAS Research Review series, CAS is highlighting important papers and articles you may have missed! Here we are speaking with Don Closter, ACAS, MAAA, co-author of Predictive Models: A Practical Guide for Practitioners and Regulators.
What is the objective of this paper, and who is it for?
The objective of this paper is to give those who need to review pricing developed using predictive models, specifically Generalized Linear Models (GLMs), an overview of how these models work and what to look for when evaluating them. It describes the basic modeling process, highlights the questions to ask, and identifies the data and graphs that should be reviewed. Using this information, a reviewer should be able to reasonably validate model results.
What problem(s) does this paper aim to solve?
GLMs are complex, detailed mathematical models that generate rate relativities for use in pricing, underwriting, and other areas. Those who need to evaluate the quality and accuracy of this output include regulators, insurance company management, and even other actuaries evaluating a model presented by an outside source. Reviewers typically do not have the time, data, software, or expertise to recreate the results, but they do need to be comfortable that the model output is reasonable and predictive. This paper was written to assist those who review GLMs.
Besides defining GLMs, what other topics does this paper cover?
1) A description of how data, variables, and adjustments are used as input to a GLM. This section gives the reviewer a basic understanding of the data considerations that go into building a model, as well as an idea of how these models are put together.
2) How GLMs are built and ultimately transformed into rating algorithms. Once the data has been collected and prepared, it is time to build a model. This section gives the reviewer a high-level overview of how models are built and includes comments about testing a model’s validity. Once a model has been validated, it can be used to develop a final rating algorithm, which is where business adjustments are considered. In this way the final rating algorithm contains not only the validated mathematical model, but also relevant “real world” adjustments. A brief illustration of these steps appears below.
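To make the mechanics concrete, here is a minimal sketch, not taken from the paper, of fitting a claim-frequency GLM and exponentiating its coefficients into multiplicative rate relativities. The data frame, the column names, and the Poisson/log-link choice are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data: claim counts, earned exposure, and two
# categorical rating variables (all names and values are made up).
policies = pd.DataFrame({
    "claim_count": [0, 1, 0, 2, 1, 1, 0, 0],
    "exposure":    [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 1.0, 0.6],
    "territory":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "vehicle_use": ["pleasure", "commute", "commute", "business",
                    "pleasure", "commute", "business", "pleasure"],
})

# A frequency model: Poisson GLM with a log link and a log(exposure) offset.
model = smf.glm(
    "claim_count ~ C(territory) + C(vehicle_use)",
    data=policies,
    family=sm.families.Poisson(),          # log link is the Poisson default
    offset=np.log(policies["exposure"]),
).fit()

# With a log link the model is multiplicative, so exponentiated coefficients
# act as rate relativities: exp(Intercept) is the base frequency per unit of
# exposure, and each other term is the factor applied for that level.
relativities = np.exp(model.params)
print(relativities.round(3))
```

In practice, the fitted relativities would then be reviewed and, as the paper discusses, adjusted for business considerations before they appear in the final rating algorithm.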
The paper then explores two key questions reviewers should ask when evaluating a model. This is the pragmatic part of the paper: two general questions, each with several sub-parts, describing what a reviewer should evaluate to get comfortable that the model and the resulting rating algorithm are appropriate. Those two questions are:
- Is the model predictive? (a simple illustration of one such check follows this list)
- What adjustments were made to the model when building the final rating algorithm?
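As an illustration of the first question, one common way a reviewer might assess whether a model is predictive is to compare actual and predicted results on a holdout sample, grouped by predicted-risk band (a simple lift-style comparison). The sketch below uses simulated data and illustrative names; it is an assumption-laden example, not the paper’s prescribed method.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000

# Simulated policies in which one rating variable drives the true frequency.
df = pd.DataFrame({
    "territory": rng.choice(["A", "B", "C"], size=n),
    "exposure": rng.uniform(0.5, 1.0, size=n),
})
true_freq = df["territory"].map({"A": 0.05, "B": 0.10, "C": 0.20})
df["claim_count"] = rng.poisson(true_freq * df["exposure"])

# Fit on one half of the data; hold out the other half for validation.
train = df.iloc[: n // 2]
holdout = df.iloc[n // 2 :].copy()

model = smf.glm(
    "claim_count ~ C(territory)",
    data=train,
    family=sm.families.Poisson(),
    offset=np.log(train["exposure"]),
).fit()

# Predicted expected claim counts on the holdout data.
holdout["predicted"] = model.predict(holdout, offset=np.log(holdout["exposure"]))

# Group the holdout by predicted-risk band and compare actual to predicted.
# If the model is predictive, actual frequencies should rise with the
# predictions and track them reasonably closely.
holdout["band"] = pd.qcut(holdout["predicted"], q=5, duplicates="drop")
lift = holdout.groupby("band", observed=True)[["claim_count", "predicted"]].mean()
print(lift)
```

In practice a reviewer would usually examine the model builder’s own holdout or lift exhibits rather than refitting the model; the sketch simply shows what such a comparison looks like.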
To access the full paper, visit the CAS website.