Statistics in the Actuarial Skill Set

Last November, the CAS Board of Directors voted to support the following vision of the level of understanding required for membership in the CAS: “All CAS members should be competent in the application of casualty actuarial techniques. CAS Fellows should not only be able to apply such techniques, but be able to synthesize such methodology and exercise complex judgment to bring those tools to bear in developing practical solutions to business problems not necessarily encountered before. Inherent in this ability is that Fellows be able to clearly communicate this understanding and complex judgment including inherent assumptions made and limitations in the approach taken to another party.”

The CAS basic education process provides the foundation and tools for its members to accomplish this.

Currently, I am engaging in a discussion about the right amount of statistics that should be explicitly tested within the CAS basic education process. In the current process, there are two education requirements that explicitly focus on statistics: The VEE topic of applied statistical methods and the statistics section of Exam 3L. (Details are available in the Syllabus of Basic Education).

Statistics play an important role in actuarial work and in using modeling tools that are currently available.

The CAS must choose the specific topics to be tested. Within the education process, some topics are considered prerequisites (e.g., algebra and calculus). Some are tested at the familiarity level (e.g., topics on new Online Courses 1 and 2).

My question to you is, are statistics covered at the appropriate level in the current basic education structure? If not, should there be less coverage or more coverage?

About David Menning

David Menning is the 2010-2011 Vice President-Admissions of the Casualty Actuarial Society. He is the Countrywide Pricing Director for State Farm Mutual Automobile Insurance Company in Bloomington, Illinois.

33 Responses to Statistics in the Actuarial Skill Set

  1. Jon Evans says:

    It’s amazing how “far” we have come. The original name of the CAS in 1914 was:

    The Casualty Actuarial and Statistical Society

    After almost 100 years statistics is only about 1/3 of Exam 3L (2.5 hour exam), or 50 minutes of testing.

    In the original 1915 syllabus theory of statistics was about 1/4 of Associateship Part III (6 hour exam) and practical problems in statistics was about 1/4 of Associateship Part IV (6 hour exam), a total of about 3 hours of testing.

    • Bill Myers says:

      If one would like to evaluate our most recent history, it should be pointed out that Statistics used to be tested on joint Exam 110. The Y2K changes removed Statistics (but added VEE-Stat requirement). In 2005, Statistics was added back to Exam 3, where it currently resides. Also, please note that joint Exam 4 covers Statistical concepts at a more advanced level and some candidates may actually benefit from having CAS Exam 3 prior to taking joint Exam 4.

      • Jon Evans says:

        It is true that Exam 4 covers statistical estimation for loss models, but that is a somewhat narrow specialized application even within various actuarial models. There are many other probabilistic models for which statistical estimation is not really tested: time series, loss development, GLMs, etc.

        It would surely seem that, given the nature of actuarial work, the syllabus should be more generous with the space allocated to statistics. Almost everything we do is probability and statistics applied to risk accounting.

        Sometimes when people ask me what I do and I do not wish to explain “actuary”, I tell them that I am an “insurance statistician”.

        To a first order approximation:

        actuary = insurance statistician

  2. Stephen Collins says:

    The low-hanging fruit would be to add “statistics” to one of the “other disciplines” on the casact.org “About CAS” page.

    Higher-hanging fruit is to ask how much of the future actuary’s work will be “statistics”-type work versus finance, economics, law, or communication. This is an allocation question: the CAS has to allocate a finite number of exam hours among competing fields of study. The CAS wants the highest return on its investment of exam hours for a given level of risk. The return is rewarding jobs for individuals seeking the FCAS credential. The risk is future job demand for what the CAS exams teach, in addition to competition from competing credentials and degrees.

    With data piling up at insurers, if this were my portfolio, I would be allocating more exam hours to teaching future actuaries how to predict dependent variables of all flavors, describe differences between known groups, cluster data with unknown groups, handle large datasets efficiently, and write computer programs to solve objective functions for unknown parameters. I would take hours from the areas of finance, law, economics, and actuarial science’s more deterministic methods. This route is a little riskier, since I see several fields clamoring for the data analytics space. However, I believe the CAS has the potential to earn a high return.
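
    As a minimal sketch of what “solving an objective function for unknown parameters” can look like in practice, here is a toy Poisson regression fit by numerically maximizing the log-likelihood; the data and variable names below are invented purely for illustration.

    ```python
    # Minimal sketch: fit a Poisson regression by numerically maximizing the
    # log-likelihood. The data are simulated purely for illustration.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one rating variable
    true_beta = np.array([-0.5, 0.3])
    y = rng.poisson(np.exp(X @ true_beta))                  # simulated claim counts

    def neg_log_likelihood(beta):
        mu = np.exp(X @ beta)                               # Poisson mean with a log link
        return -(y * np.log(mu) - mu).sum()                 # constant term dropped

    fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
    print("estimated coefficients:", fit.x)
    ```

    The same pattern, writing down an objective and handing it to an optimizer, carries over to most of the “flavors” of dependent variable mentioned above.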

  3. Glenn Meyers says:

    Mr. Collins above advocates “allocating more exam hours to teaching future actuaries how to predict dependent variables of all flavors.” Recent developments in Bayesian statistics make this possible with a modest amount of education over that required in our current Exam 4. The developments I am referring to are Markov chain Monte Carlo (MCMC) methods. A while back I wrote an introductory article in the “Brainstorms” column of the November 2009 Actuarial Review. The strength of these methods is that they provide an intuitively appealing way to quantify the uncertainty in our estimates.

    To implement a Bayesian MCMC model of any “flavor,” the most important input is the likelihood function for your data. The output is a simulation of your model’s parameters representing the posterior distribution. The posterior distribution of the parameters represents “parameter risk.” For a given set of parameters from your simulation, you can then simulate “process risk.” I have illustrated some examples of this process in other “Brainstorms” columns, in the February 2008 and February 2011 editions of the Actuarial Review.

    My guess is that the essentials of Bayesian MCMC methods suitable for Exam 4-level students could be described in about 30 pages. This would include examples with the various “flavors” of dependent variables that are currently on that exam. Such a paper would replace the need for studying the uncertainty as quantified by the Fisher information matrix. MCMC methods are simpler and do not rely on the questionable asymptotic assumption made in applying the Fisher information matrix.
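
    A minimal sketch of that workflow, assuming a toy Poisson claim-count model with a normal prior on the log of the mean (the data, prior, and tuning choices are invented for illustration and are not taken from the Brainstorms columns):

    ```python
    # Sketch of Bayesian MCMC for a Poisson claim-count model: the likelihood goes in,
    # posterior parameter draws come out ("parameter risk"), and process risk is
    # simulated on top of each posterior draw.
    import numpy as np

    rng = np.random.default_rng(1)
    y = np.array([12, 9, 15, 11, 14, 10])                    # observed claim counts (illustrative)

    def log_posterior(log_theta):
        theta = np.exp(log_theta)
        log_lik = (y * np.log(theta) - theta).sum()          # Poisson log-likelihood, constants dropped
        log_prior = -0.5 * ((log_theta - np.log(10)) / 0.5) ** 2   # normal prior on log(theta)
        return log_lik + log_prior

    # Random-walk Metropolis: each retained draw is a sample from the posterior of theta.
    draws, current = [], np.log(10.0)
    for _ in range(20000):
        proposal = current + rng.normal(scale=0.1)
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
            current = proposal
        draws.append(current)
    theta_post = np.exp(np.array(draws[5000:]))              # drop burn-in: parameter risk

    # Process risk: for each posterior draw of theta, simulate a new year's claim count.
    predictive = rng.poisson(theta_post)
    print("posterior mean of theta:", theta_post.mean())
    print("95% predictive interval:", np.percentile(predictive, [2.5, 97.5]))
    ```

    Real applications would use richer models and better samplers, but the structure is the same: likelihood in, posterior parameter draws out, process simulation on top.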

  4. I had an idea walking home after posting the above. It’s related to how the SOA (FSA credential) offers different paths to fellowship: you can choose one of five subject matters (health, life, pension, investments, or finance/ERM), depending on what sort of work you want to do. I’m contrasting this with the CAS, which right now has only one path to fellowship.

    What if the CAS introduced a second path to fellowship (FCAS credential), where the subject matter is predictive modeling?

    I’m thinking the credential track would be something like

    1. take the preliminary exams (P, FM, M, C)
    2. move all rate-making, reserving, and law to online modules
    3. add statistical modules for supervised learning, unsupervised learning, and computer programming (possibly in C++ or R)
    4. finally, at certain times throughout the year, the CAS could open a one-month competition through the online host kaggle.com. The “test” would be to submit an entry that beats a benchmark on an error metric (SSE, deviance, etc.). This would be similar to the COTOR challenges from a couple of years ago.

    What are the benefits to CAS for creating a separate specialized path to fellowship?

    1. Echoing the SOA’s push (with CERA, finance/ERM) to frame actuaries as useful to any business, not just insurance, the predictive modeling path to fellowship would teach a skill set that is useful in and outside of the insurance industry.
    2. While serving many different industries, this path to fellowship would also attract candidates from several different fields that might not currently be seeking an FCAS credential, including physics, computer science, social science, or statistics.
    3. The path to fellowship (as stated above) would be a good blend of book knowledge, online learning, and applied testing of “know-how.”
    4. I think the CPA credential has a really nice monopoly over accounting work. What credential do you go after if you like using regression to solve business problems? Do I become a SAS Predictive Modeler? Do I get a PhD in Statistics? It is hard for the market to fill its demand for this sort of data analytics work if there is no clear signal (credential). It’d be nice to have the CAS give the market a clear signal in the form of a path to fellowship designed to teach these skills.
    5. Finally, as a logistical point, with the Kaggle.com competitions your entry is scored in real time on the leaderboard, so there is no waiting and no graders are needed.

    Let me wrap this up. What will be the primary job tasks of the future actuary, for which CAS exams should teach the relevant skills? Ratemaking and reserving are definitely in scope. More and more, insurers need predictive modeling to fit sophisticated rating plans. Will one path to fellowship prepare candidates for all three primary tasks? My opinion is that we could improve the future candidate’s situation by allowing them to choose a more specialized path for the particular subject matter of predictive modeling.

    • Glenn Meyers says:

      I wanted to think about this idea of a “second path to fellowship” before responding. I was on the CAS Board of Directors when we set up the latest revision of our exam syllabus. The idea of separate paths to fellowship, which I had some sympathy for, was considered and soundly rejected. Given the scarcity of exam space, I see no point in trying to make significant additions to the statistics content on the exams. (My Bayesian MCMC proposal is short, and can replace other statistics material which is currently on the syllabus.)

      As about 10 of my 37 years as an actuary were spent working for statisticians, I have come to recognize that there are shortcomings in the statistical education of actuaries. In my current position, working in a predictive analytics group, I also see the shortcomings of statisticians when they attempt to do actuarial work. In our group we do fine, as we all talk to each other frequently and we do involve actuaries in other areas here at ISO.

      In general I think we train actuaries enough to appreciate statistical work, but we don’t train actuaries enough, in the era of computers and large data sets, to actually do what is now considered statistical work. Those actuaries who do modern statistical work learn it either through on-the-job training, as I did, or through education beyond fellowship.

      As a profession, we should ask whether we want to cede the statistical work to those more qualified, or to take it on ourselves. If we choose the latter, we should consider finding a way for the CAS to officially recognize this training, post-fellowship (or perhaps post-associateship). The CERA provides a precedent for such recognition.

      • At my current employer, it seems the field of predictive modeling for rate plan development is being carved out from the actuarial pricing group. Exams are not required for my group. They are also not a strong focus in recruiting. I’ve heard that similar carving out is being done at other insurers. This is enough evidence for me that employers are starting to believe the FCAS is not a priority for the job skills needed for predictive modeling in insurance.

        Here are some questions, with my current opinion attached:
        1. Is the CAS losing ground in the predictive modeling space? (Yes)
        2. If true, does CAS want to reclaim predictive modeling as an actuarial function? (I hope so)
        3. If true, is a small syllabus change needed, or a large change? (Large change, adding MCMC won’t do)
        4. What will be the brand of the new statistical material taught by the CAS to reclaim predictive modeling? Will it be the FCAS brand, or some other brand, similar to how the SOA created a new brand with the CERA? (I hope they use the FCAS brand; a new brand will suffer from the hurdle of obscurity.)
        5. How could the CAS support a large change in syllabus material? (Online learning modules to teach the new material, and an applied test of knowledge via the Kaggle website.)

        I believe Glenn competed in the COTOR challenges; my vision would be something similar, but hosted by a professional website. Modifying what I said above, I believe the Kaggle “benchmark” would have to be set after the competition closes, to better curve the exam. In addition, exam candidates might be required to submit model documentation. Also, you might say candidates wouldn’t be allowed to enter the competition without passing the online learning and the preliminary exams.

      • Rajesh Sahasrabuddhe says:

        I generally don’t like making postings that add nothing to the discussion, but I couldn’t help myself here:
        Well said Glenn!

  5. Eric Mann says:

    I was so disappointed with the statistics education that was part of the exams and VEEs that I quit my job and went to graduate school to pursue a master’s in statistics. In the exams I memorized formulas and processes to solve problems. I focused on mindless memorization not because I didn’t want to learn, but because in the exams there is no time to really think about a problem. In graduate school I learned how to apply a framework for thinking about problems and use that framework to solve them.

    Mr. Meyers makes a great point that Bayesian methods are superior and could easily be incorporated and taught. But Bayesian statistics is already on Exam C. And I believe that if you were to ask most recently credentialed actuaries, they would be able to determine the posterior from a prior and conditional likelihood, but they wouldn’t know what that posterior was used for or how to interpret it. And Markov chains are also already on Exam 3.
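
    To make the “what is it used for” point concrete, here is the standard Gamma-Poisson conjugate example (generic textbook material, not tied to any particular exam question), including the step that shows what the posterior actually buys you:

    ```latex
    % Gamma-Poisson conjugacy: prior times likelihood gives the posterior,
    % and the posterior mean is the credibility-weighted estimate.
    % Prior: \theta \sim \mathrm{Gamma}(\alpha,\beta);
    % data:  y_1,\dots,y_n \mid \theta \sim \mathrm{Poisson}(\theta).
    \begin{align*}
    \pi(\theta \mid y)
      &\propto \underbrace{\theta^{\sum_i y_i} e^{-n\theta}}_{\text{likelihood}}
       \cdot \underbrace{\theta^{\alpha-1} e^{-\beta\theta}}_{\text{prior}}
       = \theta^{\alpha+\sum_i y_i-1} e^{-(\beta+n)\theta},\\
    \theta \mid y &\sim \mathrm{Gamma}\!\left(\alpha+\textstyle\sum_i y_i,\ \beta+n\right),\\
    \mathrm{E}[\theta \mid y]
      &= \frac{\alpha+\sum_i y_i}{\beta+n}
       = Z\,\bar{y} + (1-Z)\,\frac{\alpha}{\beta},
       \qquad Z=\frac{n}{n+\beta}.
    \end{align*}
    ```

    The last line is the interpretation piece: the posterior mean is the credibility-weighted estimate of next period’s expected claim frequency, and drawing from the posterior (or the resulting predictive distribution) is how you would put an interval around it.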

    In general, I strongly believe there should be more statistics. But there should be more problems that involve analyzing and interpreting general output and less involving mindless memorization.

    • Glenn Meyers says:

      Eric:
      Exams 3 and C may have changed since I last looked at them, but in spite of the fact that these exams contain both Bayesian statistics and Markov chains, they do not “connect the dots” to show the very intuitive and useful way that MCMC methods can be applied to real-world actuarial problems. I am glad that you brought that up, as it shows that actuaries are already studying the building blocks. If we provide some good actuarial examples, as I think I do in my Actuarial Review articles, we will know what to do with a posterior.

      • Eric Mann says:

        Glenn,

        I completely agree that those two exams don’t connect the dots. I suppose that is the crux of my criticism. The exams teach actuaries a lot of statistics but there is not enough dot connecting.

  6. Kartik Patel says:

    Rename 3L as “P&C Contingencies” … that will give a more appropriate perspective when designing the curriculum for 3L.

  7. Jon Evans says:

    I definitely agree with Glenn Meyers that we need to add some Bayesian MCMC, which I would describe as teaching how to set up fairly general Bayesian network models, feed in observations, and use Gibbs sampling to estimate distributions.
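
    In its simplest form, assuming a plain normal model with conjugate priors rather than a full Bayesian network, a Gibbs sampler just cycles through the parameters, drawing each from its conditional distribution given the current values of the others. A minimal sketch, with invented priors and data:

    ```python
    # Gibbs sampler for y_i ~ Normal(mu, sigma2) with conjugate priors:
    # mu ~ Normal(m0, t0sq), sigma2 ~ Inverse-Gamma(a0, b0).
    # Each step draws one parameter from its full conditional given the other.
    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.normal(loc=5.0, scale=2.0, size=50)          # illustrative observations
    n, ybar = len(y), y.mean()
    m0, t0sq, a0, b0 = 0.0, 100.0, 2.0, 2.0              # weak priors (assumed)

    mu, sigma2 = ybar, y.var()
    mu_draws, s2_draws = [], []
    for _ in range(10000):
        # mu | sigma2, y  ~  Normal(post_mean, post_var)
        post_var = 1.0 / (1.0 / t0sq + n / sigma2)
        post_mean = post_var * (m0 / t0sq + n * ybar / sigma2)
        mu = rng.normal(post_mean, np.sqrt(post_var))
        # sigma2 | mu, y  ~  Inverse-Gamma(a0 + n/2, b0 + 0.5*sum((y - mu)^2))
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * ((y - mu) ** 2).sum()))
        mu_draws.append(mu)
        s2_draws.append(sigma2)

    print("posterior mean of mu:", np.mean(mu_draws[2000:]))
    print("posterior mean of sigma2:", np.mean(s2_draws[2000:]))
    ```

    In a larger Bayesian network the same idea applies: cycle through the nodes, drawing each from its conditional distribution given the current values of the rest.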

    I also agree in principle with Stephen Collins’ description of competition between subjects for syllabus space and the need to anticipate future demands from clients/employers.

    I think the CAS and SOA have been bad in recent times about deciding in relative abstraction what services the market should demand rather than what it does demand:

    1. The Syllabus has almost ignored cat modeling (hurricanes, earthquakes, etc.) and mass latent tort reserving (asbestos, environmental, products, etc.), which are key to our core skills of ratemaking and reserving and in heavy demand.
    2. Reserving, a service which society almost uniquely entrusts to actuaries, has tended to be only 1/2 of one exam.
    3. DFA (recently rebranded ERM) has been pushed mightily despite having an unclear definition and unclear future prospects for service demand.
    4. In the year 2000 the SOA stripped out vital material about “blue books” (the Life equivalent of PC “yellow books”) from their Syllabus and had to reverse the decision after a nasty backlash from employers.

    Another big problem is that, although in principle Syllabus space is a finite resource, the scarcity has been made much worse by the push over the last decade to strip down the exam structure and minimize travel time. Is 40-50 hours of total testing time unreasonable given that we started with 36 hours in 1915?

  8. Pat Teufel says:

    Delighted to see that we have hit upon a topic on the blog that is generating active discussion! Keep the comments coming!

  9. Mike Larsen says:

    The analytical tools that are now available to us on our desktops have changed quite a bit in the last decade. Bayesian MCMC modeling is a good example where advances in computing horsepower and in the statistical modeling software available have made this a realistic modeling option today, assuming one has the background to apply those concepts. Our syllabus has not kept pace with changing technology, and we should do more to prepare our candidates to work in this changing environment.
    How far we can go in preparing our candidates to work in this changing environment through our exam syllabus is an open question, but we should move in that direction. Predictive modeling is now the accepted means to develop new class or rating plans, and, in some cases, actuaries are viewed as relatively expensive given the skill set most of us have compared to statisticians.

  10. Roosevelt Mosley says:

    I think this is a very important question, and I appreciate the fact that the CAS is engaging in this discussion. Let me look at the issue from a slightly different angle in an attempt to provide some thoughts on Dave Menning’s question.

    From the statistical perspective, specifically within the world of predictive analytics, I agree with some of the other posts that there is a lot of opportunity. There are three fundamental skill sets that underlie any predictive analytics task: data processing and manipulation skills, the application of statistical and modeling techniques to the data, and an understanding of the business process to which you are applying the analytics. Adding material to the CAS basic education process could address the theory behind some of the statistical techniques, but the application of these techniques would be difficult to test in a traditional environment. Basic concepts of data processing could also be tested under the current testing system, but this would be of limited value. There are currently two certifications that I know of that focus on predictive modeling, and both of those are tested using actual software and data as part of the assessment. So for the first two elements of a predictive analytics project, adding some statistics to the syllabus would help, but to truly prepare an actuary to work in this environment, expanding that to include an understanding of data and an assessment using some actual data and modeling processes would be ideal.

    The last piece of the analysis, and arguably the most important, is the understanding of the business process to which you are applying analytics; this expands somewhat on the question that Dave asked. When thinking about the application of predictive analytics to rate plan development, I agree that while actuaries still have a strong presence in this area, there are many non-actuaries performing this work as well. I believe adding some statistical content, especially as it relates to techniques beyond GLMs, will help solidify our position in analytics related to ratemaking.

    The more difficult question to answer is how we apply actuarial and analytic skills more in the non-ratemaking insurance areas, such as claims, marketing, pure underwriting, etc., and ultimately outside of P&C insurance. It is in these areas that we have a more limited presence, and the demand is being met by non-actuaries. I don’t think the answer in this case is to devote more syllabus time to understanding the insurance and other functions more thoroughly, but clearly there are skills that actuaries have that can be beneficial in these other areas. The challenge is how to break through to these areas in a significant way. This is a bit off topic and probably better suited for a separate discussion stream.

    So in short, I think adding some statistics will help, but the greater benefit would come from hands on preparation and training that would require a different testing or assessment process.

    • Well said. I agree that predictive modeling subject matter requires a different testing approach. What do you think of the Kaggle competition (above) as a form of applied examination?

      • Perhaps, with some modifications. The Kaggle approach is geared toward results, not necessarily education. The Kaggle approach might actually work better as part of research, giving members an incentive to attack an issue with some innovative solutions.

        There is currently a contest Kaggle is sponsoring to predict bodily injury claim costs as a function of vehicle characteristics. Actuaries should be all over this one, but it would be very interesting to see, of those that are participating, how many are non-actuaries.

        Anthony Goldbloom, the CEO of Kaggle, will be the luncheon speaker at the CAS Annual Meeting. For those that are going, it might be interesting to get his thoughts on this.

  11. Ed Bouchie says:

    I’ve made this comment in another forum, but the “vision of the level of understanding required for membership” approved by the Board sets (IMO) an appallingly low standard for Associates. Should not the CAS expect Associates – who are members and actuaries, mind you – to synthesize methodologies? Exercise complex judgment? Develop practical solutions to business problems not necessarily encountered before? Clearly communicate this to another party? If not – if these are expected only of Fellows – then it’s worth asking whether the Fellowship level exams are achieving this differential between Fellows and Associates, or if instead we’re attributing some kind of value judgment to the passing of a couple of exams that don’t test for these skills. I haven’t taken exams in a long time, so I can’t say firsthand how it currently is. But I’ve read the vision to current Fellowship candidates and they’ve laughed at the distinction.

    I bring it up here because I believe it’s relevant to the discussion. Statistical methods are rapidly becoming “basic” methodology for ratemaking – or at least risk classification – and reserving. Like others above, I believe they are inadequately addressed in the current basic education structure. It strikes me that if we are to add such material to the syllabus, it should be part of the requirements for Associateship, if we are to follow the vision as articulated by the Board.

    I don’t want to derail this thread into a discussion of the vision and its implications on the basic education structure. (It probably warrants a thread of its own.) My point is simply that current usage of statistical techniques in insurance demands that more material/evaluation be added to the basic education structure, and that the vision would suggest candidates for Associateship should demonstrate competence in it.

  12. Mike Larsen says:

    Reading the comments above, I believe there is a consensus that the CAS Syllabus has not kept pace with the increasingly common application of the statistical modeling tools that are now available in the workplace. We may no longer be thought of as the “go-to group” to build class plans (our brand is not what it once was), and that change in perception could affect other areas of practice (reserving) if we do not respond.

    How to respond is a difficult question. As Roosevelt noted, not all of the relevant skills needed to operate effectively in the statistical modeling arena, or the concepts one should understand/master, lend themselves to our current exam process. Then too, the list of skills and concepts could be fairly lengthy, leading to the question of whether we drop some items from the current syllabus or increase the total exam time.

    My suggestion is to:

    1) List the skills and concepts one should understand/master (the distinction matters in terms of the rigor one would apply in testing).
    2) Identify what skills and concepts can be tested in our traditional testing mode.
    3) Design some webinars or self-study courses that require some hands-on modeling using small, cleaned-up data sets to let the candidates work through examples that cannot realistically be worked with pencil and paper (maybe we adopt R as the official learning software for the CAS, or maybe we can strike a deal with vendors like SAS or Salford Systems). We need some way of testing that the candidates actually worked and understood the examples.
    4) I would not give out open-ended modeling assignments to test candidates’ ability to handle modeling problems, given the difficulty we would face in grading the results.

    I believe we can do quite a bit to improve our candidates’ skill set through our traditional testing routine. We could, at a minimum, improve the vocabulary of our candidates to allow them to communicate with the statisticians. I believe the percentage of candidates who would understand and recognize one of the terms Glenn used, “Fisher information matrix,” is fairly small. We could set up problems where the calculations are done and the output is furnished for a given modeling exercise, then ask the candidate to respond to questions on the results of the modeling exercise by picking out the relevant statistics and using those figures.

    • I agree with your layout of identifying what can be taught in exams versus what must be taught via online learning.

      Though, the Kaggle competition idea (above) would fit in an “open ended” modeling assignment. I’d hope to dispel any perceived difficulty in grading the results.

      Predictive models submitted would be scored with an error metric (e.g., SSE, deviance) on a “holdout” dataset, a dataset that was not used by candidates to train their submitted models. The candidates’ skills in manipulating the data and applying statistical techniques would come through the online learning you mentioned above.

      After sending a submission file with the predictions for the observations in the holdout dataset, the candidates would have to submit documentation on their submitted models. The exam committee could see the distribution of submitted models, see what sorts of sophistication were needed to obtain a given level of error metric, and set an appropriate passing cutoff. I don’t believe this is too different from how the CAS currently views exam results and sets the appropriate cutoff after the exam has closed.
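
      A minimal sketch of that scoring step, assuming claim-count predictions judged by Poisson deviance on a holdout set (the candidate names, data, and cutoff rule here are all hypothetical):

      ```python
      # Score each submitted prediction against a holdout set the candidates never saw,
      # using Poisson deviance as the error metric, then apply a cutoff chosen afterward.
      import numpy as np

      def poisson_deviance(y_true, y_pred):
          y_pred = np.clip(y_pred, 1e-9, None)                 # guard against log(0)
          term = np.where(y_true > 0, y_true * np.log(y_true / y_pred), 0.0)
          return 2.0 * np.sum(term - (y_true - y_pred))

      rng = np.random.default_rng(3)
      y_holdout = rng.poisson(0.1, size=10000)                 # hypothetical holdout target

      submissions = {                                          # hypothetical candidate predictions
          "candidate_A": np.full(10000, y_holdout.mean()),
          "candidate_B": np.full(10000, 0.05),
      }

      scores = {name: poisson_deviance(y_holdout, pred) for name, pred in submissions.items()}
      cutoff = min(scores.values()) * 1.05                     # committee would set this after review
      for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
          print(name, round(score, 1), "PASS" if score <= cutoff else "FAIL")
      ```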

  13. Hopefully this isn’t oversimplifying, but I see the only point of teaching inferential statistics on the exams as helping CAS members seek employment in data mining/predictive modeling in the insurance industry. This post is to support my claim (above) that the CAS is largely not a player in this field (at least at the personal lines insurers below). Here is what the public job postings are asking for in this area; note that none requires CAS credentials. If the CAS doesn’t want to reclaim this job function as an “actuarial” role, then the CAS might as well remove statistics from the exams.

    State Farm: Actuarial Statistician

    WHAT KNOWLEDGE AND SKILLS ARE NEEDED TO BE SUCCESSFUL IN THIS POSITION?
    -Demonstrates fundamental knowledge of multiple modeling techniques including but not limited to regression, neural networks, decision trees, and clustering
    -Provides advice and support to management and pricing units on statistical questions/issues
    -Coordinates significant statistical studies, multivariate analysis, and predictive modeling on actuarial issues
    -Communicates complex analytic methods and findings to management and executives not trained in advanced data mining/modeling and statistical techniques

    ADDITIONAL INFORMATION
    -This position is in the Research Unit of the P&C Actuarial Department. The successful candidate will work on development / evaluation of insurance score models.
    -Master’s degree in statistics preferred.
    -Proficiency in Microsoft Office suite required.
    -Proficiency in SAS preferred.

    Allstate: Predictive Modeler
    The role is accountable for:
    -Using best practices and moderate to advanced statistical/modeling techniques to develop rating and underwriting models, economic models, and other models as necessary in the areas of Private Passenger Auto, Homeowners, and other lines of business;
    -Applying moderate to advanced statistical concepts to assist in the development of price elasticity models for new and existing customers;
    -Developing models to estimate the economic impact of operational decisions;
    -Managing data and data requests to improve the accuracy of our data and decisions made from data analysis;
    -Working on data and problems across departments to drive improved business results through designing, building, and partnering to implement models;
    -Reviewing, evaluating, and making recommendations on appropriateness of statistical techniques

    BACKGROUND REQUIRED:
    The successful candidate will have at least 3 years of relevant experience including experience running modeling projects.

    Knowledge and expertise should include:
    -Functional pricing, statistical, rating plan design, and insurance regulation knowledge, with working knowledge of personal lines underwriting and product offerings;
    -Proven ability to do the modeling work with strong skills in statistical software such as SAS, SPSS, Matlab, R, CART, etc.;
    -Proven knowledge of advanced techniques such as GLM, GAM, Machine Learning algorithms, decision trees, etc.;
    -Ability to develop rating plans;

    EDUCATION REQUIRED:

    This position prefers a Master’s or PhD in a quantitative field such as statistics, mathematics, finance, or economics or an actuarial designation (ACAS/FCAS) with a bachelor’s degree in a quantitative field.
    Job Property-Casualty

    Geico: Modeling Analyst

    Your responsibilities may include:
    -Developing models to predict impacts of pricing and product changes
    -Retrieving data from various sources to conduct innovative business analyses for various companywide projects
    -Maintaining our acquisition and renewal systems

    Candidate Qualifications
    -Bachelor’s degree in statistics, mathematics, economics or a related field
    -Must have at least a 3.0 overall GPA
    -Understanding of business practices
    -Very good analytical and problem-solving skills
    -Effective written and verbal communication skills
    -Demonstrated leadership ability
    -Very good attention to detail and drive to provide high quality results
    -Proficiency in Microsoft Office (Word, Excel, and Access)
    -SAS, SQL, or other relevant programming experience a plus
    -High level of dependability
    -Permanent U.S. work authorization

    Travelers: Senior Statistician

    SUMMARY:
    – Provide a high level of data mining, research and predictive modeling to achieve company objectives for profitability and growth.
    – Apply advanced statistical concepts and data analysis methods in innovative research and modeling.
    – Independently works on projects and may direct 1-2 analysts on project work.
    – Assists in developing new approaches to business problems using advanced statistical techniques.
    – Assists in researching new ways of using statistical methodologies.

    PRIMARY DUTIES:
    – Conduct research and predictive modeling.
    – Interpret data and identify correlations using both univariate and multivariate analysis.
    – Perform programming in SAS, SQL, Excel, Access or company software to perform modeling activities.
    – Investigate and assist in data analysis.
    – Communicate results and their implications to analytical staff and business partners.
    – Assist in summarizing results.
    – Engage in independent research.
    – Bring new ideas on statistical techniques and their application to the business.

    WORK EXPERIENCE:
    – Generally 2 to 5 years of research and modeling experience
    – Demonstrated ability in statistical modeling and data mining techniques
    – Experience of working with large data sets.
    – Basic project management skills.
    – Knowledge of insurance products and operations a plus

    CERTIFICATES/DEGREES:
    Advanced degree in statistics, mathematics, operations research, or other fields of quantitative research. PhD is a plus.

    COMMUNICATION SKILLS:
    Effective communication skills.

    COMPUTER SKILLS:
    Excellent understanding of computers, programming, SAS, SQL, Excel spreadsheets, Word and database software.
    R or S-Plus programming experience preferred.

    Nationwide: Specialist, Pricing

    DUTIES AND RESPONSIBILITIES:
    1. Retrieves and manipulates data; responsible for suitability and accuracy of data.
    2. Proficient in the use of standard actuarial methodologies and continues to develop knowledge of more advanced actuarial methodologies. Evaluates alternative methods for use.
    3. Conducts analyses that adhere to actuarial standards of practice: selects appropriate data sources and methods, makes assumptions, recognizes considerations and develops recommendations.
    4. Documents assumptions, methods, sources, considerations; organizes documents.

    11. Applies appropriate data mining techniques to discover new relationships in the data. Evaluates and documents new techniques, methodologies and software to support research projects. Under appropriate supervision, develops project plans to support research projects. May act as the lead for selected research projects. Proficient with statistical modeling software. Has practical experience in predictive modeling.
    12. Participates, as assigned, in interdisciplinary teams.
    13. Performs other related duties as assigned.

    MINIMUM JOB REQUIREMENTS:

    Experience:
    Minimum of four years’ experience in a business unit (e.g., pricing, actuarial, product) or combined experience and educational equivalent.
    Education:
    Bachelors degree (preferred) in actuarial science, business, finance, mathematics, statistics, economics or related field with a strong quantitative orientation.
    Knowledge:
    Mastery of standard actuarial methodologies and insurance concepts, terminology and products. Displays command of legal and regulatory requirements in area of assignment as well as pricing procedures and methodologies.
    Skills and Competencies:
    Demonstrated work product accuracy. Effective verbal/written communication skills; Demonstrated ability to comprehend actuarial concepts and other advanced statistical concepts; Demonstrated ability to think logically and perform actuarial methods in a timely manner. Ability to effectively operate personal computer and related spreadsheets and database software.

    Liberty Mutual: Lead Research Analyst

    Responsibilities:
    •Manages all aspects of complex predictive models (both pricing and non-pricing), including developing project plan; determines proper implementation strategy; presents results of analysis, as well as some technical studies, to a broad audience such as the Claims, Underwriting, Reinsurance, Loss Prevention, or Actuarial departments
    •Researches and recommends alternate structures, factors, and techniques for current predictive models and determines impact on department applications
    •Researches and implements new statistical techniques for building/analyzing predictive models
    •Develops, builds and trains team members on complex modeling tools using SAS; advises team on proper analysis of output
    •Performs quality control on data preparation steps, including merging, summarization, and variable creation
    •Works with the team to develop specifications for data requests; may work directly with outside groups to coordinate requests
    •Designs specifications and builds complex processes for reporting of predictive modeling results
    •Trains junior modeling staff, Actuaries, and Actuarial Students on multivariate statistical methods and tools

    Qualifications:
    •Bachelor’s degree in Mathematics, Actuarial Science, Statistics, Economics, or related field; Master’s strongly preferred
    •6-8 years’ experience building statistical models required, insurance or related industry experience required
    •Commercial Lines experience preferred
    •Experience Using SAS is required
    •In-depth understanding of statistical methods
    •Capability to think quickly and creatively with a demonstrated ability to produce innovative ideas
    •Must be organized and possess fully effective communication skills

    • I think this post says it all. We have lost ground in our core areas, and have a lot of work to do to get it back.

      • Rajesh Sahasrabuddhe says:

        There is a practical issue that we are missing here. Even if we change the education for future Fellows to have more emphasis on statistics, those Fellows will (mostly) work for the “old guard” of Fellows that largely have neither the skill set nor the inclination to do advanced statistical modeling. (I am obviously generalizing here.)

        The new Fellows will either become frustrated at not being able to use their education or just think that education and practice are not connected. As a result, progress is slowed.

        Roosevelt’s point is spot on. Consider this: I estimate that 90% of reserving work (maybe more) is performed using algorithms (not models) that were developed prior to 1970. What other profession has not evolved its approach to its core competency in 40 years?
        I attribute this to the practical issue I describe at the beginning of this comment.

        Sorry, but I have no solutions for this, though tracks *may* result in a market mechanism to move new Fellows around that old guard.

  14. Mike Larsen says:

    I have responses on a couple of items from above:

    Testing Modeling Skills

    I have some practical experience both as a grader for CAS non-multiple-choice exams and as a peer reviewer for class plans built using predictive models. I view grading an open-ended modeling test of skills as similar to a peer review for a class plan. Going through a few hundred essay question responses for a couple of questions and developing a fair and consistent score for the candidates was no small task, even though the questions were meant to be structured so there was a reasonably clear-cut path to the solution (people were inventive in the different routes taken). I believe the logistics of fairly evaluating an open-ended predictive modeling assignment would be somewhat intimidating for our exam graders. While that approach has some appeal, I am not sure we could execute.

    Predictive Modeling and Reserving

    The job descriptions above were for predictive modeling in class plan development. We have seen a number of articles come out on how to better model reserve estimates in Variance, but I suspect the number of reserving actuaries who are moving to implement those techniques in practice today is fairly small. I believe that we could eventually see job descriptions for reserving analysts that are similar in nature to the ones listed for class plan development positions unless we improve our skill set. Given a choice between employing actuaries to select link ratios or employing a statistician to implement a Bayesian MCMC model for reserving, employers could well choose to hire more statisticians.
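
    For context on what “selecting link ratios” amounts to in its simplest form, here is a minimal volume-weighted chain-ladder sketch on an invented cumulative paid triangle (the numbers are made up; a Bayesian MCMC reserving model would replace these point estimates with full distributions):

    ```python
    # Volume-weighted link ratios (chain ladder) on a tiny cumulative paid triangle.
    # The triangle values are invented for illustration.
    import numpy as np

    triangle = np.array([
        [100., 150., 165., 170.],
        [110., 168., 185., np.nan],
        [120., 175., np.nan, np.nan],
        [130., np.nan, np.nan, np.nan],
    ])

    n_dev = triangle.shape[1]
    factors = []
    for j in range(n_dev - 1):
        rows = ~np.isnan(triangle[:, j + 1])                 # accident years observed at the next age
        factors.append(triangle[rows, j + 1].sum() / triangle[rows, j].sum())

    # Square the triangle: carry each row's latest diagonal forward with the factors.
    completed = triangle.copy()
    for i in range(triangle.shape[0]):
        for j in range(n_dev - 1):
            if np.isnan(completed[i, j + 1]):
                completed[i, j + 1] = completed[i, j] * factors[j]

    print("link ratios:", np.round(factors, 3))
    print("estimated ultimates:", np.round(completed[:, -1], 1))
    ```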

    In general, hiring more statisticians is a good thing, but I would like to see our members viewed as valuable candidates for predictive modeling positions within insurance companies.

    • I appreciate your input here, seeing as you’ve participated in the grading process. Here are some additional thoughts on the grading of a predictive modeling competition. For the Kaggle competitions, for instance, a grading metric is set forth before the competition begins. For example, deviance, Gini coefficients, etc., have all been used to rank competing models. This metric is scored on a portion of data that was not released to the public, a holdout sample, to penalize over-fitting. Candidates’ models are scored on a leaderboard using a portion of this holdout sample. The remainder of the holdout sample is used to determine final rankings.

      I figured students would submit documentation on the models submitted. Judges would then choose a cutoff on the leaderboard as the pass mark, based upon the types of modeling expertise needed to reach a certain score. The documentation might also be scored, if the exam committee also wanted to test writing/presentation skills.

      The short of the matter is that the model scoring on the holdout sample would do most of the grading for you.
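
      A small sketch of that public/private split, assuming a fixed random partition of the holdout rows and SSE as the metric (all names and numbers hypothetical):

      ```python
      # Split the holdout set into a "public" piece (drives the live leaderboard)
      # and a "private" piece (used only for the final ranking), so candidates
      # cannot over-fit to the scores they see during the competition.
      import numpy as np

      rng = np.random.default_rng(4)
      n_holdout = 10000
      public_mask = rng.random(n_holdout) < 0.3                # ~30% public, ~70% private

      def sse(y_true, y_pred):
          return float(((y_true - y_pred) ** 2).sum())

      y_holdout = rng.poisson(0.1, size=n_holdout)             # hypothetical true target
      y_pred = np.full(n_holdout, y_holdout.mean())            # a candidate's submission

      leaderboard_score = sse(y_holdout[public_mask], y_pred[public_mask])   # shown during the contest
      final_score = sse(y_holdout[~public_mask], y_pred[~public_mask])       # revealed at the close
      print(leaderboard_score, final_score)
      ```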

  15. Jon Evans says:

    Modern testing centers, as offered by Prometric and already implemented for CBT of the lower level exams, administer tests on computers. There is a myth going around that it is impossible to include statistical software packages for students to interactively analyze data on traditional timed and proctored tests.

    If the CAS wants to include statistical packages and data analysis on exams, all it has to do is let the Prometric people know.

  16. Pat Teufel says:

    Keep the comments coming! This is a great discussion — one that will almost certainly help the Board as it deliberates.

  17. Eric Mann says:

    I apologize if this is going in a different direction. But what kind of career goals does the typical actuary have? And will heavier statistical skills facilitate those goals, or pigeonhole actuaries into roles that many don’t want to be in?

    I’ve worked at two of those companies that were listed above with those predictive modeling job openings and those roles didn’t seem to have a lot of upward mobility. Many of the other actuaries I have worked with aspire to roles in product management and other upper management areas. In my very limited professional experience, not many people get into those roles by building a career in the predictive modeling department. Again, in my very limited experience the actuary was typically directing the modeling department, providing a vision of what the final product would look like, directing the project work, and facilitating implementation.

    • Upper management is a place plenty of people might move to as their careers evolve. I was thinking the more worker-bee “actuarial” roles involved pricing, reserving, and predictive modeling with insurance data. In that sense, it is a shame to see employers not looking to CAS credentials in the predictive modeling space.

  18. Kartik Patel says:

    Design a course at the fellowship level (let us say Exam 10) in two parts.

    The first part (10-A) would contain the portion that can be studied and tested fairly objectively.
    This can be studied in self-study mode with complete educational material (with or without e-learning as a supporting tool) developed by the CAS (the way the CAS has developed materials for Exam 5). Then have an exam in whatever format you see fit.

    Those who clear 10-A move to the second part (10-B), which could be a fast-track, rigorous classroom program without any exam. At the end of this program, Exam 10 is considered cleared. This way we avoid attempting to “somehow” test something that actually cannot be tested, and we can instead focus on imparting knowledge and skills using a face-to-face educational method assisted by selected software tools like SAS, R, etc.

    Because the candidate has already cleared the Associate level, has cleared Exam 10-A, and has gone through a rigorous CAS-conducted classroom course, he or she can be assumed to have acquired enough knowledge and skills to appreciate the course content (which should be the goal of the course).

    Or, if you must somehow test the candidate, hand out a set of assignments to be completed in a fixed time frame (let us say one week) after the end of the course. Then use those assignments only to weed out those candidates who have hopelessly failed to demonstrate the required knowledge and skills (so the passing ratio is expected to be high; this will put less pressure on graders and standardize the grading process, as they will just be searching for big blunders in order to give a grade of “FAILED,” and those who have not “FAILED” are “PASSED”). The “FAILED” candidates would then attend the classroom course again and resubmit the assignments.

  19. The Bureau of Labor Statistics (BLS) has an occupational handbook that describes several occupations. Read this piece of what they say actuaries do; mourn the inconsistency with the job listings posted above; then resolve that the CAS needs a big change in its statistical training in order to regain presence in what the BLS calls actuarial work.

    *********************************************************
    http://www.bls.gov/oco/ocos041.htm

    “They use sophisticated modeling techniques to forecast the likelihood of certain events occurring, and the impact these events will have on claims and potential losses for the company. For example, property and casualty actuaries calculate the expected number of claims resulting from automobile accidents, which varies depending on the insured person’s age, sex, driving history, type of car, and other factors.”
    *********************************************************
