The increasing use of artificial intelligence (AI) and machine
learning (ML) in the insurance industry in general, and in actuarial
work in particular, presents both opportunities and risks.
Acceptance of complex methods requires, among other things, a
degree of transparency and explainability of the underlying models
and the decisions based on them.
Welcome to this four-part training. In the first part, we will
explore the concept of explainable artificial intelligence (XAI)
through a qualitative discussion. We will not only characterize
both model complexity and explainability, but also examine when a
model can be considered sufficiently explained. Actuarial diligence
will be addressed as well, using counterfactual explanations as an
example XAI method. Additionally, we will provide an illustrative and
comprehensive overview of explainability techniques, along with a
compilation of useful and practical notebooks.
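To give a first flavour of the counterfactual idea touched on in this part, the following minimal sketch searches for the smallest single-feature change that flips a model's decision. The synthetic data and the logistic regression model are illustrative placeholders, not material from the training; dedicated libraries such as DiCE or alibi offer far more general counterfactual search.

```python
# Minimal sketch of a counterfactual explanation: find the smallest
# change to a single feature that flips the model's decision.
# Data and model are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

x0 = X[0]
original = model.predict(x0.reshape(1, -1))[0]

# Brute-force search: nudge each feature in turn and keep the
# smallest nudge that changes the predicted class.
best = None
for j in range(X.shape[1]):
    for delta in np.linspace(-3.0, 3.0, 601):
        x = x0.copy()
        x[j] += delta
        flipped = model.predict(x.reshape(1, -1))[0] != original
        if flipped and (best is None or abs(delta) < best[1]):
            best = (j, abs(delta))

if best is not None:
    print(f"Class {original} flips when feature {best[0]} "
          f"changes by {best[1]:.2f}")
```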
The second block will introduce participants to variable
importance methods. These methods aim to answer the question:
"Which inputs are the most important for my model?" We
will provide a general overview of variable importance methods and
introduce some selected methods in depth. In addition to providing
examples and use cases, we will cover enough of the theory
underlying the methods to ensure that users have a good
understanding of their applicability and limitations. Throughout,
we will also discuss practical aspects of actuarial diligence such
as how to interpret and communicate results from these
methods.
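As a concrete preview of this block, the sketch below computes permutation importance with scikit-learn: each feature is shuffled in turn, and the resulting drop in test-set score measures that feature's importance. The synthetic data and the gradient boosting model are placeholders chosen purely for illustration.

```python
# Minimal sketch: permutation importance with scikit-learn.
# The synthetic regression data and the gradient boosting model
# are illustrative placeholders, not the training's actual use case.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=5, n_informative=2,
                       random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the
# test-set score degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: {mean:.3f} +/- {std:.3f}")
```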
In the third part, we will focus on further standard XAI
methods. Here, we explain how the model-agnostic methods
"Individual Conditional Expectation", "Partial Dependence Plot" and
"Local Interpretable Model-Agnostic Explanations" work and refer to
well-known Python packages and several Jupyter Notebooks.
Additionally, we examine the model-specific, tree-based feature
importance provided by the Python package "scikit-learn". Throughout this
part, we also discuss aspects of actuarial diligence and
limitations of the considered methods.
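For orientation, the following minimal sketch shows two of the methods named above in scikit-learn: ICE and PDP curves via PartialDependenceDisplay, and the impurity-based feature importance of a tree ensemble. LIME lives in the separate Python package "lime" and is not shown here; data and model are again illustrative placeholders.

```python
# Minimal sketch: ICE and PDP with scikit-learn, plus the tree-based
# feature importance mentioned above. Data and model are placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# kind="both" overlays the individual ICE curves with their
# PDP average, here for the first two features.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1],
                                        kind="both")
plt.show()

# Model-specific, impurity-based feature importance of the forest.
print(model.feature_importances_)
```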
The last part of the online training provides an interactive,
hands-on experience with explainable AI using a Jupyter notebook
designed around an actuarial use case. Participants will be guided
through a comprehensive machine learning workflow before exploring
the implementation of various XAI techniques. In analyzing several
XAI methods, we will study their main ideas at a conceptual level
and their concrete implementations, apply each to the given machine
learning problem, and discuss their advantages and disadvantages.
The interactive segment concludes as participants are given an
additional case study to tackle, applying the XAI methods they have
learned to deepen their understanding.
By the end of the web session, participants will leave with a
toolkit of explainability techniques, an in-depth understanding of
model interpretability, and the ability to use XAI approaches in
practical actuarial applications.
Participants will also understand the mathematical principles behind
key XAI techniques, evaluate the strengths and limitations of XAI
methods, run a machine learning workflow that incorporates XAI
techniques, and analyze and interpret results in the context of
actuarial cases.
An early-bird discount is available for bookings made by 15
April 2025.