Abstracts

Invited talks

Volatility dynamics and memory: from the Quintic model to Signatures
Eduardo Abi Jaber (Institut Polytechnique de Paris, France)
Joint work with: Paul Gassiat, Louis-Amand Gérard, Yuxing Huang, Camille Illand, Shaun Li, Xuyang Lin and Dimitri Sotnikov
We introduce the Quintic Ornstein-Uhlenbeck model, designed for the joint calibration of SPX and VIX options. The model is also capable of aligning with the term structure of the skew-stickiness ratio. At the same time, it remains mathematically tractable, enabling fast pricing of SPX options via Fourier techniques and VIX options via simple integration against a Gaussian density.
We then turn to a broader class of stochastic volatility models in which volatility evolves as a (possibly infinite) linear combination of the time-extended signature of Brownian motion. We show that signature volatility models can be seen as a kind of universal machine for modeling path-dependent volatility. Their structure is rich enough to encompass a wide range of existing models, including the Bergomi model and its path-dependent extensions. Crucially, they retain enough algebraic structure to admit Fourier-based pricing and hedging. Finally, we provide a characterization of the martingale property of the price process in terms of the signature truncation order.
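For orientation, a schematic form of such a signature volatility specification (notation ours, not necessarily the authors' exact parametrization) is
\[
\sigma_t \;=\; \sum_{|w|\le N} \ell_w \,\big\langle e_w, \widehat{\mathbb{W}}_{0,t} \big\rangle,
\qquad \widehat{W}_t = (t, W_t),
\]
where $\widehat{\mathbb{W}}_{0,t}$ denotes the signature of the time-extended Brownian path $\widehat{W}$ up to time $t$, the $\ell_w$ are scalar coefficients indexed by words $w$, and $N$ is the truncation order (possibly infinite) appearing in the martingale characterization.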
Mortality under stress: Modelling the impact of pandemic and climate shocks in life insurance
Karim Barigou (UCLouvain, Belgium)
Joint work with: Jens Robben and Torsten Kleinow
In this talk, I present two complementary modeling frameworks to understand the impact of climate variability and infectious disease outbreaks on mortality, using granular weekly mortality data from metropolitan France. The first framework employs a three-state regime-switching model to capture deviations from seasonal mortality trends due to temperature extremes and respiratory epidemics, with transition probabilities dynamically driven by covariates such as lagged temperature and influenza incidence. The second framework extends the classical Lee-Carter model by incorporating a penalized distributed lag non-linear model (DLNM) to quantify both immediate and delayed effects of heat waves, cold spells, and influenza outbreaks, accounting for overdispersion and spatial heterogeneity through region-specific mortality indices. These models enhance our ability to predict mortality trends under stress, improving public health preparedness and actuarial forecasting.
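As a purely illustrative sketch of how covariate-driven transition probabilities can be parameterized in a three-state regime-switching model, one may use a multinomial-logit (softmax) form; the parameterization, covariates, and coefficients below are our own hypothetical choices, not the authors' specification.

```python
import numpy as np

def transition_matrix(lagged_temp, flu_incidence, beta):
    """Softmax-parameterized 3x3 transition matrix driven by covariates.

    beta[i, j, :] are (hypothetical) coefficients for the transition from state i
    to state j; states: 0 = baseline seasonality, 1 = temperature excess, 2 = epidemic excess.
    """
    x = np.array([1.0, lagged_temp, flu_incidence])       # intercept + covariates
    scores = np.einsum("ijk,k->ij", beta, x)               # linear predictors per (i, j)
    exps = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exps / exps.sum(axis=1, keepdims=True)          # each row sums to one

rng = np.random.default_rng(0)
beta = rng.normal(scale=0.5, size=(3, 3, 3))               # toy coefficients
P = transition_matrix(lagged_temp=2.0, flu_incidence=1.5, beta=beta)
print(P.round(3), P.sum(axis=1))
```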
Infinite-dimensional stochastic volatility models with applications to energy forward markets
Asma Khedher (University of Amsterdam, Netherlands)
Joint work with: Sonja Cox, Christa Cuchiero, Jian He, Sven Karbach
We develop a unified framework for infinite-dimensional stochastic volatility modeling in which the volatility is represented by operator-valued processes driven either by Lévy subordinators or by finite-rank infinite-dimensional Wishart dynamics, and apply this framework to option pricing in forward markets. Our approach builds on Hilbert space–valued Ornstein–Uhlenbeck–type processes whose instantaneous covariance evolves in the cone of positive self-adjoint Hilbert–Schmidt or trace-class operators. We incorporate both pure-jump covariance structures—extending infinite-dimensional Barndorff–Nielsen–Shephard models with state-dependent jump intensity—and infinite-dimensional Wishart processes, for which we establish necessary and sufficient conditions for existence, analyze their finite-rank yet infinite-dimensional behavior, and derive exponentially affine Fourier–Laplace transforms via operator-valued Riccati equations. These joint affine structures guarantee analytic tractability, including explicit expressions for characteristic functionals and moment conditions. Within the Heath–Jarrow–Morton–Musiela (HJMM) framework, we apply these covariance models to the evolution of forward price curves and derive semi-closed Fourier-based pricing formulas for European options written on such curves. The resulting methodology enables flexible and tractable modeling of volatility surfaces with infinitely many risk factors, while capturing maturity-specific and term-structure effects essential in financial markets.
A decomposition framework for managing hybrid insurance liabilities
Daniel Linders (KULeuven, Belgium)
Joint work with: Biwen Ling, Jan Dhaene, Tim Boonen
In this presentation, we propose a four-step decomposition of hybrid liabilities into a hedgeable part, an idiosyncratic part, a financial systematic part, and an actuarial systematic part. We generalize existing approaches for decomposing hybrid liabilities by incorporating dependence between financial and actuarial markets and allowing heterogeneity in policyholder-specific risks. Our model provides a market- and model-consistent valuation framework.
Fast Bayesian calibration of option pricing models based on sequential Monte Carlo methods and deep learning
Eva Lütkebohmert (University of Freiburg, Germany)
Joint work with: Riccardo Brignone, Luca Gonzato and Sven Knaust
Model calibration is a challenging yet fundamental task in financial engineering. Using sequential Monte Carlo methods, we reformulate the non-convex optimization problem as a Bayesian estimation task. This allows us to compute any statistic of the estimated parameters, mitigating the strong dependence on starting points and avoiding the troublesome local minima that plague standard calibration methods. To accelerate computation, we incorporate Markov Chain Monte Carlo methods with delayed acceptance and a neural network-based option pricing approach. When applied to S&P 500 index options, our Bayesian algorithms significantly outperform the standard approach in terms of runtime, accuracy, and statistical fit.
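A minimal sketch of the delayed-acceptance idea, under assumptions of our own: a cheap surrogate log-posterior (for example, one built on a neural-network pricer) screens proposals before the expensive exact posterior is evaluated. The function names, proposal scale, and toy targets below are placeholders, not the authors' implementation.

```python
import numpy as np

def delayed_acceptance_mh(theta0, cheap_log_post, exact_log_post, n_iter=1000, step=0.1, seed=1):
    """Delayed-acceptance Metropolis-Hastings: screen proposals with a cheap
    surrogate first, and only evaluate the expensive posterior on survivors."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp_cheap, lp_exact = cheap_log_post(theta), exact_log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_cheap_prop = cheap_log_post(prop)
        # Stage 1: accept/reject using the surrogate only (symmetric proposal).
        if np.log(rng.uniform()) < lp_cheap_prop - lp_cheap:
            lp_exact_prop = exact_log_post(prop)
            # Stage 2: correct with the ratio of exact to surrogate densities.
            if np.log(rng.uniform()) < (lp_exact_prop - lp_exact) - (lp_cheap_prop - lp_cheap):
                theta, lp_cheap, lp_exact = prop, lp_cheap_prop, lp_exact_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy usage: the surrogate is a slightly perturbed version of the exact log-posterior.
exact = lambda t: -0.5 * np.sum(t**2)
cheap = lambda t: -0.5 * np.sum((1.05 * t)**2)
samples = delayed_acceptance_mh(np.zeros(2), cheap, exact)
```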
Preference robust distortion risk measures
Silvana Pesenti (University of Toronto, Canada)
Joint work with: Carole Bernard
We introduce a framework for preference-robust decision making when preferences over risk are modeled through generalized distortion risk measures. Unlike distributional robustness, our approach addresses ambiguity in the risk functional itself. We construct ambiguity sets on distortion (weight) functions using the Wasserstein distance and Bregman–Wasserstein divergences, and derive closed-form expressions for the worst- and best-case distortion risk measures. We further extend the framework to Rank-Dependent Expected Utility, yielding preference-robust behavioral models.
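For reference, a distortion risk measure with distortion function $g$ (increasing, $g(0)=0$, $g(1)=1$) can be written as
\[
\rho_g(X) \;=\; \int_0^{\infty} g\big(S_X(x)\big)\,\mathrm{d}x \;-\; \int_{-\infty}^{0} \Big[1 - g\big(S_X(x)\big)\Big]\,\mathrm{d}x,
\qquad S_X(x) = \mathbb{P}(X > x),
\]
and, in our paraphrase of the setting above, the ambiguity sets are balls of such distortion functions around a reference $g_0$, measured by the Wasserstein distance or Bregman-Wasserstein divergences.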
Risk evaluation under dependence uncertainty
Ludger Rüschendorf (University of Freiburg, Germany)
This talk reviews developments on risk bounds under dependence uncertainty over the last 15 to 20 years. The determination of worst-case portfolio vectors leads to generalizations of the notion of comonotonicity and to some new related nonlinear mass-transportation problems. For several classes of structural and dependence-type information on the underlying models, improvements of the unconstrained risk bounds of considerable practical relevance have been determined. We present some main directions and tools of these developments, such as dual representations of the involved optimization problems, the construction of adequate algorithms, and the development of stochastic ordering results for their solution. We also indicate some real applications to risks in finance and insurance.

Contributed talks

Differential measurement of proxy discrimination
Zahra Abootalebi Naeini (Bayes Business School, City St George’s, University of London, United Kingdom)
Joint work with: Andreas Tsanakas, Rui Zhu
We present a derivatives-based method for detecting proxy discrimination in insurance pricing. Starting from a fitted pricing model, we examine how small, user-specified changes in inputs affect predicted premiums. In the absence of direct discrimination, we distinguish the effects of permitted covariates that act directly on the response from those that act implicitly via protected attributes, thus giving rise to proxy discrimination. We apply this method to an insurance claims dataset and show evidence of (weak) discriminatory impacts of policyholders' gender. Since many features are categorical, we map them into a continuous space using Multiple Correspondence Analysis (MCA) to make the derivatives-based sensitivity analysis feasible. The suggested approach supports assessment of the materiality of proxy discrimination at both the individual and portfolio levels and can be used with GLMs or more complex models without retraining, as long as the response surfaces are differentiable.
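A minimal sketch of the derivatives-based sensitivity step, assuming a fitted premium surface `predict` acting on continuous (e.g., MCA) coordinates; the function, coordinates, and perturbation size below are hypothetical placeholders.

```python
import numpy as np

def premium_sensitivity(predict, x, j, h=1e-4):
    """Central finite-difference estimate of d(premium)/d(x_j) at point x,
    where x collects the continuous (e.g., MCA) coordinates of one policyholder."""
    e = np.zeros_like(x, dtype=float)
    e[j] = h
    return (predict(x + e) - predict(x - e)) / (2.0 * h)

# Toy usage with a placeholder pricing surface.
predict = lambda x: np.exp(0.3 * x[0] - 0.1 * x[1])   # hypothetical fitted model
x0 = np.array([0.5, -0.2])
print([premium_sensitivity(predict, x0, j) for j in range(len(x0))])
```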
On expectiles and almost stochastic dominance
Corrado De Vecchi (University of Verona, Italy)
Joint work with: Matthias Scherer
We investigate the relationship between almost first order stochastic dominance (AFSD), the statistical functionals called expectiles, and the corresponding expectile-based monetary risk measure. From a methodological point of view, we show that expectiles provide a ready-to-be-used criterion for the comparison between a deterministic and a random payoff in the sense of AFSD. Furthermore, we obtain a consistency result for expectile-based monetary risk measures with respect to the AFSD ordering. Finally, we discuss applications to robustify some utility-based risk management procedures when there is uncertainty on the utility function to be considered. This includes preference robust portfolio optimization problems and worst-case shortfall risk measures.
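For reference, the expectile at level $\tau\in(0,1)$ is the standard asymmetric least-squares functional
\[
e_\tau(X) \;=\; \operatorname*{arg\,min}_{m\in\mathbb{R}} \; \mathbb{E}\!\left[\tau\,\big((X-m)^+\big)^2 + (1-\tau)\,\big((m-X)^+\big)^2\right],
\]
which reduces to the mean for $\tau = 1/2$ and induces a coherent monetary risk measure for $\tau \le 1/2$ applied to losses (sign conventions vary across the literature).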
The Implications of Side Bequest Motives on the Life Insurance Decisions of Retired Couples
Martijn de Werd (University of Groningen, The Netherlands)
Joint work with: Bertrand Achou (University of Groningen, Netspar) and Ki Wai Chau (University of Groningen, Netspar)
Recent empirical evidence shows that the death of a first spouse in retired couples leads to a sharp decline in wealth, reflecting not only reduced income but also additional transfers to heirs outside the couple. Such `side' bequests have significant financial consequences for a surviving spouse, but the existing literature on financial decision-making does not account for them. To fill this gap, we build a model for optimal life insurance, consumption and portfolio decisions of a retired couple, with side bequest motives. Using analytical results and numerical simulations, we show that side bequests substantially alter couples' optimal life insurance and consumption decisions. In particular, we show that life insurance is an important tool that allows couples to balance their side bequest motive with the utility of a surviving spouse. Our model, therefore, highlights the importance of accounting for side bequests when making these decisions.
Exploratory Optimal Reinsurance under the Mean-Variance Criterion
Austin Riis-Due (University of Waterloo, Canada)
Joint work with: Bin Li, David Landriault
This paper proposes a Reinforcement Learning (RL) approach to the optimal reinsurance problem when the insurer faces uncertainty about the claim frequency or severity distributions. To this end, we first formulate an exploratory version of the problem as a relaxed stochastic control problem. Within a broad class of parametric retention functions and general risk loading functions, we derive the closed-form optimal policy under the continuous-time mean-variance criterion. This is achieved through a formal verification theorem and the construction of classical solutions to a system of exploratory extended Hamilton-Jacobi-Bellman (EEHJB) equations. We then establish a policy iteration theorem, showing that, starting from any time- and state-homogeneous policy, policy iteration converges to the derived optimal policy. Next, we develop a martingale orthogonality theorem, which serves as the foundation of our RL algorithm. Finally, we demonstrate the convergence of the algorithm through numerical studies.
Model Ambiguity in Risk Sharing with Monotone Mean-Variance
Emma Kroell (University of Copenhagen, Denmark)
Joint work with: Sebastian Jaimungal and Silvana M. Pesenti
We consider the problem of an agent who faces losses over a finite time horizon and may choose to share some of these losses with a counterparty. The agent is uncertain about the true loss distribution and has multiple models for the losses. Their goal is to optimize a mean-variance type criterion with model ambiguity through risk sharing. We construct such a criterion by adapting the monotone mean-variance preferences of Maccheroni et al. (2009) to the multiple models setting and exploit a dual representation to mitigate time-consistency issues. Assuming a Cramér-Lundberg loss model, we fully characterize the optimal risk sharing contract and the agent's wealth process under the optimal strategy. Furthermore, we prove that the strategy we obtain is admissible and that the value function satisfies the appropriate verification conditions. Finally, we apply the optimal strategy to an insurance setting using data from a Spanish automobile insurance portfolio, where we obtain differing models using cross-validation and provide numerical illustrations of the results.
The distribution of out-of-sample returns of estimated optimal portfolios
Nathan Lassance (UCLouvain, Belgium)
Joint work with: Raymond Kan, Xiaolu Wang
We derive a stochastic representation for the joint distribution of the out-of-sample mean and variance of a large class of portfolio rules that combines the sample optimal mean-variance portfolio with the sample global minimum-variance portfolio. Our results allow the combining coefficients to be either constant or estimated from historical data. Such a representation enables us to obtain the distributions and moments, asymptotically and in finite samples, of different out-of-sample mean-variance portfolio performance measures. These results are useful for a variety of applications, and we develop optimal double shrinkage portfolio rules as an illustration. Our paper provides a comprehensive toolkit that researchers can use to evaluate the out-of-sample performance of existing portfolio rules and develop better portfolio rules in the future.
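A stylized sketch of the class of combination rules under study, under simplifying assumptions of our own (fixed combining coefficients c1, c2 and risk aversion gamma; the paper also treats coefficients estimated from historical data):

```python
import numpy as np

def combination_portfolio(returns, gamma=3.0, c1=0.5, c2=0.5):
    """Combine the sample mean-variance portfolio with the sample
    global minimum-variance (GMV) portfolio using fixed coefficients."""
    mu = returns.mean(axis=0)
    sigma_inv = np.linalg.inv(np.cov(returns, rowvar=False))
    ones = np.ones(len(mu))
    w_mv = sigma_inv @ mu / gamma                          # sample mean-variance weights
    w_gmv = sigma_inv @ ones / (ones @ sigma_inv @ ones)   # sample GMV weights
    return c1 * w_mv + c2 * w_gmv

# Toy in-sample estimation and out-of-sample evaluation on simulated returns.
rng = np.random.default_rng(0)
in_sample = rng.normal(0.001, 0.02, size=(250, 5))
w = combination_portfolio(in_sample)
out_of_sample = rng.normal(0.001, 0.02, size=(250, 5))
print((out_of_sample @ w).mean(), (out_of_sample @ w).var())
```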
From Theory to Practice: Optimal Asset Allocations for Endowment Funds Using Dynamic Programming and Reinforcement Learning
Andrea Buffoli (City St George's, University of London, Bayes Business School, UK)
Joint work with: Iqbal Owadally, Russell Gerrard, Francesco Menoncin
Endowment funds are specialised investors that utilise resources collected through donations from philanthropists, alumni, or other organisations for a variety of purposes. These include supporting educational, cultural, or social initiatives, promoting research and development, or aiding charitable causes. The funds play a crucial role in ensuring the financial sustainability of non-profit institutions, often serving as a key mechanism for advancing their missions by carefully managing and growing the capital through long-term investments.
In this paper, we formulate a target-oriented optimisation problem for the endowment fund in the presence of stochastic donations and aim to derive closed-form solutions. This approach seeks to ensure not only the sustainability of the fund but also its ability to meet its long-term disbursement goals under uncertainty.
Subsequently, we relax some of the assumptions underlying the model to make it more realistic and aligned with the complex, dynamic nature of financial markets and donation patterns. We then compare the model's performance with that of an agent employing deep reinforcement learning, which tackles the same problem under less restrictive assumptions, allowing it to capture a broader range of dynamics and uncertainties inherent in managing endowment funds. This comparison provides valuable insights into the strengths and limitations of both approaches, with implications for future research and practical fund management strategies.
Optimal Investment for Retirement with Multiplicative External Habit Formation
Luke Servat (Maastricht University, Netherlands)
Joint work with: Antoon Pelsser
Differences in pensions between generations and cohorts have become a major concern for both retirees and pension funds, due to the ongoing transition to defined contribution plans. This paper therefore investigates the optimal investment strategy for a cohort that evaluates its pension relative to an exponentially weighted moving average of the pensions of past cohorts. More specifically, we find a closed-form solution to an optimal terminal wealth problem with external multiplicative habit formation. The resulting life-cycle strategy differs substantially from the classic CRRA solution and can be an effective tool in preventing large differences between generations.

Poster sessions

Gambler's Ruin Problem in a Markov-modulated Jump-diffusion Risk Model with Hyperexponential Jumps
Ruizhe Bu (Beijing Normal-Hong Kong Baptist University, China)
Joint work with: Zhengjun Jiang
This study addresses the gambler's ruin problem for an insurance entity whose risk reserve dynamics are described by a Markov-modulated jump-diffusion risk model, specifically incorporating hyperexponential jumps for claim severities. We focus on the two-sided ruin probability, which quantifies the likelihood of the insurer's insolvency occurring before its reserve fund attains a specified upper barrier level $b\in(0, \infty)$. Our methodology employs the Banach contraction principle in conjunction with $q$-scale functions to show that the two-sided ruin probability is the unique fixed point of a well-defined contraction mapping. Building on this, we devise an iterative algorithm to approximate the ruin probability. The two-sided ruin probability, as well as the Lipschitz constant of the contraction mapping, depends on the upper barrier $b$, the premium rate, the Markov transition intensities, and the Poisson arrival rate of claims (each normalized by the squared volatility), as well as on the parameters of the hyperexponential claim size distribution. To conclude, a numerical illustration featuring a two-regime economic environment showcases the computational efficiency and practical applicability of the developed iterative algorithm.
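A schematic version of the iterative scheme: Banach fixed-point iteration of a contraction applied to a discretized ruin-probability function. The operator used in the paper is built from $q$-scale functions and the model parameters; the operator below is a hypothetical stand-in for illustration only.

```python
import numpy as np

def fixed_point_iteration(T, psi0, tol=1e-10, max_iter=10_000):
    """Banach fixed-point iteration: repeatedly apply a contraction T until
    successive iterates of the (discretized) ruin probability agree."""
    psi = np.asarray(psi0, dtype=float)
    for k in range(max_iter):
        psi_new = T(psi)
        if np.max(np.abs(psi_new - psi)) < tol:
            return psi_new, k + 1
        psi = psi_new
    return psi, max_iter

# Toy contraction on a grid of initial reserves in [0, b] (illustrative only).
grid = np.linspace(0.0, 10.0, 101)
T = lambda psi: 0.5 * psi + 0.25 * np.exp(-grid)   # hypothetical contraction, modulus 1/2
psi_star, n_iter = fixed_point_iteration(T, np.zeros_like(grid))
print(n_iter, psi_star[:3])
```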
Convex and non-convex stochastic multi-stage models with recourse actions for active investment decisions
Alessandro Cariolaro (Università degli Studi di Verona, Italy)
We develop a convex and non-convex stochastic multi-stage optimization framework with recourse actions for dynamic ex-ante decision-making under uncertainty. The framework is motivated by the active fund problem and tactical asset allocation in portfolio management, but it applies more generally to multi-stage allocation problems with transaction costs, tracking-error constraints, asset and liability management constraints and non-linear risk measures. The proposed methodology integrates (i) a convex quadratic stochastic program for the long-term allocation stage, and (ii) a non-convex stochastic program with recourse for short-term tactical adjustments, featuring a novel turnover constraint expressed via a relative Value-at-Risk measure. To enhance robustness, we employ Monte Carlo simulation with scenario averaging across multiple runs, providing a practical decision-support tool under uncertainty. Our empirical analysis, based on 15 asset classes from Bloomberg and MSCI indices (1998-2025), demonstrates how the model adapts portfolio weights dynamically in response to market opportunities, while maintaining risk control relative to a strategic benchmark. The framework outperforms static strategies in terms of cumulative return during the test period. The contribution is twofold: (i) the introduction of a generalizable stochastic multi-stage model with recourse under non-convex constraints, and (ii) the demonstration of its applicability to tactical portfolio optimization, highlighting its potential use in financial decision-making and other domains of resource allocation under uncertainty.
Profiling Actuarial Discrimination via Causal Decomposition
Olivier Côté (Université Laval, Canada)
Joint work with: Marouane Il Idrissi, Marie-Pier Côté, Arthur Charpentier.
Fairness in actuarial pricing is transitioning from ethical aspiration to legal obligation. Common fairness metrics -- global or local -- flag disparities but rarely explain their origin or how to reduce unfairness. In parallel, transparency is now a regulatory expectation, as exemplified by the GDPR's ``right to explanation'' \citep{Selbst/al:2018}. Recent work \citep{Lindholm/al:2024, Cote/al:2025_scalable} blends fairness and transparency into ``unfairness interpretability'', profiling discrimination back to modeling choices. We introduce a Shapley-styled, causally grounded decomposition of local fairness metrics that assigns observed disparities to specific covariate values. Practically, we build on the decomposition recipe of \cite{Idrissi/al:2025}. We show how the three choices -- \textit{target}, \textit{value function}, and \textit{allocation rule} -- can be guided by causal thinking, favouring interventional value functions and asymmetric allocations. This narrows the design space to causally sound decompositions, on which any explanation should rest. In a case study on automobile insurance pricing (on about $750{,}000$ Canadian insured vehicles), we apply our causal decomposition to proxy vulnerability with respect to a collected measure of financial fragility \citep{Cote/al:2025_scalable}. Under stated assumptions, our explanations for proxy vulnerability (an estimated quantity) extend to broader proxy effects linked to financial fragility. The causal assumptions behind our ``valid'' decomposition are explicit.
Microscopic Foundations of Inhomogeneous Heston Models
Maren Dück (Justus-Liebig-Universität Gießen, Germany)
Joint work with: Ludger Overbeck
Classical stochastic volatility models, such as the Heston model, are central tools in mathematical finance, especially in derivative pricing. However, the assumption of constant parameters stands in contrast with the time-varying volatility observed in financial markets. Hawkes processes provide a microscopic framework to study such effects. Since it is known that the rescaled intensity of a nearly unstable Hawkes process converges to a Heston-type volatility model, we extend this approach by introducing non-stationarity into the Hawkes framework.
To this end, we allow the exogenous intensity to vary in time and introduce a time-dependent factor in the kernel. This factor determines the strength of self-excitation at each point in time and thereby controls the degree of market endogeneity along the horizon. By further encoding stylized facts into the microscopic model, as in the classical results, we obtain an inhomogeneous Heston-type stochastic volatility model in the macroscopic limit. It can be seen as a natural extension of the classical Heston model, retaining its diffusion structure while allowing parameters such as the mean-reversion level and speed to vary over time.
Our results demonstrate that, even under non-stationarity, Hawkes processes admit diffusion limits which are better suited to capture the non-stationary nature of volatility in financial markets.
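Schematically, and in our own notation, the non-stationary Hawkes intensity described above can be written as
\[
\lambda_t \;=\; \mu(t) \;+\; a(t)\int_0^t \varphi(t-s)\,\mathrm{d}N_s,
\]
where $\mu(t)$ is the time-varying exogenous intensity, $\varphi$ a fixed excitation kernel, and $a(t)$ the time-dependent factor controlling the strength of self-excitation; a nearly unstable regime with suitable rescaling then yields the inhomogeneous Heston-type limit.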
Climate-Adjusted Credit Scoring Incorporating Supply Chain Network for European SMEs
Antoine Duysinx (UCLouvain, Belgium)
Joint work with: Raffaella Calabrese, Frédéric Vrins
Climate disasters such as wildfires, floods, or storms have become more frequent and severe, posing significant threats to firms. Recent credit scoring models account for climate risk but typically overlook the cascading nature of these shocks by focusing solely on borrower-specific characteristics. SMEs are particularly vulnerable to climate disruptions compared to larger firms, due to their limited resources and less diversified supply chains or customer bases, often resulting from their operations in narrower geographic and market segments. This suggests that ignoring supply-chain network effects in SMEs’ credit risk assessment might underestimate the financial impact of those extreme events.
This study proposes a novel credit scoring model based on a binary spatial autoregressive structure that integrates industry-level supply and demand linkages, capturing both direct and network-driven impacts of climate events on SME default risk. By including supply chain dependencies, the model reveals how a severe climate disaster can trigger higher default probabilities not only for directly affected borrowers, but also for those connected upstream or downstream. To the best of our knowledge, this is the first work that explicitly embeds the supply-chain network into a credit scoring model. Drawing on a comprehensive dataset of European SME loans and regional climate disaster records from 2013 to 2022, our findings underscore that the supply chain network acts as a key transmission channel for climate-induced credit risk. Our insights can help SMEs better understand the drivers of their credit risk and support financial institutions and policymakers in designing more effective risk management strategies and regulation policies.
Super-Hedging an Arbitrary Number of European Options with Integer-Valued Strategies
Meriam El Mansour (Université Paris Dauphine-Tunis, Tunisia)
Joint work with: Dorsaf Cherif - Emmanuel Lepinette
The usual theory of asset pricing in finance assumes that financial strategies, i.e. the quantities of risky assets held, are real-valued, so that they are not integer-valued in general; see the Black-Scholes model for instance. This is clearly contrary to what is possible in the real world. In this paper, for an arbitrary $\Omega$, we show that, in discrete time, it is possible to evaluate the minimal super-hedging price when we restrict ourselves to integer-valued strategies. We formulate a dynamic programming principle that can be directly implemented on historical data and that also provides the optimal integer-valued strategy.
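A toy one-period version of the principle on a finite scenario set (finite $\Omega$), with a bounded search over integer positions; the scenarios, payoff, and search range below are illustrative, not the paper's algorithm.

```python
import numpy as np

def one_period_integer_superhedge(s0, s1_scenarios, payoff, theta_range):
    """Minimal super-hedging price with an integer position theta in the risky asset:
    price = min over integer theta of max over scenarios of payoff - theta * (S1 - S0)."""
    best_price, best_theta = np.inf, None
    for theta in theta_range:
        price = max(payoff(s1) - theta * (s1 - s0) for s1 in s1_scenarios)
        if price < best_price:
            best_price, best_theta = price, theta
    return best_price, best_theta

# Toy example: call option with strike 100 on a three-scenario one-step tree.
price, theta = one_period_integer_superhedge(
    s0=100.0,
    s1_scenarios=[90.0, 100.0, 115.0],
    payoff=lambda s: max(s - 100.0, 0.0),
    theta_range=range(-5, 6),
)
print(price, theta)
```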
Early Warning System for Non-Performing Clients
Arnaud Germain (UCLouvain, Belgium)
Joint work with: Frédéric Vrins
In its "Guidance to banks on non-performing loans", ECB requires banks to implement an Early Warning System (EWS) to identify potential non-performing clients at a very early stage. Relying on a unique dataset provided by a systemic European bank including 5.5 million observations of anonymized data from 2018 to 2022, we aim to predict the corporate clients who will become non-performing in a given warning horizon. We propose two solutions to address time and client heterogeneity issues. Regarding the latter, we divide our dataset into several clusters using k-means, fit a prediction model on each cluster, and combine those models together. This boosts the out-of-sample performance compared to a case where we fit a single prediction model on the whole dataset and a case where we rely on domain knowledge to determine the clusters. Second, to address time heterogeneity, we forecast the unconditional probability to be positive using macroeconomic variables and then rescale the output of the prediction model using Bayes’ theorem. This enhances the out-of-sample performance compared to a case where the macroeconomic variables are directly included as predictors of the prediction model. Both approaches are complementary in the sense that the best predictive performance is achieved by combining them together. Our findings help to increase the performance and the robustness of EWS but can also be useful in a wide range of pattern recognition problems.
Affine Volterra covariance processes and applications to commodity models
Boris Günther (Justus-Liebig-Universität Gießen, Germany)
Joint work with: Ludger Overbeck
Wishart processes, a matrix-valued extension of the Cox-Ingersoll-Ross process, are widely used in many areas of finance. As an affine process on the cone of positive semidefinite symmetric matrices, they provide a natural framework for multivariate stochastic volatility modeling. Inspired by recent results on stochastic affine Volterra equations and the need to capture the rough nature of volatility, we propose an affine Volterra covariance process, with a Volterra-Wishart process as a special case in mind. In line with the theory of affine Volterra processes, we show that this new class of processes admits an explicit exponential-affine representation of the Fourier–Laplace functional in terms of matrix-valued Riccati–Volterra equations.
We apply this framework to commodity markets by extending popular models, such as the Gibson-Schwartz and Schwartz-Smith models, to incorporate a Volterra-Wishart covariance process. The resulting model preserves the exponential-affine transform property while incorporating stochastic volatility, time-varying correlations, and rough volatility features. This specification captures stylized facts such as volatility clustering, volatility spillovers, and the dynamic co-movement of spot prices and convenience yields, thereby providing a tractable framework for pricing, hedging, and risk management in modern energy and commodity markets.
Improved optimal investment and consumption strategies under inflation risk with stochastic volatility
Wouter Honig (University of Amsterdam and Dutch Central Bank, Netherlands)
Joint work with: Michel Vellekoop and Roel Beetsma (both University of Amsterdam) and Bart Diris (Dutch Central Bank)
This paper develops and calibrates a financial market model that incorporates stochastic volatility in inflation dynamics. Traditional models typically assume constant inflation volatility, which limits their responsiveness to economic shifts like geopolitical tensions and inflation surges such as the one in 2021. Our multidimensional affine model extends the classical Brennan and Xia framework with a stochastic variance component that allows us to capture the interdependence of asset price returns and inflation dynamics. Various model specifications are calibrated using both historical economic time series and current market prices of derivatives, in order to fit both historical and current market expectations.
We derive optimal consumption and portfolio strategies to maximize the utility of consumption in real terms for finite-horizon investors with CRRA risk preferences by solving the resulting Hamilton-Jacobi-Bellman equation numerically. Our findings demonstrate that incorporating stochastic volatility in these strategies significantly alters optimal asset allocations and yields substantial welfare gains. Sensitivity analyses confirm the robustness of these strategies across various model specifications and different data periods.
Controlled refinement of actuarial models: balancing interpretability and predictive performance
Enej Kovac (University of Lausanne, Switzerland)
Explainability and interpretability are important aspects of actuarial models, given the need for transparency with various stakeholders such as customers, decision-makers and regulators. Actuaries have traditionally relied on relatively simple models that are easy to justify and interpret, but these may not reflect more complex relationships in the data. More advanced models, such as modern machine learning methods, often provide better predictive performance but at the expense of interpretability. This creates a clear trade-off between performance and interpretability. In this work we present an idea that considers both perspectives together. The starting point is an explainable base model that anchors the prediction. A neural network is then allowed to refine this prediction through limited and controlled adjustments. The extent of these adjustments is set in advance and can be tailored to the preferences or requirements of the stakeholders. This gives the modeller a way to control the balance between interpretability and predictive performance. As the permitted adjustments increase, accuracy typically improves fastest at the beginning, and then the gains gradually level off. This means that some improvement in predictions can often be achieved without losing much interpretability. We first illustrate the proposed idea on a simple synthetic example and then test it on an actuarial dataset.
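One possible realization of this idea, not necessarily the authors' implementation: an interpretable base model anchors the prediction and a neural network's multiplicative correction is clipped to a user-chosen budget delta. The models, data, and cap below are our own illustrative choices.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.neural_network import MLPRegressor

def fit_controlled_refinement(X, y, delta=0.10, seed=0):
    """Interpretable base model plus a neural adjustment capped at +/- delta
    (relative), so the final prediction stays close to the explainable anchor."""
    base = PoissonRegressor(alpha=1e-4, max_iter=1000).fit(X, y)
    base_pred = base.predict(X)
    # The network learns a multiplicative residual; its effect is clipped below.
    resid = np.log((y + 1e-3) / (base_pred + 1e-3))
    nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed).fit(X, resid)

    def predict(X_new):
        adj = np.clip(nn.predict(X_new), np.log(1 - delta), np.log(1 + delta))
        return base.predict(X_new) * np.exp(adj)

    return predict

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))
y = rng.poisson(np.exp(0.3 * X[:, 0] + 0.2 * np.sin(3 * X[:, 1])))
predict = fit_controlled_refinement(X, y, delta=0.10)
print(predict(X[:5]))
```

Setting delta to zero recovers the base model exactly, which is one simple way to expose the interpretability-performance dial to stakeholders.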
Leveraging Generative AI to Assess the Impact of Climate Change on Housing Mortgage Prices and Credit Risk
Leonard Mushunje (The University of Sydney, Australia)
Joint work with: David Edmund Allen and Shelton Peiris
Climate change is emerging as a critical risk factor in housing and mortgage markets. Physical risks (e.g., floods, wildfires) and transition risks (e.g., regulatory shifts, green retrofits) may impact property valuations and borrower creditworthiness. This project proposes a novel approach, employing Generative AI (GenAI) to analyze and forecast the complex relationships between climate risk exposures, mortgage pricing, and credit risk. The research will integrate multi-source data—including satellite climate data, property listings, loan-level mortgage data, and socioeconomic indicators—to uncover patterns and generate scenario-based insights for risk managers and policymakers.
Adaptive Multilevel Fourier–RQMC Methods for Multivariate Shortfall Risk
Truong Ngoc Nguyen (Utrecht University, Netherlands)
Joint work with: Chiheb Ben Hammouda
Systemic risk measures were introduced to capture the global risk and the corresponding contagion effects generated by an interconnected system of financial institutions. Among these, the multivariate shortfall risk measure (MSRM) provides a principled framework for pre-aggregation capital allocation, determining the minimal distribution of capital across institutions required to secure the system. While the theoretical foundations of MSRM are well established, efficient numerical methods for their computation remain limited. In this work, we develop a new class of algorithms that combine Fourier methods with randomized quasi-Monte Carlo (RQMC) to compute the multivariate shortfall risk and the associated optimal allocations. We provide a rigorous mathematical foundation for the Fourier-based approach, including an analysis of convergence rates. Beyond the single-level RQMC method, we introduce an adaptive multilevel (ML) RQMC scheme, which leverages the geometric convergence of the allocation optimization to achieve further variance reduction and computational gains. Several numerical examples confirm the superior performance of the proposed Fourier–RQMC approach over the Sample Average Approximation (SAA) and stochastic optimization benchmarks. Moreover, ML-RQMC yields additional speedups over single-level RQMC while preserving accuracy in optimal risk allocation.
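A minimal sketch of the RQMC ingredient only (scrambled Sobol' points with a Gaussian transform), not of the Fourier representation or the multilevel scheme; the aggregate loss model below is a placeholder of ours.

```python
import numpy as np
from scipy.stats import norm, qmc

def rqmc_mean(loss_fn, dim, m=10, n_rand=16, seed=0):
    """Randomized QMC estimate of E[loss_fn(Z)], Z ~ N(0, I_dim), using n_rand
    independently scrambled Sobol' point sets of size 2^m; returns estimate and
    a standard error across the independent randomizations."""
    estimates = []
    for r in range(n_rand):
        sob = qmc.Sobol(d=dim, scramble=True, seed=seed + r)
        u = sob.random_base2(m=m)          # 2^m low-discrepancy points in (0, 1)^dim
        z = norm.ppf(u)                    # map to Gaussian risk factors
        estimates.append(np.mean([loss_fn(zi) for zi in z]))
    estimates = np.array(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_rand)

# Toy two-institution aggregate loss (illustrative only).
loss = lambda z: np.exp(0.25 * z[0]) + np.exp(0.30 * z[1]) - 2.0
print(rqmc_mean(loss, dim=2))
```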
Optimal Investment and Entropy-Regularized Learning Under Stochastic Volatility Models with Portfolio Constraints
Pertiny Wilfried Nkuize Ketchiekmen (Université Laval, Canada)
Joint work with: Thai Nguyen
We consider a continuous-time portfolio optimization problem under a stochastic volatility model, where both the drift and volatility of the risky asset are unknown and evolve stochastically. The investor seeks to maximize the expected utility of terminal wealth, subject to portfolio constraints such as no short-selling and borrowing prohibition. Since model parameters are not known in advance, our work adopts a reinforcement learning framework that does not rely on parameter estimation, but instead explores the market in a model-free manner to learn optimal investment policies. Our contribution fits into the growing body of work on continuous-time entropy-regularized reinforcement learning for portfolio selection under model uncertainty and trading constraints, as illustrated for instance by the studies of Dai et al. (2023) and Chau et al. (2024). Our learning procedure follows three main stages: policy evaluation, policy improvement, and simulation. In the evaluation step, we analyze candidate policies through the solution of the Hamilton–Jacobi–Bellman (HJB) equation under portfolio constraints. We rigorously establish the existence of a classical solution to this HJB equation by employing the analytical framework developed by Ladyzhenskaya, Solonnikov, and Uraltseva (1968), which provides sufficient conditions for the existence and regularity of solutions to second-order parabolic partial differential equations. Policy improvement is performed iteratively using actor-critic algorithms. Finally, we evaluate the learned strategies through numerical simulations based on real-world volatility data such as the VIX index. Our results suggest that the learned policies are robust, stable, and perform near-optimally in uncertain market environments.
Spectral Analysis of a Partial Integro-Differential Equation Governing European-Style Asian Option Pricing in Jump-Diffusion Markets
Liju P (National Institute of Technology Calicut, India)
Joint work with: Dr. Ashish Awasthi
The Merton jump-diffusion model has been widely used in Asian option pricing to account for both continuous volatility and sudden jumps in asset prices. In this work, we extend the framework by incorporating proportional dividends and transaction costs, which are essential features in realistic market environments. The resulting partial integro-differential equation (PIDE) governing European-style Asian option prices is solved numerically using the Chebyshev–Tau spectral method for spatial discretisation, coupled with an implicit–explicit time integration scheme. We analyse the spatial convergence of the proposed approach using maximum norm errors and assess its stability under varying time step sizes. Numerical experiments are conducted across a range of parameter settings, including different transaction cost levels, highlighting their quantitative impact on option prices. The results confirm that the scheme achieves the expected convergence rates, remains stable under appropriate time step restrictions, and closely matches Monte Carlo benchmarks. This study demonstrates that the proposed methodology provides a robust and computationally efficient tool for pricing Asian options in jump-diffusion markets with dividends and transaction costs, offering valuable insights for both academic research and industry practice.
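As one building block of a Chebyshev-based spatial discretisation, differentiation can act directly on Chebyshev coefficients, as in Tau methods; the sketch below only demonstrates this spectral-accuracy mechanism on a smooth test function and does not implement the PIDE, the jump integral, or the IMEX stepping.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate a smooth test function at Chebyshev-Lobatto nodes on [-1, 1],
# then differentiate by acting on the Chebyshev coefficients.
N = 32
x = np.cos(np.pi * np.arange(N + 1) / N)        # Chebyshev-Lobatto nodes
f = np.exp(x) * np.sin(2 * x)                    # smooth test function
coeffs = C.chebfit(x, f, N)                      # Chebyshev coefficients of the interpolant
dcoeffs = C.chebder(coeffs)                      # coefficients of the derivative
exact = np.exp(x) * (np.sin(2 * x) + 2 * np.cos(2 * x))
err = np.max(np.abs(C.chebval(x, dcoeffs) - exact))
print(f"max error of spectral derivative: {err:.2e}")   # near machine precision
```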
Optimal design of energy retail contracts for agents with habit-formation attitudes
Xiaodan Wang (EMLYON Business School, France)
Joint work with: Daniël Linders (KULeuven, Belgium) and Bertrand Tavin (EMLYON Business School, France)
The recent energy crisis in Western Europe led to tremendous volatility in energy markets, with sharp increases in electricity and gas prices. In some countries, such as France and the Netherlands, governments set up price caps, or fixed prices, for electricity and gas supplied to individuals, to spare household budgets from these sudden and large upward price shocks. However, these policies were funded by state budgets, raising concerns about the efficient use of taxpayers' money. In parallel, consumption habits for necessities such as food, heating, and electricity are sticky over time, which means that individuals' welfare depends not only on current income but also on consumption levels during previous periods, which act as a reference level.
Building on this background, this paper deals with the problem of optimal multi-period energy contract design for individuals who exhibit a habit-formation attitude regarding their consumption levels, in the context of volatile market prices of energy. We address two research questions. First, for the government, we want to find the most efficient mechanism to protect consumers' welfare for a given budget. Second, for an energy supplier, we want to identify the type of contract that should be proposed to customers to improve their multi-period welfare, at no extra cost to the supplier. In our setting, a customer (e.g., an individual or a household) receives an income and pays an energy bill; the remaining amount is available for consumption, which underpins the welfare obtained over the period. This welfare is driven by the comparison of the current consumption level with a reference level, which is updated according to the levels of the previous periods.
We first formalize and solve the problem in a one-period (monthly) setting. We then consider the multi-period setting, under which agents have habit-formation attitudes. The formalization of the problem combines a maximization of the agent's welfare, expressed as an expectation under the historical measure P, with a budget constraint involving an expectation under a risk-neutral measure Q. More specifically, the optimization is written with respect to the contract's features, and the welfare is expressed in terms of the expected utility of consumption streams. We craft a methodology to solve this problem and obtain the optimal contract by allowing for inter-period smoothing. The proposed methodology relies on a weighted Monte Carlo approach to compute expectations under both measures simultaneously. This method is flexible in that it can handle a wide range of realistic energy price dynamics and agent attitudes. In our approach, the contract operator (i.e., the energy supplier, or the government spending taxpayers' money) operates under a fair market valuation constraint, which, as a by-product, allows for the hedging of the operator's commitments with instruments available on the market.
We provide empirical applications with electricity market data for France and the Netherlands. In these applications, we quantify the welfare improvement for retail customers when relying upon the optimal contract instead of standard ones, and we identify the drivers of welfare improvement in terms of individual agent attitude and energy price dynamics.
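A minimal sketch of the idea of evaluating both measures from one set of scenarios, under strong simplifying assumptions of our own (a one-period lognormal energy price and an explicit likelihood-ratio weight); the paper's weighted Monte Carlo approach is more general than this toy change of measure.

```python
import numpy as np

def weighted_mc_P_and_Q(payoff, s0=100.0, mu=0.08, r=0.02, sigma=0.5, T=1/12, n=100_000, seed=0):
    """Simulate a one-period lognormal price under the historical measure P and
    reuse the same draws for risk-neutral (Q) expectations via likelihood-ratio weights."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    s_T = s0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    lam = (mu - r) * np.sqrt(T) / sigma        # Girsanov shift for this toy dynamics
    w = np.exp(-lam * z - 0.5 * lam**2)        # dQ/dP evaluated on each scenario
    x = payoff(s_T)
    return x.mean(), (w * x).mean()            # (E_P[payoff], E_Q[payoff])

# Toy "contract" payoff: energy bill capped at a fixed price of 110.
e_p, e_q = weighted_mc_P_and_Q(lambda s: np.minimum(s, 110.0))
print(e_p, e_q)
```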