MAF2018: EIGHTH INTERNATIONAL CONFERENCE ON MATHEMATICAL AND STATISTICAL METHODS FOR ACTUARIAL SCIENCES AND FINANCE
PROGRAM FOR THURSDAY, APRIL 5TH

09:00-10:00 Session PS2: The Amazing Power of Dimensional Analysis in Finance: Market Impact and the Intraday Trading Invariance Hypothesis. Walter Schachermayer, University of Vienna

A basic problem when trading in financial markets is to analyze the price movement caused by placing an order. Clearly we expect - ceteris paribus - that placing an order will move the price to the disadvantage of the agent. This price movement is called the market impact.

Following the recent work of A. Kyle and A. Obizhaeva we apply dimensional analysis - a line of argument well known in classical physics - to analyze to what extent the square root law applies. This universal law claims that the market impact is proportional to the square root of the size of the order.

We also analyze the trading activity in a stock, i.e. the number of trades per unit of time, as a function of suitable explanatory variables. Dimensional analysis leads to a 2/3 law: the number of trades is proportional to the power 2/3 of the exchanged risk.
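
In schematic form (the symbols Q, N and W below are generic placeholders, not the speaker's notation), the two scaling laws read

\[
\Delta P \;\propto\; \sqrt{Q}, \qquad\qquad N \;\propto\; W^{2/3},
\]

where \(\Delta P\) is the market impact of an order of size \(Q\), \(N\) the number of trades per unit of time and \(W\) the exchanged risk.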

The mathematical tools of this analysis rely on elementary linear algebra.

Joint work with Mathias Pohl, Alexander Ristig and Ludovic Tangpi. 

Location: Salón de Grados
10:05-11:25 Session 6A: Life insurance and related issues (II)
Location: Room 0.A.02
10:05
Optimal annuitization under regime switching mortality

ABSTRACT. This paper examines how mortality rate transitions affect the timing of a life annuity purchase. The annuitization decision has been the subject of a whole research field since the seminal contribution of Yaari (1965). Most of the papers in this field assume a deterministic force of mortality. This paper contributes to the recent literature on the annuitization decision with stochastic mortality rates. More precisely, we study the effect of changes in the individual’s mortality rate on annuity demand. To this aim, we assume that the individual’s mortality force is modulated by a continuous-time Markov chain with finite state space. The idea, classical in the actuarial literature, is that the Markov chain describes the environmental conditions influencing mortality, such as changes in the individual’s health status or improvements in medical treatments. The individual aims at determining the time to switch from a financial investment to a life annuity. From a mathematical point of view the problem is formulated as an optimal stopping problem under regime switching. At first, we analyze the regularity of the value function and study the so-called smooth fit property. The Markovian setting allows us to cast the optimization problem as a free boundary problem, based on the associated Hamilton-Jacobi-Bellman (HJB) equation of dynamic programming, taking the form of a system of coupled variational inequalities. The optimal stopping time is the first hitting time to a regime-dependent boundary, characterized as the unique solution to a system of algebraic equations. Finally, we present numerical examples comparing the optimal annuitization time with and without regime switching.
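
As a rough illustration of the structure described above (generic notation, not taken from the paper), the coupled variational inequalities for an optimal stopping problem modulated by a finite-state Markov chain typically read

\[
\min\Big\{\, r V_i(x) - \mathcal{L}_i V_i(x) - \sum_{j \neq i} q_{ij}\,\big[V_j(x) - V_i(x)\big],\;\; V_i(x) - g(x) \,\Big\} = 0, \qquad i = 1,\dots,m,
\]

where \(V_i\) is the value function in regime \(i\), \(\mathcal{L}_i\) the generator of the state dynamics in that regime, \(q_{ij}\) the transition intensities of the Markov chain and \(g\) the reward obtained upon annuitization; stopping is optimal the first time the state crosses a regime-dependent boundary.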

10:25
Market-Consistent Valuation of Participating Life Insurance Contracts under Longevity Risk: The Case with Heterogeneous Groups of Policyholders

ABSTRACT. The purpose of this paper is to conduct a market-consistent valuation of life insurance participating liabilities sold to a population of partially heterogeneous customers under the joint impact of biometric and financial risk. In particular, the heterogeneity between groups of policyholders stems from their initial contributions, offered minimum interest rate guarantees and contract inception and maturity. We analyse the combined effect of these features on the company’s insolvency while maintaining the insurer’s constraint to achieve the same expected return for different cohorts of policyholders. Within our extensive numerical analyses, we determine the fair participation rates and other key figures and discuss how the insurance company can equitably act and the implications for the customers. Furthermore, we stress the predominant effect longevity risk, which cannot be diversified away through pooling, has on the market values of life insurance liabilities and on the "wealth transfer" occurring among different groups of policyholders.

10:45
Pre- and post-retirement savings choices with longevity-linked securities

ABSTRACT. The paper studies the optimal portfolio, consumption and labour decision of a household that, alongside traditional assets, can invest in longevity-linked securities. We solve the life-cycle problem of the agent in closed form, under different pension scheme arrangements: a) absence of any scheme; b) presence of a pre-defined pension scheme, in which both the contributions and the pension are set in advance and satisfy a fairness condition; c) a personalized private pension scheme, in which the agent can optimally decide the instantaneous contribution during the working lifetime and then receives the pension accordingly. Numerical simulations compare the three alternatives and provide sensitivity analysis with respect to the relevant parameters.

11:05
Dynamic policyholder behaviour and surrender option evaluation for life insurance.

ABSTRACT. Since 2016 the insurance industry has been subject to a new capital requirement framework named Solvency II. Under the Solvency II framework, the calculation rules for life insurance liabilities changed due to new risk measures introduced to evaluate the total capital absorption arising from the interaction among the risks affecting the management of a life insurance portfolio (interest rate risk, equity risk, spread risk, longevity risk, lapse risk, mortality risk, etc.). In particular, this paper aims to analyze the effect of dynamic policyholder behaviour on the evaluation of lapse risk under the market-consistent framework for insurance liabilities, i.e. best estimate liabilities. A theoretical investigation of different mathematical models is presented; a model is then proposed to evaluate the best estimate liability in a portfolio of participating life insurance contracts and to assess the price of the surrender options embedded in the contracts.

10:05-11:25 Session 6B: Financial Econometrics
Location: Salón de Grados
10:05
Conditional Autoregressive Quantile-Located Value-at-Risk

ABSTRACT. The Conditional VaR proposed by Adrian and Brunnermeier (2016) has established itself as the reference risk measure to capture systemic risk, by explicitly taking into account the conditional effect of a company in distress on the tail risk of the financial system. Here, we extend this risk measure in two directions, which respectively lead to: i) the Conditional Autoregressive VaR, where we include autoregressive components in the conditional quantiles to explicitly deal with volatility clustering and heteroskedasticity; ii) the Conditional Quantile-Located VaR, which captures the extreme risk when both the financial system and the (conditioning) company simultaneously lie in the left tail of their returns’ distributions, that is, when both are in distress. We evaluate the two risk measures and their combination, that is, the Conditional Autoregressive Quantile-Located VaR, with an extensive in- and out-of-sample analysis.
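
For reference, the CoVaR of Adrian and Brunnermeier (2016) at level \(\tau\) is the conditional quantile defined (in one common formulation) by

\[
\Pr\!\Big( X^{\mathrm{sys}}_t \le \mathrm{CoVaR}^{\,i}_{t,\tau} \,\Big|\, X^{i}_t = \mathrm{VaR}^{\,i}_{t,\tau} \Big) = \tau ,
\]

where \(X^{\mathrm{sys}}_t\) and \(X^{i}_t\) are the returns of the system and of company \(i\). Extension i) adds autoregressive terms to this conditional quantile, while extension ii) replaces the conditioning event with the company lying in its own left tail; the exact specifications are those of the paper.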

10:25
Forecasting Optimal Portfolio Weights Using High Frequency Data

ABSTRACT. The paper evaluates the contribution of conditional second moments, from high frequency data, to optimal portfolio allocations. Using the DCC model as a benchmark, we put forth two novel approaches: a model for the inverse conditional correlation matrix (DCIC) and the direct modeling of the conditional portfolio weights (DCW). We assess their out-of-sample ability by comparing the corresponding minimum-variance portfolios built on the components of the Dow Jones 30 Index. Evaluating performance in terms of portfolio variance, certainty equivalent, turnover and net certainty equivalent, we find that exploiting conditional second moments gives marked improvements upon volatility timing and naïve strategies: DCC and the computationally convenient DCIC perform in a similar way; DCW, the simplest and fastest to implement, exhibits superior performance with respect to all measures considered.
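
As a minimal illustration of the step shared by all three strategies, the global minimum-variance weights implied by a one-step-ahead covariance forecast are \(w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1})\); the sketch below assumes a generic forecast `sigma` and is not output of the DCC, DCIC or DCW models themselves.

```python
import numpy as np

def min_variance_weights(sigma: np.ndarray) -> np.ndarray:
    """Global minimum-variance weights w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)   # Sigma^{-1} 1 without explicit inversion
    return w / w.sum()

# Toy 3-asset covariance forecast, purely illustrative.
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
print(min_variance_weights(sigma))
```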

10:45
A copula-based quantile model

ABSTRACT. Risk management has typically focused on the Value-at-Risk (VaR) as the main risk measure. This measure is financially interpreted as the worst loss expected over a given period of time with a given probability. From a statistical point of view, VaR is a quantile of the loss distribution, that is, an unobservable quantity that can be estimated once the distribution of the losses is known. The most traditional technique is the estimation of a dynamic VaR as a byproduct of a heteroskedastic model, e.g. a GARCH model. A second approach stresses the possibility of estimating the dynamics of the quantile directly: instead of considering a time-varying variance which implies a time-varying quantile, the dynamics is estimated directly on the quantiles. Engle and Manganelli (2004) have proposed the CAViaR model, in which the quantile at time t depends mainly on its own lagged values and on some function of past returns. Recently, the univariate approach has been extended to take into account possible spillovers across VaRs (White, Kim and Manganelli, 2015) in a bivariate formulation. In this paper the quantiles estimated using the bivariate CAViaR model are compared to the quantiles estimated by exploiting a copula function linking some estimated univariate quantiles. An application considers daily returns (from January 2008 to February 2014, for a total of 1584 observations) of 20 assets included in the Eurostoxx50. Each quantile dynamics is studied in relation to the quantiles of a portfolio formed by all the assets with equal weights. The comparison is first made in terms of the Kupiec and Christoffersen tests. Moreover, a further comparison is made using two loss functions that evaluate the distances between the losses and the VaR measures in the presence of a violation. The results show that the copula approach is highly competitive, providing, in particular, estimated quantiles which generally imply a lower value for the two loss functions.
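
For reference, the symmetric absolute value specification of the CAViaR model of Engle and Manganelli (2004) is

\[
q_t(\tau) = \beta_0 + \beta_1\, q_{t-1}(\tau) + \beta_2\, \lvert r_{t-1} \rvert ,
\]

so the conditional \(\tau\)-quantile \(q_t\) of the returns depends on its own lag and on a function of past returns; the bivariate extension of White, Kim and Manganelli (2015) adds to each equation the lagged quantile and return of the other series.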

11:05
Combining multivariate volatility models

ABSTRACT. Forecasting conditional covariance matrices of returns involves a variety of modeling options. First, the choice between models based on daily or intradaily returns. Examples of the former are the Multivariate GARCH (MGARCH) models while models fitted to Realized Covariance (RC) matrices are examples of the latter. A second option, strictly related to the RC matrices, is given by the identification of the frequency at which the intradaily returns are observed. A third option concerns the proper estimation method able to guarantee unbiased parameter estimates even for large (MGARCH) models. Thus, dealing with all these modeling options is not always straightforward. A possible solution is the combination of volatility forecasts. The aim of this work is to present a forecast combination strategy in which the combined models are selected by the Model Confidence Set (MCS) procedure, implemented under two economic loss functions (LFs).

10:05-11:25 Session 6C: Information and Arbitrage in Financial Markets
Location: Room 0.A.03
10:05
Extending classical stochastic calculus for insider trading scenarios

ABSTRACT. Discerning the strategies of a dishonest trader who possesses privileged information in a financial market has become a classical problem in the field of stochastic analysis applied to finance. Such a trader, usually known as "the insider", is assumed to possess information on some future events that will affect the market. Despite its relatively long history, this problem continues to be of interest within the realm of stochastic analysis. The reason is that the mathematical formulation of this problem leads to stochastic differential equations with non-adapted terms, which in turn forbids their interpretation in the classical Itô sense. The necessity of introducing more advanced probabilistic methods, such as Malliavin calculus or enlargements of filtrations, highlights the intrinsic difficulty of the insider trading problem. In this talk we will summarize how anticipating stochastic calculus and conditioned stochastic differential equations can be used to tackle this sort of problem.

10:25
The Value of Informational Arbitrage

ABSTRACT. In the context of a general complete semimartingale financial model, we aim at answering the following question: How much are you willing to pay for learning some private information that will allow you to achieve arbitrage profits? In particular, we are interested in the case where the private information can yield arbitrage opportunities but not arbitrages of the first kind. In the spirit of Amendinger, Becherer & Schweizer (2003, Financ. Stoch.), we shall give an answer to this question through an indifference pricing approach for general preferences over consumption and terminal wealth, relying on recent results on initial enlargement of filtrations.

10:45
Equilibrium under imperfect competition and asymmetric information

ABSTRACT. In this paper we consider a framework due to Kyle (1985), extended by Back (1992) to the continuous-time setting, to model the asymmetry of information under imperfect competition. We assume three kinds of actors in the market: market makers, uninformed traders and one insider who knows the fundamental value of an asset at any time. There is also a release time when the fundamental value is made public, and this release time is also the horizon of our market. The horizon can be predictable or not. The imperfect competition is modeled through very general pricing rules extending all the previous cases studied in the literature. In this context we study the effect of all these factors on the equilibrium.

11:05
A continuous auction model with insiders and random time of information release

ABSTRACT. In a unified framework we study equilibrium in the presence of an insider having information on the signal of the firm value, which is naturally connected to the fundamental price of the firm-related asset. The fundamental value itself is announced at a future random (stopping) time. We consider two cases: first, when the release time of the information is known to the insider, and then when it is unknown to her as well. Allowing for very general dynamics, we study the structure of the insider's optimal strategies in equilibrium and we discuss market efficiency. In particular, we show that when the insider knows the information release time, the market is fully efficient. When the insider does not know this random time, we see that there is an equilibrium without full efficiency, but where the sensitivity of prices decreases in time in accordance with the probability that the announcement time is greater than the current time. In other words, the prices become more and more stable as the announcement approaches.

11:25-12:00 Coffee Break
12:00-13:20 Session 7A: Aging, uncertainty, savings and pensions
Location: Room 0.A.02
12:00
A TWO-STEPS MIXED PENSION SYSTEM: AN AGGREGATE ANALYSIS

ABSTRACT. The change in economic and sociodemographic reality, characterized by a continuous increase in longevity, the consequences of the economic crisis as well as the lack of adequate adjustments of Social Security retirement pension systems everywhere, entails risks for workers and for Social Security itself. Many reforms of public pension systems have been carried out in recent years, based on modifying system parameters and structural changes. Some reforms aim at increasing capitalization in the determination of the final pension through a life annuity to complement the public retirement pension as a second retirement income.

Against the background of the change of agents’ behaviors throughout the life cycle and the presence of an adverse selection problem in the annuities market, we describe in this paper a ‘two-steps mixed pension system’ that tries to solve the pressure that increasing longevity is putting on pension schemes to provide adequate and sustainable pensions for all.

In our two-steps mixed system, when workers reach their ordinary retirement age they receive a ‘term annuity’ generated by their previous capitalized savings, to be replaced by a Social Security defined contribution ‘pure life annuity’ when the so-called ‘grand age’ is reached. The analysis is carried out from an aggregate perspective, through the calculation of the Social Security implicit debt. We also analyze some possible transition strategies to the new system.

12:20
Automatic Balancing Mechanisms for Mixed Pension Systems

ABSTRACT. The decline in fertility rates, the increase in longevity and the current forecasts for the ageing of the baby-boom generation all point to a substantial increase in the dependency ratio, and this will raise serious concerns for the sustainability of Pay-As-You-Go (PAYG) pension schemes.

Consequently, many European countries have already carried out some parametric reforms, or even structural reforms, of their pension systems. Specifically, two-thirds of pension reforms in OECD countries in the last 15 years, OECD (2011), contain measures that will automatically link future pensions to changes in life expectancy, compared with only one country (Denmark) a decade ago.

Other countries, such as Sweden, Latvia and Poland, combine funded and PAYG elements within a compulsory basic pension system. These mixed systems have been advocated, particularly by the World Bank, as a practical way to reconcile the higher financial market returns compared with GDP growth with the costs of a scheme with a greater funded element.

With this in mind, the aim of this research is threefold. First, using nonlinear optimization based on Godínez-Olivares et al (2016), it seeks to assess the impact of a compulsory funded pension scheme that complements the traditional PAYG. Second, we study the consequences of introducing a sustainability factor linked to life expectancy (or any other demographic factor) on the financial stability of mixed pension systems. Finally, in the case of partial financial sustainability, we design different optimal strategies, involving variables such as the contribution rate, the age of retirement and the indexation of pensions, to restore the long-term financial equilibrium of the system.

12:40
Automatic Balancing Mechanisms in Practice: What lessons for pension policy makers?

ABSTRACT. Despite numerous reforms and the introduction of automatic or semi-automatic adjustment mechanisms, the solvency of pension systems is not guaranteed in the future. To circumvent this difficulty, setting up an automatic balancing mechanism can provide a variety of benefits. The purpose of this article is to take stock of the specific properties of various adjustment rules as they exist in different countries and to see to what extent the understanding of these rules could be useful for public choices in countries wishing to ensure the sustainability of their retirement systems. Several adjustments are possible. Three rules will receive our attention. The American case is radical. The prohibition on resorting to public debt, the so-called "fiscal cliff", forces the balance by a drastic reduction of pensions whenever the reserve fund is exhausted. The underlying idea is that this socially unacceptable perspective will force the parliament to take measures to restore solvency. The Swedish approach sets in stone an adjustment by the general level of pensions to guarantee a notional asset / liability ratio. A huge reserve fund is used to smooth the shock associated with the aging of the population. The Canadian approach is based on an "inadequate rate provision" which induces an increase in the contribution rate and a pension freeze as long as the federal and provincial finance ministers do not reach an agreement.

13:00
The challenges of wealth and its intergenerational transmission in an aging society

ABSTRACT. Since the beginning of the 80’s, our developed societies have faced a process of “property accumulation” that is unprecedented, massive and particularly harmful for economic growth, the equality of opportunities and the equity between generations. This process results in an increasing and unequal weight of wealth relative to income, but also in an “unproductive over-accumulation” by the elderly, who hold an inactive mass of low-risk wealth, a “coming-back” of inheritance and bequest, received later and later, and young households strongly constrained in their property projects. Even if the decline in mortality at old age is not the sole reason for this process, which is also due to the slowdown of growth and to the mutations of capital, it has significantly contributed to worsening its effects. Remedying this situation is certainly not an easy task, as the analysis of possible social and fiscal reforms shows. This analysis leads us to support the Taxfinh (Tax Family Inheritance) program, which proposes, first, to increase the rate and progressivity of family inheritance taxation for the 10%-15% wealthiest families and, second, to supply, as a counterpart, additional means to avoid this legacy over-taxation, favoring gifts (to family members, to charities, or of professional assets), property consumption (real estate), or long-term productive investment of elderly savings. More than any other fiscal or social reform, this program seems to be the appropriate answer to the daunting challenges of a noxious property situation, while avoiding the present unpopularity of standard wealth transfer taxation.

12:00-13:20 Session 7B: The Econometric Analysis of Financial Time Series
Location: Salón de Grados
12:00
Predicting the Conditional Distribution of Energy Returns using Score Driven Dynamics

ABSTRACT. While the literature concerned with forecasting the conditional volatility of energy returns is large, relatively less emphasis is given to the ability of econometric models to predict the conditional distribution of energy returns. A “good” model should be able to accommodate salient features of energy data, which are: (i) energy returns are heteroskedastic; (ii) large (small) changes in conditional volatility tend to be followed by large (small) changes, i.e. volatility clustering; (iii) returns and changes in volatility are negatively correlated, i.e. the leverage effect; (iv) the empirical distribution of returns is skewed with relatively large kurtosis; and (v) return series contain several outliers. In this study, we examine if accounting for (i)-(v) through Generalized Autoregressive Score (GAS) dynamics has any bearing on our ability to generate accurate predictions of the conditional distribution of daily energy returns. Five key findings are unraveled from our empirical analysis using daily crude oil, heating oil and gasoline spot and futures prices. From a full-sample perspective, GAS models outperform their GARCH counterparts in terms of model fit, and perform well when the out-of-sample criterion focuses on the properties of the data generating process. GAS models also generate statistically significantly more accurate density predictions than their GARCH counterparts. However, these gains are relatively modest, at best 2%.
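
For reference, a GAS(1,1) model (Creal, Koopman and Lucas, 2013) updates the time-varying parameter \(f_t\) (here, for instance, the conditional volatility) with the scaled score of the assumed conditional density:

\[
f_{t+1} = \omega + A\, s_t + B\, f_t, \qquad s_t = S_t\, \frac{\partial \log p(y_t \mid f_t; \theta)}{\partial f_t},
\]

where \(S_t\) is a scaling matrix, typically a power of the Fisher information. With a heavy-tailed conditional density the score automatically downweights extreme observations, which is the mechanism through which features (iv) and (v) are accommodated.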

12:20
Quantitative Risk Management for Cryptocurrencies

ABSTRACT. Cryptocurrencies have recently gained a lot of interest from investors, central banks and governments worldwide. The lack of any form of political regulation, and a market that is far from being efficient, will require new forms of regulation in the near future. From an econometric viewpoint, the process underlying the evolution of cryptocurrencies' volatility has been found to exhibit both differences from and similarities with other financial time series, e.g. foreign exchange returns. In this paper, we analyse how quantitative risk management techniques need to be implemented when dealing with cryptocurrency time series. We focus on the estimation and backtesting of the Value-at-Risk and Expected Shortfall risk measures and report advice for quantitative risk managers and investors. Our results indicate that naive approaches generally used by practitioners, like variance estimation via Exponential Smoothing, can be extremely dangerous when dealing with cryptocurrencies.
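
As an illustration of the naive benchmark criticized above, a RiskMetrics-style exponentially smoothed variance combined with a Gaussian quantile yields the following one-day VaR; the smoothing constant 0.94 and the simulated returns are illustrative placeholders only.

```python
import numpy as np
from scipy.stats import norm

def ewma_var(returns: np.ndarray, lam: float = 0.94, alpha: float = 0.05) -> float:
    """One-day-ahead VaR from exponential smoothing:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2, VaR = -z_alpha * sigma_t."""
    sigma2 = np.var(returns[:20])            # crude initialization
    for r in returns:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return -norm.ppf(alpha) * np.sqrt(sigma2)

# Simulated heavy-tailed returns (not cryptocurrency data).
rng = np.random.default_rng(0)
returns = 0.02 * rng.standard_t(df=3, size=1000)
print(ewma_var(returns))
```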

12:40
Forecasting risk with Markov-switching GARCH models: A large-scale performance study

ABSTRACT. We perform a large-scale empirical study to compare the forecasting performance of single-regime and Markov-switching GARCH (MSGARCH) models from a risk management perspective. We find that Markov-switching GARCH models yield more accurate Value-at-Risk and left-tail distribution forecasts than their single-regime counterparts. Also, our results indicate that accounting for parameter uncertainty helps for left-tail predictions, independently of the inclusion of the Markov-switching mechanism.

13:00
Testing for VIX forecast densities: A Generalized Autocontour Approach

ABSTRACT. We apply Bootstrap Generalized Autocontour (BG-ACR) tests to the Heterogeneous Autoregressive (HAR) model and the Multiplicative Error Model (MEM) of the U.S. volatility index VIX. We find strong evidence against the parametric assumptions of the conditional densities, i.e. normality in the HAR model and the semi-nonparametric Gamma (GSNP) in the MEM. In both cases, the true conditional density seems to be more right-skewed and more peaked than either the normal or the GSNP density, with location, variance and skewness changing over time. The preferred specification is the heteroscedastic HAR model with bootstrap conditional densities of the log-VIX. The BG-ACR tests are used to check the dynamic specification of in-sample conditional densities and to evaluate out-of-sample forecast densities. The tests are based on probability integral transforms (PITs) computed from bootstrap conditional densities that incorporate parameter uncertainty. Then, the parametric specification of the conditional moments can be tested without relying on any parametric error distribution, yet exploiting distributional properties of the variable of interest. The BG-ACR tests are easy to implement and are accompanied by graphical tools that provide information about the potential sources of misspecification.
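
The building block of these tests is the probability integral transform of each observation under the (bootstrap) conditional density,

\[
u_t = \hat F_{t\mid t-1}(y_t) = \int_{-\infty}^{y_t} \hat f_{t\mid t-1}(s)\, ds ,
\]

which should be i.i.d. uniform on \([0,1]\) when the conditional density is correctly specified; the generalized autocontours are level sets of the joint density of pairs \((u_t, u_{t-k})\) used to test uniformity and independence jointly.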

12:00-13:20 Session 7C: Contributed session (Finance – Portfolio)
Location: Room 0.A.03
12:00
Market Crashes and the Capital Asset Pricing Model

ABSTRACT. It is widely believed that “all correlations go to one in a crisis”. Under the market model, unit correlations can only arise if residual risks vanish and asset returns are determined only by the return on the market portfolio. This theoretical paper investigates the changes in correlation that occur when the conditional distribution of asset returns given a market crash is used. Assuming a multivariate normal distribution for asset returns, it is shown that under this model (i) returns have a skewed distribution when there is a market crash, (ii) the betas remain unchanged, but (iii) specific changes are required to the parameters of the underlying multivariate normal distribution of returns if unit correlations are to be observed. The results are extended to a linear factor model of the Fama and French type. Under the underlying normal distribution the required parameter changes are arguably arcane. The paper therefore also considers the effect of a crisis when returns follow a multivariate Student t distribution, which is well established in finance. The results and findings are qualitatively similar in some respects, but are very different in a number of important details.
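
For orientation, the unconditional correlation structure implied by the market model (standard algebra, not specific to the paper) is

\[
R_i = \alpha_i + \beta_i R_m + \varepsilon_i, \qquad
\rho_{ij} = \frac{\beta_i \beta_j \sigma_m^2}{\sqrt{\beta_i^2 \sigma_m^2 + \sigma_{\varepsilon_i}^2}\,\sqrt{\beta_j^2 \sigma_m^2 + \sigma_{\varepsilon_j}^2}} ,
\]

so \(\rho_{ij} \to 1\) only when the residual variances \(\sigma_{\varepsilon_i}^2\) vanish; the paper asks what happens to this structure when one conditions on a crash of \(R_m\).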

12:20
Optimal stopping and Volterra type equations: application to the Brownian bridge

ABSTRACT. The problem of finding the optimal stopping time to exercise an option often arises in the framework of financial mathematics. Here we address the situation where the underlying diffusion process is a Brownian bridge and the gain function is the identity. Although this problem has been studied previously in the literature, our contribution employs an alternative approach that is not based on the explicit form of the optimal boundary. Therefore, the methodology is likely to apply to a wider class of underlying processes and gain functions, different from the particular combination studied in this paper, for which explicit solutions are not available. Importantly, we are able to obtain an integral equation representation of the boundary function that allows its numerical computation, which is a key point for making effective inference on the boundary.
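
For concreteness, the problem in its standard textbook form involves the Brownian bridge pinned at zero at time 1,

\[
dX_t = -\frac{X_t}{1-t}\, dt + dW_t, \qquad X_0 = 0, \qquad
V(t,x) = \sup_{t \le \tau \le 1} \mathbb{E}\big[X_\tau \mid X_t = x\big],
\]

whose optimal rule is of the first-hitting form \(\tau^* = \inf\{s \ge t : X_s \ge b(s)\}\); the contribution described above is a Volterra-type integral equation characterizing the boundary \(b\) without using its known explicit form.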

12:40
Risk/Return analysis on credit exposure: do small banks really apply risk-based pricing on their loans?

ABSTRACT. The financial crisis of 2007, the subsequent actions of the Regulator on the regulatory framework, the structural decrease of interest rates and the ECB's expansive monetary policy have had an impact on financial institutions, both in terms of a deep reduction of banks' profitability and a change in their business model. As a consequence, in 2016 the cost-to-income ratio of the leading European banks has been mainly driven by direct and indirect remuneration payments. These data highlight how the banking system is trying to readapt its strategic choices to the changed market conditions, supporting a very onerous framework of fixed costs but, at the same time, continuing credit underwriting. In this context, an efficient and effective credit underwriting based on a pricing adjusted to the internal credit risk policies is a pillar for the existence of banks. Despite the recent Regulator's indications and the advanced methodologies available, small banks struggle to equip themselves with complex systems for risk measurement and choose to adopt less structured methods which are anyway capable of providing an adequate risk assessment. Our study empirically analyzes a small bank portfolio that includes 16,216 loans (Retail and Corporate) underwritten between 2012 and 2016 and investigates the coherence between borrower creditworthiness and the effective price applied to the loans. The outcomes of our analysis highlight the presence of some differences and incoherence between borrower creditworthiness and the price applied, mainly due to a misalignment between strategic choices and risk management: in fact, in order to gain new clients, the bank is underpricing its loans and thus not completely remunerating the risk taken.

13:00
Probability of Default Modeling: A Machine Learning Approach

ABSTRACT. The last 12 months have been characterized by a technological evolution and a "digital" wave offering new opportunities for the evolution of operational practices and the adoption of more advanced methodological approaches in different fields of research. In a context of growing competition and falling profit margins, Machine Learning can play a vital role in both technology and business, by enabling financial institutions to maximize the value of their own data. In the field of credit risk modeling, there is an extensive literature documenting a massive use of traditional statistical techniques in Probability of Default modeling, but few studies until now have focused on the adoption of Machine Learning techniques in modeling credit risk parameters. Our study empirically investigates the results of applying different machine learning techniques throughout the overall estimation process, starting from the big data available: more than 800,000 Retail customers of a European Bank under ECB Supervision, with 10 years of historical information and more than 600 variables to be analyzed for each customer. We have divided the development process into three different stages: variable selection, model identification, and calibration & rating scale building. For each step, we have identified the best machine learning algorithm in order to reduce the running time and maximize the predictive power and the contribution of each variable to the estimation of PDs. In the second stage, we have identified the best multivariate combination of drivers by comparing the results of a set of supervised machine learning algorithms. In the last development stage, we have applied unsupervised machine learning to calibrate parameters and ranked the customers within an ordinal n-class scale obtained through the application of an unsupervised learning classification technique. Finally, we have verified the calibration goodness through classical calibration tests (e.g. binomial tests).
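
A minimal sketch of a three-stage pipeline of this kind (feature selection, supervised model comparison, unsupervised grouping into a rating scale) is given below; the estimators, thresholds and synthetic data are illustrative stand-ins, not the bank's data or the authors' exact algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))                                          # synthetic drivers
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1.5).astype(int)  # default flag

# Stage 1: variable selection via impurity-based feature importance.
selector = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(selector.feature_importances_)[-10:]

# Stage 2: model identification by cross-validated AUC over candidate learners.
candidates = {"logit": LogisticRegression(max_iter=1000),
              "gbm": GradientBoostingClassifier(random_state=0)}
scores = {name: cross_val_score(m, X[:, keep], y, cv=5, scoring="roc_auc").mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
pd_hat = candidates[best].fit(X[:, keep], y).predict_proba(X[:, keep])[:, 1]

# Stage 3: calibration / rating scale, here a simple clustering of the estimated PDs
# into an ordinal n-class scale (labels would still need ordering by mean PD).
grades = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(pd_hat.reshape(-1, 1))
print(scores, np.bincount(grades))
```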

13:20-15:00 Lunch Break

Hotel Ganivet is located at Calle de Toledo, 111 (just 150 m from the conference site).

15:00-16:00 Session 8A: New products in insurance industry
Location: Room 0.A.02
15:00
Cyber risk management: a new challenge for actuarial mathematics

ABSTRACT. The Internet evolution is one of the greatest innovations of the twentieth century and has changed the lives of individuals and business organizations. As a consequence, cyber risk has emerged as one of the top challenges faced by companies worldwide. Executives and security professionals are accepting that it is not a matter of if but a matter of when their organization will be hit by a cyber-attack. Companies have to include cyber risk in their risk management framework, depicting their risk profile, assessing their risk appetite and looking for corresponding risk transfer solutions. A specific kind of insurance that is emerging within the domain of cyber-systems is cyber-insurance. Cyber-insurance is the transfer of the financial risk associated with network and computer incidents to a third party. Insurance companies are increasingly offering such policies, in particular in the USA, but also in Europe. Cyber-security insurance is designed to mitigate losses from a variety of cyber incidents, including data breaches, data theft, business interruption and network damage. A robust cyber-security insurance market could help to reduce the number of successful cyber-attacks by promoting the adoption of preventive measures in return for more coverage and by encouraging the implementation of best practices by basing premiums on an insured's level of self-protection. Scientific interest in this topic is growing, but the literature on cyber risk and information security is mainly limited to the field of information technology. The aim of this contribution is to offer a review of the recent literature on cyber risk management in the actuarial field. Moreover, building on the most significant results in the IT domain, we outline possible synergies between the two lines of research.

15:20
“Money purchase” pensions: contract proposals and risk analysis

ABSTRACT. The growth and consolidation of the third pillar of retirement pose a significant challenge to the combination of long-term savings and long-term investments. The varied world of Personal Pensions basically consists of open funds or Private Personal Pensions (PPP), the latter being individual plans built up by insurance contracts, which people may join as a consequence of discretionary choices. Those who enter this kind of contract contribute to the accumulation of an individually accounted private pension fund to complement the pension income provided by a basic public and/or occupational pension scheme, in order to meet the need for future adequacy of income at old age. We propose a personal pension product consisting of a non-traditional profit-sharing life insurance contract where the insured is allowed to share the profit of the pension’s invested funds all along the contract duration, that is, from the issue time until the insured’s death. In its concrete realization, the idea takes the form of a sequence of premiums characterized by a level cap, followed by a sequence of benefits characterized by a level floor. The two embedded options are inserted in the basic structure of a pension annuity. Due to the negligibility of the pooling effect in this kind of portfolio, in view of its size and the current phase of market evolution, the impact of the accidental demographic risk source is investigated.

15:40
International longevity risk pooling

ABSTRACT. This paper studies the problem of an insurance company that has to decide whether to expand its portfolio of policies by selling contracts written on a foreign population. We propose a parsimonious continuous-time model for longevity risk that captures the dependence across different ages in two populations, and we evaluate the diversification gains due to the international expansion. We present a calibrated example, based on annuity portfolios of UK and Italian males aged 65-75. We show that diversification gains, evaluated as the reduction in the portfolio risk margin following the international expansion, can be non-negligible, in particular when interest rates are low. We describe how the expansion can be achieved through a swap, instead of opening a foreign affiliate.

15:00-16:00 Session 8B: Recent advances in time series analysis - ANSET session
Location: Salón de Grados
15:00
Robust time-varying undirected graphs

ABSTRACT. Undirected graphs are useful tools for the analysis of sparse and high-dimensional data sets. In this setting the sparsity helps in reducing the complexity of the model. However, sparse graphs are usually estimated under the Gaussian paradigm, thereby leading to estimates that are very sensitive to the presence of outlying observations. In this paper we deal with sparse time-varying undirected graphs, namely sparse graphs whose structure evolves over time. Our contribution is to provide a robustification of these models; in particular we propose a robust estimator which minimises the γ-divergence. We provide an algorithm for the parameter estimation and we investigate the rate of convergence of the proposed estimator. Finally we consider a simulation example in order to test the effectiveness of the method.

15:20
Periodic autoregressive models with multiple structural changes by genetic algorithms

ABSTRACT. We present a model and a computational procedure for dealing with seasonality and regime changes in time series. The seasonality is accounted for by subset PAR modelling, in which each season follows a possibly different autoregressive model. Levels, trend, autoregressive parameters and residual variances are allowed to change their values at fixed unknown times. The identification of the number and location of structural changes, as well as of the PAR lag indicators, is based on Genetic Algorithms, which are suitable because of the high dimensionality of the discrete search space. An application to the Italian industrial production index time series is also proposed.

15:40
Forecasting energy price volatilities and comovements: New evidences from fractionally integrated multivariate GARCH models

ABSTRACT. Energy price volatilities and correlations have been modelled extensively with short memory multivariate GARCH models. This paper investigates the potential benefits deriving from the use of multivariate fractionally integrated GARCH models from a forecasting and a risk management perspective. Several multivariate GARCH models for the spot returns on three major energy markets are compared. Our in-sample results show significant evidence of long memory decay in energy price return volatilities, of leverage effects and of time-varying autocorrelations. The forecasting performance of the models is assessed by means of three different approaches: the SPA test, the Model Confidence Set and the Value at Risk. The results seem to indicate that the multivariate models incorporating long memory outperform the short memory benchmarks in forecasting the one-day-ahead conditional covariance matrix and associated magnitudes, such as VaR forecasts.

15:00-16:00 Session 8C: Random sets on finance and insurance
Location: Room 0.A.03
15:00
The $d_\theta$-median as a tool to provide robust estimates of the location of interval-valued data: the influence of Brexit on IBEX 35

ABSTRACT. Interval-valued data can arise from different real-life situations, such as the representation of quality by perceptions or opinions, attributes quantifying ranges or fluctuations along a period of time, aggregation of a large dataset into one of a reduced size or the partial recording of intrinsically real-valued data to guarantee a certain level of confidentiality. A clear example of interval-valued data from the Finance field is the study of the daily fluctuation of the IBEX 35 (the benchmark stock market index of the Bolsa de Madrid, Spain's principal stock exchange): such a fluctuation can be measured through the recording of both the minimum and maximum values of the index along a period of time.

The standard statistic to estimate the location of this kind of data is the interval-valued mean (the so-called Aumann mean). Unfortunately, it lacks robustness and is highly influenced by outliers, which are not so uncommon when data come from real-life experiments. For instance, we could think of the atypical fluctuation of the IBEX 35 just after the results of the referendum which took place in the UK to decide whether to leave the European Union. The victory of Brexit supporters was unexpected and caused instability, which led to outlying interval-valued observations.

The aim of this paper is to provide robust estimators of the location of interval-valued data and, among them, focus on the concept inspired by the spatial median (the $d_\theta$-median). Apart from its application to the IBEX 35 dataset, some empirical studies will show the practical suitability of this measure.

15:20
Some connections between stochastic orders and financial derivatives, the case of the condor financial derivatives

ABSTRACT. Some relations between stochastic dominance criteria and some popular financial derivatives are established in this communication. Namely, we will see how some common stochastic orders, like the usual stochastic order, the increasing convex order, or the convex order, can be used to compare expected benefits of investments in bull call spreads, call options or long straddles, respectively. Moreover, we will analyze the case of the so-called condor financial derivatives. For that purpose, we introduce a stochastic dominance criterion. Its main characterizations and properties will be described. Conditions to order expected benefits of condor financial derivatives by means of that ordering, when prices of the underlying assets follow Brownian motions or geometric Brownian motions, will be developed. An empirical application of the proposed method with real data to compare investments in condor derivatives, whose assets are based on some relevant economic indexes, will be discussed.
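
For reference, the three classical criteria mentioned above can be characterized as follows (standard definitions, with \(X\) and \(Y\) the benefits being compared):

\[
X \le_{\mathrm{st}} Y \;\Longleftrightarrow\; \mathbb{E}[\phi(X)] \le \mathbb{E}[\phi(Y)] \ \text{for all increasing } \phi, \qquad
X \le_{\mathrm{icx}} Y \ \text{(increasing convex } \phi\text{)}, \qquad
X \le_{\mathrm{cx}} Y \ \text{(convex } \phi\text{)},
\]

whenever the expectations exist; the new criterion for condor derivatives is introduced in the talk and is not reproduced here.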

15:40
The loss function approach in multivariate risk measurement

ABSTRACT. A multiasset portfolio is composed of several assets and modelled as a random vector, each of whose components represents one of them. Assume that the profits from each of the assets or business lines cannot be used directly to offset the losses from another. This is rather natural if they are priced in different currencies and subject to currency transaction costs and/or different regulatory restrictions. As for transaction costs, Kabanov's model introduces the exchange cone as the set of all portfolios available at price zero. In the same way, together with a multiasset portfolio, it is possible to consider the set of all portfolios that can be obtained from it at price zero, which is the random set defined as the vector portfolio plus the exchange cone. In the selection approach, such a set portfolio is acceptable as long as at least one of the random vectors lying inside it is componentwise acceptable, that is, all of its components are acceptable. Observe that each of the components is given in the currency and subject to the regulatory restrictions of the corresponding asset of the original multiasset portfolio, and has been obtained after applying some allowed transactions to it. In the loss function approach, the multiasset portfolio (converted into a set-valued one or not) is transformed through a loss function, and it is acceptable as long as the expected losses at the different components compensate each other. That is, apart from having some allowed transactions among the assets, there also exist some allowed compensations for the losses.

16:00-16:30 Coffee Break
16:30-17:30 Session PS3: Longevity Trends: Modelling and Forecasting. Steve Haberman, City University of London

The fact that we are living longer in many developed (and developing) countries has a significant financial effect on individuals, governments, social security systems, pension plans and insurance and reinsurance companies. In this context, we will examine the background in terms of historical trends and consider the financial implications of longevity trends and “longevity risk”. In order to plan in advance for the impact of these changes, we require reliable models that enable the accurate forecasting of future longevity trends and the measurement of uncertainty. We will examine different approaches to the modelling of the trends in the underlying mortality rates as well as the mortality improvement rates. We will present some results from comparative studies of modelling and forecasting and offer some reflections on the current state of the science and practice.

Location: Salón de Grados
17:35-18:55 Session 9A: Contributed session (Actuarial Sciences - Non life)
Location: Salón de Grados
17:35
An experience based premium rate discounts system in crop insurance using Tweedie’s regressions

ABSTRACT. We develop an experience-based premium rate discount system in crop insurance able to cope with adverse years when high losses happen within some return period. Technically, it consists of the application of Tweedie's model and regressions embedded in a mean discount model. We also develop our system using censored regression (Tobit model) and compare the results. The proposed rating system should be able to cope with those years that destabilize the technical result due to chronic premium insufficiency. We use data taken from the Spanish "table grape" line of business to exemplify the methodology.
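
A minimal sketch of a Tweedie regression of claim cost on rating factors, using statsmodels, is shown below; the variance power 1.5, the covariates and the simulated data are placeholders, not the Spanish table-grape data or the authors' discount model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"sum_insured": rng.uniform(1e3, 1e5, n),
                   "past_loss_ratio": rng.uniform(0.0, 2.0, n)})
# Synthetic cost with a point mass at zero, mimicking compound Poisson-gamma losses.
df["cost"] = np.where(rng.uniform(size=n) < 0.6, 0.0,
                      rng.gamma(2.0, 0.05, n) * df["sum_insured"])

X = sm.add_constant(df[["sum_insured", "past_loss_ratio"]])
# Tweedie family with 1 < p < 2 (compound Poisson-gamma); the default link is log.
model = sm.GLM(df["cost"], X, family=sm.families.Tweedie(var_power=1.5))
result = model.fit()
print(result.params)     # fitted relativities on the log scale
```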

17:55
A stochastic model to evaluate the pricing distortions in the indemnity insurance methods for the MTPL insurance

ABSTRACT. The aim of this paper is to analyze empirically and to formalize, using non-life actuarial methodologies, the distortive effects produced by the Direct Reimbursement (DR) system when two vehicles belonging to two different sectors (i.e. automobile and motorbike) are involved in one accident. DR is the indemnity insurance method, used by many European and American countries to manage motor liability claims, in which the driver who suffers an accident is paid by his own insurance company, which, in some models, subsequently receives a forfeit reimbursement. A stochastic model applied to a case study evaluates the pricing impact of the new models proposed by the authors to avoid the distortion that would cause an unfair increase in the tariff for some types of vehicle (e.g. for motorcycle and motorbike policyholders) and consequently a decrease in the tariff for policyholders of the other sector.

18:15
The Contribution of Usage-based Data Analytics to benchmark Semi-autonomous Vehicle Insurance

ABSTRACT. Semi-autonomous vehicles will have a significant impact on the automobile insurance industry. Rating methods must be revised to ensure that risks are correctly measured in the new context, and it is still unknown whether this would reduce or increase the premiums. We analyze telematics information and present methods for Usage-Based Insurance to identify the effect of driving patterns on the risk of accident. These results can be used as a starting point and a benchmark for addressing risk quantification and safety for semi-autonomous vehicles. Automatic speed control devices, which allow the driver to keep the vehicle at a predetermined constant speed and can ensure that the speed limit is not violated, could be considered a first example of semi-autonomy. We show scenarios for a reduction of speed limit violations and the consequent decrease in the expected number of accident claims. If semi-autonomous vehicles completely eliminated speeding, the expected number of accident claims could be reduced to almost one third of its initial value in the average conditions of our data. We also note that an advantage of automatic speed control is that the driver does not need to look at the speedometer and just needs to concentrate on the road, which may contribute to safer driving.

18:35
The Impact of Feedback Trading on Option Prices

ABSTRACT. This paper examines whether S&P 500 index option prices are affected by feedback trading. Our testing framework is a heterogeneous agents option pricing model, where agents differ in their level of rationality. Hence, they form different beliefs about the future level of market volatility, and trade options accordingly. We introduce feedback traders, who incorporate noisy signals into their volatility beliefs. We find that feedback trading appears to be an important determinant of index option prices. In addition, we find that feedback trading partly explains the term structure of the index option smile, because it primarily affects short-term options.

We assume that traders in the options market can be classified into three different groups that have different expectations about the future evolution of index volatility: fundamentalists, who trade on the principle of mean reversion; chartists, who trade on exogenous shocks; and feedback traders, who trade on noisy signals from the mutual fund market. Typically, stochastic volatility option pricing models are implemented using Monte Carlo simulation techniques. We propose a new method to calibrate the model based on the Filtered Historical Simulation approach. We make use of the fact that we can match the average fund flows – our indicator for 'noise trader sentiment' – with the standardized residuals for all days that we use to estimate the empirical distribution. Therefore, in the calibration of the models by sampling from the empirical distribution, we obtain not only the historical standardized residuals, but pairs of the news innovations and the noise signals. The method has the implicit advantage that we pick up the exact historical correlation between returns and fund flows without the need to explicitly model them as a separate process.

In our empirical application, we calibrate all models on each Wednesday of the years 2005, 2006 and 2007, and calculate the RMSEs in-sample as well as for one week out-of-sample. We find that calibrating the model with a nonparametric innovation distribution, compared to standard Monte Carlo techniques, results in slightly lower pricing errors. When we incorporate noise trading in the pricing model, we recognize that chartists and noise traders incorporate their information into their beliefs about volatility in a similar way. They expect volatility to go down for good news (positive returns or fund flows) and to go up for bad news (negative returns or fund flows). However, once we include noise traders' trades, in-sample as well as out-of-sample pricing errors of the models are reduced. Given that we only make minor modifications to the specifications, the differences are sizeable. Results further suggest that on average 25% of traders follow a fundamentalist strategy, 19% a chartist strategy and, as a result, about 56% a noise trader strategy. Additionally, noise traders react more strongly to fund flows for shorter maturities compared to longer maturities, but long-term investors are more sensitive to differences in forecasting performance between the chartist and noise trader strategies compared to short-term traders.

Hence, we present an interesting alternative to the well-known Monte Carlo simulation techniques and investigate the benefits in an option pricing exercise. We show that noise trader risk is an important determinant of prices and that noise trading can partly explain the volatility dynamics underlying option prices.

17:35-18:55 Session 9B: Contributed session (Volatility - Risk)
Location: Room 0.A.02
17:35
Implied volatility indices: an empirical analysis based on stochastic volatility continuous-time models

ABSTRACT. This paper tries to identify the continuous-time stochastic process that best fits the time evolution of implied volatility indices. We consider a general stochastic volatility model and ten particular cases to analyze the dynamics of seven implied volatility indices in the United States stock market. The results of our empirical analysis show that, depending on the time period and the implied volatility index under study, different models fit the behavior of these volatility indices better. The main qualitative conclusion is that, for both the volatility index and its logarithm, mean reversion is the relevant issue to be considered in the different stochastic processes under analysis.
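
A generic nesting specification of the kind described, written here in a CKLS-type form that may differ from the paper's exact parametrization, is

\[
dV_t = \kappa\,(\theta - V_t)\, dt + \sigma\, V_t^{\gamma}\, dW_t ,
\]

which contains, for suitable restrictions on \((\kappa, \theta, \sigma, \gamma)\), mean-reverting Gaussian, square-root and lognormal-type processes as particular cases, applied either to the volatility index itself or to its logarithm.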

17:55
European Insurers: Interest Rate Risk Management

ABSTRACT. This paper studies the interest rate risk of some relevant European insurers during the period 2003-2015, using the Quantile Regression (QR) methodology and including the state of the economy. So, this research analyzes the sensitivity of the stock returns to changes in interest rates in the entire period, and also in two different sub-samples: pre-crisis and crisis sub-period. The results show that, in general, the European insurers’ returns have a statistically significant sensitivity to interest rates, although there are relevant differences between the different companies analyzed, the different subperiods and between quantiles. In general, the sensitivity of the European insurers to movements in the European interest rates tends to be more pronounced in extreme market conditions (with upward or downward fluctuations). This relevant result has been detected due to the use of the Quantile Regression (QR) methodology, which helps reveal the relationships between the European interest rates and the European insurer returns that could not have been detected using more traditional techniques.

18:15
Automatic detection and imputation of outliers in electricity price time series

ABSTRACT. In high frequency time series of electricity prices, one frequently observes a feature which is common to most electricity markets, namely sudden extreme prices. The present study relates to a method for automatically determining outliers in such time series. The core of our method is the construction of a reference time series through the rolling decomposition of the original time series into trend-cycle and seasonal components. Deviations of the residuals above a given threshold indicate anomalies, which are replaced with a more reasonable alternative price.
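
A minimal sketch of this idea, using a single (non-rolling, for brevity) STL decomposition and a robust threshold on the residuals, is given below; the window length, threshold and replacement rule are illustrative choices, not those of the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

def clean_prices(prices: pd.Series, period: int = 24, k: float = 5.0) -> pd.Series:
    """Flag prices whose STL residual exceeds k robust standard deviations and
    replace them with the trend + seasonal reference value."""
    res = STL(prices, period=period, robust=True).fit()
    scale = 1.4826 * np.median(np.abs(res.resid - np.median(res.resid)))  # MAD scale
    outlier = np.abs(res.resid) > k * scale
    cleaned = prices.copy()
    cleaned[outlier] = (res.trend + res.seasonal)[outlier]
    return cleaned

# Synthetic hourly price series with injected spikes (illustrative only).
idx = pd.date_range("2021-01-01", periods=24 * 60, freq="h")
base = 50 + 10 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
prices = pd.Series(base + np.random.default_rng(2).normal(0, 2, len(idx)), index=idx)
prices.iloc[[100, 500, 900]] += 300                     # sudden extreme prices
print(clean_prices(prices).iloc[[100, 500, 900]])
```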

18:35
The Limits to Volatility Predictability: Quantifying Forecast Accuracy Across Horizons

ABSTRACT. Volatility forecasting is crucial for portfolio management, risk management, and pricing of derivative securities. Still, little is known about how far ahead one can forecast volatility. First, in this paper we introduce the notions of the spot and forward predicted volatilities and propose to describe the term structure of volatility predictability by the spot and forward forecast accuracy curves. Both curves are highly relevant in practice because in financial markets there is a trade in long-term option contracts (LEAPS) with maturities up to 39 months in the future and contracts on forward volatility (FVA) with maturities up to 24 months ahead. The traders in these contracts are naturally interested in the horizon of the spot and forward volatility predictability. Then, by employing a few popular time-series volatility models, we perform a comprehensive empirical study on the horizon of volatility predictability. Specifically, using data on 23 individual stocks, 39 world stock indices, 16 bond indices, 17 exchange rates, and 8 commodities, we estimate the term structure of volatility predictability for all major financial markets. Our results suggest that, whereas the spot volatility can be predicted over horizons that extend to 35 weeks, the horizon of the forward volatility predictability is rather short and limited to approximately 7.5 weeks. Thereby, our results suggest that the horizon of volatility predictability is much shorter than the longest maturity of traded LEAPS and FVA contracts. Finally, we suggest a plausible explanation for why standard models fail to provide sensible longer-horizon volatility forecasts. We demonstrate that volatility is less persistent and does not revert to its long-run mean as the models assume. All this suggests the desirability of developing volatility models that embed the new stylized fact about volatility dynamics. Such models can potentially significantly extend the horizon of volatility predictability.

17:35-18:55 Session 9C: Higher moments in Finance and Insurance
Location: Room 0.A.03
17:35
An explicit solution of a portfolio optimization problem in a mean-variance-skewness model

ABSTRACT. Observing that most portfolio returns follow a non-symmetric distribution, we consider the problem of expected utility maximization of portfolio returns under the assumption that the underlying distribution is skew normal. We suggest a functional to be optimized which depends on the portfolio mean, variance and skewness, and derive a novel explicit closed-form solution to the optimization problem. In the special case where the skewness parameters of the portfolio are zero, this functional reduces to the classical mean-variance functional and the solution, as expected, coincides with the classical mean-variance solution. The results will be illustrated with data on NASDAQ stocks.
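The explicit closed-form solution is the paper's contribution and is not reproduced here; the following is only a generic numerical sketch of optimizing a mean-variance-skewness objective over portfolio weights, with hypothetical risk-aversion (lam) and skewness-preference (gam) parameters.

import numpy as np
from scipy.optimize import minimize

def mvs_portfolio(returns: np.ndarray, lam: float = 4.0, gam: float = 1.0) -> np.ndarray:
    """returns: T x N matrix of asset returns; maximizes mean - (lam/2)*variance + (gam/3)*skewness."""
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    centred = returns - mu

    def objective(w):
        port = centred @ w
        mean = mu @ w
        var = w @ cov @ w
        third_moment = np.mean(port ** 3)             # third central moment of the portfolio
        return -(mean - 0.5 * lam * var + gam / 3.0 * third_moment)

    n = returns.shape[1]
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # fully invested
    res = minimize(objective, np.full(n, 1.0 / n), constraints=cons)
    return res.x

Setting gam to zero recovers a standard numerical mean-variance optimization, mirroring the special case mentioned in the abstract.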

17:55
Kurtosis Maximization for Outlier Detection in GARCH Models

ABSTRACT. Outlier detection in financial time series is made difficult by serial dependence, volatility clustering and heavy tails. We address these problems by filtering financial data with projections achieving maximal kurtosis. This method, also known as kurtosis-based projection pursuit, has proved useful for outlier detection, but its use has been hampered by computational difficulties. This paper shows that in GARCH models the projections maximizing kurtosis admit a simple analytical representation, which greatly eases their computation. The method is illustrated with a simple GARCH model.
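For orientation, the sketch below shows the brute-force version of the filtering step that the paper's analytical representation is designed to avoid: numerically search for the unit-norm direction maximizing the kurtosis of the projected data, then flag projected observations far from the centre. It is a minimal illustration of kurtosis-based projection pursuit, not the authors' method.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import kurtosis

def max_kurtosis_direction(X: np.ndarray) -> np.ndarray:
    """X: T x N matrix of (standardized) returns; returns a unit-norm projection direction."""
    def neg_kurt(w):
        w = w / np.linalg.norm(w)
        return -kurtosis(X @ w, fisher=False)         # maximize kurtosis of the projection
    n = X.shape[1]
    res = minimize(neg_kurt, np.full(n, 1.0 / np.sqrt(n)))
    return res.x / np.linalg.norm(res.x)

def flag_outliers(X: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Flag observations whose max-kurtosis projection lies beyond k robust standard deviations."""
    proj = X @ max_kurtosis_direction(X)
    med = np.median(proj)
    z = (proj - med) / (1.4826 * np.median(np.abs(proj - med)))
    return np.abs(z) > k                              # boolean mask of flagged observations

The numerical search is prone to local optima and scales poorly with dimension, which is precisely the computational difficulty the analytical representation addresses.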

18:15
An Extension of Multidimensional Scaling to Several Distance Matrices, and its Application to the Italian Banking Sector

ABSTRACT. Human vision is naturally apt at detecting patterns from interpoint distances. Unfortunately, this gift is limited to Euclidean distances between points on the line, in the plane and in space. Multidimensional scaling (MDS) overcomes these difficulties by approximating the interpoint distances with Euclidean distances between numbers, pairs of numbers or triplets of numbers. A major limitation of MDS is its reliance on the chosen distance matrix: different distances might lead to different MDS results. For example, some points might resemble a cluster in one MDS graph while being very distant from each other in an MDS graph based on a different distance matrix, albeit computed from the same data. The problem would not arise with proportional distance matrices, since they give the same information about the configuration of the data points. Similarly, the problem would be eased if the distance matrices were roughly proportional to each other. We therefore propose to replace several distance matrices, each believed to satisfactorily represent the distances between units, with a single distance matrix which is as proportional as possible to all of them. Assume that the distances between units are adequately summarized by the distance matrices Δ1, ..., Δk. We shall now describe an algorithm for finding another distance matrix which is as close as possible, in the squared-norm sense, to matrices proportional to the given distance matrices. First, obtain the matrix X by lining up the vectorized distance matrices Δ1, ..., Δk side by side. Second, compute the first singular value l of X and the associated right and left singular vectors v and u. Third, assess the approximation by the ratio l / ∥X∥. Fourth, matricize the left singular vector u into a matrix Δ*, so that u = vec(Δ*). Fifth, use the matrix Δ* for the MDS analysis. We shall show that the algorithm leads to an optimal choice of the approximating matrix, and also that it is a valid distance matrix.
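A direct sketch of the five steps described above, assuming Δ1, ..., Δk are full symmetric distance matrices of the same size; the sign fix and variable names are implementation choices, not part of the abstract.

import numpy as np

def consensus_distance(deltas):
    """deltas: list of k (n x n) distance matrices. Returns (Delta_star, quality)."""
    # Step 1: line up the vectorized distance matrices side by side as columns of X.
    X = np.column_stack([D.ravel() for D in deltas])
    # Step 2: first singular value l and associated left singular vector u.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    l, u = s[0], U[:, 0]
    if u.sum() < 0:                                   # resolve the SVD sign ambiguity
        u = -u
    # Step 3: quality of the approximation as the ratio l / ||X||.
    quality = l / np.linalg.norm(X)
    # Step 4: matricize u into Delta_star, so that u = vec(Delta_star).
    n = deltas[0].shape[0]
    delta_star = u.reshape(n, n)
    # Step 5: delta_star is the single matrix to feed to the MDS analysis.
    return delta_star, quality

Since MDS configurations are unaffected by a common rescaling of the distances, the unit norm of u is harmless; Δ* can be passed to any classical MDS routine in place of a single distance matrix.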

18:35
Exploratory Projection Pursuit for Multivariate Financial Data

ABSTRACT. Projection pursuit is a multivariate statistical technique aimed at finding interesting low-dimensional data projections. It deals with three major challenges of multivariate analysis: the curse of dimensionality, the presence of irrelevant features and the limitations of visual perception. In particular, kurtosis-based projection pursuit looks for interesting data features by means of data projections with either minimal or maximal kurtosis. Its applications include independent component analysis, cluster analysis, discriminant analysis, multivariate normality testing and outlier detection. To the best of the author's knowledge, this paper constitutes the first application of kurtosis-based projection pursuit to the exploratory analysis of multivariate financial time series.

20:30-23:30 Session: Conference banquet

The conference banquet is not included in the registration fee. Tickets can be bought online or on site at the registration desk (upon request): 60.00 EUR per person.

Day and time: Thursday, April 5, 2018 | 20:30 – 23:30

Dinner will take place in “La Masía”.

With 1,722.60 hectares of natural space, the Casa de Campo is the most important public park in Madrid. Its history began with the decision by Philip II to move the Court to Madrid and reside there. The King built a manor house linking the Palace with the hunting area called El Pardo. Later, farms and fields bought from the surrounding areas were added to the Casa de Campo.

It was declared a Royal Forest under Fernando VI. Owing to the hunting and country atmosphere of the estate, as well as its proximity to the Palace, the original country house belonging to the Vargas family was enlarged to host the Royal Family for such activities. Carlos III gave the estate a new purpose by introducing livestock and agriculture, an activity later continued by Queen María Cristina.

During the Spanish Civil War, many battles were fought there, and the numerous bombings damaged its older buildings and gave rise to new military constructions, which can still be seen.

“La Masía” is located inside the Casa de Campo. For those attendees staying in other areas of Madrid, we suggest taking the underground (“Metro”) to Lago station (line 10, blue).

 

The dinner will begin at 20:30 hours. Staff will be checking dinner bookings on arrival at the dinner venue. Please wear your badges.

Special dietary requirements, including vegetarian and vegan meals, should have been indicated on your booking form. If you have made such a request, please inform your assigned waiter.

*After dinner, a shuttle service will be available to take all participants to PUERTA DE TOLEDO.