Dimension selection in distance-based generalized linear models with application to pricing
ABSTRACT. The problem of dimension selection in distance-based generalized linear models (DB-GLM) is studied. In a DB-GLM there is a vector of continuous responses and a distance matrix between the n observed individuals that plays the role of predictor information in the model. A (Euclidean) predictor space is obtained by multidimensional scaling (MDS) from the distance matrix. The dimension of the resulting configuration can be as large as n-1. In some cases an over-parametrization occurs and the model does not fit the data well. The main problem is to determine the number of dimensions, or of latent predictor variables, that improves the model fit. The problem is similar to that of predictor selection in classical GLM. The optimal dimension is called the effective rank. To choose the effective rank of a DB-GLM three possible solutions are presented: the generalized cross-validation (GCV) criterion, and the Akaike and Bayesian information criteria (AIC and BIC). These dimension selection tools are implemented in function dbglm of the dbstats package for R.
The motivations of this study are: first, the application of the rank selection tools of DB-GLM to the actuarial pricing problem of calculating the pure premium, and second, the comparison of the results with those obtained using classical GLM. Real data sets of claim frequency and claim severity are analyzed to illustrate the concepts, providing empirical evidence of lower deviances when using DB-GLM with the effective rank instead of classical GLM with the same dimension and information.
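As an illustration of the dimension-selection idea (a sketch in Python, not the dbstats implementation itself), the snippet below builds a classical-MDS configuration from a distance matrix and scans candidate ranks with AIC and a GCV-type score; the simulated data, the Poisson family and the particular GCV formula are illustrative assumptions.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy data: n individuals, a Euclidean distance matrix D and a count response y (e.g. claim frequency).
n = 200
Z = rng.normal(size=(n, 10))
D = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
y = rng.poisson(np.exp(0.3 * Z[:, 0] - 0.2 * Z[:, 1] + 1.0))

# Classical (metric) MDS: double-centre the squared distances and take the spectral decomposition.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]
pos = w > 1e-8
X_full = V[:, pos] * np.sqrt(w[pos])          # latent Euclidean coordinates, up to n-1 columns

# Scan the number of latent predictors k and record AIC and a GCV-type score.
results = []
for k in range(1, X_full.shape[1] + 1):
    X = sm.add_constant(X_full[:, :k])
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    gcv = fit.deviance / (n * (1 - X.shape[1] / n) ** 2)   # one common GCV form (an assumption)
    results.append((k, fit.aic, gcv))

best_aic = min(results, key=lambda r: r[1])
best_gcv = min(results, key=lambda r: r[2])
print("effective rank by AIC:", best_aic[0], " by GCV:", best_gcv[0])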
A cluster distance-based procedure for dimension reduction and prediction
ABSTRACT. A two-step procedure of cluster distance-based regression is proposed. In the first step, the dimension reduction step, the number of elements to be represented using dissimilarities is reduced. Given a dissimilarity matrix obtained from the original data set, a combination of a k-means procedure for dissimilarities and multidimensional scaling (MDS) is employed to determine a classification of the observed elements and a reduced latent predictor space. The aim of this cluster-MDS procedure is to classify the objects into clusters while simultaneously representing the cluster centres in a low-dimensional space. In the second step, the prediction step, the reduced clustered space is the latent predictor in a distance-based regression, onto which the weighted average vector of the continuous response variable within each cluster is projected. Distance-based linear models or distance-based generalized linear models can be fitted using functions dblm and dbglm of the dbstats package for R. The performance of the proposed cluster distance-based methodology is illustrated with a real data set from the automobile insurance market. The random variable mean claim amount of a claim, claim severity, is analyzed. In particular, claim severity is used in a priori ratemaking, where clusters constitute the different risk groups for the final tariff, and thus in this context the dimension reduction is of interest. To standardize the data, a subset related to own-damage coverage for passenger cars with category equal to sedan type is selected. The period covers 21 months, from 01/06/2010 to 29/02/2012. There are a total of 1439 cases for the study. First, the clusters and the MDS configuration are found, obtaining a total of 21 risk groups for the tariff in a low-dimensional space of 7 dimensions. Finally, a DB-GLM is fitted to the clustered data, assuming a Gamma distribution and the logarithmic link.
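A minimal sketch of the first (dimension-reduction) step, under illustrative assumptions: a classical MDS configuration is computed from a dissimilarity matrix and the observations are grouped with k-means, so that the cluster centres in the low-dimensional space can then feed a distance-based regression such as the DB-GLM sketch above.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Toy dissimilarity matrix among n policies (here Euclidean distances on simulated tariff factors).
n = 500
Z = rng.normal(size=(n, 6))
D = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

# Classical MDS configuration in a low-dimensional space (7 dimensions, as in the application).
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1][:7]
X = V[:, order] * np.sqrt(np.clip(w[order], 0, None))

# k-means on the MDS coordinates: 21 clusters play the role of risk groups for the tariff.
km = KMeans(n_clusters=21, n_init=10, random_state=1).fit(X)
centres = km.cluster_centers_          # reduced latent predictor space
labels = km.labels_                    # risk-group assignment of each policy
print(centres.shape, np.bincount(labels)[:5])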
Logistic classification for new policyholders taking into account prediction error
ABSTRACT. An expression of the mean squared error (MSE) of prediction for new observations when using logistic regression is presented. First, the MSE is approximated by the sum of the process variance and the estimation variance. The estimation variance can be estimated by applying the delta method and/or by using bootstrap methodology. When using the bootstrap, e.g. the residual bootstrap, it is possible to obtain an estimate of the distribution of each predicted value. Confidence intervals can then be calculated from the bootstrapped distributions of the new predicted values to help assess their randomness.
The general formulas of the prediction error (the square root of the MSE of prediction), PE, for the power family of error distributions and the power family of link functions in generalized linear models were obtained in previous works. Here, the expression of the MSE of prediction for the generalized linear model with Binomial error distribution and logit link function, i.e. logistic regression, is obtained.
Its calculation and usefulness are illustrated on the problem of credit scoring, where policyholders are classified into defaulters and non-defaulters. The aim is the classification of new policyholders taking into account the information given by PE. Two sets of real credit risk data are analyzed and the probabilities of default are estimated. Other measures, such as error rates based on counting mispredictions, sensitivity, specificity, ROC curves and the Brier score, are calculated for comparison with the proposed MSE measures.
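The following Python sketch illustrates, under simplifying assumptions, how the two MSE components can be approximated for a new observation: it uses a pairs bootstrap (a simple stand-in for the residual bootstrap discussed in the abstract) on simulated credit data, and adds the Bernoulli process variance to the bootstrap estimation variance.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Toy credit-scoring data: two covariates and a binary default indicator.
n = 400
X = sm.add_constant(rng.normal(size=(n, 2)))
p_true = 1 / (1 + np.exp(-(-1.0 + 0.8 * X[:, 1] - 0.5 * X[:, 2])))
y = rng.binomial(1, p_true)

fit = sm.Logit(y, X).fit(disp=0)
x_new = np.array([[1.0, 0.5, -0.3]])         # a new policyholder (hypothetical covariates)
p_hat = fit.predict(x_new)[0]

# Pairs bootstrap: refit on resampled data to approximate the estimation variance of the prediction.
B = 500
p_boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    p_boot[b] = sm.Logit(y[idx], X[idx]).fit(disp=0).predict(x_new)[0]

est_var = p_boot.var(ddof=1)                 # estimation variance
proc_var = p_hat * (1 - p_hat)               # Bernoulli process variance
mse = proc_var + est_var                     # MSE of prediction, as approximated in the abstract
ci = np.percentile(p_boot, [2.5, 97.5])      # bootstrap interval for the predicted probability
print(f"p_hat={p_hat:.3f}  PE={np.sqrt(mse):.3f}  95% CI=({ci[0]:.3f}, {ci[1]:.3f})")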
Modelling and Forecasting Suicide: A Factor-Analytic Approach
ABSTRACT. We consider a model for suicide rates that can incorporate both observable factors, such as economic ones, and possibly unobservable factors, such as psychosomatic ones. We take into account measurement errors typically found in suicide rates through the allowance for these unobservable components, which makes cross-country comparisons valid. Using age-standardized annual data for the period 1960-2011 for 18 OECD countries, we show that suicide rates contain deterministic trends and can display persistent behavior. We adjust the model accordingly and forecast male and female suicide rates employing common factors estimated from the panel. Our findings indicate that the model has superior forecasting performance relative to the ones used in the literature. Our forecasts also show that there is a generally declining trend in suicide rates, but for certain countries this trending behavior may be upward, which highlights the importance of fine-tuning preventive interventions. Last but not least, the forecasts provide information as to how insurance and pension policies based on suicide mortality should be adapted.
The effect of Marital Status on Life Expectancy: Is Cohabitation as Protective as Marriage?
ABSTRACT. This paper examines the effect of cohabitation on life expectancy and assesses whether it is as protective as the married state. Marital status has significant implications for an individual's health and mortality, and the advantages in life expectancy enjoyed by married individuals compared to singles have been well documented in the literature. With evidence of considerable changes in partnership status, living arrangements and increasing divorce rates, it becomes increasingly important to understand the effects of these trends on changes in health in later life and mortality. This paper contributes to the literature in four ways. Firstly, among the singles (divorced, widowed, and never married) we also account for cohabitation status in the life expectancy calculations. Secondly, we assess the marital state at each age and point in time for an entire population, thus allowing individuals to move between states over time, and forecast marital-specific future life expectancies. Thirdly, we show how health status evolves for each of the marital categories on a monthly basis in the years prior to death. Finally, we discuss the basis risk associated with the marital-specific deviations from the general mean mortality.
Brownian Semistationary models and Fractional Brownian Motions
ABSTRACT. In this talk a class of continuous-time stationary processes termed Brownian semistationary (BSS for short) processes is presented. We discuss their applications in financial econometrics as well as their connection with fractional Brownian motions. More precisely, we find a pathwise decomposition of a certain class of BSS processes in terms of a fractional Brownian motion and a process with absolutely continuous paths. We briefly discuss how this result can be interpreted as a pathwise decoupling of the short- and long-term behaviour of stochastic volatility models based on BSS processes.
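For reference, a BSS process without drift term is commonly written as (a standard formulation, not necessarily the exact class treated in the talk):

$Y_t = \mu + \int_{-\infty}^{t} g(t-s)\,\sigma_s\, dW_s,$

where $g$ is a deterministic kernel, $\sigma$ is a stationary volatility (intermittency) process and $W$ is Brownian motion; the behaviour of $g$ near zero governs the fine-scale, fBm-like properties of $Y$.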
Estimation and prediction for the modulated power law process
ABSTRACT. The modulated power law process was proposed by Lakey and Rigdon in 1992 as a compromise between the non-homogeneous Poisson process and the renewal process models. It is useful in analyzing duration dependence in economic and financial cycles. In this paper we consider the problem of estimation and prediction for the modulated power law process. Using the estimating functions approach, we propose new estimators of the parameters of the modulated power law process. We apply the proposed estimators to construct predictors of the next event time. We also present algorithms for efficiently calculating the values of the proposed estimators and predictors. In a simulation study we compare the accuracy of the proposed estimators with that of the maximum likelihood estimators and examine the precision of the presented predictors. We apply the results to the analysis of a real data set of U.S. stock market cycles.
An empirical analysis of the lead lag relationship between the CDS and stock market: Evidence in Europe and US
ABSTRACT. This paper complements the recent literature by providing a thorough study of the lead-lag relationship between stock and CDS markets in terms of returns and volatilities, distinguishing between sovereign and financial sectors. We use data for 14 European countries and the US over the period 2004-2016, and a rolling VAR framework is estimated. This methodology enables us to analyse the evolution of the transmission process over time, covering both crisis and non-crisis periods. We find that a transmission channel between the credit and stock markets exists. We confirm that stock market returns anticipate CDS market returns, and CDS market returns anticipate stock market return volatilities, closing a relationship circle between markets. This phenomenon is time-varying; it seems to be related to the economic cycle and, in general, it is more intense in the US than in Europe.
Pricing of agricultural derivatives: an approach based on models with mean reversion and seasonality
ABSTRACT. This paper analyzes the in-sample empirical behavior of the model proposed in Moreno, Novales, and Platania (2017) using prices of futures on three agricultural assets: wheat, corn and soybean. This model assumes that these prices show mean reversion and seasonality. In short, it is assumed that the prices converge to a long-term value that experiences different periodic and smooth fluctuations over long time periods. This assumption is modeled by using a Fourier series. The performance of this model is compared with that of the models proposed in Schwartz (1997) and Lucía and Schwartz (2002). The main conclusion is that the long-term fluctuations are very relevant drivers of the prices of these agricultural assets and that the Moreno, Novales, and Platania (2017) model outperforms both benchmarks.
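A generic formulation of this kind of dynamics (an illustrative sketch, not necessarily the exact specification of Moreno, Novales, and Platania (2017)) is a mean-reverting process whose long-term level fluctuates through a truncated Fourier series:

$dX_t = \kappa\,(\theta(t) - X_t)\,dt + \sigma\, dW_t, \qquad \theta(t) = \alpha_0 + \sum_{k=1}^{K}\big[\alpha_k \cos(2\pi k t) + \beta_k \sin(2\pi k t)\big],$

so that prices revert to a long-term value that experiences periodic and smooth fluctuations over time.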
Numerical solution of the regularized portfolio selection problem
ABSTRACT. We investigate the use of the Bregman iteration method for the solution of the portfolio selection problem, both in the single-period and in the multi-period case. Our starting point is the classical Markowitz mean-variance model, properly extended to deal with the multi-period case. The constrained optimization problem at the core of the model is ill-conditioned. We consider l1-regularization techniques to stabilize the solution process, since this also has relevant financial interpretations.
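The following Python sketch illustrates the kind of l1-regularized mean-variance problem being solved; it uses a plain proximal-gradient (soft-thresholding) iteration rather than the Bregman scheme studied in the paper, all data and parameters are illustrative, and budget/no-short constraints are omitted for simplicity.

import numpy as np

rng = np.random.default_rng(3)

# Toy single-period problem: minimize 0.5*w'Sw - tau*mu'w + lam*||w||_1.
p = 30
R = rng.normal(size=(250, p)) * 0.01
mu, S = R.mean(0), np.cov(R, rowvar=False)
tau, lam = 0.5, 1e-4

def soft(x, t):
    # Soft-thresholding operator, the proximal map of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

L = np.linalg.eigvalsh(S).max()        # Lipschitz constant of the smooth part
w = np.zeros(p)
for _ in range(2000):                   # ISTA iterations
    grad = S @ w - tau * mu
    w = soft(w - grad / L, lam / L)

print("active positions:", int((np.abs(w) > 1e-8).sum()), "out of", p)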
The influence of dynamic risk aversion in the optimal portfolio context
ABSTRACT. Despite the influence of risk aversion in the optimal portfolio context, there are not many studies which have explicitly estimated the risk aversion parameter. Instead, researchers almost always choose arbitrary fixed values to reflect common levels of risk aversion. However, this could generate optimal portfolios which do not reflect the investor's actual attitude towards risk. Moreover, as is well known, an individual is more or less risk averse according to the economic and political circumstances. Given the above, we model the risk aversion attitude so that it changes over time, in order to take into account the variability in agents' expectations. Therefore, the aim of this paper is to shed light on the choice of the risk aversion parameter that correctly represents investors' behaviour. For that purpose, we build optimal portfolios for different types of investment profiles in order to compare whether it is better to use a constant risk aversion parameter or a dynamic one. In particular, our proposal is based on estimating the time-varying risk aversion parameter as a derivation of the market risk premium. To this end, we implement several univariate and multivariate statistical models. Specifically, we use conditional variance and correlation models, such as GARCH(1,1), GARCH-M(1,1) and DCC-GARCH.
Approximate EM algorithm for sparse estimation of multivariate location-scale mixture of normals
ABSTRACT. Parameter estimation of distributions with intractable density, such as the Elliptical Stable, often involves high-dimensional integrals requiring numerical integration or approximation. This paper introduces a novel Expectation-Maximisation algorithm for fitting such models that exploits fast Fourier integration for computing the expectation step. As a further contribution, we show that by slightly modifying the objective function, the proposed algorithm also handles sparse estimation of non-Gaussian models. The method is subsequently applied to the problem of asset selection within a sparse non-Gaussian portfolio optimisation framework.
ABSTRACT. The paper proposes a general basic pension system backed by a mixed financing model. A “basic social pension” (BSP) bringing together, in a single scheme, all the different aids given by different administration bodies would do away with the inconsistencies and shortfalls observed in many current schemes, which result in disparities in the degree of protection received by different segments of the population. Such minimum basic social coverage would need to be backed up by a financing structure capable of guaranteeing its viability and sustainability over time in financial and social terms. It needs to reach most of the population and cover their basic necessities. Setting up a level of social protection sufficient to cover basic necessities would of course entail increasing the level of social assistance provided by the social security system, and this in turn would mean redefining the amounts payable through contributions. This redefinition is of vital importance because it has implications for the sources of financing: public funding from taxation to cover the social assistance part and contributions from employers and workers to fund the contributory part.
Helping Long Term Care coverage via differential on mortality?
ABSTRACT. This paper seeks to help draw up a flexible design for pensions for dependents that can help reduce the costs of their situation while at the same time increasing the amounts that they receive. The proposed way is a system for the automatic adjustment of pension benefits that takes into account the dependency level of the beneficiary. Thus, pension benefits increase in the new state as the cost of care increases. To that end we propose a model with a benefit correction factor that includes a specific mortality rate for dependents, thus enabling us to adapt benefits to the profile of each beneficiary. Special attention is paid to mortality rates among dependents as the determinant of the correction factor. This new model has many practical implications, as it can be implemented without much difficulty and indeed at no additional cost. This enables coverage to be universal in private capitalization-type pension plans. However, it does increase the cost of social security systems funded on a pay-as-you-go basis.
A minimum pension for older people via expenses rate
ABSTRACT. By 2050, 21.8% of the world population will be over 60 years old, 16% over 65 and 4.4% over 80, due partly to the reduction in fertility and mortality rates. This fact will cause problems in the costs of public and private long-term care coverage systems.
As the older population increases, expenses in health and geriatric services increase as well. These services involve costly technologies and higher sectoral inflation, which is a further source of problems for the cost of public and private long-term care coverage systems. To face them, pensioners rely mainly on their pensions and savings. For this reason, this paper develops a methodology to determine minimum pensions under the assumption of covering vital consumption expenses, and adapts it to the situation of severe or great dependency.
A review of the literature is carried out, and the methodology used is presented together with the results of its application to Spain.
Disability Pensions in Spain: A Factor to Compensate Lifetime Losses.
ABSTRACT. Among the different instruments used by the welfare state to protect vulnerable population groups are disability pensions. These pensions appear as key elements to protect disabled people in the absence of, or during breaks in, their labour careers. The Spanish pay-as-you-go social security system is a good example of disability pension provision. In Spain, a safety net combines means-tested and non-means-tested elements to guarantee a certain level of income to individuals with a given degree of disability. In this paper attention is focused on the second type of component, that is to say, the pensions granted to people who, having contributed for a certain period of time, later in life become entitled to a disability pension. The pension in this latter case is computed according to the labour profile of the individual and, consequently, linked to his or her past contributions to the social security system. However, the method used to compute the main component of the pension leaves the beneficiaries of a disability pension in a disadvantageous situation compared to the beneficiaries of a regular pension. This paper discusses this loss and defines a factor to compensate it.
Equation-by-Equation Estimation of Multivariate Periodic Electricity Price Volatility
ABSTRACT. Electricity prices are characterised by strong autoregressive persistence, periodicity (e.g. intraday, day-of-the-week and month-of-the-year effects), large spikes or jumps, GARCH effects and - as evidenced by recent findings - periodic volatility. We propose a multivariate model of volatility that decomposes volatility multiplicatively into a non-stationary (e.g. periodic) part and a stationary part with log-GARCH dynamics. Since the model belongs to the log-GARCH class, it is robust to spikes or jumps, allows for a rich variety of volatility dynamics without restrictive positivity constraints, can be estimated equation-by-equation by means of standard methods even in the presence of feedback, and allows for Dynamic Conditional Correlations (DCCs) that can - optionally - be estimated subsequent to the volatilities. We use the model to study the hourly day-ahead system prices at Nord Pool, and find extensive evidence of periodic volatility and volatility feedback. We also find that volatility is characterised by (positive) leverage in half of the hours, and that a DCC model provides a better fit of the conditional correlations than a Constant Conditional Correlation (CCC) model.
Estimation risk for the VaR of portfolios driven by semi-parametric multivariate models
ABSTRACT. Joint estimation of market and estimation risks in portfolios is investigated,
when the individual returns follow a semi-parametric multivariate
dynamic model and the asset composition %of the portfolio
is time-varying.
Under ellipticity of the conditional distribution, asymptotic theory for the estimation of the conditional Value-at-Risk (VaR) is developed.
An alternative method - the Filtered Historical Simulation - which does not rely on ellipticity, is also studied.
Asymptotic confidence intervals for the conditional VaR, which allow to simultaneously quantify the market and estimation risks, are derived.
The particular case of minimum variance portfolios is analyzed in more detail.
Potential usefulness, feasibility and drawbacks of the two approaches are illustrated via Monte-Carlo experiments and an empirical
study based on stock returns.
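A compact Python sketch of the Filtered Historical Simulation step mentioned above, under simplifying assumptions: the GARCH(1,1) parameters are taken as given (in practice they would be estimated), portfolio returns are simulated, and the one-day VaR is read off the bootstrapped filtered residuals.

import numpy as np

rng = np.random.default_rng(4)

# Simulated portfolio returns with assumed (pre-estimated) GARCH(1,1) parameters.
T = 1500
omega, alpha, beta = 1e-6, 0.08, 0.90
r, h = np.empty(T), np.empty(T)
h[0] = omega / (1 - alpha - beta)
r[0] = np.sqrt(h[0]) * rng.standard_t(7) / np.sqrt(7 / 5)
for t in range(1, T):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_t(7) / np.sqrt(7 / 5)

# Filtered Historical Simulation: standardize by the fitted volatility, resample the devolatilized
# residuals, and rescale by the one-step-ahead volatility forecast.
z = r / np.sqrt(h)
h_next = omega + alpha * r[-1] ** 2 + beta * h[-1]
sims = np.sqrt(h_next) * rng.choice(z, size=100_000, replace=True)
VaR_99 = -np.quantile(sims, 0.01)
print(f"1-day 99% FHS VaR: {VaR_99:.4%}")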
Asymptotics of Cholesky GARCH Models and Time-Varying Conditional Betas
ABSTRACT. This paper proposes a new model with time-varying slope coefficients. Our model, called CHAR, is a Cholesky-GARCH model, based on the Cholesky decomposition of the conditional variance matrix introduced by Pourahmadi (1999) in the context of longitudinal data. We derive stationarity and invertibility conditions and prove consistency and asymptotic normality of the full and equation-by-equation QML estimators of this model. We then show that this class of models is useful to estimate conditional betas and compare it to the approach proposed by Engle (2016). Finally, we use real data in a portfolio and risk management exercise. We find that the CHAR model outperforms a model with constant betas as well as the dynamic conditional beta model of Engle (2016).
Modelling Time-Varying Volatility Interactions with an Application to Volatility Contagion
ABSTRACT. In this paper, we propose an additive time-varying (or partially time-varying) structure where a time-dependent component is added to the extended vector GARCH process for modelling the dynamics of volatility interactions. In this setting, co-dependence in volatility is allowed to change smoothly between two extreme volatility regimes, and contagion is identified from these crisis-contingent structural changes. Furthermore, volatility contagion is investigated by testing for significant increases in cross-market volatility spillovers. For that purpose, a Lagrange multiplier test of volatility contagion is presented for testing the null hypothesis of constant co-dependence against a smoothly time-varying interdependence. Finite sample properties of the proposed test statistic are investigated by Monte Carlo experiments. The new model is illustrated in practice using daily returns on 10-year government bond yields for Greece, Ireland, and Portugal.
Financial Networks and Mechanisms of Business Capture in Southern Italy over the First Global Wave (1812-1913). A Network Approach
ABSTRACT. Networks – at different levels (transport, communication, finance, etc.) – have been recognized as a key driving force in the rise and expansion of global waves over time. During the 19th century the rise of a cohesive and politically organized global financial elite and its ability to build, and capitalize on, relationships were crucial in allowing capital to move from core financial centers towards peripheral countries, this way exploiting large investment opportunities (trade, sovereign bonds, technology transfers, infrastructure development, etc.). Both before and after Italian political unification (1861) the peripheral area of Southern Italy was a crossroads of international capital flows. It became the hub of a complex network of supranational relations, being embedded in a wider process of integration within the ‘space of flows’ of the developing global capitalism. In this context local actors played a role in fostering integration dynamics, as vehicles through which international capital flows found their way to and rooted in Southern Italy. The work aims to analyse the chains of relations that international actors set up around the business opportunities progressively offered by this peripheral area. It is based on a unique and original database, storing data on the whole set of enterprises and companies operating in Naples between 1812 and 1913 (www.ifesmez@unina.it). Data, directly collected from original archival sources, are organized so as to show the relations existing among actors, among firms, and among firms and actors. The work aims to explore how the networks of actors and of business and financial entities reorganized over Italian political unification, in order to uncover how and in what hands power was vested, and consequently to understand some of the workings of international financial elite power in catching a peripheral area into a global wave. Within this paper, we will focus on the time-varying two-mode networks given by the relations among agents and institutions, and on the derived networks among actors and among institutions. In order to analyze these complex networks we propose to use Multiple Factor Analysis designed for time-varying two-mode networks. This method allows us to take into account at the same time the multiple affiliations among actors and institutions along with the actors' and institutions' attributes, i.e. to explain the relational pattern according to the different roles of the actors in the institutions (CEO, director, shareholder, partner, etc.) varying over time. We will explore both the micro level, looking at the trajectories over time of the most central actors and institutions, and the macro level, exploring the variability over time of the whole network structure. Particular attention will be paid to structural changes in the global relational structures, denoting a change in the distribution of economic power.
ABSTRACT. The analysis of news in the financial context has gained prominent interest in recent years. This is because of the possible predictive power of such content, especially in terms of the associated sentiment/mood.
In this paper we focus on a specific aspect of financial news analysis: how the covered topics change according to the space and time dimensions. To this purpose, we employ a modified version of the LDA topic model, the so-called Structural Topic Model (STM), which also takes covariates into account. Our aim is to study the possible evolution of topics extracted from two well-known news archives - Reuters and Bloomberg - and to investigate a causal effect in the diffusion of the news by means of a Granger causality test.
Our results show that both the temporal dynamics and the spatial differentiation matter in the news contagion.
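A minimal Python sketch of the Granger-causality step, with hypothetical daily sentiment/topic series standing in for the STM output of the two archives (the series names and the lagged-dependence structure below are invented for illustration):

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(5)

# Hypothetical daily topic/sentiment scores for the two archives.
T = 600
reuters = rng.normal(size=T)
bloomberg = 0.4 * np.roll(reuters, 1) + rng.normal(scale=0.9, size=T)   # lagged dependence

# Does the Reuters series Granger-cause the Bloomberg series?
data = np.column_stack([bloomberg, reuters])   # second column tested as the cause of the first
res = grangercausalitytests(data, maxlag=3)
pvals = {lag: out[0]["ssr_ftest"][1] for lag, out in res.items()}
print(pvals)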
Community detection based association rules in bipartite networks
ABSTRACT. The interest of enterprises in the extraction of knowledge patterns from big masses of data has increased consistently in recent years. An even more interesting problem is transforming such knowledge into something actionable, meaning that business decisions are self-contained in it. This represents the so-called “prescriptive analytics”.
In this work we start from a real world business problem and describe the way we model it from a statistical and network analysis point of view, the algorithms we use and the obtained results.
The approach we propose is to use raw transaction data to develop rules to extract recommendations. The recommendations are ego-centric and based on the idea of studying the similarity between agents in terms of purchasing behavior. Profiling each agent with the items that characterize the purchases of tied customers then produces the recommendations.
We model the raw transaction table as a bipartite graph and, by means of algebraic projection operators and statistical similarity measures, we propose to measure the distance between agents and to group them into homogeneous clusters. Then, recommendations are produced and ranked in order to extract, for each agent, the best "p" items to recommend.
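A minimal numpy sketch of the projection-and-recommendation idea, with invented transaction data: the agent-item matrix is projected onto an agent-agent similarity graph (cosine similarity is one simple choice), and each agent is recommended the top-p items purchased by similar agents but not yet by the agent itself.

import numpy as np

rng = np.random.default_rng(6)

# Toy binary transaction matrix: rows are agents, columns are items.
n_agents, n_items, p = 100, 40, 3
B = (rng.random((n_agents, n_items)) < 0.15).astype(float)

# One-mode projection with cosine similarity between agents.
norms = np.linalg.norm(B, axis=1, keepdims=True) + 1e-12
S = (B / norms) @ (B / norms).T
np.fill_diagonal(S, 0.0)

# Score items by the purchases of similar agents, mask items already bought, take the top p.
scores = S @ B
scores[B > 0] = -np.inf
recommendations = np.argsort(-scores, axis=1)[:, :p]
print("items recommended to agent 0:", recommendations[0])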
Forecasting the Equity Risk Premium in the European Monetary Union
ABSTRACT. In this paper, we investigate the capacity of multiple economic variables and technical indicators to forecast the equity risk premium. Similarly to Neely et al. (2014), we evaluate several variables and econometric models, but for a different data set. In particular, we work with European Monetary Union data, and for a period that spans from the EMU foundation at the beginning of 2000 to the end of 2015.
The paper first analyzes monthly in-sample forecasting ability of 24 economic and technical variables. To analyze the predictive power of each variable, we start with a traditional simple linear regression, where the equity risk premium in a period is regressed on a constant and the lag of a macroeconomic variable or technical indicator. Later, we also estimate multivariate predictive regressions, but instead of using all the predictor variables, we use principal components analysis to summarize all relevant information of our predictors.
Once the in-sample analysis is finalized, we continue with an out-of-sample forecasting exercise. To check the out-of-sample accuracy, predictive regressions are compared to a popular benchmark in the literature: the historical mean average. Forecast performance is analyzed in terms of the Campbell and Thompson out-of-sample R-squared (R2_OS), which compares the MSFE of regressions constructed with the selected predictors against the MSFE of the benchmark. To conclude, we measure whether there is any economic value in the out-of-sample predictions for a risk-averse investor with a quadratic utility function.
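For reference, the Campbell and Thompson statistic compares the forecast with the historical-mean benchmark:

$R^2_{OS} = 1 - \frac{\sum_{t}(r_t - \hat r_t)^2}{\sum_{t}(r_t - \bar r_t)^2},$

where $\hat r_t$ is the predictive-regression forecast and $\bar r_t$ the historical mean forecast of the equity risk premium; $R^2_{OS} > 0$ means the predictor beats the benchmark in MSFE terms.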
In-sample, we find similar results to Neely et al. (2014). Technical indicators display predictive power, matching or exceeding that of traditional economic forecasting variables. Nevertheless, the out-of-sample exercises do not confirm the in-sample results. Economic predictors show stronger out-of-sample forecasting ability than technical indicators. Overall, only a few economic and technical predictors display forecasting power both in-sample and out-of-sample, and provide economic value for a risk-averse investor.
Measuring financial risk co-movement in commodity markets
ABSTRACT. The dynamics of commodity prices is of undoubted importance because it has potentially large welfare and policy implications. These implications are of special interest in the case of commodity price synchronization, because this fact might cast doubts on the competitiveness and efficiency of commodity markets. We analyze the co-movement of a number of commodity indices in extreme world financial episodes, where such synchronization appears even for assets that usually show low correlation. The main aim of this study is to provide downside risk co-movement maps of selected commodity markets during seven recent distress periods. For the construction of such maps, we follow an expected shortfall-multidimensional scaling approach, which greatly facilitates interpretation and allows for an easy classification of commodity markets according to their dynamics in risky episodes. The results suggest that the less risky markets tend to have the highest degree of risk co-movement, whichever the distress period, and that the risk dynamics of the Energy market behaves as an outlier; the Energy index overreacts to the different distress shocks, thus becoming the riskiest market. These results are of evident interest for diversification purposes in portfolio management.
Hybrid tree-finite difference methods for stochastic volatility models
ABSTRACT. In this paper we consider the Heston-Hull-White model, which is a joint evolution for the equity value with a Heston-like stochastic volatility and a generalized Hull-White stochastic interest rate model that is consistent with the term structure of interest rates. We also consider the situation where the dividend rate is stochastic, a case which is called here the "Heston-Hull-White2d model" and can be of interest in the multi-currency setting (the dividend rate being interpreted as a further interest rate). We address the problem of numerically pricing European and American options written under these models.
In this paper, we generalize the hybrid tree/finite-difference approach that was introduced for the Heston model in \cite{bcz}. In practice, this means writing down an algorithm to price European and American options by means of a backward induction that follows a finite-difference PDE method in the direction of the share process and a recombining binomial tree method in the direction of the other random sources (volatility, interest rate and possibly dividend rate).
So, the idea underlying the approach developed in this paper is in some sense very simple: we apply the most efficient and easy-to-implement method whenever we can. In fact, wherever an efficient recombining binomial tree scheme can be set up (volatility, interest rate and possibly dividend rate), we use it. And where it cannot (share process), we use a standard (and efficient, being in dimension 1) numerical PDE approach. Hence we avoid working with expensive (because non-recombining and/or binomial) trees or with PDEs in high dimension. Moreover, for the Cox-Ingersoll-Ross (hereafter CIR) volatility component, we apply the recombining binomial tree method first introduced in \cite{bib:acz}, which converges theoretically and works efficiently in practice even when the Feller condition fails.
The description of the approximating processes coming from our hybrid tree/finite-difference approach suggests a simple way to simulate paths from the Heston-Hull-White models. Therefore, we also propose a new Monte Carlo algorithm for pricing options, which seems to be a real alternative to the Monte Carlo method that makes use of the efficient simulations provided by Alfonsi \cite{al}.
Our approaches allow one to price options in the original Heston-Hull-White process. Moreover, in the Heston-Hull-White2d model, we allow the dividend rate to be stochastic and correlated to the equity process.
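One common way of writing the Heston-Hull-White dynamics discussed above (an illustrative formulation; notation and correlation structure may differ from the paper) is:

$\frac{dS_t}{S_t} = (r_t - \delta_t)\,dt + \sqrt{V_t}\, dW^S_t, \qquad dV_t = \kappa_V(\theta_V - V_t)\,dt + \sigma_V \sqrt{V_t}\, dW^V_t, \qquad dr_t = \kappa_r(\theta_r(t) - r_t)\,dt + \sigma_r\, dW^r_t,$

with a CIR variance $V$, a Hull-White short rate $r$ fitted to the term structure through $\theta_r(t)$ and, in the Heston-Hull-White2d case, a dividend rate $\delta$ following its own (possibly correlated) Hull-White-type dynamics.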
Revisiting the dynamics between CDS spreads and stock returns under a nonlinear approach
ABSTRACT. This paper analyzes the relationship between the US stock and CDS markets between 2007 and 2015 under a non-linear vector autoregression approach. We apply cointegration analysis to test the existence of stable long-term relationships between both markets, considering that the disequilibrium adjustment process may present nonlinearities. The analysis is performed at the individual level, by studying daily short- and long-term relationships between the stock price and the corresponding CDS spread of 75 single issuers. The study of the lead-lag relationships is done with a non-linear error correction model, and shows that although both variables respond to disequilibria in the long-term relationship, stock returns lead changes in CDS spreads, indicating that the price discovery process mostly takes place in the stock market. We find evidence that adjustment to imbalances presents a significant non-linear behavior. The analysis of volatility transmission between the two markets shows no significant evidence of volatility transmission. However, there are simultaneous movements in volatility during periods of crisis and stress in the markets. When we consider separately the relationship between markets, distinguishing the crisis period (2007-2010) and the subsequent period of economic recovery (2011-2015), the results indicate that the stock market leads the CDS market in both periods, although the latter has greater predictive capability in the recovery period.
ABSTRACT. In the actuarial field, mortality and longevity experience is analyzed through life tables. In the insurance market, life tables are generally used in the reserving and pricing processes. It is common that those life tables are built from aggregate data and incorporate margins as a prudence measure to ensure the insurance company's viability. The new regulations under Solvency II require insurance companies to calculate technical provisions using best-estimate assumptions for future experience (mortality, expenses, lapses, etc.) in order to separate (i) the risk-free component and (ii) the component that covers the adverse deviation of claims. However, nowadays the methods used by insurance companies (in most countries, including Spain) to study mortality experience do not guarantee that these components can be separated, given the use of simplifying methodologies, for instance using a factor or percentage from general life tables in the insurance sector. This simple method is not based on biometric fundamentals and is equivalent to assuming certain restrictive hypotheses such as: “the risk-free life tables of the insurance company have the same behavior ‘age to age’ as general life tables from the sector”. Given such a scenario, the aim of this work is to develop a new (extended) cohort-based estimator to build life tables. Using a real database of the insured population, we believe it is more appropriate to build life tables based on the company's own experience. The proposed methodology improves the results compared to classical methodologies.
Stochastic Mortality in a Lévy Process Framework and Application to Longevity Products
ABSTRACT. The increase in life expectancy of individuals over the past decades has intensified the effects of longevity risk – underestimated during the last years – in different fields of application in actuarial science. The difficulty for insurers, reinsurers, and pension funds to hedge their longevity exposure has motivated the development of new kinds of insurance-linked assets: longevity bonds, qx-forwards, survivor swaps, options, etc. Taking into account the limited size of this market, pricing these instruments as well as finding good longevity models for the long term remain important actuarial challenges. In this paper we first propose a new model of stochastic mortality in continuous time based on Lévy processes. In order to cover empirical observations, jumps – upwards and downwards – have been incorporated. In addition, due to the asymmetric mean-reversion phenomenon observed in existing actuarial data, mean-reverting and non-mean-reverting effects are considered. Therefore, the model is able to capture short-term movements, such as pandemic effects, terrorist attacks, etc., and long-term improvements, such as new health care techniques, technological developments that provide better life quality, etc. Moreover, the suggested model displays a time-varying mean-reverting level which follows an alternative affine process based on the Gompertz mortality model with jumps. Regarding applications, due to the intensification of the decreasing behavior of human mortality, different longevity products are priced, such as longevity bonds and longevity swaps, which provide hedging strategies against longevity risk.
ABSTRACT. In 2012, the European Court of Justice made clear that gender equality is a fundamental right in the European Union and asserted that this right also applies to insurance contracts. This statement has a profound impact on life insurance pricing where the differences in life expectancy have naturally led to different insurance premia for men and women. While insurance companies might still want to use male/female life tables in their risk management, they are now forced to create and use unisex life tables for their premium calculation. In this article, we introduce criteria on how to consistently create unisex life tables that account for the relevant risk premia. The challenge is the assumption on the (future) proportion of males and females in the insurance portfolio. Offering unisex products, the insurance company is left with the additional uncertainty in this proportion, an uncertainty that might be affected by adverse selection effects or different mortality improvements between the genders. In this article, we introduce a joint male-female mortality model and add risk premia that account for model and parameter mis-specification risk for the gender-individual mortality rates. Both the model parameters and the risk premia are calibrated to actuarial male/female life tables used in Germany. Then, we show how the mortality rates can be aggregated to unisex mortality rates provided that the initial proportion of males and females in the portfolio is known. We introduce an additional risk premium that accounts for the uncertainty in the future proportion of males/females in this life insurance portfolio. For different life insurance products, this allows us to finally quantify the welfare loss due to the introduction of mandatory unisex tariffs.
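A standard way to aggregate gender-specific rates into unisex rates, given the initial proportion of males $\pi$ in the portfolio (a generic construction; the paper's risk premia for model, parameter and proportion uncertainty come on top of it), is to weight the one-year death probabilities by the expected numbers of male and female survivors:

$q_x^{u} = \frac{\pi\, {}_{x-x_0}p^{m}\, q_x^{m} + (1-\pi)\, {}_{x-x_0}p^{f}\, q_x^{f}}{\pi\, {}_{x-x_0}p^{m} + (1-\pi)\, {}_{x-x_0}p^{f}},$

where ${}_{t}p^{m}$ and ${}_{t}p^{f}$ denote the male and female survival probabilities from the initial age $x_0$ of the portfolio.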
ABSTRACT. Seemingly unrelated regression (SUR) models are useful in studying the interactions among economic variables. In a high-dimensional setting, these models require a large number of parameters to be estimated and suffer from inferential problems. To avoid overparametrization issues, we propose a hierarchical Dirichlet process prior (DPP) for SUR models, which allows shrinkage of coefficients toward multiple locations. We propose a two-stage hierarchical prior distribution, where the first stage of the hierarchy consists of a lasso conditionally independent prior of the Normal-Gamma family for the coefficients. The second stage is given by a random mixture distribution, which allows for parameter parsimony through two components: the first is a random Dirac point-mass distribution, which induces sparsity in the coefficients; the second is a DPP, which allows for clustering of the coefficients.
ABSTRACT. We propose a Bayesian approach to the problem of variable selection and shrinkage in high-dimensional sparse regression models where the regularisation method is an extension of a previous LASSO. The model allows us to include a large number of institutions, which improves the identification of the relationships while maintaining the flexibility of the univariate framework. Furthermore, we obtain a weighted directed network, since the adjacency matrix is built “row by row” using, for each institution, the posterior inclusion probabilities of the other institutions in the system.
ABSTRACT. We extend the study of the rate of convergence to consensus of autonomous agents on an interaction network. In particular, we introduce antagonistic interactions and thus a signed network. This allows us to include the previously discarded sign information in the analysis of disagreement on statistical financial networks.
ABSTRACT. In this paper we present a binary regression model with tensor coefficients and a Bayesian approach for inference, able to recover different levels of sparsity of the tensor coefficient. We exploit the CANDECOMP/PARAFAC (CP) representation of the coefficient tensor in order to reduce the number of parameters and adopt a suitable hierarchical shrinkage prior for inducing sparsity. We propose an MCMC procedure with data augmentation for carrying out the estimation and test the performance of the sampler on small simulated examples.
ABSTRACT. In this paper we review the literature on regression models with tensor variables and present a Bayesian linear model for inference, under the assumption of sparsity of the tensor coefficient. We exploit the CANDECOMP/PARAFAC (CP) representation of the coefficient tensor in order to reduce the number of parameters and adopt a suitable hierarchical shrinkage prior for inducing sparsity. We propose an MCMC procedure via Gibbs sampler for carrying out the estimation, discussing the issues related to the initialisation of the vectors of parameters involved in the CP representation.
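For reference, the CP representation used in both contributions writes a coefficient tensor of order $D$ as a sum of $R$ rank-one tensors:

$\mathcal{B} = \sum_{r=1}^{R} \beta_r^{(1)} \circ \beta_r^{(2)} \circ \cdots \circ \beta_r^{(D)},$

so the number of free parameters grows linearly in the tensor dimensions instead of multiplicatively, and sparsity can be induced through shrinkage priors on the margin vectors $\beta_r^{(d)}$.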
Computation of CVA, DVA and BVA adjustment using a copula approach for dependence modeling
ABSTRACT. During the 2007-2009 crisis, the loss in market value of credit risk derivatives was more severe than the loss due to defaults. This loss in market value, which was not accounted for at that time, is called CVA, DVA or BVA, depending on whether the losses are considered for the creditor, for the debtor or for both of them. The aim of this paper is to come up with a more precise computation of the CVA, DVA and BVA adjustments by taking into account the dependence structure of the essential risk drivers in an adequate way.
Valuation of the instruments concerned means discounting all the related cash flows with appropriate discount factors. The cash flows consist of (i) the payoffs of the credit-sensitive instrument when no default occurs; (ii) the cash flows related to the margining process of collateralization; (iii) default-related cash flows. These cash flows as well as the discount factors depend on various risk drivers. The most important ones are interest rates on the market risk side and probabilities of default (PD) on the credit risk side. PD, in turn, is expressed as a probability of default if the default process is considered and as a credit spread if its effect on market valuation is considered. In our approach we assume that interest rate risk and credit risk each depend on a single risk driver. The underlying statistical problem is thus to model the (bivariate) joint distribution of these risk drivers. This is done by using different copula functions whose parameters are estimated from historical data using different statistical methods.
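A minimal Python sketch of the dependence-modelling step, under illustrative assumptions: a Gaussian copula (one of several copula families the paper considers) ties together an interest-rate driver and a credit-spread driver with marginals chosen purely for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Gaussian copula with correlation rho between the two risk drivers (illustrative value).
rho, n = 0.4, 100_000
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((n, 2)) @ L.T
u = stats.norm.cdf(z)                              # copula sample on the unit square

# Marginals (assumptions): a normal interest-rate shock and a lognormal credit spread.
rate_shock = stats.norm.ppf(u[:, 0], loc=0.0, scale=0.008)
credit_spread = stats.lognorm.ppf(u[:, 1], s=0.5, scale=0.01)

# The joint scenarios can then feed the discounted cash-flow valuation behind CVA/DVA/BVA.
print(np.corrcoef(rate_shock, credit_spread)[0, 1])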
An Integrated Approach to Explore the Complexity of Interest Rates Network Structure
ABSTRACT. We represent the relationships among interest rates of the same term structure using an integrated approach, which combines quantile regression and graphs. First, the correlation matrix estimated via quantile regression (QR) is used to explore the inner links among interest rates with different maturities. This makes it possible to check for quantile cointegration among short- and long-term interest rates and to assess the Expectations Hypothesis of the term structure. Second, we use these inner links to build the Minimum Spanning Tree (MST) and we investigate the topological role of maturities as centres of a network, in an application focusing on the European interest rate term structure in the period 2006-2017. To validate our choice, we compare the MST built upon the quantile regression to the one based on the sample correlation matrix. The results highlight that the QR exalts the prominent role of short-term interest rates; moreover, the connections among interest rates of the same term structure seem to be better captured and described by our procedure than by the methodology relying on the estimation of the sample correlation matrix.
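A brief Python sketch of the MST construction from an estimated correlation matrix (simulated data here, and the plain sample correlation estimator rather than the quantile-regression-based matrix used in the paper):

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(8)

# Simulated changes of interest rates at 8 maturities with a simple common-factor structure.
T, k = 1000, 8
common = rng.normal(size=(T, 1))
rates = 0.7 * common + 0.5 * rng.normal(size=(T, k))

corr = np.corrcoef(rates, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))          # the usual correlation-based distance
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)           # sparse matrix holding the k-1 tree edges
edges = np.transpose(mst.nonzero())
print("MST edges (maturity index pairs):", edges.tolist())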
A model-based measure of network heterogeneity with an application to the Austrian interbank market
ABSTRACT. Recent research on systemic risk has shed light on the crucial role played by network heterogeneity in the propagation of financial shocks. However, it is not immediately clear how one can measure or predict the level of heterogeneity in an observed financial network. In this contribution, we propose a new statistical model for weighted networks evolving over time. Our model is specifically designed to represent the different behaviors of financial institutions; that is, whether they are more inclined to distribute edge weights equally among partners, or if they rather concentrate their commitment towards a limited number of other institutions. Crucially, a Markov property is introduced to capture time dependencies and to make our measures comparable across time.
We propose an application to an original dataset of Austrian interbank exposures. The temporal span ranges from the beginning of 2008 to 2012; hence, it encompasses the onset and development of the financial crisis. Our analysis highlights an overall increasing trend for network heterogeneity, whereby core banks have a tendency to distribute their market exposures more equally across their partners.
ABSTRACT. This work summarizes a line of research on the problem of determining the probability density of a compound random variable, or of a sum of compound random variables. Even though an important source of applications is problems in the banking and insurance business, the same mathematical problem also appears in system reliability and in operations research.
The main mathematical tool is the connection between Laplace transforms of positive random variables and fractional moment problems on $[0,1],$ plus the possibility of using the maximum entropy method to solve such problems.
We add that the maximum entropy procedure has two important features. The first is that it needs a very small number of (real) values of the Laplace transform, and the other is that it can be extended to include errors in the data as well as data specified up to intervals.
In symbols, the basic typical problem consists in determining the density of a variable like
$S = \sum_{n=1}^N X_n$
or that of a sum of such random variables. There, $N$ is an integer random variable and the $X_n$ are a sequence of positive, continuous random variables, independent among themselves and of $N.$ We add at this point that, if there is empirical data about $S$ and $N$ can be modelled, the maximum entropy procedure can be used to determine the distribution of the individual losses.
Not only that: as the procedure yields explicit densities, we can examine the variability of the reconstructed densities as a function of the empirical loss data and, more importantly for actuarial or financial problems, we can examine the variability of the expected values computed with the loss densities.
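The link exploited here can be sketched as follows (a standard formulation of the approach, stated informally): for $S \ge 0$ put $Y = e^{-S} \in [0,1]$, so that a few values of the Laplace transform are fractional moments of $Y$,

$E\big[Y^{\alpha_k}\big] = E\big[e^{-\alpha_k S}\big] = \psi(\alpha_k), \qquad k = 1,\dots,M,$

and the maximum entropy density consistent with these constraints has the exponential form $f_Y(y) = \exp\!\big(-\lambda_0 - \sum_{k=1}^{M} \lambda_k\, y^{\alpha_k}\big)$, from which the density of $S$ is recovered by the change of variables $x = -\ln y$.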
Valuing insider information on interest rates in financial markets
ABSTRACT. Information is the future's currency. Since Big Data methods were developed, we know that a greater amount of information implies a greater economic profit, in all areas in which we may work. In particular, this happens in Financial Mathematics. Motivated by the scandal about Libor index rate manipulation in July 2012, we consider the optimal portfolio problem where the interest rate is stochastic and the agent has additional information on its value at a finite terminal time. The agent's objective is to optimize the terminal value of her portfolio under a logarithmic utility function. Using techniques of initial enlargement of filtration, we identify the optimal strategy and compute the value of the information. Finally, we study what happens if the information owned by the agent is weaker than before, and we present several examples where the insider trader will be infinitely rich, infinitely poor or with finite expectation.
Small Sample Analysis in Diffusion Processes: a Simulation Study
ABSTRACT. Diffusion processes are able to model stochastic phenomena, such as the dynamics of financial securities and short-term loan rates. Several methods for inference for discretely observed diffusions have been proposed in the literature, generally based on MLE or its generalizations, or on techniques based on estimating functions, indirect inference and the efficient method of moments.
We focus on two well-known diffusion processes, the Vasicek and CIR models. Sample properties of the MLE estimators of their parameters are well known when the sample size $n$ tends to infinity. However, in many applications data are observed yearly or quarterly, so the condition $n\to \infty$ means observing the phenomenon for a very long period, and most likely such time series present structural breaks. This is the case in which the Vasicek and CIR models are used in insurance for the valuation of life insurance contracts or to model short-term interest rates. In this talk we focus on the small sample properties of some alternative estimation procedures for the Vasicek and CIR models. In particular, we consider short time series, with a length $n$ between 10 and 100, typical values observed in these contexts. We perform a simulation study in order to investigate which properties of the parameter estimators remain valid. Moreover, we also investigate to what extent the estimator accuracy remains acceptable for very short time series, for example 20-30 yearly observations.
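The flavour of such a simulation study can be sketched in a few lines of Python (illustrative parameter values; only the Vasicek model and a simple conditional-least-squares estimator are shown, not the full set of procedures compared in the talk):

import numpy as np

rng = np.random.default_rng(9)

# Vasicek: dX = kappa*(theta - X) dt + sigma dW, simulated with its exact Gaussian transition.
kappa, theta, sigma, dt = 0.5, 0.03, 0.01, 1.0    # yearly observations
n, reps = 30, 2000                                 # short series, as in the insurance setting
a, b = theta * (1 - np.exp(-kappa * dt)), np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - np.exp(-2 * kappa * dt)) / (2 * kappa))

kappa_hat = np.empty(reps)
for r in range(reps):
    x = np.empty(n)
    x[0] = theta
    for t in range(1, n):
        x[t] = a + b * x[t - 1] + sd * rng.standard_normal()
    # Conditional least squares: regress x_t on x_{t-1} and invert the exact discretization.
    X = np.column_stack([np.ones(n - 1), x[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    b_hat = min(max(coef[1], 1e-6), 1 - 1e-6)      # keep the implied kappa finite
    kappa_hat[r] = -np.log(b_hat) / dt

print(f"true kappa={kappa}, mean estimate={kappa_hat.mean():.3f}, bias={kappa_hat.mean()-kappa:+.3f}")

With n around 30 the mean-reversion parameter is typically estimated with a substantial upward bias, which is exactly the kind of small-sample effect the study quantifies.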
ABSTRACT. In the paper we propose a stochastic model, based on a non-homogeneous Vasicek diffusion process, in which the trend coefficient and the volatility are deterministic time-dependent functions. The stochastic inference based on discrete sampling in time is established using a methodology based on the moments of the stochastic process. In order to evaluate the goodness of the proposed methodology, a simulation study is discussed.
13:20-15:00 Lunch Break
Hotel Ganivet is located in Calle de Toledo, 111 (just 150 m from the conference site).
Modeling equity release benefits in terms of financial needs of the elderly
ABSTRACT. In developed markets, equity release products are offered to elderly people (especially “asset-rich, cash-poor” pensioners), allowing retirees to convert their illiquid assets into cash payments. Equity release products are offered in two basic models: the sale model (home reversion) and the loan model (reverse mortgage, lifetime mortgage). Equity release benefits are used by elderly people to fulfil various needs, such as help with regular bills, home and/or garden improvements, paying debts (e.g. loans, credit cards), financial help to family members, or going on holiday. Depending on the contract, the pensioner can receive benefits in different forms. Among them, we can distinguish: a lump sum, income streams, a line of credit or a combination of these payment schemes. The amount of these benefits depends on many factors, including the value of the property, the age of the beneficiary, the form of payment of the benefits and the level of interest rates. It should be emphasized that the value of the property depends on location (big city, small city, village).
The demographic changes unfolding in developed countries clearly show that people live longer. In addition, data from the OECD and Eurostat indicate that we live longer while maintaining a good health condition. Both of these issues should be included in the modeling of the benefits paid through equity release products (especially for those pensioners who use them to supplement their current income). Therefore, the aim of this paper is to build two models to estimate the value of equity release benefits (mainly for whole life benefits and term life benefits). The first model takes into consideration the fact that reaching subsequent stages of old age generates greater financial needs. These needs arise mainly from the progressive limitation of independent living, not infrequently coupled with having to finance long-term care. Gerontologists distinguish three or four old-age phases. In this paper, three phases of old age are considered: the young old, the middle old and the very old. If a beneficiary lives through the next phase of old age, the value of his or her equity release benefits will increase. The growth of the value of these benefits depends on the rate of inflation and health care expenditure. The second model chiefly takes into account the number of years over which the pensioner has lived while maintaining a good health condition. Up to this point in time the amount of equity release benefits increases only with the inflation rate. After this period, the value of the benefit starts to increase more quickly (year by year). The value of this growth depends on the health care expenditures (from previous periods). The age of the retiree (who signs the agreement) is very important in both models. In many countries the following rule applies: the older the beneficiary, the higher the value of the Loan to Value (LtV) indicator (for example, in Australia the value of LtV is between 15-20% for 60-year-old beneficiaries and increases with the pensioner's age; 1 year equals 1% of LtV). In addition, both models can be considered separately for people living in a city or in the countryside. According to life tables, life expectancy for city and country dwellers is different. To achieve the objective of the paper, life annuity calculations are used. The author focuses on the Polish market. Employing these models in practice is strongly dependent on the economic and financial literacy of the elderly, while the approval of the pensioner's relatives may be of key importance, too.
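The life-annuity building block behind both models is the standard expected present value (stated here generically; the paper's models add inflation- and health-expenditure-driven growth of the benefit):

$\ddot a_x = \sum_{k=0}^{\omega - x} v^{k}\, {}_{k}p_x, \qquad v = \frac{1}{1+i},$

where ${}_k p_x$ is the probability that a beneficiary aged $x$ survives $k$ more years; benefit profiles that step up at later old-age phases, or after the healthy-life period, simply replace the constant unit payment with an age-dependent payment inside the sum.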
Actual sustainability of a pension system by logical sustainability theory. An application to the CNPADC Italian case
ABSTRACT. The object of this research is the application of the logical sustainability theory to the Italian case of the Social Security Institution for Accountants (CNPADC).
The purpose of the logical sustainability theory is to allow the management of a defined contribution pension system, endowed with assets, so as to ensure its sustainability over time jointly with the payment of an optimal rate of return on the participants' pension savings.
It is worth noting that the actuarial balance sheet provides sustainability only of a hypothetical-deductive type, i.e. it depends on the assumptions used for the projections. By contrast, the logical sustainability theory guarantees the sustainability of the system actually, i.e. with certainty.
The logical sustainability theory provides for the separation of the pension system into two sub-systems, the pivot and auxiliary systems, which are perfectly aligned in terms of the rate of return recognized on the participants' contributions, and are financially managed according to the pay-as-you-go scheme and the funded scheme, respectively.
The pension liability of a defined contribution pension system is a state variable that is highly stable and controllable, unlike in defined benefit pension systems.
For this control, the proposed theory uses two main variables: the rate of return that the system recognizes on its participants' savings and the contribution rate.
The pension liability is a state variable also used in the Swedish pension system, which further implements a sustainability indicator, named the balance ratio, whose control does not guarantee the system's sustainability. By contrast, the logical sustainability theory, among many other useful features, provides the LSI indicator (Logical Sustainability Indicator), which does ensure the system's sustainability.
Practical Problems with Tests of Cointegration Rank with Strong Persistence and Heavy-Tailed Errors
ABSTRACT. Financial time series have several distinguishing features which are of concern in tests of cointegration. An example considered in this paper is testing the approximate no-arbitrage relation between the credit default swap (CDS) price and the bond spread. We show that strong persistence and very high persistence in volatility are stylised features of cointegrated systems of CDS prices and bond spreads. There is empirical support that the distribution of the errors is heavy-tailed with infinite fourth moment. Tests for cointegration have low power under such conditions, and the asymptotic and bootstrap tests are unreliable if the errors are heavy-tailed with infinite fourth moment. Monte Carlo simulations indicate that the wild bootstrap (WB) test may be justified with heavy-tailed errors which do not have a finite fourth moment. The tests are applied to CDS prices and bond spreads of US and European investment-grade firms.
ABSTRACT. We study the effect of drift in pure-jump transaction-level models for asset prices in continuous time, driven by point processes. The drift is assumed to arise from a nonzero mean in the efficient shock series. It follows that the drift is proportional to the driving point process itself, i.e. the cumulative number of transactions. This link reveals a mechanism by which properties of intertrade durations (such as heavy tails and long memory) can have a strong impact on properties of average returns, thereby potentially making it extremely difficult to determine long-term growth rates or to reliably detect an equity premium. We focus on a basic univariate model for log price, coupled with general assumptions on the point process that are satisfied by several existing flexible models, allowing for both long memory and heavy tails in durations. Under our pure-jump model, we obtain the limiting distribution for the suitably normalized log price. This limiting distribution need not be Gaussian, and may have either finite variance or infinite variance. We show that the drift can affect not only the limiting distribution for the normalized log price, but also the rate in the corresponding normalization. Therefore, the drift (or equivalently, the properties of durations) affects the rate of convergence of estimators of the growth rate, and can invalidate standard hypothesis tests for that growth rate. As a remedy to these problems, we propose a new ratio statistic which behaves more
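A minimal formalization of the mechanism described above, in our own notation rather than the authors': if the log price is a pure-jump process built from efficient shocks e_k with nonzero mean mu, arriving at the event times of the counting process N(t), then

\[
\log P(t) \;=\; \sum_{k=1}^{N(t)} e_k \;=\; \mu\, N(t) \;+\; \sum_{k=1}^{N(t)} \bigl(e_k - \mu\bigr),
\]

so the drift component \mu N(t) is proportional to the cumulative number of transactions, and heavy tails or long memory in the intertrade durations feed directly into the behaviour of average returns.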
Modeling dependent and simultaneous risk events via the Batch Markov-Modulated Poisson Process
ABSTRACT. In this work, we consider the estimation for a wide subclass of the Batch Markovian Arrival Process (BMAP), namely, the Batch Markov-Modulated Poisson processes (BMMPP), which generalize the well-known Markov-Modulated Poisson process. The BMMPP is a general class of point processes suitable for the modeling of dependent and correlated batch events (as arrivals, failures or risk events). A matching moments technique, supported by a theoretical result that characterizes the process in terms of its moments, is considered. Numerical results with both simulated and real datasets related to operational risk will be presented to illustrate the performance of the novel approach.
Optimal investment strategies and intergenerational risk sharing for target benefit pension plans
ABSTRACT. In this talk, I will present a stochastic model for a target benefit pension fund in continuous time, where the plan members' contributions are set in advance while the pension payments depend on the financial situation of the plan, with risk sharing between different generations. The pension fund is invested in both a risk-free asset and a risky asset. In particular, stochastic salary rates and the correlation between salary movements and financial market fluctuations are considered. A stochastic optimal control problem is formulated, which minimizes a combination of benefit risk (in terms of deviating from the target) and intergenerational transfers, and closed-form solutions are obtained for the optimal investment strategies as well as the optimal benefit payment adjustments using the standard approach. Numerical analysis is presented to illustrate the sensitivity of the optimal strategies to the parameters of the financial market and salary rates.
A generalized moving average convergence/divergence for testing semi-strong market efficiency
ABSTRACT. We aim to monitor financial asset price series by a generalized version of the moving average convergence/divergence (MACD) trend indicator, which is currently employed as a technical indicator in trading systems. We use the proposed indicator to test the semi-strong form of the market efficiency hypothesis, which states that asset prices follow a martingale model when, on the basis of the information available up to a given time, the expected returns are equal to zero. By assuming a martingale model with drift for prices, we propose an MACD-based test statistic for the local drift and derive its main theoretical properties. The semi-strong market efficiency hypothesis is assessed through a nonparametric bootstrap test, where under the null hypothesis the prices have no drift. A simulation study shows that the empirical size and power of the bootstrap test rapidly converge to the chosen nominal levels. Finally, we use our test to derive a long/short equity trading strategy where no active positions are taken or maintained under the null hypothesis (no drift). We apply this strategy to monitor 1,469 daily quotes of crude oil prices over the six-year period 2010-2016, finding that our methodology is often capable of detecting market inefficiencies by accruing positive returns.
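For reference, a minimal Python sketch of the classical MACD indicator on which the generalized version builds; the standard 12/26/9-period exponential moving averages are assumed here, and the paper's generalization and test statistic are not reproduced.

    import numpy as np
    import pandas as pd

    def macd(prices, fast=12, slow=26, signal=9):
        """Classical MACD: difference of fast and slow EMAs, plus a signal line."""
        p = pd.Series(prices, dtype=float)
        macd_line = p.ewm(span=fast, adjust=False).mean() - p.ewm(span=slow, adjust=False).mean()
        signal_line = macd_line.ewm(span=signal, adjust=False).mean()
        return macd_line, signal_line

    # Toy check: for a driftless random walk the MACD line should hover around zero.
    rng = np.random.default_rng(0)
    prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 1500)))
    macd_line, signal_line = macd(prices)
    print(macd_line.tail())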
A market consistent framework for the fair evaluation of insurance contracts under Solvency II
ABSTRACT. In recent years, regulators introduced, with the Solvency II directive, a market consistent and risk neutral valuation framework for determining the fair value of the assets and liabilities of insurance funds.
In this work, we introduce an arbitrage free and market consistent economic scenario generator (ESG) which allows for different stochastic sources of risk: interest rates, sovereign credit spread and liquidity basis, corporate rating transition and default process. In this model, the dependence between different sovereign issuers (or corporate sectors) is also considered.
We give a wider perspective to our model by specifying the dynamics of the risk factors under both the real world and the risk neutral probability measures, making our model suitable for pricing and risk management, in particular of the valuation risk due to correlation, basis and liquidity risk.
Furthermore, we address valuation risk by applying our calibration procedure to different models, and by proposing an invariant probabilistic sensitivity analysis for assessing calibration risk, i.e. the impact that statistical uncertainty in the estimation of model parameters can have on the probability distribution of the risk factors and hence on pricing. To the best of our knowledge, this kind of analysis is an innovation in ESG practice.
Risk and return in Loss Portfolio Transfer (LPT) treaties within Solvency II capital system: a reinsurer’s point of view
ABSTRACT. Loss Portfolio Transfer (LPT) is a reinsurance treaty in which an insurer cedes policies that have already incurred losses to a reinsurer. In a loss portfolio transfer, the reinsurer assumes and accepts the insurer's existing open and future claim liabilities through the transfer of the insurer's loss reserves. The liabilities may already exist, such as claims that have been processed but not yet paid, or may soon appear, such as incurred but not reported (IBNR) claims. Following the collective risk approach, this paper examines the risk profiles, the reinsurance pricing, and the economic and financial effects of LPT treaties, taking into account the insurance capital requirements established by European law. In particular, quantitative evaluations are obtained through a stochastic model and from the reinsurer's point of view. It is essential to calculate the capital needed for the risk deriving from the LPT transaction. In the case analyzed, this requirement is calculated on the basis of the standard formula according to Solvency II, determining the measure of variability via simulation. In addition, by separating insurance risk and financial risk, the cost of capital is calculated by considering reserving risk and market risk separately and aggregating the individual capital charges by the standard formula. This evaluation is performed for different levels of the cost of capital and different confidence levels for the definition of the VaR, and provides a range of possible charges to be applied to the premium; the specific level can be chosen by the reinsurance company based on its own Risk Appetite.
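A hedged sketch of the aggregation step described above; the simulated loss distributions, the confidence level and the correlation between reserving and market risk are illustrative assumptions, not the paper's calibration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative one-year losses for the assumed LPT portfolio (synthetic data).
    reserving_losses = rng.lognormal(mean=10.0, sigma=0.4, size=100_000)
    market_losses    = rng.normal(loc=0.0, scale=2_000.0, size=100_000)

    def capital_charge(losses, alpha=0.995):
        """Stand-alone charge: VaR of the loss in excess of its expectation."""
        return np.quantile(losses, alpha) - losses.mean()

    scr_res = capital_charge(reserving_losses)
    scr_mkt = capital_charge(market_losses)

    rho = 0.25   # assumed correlation between reserving and market risk
    scr_total = np.sqrt(scr_res**2 + scr_mkt**2 + 2 * rho * scr_res * scr_mkt)

    # Cost-of-capital loading on the premium for a range of CoC rates.
    for coc in (0.04, 0.06, 0.08):
        print(f"CoC {coc:.0%}: capital loading = {coc * scr_total:,.0f}")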
ABSTRACT. Conic finance models the market as a passive counterparty accepting any trade whose performance exceeds a certain threshold. It replaces the one-price market model by a two-price model giving bid and ask prices for traded assets, see Madan, Dilip B., and Alexander Cherny. "Markets as a counterparty: an introduction to conic finance." International Journal of Theoretical and Applied Finance 13.08 (2010): 1149-1177.
Modeling a stock by a binomial tree, it is possible to calculate bid and ask prices of European options in the framework of conic finance using recursively defined risk measures, which are based on concave distortions. We take the limit of the binomial model and provide a new continuous time pricing formula for bid and ask prices of European options. Our formula is an extension of the Black-Scholes formula.
We then extend the binomial model and allow the stock to jump at a time point which is known in advance. Such a jump can model the effect of certain events on the stock price, for example an earnings announcement, an announcement by the central bank, an important election, etc. We show that the recently developed theory of conic finance predicts that the bid-ask spread will smoothly widen before a public announcement. The spread is at its highest level just before the announcement and jumps back to its former level after the event has taken place. A perfect hedge is not possible due to the jump, which explains the bid-ask spread just before the jump. This hedging risk is inherited, in weakened form, by the earlier trading dates.
This observation is fully in line with well-established information asymmetry models, which also predict that bid-ask spreads widen before a public announcement and fall to a normal level after the announcement has been anticipated: market makers are usually less able to follow the activities of a company as closely as well-informed traders do. Hence, before a public announcement, market makers are likely to increase the bid-ask spread to protect themselves against better informed market participants. After the information is processed, the information asymmetry due to the announcement disappears and bid and ask prices fall back to their former level.
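To make the bid-ask construction concrete, here is a minimal one-period Python sketch under our own assumptions (a single-step binomial tree, zero interest rate, and the MINVAR distortion of Cherny and Madan with an arbitrary stress level); the paper's recursive multi-period construction and its continuous-time limit are not reproduced.

    import numpy as np

    def minvar(u, gamma=0.5):
        """MINVAR concave distortion; gamma is an illustrative stress level."""
        return 1.0 - (1.0 - u) ** (1.0 + gamma)

    def distorted_expectation(payoffs, probs, distortion):
        """Choquet expectation: a concave distortion overweights low outcomes."""
        order = np.argsort(payoffs)
        x = np.asarray(payoffs, float)[order]
        p = np.asarray(probs, float)[order]
        cum = np.cumsum(p)
        weights = distortion(cum) - distortion(np.concatenate(([0.0], cum[:-1])))
        return float(np.dot(x, weights))

    # One-period binomial stock under the risk-neutral measure (r = 0).
    S0, u, d, K = 100.0, 1.1, 0.9, 100.0
    q = (1.0 - d) / (u - d)                      # risk-neutral up-probability
    payoffs = np.array([max(S0 * u - K, 0.0), max(S0 * d - K, 0.0)])
    probs = np.array([q, 1.0 - q])

    bid = distorted_expectation(payoffs, probs, minvar)
    ask = -distorted_expectation(-payoffs, probs, minvar)
    print(f"risk-neutral price {np.dot(payoffs, probs):.2f}, bid {bid:.2f}, ask {ask:.2f}")

With a concave distortion the bid sits below and the ask above the risk-neutral price, which is the two-price feature exploited in the abstract.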
The Drivers and Value of Enterprise Risk Management: Evidence from ERM Ratings
ABSTRACT. In the course of recent regulatory developments, holistic enterprise-wide risk management (ERM) frameworks have become increasingly relevant for insurance companies. The aim of this paper is to contribute to the literature by analyzing the determinants (firm characteristics) as well as the impact of ERM on the shareholder value of European insurers, using the Standard & Poor's ERM rating to identify ERM activities. This has not been done so far, even though it is of high relevance against the background of the introduction of Solvency II, which requires a holistic approach to risk management. The results show a significant positive impact of ERM on firm value for the case of European insurers. In particular, we find that insurers with a high quality risk management (RM) system exhibit a Tobin's Q that on average is about 6.5% higher than that of insurers with lower quality RM, after controlling for covariates and endogeneity bias.
Latent risk in the correlation assumption under insurance regulations
ABSTRACT. This paper aims to construct a comprehensive framework for risk aggregation from the balance sheet of a non-life insurer using the pair copula construction (PCC) method. This study is motivated by the fact that existing regulatory frameworks (Risk-Based Capital and Solvency II) restrict the correlation between different risk factors to a linear measure, which might not reflect practical implementation. The comprehensive framework is composed of two levels of aggregation: base-level aggregation and top-level aggregation. Using a realistic portfolio and historical data on the asset and liability sides of the balance sheet in the Korean and German markets, we fit a PCC-GARCH model with flexible copula choice for the base-level aggregation of the asset portfolio and a collective risk model with PCC for the insurance portfolio. We compare the risk measure produced by the estimated structure with three regulatory standards: RBC, Solvency II and the Swiss Solvency Test. Two main findings are derived in this study. First, the PCC model fits both portfolios best compared with other competing models (elliptical copulas and hierarchical Archimedean copulas), and an independence structure needs to be established for the top-level aggregation. Second, current regulatory frameworks significantly underestimate the potential risk size, producing almost 10% lower economic capital than the fitted model does. The findings of this study are significant for non-life insurers and regulators in that they expose the drawback of the correlation assumption made by current regulatory frameworks.
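The underestimation issue can be illustrated with a deliberately stylized simulation that compares two aggregations of tail-dependent losses: the quantile of the simulated joint distribution versus the square-root formula with the linear correlation. The copula family, its parameter and the marginals below are our own assumptions and have nothing to do with the paper's fitted PCC-GARCH structure; with strong upper tail dependence the square-root formula typically understates the joint quantile.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n, theta = 200_000, 2.0            # assumed Clayton parameter

    # Survival Clayton sample via the Marshall-Olkin frailty construction:
    # large asset and insurance losses tend to occur together.
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    E = rng.exponential(size=(n, 2))
    U = 1.0 - (1.0 + E / V[:, None]) ** (-1.0 / theta)

    asset_loss = 10_000.0 * stats.t.ppf(U[:, 0], df=4)            # heavy-tailed
    ins_loss = stats.lognorm.ppf(U[:, 1], s=0.8, scale=5_000.0)

    alpha = 0.995
    var_joint = np.quantile(asset_loss + ins_loss, alpha)

    # "Regulatory-style" aggregation: square-root formula with linear correlation.
    v_a, v_i = np.quantile(asset_loss, alpha), np.quantile(ins_loss, alpha)
    rho = np.corrcoef(asset_loss, ins_loss)[0, 1]
    var_sqrt = np.sqrt(v_a**2 + v_i**2 + 2 * rho * v_a * v_i)

    print(f"joint VaR {var_joint:,.0f} vs square-root aggregation {var_sqrt:,.0f}")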
The assessment of longevity risk in a stochastic Solvency II perspective
ABSTRACT. This paper concerns the assessment of longevity risk. The aim is to establish a common understanding between regulators and insurers on how to assess the impact of population risk on capital charges. The framework is set up in a way that accommodates the regulatory regime of Solvency II as well as actuarial practice, attempting, therefore, to bridge the gap between academia and practice. The objective of this paper is to provide a framework for modeling population risk and quantifying its impact. This should allow market participants (risk hedgers, regulators etc.) to better understand the allocation between their exposure to general longevity risk and their specific longevity risk. The goal is to understand whether the structural longevity risk predominates over the specific longevity risk. As a consequence, the Solvency II framework should be able to completely describe the evolution of longevity in time. Numerical results are provided.
Tuning a Deep Learning Network for Solvency II: Preliminary Results
ABSTRACT. Solvency II, a new directive for insurance companies, requires insurance undertakings to perform continuous monitoring of risks and a market consistent valuation of liabilities.
The application of the Solvency II principles gives rise to very hard theoretical and computational problems: the market-consistent valuation, the estimation of the Solvency Capital Requirement (SCR) and the elicitation of the Probability Distribution Forecast rely on Monte Carlo simulation, in particular on nested Monte Carlo simulation, which is very computationally demanding and time consuming because of the complexity of the contracts and the great number of contracts in each portfolio.
For this reason, a way to reduce the number of simulations, and consequently the calculation time, would be highly desirable.
Recently, different techniques have been proposed in the literature with the aim of reducing the computational cost of "full" nested simulation; among them, the Least-Squares Monte Carlo approach seems the most promising, but the field of investigation is still open and several solutions can be explored.
A promising method is given by Deep Learning Networks, a powerful and flexible Machine Learning technique which has proven its effectiveness in several research areas in social sciences.
A Deep Network is a multilevel Artificial Neural Network (ANN), essentially mimicking the ways in which the human brain processes information.
The hierarchical structure enables multiple levels of representation, with inner layers constructing more abstract models based on the representations built by outer layers.
The expressive power of Deep Learning Networks is counterbalanced by the number of hyperparameters that need to be tuned for the network to achieve an acceptable performance.
Such hyperparameters include the number of layers, the number of units in each layer, their activation function, in addition to other characteristics that influence the behavior of the learning algorithm.
In this work, we present initial results on the tuning of the hyperparameters of a Deep Learning Network to approximate the Probability Distribution Forecast and the Solvency Capital Requirement.
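A minimal sketch, assuming TensorFlow/Keras, of a network exposing the hyperparameters listed above (depth, width, activation, learning rate); in a real application the inputs would be outer-scenario risk drivers and the target the corresponding inner-simulation value, whereas the data below are synthetic placeholders.

    import numpy as np
    import tensorflow as tf

    # Synthetic placeholders: outer-scenario risk drivers -> portfolio value.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = (X**2).sum(axis=1) + 0.1 * rng.normal(size=10_000)

    def build_network(n_layers=3, units=64, activation="relu", lr=1e-3):
        """Hyperparameters to tune: depth, width, activation, learning rate."""
        model = tf.keras.Sequential([tf.keras.Input(shape=(X.shape[1],))])
        for _ in range(n_layers):
            model.add(tf.keras.layers.Dense(units, activation=activation))
        model.add(tf.keras.layers.Dense(1))
        model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
        return model

    model = build_network()
    model.fit(X, y, epochs=20, batch_size=256, validation_split=0.2, verbose=0)
    print(model.evaluate(X, y, verbose=0))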
Multivariate dependence and spillover effects across energy commodities and diversification potentials of carbon assets
ABSTRACT. In a first step, we model the multivariate tail dependence structure and spillover effects across energy commodities such as crude oil, natural gas, ethanol, heating oil, coal and gasoline using a canonical vine (C-vine) copula and the C-vine conditional Value-at-Risk (CoVaR). In the second step, we formulate portfolio strategies based on different performance measures to analyze the risk reduction and diversification potential of carbon assets for energy commodities. We identify greater exposure to losses arising from investments in the heating oil and ethanol markets. We also find evidence of carbon assets providing diversification benefits to energy commodity investments. These findings motivate regulatory adjustments in the trading of emission permits for the energy markets most strongly diversified by carbon assets.
The behaviour of energy-related volatility indices around scheduled news announcements: Implications for variance swap investments
ABSTRACT. This paper investigates the behaviour of two energy-related volatility indices calculated by the Chicago Board Options Exchange (CBOE) around scheduled news announcement days and its implications for the design of profitable trading strategies using variance swaps. These volatility indices are the crude oil volatility index (OVX) and the energy sector volatility index (VXXLE), which measure the market's expectation of 30-day volatility of crude oil prices and energy sector returns by applying the VIX model-free methodology to options on the United States Oil Fund (USO) and the Energy Select Sector SPDR Fund (XLE), respectively. We find that energy-related volatility indices tend to fall following several US macroeconomic news releases (i.e., GDP and NFP reports and Federal Open Market Committee (FOMC) meetings) as well as OPEC meetings. Put differently, these news releases resolve investors' uncertainty about the future development of energy-related markets. In contrast, the release of the US manufacturing index (PMI) leads to the creation of uncertainty in the oil market. We further analyse whether profitable trading strategies using variance swaps can be devised based on the anticipated behaviour of the volatility indices on announcement days. For that purpose, we use the result that the square of a CBOE volatility index approximates the variance swap rate with a 30-day maturity. Thus, we analyse the returns to positions in 30-day USO and XLE variance swaps opened the day before scheduled releases and closed at the end of the release day. Our results show that one-day short positions in USO (XLE) variance swaps taken the day before the GDP and NFP release days yield statistically significant average returns of 3.7% and 3.8% (2.9% and 5.6%), respectively. We also find that long positions in USO variance swaps opened the day before the PMI release day and held for one day result in highly significant average returns of 5.1%.
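The payoff convention behind these return figures can be summarized as follows (our notation; the exact return definition used in the paper may differ). Writing V_t for the volatility index level in percentage points, the 30-day variance swap rate is proxied by

\[
K_t \;\approx\; \left(\frac{V_t}{100}\right)^{2},
\]

and the return on a one-day short position opened at t-1 and closed at t is then approximately (K_{t-1} - K_t)/K_{t-1}, which ignores the single day of realized variance that accrues while the position is open.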
Modelling the Australian electricity spot prices: A VAR-BEKK approach
ABSTRACT. In this paper we investigate the transmission of spot electricity prices and price volatility between the five Australian regional electricity markets belonging to the National Electricity Market (NEM) over the period February 8, 2013 to November 30, 2017. Despite it being a highly sophisticated and interesting market, only a limited number of studies have concentrated on the dependence between the different regional markets through a multivariate analysis. In this respect, pioneering work has been carried out, among others, by Worthington et al. (2005) and Higgs (2009), who first used Multivariate GARCH (MGARCH) models applied to the daily series of spot prices. To this extent, our paper complements and extends the existing literature by employing a two-step procedure: in the first step we estimate a VAR(k) model with optimized lag length and a dummy variable to capture the day-of-the-week effect; in the second we consider a set of unrestricted BEKK(p,q) specifications under both normal and Student-t assumptions to find the one that best explains the changed market conditions.
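A hedged Python sketch of the first step only, using statsmodels on synthetic data (the regional price series, the dummy handling and the lag bound are placeholders; the BEKK estimation of the second step requires a dedicated MGARCH implementation and is not shown):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Synthetic stand-in for daily log prices of the five NEM regional markets.
    rng = np.random.default_rng(0)
    dates = pd.bdate_range("2013-02-08", periods=1250)
    prices = pd.DataFrame(rng.normal(0.0, 1.0, (1250, 5)).cumsum(axis=0),
                          index=dates, columns=["NSW", "QLD", "SA", "TAS", "VIC"])
    returns = prices.diff().dropna()

    # Day-of-the-week dummies enter as exogenous regressors.
    dow = pd.get_dummies(returns.index.dayofweek, drop_first=True).astype(float)
    dow.index = returns.index

    res = VAR(returns, exog=dow).fit(maxlags=10, ic="aic")
    print(res.k_ar)             # optimized lag length
    residuals = res.resid       # residuals to be passed on to the BEKK step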
Testing normality for unconditionally heteroscedastic macroeconomic variables
ABSTRACT. In this paper the testing of normality for unconditionally heteroscedastic macroeconomic time series is studied. It is underlined that the classical Jarque-Bera test (JB hereafter) for normality is inadequate in our framework. On the other hand, it is found that the approach consisting in correcting for the heteroscedasticity by kernel smoothing before testing normality is asymptotically justified. Nevertheless, Monte Carlo experiments show that such a methodology can noticeably suffer from size distortion for sample sizes that are typical of macroeconomic variables. As a consequence, a parametric bootstrap methodology for correcting the problem is proposed. The innovations distributions of a set of inflation measures for the U.S., Korea and Australia are analyzed.
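A minimal sketch, under simplifying assumptions of our own, of the kernel-smoothing correction discussed above: the unconditional variance path is estimated by a Nadaraya-Watson smoother over rescaled time, the series is standardized, and the JB test is applied to the rescaled observations. The parametric bootstrap proposed in the paper is not reproduced here.

    import numpy as np
    from scipy import stats

    def smoothed_variance(x, bandwidth=0.1):
        """Nadaraya-Watson estimate of the time-varying unconditional variance."""
        n = len(x)
        t = np.arange(n) / n
        w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
        w /= w.sum(axis=1, keepdims=True)
        return w @ (x - x.mean()) ** 2

    # Toy data: Gaussian innovations with a smoothly increasing variance.
    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n) * np.linspace(0.5, 2.0, n)

    jb_raw = stats.jarque_bera(x)                        # ignores heteroscedasticity
    z = (x - x.mean()) / np.sqrt(smoothed_variance(x))
    jb_corrected = stats.jarque_bera(z)                  # on the rescaled series
    print(jb_raw.pvalue, jb_corrected.pvalue)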
Regulatory Learning: how to supervise machine learning models?
ABSTRACT. The arrival of big data strategies is threatening the latest trends in financial regulation related to the simplification of models and the enhancement of the comparability of the approaches chosen by financial institutions. Indeed, the intrinsically dynamic philosophy of Big Data strategies is almost incompatible with the current legal and regulatory framework, as illustrated in this paper. Besides, as presented in our application to credit scoring, the model selection may also evolve dynamically, forcing both practitioners and regulators to develop libraries of models, strategies for switching from one model to another, and supervisory approaches allowing financial institutions to innovate in a risk-mitigated environment. The purpose of this paper is therefore to analyse the issues related to the Big Data environment, and in particular to machine learning models, highlighting the tensions in the current framework between the data flows, the model selection process and the necessity of generating appropriate outcomes. This work was achieved through the Laboratory of Excellence on Financial Regulation (Labex ReFi) supported by PRES heSam under the reference ANR10LABX0095. It benefited from French government support managed by the National Research Agency (ANR) within the project Investissements d'Avenir Paris Nouveaux Mondes (Investments for the Future, Paris New Worlds) under the reference ANR11IDEX000602.
Financial systems convergence in the EU and the crisis
ABSTRACT. The financial/euro crisis significantly changed the landscape of European financial markets. This paper investigates the convergence of financial structures across all countries in the European Union. The aim is to assess whether the financial crisis affected this convergence process, fostering the segmentation of financial systems across countries.
We borrow the concept of convergence as developed in the growth literature, and explore the beta convergence of financial systems. A country-specific variable is said to beta converge across a set of countries if its growth over time is faster in countries where it exhibits a lower initial level. Convergence analysis in the European Union has mostly concentrated on prices and yields across country-specific markets. This paper contributes to the existing literature by exploring convergence in quantities (stocks of assets and liabilities), both for the whole economy and for several key economic subsectors.
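In the notation we adopt here (ours, not necessarily the authors'), beta convergence of a financial quantity y is assessed through a panel regression of its growth rate on its lagged level,

\[
\Delta \log y_{i,t} \;=\; \alpha \;+\; \beta \,\log y_{i,t-1} \;+\; u_{i,t},
\]

where i indexes countries and t years; a negative and significant estimate of beta indicates that the quantity grows faster where its initial level is lower, i.e. that it converges.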
The European System of Accounts (ESA) classifies the European economies into six sectors: monetary and financial institutions, other financial intermediaries, non-financial corporations, insurance and pension funds, households and general government. We conduct our analysis using dynamic panel techniques: the sample covers all 28 countries of the Union over the time span 1999-2015, on a yearly basis. The main data source for our analysis is the Eurostat financial balance sheets database. We repeat the convergence analysis in two sub-periods, namely the pre-crisis period (1999-2007) and the post-crisis period (2008-2015), and compare the results.
Before the crisis, less developed financial systems were growing faster in size, driven by the total financial assets of households (mainly bank deposits and insurance products). We also detect convergence of bank loans and of the leverage ratio of households, defined as the ratio of loans to total assets. This shows that households' demand for bank credit was converging faster than their total financial wealth.
The crisis broke the convergence process of financial system size, and caused a slowdown in the growth of the assets of financial institutions in countries with an advanced banking sector. We thus detect convergence of financial institutions' asset size and leverage, testifying to the faster expansion of the banking sector in countries where it was less advanced before the crisis. We also observe convergence of the leverage ratio of non-financial firms and other financial intermediaries: this combination leads to convergence of the leverage of the total economy. The crisis did not break the convergence process of households' financial assets. The convergence of insurance products is still a major driver, but it is now accompanied by the holding of securities rather than by bank deposits.
The banking crisis caused a break in the convergence of bank credit. The deleveraging of financial institutions in developed countries corresponds to a faster increase of the banking sector in countries where this sector was less advanced before the crisis. This expansion of the banking sector is accompanied by a rapid increase in the leverage of the whole economy through several key sectors, but is no longer driven by households' demand for credit. We observe instead a widespread increase in financial market size, driven by a growing presence of securities in households' portfolios.
Optimal insurance contract under ambiguity. Applications in extreme climatic events.
ABSTRACT. Insurance contracts are efficient risk management techniques for managing and reducing losses. However, very often the underlying probability model for losses, on the basis of which the premium is computed, is not completely known. Furthermore, in the case of extreme climatic events, the lack of data increases the epistemic uncertainty of the model.
In this talk we propose a method to incorporate ambiguity into the design of an optimal insurance contract. Due to coverage limitations in this market, we focus on the limited stop-loss contract, given by I(x) = min(max(x - d1, 0), d2), with deductible d1 and cap d2. We then formulate an optimization problem for finding the optimal balance between the contract parameters that minimizes some risk functional of the final wealth. To compensate for possible model misspecification, the optimal decision is taken with respect to a set of non-parametric models. The ambiguity set is built using a modified version of the well-known Wasserstein distance, which turns out to be more sensitive to deviations in the tail of the distributions. The optimization problem is solved using a distributionally robust optimization setup. We examine how the objective function as well as the deductible and cap levels of the insurance contract depend on changes in the tolerance level. Numerical simulations illustrate the procedure.
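For concreteness, a tiny Python sketch of the limited stop-loss indemnity defined above (the deductible and cap values are illustrative):

    import numpy as np

    def limited_stop_loss(x, d1, d2):
        """Indemnity I(x) = min(max(x - d1, 0), d2): deductible d1, cap d2."""
        return np.minimum(np.maximum(np.asarray(x, dtype=float) - d1, 0.0), d2)

    losses = np.array([50.0, 120.0, 400.0, 1_000.0])
    print(limited_stop_loss(losses, d1=100.0, d2=500.0))   # -> [0. 20. 300. 500.]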
Structural Pricing of CoCos and Deposit Insurance with Regime Switching and Jumps
ABSTRACT. This article constructs a structural model with jumps and regime switching features that is specifically dedicated to the pricing of bank contingent convertible debt (CoCos) and deposit insurance. This model assumes that the assets of a bank evolve as a geometric regime switching double exponential jump diffusion and that debt profiles are exponentially decreasing with respect to maturity. The paper starts by giving a general presentation of the jumps and regime switching framework, where an emphasis is put on the definition of an Esscher transform applicable to regime switching double exponential jump diffusions. The following developments concentrate on the definition and implementation of a matrix Wiener-Hopf factorization associated with the latter processes. Then, valuation formulas for the bank equity, debt, deposits, CoCos and deposit insurance are obtained. An illustration concludes the paper and addresses the respective impacts of jumps and regime switching on the viability of a bank.
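In a generic formulation (our notation; the paper's exact specification may differ), the bank asset value A_t under a regime process Z_t evolves as

\[
\frac{dA_t}{A_{t^-}} \;=\; \mu_{Z_t}\,dt \;+\; \sigma_{Z_t}\,dW_t \;+\; d\!\left(\sum_{k=1}^{N_t}\bigl(e^{Y_k}-1\bigr)\right),
\]

where N_t is a Poisson process with regime-dependent intensity and the jump sizes Y_k follow a double exponential (Kou-type) law with separate rates for upward and downward jumps.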
Optimal Strategies With Option Compensation Under Mean Reverting Returns or Volatilities
ABSTRACT. We study the problem of a fund manager whose contractual incentive is given by the sum of a constant and a variable term. While the constant part is fixed and paid for sure, independently of the performance of the manager, the variable part is related to the relative return of the fund with respect to a benchmark. More precisely, the variable term is a call option whose underlying is the managed fund and whose strike is the value of the benchmark at maturity. Contracts of this type are quite common: for instance, among hedge fund managers, the so-called 2-20 contract provides a constant 2% of the assets under management plus 20% of the over-performance. Our objective is to determine an approximate solution for the optimal strategy of a manager endowed with such a contract and with a power utility, under a set of assumptions on the dynamics of the market. An important part of current financial research is concerned with the fact that expected returns and risk premia vary over time but are somewhat predictable. Following this stream, we consider asset dynamics subject to a volatility or a risk premium that is random and mean reverting. We study the incentive problem for two models: the first considers mean-reverting market prices of risk, the second mean-reverting volatilities.
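In symbols (our notation), the incentive scheme described above can be written as

\[
W_T \;=\; c\,A_0 \;+\; \alpha \,\max\bigl(F_T - B_T,\; 0\bigr),
\]

where c is the fixed fee (e.g. 2% of assets under management), alpha the participation rate in the over-performance (e.g. 20%), F_T the terminal value of the managed fund and B_T the value of the benchmark at maturity.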
ABSTRACT. The minimum variance hedge ratio is the ratio between the covariance of spot and futures returns and the variance of futures returns. We propose a simple estimate of the hedge ratio using only conditional variance estimates. This is computationally more convenient than the traditional bivariate GARCH approach, for which the estimation is more challenging because of the calculation of the likelihood using a bivariate density function and the larger number of parameters. We perform an extensive out-of-sample evaluation of the usefulness of the proposed variance implied hedge ratio when cross-hedging the S&P 500 and the NYSE Arca Oil spot indexes using the CBOT 10-y US T-Note, the CME Euro FX, the NYMEX Gold and the NYMEX WTI Crude Oil futures indexes. We use weekly data over an evaluation period from January 1, 2003 until December 31, 2016, and the out-of-sample period starts on January 1, 2010, which corresponds to half the sample length. We find that, in terms of risk reduction, the variance implied hedge ratio significantly outperforms the unconditional hedge ratio, and that it performs equally well as the more complex approach using bivariate GARCH models.
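One natural way to build a hedge ratio from variance estimates alone, sketched here with simple rolling-window estimates on synthetic data, is the polarization identity Cov(s, f) = [Var(s) + Var(f) - Var(s - f)]/2; the paper's conditional variance models and its exact estimator may of course differ.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 750                                     # roughly 14 years of weekly data
    f = pd.Series(rng.normal(0.0, 0.020, n))    # futures returns (synthetic)
    s = 0.6 * f + pd.Series(rng.normal(0.0, 0.015, n))   # spot returns (synthetic)

    window = 52
    var_s = s.rolling(window).var()
    var_f = f.rolling(window).var()
    var_d = (s - f).rolling(window).var()

    # Variance-implied hedge ratio via the polarization identity.
    cov_sf = 0.5 * (var_s + var_f - var_d)
    hedge_ratio = cov_sf / var_f
    print(hedge_ratio.dropna().tail())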
Option price decomposition in spot-dependent volatility models and some applications
ABSTRACT. In this talk we present a Hull and White type option price decomposition for a spot-dependent volatility model. We apply the obtained formula to the CEV model. As an application, we give an approximate closed formula for the call option price under a CEV model and an approximate short-term implied volatility surface. These approximate formulas are used to estimate the model parameters. A numerical comparison of our new method with exact and approximate formulas existing in the literature is performed.
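For reference, the spot-dependent volatility model specialized to here is the CEV model, whose risk-neutral dynamics are usually written (standard notation, assumed rather than taken from the paper) as

\[
dS_t \;=\; r\,S_t\,dt \;+\; \sigma\,S_t^{\beta}\,dW_t,
\]

so that the local volatility \sigma S_t^{\beta-1} depends only on the spot level.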