We study financial contagion in euro area stock markets during the European sovereign debt crisis. We look at both stock returns and volatility measures extracted from option prices. We also provide measures of excess dispersion and correlation by combining data on stock indices and their constituents. We find higher correlations for volatilities than for returns. We also find that return correlations increase in volatile periods. The impact of the crisis across countries and sectors, however, was heterogeneous. Banks were the most affected: first those from core countries holding Greek bonds, and later Spanish and Italian banks holding mostly domestic sovereign debt.

ABSTRACT. Variable annuities are very flexible life insurance contracts that can package
living and death benefits with a number of possible guarantees against financial
or biometric risks. Typically, a lump sum premium is paid at inception and is invested in well diversified mutual funds. This initial investment sets up a reference portfolio (policy account), and all guarantees are financed by periodical proportional deductions (fees) from this account. Guarantees are often set in such a way that at least the lump sum premium is totally recouped. Then, when the account value is high, the policyholder has an incentive to surrender the contract and stop paying high fees for an out-of-the-money guarantee. Conversely, when the account value is low, the policyholder pays a low fee for an in-the-money guarantee. Summing up, there is an unfair misalignment between the costs incurred by the insurer and the premiums (fees) meant to cover them, and a great incentive for policyholders to abandon their contracts
when they become uneconomical. To eliminate this misalignment and reduce
the surrender incentive, insurers can adopt a threshold expense structure, or state-dependent fees, according to which the fees, still proportional to the account value, are paid only if this value is below a given threshold. In this paper we consider a variable annuity with guarantees at death and maturity financed through the application of such fees. We propose a quite general valuation model and then analyse numerically the interaction between fee rates, death/maturity guarantees, fee thresholds and surrender penalties under alternative policyholder behaviours. We apply Monte Carlo and Least Squares Monte Carlo (LSMC) methods for the numerical implementation of the valuation model. Special care is needed in the application of LSMC, due to the shape of the surrender region, but a theoretical result allows us to stem the numerical errors arising in the regression step of the valuation algorithm.
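The fee mechanism just described can be illustrated with a minimal Monte Carlo sketch: the account follows a geometric Brownian motion, and the proportional fee is deducted only when the account value lies below the threshold. All parameter values (drift, volatility, fee rate, threshold) are hypothetical, and the dynamics deliberately omit the guarantees and surrender features of the full valuation model:

```python
import numpy as np

def simulate_account(F0=100.0, mu=0.03, sigma=0.2, fee=0.01,
                     threshold=120.0, T=10, steps_per_year=12,
                     n_paths=10000, seed=42):
    """Monte Carlo sketch of a policy account with state-dependent fees:
    the proportional fee is deducted only when the account value is
    below the threshold (all parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    F = np.full(n_paths, F0)
    total_fees = np.zeros(n_paths)
    for _ in range(T * steps_per_year):
        z = rng.standard_normal(n_paths)
        F *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        paying = F < threshold          # fee is paid only below the threshold
        deducted = fee * dt * F * paying
        total_fees += deducted
        F -= deducted
    return F, total_fees
```

With the same random seed, a finite threshold leaves every path of the account at least as large as an always-paid fee schedule, while collecting less in total fees, which is the misalignment-reducing effect described above.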

Evaluating variable annuities with GMWB when exogenous factors influence the policy-holder withdrawals

ABSTRACT. We propose a model for evaluating variable annuities with
guaranteed minimum withdrawal benefits (GMWB) in which a rational
policy-holder, who would withdraw the optimal amounts maximizing the
current policy value only with respect to the endogenous variables
of the evaluation problem, acts in a more realistic context where
her/his choices may be influenced by exogenous variables that may
lead her/him to withdraw sub-optimal amounts. The model is based on a
trinomial approximation of the personal sub-account dynamics that,
despite the presence of a downward jump due to the paid withdrawal
at each anniversary of the contract, guarantees the recombining
property. The model is dynamic in that the policy-holder may choose
the amount to withdraw among a certain number of alternatives
associated with the nodes lying at each withdrawal epoch. The effect of
the exogenous variables is taken into account by assigning an
occurrence probability to each one of the possible amounts, which
may arise from optimal or sub-optimal withdrawals at the previous
contract anniversaries. All these features are finally taken into
account in the backward induction scheme proposed to compute the
policy value in each state of the trinomial approximation and, as a
result, the insurance fee paid for the GMWB guarantee.
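The recombination idea can be sketched with a minimal lattice that omits the withdrawal jumps handled in the paper: log-price nodes S0·exp(j·dx), standard moment-matching trinomial probabilities, and plain backward induction for a European payoff. All parameters and the put payoff are illustrative only:

```python
import numpy as np

def trinomial_lattice_value(S0=100.0, r=0.02, sigma=0.2, T=1.0, n=100,
                            payoff=lambda s: np.maximum(100.0 - s, 0.0)):
    """Minimal recombining trinomial lattice (no withdrawals): after n
    steps there are only 2n+1 distinct nodes S0*exp(j*dx), j = -n..n.
    Probabilities match the first two moments of the log-price.
    Illustrative only -- the paper's lattice additionally handles the
    downward jumps at withdrawal dates."""
    dt = T / n
    dx = sigma * np.sqrt(3.0 * dt)           # standard trinomial spacing
    nu = r - 0.5 * sigma**2
    pu = 0.5 * ((sigma**2 * dt + nu**2 * dt**2) / dx**2 + nu * dt / dx)
    pd = 0.5 * ((sigma**2 * dt + nu**2 * dt**2) / dx**2 - nu * dt / dx)
    pm = 1.0 - pu - pd
    disc = np.exp(-r * dt)
    j = np.arange(-n, n + 1)
    V = payoff(S0 * np.exp(j * dx))          # 2n+1 terminal nodes
    for _ in range(n):                       # backward induction
        V = disc * (pu * V[2:] + pm * V[1:-1] + pd * V[:-2])
    return V[0]
```

Because the lattice recombines, the node count grows linearly (2n+1 at step n) rather than exponentially, which is what makes the backward induction scheme tractable.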

Some empirical evidence on the need for more advanced approaches in mortality modeling

ABSTRACT. The force of mortality is defined using an exponential function of Legendre polynomials, as in Renshaw et al (1996), with an extra term that captures mortality shocks. For the extra term, Ballotta and Haberman (2006) and Ahmadi and Gaillardetz (2015) consider an Ornstein-Uhlenbeck process, while we suggest using Lévy Continuous Autoregressive Moving Average (CARMA) models. The proposed model encompasses the existing approaches as special cases and allows us to construct a model selection procedure. We present some extensions of the theory of Lévy CARMA models useful for model estimation. Male life tables for the USA, Taiwan and Japan for the period 1998-2013 are used to present fitting and projection results. The empirical analysis suggests that the Ornstein-Uhlenbeck process alone is not the best model for fitting mortality rates.

ABSTRACT. The recent financial crisis has fueled the search for precise systemic risk measures. Girardi and Ergün (2013) modify Adrian and Brunnermeier's Conditional Value at Risk (CoVaR), which they define as the VaR of the financial system conditional on an institution being at most at its VaR. In this paper, we extend Girardi and Ergün's work. First, we evaluate whether the multivariate GARCH specification can be relevant for forecasting CoVaR. In addition to the DCC model used in Girardi and Ergün's strategy, we use two other specifications of multivariate GARCH models: the BEKK model of Engle and Kroner (1995) and the OGARCH model introduced by Alexander and Chibumba (1997). Second, we use Filtered Historical Simulation (FHS), which has emerged as a robust alternative for forecasting CoVaR at risk horizons longer than one day and which captures current market conditions without assumptions on the distribution of the return shocks. Third, we generate 1-day and 10-day ahead CoVaR forecasts using a multi-step modeling approach. Finally, we propose a new color-based systemic risk indicator, the Traffic Light System for Systemic Stress (TLS42S), that provides a company ranking system identifying various levels of systemic risk. The new TLS42S is an intuitive, clear and powerful tool for monetary authorities, complementary to the other well-known systemic risk measures.
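The conditioning event underlying CoVaR can be sketched by plain simulation. Here bivariate Gaussian returns are an illustrative stand-in for the DCC/BEKK/OGARCH dynamics and the FHS machinery of the paper, and all parameter values are hypothetical:

```python
import numpy as np

def covar_mc(alpha=0.05, rho=0.6, n=200000, seed=1):
    """Simulation sketch of Girardi-Ergün CoVaR: the VaR of the system
    conditional on the institution being at most at its own VaR."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, 2))
    inst = z[:, 0]                                  # institution return
    system = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]
    var_inst = np.quantile(inst, alpha)             # institution's VaR
    distressed = system[inst <= var_inst]           # conditioning event
    covar = np.quantile(distressed, alpha)          # CoVaR at level alpha
    uncond_var = np.quantile(system, alpha)         # unconditional VaR
    return covar, uncond_var
```

For a positively correlated institution, the conditional quantile sits well below the unconditional one, which is the systemic-risk signal the measure is designed to capture.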

Left tail risk attribution in algorithmic portfolio strategies

ABSTRACT. Portfolio managers are in need of flexible diagnostic methods to improve their understanding of portfolio risk. However, the methodological decomposition of tail risk for multi-horizon, algorithmic portfolio returns with time-varying exposure to the underlying risk factor components is an unresolved issue. Since periodic rebalancing in funds (with, for example, floor protection) leads to non-linearity and time-variation in the weight vector, standard Euler decompositions for one-homogeneous functions are not feasible. As a solution, we formalize a general-purpose approach to perform component risk attribution and decompose the shape of the left tail of the aggregate portfolio result into additive contributions from risk factors, weakening the restrictive assumptions of constant portfolio allocations and i.i.d. returns. We provide an explicit description of how observable risk factors contribute to the aggregate portfolio risk by drawing on regression analysis. Initially, we adopt Value-at-Risk (VaR) and Expected Shortfall (ES) as downside risk measures, in which we re-express the partial derivatives as a linear function of risk factor exposures. Then, we identify in particular the effect of non-normal risk component interactions (co-dependence) and their impact on the distribution of multi-period portfolio returns. More specifically, we build on the analytical traceability of asymptotic expansions in modified Value-at-Risk (mVaR) and modified Expected Shortfall (mES), which introduce skewness and kurtosis correction terms in the Gaussian quantile for the observed marginal return distributions. Finally, we show that our approach appropriately deals with the curse of dimensionality inherent in co-moment matrices and the inefficiency of mVaR and mES in finite samples by means of a simulation-based framework. The usefulness of this approach is illustrated on a large-scale Constant Proportion Portfolio Insurance fund.
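The mVaR correction mentioned above can be sketched as follows: the Cornish-Fisher expansion adjusts the Gaussian quantile with sample skewness and excess kurtosis terms. This shows only the standalone risk measure, not the paper's attribution machinery:

```python
import numpy as np
from statistics import NormalDist

def modified_var(returns, alpha=0.05):
    """Cornish-Fisher modified VaR (mVaR): adjusts the Gaussian quantile
    z for the sample skewness S and excess kurtosis K of the returns,
    reported as a positive loss."""
    r = np.asarray(returns, dtype=float)
    mu, sd = r.mean(), r.std()
    S = ((r - mu)**3).mean() / sd**3          # sample skewness
    K = ((r - mu)**4).mean() / sd**4 - 3.0    # sample excess kurtosis
    z = NormalDist().inv_cdf(alpha)           # e.g. about -1.645 at 5%
    z_cf = (z + (z**2 - 1) * S / 6
              + (z**3 - 3 * z) * K / 24
              - (2 * z**3 - 5 * z) * S**2 / 36)
    return -(mu + z_cf * sd)
```

For a Gaussian sample the correction terms vanish and mVaR reduces to the ordinary Gaussian VaR; for fat-tailed samples the far quantiles (e.g. 1%) are pushed outward.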

A Generalized Error Distribution-based method for Conditional Value-at-Risk evaluation

ABSTRACT. One of the most important issues in finance is to correctly measure the risk profile of a portfolio, which is fundamental for taking optimal capital allocation decisions. Among the methods for assessing financial risk, Gaussian distribution-based procedures are undoubtedly very popular. However, since asset returns are usually fat-tailed, the use of Gaussian processes leads to an underestimation of the risk. CVaR is used to quantify the risk of loss of an asset or a portfolio, and is a remarkable improvement over Value-at-Risk (VaR). Indeed, CVaR is a coherent risk measure and, differently from VaR, it is consistent with the important empirical evidence that diversification reduces risk. We propose a bivariate setting and a Copula-based method to model the stochastic dependence among the assets. In particular, we refer to two cases: the Gaussian Copula, which serves as benchmark, and a modified Gaussian Copula, where the correlation coefficient is replaced by a generalization of it, obtained as the correlation parameter of a bivariate Generalized Error Distribution (G.E.D.). In these two contexts, we propose the computation of CVaR to assess the risk of a portfolio. We present an algorithm, tested on simulated returns, with the aim of verifying the performance of the new method against the classical RiskMetrics one.
The results confirm the superior performance of the G.E.D. method, while the assumption of normality of the returns' distribution yields confidence intervals with the lowest predictive power. CVaR-G.E.D. thus appears to be a valid generalization of CVaR-RiskMetrics, to which it is close in the case of Gaussian marginal distributions, while it moves away from it when the distributions are more fat-tailed.
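The underestimation point can be illustrated in a simplified univariate setting (the paper's copula-based bivariate method is not reproduced here): an empirical CVaR computed on fat-tailed returns exceeds the CVaR implied by a Gaussian fit with the same mean and standard deviation:

```python
import numpy as np
from statistics import NormalDist

def empirical_cvar(returns, alpha=0.05):
    """Empirical CVaR (expected shortfall): average loss beyond the
    alpha-quantile, reported as a positive number."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.floor(alpha * len(r))))
    return -r[:k].mean()

def gaussian_cvar(mu, sigma, alpha=0.05):
    """Closed-form CVaR under normality: sigma * phi(z_alpha) / alpha - mu."""
    z = NormalDist().inv_cdf(alpha)
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return sigma * phi / alpha - mu
```

For Gaussian data the two estimates agree; for Student-t data the empirical CVaR is visibly larger than the Gaussian-implied one, the underestimation the abstract warns about.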

The Rearrangement algorithm of Puccetti and Rüschendorf: proving the convergence

ABSTRACT. In 2012 G. Puccetti and L. Rüschendorf [J. Comp. Appl. Math., 236 (7), 1833-1840] proposed a new algorithm to compute the upper Value-at-Risk, at a given level of confidence, of a portfolio of risky positions whose mutual dependence is unknown. The algorithm was called Rearrangement, as it consists precisely in rearranging the columns of a matrix, whose entries are quantiles of the marginal distributions, in order to minimize the variance of the row-sum vector and thus maximize the minimal sum of a row. The authors proved that, provided the rearrangement is optimal at each step, such a minimal row sum converges to the required upper VaR. However, it was not proved that a given deterministic rearrangement procedure would guarantee the stepwise optimality and therefore the convergence. In the following years the algorithm has performed quite well in several practical situations and a number of refinements have been introduced (see, e.g., Bernard, C. and D. McLeish (2016), Asia Pac. J. Oper. Res., 33), but the convergence has remained an open problem (some authors even doubted it could be proved in general). Conversely, we demonstrate that the rearrangement algorithm converges, once the deterministic procedure has been precisely defined and an initial optimality condition (already suggested in Embrechts, P., Puccetti, G. and L. Rüschendorf (2013), J. Bank. Financ., 37(8), 2750-2764) is satisfied. Basically, the proof consists in constructing suitable matrix paths, by exploiting beautiful topological properties of the so-called ordered matrices (i.e., matrices each column of which is oppositely ordered to the sum of the others).
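The core column-rearrangement step can be sketched directly. This is a simplified implementation of the iterative loop, not the precisely defined deterministic procedure used in the convergence proof:

```python
import numpy as np

def rearrange(X, max_iter=1000):
    """Sketch of the rearrangement algorithm: permute each column so that
    it is oppositely ordered to the sum of the remaining columns, until
    the matrix no longer changes. The minimal row sum of the resulting
    matrix approximates the worst-case (upper) VaR bound."""
    X = np.array(X, dtype=float)
    n, d = X.shape
    for _ in range(max_iter):
        changed = False
        for j in range(d):
            others = X.sum(axis=1) - X[:, j]
            # oppositely ordered: the largest entries of column j are
            # matched with the smallest partial row sums
            order = np.argsort(np.argsort(-others))
            col = np.sort(X[:, j])[order]
            if not np.array_equal(col, X[:, j]):
                X[:, j] = col
                changed = True
        if not changed:
            break
    return X, X.sum(axis=1).min()
```

On a toy matrix with two identical columns (1, 2, 3), the algorithm pairs large values with small ones, raising the minimal row sum from 2 (comonotone arrangement) to 4 and flattening the row sums.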

Comparing possibilistic portfolios to probabilistic ones

ABSTRACT. Generally, portfolio selection consists in sharing a given starting capital among various stocks whose future performances are unknown, in order to optimize some risk-return target. Given the uncertainty of the future stock returns, a crucial role in portfolio selection is played by the measurement of the risk associated with the stock returns. The classical approach, due to H.M. Markowitz, is based on a standard way to manage uncertainty: future stock returns are represented in terms of random variables. Alongside this approach, other alternative ways have been proposed. One of the most successful is the representation of the stock returns in terms of trapezoidal fuzzy numbers, also called possibilistic numbers.
In our contribution, we compare portfolios based on the standard probabilistic representation of the stock returns to portfolios built on the same returns represented as trapezoidal fuzzy numbers. In particular, the research is articulated in two steps: • In the first one, we investigate some theoretical properties of the possibilistic portfolios and compare them to those of the probabilistic portfolios. In general terms, we prove that the possibilistic variance-covariance matrices have rank equal to 1 or 2. Therefore, such possibilistic variance-covariance matrices cannot be used in the basic models of portfolio selection whose solutions require the inverse of variance-covariance matrices of order greater than 2. From this point of view, the probabilistic approach appears more flexible than the possibilistic one;
• In the second step, given a set of real stock returns: first we randomly generate their representation in terms of trapezoidal fuzzy numbers; then we select both the possibilistic portfolio based on such numbers and the standard probabilistic portfolio; finally, we compare their future performances. From this standpoint, the performances of the two portfolios are similar. Note that, given what is stated in the previous point, we are obliged to use a portfolio selection model whose solution does not require the inverse of the variance-covariance matrices. Note also that the random generation of the possibilistic numbers is necessary to consider different financial views about the future behavior of the stock returns.

ESTIMATING REGULATORY CAPITAL REQUIREMENTS FOR REVERSE MORTGAGES. AN INTERNATIONAL COMPARISON

ABSTRACT. In this paper, we estimate the value of the no-negative-equity guarantee (NNEG) embedded in reverse mortgage contracts and develop a method for calculating regulatory capital requirements according to Basel II and III. We employ a Monte Carlo simulation method that assumes an ARMA-EGARCH process for house prices in four European countries: France, Germany, Spain and the United Kingdom. The results show different estimated values for the NNEG among countries. Specifically, the value of the NNEG tends to be related to the level of the interest rates, the rental yield and house price volatility in each country, as well as the age of the borrower. Different values for value-at-risk and the expected shortfall among countries are also found, which depend on the volatility of each country’s house price series.

ABSTRACT. The three-way Lee-Carter (LC) model was proposed as an extension of the original LC model when a three-mode data structure is available. It provides an alternative for modelling mortality differentials. This variant of the LC model adds a subpopulation parameter that deals with different drifts in mortality. Making use of several tools of exploratory data analysis, it gives a new perspective to the demographic analysis, supporting the analytical results with a geometrical interpretation and a graphical representation. When facing a three-way data structure, there are several choices of data pre-treatment that will affect the whole data modelling. The first step of three-way mortality data investigation should be to explore the different sources of variation and highlight the significant ones. In this contribution, we consider the three-way LC model investigated by means of a three-way analysis of variance with fixed effects, where the 3 main effects, the 3 two-way interactions and the 1 three-way interaction are analyzed. The aim of the paper is to highlight the technical-applicative infrastructure behind the methodology. With this aim in mind, we consider a first dataset (split into a training set and a test set) based on the idea of a validation process, and a further dataset to run an extensive empirical test. The results will also be presented from the standpoint of basic methodological issues and choices involved in the proposed method.

Mortality Projection using Bayesian Model Averaging

ABSTRACT. In this paper we propose Bayesian specifications of four of the most widespread models used for mortality projection: Lee-Carter, Renshaw-Haberman, Cairns-Blake-Dowd, and its extension including cohort effects. We introduce the Bayesian model averaging in mortality projection in order to obtain an assembled model considering model uncertainty.
We work with Spanish mortality data from the Human Mortality Database, and the results suggest that applying this technique yields projections with better properties than those obtained with the individual models considered separately.
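A common shortcut conveys the averaging idea: weights proportional to exp(-BIC/2) approximate posterior model probabilities. This is only an illustrative sketch with hypothetical inputs; the paper works with fully Bayesian model specifications rather than a BIC approximation:

```python
import numpy as np

def bma_weights(bics):
    """Bayesian model averaging weights via the BIC approximation to
    posterior model probabilities: w_k proportional to
    exp(-0.5 * (BIC_k - min BIC))."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))
    return w / w.sum()

def bma_forecast(forecasts, bics):
    """Model-averaged projection: weighted mix of the individual
    models' projections (rows of `forecasts`)."""
    w = bma_weights(bics)
    return np.tensordot(w, np.asarray(forecasts, dtype=float), axes=1)
```

The assembled projection thus down-weights poorly supported models instead of discarding them, which is how model uncertainty enters the final mortality projection.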

Using deepest dependency paths to enhance life expectancy estimation

ABSTRACT. Dependency, that is, lack of autonomy in performing basic activities of daily living (ADL), can be seen as a consequence of the process of gradual aging. In Europe in general, and in Spain in particular, this phenomenon represents a problem with economic, political and social implications. The prevalence of dependency in the population, as well as its intensity and evolution over the course of a person's life, are issues of the greatest importance that should be addressed. From EDAD 2008 (Survey about Disabilities, Personal Autonomy and Dependency Situations, INE), Albarrán-Lozano, Alonso-González and Arribas-Gil (J R Stat Soc A 180(2): 657–677, 2017) constructed a pseudo panel that registers the personal evolution of the dependency rating scale and obtained the individual dependency paths. These dependency paths help to identify different groups of individuals according to the distances to the deepest path of each age/gender group. To estimate life expectancy free of dependency (LEFD) we consider several scenarios according to dependency degree (moderate, severe and major) and the distances to the deepest path. Via a Cox regression model we obtain the 'survival' probabilities (in fact, the probability of staying free of dependency at a given age, given that a person is alive at that age). Then marginal probabilities are obtained by multiplying these estimates by the survival probabilities given by the Spanish disabled pensioners' mortality table. Finally, we obtain the LEFD for the Spanish dependent population considering gender, dependency degrees and ages from 50 to 100.

Analysis of the evolution of changes in the Spanish mortality rates using functional data

ABSTRACT. Mortality rates have usually been estimated using different methodologies.
This task has been done either using raw data or their natural
logs. In any case, the final target is to express these rates either as
a function of an index calculated from the rates themselves, as in the
Lee-Carter model, or using some smoothing functions as in the spline-based
models.
However, it is not common to study the evolution of these rates
using their rate of change. An example can be found in Mitchell et al
(2013). They suggest a Lee-Carter style model with variations in kt as the
explanatory variable, whereas the response variable is the increase in
natural logs of mortality rate. From another point of view, Fang and
Härdle (2015) have estimated the log of death rates using functional
principal components analysis.
Both approaches are joined in this paper. We try to estimate the
rate of change in the Spanish death rates using functional analysis,
in particular principal differential analysis. The analysis is done for
other comparable countries in order to compare the results.
Data come from the Human Mortality Database.

On the tail behavior of a class of multivariate conditionally heteroskedastic processes

ABSTRACT. Conditions for geometric ergodicity of multivariate autoregressive conditional heteroskedasticity (ARCH) processes, with the so-called BEKK (Baba, Engle, Kraft, and Kroner) parametrization, are considered. We show for a class of BEKK-ARCH processes that the invariant distribution is regularly varying. In order to account for the possibility of different tail indices of the marginals, we consider the notion of vector scaling regular variation (VSRV), closely related to non-standard regular variation. The characterization of the tail behavior of the processes is used for deriving the asymptotic properties of the sample covariance matrices.
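For intuition, the tail index of a regularly varying marginal is commonly estimated with the Hill estimator, sketched below on a simulated Pareto sample; this is a standard diagnostic, not the VSRV machinery of the paper:

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index of a regularly varying right
    tail, based on the k largest observations over the (k+1)-th
    largest order statistic."""
    xs = np.sort(np.asarray(x, dtype=float))
    top = xs[-k:]                       # k largest observations
    return 1.0 / np.mean(np.log(top / xs[-k - 1]))
```

On an exact Pareto sample the estimate concentrates around the true index; applied columnwise, it would reveal the possibly different tail indices of the marginals that motivate the VSRV notion.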

Bootstrap Non-Stable Autoregressive Processes: Modelling of Bubbles

ABSTRACT. In this paper we develop bootstrap-based inference for non-causal autoregressions with heavy-tailed innovations. This class of models is widely used for modelling bubbles and explosive dynamics in economic and financial time series. In the non-causal, heavy-tail framework, a major drawback of asymptotic inference is that it is not feasible in practice, as the relevant limiting distributions depend crucially on the (unknown) decay rate of the tails of the distribution of the innovations. In addition, even in the unrealistic case where the tail behavior is known, asymptotic inference may suffer from small-sample issues. To overcome these difficulties, in this paper we study novel bootstrap inference procedures, using parameter estimates obtained with the null hypothesis imposed (the so-called restricted bootstrap). We discuss three different choices of bootstrap innovations: the wild bootstrap, based on Rademacher errors; the permutation bootstrap; and a combination of the two (the 'permutation wild bootstrap'). Crucially, implementation of these bootstraps does not require any a priori knowledge about the distribution of the innovations, such as the tail index or the convergence rates of the estimators. We establish sufficient conditions ensuring that, under the null hypothesis, the bootstrap statistics consistently estimate particular conditional distributions of the original statistics. In particular, we show that the validity of the permutation bootstrap holds without any restrictions on the distribution of the innovations, while the permutation wild and the standard wild bootstraps require further assumptions, such as symmetry of the innovation distribution. Extensive Monte Carlo simulations show that the finite-sample performance of the proposed bootstrap tests is exceptionally good, both in terms of size and of empirical rejection probabilities under the alternative hypothesis.
We conclude by applying the proposed bootstrap inference to Bitcoin/USD exchange rates and to crude oil price data. We find that non-causal models with heavy-tailed innovations are indeed able to fit the data, including during periods of bubble dynamics.
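The restricted wild bootstrap with Rademacher errors can be sketched for the simplest possible case, a causal AR(1) null, purely for illustration; the paper's procedures for non-causal models and the permutation schemes are not reproduced here:

```python
import numpy as np

def wild_bootstrap_ar1(y, phi0=0.0, n_boot=199, seed=0):
    """Restricted wild-bootstrap sketch for an AR(1) null H0: phi = phi0.
    Residuals are computed with the null imposed (restricted bootstrap)
    and recycled after multiplication by Rademacher (+/-1) draws, so no
    knowledge of the innovation distribution (e.g. its tail index) is
    needed. Illustrative only -- the paper treats non-causal models."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    eps = y[1:] - phi0 * y[:-1]              # restricted residuals

    def ols_phi(x):
        return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

    stat = ols_phi(y) - phi0                 # observed statistic
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        e_star = eps * rng.choice([-1.0, 1.0], size=eps.size)  # Rademacher
        y_star = np.empty_like(y)
        y_star[0] = y[0]
        for t in range(1, y.size):           # regenerate under the null
            y_star[t] = phi0 * y_star[t - 1] + e_star[t - 1]
        boot_stats[b] = ols_phi(y_star) - phi0
    p_value = np.mean(np.abs(boot_stats) >= np.abs(stat))
    return stat, p_value
```

Because the bootstrap samples are rebuilt with the null imposed, the scheme remains valid without estimating the tail decay rate, which is the practical advantage the abstract emphasizes.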

Detecting Time Irreversibility Using Quantile Autoregressive Models

ABSTRACT. The aim of this paper is twofold. First, we propose to detect time irreversibility in stationary time series using quantile autoregressive models (QAR). We show that this approach provides an alternative way to distinguish causal from noncausal models. Although we obviously assume non-Gaussian disturbances, we do not need any parametric likelihood function to be maximized (e.g. the Student or the Cauchy). This is very interesting for skewed distributions, for instance. Secondly, we propose to extend QAR models to quantile regressions in reverse time. This new modelling is appealing for investigating the presence of bubbles in economic and financial time series. We illustrate our analysis using hyperinflation episodes in Latin American countries.

Mixed Causal-Noncausal AR Processes and the Modelling of Explosive Bubbles

ABSTRACT. Noncausal autoregressive models with heavy-tailed errors generate locally explosive processes and therefore provide a natural framework for modelling bubbles in economic and financial time series. We investigate the probability properties of mixed causal-noncausal autoregressive processes, assuming the errors follow a stable non-Gaussian distribution. We show that the tails of the conditional distribution are lighter than those of the errors, and we emphasize the presence of ARCH effects and unit roots in a causal representation of the process. Under the assumption that the errors belong to the domain of attraction of a stable distribution, we show that a weak AR causal representation of the process can be consistently estimated by classical least-squares. We derive a Monte Carlo Portmanteau test to check the validity of the weak AR representation and propose a method based on extreme residuals clustering to determine whether the AR generating process is causal, noncausal or mixed. An empirical study on simulated and real data illustrates the potential usefulness of the results.

Integration of non-financial criteria in equity investment

ABSTRACT. In recent years, awareness of the social, environmental and governance issues associated with investments has drawn considerable interest in the investment industry. Investors are more careful to consider investments that comply with their ethical and moral values, as well as with their social impact.
Hence, socially responsible investment (SRI) is becoming more popular in the academic literature, due to the fact that it provides profitability and social commitment together. In this contribution we discuss the main issues that arise when integrating socially responsible criteria into a financial decision problem.

Some critical insights on the unbiased efficient frontier à la Bodnar & Bodnar

ABSTRACT. In an interesting paper on the estimation of the efficient frontier, Bodnar and Bodnar improve the biased estimator usually used and show that, with respect to such an unbiased efficient frontier, the usual sample efficient frontier is overoptimistic, in the sense that the latter underestimates the variance of each efficient portfolio.
The unbiased estimator of the efficient frontier is based on the following assumptions: 1) All the considered assets are risky; 2) The returns of these assets are independently distributed; 3) The returns of these assets are normally distributed.

The empirical application presented in the paper considers monthly data for the equity market returns of ten developed countries. Whereas the above-mentioned assumptions are acceptable for this dataset, both the independence of the returns and their normality are questionable when we deal with daily data. In particular, the independence appears quite unrealistic.

With respect to this framework, in our contribution we study, on the one hand, the asymptotic behavior of the unbiased estimator of the efficient frontier and investigate some anomalous behaviors of this estimator; on the other hand, we carry out an analysis of the impact of the assumptions of independence and normality of the log returns.

Our main results are:
1. The unbiased efficient frontier provided by Bodnar and Bodnar is statistically different from the sample one; however, its asymptotic behavior depends on the distribution of the returns of the assets in the portfolio;
2. When we relax the assumption of independence, we find no evidence of different behaviour of the unbiased frontier, which remains significantly different from the sample one;
3. Different distributions of the log-returns of the assets in the portfolio affect the bias of the sample efficient frontier in different ways, so that the Bodnar and Bodnar unbiased frontier may be biased.

ABSTRACT. The capital regulatory policies imposed on banking institutions increasingly reveal the need to consider the heterogeneity of regulated entities and, at the same time, to avoid obvious errors of over- or under-assessment of the risks inherent in the various business models of modern banks.
The paper aims to fill this gap and to contribute to the identification of a synthetic indicator of company performance and long-term creditworthiness, which is also able to take the investor's risk aversion into consideration. This need arises from studies on rating models aimed at making results easier to use within banking organizations. Indeed, it must be ensured that the indicator has three characteristics: it must be scientifically reliable, comprehensible to customers, and consistent with the credit policies adopted. For this reason, the paper starts from the results of the research that led to the "integrated rating" methodology, so as to expose more clearly the problem, its solution, and how the latter inspired the research we deal with here.
The Integrated Rating indicator provides a comparison between the permanent ROI of a given company indexed by “i” (used as a proxy for company performance) and a threshold value, called the threshold ROI, for the same company. The threshold ROI is calculated through panel regressions that consider 25 indicators calculated on company balance sheets, covering the income, risk, economic performance, financial management and technological status of the company. The difference between the permanent ROI and the threshold ROI shows how much the company has performed better (or worse) than its target. The target considers the company indicators and the weights of the reference market, that is, the coefficients of the regressions.
The problem encountered in the development of the integrated rating methodology is that, although the underlying logic is rather simple (the higher the rating, the better performing the company), the numerical result has no lower or upper bound; this involves some problems in communicating the rating to third parties and making it understood.
We hypothesized applying an appropriate mathematical treatment to the raw results in order to obtain a transformed Integrated Rating indicator that allows a simple, clear and linear reading.
A logistic transformation, derived from the logistic function, is found to be the methodology that best fits the whole model. The logistic transformation gives an indicator ranging between -1 and 1, with a standardized and concave curvature. Moreover, it is possible to introduce a multiplicative constant in the exponential component, which changes the degree of curvature of the function and thus the degree of discrimination of the data set with respect to more extreme values. It is precisely this last point that the analysis and research underlying the paper address. In finding an optimal method that determines the degree of curvature of the function, one would also obtain an optimal methodology for identifying the degree of risk aversion of the investor. The degree of curvature, therefore, would represent the degree of risk aversion of the investor.
In particular, the logistic transformation makes it possible to discriminate companies within the set of observations, while making observations with anomalous behavior (so-called outliers) rather similar. This allows us not to overestimate companies that have performed better than expected, and not to underestimate companies that are in line with expectations. This effect can be regulated by a multiplicative constant in the exponential component, which allows us to determine a degree of convexity/concavity that can be adapted to specific needs. The goal is to investigate an optimal method for determining the correct degree of convexity and concavity, which ultimately proxies the differences in the risk attitudes of the institutions.
In conclusion, the bank-specific integrated rating project detailed here focuses our research on the development of a mathematical/econometric method to identify the best algorithm for determining a correct degree of convexity and concavity (and hence the correct degree of investor risk aversion), one that is dynamic and adaptable to heterogeneous banks.
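As an illustration of the transformation described above, the following is a minimal sketch (the function name and the curvature constant k are hypothetical, not taken from the paper) of a logistic mapping of an unbounded raw rating score into the interval (-1, 1):

```python
import math

def logistic_rating(x, k=1.0):
    """Map an unbounded raw rating score x into the interval (-1, 1).

    The multiplicative constant k in the exponential component controls
    the curvature: larger k discriminates mid-range scores more sharply
    and compresses outliers toward the bounds.
    """
    return 2.0 / (1.0 + math.exp(-k * x)) - 1.0

# A large outlier is pulled toward the bound instead of dominating the scale:
print(logistic_rating(0.0))          # 0.0: centre of the scale
print(logistic_rating(10.0, k=1.0))  # close to the upper bound 1
```

Increasing k steepens the curve around zero, which is how the constant can be tuned to a chosen degree of discrimination of the data set.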

Bond Portfolio Management in a Post-Solvency II Regulatory Environment

ABSTRACT. We examine the bond portfolio optimization problem facing insurance companies operating within the Solvency II framework. Consideration of the interest rate, spread, and concentration risk regulatory modules for the determination of the Solvency Capital Requirements (SCR) makes the optimization problem highly complex. We address the problem by employing the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Our results suggest that yield to maturity (YTM)-SCR ratios (an analog of the Sharpe ratio in the Markowitz setting) were, on average, 5-fold higher in turbulent times (i.e. September 2008) than in the post-crisis period (i.e. November 2014). The results also suggest that efficient portfolios are more concentrated (i.e. less diversified), despite regulatory penalties for excessive concentration risk. After the crisis (i.e. November 2014), interest rate risk requirements dropped to zero whilst requirements for spread and concentration risks increased. The zero capital charges for interest rate risk, coupled with less diversified optimal portfolios, present a potential weakness of the Solvency II regulation.
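The core of NSGA-II is the non-dominated sorting of candidate solutions. A minimal sketch, under the two objectives discussed above (maximize YTM, minimize SCR) and with purely hypothetical objective values, of the Pareto filter that identifies the efficient portfolios:

```python
def pareto_front(portfolios):
    """Return the non-dominated (ytm, scr) pairs.

    A portfolio dominates another if its yield is at least as high and
    its capital requirement at most as large, strictly better in one.
    """
    front = []
    for i, (y_i, s_i) in enumerate(portfolios):
        dominated = any(
            (y_j >= y_i and s_j <= s_i) and (y_j > y_i or s_j < s_i)
            for j, (y_j, s_j) in enumerate(portfolios) if j != i
        )
        if not dominated:
            front.append((y_i, s_i))
    return front

# Hypothetical (YTM, SCR) pairs; (0.03, 0.10) is dominated by (0.04, 0.08).
candidates = [(0.03, 0.10), (0.04, 0.08), (0.05, 0.12), (0.02, 0.05)]
print(pareto_front(candidates))
```

NSGA-II itself repeats this sorting over ranked fronts within a genetic loop; the sketch only shows the dominance criterion.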

A note on the shape of the probability weighting function

ABSTRACT. Cumulative prospect theory (CPT) has been proposed as an alternative to expected utility theory to explain actual behaviors. Formally, CPT relies on two key transformations: the value function, which replaces the utility function for the evaluation of relative outcomes, and a distortion function for objective probabilities. Risk attitudes are derived from the shapes of these functions as well as their interaction. The focus of this contribution is on the transformation of objective probability, which is commonly referred to as "probability weighting" or "probability distortion". Empirical evidence suggests a typical inverse-S shaped function: small probabilities are overweighted, whereas medium and high probabilities are underweighted; the function is initially concave (probabilistic risk seeking or optimism) and then convex (probabilistic risk aversion or pessimism). We review and compare different parametric families of weighting functions proposed in the literature, and then analyze some applications in finance to the evaluation of derivative contracts and in insurance to premium principles.
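One of the parametric families reviewed in this literature is the Tversky-Kahneman (1992) weighting function; a minimal sketch (the default gamma = 0.61 is their estimated value for gains, used here purely for illustration):

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g).

    For gamma < 1 the function is inverse-S shaped: small probabilities
    are overweighted, medium and high probabilities underweighted.
    """
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(tk_weight(0.01))  # > 0.01: overweighting of small probabilities
print(tk_weight(0.90))  # < 0.90: underweighting of high probabilities
```

Note that w(0) = 0 and w(1) = 1, so the distortion only reshapes probabilities strictly between the endpoints.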

13:20-15:00 Lunch Break

Hotel Ganivet is located in Calle de Toledo, 111, just 150 m from the conference site.

Coherent risk measures and infinite expectation risks

ABSTRACT. Though there is growing interest in risk measurement in Economics, Finance and Insurance, it is still under debate how to properly measure risks whose expected losses are unbounded. In this paper, a general methodology for introducing coherent risk measures for risks with infinite expectation is proposed. It is broadly recognized that the inclusion of risks with unbounded expectation (for instance, risks with a Cauchy or a Pareto distribution) presents many mathematical problems when extending the notion of a coherent or expectation bounded risk measure. Unfortunately, previous literature has addressed this problem at the cost of some desirable mathematical properties. For instance, if we use the value at risk (VaR) as a risk measure, we lose continuity and subadditivity. Though the literature contains risk measures for heavy-tailed risks that recover sub-additivity, all of them still lose continuity. Admittedly, using continuous sub-additive risk measures has many important analytical advantages, since the optimization of such functions is much simpler and many classical financial and actuarial problems (pricing and hedging, portfolio choice, equilibrium, optimal reinsurance, etc.) become much easier to tackle.

For these reasons, it may be worthwhile to look for general solutions which allow us to extend the concept of coherent risk measures and deal with infinite expectation risks, while still preserving continuity and sub-additivity. To sum up, the main contribution of this paper is to extend the Artzner et al. (1999) approach by allowing risks generating unbounded expected losses while preserving desirable mathematical properties such as continuity and sub-additivity. Furthermore, based on data on yearly claims in the French business insurance branch (Zajdenweber, 1996), we provide numerical examples and applications to classical insurance and operational risk problems. We develop coherent extensions of the Conditional Value at Risk (CVaR) to deal simultaneously with both bounded (exposure to equity market risks) and unbounded (generalized Pareto distributions associated with operational risk) losses, and analyze actuarial insurance applications such as extensions of the Expected Value Premium Principle.
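For intuition only (this is not the authors' coherent extension), the empirical CVaR can be sketched as the average loss beyond the VaR quantile; the Pareto tail index a below is a hypothetical choice, and for a <= 1 the expectation, and hence this estimate, would diverge, which is exactly the case the paper addresses:

```python
import random

def cvar(losses, alpha=0.99):
    """Empirical Conditional Value at Risk: the average of the losses
    beyond the empirical alpha-quantile (the VaR level)."""
    ordered = sorted(losses)
    tail = ordered[int(alpha * len(ordered)):]
    return sum(tail) / len(tail)

# Deterministic check: the 80% CVaR of 1..10 averages the two largest losses.
print(cvar(list(range(1, 11)), alpha=0.8))  # 9.5

# Pareto(a) losses by inverse transform; the mean is finite only for a > 1.
random.seed(0)
a = 2.5
losses = [(1.0 - random.random()) ** (-1.0 / a) for _ in range(100_000)]
var99 = sorted(losses)[int(0.99 * len(losses))]
print(cvar(losses, 0.99) >= var99)  # True: CVaR always dominates VaR
```

The final comparison illustrates the coherence advantage of CVaR over VaR: the tail average is never below the quantile it starts from.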

ABSTRACT. Any risk measure can be represented by a preference relation. While risk measures on large spaces are fairly widely discussed in the literature, to the best of the author's knowledge there is no corresponding discussion of preference relations on large spaces. In this paper, we study monotone monetary preference relations on large spaces. It is shown that any monotone risk order can be induced by a unique minimal certain-equivalent risk measure. In other words, as long as an agent knows how to monotonically rank risks, the minimum capital requirement for a given risk variable is uniquely determined. Furthermore, it is shown that certain-strict monotonicity is a necessary and sufficient condition for the minimal and the maximal certain-equivalent risk measures to be equal. We also discuss natural risk measures introduced on large spaces.

ABSTRACT. The most important classical risk measures (coherent risk measures, expectation bounded risk measures, etc.) and deviations (absolute deviation, standard deviation, etc.) can be represented by means of their convex and weakly compact sub-gradients. Moreover, representation theorems play critical roles in risk estimation, risk management and risk optimization. Nevertheless, these representation theorems do not apply to the value at risk (VaR), because this risk measure is not sub-additive. This is a serious drawback when dealing with heavy tails, since VaR is the only finite risk measure when facing heavy tails with unbounded expected losses. In this paper we present a new VaR representation theorem that also applies to heavy-tailed risks, and show how this theorem simplifies both risk estimation and risk representation.

Stock prices dynamics through multidimensional linkages

ABSTRACT. The decisions and outcomes of financial firms are often influenced by different types of network associations. Prior research has focused on the propagation of economic shocks in determining systemic risk and firm-level volatility. Despite recent progress in the literature, the multidimensional interactions at the micro level and the role of network propagation in stock price dynamics require closer investigation to understand how contagion occurs and spreads across firms. Drawing on the view that stock prices propagate through multidimensional linkages between economically related companies, this study develops a dynamic network-based approach to characterize stock prices connected through supply-chain, competition and partnership channels. We derive the theoretical properties of our model, which reveal rich insights into endogenous feedback mechanisms as well as shock amplification patterns consistent with real data. Considering an empirical application to U.S. financial firms over 2003-2015, we find strong evidence of network effects on stock prices, whose impact varies with the business cycle.

Multiple testing for different structures of Spatial Dynamic Panel Data models

ABSTRACT. In the econometric field, spatio-temporal data are often modeled by means of spatial dynamic panel data (SDPD) models. In the last decade, several versions of the SDPD model have been proposed, each one based on different assumptions about the spatial parameters and different properties of the estimators. In particular, the classic version of the model assumes the spatial parameters to be constant over locations. Another version of the model, proposed recently and called Generalized SDPD, assumes that the spatial parameters are adaptive over locations.
In this work we propose a strategy for testing the particular structure of the spatial dynamic panel data model, by means of a multiple testing procedure that allows choosing between the generalized version of the model and some specific versions derived from the general one by imposing particular constraints on the parameters. The theoretical derivations of the testing procedure are made in the high-dimensional setup, where the number of locations may grow to infinity with the time series length. This also makes our proposal a nonstandard application of the multiple testing approach, since the dimension of the multiple testing scheme grows to infinity with the sample size. Moreover, an application to financial data is shown.

ABSTRACT. The crisis of the first decade of the 21st century has definitively changed the approaches used to analyze data originating from financial markets. This break, together with the growing availability of information, has led to a revision of the methodologies traditionally used to model and evaluate phenomena related to financial institutions.
In this context we focus attention on the estimation of bank defaults: a large literature has been devoted to modeling the binary dependent variable that characterizes this empirical domain, and promising results have been obtained from the application of regression methods based on extreme value theory. We consider, as dependent variable, a strongly asymmetric binary variable whose probabilistic structure can be related to the Generalized Extreme Value (GEV) distribution. Further, we propose to select the independent variables through proper penalty procedures and appropriate screenings of the data, which can be of great interest in the presence of large datasets.

Real-world versus risk-neutral measures in the estimation of an interest rate model with stochastic volatility

ABSTRACT. In this paper, we consider a two-factor jump-diffusion model with stochastic volatility to obtain yield curves efficiently. As this is a jump-diffusion model, the estimation of the market prices of risk is not possible unless a closed-form solution is known for the model. We therefore obtain some results that allow us to estimate all the risk-neutral functions, which are necessary to obtain the yield curves, directly from market data. As the market prices of risk are included in the risk-neutral functions, they can also be obtained. Finally, we use US Treasury Bill data, a nonparametric approach, numerical differentiation and a Monte Carlo simulation approach to obtain the yield curves. We then show the advantages of our approach and of considering volatility as a second stochastic factor in an interest rate model.

What if two different interest rate datasets describe the same financial product?

ABSTRACT. The possibility of choosing among more than one dataset to represent and describe the market movements of the same financial entity has noteworthy effects on practical quantifications. The case considered in the paper concerns two different datasets, deemed equivalent to each other, both referring to risk-free interest rates. In light of the discrepancies between the volatility term structures of the two databases, and using closed formulas that stochastically describe the behavior of the resulting valuation discrepancies by means of the Vasicek interest rate process, we present two relevant pieces of practical evidence. The aim is to quantify how much the use of one dataset rather than the other impacts the final result. The application concerns two derivative cases.

Dissecting Interbank Risk using Basis Swap Spreads

ABSTRACT. This paper analyses interbank risk using the information content of basis swap (BS) spreads, floating-to-floating interest rate swaps whose payments are tied to euro deposit rates of alternative tenors. To identify the nature of shocks affecting interbank risk, we propose an empirical model that decomposes BS quotes into their expected and unexpected components. These unobservable constituents of BS spreads are estimated by solving a signal extraction problem using a particle filter. We find that the expected components covary with aggregate liquidity and risk aversion, while systemic risk emerges as the main driver of unexpected fluctuations. Our empirical findings suggest that macroprudential analysis is a key device to ease asset pricing in a new multicurve scenario.

An individual risk model for premium calculation based on quantile: a comparison between Generalized Linear Models and Quantile Regression

ABSTRACT. In non-life insurance, it is important to develop a loaded premium for individual risks, defined as the sum of a pure premium (the expected value of the loss) and a safety loading.
In actuarial practice, classification ratemaking is usually performed via Generalized Linear Models; these permit estimating both the individual pure premium and the safety loading, but the safety loading estimation works only under strong assumptions.
In order to investigate the individual pure premium, we introduce a pricing model based on Quantile Regression, to perform classification ratemaking under weaker assumptions.
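The contrast between the two premium constructions can be sketched numerically on toy loss data (not the paper's model; in the paper the quantile is modelled by regression on rating factors rather than read off empirically):

```python
import statistics

def quantile_premium(losses, level=0.9):
    """Loaded premium read directly as an empirical quantile of the losses,
    instead of mean (pure premium) plus a separately estimated loading."""
    ordered = sorted(losses)
    idx = min(int(level * len(ordered)), len(ordered) - 1)
    return ordered[idx]

losses = [0, 0, 0, 120, 40, 0, 300, 0, 15, 60]   # hypothetical claim amounts
pure = statistics.mean(losses)       # 53.5: expected value of the loss
loaded = quantile_premium(losses)    # 300: 90% quantile-based premium
```

The gap between the two numbers is the implicit safety loading, which the quantile approach delivers without distributional assumptions on its size.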

ABSTRACT. Recent research has investigated possible bridges between ruin theory for the Cramer-Lundberg risk model in actuarial science and risk measures in risk management.

Insurance risk models typically decompose into claim frequency and claim severity components, but also include other elements such as the premium loading.

These proposed bridges are characterized by only some elements of the insurance risk process, typically the claim severity. Here we propose new risk measures based on solvency criteria that include all the insurance risk model components.

An application to the optimal capital allocation problem serves as an illustrative use of these new risk measures.

ABSTRACT. Ambiguity and heavy tails may affect many classical optimization problems of financial mathematics (e.g., portfolio choice), actuarial mathematics (e.g., optimal reinsurance) and other applications of operations research (e.g., the newsvendor problem). In this paper we deal with a very general set of priors in order to define a robust value at risk that also applies to heavy-tailed risks. New risk/uncertainty optimization methods are introduced which also apply when the robust expected losses are unbounded. Illustrative examples are presented.

Price convergence within and between the Italian electricity day-ahead and dispatching services markets

ABSTRACT. In the paper we study the convergence of prices in the electricity markets, both at the day-ahead level and for the dispatching services (such as balancing and reserves). We introduce two concepts of price convergence: the convergence of zonal prices within each market (within convergence), and the convergence of prices in a given zone between the two markets (between convergence). We provide an extensive analysis, based on Italian data, of within and between convergence. The zonal time series of prices are evaluated, seasonally adjusted and tested to assess their long-run properties. This evaluation leads us to focus on the behavior of the three largest and most interconnected continental zones of Italy (North, Center-North and Center-South). The fractional cointegration methodology used in the analysis shows the existence of long-run relationships among the series used in our study. This signals the existence of price convergence within markets, even though for the dispatching services market the evidence is less robust. The analysis also shows the existence of price convergence between markets in each zone, even though the evidence is clearer for the North (the largest Italian zone) than for the other two zones. Results are interpreted on the basis of the characteristics of the markets and the zones.

Forecasting the volatility of electricity prices by robust estimators: an application to the Italian market.

ABSTRACT. In this paper we introduce a doubly robust approach to modelling the volatility of electricity spot prices, minimizing the misleading effects of extreme jumps on the predictions. With respect to a particular stream of the literature on electricity price forecasting, which highlights the importance of predicting spikes, this paper moves the attention to correcting the impact that spikes have on the estimation of prices and, in particular, of their volatility. The idea is to allow the estimation process to restore the before-spike volatility. Our methodology is applied separately to the hourly time series of Italian electricity price data from January 1st, 2013 to December 31st, 2015.
The robustness of the method is twofold, because estimation is performed robustly both on the original (log) prices and on the resulting residuals, with a different, complementary, robust method. In the first step, we apply a threshold autoregressive model (SETARX) to the time series of logarithmic prices. The type and order of the best generating process are selected robustly to the presence of spikes, through robust tests for stationarity and nonlinearity and robust information criteria. A robust GM-estimator based on the polynomial weighting function is preferred to the non-robust least squares estimator due to its forecasting superiority, further improved by the introduction of external regressors. In the second step, the weighted forward search estimator (WFSE) for GARCH(1,1) models is applied to the residuals extracted from the first step in order to estimate and forecast volatility. The weighted forward search is a modification of the Forward Search intended to correct the effects of extreme observations on the estimates of GARCH(1,1) models. Unlike the original Forward Search, at each step of the search estimation involves all observations, which are weighted according to their degree of outlyingness. An automatic cut of the forward estimates allows simultaneously for the identification of the number of outlying values and the correction of the estimates. Modelling the residuals of the conditional mean equation via the WFS approach improves the forecasting ability of the global model, as the one-day-ahead prediction intervals generated with the WFS are tighter than the OLS ones, particularly in after-spike periods.

An electricity price index for volatile spatially dependent data

ABSTRACT. The environmentally desirable expansion of intermittent renewables represents a growth opportunity also for the financial sector. Indeed, penetration by renewables in the electricity markets positively correlates with electricity price volatility, hence motivating a higher demand for hedging, given the risk preferences. Yet, in zonal electricity markets such as the Italian Power Exchange (IPEX), such volatility can be mitigated simply by revising the national electricity price index, which is a weighted average of zonal prices. In the current formula, the weights equal zonal demand shares and hence do not reflect information about renewables or other sources of risk, so the current formula is unlikely to minimise volatility. The goal of the paper is to compute and estimate revised weights for the national price index (in Italy, the Prezzo Unico Nazionale, or PUN) that minimise the index volatility while using statistical information drawn from the time series of zonal electricity prices.

This paper includes a theoretical and an empirical part. In the theoretical part, the variance-minimising weights are computed under the assumption that zonal prices are generated by a spatial autoregressive (SAR) stochastic process, as in Abate and Haldrup (2017, Energy Journal). The weights are shown to depend on the SAR model coefficients, on the spatial weight matrix, and on zonal price variances and covariances. The empirical part of the paper estimates the SAR model on a sample of zonal day-ahead electricity prices in the IPEX, observed at a daily frequency, in a time window between 2011 and 2016, under various assumptions on the spatial weight matrix. The estimated coefficients and the empirical (co)variances of zonal prices are then used to estimate a national electricity price index according to the weights derived in the theoretical section. The volatility of the revised national price index compares favourably with the historical volatility of the PUN.
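Abstracting from the SAR structure, variance-minimising weights under a full-allocation constraint have the familiar closed form w = S^{-1} 1 / (1' S^{-1} 1) for a covariance matrix S; a sketch with a purely hypothetical zonal covariance matrix (not the paper's estimates):

```python
import numpy as np

def min_variance_weights(cov):
    """Weights minimising w' cov w subject to sum(w) = 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # proportional to cov^{-1} 1
    return w / w.sum()

# Hypothetical covariance matrix of three zonal day-ahead price series.
cov = np.array([[4.0, 1.0, 0.5],
                [1.0, 2.0, 0.3],
                [0.5, 0.3, 3.0]])
w = min_variance_weights(cov)
print(w.sum())      # sums to 1: the weights form a proper index
print(w @ cov @ w)  # no larger than the equal-weight index variance
```

In the paper the weights additionally reflect the SAR coefficients and the spatial weight matrix; the sketch only shows the variance-minimisation step.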

A Self-Excited Switching Jump Diffusion (SESJD): properties, calibration and hitting time

ABSTRACT. Careful observation of financial time series reveals that price jumps arrive in clusters. The existence of such jump clustering has important implications in many areas of finance. A way to deal with this clustering is to decompose the overall price variability into two components: a Brownian process and a self-excited jump process, called a Hawkes process. However, these processes have two important drawbacks. First, the calibration of self-excited jump diffusions is a difficult exercise. Second, their hitting time properties are unknown.

This article proposes a new alternative to Hawkes processes, based on regime switching processes. In a pure regime switching model, the parameters are modulated by a hidden Markov chain. These processes fail to reproduce the clustering of jumps because they use memoryless exponential random variables to define the length of the stay in a given regime. To remedy this issue, we construct a Markov chain with several ordered states. Each of these regimes corresponds to a value of the intensity of a discretized self-excited counting process. These intensities enter the definition of the matrix of transition probabilities. This matrix is designed so that when the chain moves to a higher state, the probability of climbing further up the scale of states increases instantaneously. If the chain does not move up, the probability that it falls down the scale of states also rises with time. This Markov chain serves in the modelling of the asset price dynamics. The asset returns are modelled by the sum of a diffusion and a jump process, where the price jumps are synchronized with the transitions of the Markov chain towards higher states. This model is called the Self-Excited Switching Jump Diffusion (SESJD) model.

Compared to Hawkes processes, the SESJD model presents several substantial advantages when jumps are exponential. First, the model is easy to fit: a slightly modified version of Hamilton's filter leads to a simple parameter estimation method. Second, the model explains various forms of option volatility smiles. Third, a fluid embedding technique can be applied in order to deduce properties of the hitting times of the SESJD. This leads to closed-form expressions for some exotic derivatives, such as perpetual binary options.

ABSTRACT. We derive a generic decomposition of the option pricing formula for models with finite-activity jumps in the underlying asset price process. Both constant and stochastic volatility jump diffusion (SVJ) models are considered for pricing European options. Not only do we derive an approximation for option prices, but we also obtain an approximation of implied volatility surfaces. In particular, we inspect in detail SVJ models of the Heston type. A numerical comparison is performed for the Bates model with log-normal jump sizes. The main advantage of the proposed pricing approximation lies in its computational efficiency, which is advantageous for many tasks in quantitative finance that need a fast computation of derivative prices.

Modeling High-Frequency Price Data with Bounded-Delay Hawkes Processes

ABSTRACT. Hawkes processes are a recent theme in the modeling of discrete financial events such as price jumps, trades and limit orders, basing the analysis on a continuous-time formalism. We propose to simplify computation in Hawkes processes via a bounded delay density. We derive an Expectation-Maximization algorithm for maximum likelihood estimation, and perform experiments on high-frequency interbank currency exchange data. We find that, while simplifying computation, the proposed model results in better generalization.
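The bounded-delay idea can be sketched with the simplest such density, a uniform kernel on [0, delay): only events inside that window excite the intensity, so the excitation sum has bounded length. All parameter values here are hypothetical, not the paper's estimates:

```python
def intensity(t, events, mu=0.5, alpha=0.3, delay=2.0):
    """Hawkes-type intensity with a uniform delay density on [0, delay):
    lambda(t) = mu + (alpha/delay) * #{events s with 0 <= t - s < delay}."""
    return mu + sum(alpha / delay for s in events if 0 <= t - s < delay)

events = [1.0, 1.5, 4.0]
print(intensity(2.0, events))  # excited by the two recent events
print(intensity(9.0, events))  # window empty: back to the baseline mu
```

With an unbounded (e.g. exponential) kernel every past event would enter the sum; truncating the delay is what keeps the likelihood computation cheap.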

A Single Factor Model for Constructing Dynamic Life Tables.

ABSTRACT. The objective of this paper is to develop a single factor model for constructing dynamic life tables. The paper seeks to identify the mortality rate that best explains the global behavior of life tables. Once this key rate is identified, we assume that changes in mortality rates depend linearly on changes in the key mortality rate. Next, we adjust the sensitivities of the changes in mortality rates to changes in the key mortality rate, using non-parametric methods. Assuming that the key rate follows a specific ARIMA process, it can be used to forecast future mortality rates. The resulting model has a structure similar to the well-known Lee-Carter (1992) model, but with the advantage that its parameters and variables can be easily identified. Finally, the forecasting ability of the model is tested using out-of-sample data from the Spanish experience. The results show that the proposed Single Factor Model significantly outperforms the Lee-Carter model.
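The core estimation step, changes in each age's mortality rate regressed linearly on changes in the key rate, can be sketched on simulated data (all values are hypothetical, not the Spanish data used in the paper, and a plain least-squares slope stands in for the non-parametric fit):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical changes in log mortality rates: 30 years x 5 age groups,
# driven by a single key mortality rate plus small idiosyncratic noise.
years, ages = 30, 5
key_changes = rng.normal(-0.02, 0.01, size=years)        # key-rate changes
true_beta = np.array([0.6, 0.8, 1.0, 1.2, 1.5])          # age sensitivities
d_log_m = np.outer(key_changes, true_beta) + rng.normal(0, 1e-3, (years, ages))

# Least-squares slope of each age's changes on the key-rate changes.
beta_hat = (d_log_m.T @ key_changes) / (key_changes @ key_changes)
print(np.round(beta_hat, 2))
```

Forecasting then reduces to projecting the single key rate (e.g. by an ARIMA model) and propagating it through the fitted sensitivities.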

Improving Lee-Carter forecasting: methodology and some results

ABSTRACT. The aim of the paper is to improve the Lee-Carter model by developing a methodology able to refine its predictive accuracy. Considering the discrepancies between the real data and the Lee-Carter outputs as relevant information, we model a measure of the fitting errors as a Cox-Ingersoll-Ross process. A "new" LC model, called mLC, is derived. We apply the results over a fixed prediction span and with respect to mortality data relating to Italian females aged 18 and 65, chosen as examples of the model application. Through a backtesting procedure within a static framework, the mLC model proves to outperform the LC model.

Risk and Uncertainty for Flexible Retirement Schemes

ABSTRACT. Nowadays, we are witnessing a widespread need to create flexible retirement schemes to face global ageing and prolonged working lives. Many countries have set up Social Security systems which link retirement age and/or pension benefits to life expectancy. In this context, we consider an indexing mechanism based on expected residual life expectancy to adjust the retirement age and keep a constant Expected Pension Period Duration (EPPD). The analysis assesses the impact of different stochastic mortality models on the indexation, by forecasting mortality paths on the basis of extrapolative methods. So far, however, the recent literature has paid little attention to the uncertainty related to model selection, despite its importance for appropriate estimates of the risk in mortality projections. With respect to the state of the art, our proposal considers model assembling techniques in order to balance fitting performance against the uncertainty related to model selection, as well as the uncertainty of parameter estimation.
The indexation mechanism obtained by linking the retirement age to the expected life span is tested in actuarial terms by assessing the implied reduction of costs, also under worst- and best-case scenarios. The analysis concerns the Italian population and also outlines gender differences.

A comparative analysis of neuro fuzzy inference systems for mortality prediction

ABSTRACT. Recently, neural networks (NN) and fuzzy inference systems (FIS) have been introduced in the context of mortality data. In this paper we implement an Integrated Dynamic Evolving Neuro-Fuzzy Inference System (DENFIS) for longevity predictions. It is an adaptive intelligent system where the learning process is updated thanks to a preliminary clusterization of the training data. We compare the results with other neuro fuzzy inference systems, such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), and with the classical approaches used in the mortality context. An application to the Italian population is presented.

References
1. Atsalakis G., Nezis D., Matalliotakis G., Ucenic C.I., Skiadas C., Forecasting Mortality Rate using a Neural Network with Fuzzy Inference System, Working Paper No 0806, University of Crete, Department of Economics, 2008, http://EconPapers.repec.org/RePEc:crt:wpaper:080
2. D’Amato V., Piscopo G., Russolillo M., Adaptive Neuro-Fuzzy Inference System vs Stochastic Models for mortality data, Smart Innovation, Systems and Technologies, Springer, 26, 2014, 251-258
3. Jang J.S.R., ANFIS: Adaptive-Network-based Fuzzy Inference Systems, IEEE Transactions on Systems, Man and Cybernetics, 23, 1993, 665-685
4. Kasabov N.K., Song Q., DENFIS: Dynamic Evolving Neuro-Fuzzy Inference System and its application for time-series prediction, IEEE Transactions on Fuzzy Systems, 10(2), 2002
5. Lee R.D., Carter L.R., Modelling and Forecasting U.S. Mortality, Journal of the American Statistical Association, 87, 1992, 659-671
6. Piscopo G., Dynamic Evolving Neuro Fuzzy Inference System for mortality Prediction, International Journal of Engineering Research and Application, 2017
7. Takagi T., Sugeno M., Fuzzy identification of systems and its application to modelling and control, IEEE Transactions on Systems, Man and Cybernetics, 15(1), 1985, 116-132.

Bayesian semiparametric multivariate stochastic volatility with an application to international volatility co-movements

ABSTRACT. In this paper, we establish a Cholesky-type multivariate stochastic volatility estimation framework, in which we let the innovation vector follow a Dirichlet process mixture, thus enabling us to model highly flexible return distributions. The Cholesky decomposition allows parallel univariate process modeling and creates potential for estimating high-dimensional specifications. We use Markov Chain Monte Carlo methods for posterior simulation and predictive density computation. We apply our framework to a five-dimensional stock-return data set and analyze international volatility co-movements among the largest stock markets.

Geographic Dependence and Diversification in House Price Returns: the Role of Leverage

ABSTRACT. We analyze time variation in the average dependence within a set of regional monthly house price index returns in a regime switching multivariate copula model with a high and a low dependence regime. Using equidependent Gaussian copulas, we show that the dependence of house price returns varies across time, which reduces the gains from the geographic diversification of real estate and mortgage portfolios. More specifically, we show that a decrease in leverage, and to a lesser extent an increase in mortgage rates, is associated with a higher probability of moving to and staying in the high dependence regime.

A Comparison of Limited Information Estimators in Dynamic Simultaneous Equations Models

ABSTRACT. This paper shows that Fuller limited information maximum likelihood (FLIML) estimation gives much less biased estimates in general dynamic simultaneous equations models than two-stage least squares (2SLS) estimation. This is because FLIML completely removes the simultaneity bias to order 1/T and partially removes the dynamic bias, both of which are present in 2SLS. We also analyse the Corrected 2SLS (C2SLS) and Corrected FLIML (CFLIML) estimators, where the correction is based on the estimated bias approximation. Monte Carlo experiments show that both C2SLS and CFLIML give almost unbiased estimates. In addition, C2SLS and CFLIML do not lead to an inflation of the mean squared errors compared with the associated uncorrected estimators. We suggest that the corrected estimators, based upon O(1/T), should be used to reduce the bias of the original estimators in small samples.

Reexamining financial and economic predictability with new estimators of realized variance

ABSTRACT. This paper explores the predictive power of new estimators of the conditional variance of stock market returns and of the equity variance risk premium for economic activity and financial instability. These estimators are obtained from new parametric and semiparametric asymmetric extensions of the heterogeneous autoregressive (HAR) model. Using the new specifications, we find that the equity variance risk premium predicts future stock returns, while realized variance is often rejected as a predictor at short and moderate horizons. Both variables also predict financial instability and, for some horizons, economic activity.
All in all, the new semiparametric models with asymmetric measures of realized variance considerably improve the predictive power of the variance risk premium for stock market returns and financial instability.

Bayesian Factorization Machines for Risk Management and Robust Decision Making

ABSTRACT. When considering different allocations of a firm's marketing budget, predictions corresponding to scenarios similar to those observed in the past can be made with more confidence than predictions corresponding to more innovative strategies. Selecting a few relevant features of the predicted probability distribution leads to a multi-objective optimization problem, and the Pareto front contains the most interesting media plans. Using expected return and standard deviation yields the familiar two-moment decision model, but other problem-specific objectives can be incorporated. The Factorization Machine kernel, initially introduced for recommendation systems but later also used for regression, is a good choice for incorporating interaction terms into the model, since it can effectively exploit the sparse nature of the datasets typically found in econometrics.
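A hedged sketch of the second-order Factorization Machine prediction rule mentioned in the abstract: pairwise interaction weights are factorized as inner products of latent vectors, and a well-known algebraic identity evaluates all interactions in O(n*k) rather than O(n^2) time:

```python
import numpy as np

# y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j, with the
# interaction term computed via the identity
# sum_{i<j} <v_i,v_j> x_i x_j = 0.5 * sum_f [ (sum_i v_if x_i)^2
#                                             - sum_i v_if^2 x_i^2 ].
def fm_predict(X, w0, w, V):
    linear = X @ w
    inter = 0.5 * (((X @ V) ** 2).sum(axis=1) - ((X ** 2) @ (V ** 2)).sum(axis=1))
    return w0 + linear + inter

rng = np.random.default_rng(3)
n_feat, k = 6, 3
X = rng.standard_normal((4, n_feat))
w0, w, V = 0.5, rng.standard_normal(n_feat), rng.standard_normal((n_feat, k))
fast = fm_predict(X, w0, w, V)

# Brute-force evaluation of all pairwise interactions, for comparison.
slow = np.full(X.shape[0], w0) + X @ w
for i in range(n_feat):
    for j in range(i + 1, n_feat):
        slow += (V[i] @ V[j]) * X[:, i] * X[:, j]
```

The factorized form is what makes the kernel usable on the sparse, high-dimensional design matrices the abstract alludes to.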

Socially Responsible Ratings and Financial Performance

ABSTRACT. Companies included on a sustainability index meet several criteria based on an assessment of their economic, environmental and social practices. Each of these companies satisfies a different number of criteria, and these standards can be quite different in quality and rigor. In this sense, RobecoSAM provides a Corporate Sustainability Assessment of the companies included in the Dow Jones Sustainability Index. The three proposed classes (Gold, Silver and Bronze) can be considered as socially responsible (SR) ratings. We therefore examine the financial performance of portfolios composed of stocks according to these ratings. We assume that highly conscious SR investors could base their portfolio decision-making process on these SR ratings. Using an extensive dataset, our results show that SR investments not only carry no cost for investors but also outperform the market. Additionally, there are no significant differences among SR portfolios depending on the SR rating.

Household wealth and portfolio choice when tail events are salient

ABSTRACT. Robust experimental evidence of expected utility violations establishes that individuals overweight utility from low probability gains and losses. These findings motivated development of rank dependent utility (RDU). We characterize optimal RDU portfolios for investors facing dynamic, binomial returns. Our calibration shows optimal terminal wealth has significant downside protection, upside exposure, and a lottery component. Optimal dynamic trades require higher risky share after good returns and, possibly, nonparticipation when returns are poor. RDU portfolios counterfactually exhibit excessive elasticity of risky share to wealth and momentum rebalancing. Our results suggest a puzzling inconsistency between behavior inside and outside the laboratory.

A Bi-level Programming Approach for Global Investment Strategies with Financial Intermediation

ABSTRACT. Most mathematical programming models for investment selection and portfolio management rely on centralized decisions about both budget allocation across different (real and financial) investment options and portfolio composition within each option. However, in more realistic market scenarios investors do not directly select the portfolio composition, but only provide guidelines and requirements for the investment procedure. Financial intermediaries are then responsible for the detailed portfolio management, resulting in a hierarchical investor-intermediary decision setting. In this work, a bi-level mixed-integer quadratic optimization problem is proposed for the decentralized selection of a portfolio of financial securities and real investments. Single-level reformulation techniques are presented, along with valid inequalities that speed up the resolution procedure when large-scale instances are considered. We conducted computational experiments on large historical stock market data from the Center for Research in Security Prices to validate and compare the proposed bi-level investment framework (and the resulting single-level reformulations) under different levels of investor and intermediary risk aversion and control. The empirical tests revealed the impact of decentralization on investment performance and provide a comparative analysis of the computational effort associated with the proposed solution approaches.

ABSTRACT. The identification of an appropriate dependence structure in multivariate data is not straightforward.
In recent years, copulas have gained popularity as statistical tools for detecting and modeling the dependence between several random variables. In the bivariate case, many copula families have been proposed and used.

Bivariate Functional Archetypoid Analysis: An Application to Financial Time Series

ABSTRACT. Archetype Analysis (AA) is a statistical technique that describes the individuals of a sample as convex combinations of a certain number of elements called archetypes, which in turn are convex combinations of the individuals in the sample. For its part, Archetypoid Analysis (ADA) represents each individual as a convex combination of a certain number of extreme subjects called archetypoids. Both techniques can be extended to functional data.
This work presents an application of Functional Archetypoid Analysis (FADA) to financial time series. To the best of our knowledge, this is the first time FADA has been applied in this field. The starting time series consists of the daily equity prices of the S&P 500 stocks. From it, measures of volatility and profitability are generated in order to characterize the listed companies. These variables are converted into functional data through a Fourier basis expansion, and bivariate FADA is applied. By representing subjects through extreme cases, this analysis facilitates the understanding of both the composition of and the relationships between listed companies. Finally, a clustering methodology based on a similarity parameter is presented. The suitability of this technique for this kind of time series is thus shown, as well as the robustness of the conclusions drawn.

A spatio-temporal model for the inequality in the income distribution of Italy

ABSTRACT. The inequality level of an income distribution is a very important indicator for many analyses regarding the structure and degree of wealth of a society. A deep study of inequality can often provide important insights and sometimes even explain the trends of many fundamental macro-indicators in economics as well as in finance.
In recent years, thanks also to the explosion of data availability, the use of geolocalization has become increasingly widespread in every research field. Many analyses of economic and financial phenomena have been approached using ever more complex geostatistical tools.
The aim of the present paper is to propose and investigate the features of a new model for representing the inequality of the income distribution within a particular geographical area.
The main innovation of this model is its capacity to simultaneously model the temporal behaviour of inequality and its modification due to the spatial component: this additional information can improve the model's predictive performance.
The theoretical findings are illustrated by an application to real data from the Household Income and Wealth sample survey, conducted every two years by the Bank of Italy.

Pricing illiquid assets by entropy maximization through linear goal programming

ABSTRACT. In this contribution we study the problem of retrieving a risk-neutral probability (RNP) in an incomplete market, with the aim of pricing (possibly illiquid) assets and hedging their risk. The pricing issue has often been addressed in the literature by finding an RNP with maximum entropy through the minimization of the Kullback-Leibler divergence. Under this approach, the efficient market hypothesis is modelled by the maximum entropy RNP. The methodology consists of three steps: first, simulating a finite number of market states of some underlying stochastic model; second, choosing a set of assets (called benchmarks) with characteristics close to the given one; and third, calculating an RNP by minimizing its divergence from the maximum entropy distribution over the simulated market states, i.e. from the uniform distribution. This maximum entropy RNP must price the benchmarks exactly at their quoted prices. Here we proceed differently, minimizing another divergence, namely the total variation distance, by means of a two-step linear goal programming method. The super-replicating portfolios (not supplied by the Kullback-Leibler approach) would then be derived as solutions of the dual linear programs.
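The total-variation goal program can be illustrated on a toy market (four states, one benchmark asset, invented numbers): minimize the sum of deviations t_i subject to |q_i - u_i| <= t_i, probabilities summing to one, q >= 0, and exact repricing of the benchmark:

```python
import numpy as np
from scipy.optimize import linprog

N = 4
u = np.full(N, 1.0 / N)                     # uniform (max-entropy) reference
payoffs = np.array([[0.0, 1.0, 2.0, 3.0]])  # one benchmark asset (assumed)
price = np.array([1.8])                     # its quoted price (no discounting)

# Decision vector z = [q_1..q_N, t_1..t_N]; minimize sum of t.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)],              #  q - t <= u
                 [-np.eye(N), -np.eye(N)]])            # -q - t <= -u
b_ub = np.concatenate([u, -u])
A_eq = np.block([[np.ones((1, N)), np.zeros((1, N))],  # probabilities sum to 1
                 [payoffs, np.zeros((1, N))]])         # benchmark repriced
b_eq = np.concatenate([[1.0], price])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
q = res.x[:N]
```

In this toy problem the cheapest way to raise the benchmark price from 1.5 to 1.8 is to shift 0.1 of mass from the lowest-payoff state to the highest, giving a total variation cost of 0.2; the paper's second step and the dual super-replication programs are not reproduced here.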

The effect of rating contingent guidelines and regulation around credit rating news

ABSTRACT. This paper investigates the effect of the rating-based portfolio restrictions that many institutional investors face on the trading of their bond portfolios. In particular, we explore how credit rating downgrades affect bondholders that are subject to such rating-based constraints in the US corporate bond market. We go beyond the well-documented investment grade (IG) threshold by analyzing downgrades crossing the boundaries usually used in investment policy guidelines. We posit that the informativeness of rating downgrades differs according to whether or not they imply crossing investment-policy thresholds. We analyze corporate bond data from the TRACE dataset to test our main hypothesis and find a clear response around the announcement date, consistent with portfolio adjustments made by institutions to fulfil their investment requirements for riskier assets.

Life insurers’ asset-liability dependency and low-interest-rate environment

ABSTRACT. The current environment of low, and even negative, interest rates is a significant challenge for financial intermediaries. In particular, low interest rates could negatively affect the profitability and solvency of insurance companies, because of the large amount of fixed-term investments on the asset side of their balance sheet and because of the reduction of the discount rates applied to value their liabilities. The negative effect could be exacerbated by the wide diffusion of insurance instruments embedding financial guarantees, in terms of minimum payouts, that were sold before the unanticipated decline of interest rates.
Life insurers decide which assets to buy and which liabilities to sell under uncertainty, and the outcomes involve interactions among the assets, among the liabilities, and between the asset and liability sides of their balance sheet. In this respect, the mixture of assets and liabilities chosen by a life insurer can be seen as a basic portfolio theory decision. Using a canonical correlation analysis, we examine the internal structure of these portfolio decisions. The purpose of this research is to shed more light on the relationships between life insurers' assets and liabilities and to investigate how these relationships have evolved during recent years, when the ECB's monetary policy decisions drove market rates to unprecedentedly low levels.
In our empirical analysis we measure the relationships among and between the asset and liability accounts of major EU life insurers in 2007, 2011 and 2015. Insurance companies seem to run their business as if they decide their funding policies after identifying good investment opportunities. We find strong and substantial evidence that insurers' assets and liabilities have indeed become more independent over time. We argue that the declining trend of market interest rates over the examined time horizon has contributed to a generalized reduction in the linkage between the asset side and the liability side of EU life insurers, and has made insurance companies more exposed to ALM-related risks relative to the period before the financial crisis broke out.
Further investigation and a deeper comprehension of the relations between insurer assets and liabilities are crucial from both a regulatory and a supervisory perspective, since they might help to define qualitative and quantitative liquidity requirements that are more consistent with insurers' actual behaviour, during both benign market conditions and stressed financial markets. Further analysis of the asset-liability linkages is needed to generate more robust insurer-level evidence. One potential development would be to apply canonical correlation analysis to time-series data at the insurer level. This would generate insurer-specific estimates of canonical correlations, which could then be regressed on insurer-specific covariates to test a variety of hypotheses.

Insurance premium principles to price futures contracts. Applications in energy markets.

ABSTRACT. It is well known that the distortion premium principle is characterized by desirable properties for pricing insurance contracts. This premium is a simple expected value calculated with distorted probabilities, and it can be seen as a risk measure that also fulfils desirable continuity properties.
The motivation of this talk lies in how to price futures contracts in energy markets. Given that energy is a non-storable commodity, we propose to use insurance premium principles as mentioned above. A characteristic of this market is that the risk premium is sometimes negative; we therefore allow the insurance premium to lie below the mean. On the one hand, there is the direct problem, in which we choose a premium and a probability distribution in order to calculate the price. In this case, the probability distribution refers to the underlying of the futures contract. In general, the true probability is unknown, and hence we incorporate ambiguity into the model in order to calculate the premium. For that reason, we would like continuity results for this premium with respect to the choice of the probability measure; we present such results using the Wasserstein distance. On the other hand, we study the inverse problem: whether the distortion density can be recovered from prices observed in the market when the distortion premium principle is used. We also study these properties for different insurance premium principles.
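A numerical sketch of a distortion premium, here with the proportional-hazard distortion g(u) = u^c applied to a toy exponential risk (all numbers assumed): the premium is the integral of the distorted survival function, and c > 1 produces a premium below the mean, the feature needed to accommodate negative risk premia:

```python
import numpy as np

# Distortion premium for a nonnegative risk with survival function S:
# Premium = integral_0^inf g(S(x)) dx, with g(u) = u^c here.
def ph_distortion_premium(survival, c, grid):
    dx = grid[1] - grid[0]
    return float(np.sum(survival(grid) ** c) * dx)   # simple Riemann sum

lam = 2.0                                  # Exp(lam) toy risk, mean 0.5
S = lambda x: np.exp(-lam * x)
grid = np.linspace(0.0, 30.0, 200001)      # tail beyond 30 is negligible

mean = ph_distortion_premium(S, 1.0, grid)        # no distortion: the mean
loaded = ph_distortion_premium(S, 0.8, grid)      # tail-loaded, above mean
discounted = ph_distortion_premium(S, 1.25, grid) # premium below the mean
```

For the exponential risk these integrals are available in closed form, 1/(lam*c), which makes the sketch easy to verify: 0.5, 0.625 and 0.4 respectively.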

Two–Sided Skew and Shape Dynamic Conditional Score Models

ABSTRACT. In this paper we introduce the family of Two-Sided Skew and Shape distributions as an extension of the skewing mechanism proposed by Fernández and Steel (1998), accounting also for asymmetry in the tail decay. The proposed distributions account for many of the stylised facts frequently observed in financial time series, except for the time-varying nature of moments of any order. To this end, we extend the model to a dynamic framework by means of the score updating mechanism recently introduced by Harvey (2013) and Creal et al. (2013). The asymptotic theory of the proposed model is derived under mild conditions.

ABSTRACT. Almost riskless investment opportunities are a fundamental innovation of recent developments in asset pricing theory. In this paper, I introduce a related trading scheme involving two options and two asynchronous operations: a limit order for one of the assets and, once the limit order is executed, a market order for the other. A model integrating option pricing and order arrivals explains the proximity of this strategy to a pure arbitrage. In particular, since it satisfies the requisites of an approximate arbitrage opportunity, I refer to it as a limit order approximate arbitrage. An empirical study on a novel option data set confirms that market participants actively invest in these trades. The analysis also reveals the presence of short-lived pure arbitrage opportunities in the market, promptly exploited by arbitrageurs.

Reducing the risk of adverse selection in the pricing process. A practical case with a two-stage cluster procedure and a Hurdle negative binomial GLM

ABSTRACT. The objective of this work is to develop a non-life risk pricing method that reduces the risk of adverse selection inherent in traditional methods.

Traditional techniques seek the distributions that best fit the variables "number of claims" and "claim amount". The expected values of these two variables are then obtained, and their product forms the pure premium (the expected value of the total loss). The problem with this approach is that, unless the portfolio is very homogeneous, it penalizes "good" insureds while benefiting "bad" ones. This phenomenon is part of what is known as the "risk of adverse selection".

To avoid this problem, instead of considering all individuals together, multivariate techniques have traditionally been used to classify and segment customer profiles based on risk factors, with the aim of creating classes that are as homogeneous as possible. Thus, instead of estimating the parameters of the above distributions with the data of the entire portfolio, this is done with the data of each class. It should be noted that all the available information about the clients is used to create the classes, while only the information on the number and amount of claims is used to estimate the parameters of the model within each class.

On the other hand, Generalized Linear Models (GLMs) have been replacing traditional non-life risk pricing techniques in recent years. GLMs allow the creation of models in which the expected values of the claim number and claim amount variables are estimated from the coefficients of the significant risk factors (previously estimated from the data of the entire portfolio) and the values of the explanatory variables obtained directly from the insured.

Although the GLM approach takes individual information (collected in the values of the explanatory variables) into account when adjusting the risk rate, the coefficients of the model are estimated using the data of the entire portfolio, which again leads to a situation of adverse selection.

This paper presents a combination of the two previous ideas: the proposal is to apply a GLM within each of the classes obtained from a segmentation process. This procedure is applied to a motor insurance data set made up of 10,302 individuals and 26 variables. In particular, the portfolio was first segmented through a two-stage cluster procedure, a method especially suitable for databases containing both quantitative and qualitative variables. Once the segmentation was obtained, the expected value of the number of claims was estimated by applying a Hurdle negative binomial GLM, and a Gamma GLM was used to estimate the expected value of the claim amount variable. The comparison of the results with those obtained from a single GLM fit shows how prior cluster modeling reduces the risk of adverse selection in the pricing process.
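The economics behind the procedure can be seen in a toy portfolio (assumed numbers, not the paper's data): with the pure premium equal to expected claim count times expected claim amount, a single portfolio-wide premium overcharges low-risk insureds and undercharges high-risk ones, which is the adverse selection that class-level estimation mitigates:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50000
good = rng.random(n) < 0.7                 # 70% low-risk insureds
freq = np.where(good, 0.05, 0.20)          # expected claim counts
n_claims = rng.poisson(freq)
sev_mean = np.where(good, 800.0, 1500.0)   # expected claim amounts
# Total annual loss per insured: sum of Gamma claim amounts.
losses = np.array([rng.gamma(2.0, m / 2.0, k).sum()
                   for m, k in zip(sev_mean, n_claims)])

portfolio_premium = losses.mean()          # one premium for everyone
premium_good = losses[good].mean()         # class-level pure premiums
premium_bad = losses[~good].mean()
```

The class-level premiums sit near the true values 0.05*800 = 40 and 0.20*1500 = 300, while the single premium (around 118) mismatches both classes; in the paper the classes come from the two-stage cluster step and the within-class expectations from the Hurdle negative binomial and Gamma GLMs.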

Price informativeness and rating revisions: Effects of reputational events and regulation reforms

ABSTRACT. This paper analyses how the reputational and regulatory shocks that occurred in the U.S. during the last two decades have affected the information content of the rating adjustments announced by the main global Credit Rating Agencies (CRAs). We analyse stock price synchronicity as a measure of price informativeness and find that investors perceive as more informative the downgrades announced after the Enron/WorldCom and Lehman Brothers defaults. Investors also change their perception of relevant characteristics of the downgrade, such as the rating prior to the change. This effect is not observed for upgrades. This result indicates that investors perceive the reputational events as a signal of rating inflation, giving more credibility to downgrades than to upgrades. We also analyse the effect of all changes in rating-contingent regulation in our sample period and find that those that increase the reliance on ratings lower the information content of rating changes, whereas new regulation that increases competition among CRAs does not always have a positive impact on the information impounded in prices by rating changes.

Do Google Trends Help to Forecast Sovereign Risk in Europe?

ABSTRACT. The aim of this paper is to analyze whether the Internet helps to forecast the evolution of sovereign bond yields in European countries. Namely, we use what has been called Google econometrics (Fondeur and Karamé, 2013). The economic literature has started to address this issue in recent years (Ben-Rephael et al., 2017; Da et al., 2011, 2015; Gao and Süss, 2015; Siganos, 2013, among others) and it is a topic of growing interest among researchers and professionals.
Specifically, Google econometrics refers to the data obtained from the Google Trends tool. This instrument provides indexed data on the number of queries for specific keywords over time; this proxy is usually called the Google Search Volume Index (GSVI). In this paper we analyse the ability of the GSVI to forecast the evolution of sovereign bond yields in European countries, since Europe has faced a critical sovereign debt crisis during the last few years, with a great impact on sovereign bond yields and risk premiums, especially for peripheral countries such as Greece, Portugal, Ireland, Italy or Spain. With this paper, which to the best of our knowledge is pioneering in this kind of study along with Dergiades et al. (2015), we attempt to determine whether Internet activity is a useful tool for forecasting changes in sovereign yields; if so, it provides a valuable signal of ups and downs in yields that is helpful for financial market participants.
For this purpose, we focus on 27 European countries and test through VAR models and Granger causality tests the relationship between the GSVI and 10-year sovereign bond yields. The results indicate that there exists causality between Google searches and variations in sovereign bond yields. Furthermore, this phenomenon is stronger in the peripheral countries most affected by the crisis, such as Greece or Portugal.
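A minimal version of the causality test used in the paper: a one-lag Granger F-test coded from scratch on simulated series in which x genuinely leads y (toy data; the paper works with full VAR specifications and real GSVI/yield series):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
T = 500
x = rng.standard_normal(T)                 # stand-in for the GSVI
y = np.zeros(T)                            # stand-in for yield changes
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()

def rss(Y, X):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    return float(e @ e)

Y = y[1:]
ones = np.ones(T - 1)
X_r = np.column_stack([ones, y[:-1]])            # restricted: own lag only
X_u = np.column_stack([ones, y[:-1], x[:-1]])    # unrestricted: plus lag of x
rss_r, rss_u = rss(Y, X_r), rss(Y, X_u)

q, df = 1, (T - 1) - X_u.shape[1]                # 1 restriction tested
F = (rss_r - rss_u) / q / (rss_u / df)
p_value = 1.0 - stats.f.cdf(F, q, df)            # reject: x Granger-causes y
```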

Optimal investment and consumption for financial markets generated by the spread of risky assets under power utility

ABSTRACT. We consider a spread financial market defined by the Ornstein-Uhlenbeck (OU) process. We construct the optimal consumption/investment strategy for the power utility function. We study the Hamilton-Jacobi-Bellman (HJB) equation through the Feynman-Kac (FK) representation and show an existence and uniqueness theorem for the classical solution. We also study the numerical approximation and establish the convergence rate, which in this case turns out to be super-geometric, i.e., more rapid than any geometric rate.

Could Machine Learning predict the Conversion in Motor Business?

ABSTRACT. The aim of this paper is to calibrate and compare different Machine Learning (ML) techniques for one of the most complex and relevant behaviours observable in the insurance market: the conversion rate. The selected perimeter is Motor Third Party Liability (MTPL) for cars.
Defining the conversion rate as the ratio between policies issued and quote requests, a good prediction of this KPI produces at least two main advantages for an insurer: increased competitiveness, which is especially important when the underwriting cycle shows a softening period; and effective price changes, since a company could identify rate changes or dedicated discounts coherent with the conversion and profitability estimated for each potential client asking for a quote, both needed to develop a pricing optimisation tool.
The Generalized Linear Model (GLM), the standard for pricing and predictive modelling in the insurance market, is used as the frame of reference.
We introduce and calibrate the Classification and Regression Tree (CART), Random Forest and Gradient Boosted Tree ML algorithms. The measures adopted to select the best predictive model are: logarithmic loss error, average probability, accuracy, precision, recall, ROC curve area and Fisher's score.
The Random Forest model has the highest recall, while the Gradient Boosted Tree is the most precise model. The Random Forest outperforms the GLM benchmark model with respect to every measure considered.
The variable importance and the strength index, computed from the ML models and the GLM respectively, describe how far the different algorithms agree on the most relevant explanatory variables.
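The model-selection metrics listed above are straightforward to compute; a toy sketch on invented conversion outcomes and predicted probabilities:

```python
import numpy as np

# y: quote converted (1) or not (0); p: model's conversion probability.
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6])
yhat = (p >= 0.5).astype(int)              # hard classification at 0.5

log_loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
accuracy = np.mean(yhat == y)
tp = np.sum((yhat == 1) & (y == 1))
precision = tp / np.sum(yhat == 1)         # correct among predicted converts
recall = tp / np.sum(y == 1)               # converts actually caught
```

ROC area and Fisher's score would be computed on the same (y, p) pairs; they are omitted from this sketch.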

The Islamic Financial Industry. Performance of Islamic vs. conventional sector portfolios

ABSTRACT. This paper studies the basic principles of the Islamic financial system in order to identify the features that make it more solid and stable than the conventional financial system during financial crises. In addition, this research carries out a comparison between conventional and Islamic sector portfolios for the period from January 1996 to December 2015, using different performance measures. Specifically, the performance measures used in this paper include Jensen's, Treynor's, Sharpe's and Sortino's classical performance ratios and two of the most recent and accurate performance measures that take into account the four statistical moments of the probability distribution function: the Omega ratio and the MPPM statistic. In addition, for robustness, this paper analyses whether the performance of conventional and Islamic sector portfolios depends on the state of the economy, by splitting the whole sample period into three sub-periods: pre-crisis, crisis and post-crisis. The paper thus determines which type of portfolio offers better performance depending on the economic cycle. The main results confirm that, in general, the best performing sector is Health Care, while the worst performing sector is Financials. Furthermore, Islamic portfolios provide higher returns than conventional portfolios during the full period as well as in the three sub-periods. The low level of uncertainty and speculation in Islamic finance and the prohibition of interest rates that negatively affect economic development may explain the greater profitability obtained by the Islamic sectors, even during the crisis sub-period.
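Two of the classical ratios used in the comparison can be sketched in a few lines (toy monthly returns; annualization and the handling of the risk-free rate are simplifying assumptions of this sketch):

```python
import numpy as np

def sharpe(r, rf=0.0):
    # Mean excess return per unit of total volatility.
    ex = r - rf
    return ex.mean() / ex.std(ddof=1)

def sortino(r, target=0.0):
    # Mean excess return per unit of downside deviation only.
    ex = r - target
    downside = np.sqrt(np.mean(np.minimum(ex, 0.0) ** 2))
    return ex.mean() / downside

r = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04, 0.00, 0.02, -0.01, 0.03])
s_sharpe, s_sortino = sharpe(r), sortino(r)
```

Because the Sortino ratio penalizes only downside deviations, it exceeds the Sharpe ratio for return series whose losses are mild relative to total volatility, as in this toy sample.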

Asian option pricing under Ornstein–Uhlenbeck dynamics

ABSTRACT. Asian options are derivative contracts written on an average price. More precisely, prices of an underlying security (or index) are recorded on a set of dates during the lifetime of the contract.
They are quite popular among commodity derivative traders and risk managers. Eydeland and Wolyniec [2003] provide an example of how these derivatives play an important role in price risk management performed by local delivery companies in the gas market.
In this paper we implement several pricing procedures for continuously monitored fixed-strike Asian call options when the underlying follows mean-reverting Ornstein-Uhlenbeck dynamics. This assumption is more realistic than the usual geometric Brownian motion in contexts such as commodity markets, where mean reversion is widely observed (see for example Bessembinder et al. [1995]).
From a mathematical point of view, the problem turns out to be related to the sum of correlated log-normal random variables, whose distribution is unknown and very complicated (see for example Dothan [1978] and Privault and Yu [2016] and the references therein).
Nevertheless, the moments of this distribution are known. For this reason we first solve the problem with a "moment matching" algorithm, which works as follows: a distribution is posited for the sum of correlated log-normal random variables, and its parameters are estimated by equating the known moments with those implied by the posited distribution. We therefore first calculate the moments of the sum explicitly and then approximate it with some distribution. In principle, any unimodal random variable with positive support can serve as the posited distribution. We first reconsider some approximations previously introduced in the literature: the log-normal (as in Turnbull and Wakeman [1991]), the gamma (as in Milevsky and Posner [1998] and Chang and Tsao [2011]) and the reciprocal gamma (as in Milevsky and Posner [1998] and Lo, Palmer and Yu [2014]); we then investigate the Birnbaum-Saunders distribution as an alternative approximation. To the best of our knowledge, this is the first attempt to use this random variable in the context of derivatives pricing. In all these cases we consider both two- and three-parameter settings.
The second approximation we propose is the Lower Bound; this method consists in approximating the exercise region with the geometric average. In particular, we recover the results of Rogers and Shi [1992] and Thompson [1998] in a context where the underlying follows an Ornstein-Uhlenbeck process.
In order to evaluate the accuracy of the price approximations, we implement Monte Carlo procedures using the proposed approximations as control variates. Although using the Lower Bound as a control variate is straightforward, implementing the "moment matching" procedure in this context is more involved, and we introduce an ad hoc algorithm for it.
Numerical experiments show that all these pricing procedures are accurate. Introducing a third parameter increases the precision but requires a larger computational effort, since multiple numerical integration is required. The Lower Bound technique turns out to ensure the best trade-off between accuracy and computational effort.
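A plain Monte Carlo benchmark for this pricing problem can be set up with the log-price following an exact-discretized Ornstein-Uhlenbeck process (illustrative parameters, zero interest rate, daily monitoring as a stand-in for continuous monitoring, and none of the paper's control variates):

```python
import numpy as np

rng = np.random.default_rng(8)
kappa, theta, sigma = 2.0, np.log(100.0), 0.3     # OU parameters (assumed)
K, T, n_steps, n_paths = 100.0, 1.0, 252, 20000   # fixed-strike Asian call

dt = T / n_steps
e = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1.0 - e * e) / (2.0 * kappa))  # exact OU transition std

X = np.full(n_paths, np.log(100.0))               # log-price starts at theta
running_sum = np.zeros(n_paths)
for _ in range(n_steps):
    X = theta + (X - theta) * e + sd * rng.standard_normal(n_paths)
    running_sum += np.exp(X)                      # accumulate the price path

avg = running_sum / n_steps                       # arithmetic average price
payoff = np.maximum(avg - K, 0.0)
price = payoff.mean()                             # zero interest rate assumed
stderr = payoff.std(ddof=1) / np.sqrt(n_paths)
```

The moment-matching and Lower Bound approximations of the paper would be compared against (or plugged in as control variates for) this kind of crude estimate.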

We conclude the paper by considering some extensions; in particular, we add jumps to the Ornstein-Uhlenbeck dynamics, considering normally (as in Merton [1976]) and double-exponentially (as in Kou [2002]) distributed jumps. The only method available in this context is the "moment matching" algorithm. We derive formulas for the first three moments in this framework and use the above-mentioned random variables as posited distributions.
Our numerical experiments show that in this context a third parameter is necessary for accuracy, since two-parameter approximations do not capture the fat tails. Among the approximations considered, the shifted reciprocal gamma seems to ensure the best results, but we are currently working on others in order to improve them.

Optimum Thresholding for Semimartingales with Lévy Jumps under the mean-squared error

ABSTRACT. We consider a univariate semimartingale model for (the logarithm of)
an asset price, containing jumps having possibly infinite activity (IA).
The nonparametric threshold estimator $\hat{IV}_n$ of the integrated
variance $IV:=\int_0^T\sigma^2_sds$ proposed in \cite{Man09} is constructed from observations on a discrete time grid: it sums the squared increments of the process that lie under a {\it threshold}, a deterministic function of the observation step and possibly of the coefficients of $X$. All threshold functions satisfying given conditions yield asymptotically
consistent estimates of $IV$; however, the finite sample properties
of $\hat{IV}_n$ can depend on the specific choice of the threshold.
We aim here at optimally selecting the threshold by minimizing either the estimation mean square error (MSE) or the conditional mean square error (cMSE). The latter criterion yields a threshold which is optimal not in mean but for the specific path at hand.

A parsimonious characterization of the optimum is established, which turns out to be asymptotically proportional to the Lévy modulus of continuity of the underlying Brownian motion. Moreover, minimizing the cMSE enables us to propose a novel implementation scheme for the optimal threshold sequence. Monte Carlo simulations illustrate the superior performance of the proposed method.
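For concreteness, the truncated estimator can be sketched as follows. The constant c and the Lévy-modulus-shaped threshold are illustrative assumptions for this example, not the paper's optimal choice:

```python
import math, random

def threshold_iv(x, dt, c=3.0):
    """Truncated realized variance: sum the squared increments of the
    observed path x that lie below a threshold. The threshold here is
    proportional to the Levy modulus sqrt(2*dt*log(1/dt)), with c an
    illustrative tuning constant."""
    thr = c * math.sqrt(2.0 * dt * math.log(1.0 / dt))
    increments = (x[i + 1] - x[i] for i in range(len(x) - 1))
    return sum(d * d for d in increments if abs(d) <= thr)
```

On a simulated Brownian path with one large jump, the truncated sum stays close to the true integrated variance, while the plain realized variance is inflated by the squared jump.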

Statistical learning algorithms to forecast the equity risk premium in the European Monetary Union

ABSTRACT. With the explosion of “Big Data”, the application of statistical learning models has become popular in multiple scientific areas as well as in marketing, finance and other business disciplines. Nonetheless, the literature covering the application of these learning algorithms to forecasting the equity risk premium is not yet abundant. In this paper we investigate whether Classification and Regression Tree algorithms and several ensemble methods, such as bagging, random forests and boosting, improve on traditional parametric models in forecasting the equity risk premium. In particular, we work with European Monetary Union data for a period that spans from the EMU foundation at the beginning of 2000 to mid-2017.
The paper first compares the monthly out-of-sample forecasting ability of multiple economic and technical variables using linear regression models and regression tree techniques. To check out-of-sample accuracy, predictive regressions are compared to a popular benchmark in the literature: the historical mean average. Forecast performance is analyzed in terms of the Campbell and Thompson R-squared, which compares the MSFE of regressions constructed with selected predictors against the MSFE of the benchmark. Finally, the paper also investigates whether the most relevant economic or technical predictors selected by the learning algorithms can generate economic value for a risk-averse investor. We use the Brandt and Santa-Clara (2006) approach, which builds portfolios that invest in either equities or the risk-free asset an amount proportional to the selected predictors.
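The comparison against the historical-mean benchmark can be sketched as follows (illustrative code; the function names are ours):

```python
def recursive_mean_forecasts(history, actual):
    """Benchmark forecasts: the expanding-window historical mean of the
    equity premium, updated one observation at a time."""
    fc, data = [], list(history)
    for a in actual:
        fc.append(sum(data) / len(data))
        data.append(a)
    return fc

def campbell_thompson_r2(actual, model_fc, bench_fc):
    """Campbell-Thompson out-of-sample R^2: 1 - MSFE(model)/MSFE(benchmark).
    A positive value means the predictor beats the historical mean."""
    msfe_m = sum((a - f) ** 2 for a, f in zip(actual, model_fc))
    msfe_b = sum((a - b) ** 2 for a, b in zip(actual, bench_fc))
    return 1.0 - msfe_m / msfe_b
```

A perfect model forecast gives an out-of-sample R-squared of one, and a model that merely replicates the benchmark gives zero.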

Google searches for portfolio management: a risk and return analysis

ABSTRACT. Data on the volume of Google searches have proven to be a useful variable for improving portfolio risk-return performance. The theoretical justification is grounded in the idea that high search volumes are related to bad news and increasing risk. This paper provides additional evidence of the improvement in the risk-return profile of a portfolio composed on the basis of Google search data. Two different rules for incorporating Google data are compared, showing that levels are more useful than variations. The main contribution of the Google search-volume data is an increase in the average return, rather than a direct reduction of the variability. Moreover, to overcome the (time series and cross-section) limitations Google imposes on data downloads, we apply a rescaling procedure to the raw Google data, which makes the series consistent both over the whole time interval and across different queries. We show that a strictly rigorous use of the Google search volumes, preserving the relative magnitude of all the considered series, leads to poor results, owing to the large differences between search-volume sizes. For the empirical application, we use the SPI constituent data from 2004 to 2017. The main results are consistent across different sub-samples. These results lead us to reconsider the interpretation of the Google search volume: it can be considered a bad-news indicator, as long as the volumes have a similar order of magnitude.
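A minimal sketch of the kind of chaining rescaling involved (our illustrative variant, assuming consecutive downloaded chunks, each individually scaled to 0-100, share one overlapping observation; the paper's actual procedure may differ):

```python
def chain_rescale(chunks):
    """Splice overlapping, individually rescaled search-volume chunks into
    one consistent series: each new chunk is multiplied by the ratio of
    the previous chunk's value to its own at the shared observation.

    chunks: list of lists; consecutive chunks overlap by one observation."""
    out = list(chunks[0])
    for ch in chunks[1:]:
        factor = out[-1] / ch[0]          # align on the shared point
        out.extend(v * factor for v in ch[1:])
    return out
```

The spliced series preserves the relative magnitudes within and across chunks, even though each raw download is only known up to its own 0-100 normalization.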

Market Price of Longevity Risk for a Multi-Cohort Mortality Model with Application to Longevity Bond Option Pricing

ABSTRACT. The pricing of longevity-linked securities depends not only on the stochastic uncertainty of the underlying risk factors, but also on the attitude of investors towards those factors. In this research, we investigate how to estimate the market risk premium of longevity risk using investable retirement indexes. A multi-cohort aggregate, or systematic, mortality model is used in which each risk factor is assigned a market price of mortality risk. To calibrate the market price of longevity risk, a common practice is to make use of market prices, such as longevity-linked securities and longevity indices. We use the BlackRock CoRI Retirement Indexes, which provide a daily level of the estimated cost of lifetime retirement income for 20 cohorts in the U.S. For these 20 cohorts, we assume that the risk premiums of the common factors are the same across cohorts, while the risk premium of the cohort-specific factors is allowed to take different values for different cohorts. The market prices of longevity risk are then calibrated by matching the risk-neutral model prices with the BlackRock CoRI index values. Closed-form expressions and prices for European options on longevity zero-coupon bonds are derived using the model and the calibrated market prices of longevity risk. Implications for hedging longevity risk with bond options are discussed.

The w(p) in the financial markets: An empirical approach on the S&P 500

ABSTRACT. The aim of this work is to estimate the probability weighting function, starting from the time series of the S&P 500 index. After an introduction to the Efficient Markets Hypothesis (EMH) and the empirical evidence against it, we introduce Prospect Theory (PT). Following the studies carried out by Gonzalez et al., we analyze w(p) and propose a new estimation method based on a two-parameter function. The OLS (Ordinary Least Squares) method provides the alpha and beta coefficients, which represent the curvature and the elevation of the weighting function, respectively. In the last part of the paper, w(p) is implemented in the construction of a portfolio with random weights.
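As an illustration, a two-parameter weighting function of the linear-in-log-odds family studied by Gonzalez and Wu can be estimated by OLS after a log-odds transformation. The parameter names below (gamma for curvature, delta for elevation) are generic, and the sketch is not necessarily the paper's exact specification:

```python
import math

def w(p, gamma, delta):
    """Linear-in-log-odds weighting function:
    w(p) = delta * p**gamma / (delta * p**gamma + (1-p)**gamma)."""
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

def fit_w(ps, ws):
    """OLS in log-odds space, where the model is exactly linear:
    ln(w/(1-w)) = gamma * ln(p/(1-p)) + ln(delta)."""
    x = [math.log(p / (1.0 - p)) for p in ps]
    y = [math.log(q / (1.0 - q)) for q in ws]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    gamma = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    delta = math.exp(my - gamma * mx)
    return gamma, delta
```

Because the model is exactly linear in log-odds, the OLS fit recovers the generating parameters from noise-free data.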

Illusion of control and decision making under risk: experimental results

ABSTRACT. Most people have probably heard that travelling by plane is less dangerous than travelling by car, yet one meets people who fear flights more often than people who are terrified of getting into a car. One of the reasons for this phenomenon, indicated by psychologists, is the perceived lack of control over an airplane. In travelling it is a matter of life and death, but one may wonder whether the illusion of having, or not having, control over some kind of risk influences a person’s behavior in financial matters. Numerous studies conducted in this field have given different results. Some scientists claim that the effects of the illusion of control vanish if the stakes are not hypothetical and are of substantial value. In this paper we focus on the effects of the illusion of control over investment risk on the propensity to take that risk. The aim is to check whether decisions made under risk that seems, to some extent, controllable by the decision maker differ from decisions made when one believes one has no control over the risk. The hypothesis to be verified is that decisions taken by people who have an illusion of control are riskier than decisions made by people who perceive the risk as totally independent of their actions. To verify this hypothesis we conducted an experiment with real, although non-monetary, payoffs. Over 400 students took part in the experiment. During the first class of the semester each student received 20 points (constituting 20% of the maximum number of points attainable during the semester) that were to be added to the points gained during the whole semester in one specified course, helping the students achieve a better grade. During the second class a game was proposed to the students, in which one could win extra points while risking the loss of part of the 20 points received in the first class. The task was to guess the results of two coin tosses.
A student who guessed both results correctly scored additional points, a student who guessed one result correctly kept his/her 20 points, and a student who guessed neither result lost part of his/her 20 points. The scenario of the game differed among groups according to who did the guessing and how they were remunerated. In Scenario A, the person deciding whether to take part in the game guessed the results him/herself. In Scenario B, the person deciding about participation randomly drew another person from the group, who guessed the results on his/her behalf. Scenario C was similar to Scenario B, except that the person guessing the results could earn additional points if the results were guessed correctly. As predicted, we found more people willing to take part in the game among those assigned to Scenario A than in the other scenarios; moreover, we found more people willing to play among those assigned to Scenario C than to Scenario B, which suggests participants believed that someone who is incentivized is able to guess better, i.e. to reduce the risk of losing.

ABSTRACT. This research compares twelve different factor models in explaining variations in US sector returns between Nov. 1989 and Feb. 2014 using the quantile regression approach. Specifically, the models proposed in this study build on the Fama and French (1993) three-factor model and the Fama and French (2015) five-factor model. This research augments these models with other explanatory factors, such as nominal interest rates and their components: real interest rates and expected inflation rates. Moreover, this paper incorporates the Carhart (1997) risk factors for momentum (MOM) and momentum reversal (LTREV) and the Pastor and Stambaugh (2003) traded liquidity factor (LIQV). This paper shows that the twelfth model, which is based on the Fama and French (2015) five-factor model but breaks down nominal interest rates into real interest and expected inflation rates and also aggregates the three additional risk factors, has the highest explanatory power (with an Adj. R2 of about 67% for Industrials). Moreover, this research points out that the extreme quantiles of the return distribution show better results (concretely, the 0.1 quantile) in all the factor models; some sectors, such as Industrials (representing 10.79% of the whole market) and Financials (16.58%), consistently show more statistically significant coefficients and therefore higher Adj. R2 values, whereas Utilities (3.42%) consistently show the lowest explanatory power for all models and quantiles.

Modelling mortality of subpopulations with per capita GDP

ABSTRACT. The problem of forecasting the future course of mortality has been a subject of great interest to governments and to pension and annuity providers. In fact, in the last decades life expectancy has improved considerably faster than anticipated.
In addition, when forecasting mortality rates for more than one population it is of great importance to capture the common mortality trend across the different populations.
For these reasons, a large number of mortality models have been proposed and extended, such as the Li and Lee (2005) model.
Moreover, the current literature on mortality forecasting for developed countries has shifted its focus to the study of possible long-run correlations between mortality developments and observable trends. In particular, some authors have focused on the correlation between mortality and economic growth (Hanewald 2011; Niu and Melenberg 2014; Boonen and Li 2017).
The aim of this work is to investigate a possible connection between mortality rates and real per capita GDP (GDPPC) in the Italian regions.

When is utilitarian welfare higher under insurance risk pooling? (Short Paper - pdf)

ABSTRACT. This paper focuses on the effects of bans on insurance risk classification on utilitarian social welfare. We consider two regimes: full risk classification, where insurers charge the actuarially fair premium for each risk, and pooling, where risk classification is banned and for institutional or regulatory reasons, insurers do not attempt to separate risk classes, but charge a common premium for all risks. For the case of iso-elastic insurance demand, we derive sufficient conditions on higher and lower risks’ demand elasticities which ensure that utilitarian social welfare is higher under pooling than under full risk classification. Empirical evidence suggests that these conditions may be realistic for some insurance markets.

Sustainability of the Algerian retirement system: a multi-scenario analysis

ABSTRACT. Following the recommendations of the World Bank in 1994, many countries have launched far-reaching reforms of their pension systems. The reforms aimed principally to replace the classical one-pillar Pay-As-You-Go system with a multi-pillar system in order to cope with population ageing. The Algerian retirement system is a PAYG defined-benefit system: the working population contributes to pay the pension benefits of the retired population, while the government covers the difference between the two sides of the equation. With population ageing, it will be tough to maintain the financial balance of that system. In the present paper, we carry out a multi-scenario analysis of the evolution of the financial balance of the Algerian retirement system. This analysis is based on different scenarios for the future evolution of the parameters involved in the retirement calculation: activity, employment, affiliation to social security, wages and contribution rates.
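The year-by-year balance underlying such scenarios can be sketched as follows (all figures illustrative, not Algerian data):

```python
def payg_balance(workers, retirees, avg_wage, contrib_rate, avg_pension):
    """Yearly financial balance of a pay-as-you-go scheme: contributions
    collected minus pension benefits paid. A negative balance is the
    deficit the government must cover."""
    contributions = workers * avg_wage * contrib_rate
    benefits = retirees * avg_pension
    return contributions - benefits
```

Sweeping this balance over scenario paths for activity, employment, wages and contribution rates yields the multi-scenario projection described above.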

Nudging long-term saving: Might the “Save More Tomorrow” approach work in Spain?

ABSTRACT. This paper analyses the effectiveness of a nudging mechanism, adapted from Save More Tomorrow (SMART) by Benartzi & Thaler (2004), to promote long-term saving in Spain. To this end, the authors analyze the results of a pilot program implemented during 2016 with 284 employees of a leading Spanish life and pensions company. The main conclusion of the analysis is that the program significantly increases voluntary saving and helps people enjoy a better retirement by helping them save more for the long term. The increase is particularly important among the groups with the lowest saving levels: young people and the lowest earners. The results of this field experiment confirm the effectiveness of leveraging the default option to increase long-term saving in Spain. Moreover, the paper shows that the nudge approach to increasing savings works even when the program is targeted at savers with high financial literacy and professional expertise in the field. The number of participants making voluntary contributions went up by 248.53% and, after a year, 85.11% of program participants maintained their monthly voluntary contribution.

The Spanish public pension system consists of a single, earnings-related benefit; in 2017 the maximum pension is €36,031, with a means-tested minimum pension. The most important challenges for the sustainability of the Spanish pension system are the future old-age dependency ratio and guaranteeing the total amount of retirees' pensions. In 2015, the number of individuals aged 65 and over per 100 people of working age was 29.6, and the projection for 2050 is 73.2 (O.E.C.D. 2015). This demographic evolution challenges the sustainability of the Spanish public pension system, and alternative solutions to guarantee it have been proposed, such as increasing tax resources for public pensions, structural reforms to increase the employed population and wages, and accepting a reduction of the average pension compensated by more resources from private savings (Rafael Doménech, 2016). If greater private savings are indeed necessary, however, classical economic theory does not take into account the sensitivity of savers to this situation.

In this context, Spain sits in the middle of the list of European countries in terms of household financial assets. In 2015, only 15.3% of these household financial assets were held in life insurance reserves and pension plans, relegating Spain to the bottom third. Moreover, although 44% of Spaniards do not think they will have an adequate economic level when they retire, 57% of the Spanish population over the age of 36 have not started to save for their retirement (BBVA, 2015).

Suboptimal saving is a widespread problem that can be explained from a behavioral economics approach. People’s quasi-hyperbolic preferences explain procrastination: choosing patiently when it is for the future and impatiently when it is for the present. It is necessary to start saving as soon as possible, because the later we start, the higher the periodic amount will be.
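Quasi-hyperbolic (beta-delta) preferences and the resulting preference reversal can be illustrated as follows (parameter values are arbitrary):

```python
def qh_value(payoff, t, beta=0.7, delta=0.99):
    """Quasi-hyperbolic (beta-delta) present value: a payoff received now
    is undiscounted, while all future payoffs carry the extra
    present-bias factor beta on top of exponential discounting."""
    return payoff if t == 0 else beta * delta ** t * payoff
```

With these parameters the same person prefers 100 today over 110 tomorrow, yet prefers 110 on day 31 over 100 on day 30; once "tomorrow" becomes "today" the choice reverses, which is exactly the procrastination pattern described above.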

From a behavioral economics viewpoint, there are critical cognitive biases that impede us from saving enough for our retirement:
• Inertia, or status-quo bias, is evident when people prefer things to stay the same, either by doing nothing or by sticking with a previously made decision. Everyone who has a pension plan thinks they should save more, but they postpone it and the savings rate remains the same.
• Loss aversion is an important behavioral economics concept, associated with prospect theory and encapsulated in the expression “losses loom larger than gains”. We do not like our take-home pay to decrease because of our retirement saving.
• Hyperbolic discounting, or present bias: we exercise more self-control in relation to the future than to the present. The lack of self-control needed to sacrifice present pleasure means not being able to follow one´s long-term goals.
This paper fills this gap in the literature by designing and implementing a field experiment to nudge private retirement savings in Spain. To this end, and based on the SMART approach (Benartzi & Thaler, 2004), a savings program with automatic enrollment was tested through a field experiment with participants in a pension contribution system. The field experiment was carried out with the staff of a life insurance and pension company, in an environment of awareness of retirement needs and a high level of financial literacy: three quarters of the staff have a bachelor´s degree and the average tenure in the company was eleven years.

The main conclusion of the paper is twofold. First, the results of the experiment show that the “Save More Tomorrow” approach is also effective in Spain and is a welcome aid for people making decisions about their savings. Second, the experiment shows that the cognitive biases that make the “Save More Tomorrow” approach effective do not disappear when the subjects have a high level of financial literacy and are both trained and professionally experienced in long-term saving.

When is utilitarian welfare higher under insurance risk pooling? (Abstract)

ABSTRACT. Restrictions on insurance risk classification are common in life insurance and other personal insurance markets. Examples include the ban on gender classification in the European Union, and restrictions in many countries on insurers' use of genetic test results. Such restrictions are usually perceived as having negative effects on efficiency. But because restrictions also make high risks better off and low risks worse off, they can also increase social equity. Therefore, depending on the distributional preferences expressed in the social welfare function, restrictions might either increase or decrease utilitarian social welfare.

This paper considers two regimes: full risk classification, where insurers charge the actuarially fair premium for each risk, and pooling, where risk classification is banned and for institutional or regulatory reasons, insurers do not attempt to separate risk classes, but charge a common premium for all risks. Under the pooling regime, it is intuitive that the equilibrium price -- the pooled price at which insurers break even -- will depend on demand elasticities of lower and higher risks. Another intuition is that pooling implies a redistribution from lower risks towards higher risks. The welfare outcome will depend on how we evaluate the trade-off between the gains and losses of the two types.

This paper connects and builds on these intuitions. For the case of iso-elastic insurance demand, we derive sufficient conditions on higher and lower risks' demand elasticities which ensure that utilitarian social welfare is higher under pooling than under full risk classification. Empirical evidence suggests that these conditions may be realistic for some insurance markets.
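A toy numerical version of the pooling equilibrium, under our own illustrative model with iso-elastic demand d_i(p) = min(1, (p/mu_i)^(-lambda_i)) for each risk class (not the paper's calibration):

```python
def pooled_premium(mus, fracs, lams, tol=1e-10):
    """Break-even pooled premium when risk classification is banned.

    mus   : expected claim costs of each risk class
    fracs : population fractions of each class
    lams  : iso-elastic demand elasticities of each class
    The equilibrium price equates premium income with expected claims;
    it is bracketed by min(mus) and max(mus) and found by bisection."""
    def excess(p):
        d = [min(1.0, (p / m) ** (-l)) for m, l in zip(mus, lams)]
        income = p * sum(f * di for f, di in zip(fracs, d))
        claims = sum(f * m * di for f, m, di in zip(fracs, mus, d))
        return income - claims
    lo, hi = min(mus), max(mus)   # losses at lo, profits at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With expected costs 0.01 and 0.04, population fractions 0.8 and 0.2, and unit elasticities, the break-even pooled premium works out at 0.02, above the population-average cost of 0.016: lower risks partially drop out of the market, pushing the pooled price towards the higher risks' cost.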

ABSTRACT. In periods of highly volatile interest rates, a slope or shape change of the term structure can impair insurance firms beyond what can be captured by the so-called duration gap. In fact, a duration matching strategy does not eliminate immunization risk, i.e. the risk of unexpected changes in future values due to arbitrary movements of the term structure.

In this paper, following the seminal results of Fong and Vasicek [1,2,3], we introduce a risk-return dimension into the immunization literature.
In particular, they identify a risk measure, M2, for arbitrary changes of the term structure, defined as the time-variance of the portfolio cash flows, such that the ex post portfolio value undergoes a percentage change proportional to M2. A special role is played by the slope of the term structure, which, according to Fong and Vasicek [2], is decomposed into the sum of a “shift in level” component (“convexity effect”) and a “slope of shift” component (“risk effect”) of ambiguous sign. In case of an adverse shift (convexity effect lower than shift effect, i.e. a negative slope change), the realized return falls below the target value.

Summing over time (between time 0 and H) for a given portfolio of K bonds, a stochastic portfolio return arises whose moments can be used to develop a risk-return optimization problem for immunized portfolios.
Immunization then becomes a “passive” (minimum-risk) strategy within an entire menu of active management decisions in which partial risk minimization is exchanged for more return potential. Given the horizon H, a risk-return tradeoff is obtained as in the classical mean-variance approach.
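The M2 measure itself is simple to compute; here is a sketch with a flat yield curve and annual compounding (a simplification for illustration, not the paper's term-structure setting):

```python
def m_squared(cashflows, times, y, horizon):
    """Fong-Vasicek M2: the present-value-weighted variance of the
    cash-flow times around the investment horizon, computed here under a
    flat yield y with annual compounding."""
    pv = [c / (1.0 + y) ** t for c, t in zip(cashflows, times)]
    price = sum(pv)
    return sum(p * (t - horizon) ** 2 for p, t in zip(pv, times)) / price
```

A zero-coupon bond maturing exactly at the horizon has M2 = 0 (no immunization risk), while a barbell with the same duration has strictly positive M2, which is why duration matching alone is not enough.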

An empirical application to Italian insurance companies highlights which portfolios could be actively re-positioned over the efficient frontier, at the chosen level of the firm’s risk appetite.

The value of information for optimal portfolio management

ABSTRACT. What is the value of information for a portfolio manager who invests in the stock market to optimize the utility of her future wealth?
We study this problem in a market with a mean reverting market price of risk $X$
and where the returns of the stocks are random but predictable.
The process $X$ cannot be observed by the manager, and it is driven by different risk factors from those directly affecting the stocks. The objective of the manager is to maximize the utility of her wealth at a given time $T$. In a nutshell, what we consider is the classical Merton problem for an incomplete market with partial information.

After defining the dynamics of the assets and of the process $X$, we study the corresponding filtering problem. Then, we consider the same problem, this time assuming that the manager directly observes $X$.
The difference between the certainty equivalents of the two optimal utilities is the maximum price that the agent with partial information would be willing to pay for full information. Hence it represents the value of the information for the manager.
In our setting, such a quantity can be expressed in closed form.
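Given simulated terminal wealths under the two information sets, the value of information could be computed as a difference of certainty equivalents, e.g. under CRRA utility (a sketch with our own function names; the paper obtains this quantity in closed form rather than by simulation):

```python
def certainty_equivalent(wealth, gamma):
    """Certainty equivalent under CRRA utility u(w) = w**(1-gamma)/(1-gamma),
    for gamma != 1: CE = (E[w**(1-gamma)]) ** (1/(1-gamma))."""
    m = sum(w ** (1.0 - gamma) for w in wealth) / len(wealth)
    return m ** (1.0 / (1.0 - gamma))

def value_of_information(wealth_full, wealth_partial, gamma):
    """Maximum fee the partially informed manager would pay for full
    information: the difference of the two certainty equivalents."""
    return (certainty_equivalent(wealth_full, gamma)
            - certainty_equivalent(wealth_partial, gamma))
```

For a risk-averse manager the certainty equivalent of a risky wealth lies below its mean, and the value of information is positive whenever full information yields a better wealth distribution.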

The formula for the value of information can be used to provide direct answers to questions like:
{\em Is a more risk-averse manager willing to pay more or less to obtain full information? Is the value of information higher or lower when the market gets closer to being complete?
What is the impact on the value of information of the uncertainty about the market price of risk?}

Simulating Economic Variables using Graphical Models

ABSTRACT. Actuaries and financial risk managers use an Economic Scenario Generator (ESG) to identify, manage and mitigate risks at a range of horizons. In particular, pension schemes and other long term businesses require ESGs to simulate projections of assets and liabilities in order to devise adequate risk mitigation mechanisms. This requires ESGs to provide reasonable simulations of the joint distribution of several variables that enter the calculation of assets and liabilities. In this paper, we discuss how a graphical model approach is used to develop an ESG, and also provide a specific application.

A wide range of ESGs are currently in use in industry. These models have varying levels of complexity and are often proprietary. They are periodically recalibrated, and tend to incorporate a forecasting dimension. For instance, they may incorporate a Vector Auto Regression model. Alternatively, many rely on a cascading structure, where the forecast of one or more variables is then used to generate values for other variables, and so on. In each case, these models balance the difficult trade-off between accurately capturing both short and long term dynamics and interdependences. We argue that, for the purpose of risk calculations over very long periods, it may be easier and more transparent to use a simpler approach that captures the underlying correlations between the variables in the model. Graphical models achieve this in a parsimonious manner, making them useful for simulating data in larger dimensions. In graphical models, dependence between variables is represented by “edges” in a graph connecting the variables or “nodes”. This approach allows us to assume conditional independence between variables and to set their partial correlations to zero. The two variables could then be connected via one or more intermediate variables, so that they could still be weakly correlated.
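The conditional-independence idea can be made concrete with the standard three-variable partial correlation formula (illustrative code):

```python
import math

def partial_corr(rxy, rxz, ryz):
    """Partial correlation of X and Y given Z. A (near-)zero value lets
    the graphical model drop the direct X-Y edge: X and Y are then
    connected only through the intermediate node Z."""
    return (rxy - rxz * ryz) / math.sqrt((1.0 - rxz ** 2) * (1.0 - ryz ** 2))
```

When X and Y are correlated only through Z (i.e. rxy = rxz * ryz), the partial correlation is exactly zero, so the graph keeps the two X-Z and Y-Z edges while the variables remain weakly correlated unconditionally.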

We compare different algorithms to select a graphical model, based on p-values, AIC, BIC, and deviance. We find them to yield reasonable results and relatively stable structures in our example. The graphical approach is fairly easy to implement, is flexible and transparent when incorporating new variables, and thus easier to apply across different datasets (e.g. countries). Similar to other reduced form approaches, it may require some constraints to avoid violation of theoretical rules. It is also easy to use this model to introduce arbitrary economic shocks.

We provide an example in which we identify a suitable ESG for a pension fund in the United Kingdom that invests in equities and bonds, and pays defined benefits. While more complex modelling of the short term dynamics of processes is certainly feasible, our focus is on the joint distribution of innovations over the long term. To this end, we simply fit an autoregressive process to each of the series in our model and then estimate the graphical structure of the contemporaneous residuals. We find that simulations from this simple structure provide plausible distributions that are comparable to existing models. We also discuss how these models can be used to introduce nonlinear dependence through regime shifts in a simple way.
Overall, we argue that this approach to developing ESGs is a useful tool for actuaries and financial risk managers concerned about long term portfolios.

Pricing Electricity Derivatives with Markov Regime Switching Models

ABSTRACT. Two of the most distinctive features of electricity spot prices in deregulated power markets are mean reversion and price spikes. Markov regime-switching (MRS) models are a good candidate for capturing these features. We propose three Markov regime-switching models that model the spot prices themselves rather than the log prices. Specifically, we suggest a geometric Brownian motion process and an Ornstein–Uhlenbeck (OU) process for the base regime, and a normally distributed random variable, a Brownian motion process with drift, and a mean-reverting process with a jump component for the spike regime. Derivative pricing under such models is based on splitting the derivative price into a mean-reverting component and a spike component, owing to the independence of the regimes in the MRS model. We derive analytic formulae for European call options and electricity forward contracts under the proposed models.
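A minimal simulation sketch of such a two-regime model, with an OU base regime and a normally distributed spike regime evolving independently (all parameter values are illustrative, not calibrated to any market):

```python
import random

def simulate_mrs(n, p_bs=0.05, p_sb=0.6, kappa=0.3, mu=50.0, sigma=2.0,
                 spike_mu=120.0, spike_sd=20.0, seed=0):
    """Simulate a two-regime MRS spot price in levels (not logs):
    regime 0 is a discretized OU process reverting to mu, regime 1 draws
    an independent normal spike. p_bs / p_sb are the base-to-spike and
    spike-to-base transition probabilities of the Markov chain."""
    rng = random.Random(seed)
    prices, base, regime = [], mu, 0
    for _ in range(n):
        # Markov chain regime transition
        if regime == 0 and rng.random() < p_bs:
            regime = 1
        elif regime == 1 and rng.random() < p_sb:
            regime = 0
        # the base process evolves in the background in every period,
        # reflecting the independence of the two regimes
        base += kappa * (mu - base) + sigma * rng.gauss(0.0, 1.0)
        prices.append(base if regime == 0 else rng.gauss(spike_mu, spike_sd))
    return prices
```

Keeping the base process running during spike periods is what makes the regimes independent, which is the property that allows the derivative price to be split into its mean-reverting and spike components.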