ICEEE 2021: NINTH ITALIAN CONGRESS OF ECONOMETRICS AND EMPIRICAL ECONOMICS
PROGRAM FOR FRIDAY, JANUARY 22ND

09:00-10:40 Session 4A: Factor models
09:00
Measuring Systematic Comovement with Approximate Threshold Group-Factor Models

ABSTRACT. We study regime-specific systematic comovement between two large panels of variables that exhibit an approximate factor structure. Within each panel we identify threshold-type regimes through shifts in the factor loadings. For the resulting regimes, we define as "systematic" the comovement between any two variables in different panels that is generated by the comovement between their common components. In our setup, changes in comovement are thus identified by regime shifts in the loadings. After constructing both pairwise and average measures of systematic comovement between the two panels, we propose estimators for these measures and derive their asymptotic properties. The empirical analysis of two large panels of U.S. and international equity returns shows that their systematic comovement increases when U.S. macroeconomic uncertainty is sufficiently high.

09:25
Quasi Maximum Likelihood Estimation and Inference of Large Approximate Dynamic Factor Models via the EM algorithm

ABSTRACT. This paper studies Quasi Maximum Likelihood estimation of dynamic factor models for large panels of time series. Specifically, we consider the case in which the autocorrelation of the factors is explicitly accounted for, so that the model has a state-space form. Estimation of the factors and of their loadings is implemented by means of the Expectation Maximization (EM) algorithm, jointly with the Kalman smoother. We prove that, as both the dimension of the panel $n$ and the sample size $T$ diverge to infinity: (i) the estimated loadings are $\sqrt T$-consistent and asymptotically normal if $\sqrt T/n\to 0$; (ii) the estimated factors are $\sqrt n$-consistent and asymptotically normal if $\sqrt n/T\to 0$; (iii) the estimated common component is $\min(\sqrt T,\sqrt n)$-consistent and asymptotically normal regardless of the relative rate of divergence of $n$ and $T$. We then propose estimators of the asymptotic covariances, robust to possible mis-specification of the idiosyncratic second-order structure, which can be used to conduct inference on the estimated loadings and to build confidence intervals for the estimated factors and common components. In a Monte Carlo simulation exercise and an analysis of US macroeconomic data, we study the performance of our estimators and compare them with the traditional Principal Components estimators.
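
The estimation strategy sketched above (EM iterations combined with the Kalman smoother on a state-space form) is, in spirit, what statsmodels' DynamicFactorMQ implements. A minimal illustration on simulated data, not the authors' code:

```python
# Sketch: EM estimation of a dynamic factor model via the Kalman smoother,
# in the spirit of the approach studied in the paper (not the authors' code).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor_mq import DynamicFactorMQ

rng = np.random.default_rng(0)
T, n, r = 200, 20, 2                      # sample size, panel dimension, factors
fac = np.zeros((T, r))
for t in range(1, T):                     # persistent AR(1) factors
    fac[t] = 0.7 * fac[t - 1] + rng.normal(size=r)
lam = rng.normal(size=(n, r))             # loadings
X = pd.DataFrame(fac @ lam.T + rng.normal(size=(T, n)),
                 columns=[f"y{i}" for i in range(n)])

model = DynamicFactorMQ(X, factors=r, factor_orders=1, idiosyncratic_ar1=True)
res = model.fit()                         # EM iterations + Kalman smoothing
print(res.factors.smoothed.head())        # smoothed factor estimates
```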

09:50
A cointegration-based Permanent-Transitory decomposition for Dynamic Factor Models: long and short-run co-movement of commodity prices

ABSTRACT. In this article we propose a cointegration-based Permanent-Transitory decomposition for non-stationary Dynamic Factor Models. This is achieved by taking into account the cointegration relations among variables and by splitting the common movement, represented through the factor structure, into a long-term non-stationary part and a short-term stationary component. First, a decomposition as proposed by Kasa (1992) allows us to split the vector of variables into stationary and non-stationary series, corresponding to the cointegration relations and the common trends, respectively. Then, a DFM estimation can be applied straightforwardly to the decomposed vector of variables, exploiting the estimator proposed in Doz et al. (2012) after differencing the non-stationary part resulting from the decomposition (i.e., the common trends). The non-stationarity of the system is then recovered by integrating the estimated factors (Bai and Ng, 2004). By applying this procedure to a set of commodity prices, divided into blocks according to the specific market (food, energy, metals, agricultural raw materials and livestock products), we capture the within co-movement, represented by the cointegration relations, and the between co-movement, captured by q common factors. Furthermore, with our methodology we disentangle the permanent and the transitory component of each commodity price time series.
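
A minimal numerical sketch of the three steps described above, with plain principal components standing in for the Doz et al. (2012) QML estimator and the Johansen eigenvectors proxying the cointegration space:

```python
# Sketch of the Permanent-Transitory steps: cointegration split, differencing,
# factor extraction, re-integration. Illustrative, not the paper's estimator.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def pt_decomposition(X, rank, q):
    """X: (T, n) non-stationary panel; rank: cointegration rank; q: factors."""
    jres = coint_johansen(X, det_order=0, k_ar_diff=1)
    beta = jres.evec[:, :rank]                 # cointegrating vectors
    u, _, _ = np.linalg.svd(beta)
    beta_perp = u[:, rank:]                    # orthogonal complement: common trends
    stationary = X @ beta                      # transitory block (coint. relations)
    d_trends = np.diff(X @ beta_perp, axis=0)  # difference the non-stationary block
    d_trends = d_trends - d_trends.mean(axis=0)
    _, _, vt = np.linalg.svd(d_trends, full_matrices=False)
    d_factors = d_trends @ vt[:q].T            # principal-components factors
    return stationary, np.cumsum(d_factors, axis=0)  # re-integrate (Bai-Ng, 2004)

rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(size=(300, 1)), axis=0)     # one common trend
X = trend @ rng.normal(size=(1, 5)) + rng.normal(size=(300, 5))
stationary, factors = pt_decomposition(X, rank=4, q=1)
```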

10:15
Factor Models with Downside Risk
PRESENTER: Lorenzo Trapani

ABSTRACT. We propose a conditional model of asset returns in the presence of common factors and downside risk. Specifically, we generalise existing latent factor models in three ways: we allow for downside risk via a threshold specification that permits estimation of the (usually fixed a priori) 'disappointment' event; we permit different factor structures (and numbers of factors) in different regimes; and we show how to recover the observable factors' risk premia from the estimated latent ones in different regimes. The usefulness of this generalised model is illustrated through two applications to low-dimensional and medium-sized cross-sections of asset returns.

09:00-10:40 Session 4B: Counterfactual analysis in Labour
Location: Room Li Cossi
09:00
Employment Protection and Firm-provided Training in Dual Labour Markets

ABSTRACT. In this paper we leverage a labour market reform (the Fornero Law) which reduced firing restrictions on open-ended contracts for Italian firms with more than 15 employees. The results from a Difference-in-Regression-Discontinuities design show that after the reform the number of trained workers increased by approximately 1.5 additional workers in firms just above the threshold. We show that this effect can be explained by the reduction in worker turnover and a lower (higher) use of temporary (permanent) contracts. Our study highlights the potentially adverse effects of employment protection legislation (EPL) on training in dual labour markets.
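
For readers unfamiliar with the design, a stylized difference-in-regression-discontinuities regression on synthetic data; all variable names and magnitudes below are illustrative assumptions, not the paper's data:

```python
# Stylized diff-in-RD: compare the jump at the 15-employee cutoff before and
# after the reform. Synthetic data; names and effect sizes are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
m = 4000
df = pd.DataFrame({"emp": rng.integers(5, 26, m),    # firm size (employees)
                   "post": rng.integers(0, 2, m)})   # post-reform indicator
df["above"] = (df["emp"] > 15).astype(int)           # above the 15-employee cutoff
df["run"] = df["emp"] - 15                           # centered running variable
df["trained"] = (2 + 1.5 * df["above"] * df["post"]
                 + 0.1 * df["run"] + rng.normal(0, 1, m))

local = df[df["run"].abs() <= 10]                    # local bandwidth (assumed)
fit = smf.ols("trained ~ above * post * run", data=local).fit(cov_type="HC1")
print(fit.params["above:post"])                      # diff-in-RD effect at the cutoff
```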

09:25
Board composition and performance of state-owned enterprises: Evidence from a natural experiment

ABSTRACT. Board composition and quality of governance crucially affect firms' performance. This is especially true for government-owned firms that provide local public goods and services and operate in environments not disciplined by market competition forces. This paper examines the impact of board composition on the performance of Italian local government-owned enterprises, using data on their stockholders, balance sheet indicators and survey information on citizens' satisfaction regarding local public services. We exploit the staggered introduction of a law imposing a gender quota for boards of directors in firms controlled by the public sector and study its effects on board gender composition and on different firm outcomes in affected firms relative to those with a minority share of public ownership. The [preliminary] results indicate that the reform was effective in increasing female presence on the boards of directors of government-owned enterprises. Interestingly, we also detect a positive spill-over effect on the control group. Although there is no evidence of significant changes in financial firm performance measures, we find that citizens are more satisfied with the provision of local public services, consistent with the improved quality of government-owned enterprises' output.

09:50
Born in the right place at the right time: what drives training contracts as a long-lasting employment device?

ABSTRACT. We seek to unpack geographic variation in the quality of education and the thickness of the labour market to explain under which circumstances a training contract acts as a long-lasting employment device. We exploit a unique setting in Italy to set up a difference-in-differences-in-discontinuity design, estimating whether, across these factors, treated individuals differ from similar untreated individuals in their likelihood of a long-lasting employment relationship. The thickness of the labour market increases the probability of having an open-ended contract by reducing the costs of transferring employees from unsuccessful to successful firms. However, it is the combination of high-quality education and a thick labour market that plays the key role in creating a long-lasting employment relationship through a better employee-employer match.

10:15
The Effect of Temporary Employment on Labour Market Outcomes

ABSTRACT. We evaluate the causal impact of temporary contracts on future labour market outcomes, relative to both a spell of unemployment and a spell of permanent employment, over the period 2007-2016, using the panel of the Italian Labour Force Survey. We identify the causal effect by imposing that, conditional on a suitable set of observable characteristics, the treatment status (temporary job) is ignorable for the outcome, and we build a placebo test to validate the ignorability assumption. We then propose a new methodology to estimate the remaining selection bias due to the omission of a known variable, which can be observed simultaneously with the treatment status but not with the outcome. Results indicate that a spell of temporary work instead of a spell of unemployment increases the probability of being employed after twelve months by 35 percentage points and the probability of having a permanent contract by 7.3 p.p.; individuals who had a temporary contract also work longer and earn more. By contrast, workers who experience a spell of temporary work rather than a spell of permanent work are less likely to be employed 12 months later (-5 p.p.) and to have a permanent contract (-40 p.p.); they earn less and are also less satisfied. The placebo test indicates that a large part of the selection bias is eliminated, but not all of it; however, our sensitivity analysis suggests that even accounting for the remaining selection bias, the differences in outcomes between the two groups would remain large and significantly different from zero.

09:00-10:40 Session 4C: Going Green
09:00
Heterogeneous effects of waste pricing policies

ABSTRACT. Using machine learning methods in a quasi-experimental setting, I study the heterogeneous effects of waste prices (unit prices on household unsorted waste disposal) on waste demands and social welfare. First, using a unique panel of Italian municipalities with large variation in prices and observables, I show that waste demands are nonlinear. I find evidence of nudge effects at low prices, and of increasing elasticities at high prices driven by income effects and pre-policy waste habits. Second, I combine municipal-level price effects on unsorted and recycled waste with their impacts on municipal and pollution costs. I estimate overall welfare benefits after three years of adoption, when waste prices cause significant waste avoidance. As waste avoidance is highest at low prices, this implies that even low prices can substantially change waste behaviors and improve welfare.

09:25
Mutual Funds' Fire Sales and the Real Economy: Evidence from Hurricanes

ABSTRACT. This paper contributes to the recent debate on whether nonfundamental price dislocations affect real economic activity, using a novel and economically grounded approach. Hurricanes create liquidity demand from investors living in disaster zones. This translates into additional outflows of about $2.5 billion for mutual funds in the areas affected by hurricanes. Such outflows cause fire sales, which are followed by temporary price dislocations in stocks unrelated to the natural disaster (-7%, reverted within 10 months). The nonfundamental price drop induces firms to reduce investment by 4%. These results indicate that when the source of outflows is identified ex ante and stems from investors' liquidity needs unrelated to fund performance, the resulting nonfundamental price dislocations actually distort firms' real decisions.

09:50
A time-varying Greenium for European stocks

ABSTRACT. This paper studies the evolution of the Greenium, i.e. a risk premium linked to firms' greenness and environmental transparency, based on individual stock returns. The Greenium is associated with a priced "greenness and transparency" factor that considers both companies' greenhouse gas emissions and the quality of their environmental disclosures. By estimating an asset pricing model with time-varying risk premia, we analyze the evolution of the European Greenium from January 2006 to December 2019. We show that the Greenium dropped after the Paris Agreement was reached in December 2015, indicating investors' willingness to earn a lower return, ceteris paribus, to hold greener and more transparent stocks. The Greenium started to increase again at the end of 2016, with the election of Donald Trump.

10:15
Climate change awareness: Empirical evidence for the European Union

ABSTRACT. In this paper we assess public attitudes on climate change in Europe over the last decade. Using aggregate figures from the Special Eurobarometer surveys on Climate Change, we find that climate change attitudes have evolved according to the "S-shaped" information dissemination model, conditional on various socioeconomic and climatological factors. In particular, we find that environmental awareness is directly related to per capita income, social trust, secondary education, the physical distress associated with hot weather, and losses caused by extreme weather episodes. It is inversely related to greenhouse gas emissions and tertiary education. Moreover, consistent with our epidemics-like narrative, we find a negative impact of Donald Trump's denial campaigns and a larger positive effect of Greta Thunberg's environmental activism. In terms of policy implications, this paper calls on the EU to take up leadership in the fight against climate change and declare a climate emergency. It also calls on teachers to introduce their students to climate change, on science journals to allow wide access to any climate change article they publish, and on public institutions to protect climate change evidence from politicization. Finally, the paper calls for close coordination of monetary and fiscal policies, to allow the green bonds market to rapidly reach the size required for the implementation of effective climate change mitigation policies.

09:00-10:40 Session 4D: Advances in counterfactual methods
09:00
Bounding Program Benefits When Participation is Misreported

ABSTRACT. In empirical research, correctly measuring the benefits of welfare interventions is highly relevant for policymakers as well as academic researchers. Unfortunately, endogenous program participation is often misreported in survey data, and standard instrumental variable techniques are not sufficient to point identify and consistently estimate the effects of interest. In this paper, we focus on the weighted average of local average treatment effects (LATE) and (i) derive a simple relationship between the causal parameter and the identifiable parameter that can be recovered from the observed data, (ii) provide an instrumental variable method to partially identify the heterogeneous treatment effects, and (iii) formalize a strategy that combines administrative data on the misclassification probabilities of treated individuals to further tighten the bounds. Finally, we use our method to reassess the benefits of participating in the 401(k) pension plan on savings.

09:25
Constrained Classification and Policy Learning

ABSTRACT. Modern machine learning approaches to classification, including AdaBoost, support vector machines, and deep neural networks, utilize surrogate-loss techniques to circumvent the computational complexity of minimizing the empirical classification risk. These techniques are also useful for causal policy learning problems, since estimation of individualized treatment rules can be cast as weighted classification. Consistency of these surrogate-loss approaches, studied in Zhang (2004) and Bartlett et al. (2006), crucially relies on the assumption of correct specification, meaning that the specified class of policies contains a first-best. This assumption is, however, less credible when the class of policies is constrained by interpretability or fairness, leaving the applicability of surrogate-loss based algorithms unknown in such second-best scenarios. This paper analyzes consistency of the surrogate-loss procedures under a constrained set of policies without assuming correct specification. We show that the hinge losses (i.e., l_1-support vector machines) are the only surrogate losses that preserve consistency in the second-best scenarios. We illustrate implications and uses of our theoretical results in monotone classification by proposing computationally attractive procedures that are robust to misspecification.
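
The reduction of policy learning to weighted classification under the hinge loss can be illustrated with a linear SVM; the data-generating step below is an illustrative assumption, not the paper's setup:

```python
# Sketch: treatment-rule estimation as weighted classification with the hinge
# loss (l_1-SVM), as the abstract suggests. Synthetic effects for illustration.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 2))                 # observed covariates
tau = X[:, 0] - 0.5 * X[:, 1]               # pseudo treatment effects (assumed known)
y = np.where(tau > 0, 1, -1)                # label: sign of the effect
w = np.abs(tau)                             # weight: magnitude of the effect

clf = LinearSVC(loss="hinge", dual=True).fit(X, y, sample_weight=w)
policy = clf.predict(X)                     # estimated rule: treat iff predicted +1
```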

09:50
Modelling and measuring deaccessioning: A 2SLS-MIMIC and 2SLS-EMIMIC approach

ABSTRACT. The removal of objects from a museum's collection, i.e. deaccessioning, is one of the most debated topics in the professional and scholarly literature on museum management, particularly in the United States. Despite the few studies published in recent years, its modelling and measurement remain largely unresolved. In our article, we build on a Grossman-Hart type principal-agent model, demonstrating strong adverse microeconomic effects of "adverse" deaccessioning, defined as deaccessioning that is not aimed at improving the collection (but, e.g., at funding new facilities or solving the museum's financial problems). To model a "barely legal" practice such as deaccessioning with no data available, we use MIMIC (Multiple Indicators and Multiple Causes) models from the structural modelling literature. We use the recently developed Detect.MIMIC algorithm to derive the causal scheme of the model. To take reverse causality into account, we develop three new 2SLS-MIMIC estimators for static and dynamic situations, based on Bollen's 2SLS estimator and Jöreskog's analysis of covariance structures. We show that the new estimators are consistent and asymptotically normal and explore this in a simulation study. The results enable us to estimate the relative extent of "adverse" deaccessioning and study its features. Using different microeconometric models, we demonstrate its dependence on the size of the museum and on macroeconomic conditions. We find that deaccessioning did not rise in the US during the financial crisis, an interesting result to be explored in further analysis.

10:15
Testing Instrument Validity with Covariates

ABSTRACT. We develop a test for instrument validity in the heterogeneous treatment effect model that can accommodate conditioning covariates. Building on a common empirical setting of marginal treatment effect analysis, we assume semiparametric dependence between the potential outcomes and conditioning covariates, and show that this allows us to express the testable implication of instrument validity in terms of inequality restrictions among the subdensities of estimable partial residuals. To develop a powerful test of these implications, we propose the use of a 'distilled instrument': a transformation of the instrument and propensity scores designed to extract the sample information useful for detecting violations of instrument validity. Our test procedure assesses the validity of the distilled instrument, and rejecting its validity allows us to reject the validity of the original instrument as well. We perform Monte Carlo exercises to demonstrate the gain in power from using a distilled instrument and the importance of controlling for conditioning covariates when testing for instrument validity. We apply our test procedure to the college proximity instrument of Card (1993), the same-sex instrument of Angrist and Evans (1998), the school leaving age instrument of Oreopoulos (2006), and the mean land gradient instrument of Dinkelman (2011). We find that the null of instrument validity conditional on covariates cannot be rejected for Card (1993) and Angrist and Evans (1998), but it is rejected at conventional levels of significance for Oreopoulos (2006) and in some cases for Dinkelman (2011).

11:00-12:40 Session 5A: Vector Auto Regressions
11:00
SVARs with breaks: Identification and inference

ABSTRACT. In this paper we propose a class of structural vector autoregressions (SVARs) characterized by structural breaks (SVAR-WB) and study the identification of the structural parameters. We consider a very general set of restrictions on the parameters and provide a condition for identification that is only a function of the restrictions and does not depend on the estimated parameters. If the condition is met, the SVAR-WB is locally identified almost everywhere in the parameter space. Moreover, the strategy we use to obtain the structural form of the model enables us to mix equality restrictions with sign restrictions, which is a novelty for SVAR-WBs. We also show, through a set of examples, that allowing for structural breaks can be a benefit in terms of identification, providing more flexibility in the kinds of restrictions to impose for achieving identification of the model. As identification holds locally but not globally, there is a set of isolated structural parameters that are observationally equivalent in the parameter space. In this respect, common frequentist and Bayesian approaches both produce unreliable inference: the former focuses on just one of these observationally equivalent points, while the latter suffers from a non-vanishing sensitivity to the prior. To overcome these issues, we propose alternative approaches for estimation and inference that account for all admissible observationally equivalent structural parameters. Both the theory of identification and the inference procedures are illustrated through a set of examples and an empirical application on the transmission of US monetary policy over the great inflation and great moderation regimes.

11:25
Bootstrap Diagnostics in Proxy-SVARs with Weak Proxies

ABSTRACT. Bootstrap methods are debated and routinely applied in Structural Vector Autoregressions (SVARs) identified and estimated by external instruments (proxies), in the so-called proxy-SVAR/SVAR-IV approach. In this paper we link the bootstrap to the identification issues that arise in proxy-SVARs where proxies are 'weak' in the sense that their correlation with the instrumented structural shocks satisfies a local-to-zero embedding. We show that the bootstrap can be constructively used to build a computationally straightforward diagnostic test for strong against weak proxies. The test is particularly useful for practitioners: it amounts to a normality test, requires neither novel asymptotic distributions nor novel critical values, and is robust to VAR innovations and proxies characterized by conditional heteroskedasticity. The empirical size and power properties of the suggested diagnostic test are analyzed through a set of Monte Carlo experiments, which show that in finite samples its performance is comparable to that of first-stage F-statistics borrowed from the literature on instrumental variable regressions. On the empirical side, the test is applied to a fiscal proxy-SVAR estimated on US quarterly data to infer the tax multiplier, using a narrative fiscal proxy for the tax shock.

11:50
Inference in Non-stationary High-Dimensional VARs

ABSTRACT. We use the lag-augmentation idea of Toda & Yamamoto (1995) to build an inferential procedure that is valid for high-dimensional non-stationary VAR models. We prove that the augmentation can be restricted to only the variables of interest for the test, thereby reducing the loss of power coming from misspecification of the model. By means of a post-double-selection procedure in which we use the lasso to reduce the parameter space, we are able to partial out the effect of nuisance parameters and establish uniform asymptotics. We apply our procedure to the FRED-MD dataset to investigate the main macroeconomic drivers of unemployment and inflation.
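
The core lag-augmentation idea in its original low-dimensional form (the paper's lasso-based, high-dimensional extension is not reproduced): fit one extra lag but Wald-test only the original ones, so the statistic keeps its chi-squared limit under non-stationarity.

```python
# Toda-Yamamoto lag augmentation in a tiny bivariate system: regress y on
# p+1 lags of (y, x) but test only the first p x-lags. Illustrative sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
T, p = 400, 1
x = np.cumsum(rng.normal(size=T))              # I(1) driver
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * x[t - 1] + rng.normal()

pa = p + 1                                      # augmented lag order
Z = np.column_stack([np.ones(T - pa)] +
                    [y[pa - j:T - j] for j in range(1, pa + 1)] +
                    [x[pa - j:T - j] for j in range(1, pa + 1)])
yy = y[pa:]
b, *_ = np.linalg.lstsq(Z, yy, rcond=None)
e = yy - Z @ b
s2 = e @ e / (len(yy) - Z.shape[1])
V = s2 * np.linalg.inv(Z.T @ Z)                 # homoskedastic OLS covariance
idx = [1 + pa + j for j in range(p)]            # positions of x-lags 1..p only
wald = b[idx] @ np.linalg.inv(V[np.ix_(idx, idx)]) @ b[idx]
print("Wald:", wald, "p-value:", 1 - stats.chi2.cdf(wald, df=p))
```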

12:15
Dimension Reduction for High Dimensional Vector Autoregressive Models

ABSTRACT. This paper aims to decompose a large-dimensional vector autoregressive (VAR) model into two components, the first generated by a small-scale VAR and the second a white noise sequence. Hence, a reduced number of common factors generates the entire dynamics of the large system through a VAR structure. This modelling extends the common feature approach to high-dimensional systems, and it differs from dynamic factor models, in which the idiosyncratic components can also embed a dynamic pattern. We show the conditions under which this decomposition exists, and we provide statistical tools to detect its presence in the data and to estimate the parameters of the underlying small-scale VAR model. We evaluate the practical value of the proposed methodology through simulations as well as empirical applications to both economic and financial time series.

11:00-12:40 Session 5B: Microeconomic Evaluation
Location: Room Li Cossi
11:00
Depowering Risk: Vehicle Power Restriction and Young Driver Accidents in Italy

ABSTRACT. This paper investigates how vehicle power restrictions on novice drivers affect road accidents among young drivers. Exploiting rich administrative data on all severe accidents that occurred in Italy between 2006 and 2016, combined with the driving permits census, we compare the evolution of accident rates among differently exposed cohorts of young adults. We find that the reform lowers road accidents per capita by about 19%. While the lower inflow of young drivers into the pool of road users can partly explain this reduction (the number of licenses issued post-reform fell by 23.5%), we also find that drivers subject to the new regulation are about 13% less likely to cause a road accident. Consistent with the identification strategy, the effect is entirely driven by a reduction of accidents involving cars with above-limit engines and is primarily explained by fewer accidents due to excessive speed. The estimated effects are long-lasting: older drivers no longer exposed to the power restriction (which only lasts one year) exhibit lower accident rates. Regression-discontinuity estimates suggest that the reform affects driver-vehicle pairing, as sales of car models with barely complying engines grow relative to those of above-limit models. Our findings highlight the importance of policies aimed at reducing young drivers' exposure to high-risk settings. This aspect is especially crucial in environments characterized by asymmetric information, where screening mechanisms based on individual risk attitude and ability are unlikely to constitute an effective policy tool.

11:25
Anti-Mafia Law Enforcement and Lending in Mafia's Lands. Evidence from the Judicial Administration in Italy

ABSTRACT. We analyze the impact of a preventive measure aimed at fighting criminal organizations' activities on the bank-firm relationship in the four Italian regions with the highest mafia density over the period 2004-2016. Taking advantage of the staggered firm-level anti-mafia enforcement actions, we implement a difference-in-differences approach and find that, after entering judicial administration, mafia-infiltrated firms experience a 19 percent contraction of bank credit and have a higher probability of being credit rationed compared with the control group. These results are robust to alternative sample selections (i.e., propensity score matching) and to different robustness exercises. The effect is due to changes in both demand-driven and supply-driven determinants of loans after confiscation. Finally, we study whether the confiscation of infiltrated firms produces externalities on non-infiltrated companies, and show that banks do not reassess the overall credit risk in local markets.

11:50
Hang Up on Stereotypes: Domestic Violence and Anti-Abuse Helpline Campaign

ABSTRACT. We estimate the consequences of a government-led anti-domestic-abuse campaign, launched in the midst of the Covid-19 pandemic, on the number of calls to the Italian domestic violence helpline. In the week after the start of the campaign, we document a two-fold increase in the number of calls, which keeps rising throughout the lockdown. By exploiting variation in exposure to the campaign ads aired on TV, we show that the effectiveness of the media campaign is hindered in areas where gender stereotypes are stronger, even when differentials in income and violence are accounted for. More efforts to break down gender stereotypes are needed to successfully increase domestic violence reporting.

12:15
Sweeping the Dirt Under the Rug: Measuring Spillovers from an Anti-Corruption Measure

ABSTRACT. Using data on Italian public procurement, I show that the implementation of a law enforcement measure on a municipal government has two effects on neighbouring municipalities. First, they increase the number of contracts under a threshold below which evidentiary requirements become less stringent, making it more difficult to prove any infraction. This response accounts for 1 percent of the yearly expenditure on large contracts. Second, they renegotiate fewer contracts, a practice that is often associated with corruption. The results suggest that, in response to law enforcement, local administrators exploit less monitored margins of the procurement process and engage less in activities that are signals of potential corruption so as to minimise scrutiny by law enforcement bodies. I provide evidence that this is indeed the case. Using a technique from natural language processing, I show that municipalities split large projects into contracts smaller than the threshold and the response occurs only in sectors that are more vulnerable to corruption (i.e. construction and waste management).

11:00-12:40 Session 5C: Empirical Finance
11:00
Climate Sin Stocks: Stock Price Reactions to Global Climate Strikes

ABSTRACT. The first Global Climate Strike on March 15, 2019 represented a historic turn in climate activism. We investigate the cross-section of European stock price reactions to this event. Looking at a large sample of European firms, we find that the unanticipated success of this event caused a substantial stock price reaction for high-carbon-intensity companies. These findings are likely driven by an update of investors' beliefs about the level of environmental social norms in the economy and the anticipation of future developments in climate regulation.

11:25
Does one (unconventional) size fit all? Effects of the ECB's unconventional monetary policies on the euro area economies

ABSTRACT. This paper assesses the macroeconomic impact of the unconventional monetary policies (UMPs) that the ECB has put in place in the euro area since the 2007 financial crisis. We first document how the relative importance of the main transmission channels of such measures has changed over time, with portfolio rebalancing being generally more impactful than the signaling channel after the "Whatever it takes" speech of July 2012. However, we also provide evidence of a great degree of heterogeneity across core and peripheral economies, as well as over time. We then adopt a time-varying SVAR with stochastic volatility to account for such heterogeneity, identifying UMP shocks via "dynamic" sign restrictions. By means of a counterfactual experiment, we provide evidence of how a more aggressive stance on the part of the ECB would have helped support the economic performance of peripheral euro area economies over the period 2011-2012. Notably, if the ECB had not increased its rates in 2011, output growth and inflation in the peripheral euro area would have been, on average, 0.4 and 0.3 percentage points higher.

11:50
A Euro Area Term Structure Model with Time-Varying Exposures

ABSTRACT. Using monthly data for Belgium, France, Germany, Italy and Spain over the period 2002-2019, we build a Hierarchical Euro Area Dynamic Nelson-Siegel model that allows for time-varying exposures of national factors to the common components, and for stochastic volatility at both the regional and country-specific level. Although the share of national variance explained by the Euro Area factors is generally dominant, our results point to a dramatic decrease in the relative importance of common forces during the 2008 and 2012 crises, which created a clear separation between 'core' and 'peripheral' countries. This gap is particularly visible in the term premia demanded by investors on long-term sovereign bonds. Furthermore, in line with Byrne et al. (2019), we find that both the level of interest rates and the associated term premia are closely related to confidence and uncertainty measures. In the aftermath of the crises these relationships appear weakened, presumably due to the unconventional interventions of the ECB.
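
For reference, the standard Dynamic Nelson-Siegel loadings that such a hierarchical model builds on; the decay parameter below is the common Diebold-Li choice for monthly maturities, not necessarily the paper's value:

```python
# Standard Dynamic Nelson-Siegel loadings (level, slope, curvature).
# lam = 0.0609 is the usual Diebold-Li value; an illustrative assumption here.
import numpy as np

def dns_loadings(tau, lam=0.0609):
    """tau: maturities in months; returns a (len(tau), 3) loading matrix."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones(len(tau)), slope, slope - np.exp(-x)])

maturities = np.array([3, 12, 36, 60, 120])     # months
L = dns_loadings(maturities)
# fitted yields: y(tau) = L @ [level, slope, curvature] factors
```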

12:15
Monetary policy shocks and inflation inequality

ABSTRACT. We evaluate household-level inflation rates since 1980, for which we compute various dispersion measures. We assess their reaction to monetary policy, using Romer and Romer (2004) shocks as proxies. We find, first, that contractionary monetary policy shocks significantly and persistently decrease inflation dispersion in the economy and, second, that different demographic groups are heterogeneously affected by monetary policy. Due to different consumption bundles, lower-income households experience higher average inflation, which at the same time decreases more after a contractionary monetary policy shock, leading to an overall convergence of inflation rates between income groups. These results imply that consumption and income inequality look significantly different once heterogeneous inflation rates are controlled for.

11:00-12:40 Session 5D: Price fluctuations
11:00
World shocks and commodity price fluctuations: evidence from resource-rich economies

ABSTRACT. We identify world shocks driving up real commodity prices in a Bayesian dynamic factor model setting, using a minimal set of sign restrictions complemented with constrained short-run responses. We find that a world demand shock and a world supply shock explain the lion's share of commodity price and commodity currency fluctuations, besides shaping the real business cycle in resource-rich economies. However, estimated reactions to global disturbances differ with countries' levels of economic development and the intensity of their trade activities. We also show that a scarcity of energy products in exports is responsible for negligible effects of world commodity shocks on the domestic economy. Finally, our findings suggest that the non-tradable sector benefits from resource price booms, in line with the Dutch disease theory associated with this type of economy.

11:25
Testing and Modelling Time Series with Time Varying Tails

ABSTRACT. The occurrence of extreme observations in a time series depends on the heaviness of the tails of its distribution. The paper proposes a dynamic conditional score (DCS) model for dynamic shape parameters that govern the tail index. The model is based on the Generalised t family of conditional distributions, allowing for the presence of asymmetric tails and therefore the possibility of specifying different dynamics for the left and right tail indices. The paper examines through simulations both the convergence properties of the model and the implications of the link functions used. In addition, the paper introduces, and studies the size and power properties of, a new Lagrange Multiplier (LM) test based on fitted scores to detect the presence of dynamics in the tail index parameter. The paper also shows that the novel LM test is more effective than existing tests based on fitted scores. The model is fitted to Equity Index and Credit Default Swap returns. It is found that the tail index for equities has dynamics driven mainly by either the upper or the lower tail, depending on whether leverage is taken into account. In the case of Credit Default Swaps, the test identifies very persistent dynamics in both tails. Finally, the implications of dynamic tail indices for the estimated conditional distribution are assessed in terms of conditional distribution forecasting.
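
A generic score-driven (DCS) recursion, illustrated with a time-varying scale under a Student's t for simplicity; the paper drives tail-index parameters instead, so this only shows the mechanics of the updating equation:

```python
# Generic DCS recursion f_{t+1} = omega + beta*f_t + alpha*s_t, shown for a
# time-varying log-scale under a Student's t. Not the paper's tail-index model.
import numpy as np

def dcs_filter(y, omega, alpha, beta, nu):
    f = np.zeros(len(y))                        # f_t = log sigma_t^2
    for t in range(len(y) - 1):
        sig2 = np.exp(f[t])
        w = (nu + 1) / (nu + y[t] ** 2 / sig2)  # robust weight from the t-score
        s = w * y[t] ** 2 / sig2 - 1            # scaled score w.r.t. log sigma^2
        f[t + 1] = omega + beta * f[t] + alpha * s
    return np.exp(f / 2)                        # filtered scale path

rng = np.random.default_rng(5)
y = rng.standard_t(df=5, size=500)
sigma = dcs_filter(y, omega=0.0, alpha=0.05, beta=0.95, nu=5.0)
```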

11:50
Differences in Sectoral Price Dynamics among Italian Regions: Effects on Expenditure Composition and Welfare

ABSTRACT. This paper analyses the evolution of consumption expenditure in the Italian regions, investigating the impact of sectoral price dynamics on the aggregate sectoral composition as well as on the representative household's welfare. In line with the structural-change macroeconomic literature, the paper builds on a parsimonious structural model characterized by non-homothetic preferences and a balanced growth path. The results confirm that differences in price dynamics do not significantly affect the evolution of sectoral expenditure shares, while income dynamics play an important role. Yet the welfare analysis shows that a harmonisation of regional price dynamics may lead to significant welfare improvements. These findings support the relevance of supply-side policies aimed at increasing competitiveness and, consequently, at containing inflation in the Italian regions.

12:15
Global Cities and Local Challenges: Booms and Busts in the London Real Estate Market

ABSTRACT. In this paper we investigate the dynamic features of house prices in London. Using a generalized smooth transition (GSTAR) model, we show that dynamic symmetry in price cycles in the London housing market is strongly rejected. We also show that the GSTAR model is able to replicate the features of the observed cycle in simulated data. Further, our results show that the proposed model performs well when compared to other linear and nonlinear specifications in an out-of-sample forecasting exercise.
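
The logistic transition function at the core of smooth transition autoregressions, which the GSTAR model generalizes; all parameters below are illustrative:

```python
# Logistic smooth transition between two AR regimes (the STAR mechanism that
# GSTAR generalizes). Coefficients are placeholders, not the paper's estimates.
import numpy as np

def logistic_transition(s, gamma, c):
    """G in [0,1]; gamma controls how smoothly regimes blend around c."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def simulate_star(T, gamma=5.0, c=0.0, seed=6):
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        G = logistic_transition(y[t - 1], gamma, c)
        # persistence 0.9 in one regime, 0.2 in the other: asymmetric cycles
        y[t] = (0.9 * (1 - G) + 0.2 * G) * y[t - 1] + rng.normal()
    return y

prices = simulate_star(500)
```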

13:30-14:30 Session 6: Keynote: Gagliardini

Keynote: Patrick Gagliardini, Università della Svizzera Italiana  

"Extracting Statistical Factors When Betas are Time Varying", joint work with Hao Ma (Università della Svizzera Italiana)

14:30-15:30 Session 7: Keynote: Giglio

Keynote: Stefano Giglio, Yale School of Management

"Uncertainty and volatility shocks: implications for macroeconomics and finance"

16:00-17:40 Session 8A: Multivariate Time Series
16:00
Streaming and peer effects on the development of social capital: An analysis based on a multivariate causal hidden Markov model

ABSTRACT. We study the effect of streaming pupils into classes by ability level on the development of their adult civic engagement, considering individuals' participation in voting and in socio-political organizations. The empirical analysis uses data from the British National Child Development Study, a cohort study that follows all UK citizens born during a certain week of 1958. In particular, we use six dummy variables indicating civic participation collected at ages 33, 42, and 51. To this end, we develop a causal version of the hidden (latent) Markov model for longitudinal data so as to study the dynamics of the evolution of civic engagement over time. The underlying stochastic process is represented by a sequence of discrete latent variables with initial and transition probabilities depending, through suitable parameterizations, on treatment and post-treatment covariates. The model is estimated by maximizing a weighted log-likelihood function, with weights corresponding to the inverse of the propensity score of the received treatment, estimated through a multinomial logit model involving pre-treatment covariates. Our results show that the practice of streaming according to students' ability levels can have social effects in the long run. Low-ability children who are grouped into homogeneous-ability classes at primary school develop, in adulthood (at age 33), significantly lower civic engagement than their peers who attended non-streamed classes. By contrast, the effect of attending a streamed class relative to a non-streamed class on civic engagement is positive for primary-school students of high ability.
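
A sketch of the weighting step described above: inverse propensity scores from a multinomial logit on pre-treatment covariates (the weighted hidden Markov likelihood itself is not reproduced, and the data are synthetic placeholders):

```python
# Inverse-propensity weights from a multinomial logit, as used to weight each
# unit's contribution to the hidden Markov log-likelihood. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
X_pre = rng.normal(size=(n, 3))                 # pre-treatment covariates
treat = rng.integers(0, 3, size=n)              # streaming regime (3 levels, assumed)

mnl = LogisticRegression(max_iter=1000).fit(X_pre, treat)  # multinomial for 3 classes
p = mnl.predict_proba(X_pre)                    # (n, 3) propensity scores
w = 1.0 / p[np.arange(n), treat]                # inverse propensity weights
# w would then multiply each unit's term in the weighted HMM log-likelihood
```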

16:25
The Time-Varying Multivariate Autoregressive Index Model

ABSTRACT. Many economic variables feature changes in their conditional mean and volatility. Time-Varying Parameter Vector Autoregressive models (TVP-VARs) are often used to handle such complexity in the data; unfortunately, as the number of series grows, they present increasing estimation and interpretation problems. This paper addresses this issue by proposing a new Multivariate Autoregressive Index (MAI) model that features time-varying mean and volatility. Technically, we develop a new estimation methodology that mixes a switching algorithm with the forgetting-factor strategy of Koop and Korobilis (2012). This substantially reduces the computational burden and allows us to select or weight, in real time, the number of factors and other features of the data using Dynamic Model Selection (DMS) or Dynamic Model Averaging (DMA) without further computational cost. Using US macroeconomic data, we provide a forecasting exercise that demonstrates the feasibility and usefulness of this new model.

16:50
Asymptotic properties of the ML estimator for mixed causal and noncausal models with generalized Student’s t-distribution.

ABSTRACT. This paper analyzes the asymptotic properties of the Maximum Likelihood Estimator for mixed causal and noncausal models with error terms following a Student's t-distribution. We compare several existing methods and propose an alternative approach based on the empirical variance computed on the generalized Student's t, even when the population variance does not exist. Monte Carlo simulations show the good performance of our new estimator for fat-tailed series.

17:15
Discrete Mixtures of Normals Pseudo Maximum Likelihood Estimators of Structural Vector Autoregressions

ABSTRACT. Likelihood inference in structural vector autoregressions with independent non-Gaussian shocks leads to parametric identification and efficient estimation at the risk of inconsistencies under distributional misspecification. We prove that autoregressive coefficients and (scaled) impact multipliers remain consistent, but the drifts and standard deviations of the shocks are generally inconsistent. Nevertheless, we show consistency when the non-Gaussian log-likelihood is a discrete scale mixture of normals in the symmetric case, or an unrestricted finite mixture more generally. Our simulation exercises compare the efficiency of these estimators to other consistent proposals. Finally, our empirical application looks at dynamic linkages between three popular volatility indices.
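
The pseudo log-likelihood of a discrete scale mixture of normals, the family shown to deliver consistency in the symmetric case; component weights and scales below are illustrative:

```python
# Log-likelihood of a two-component discrete scale mixture of normals, the
# pseudo-likelihood family referenced above. Parameters are placeholders;
# in estimation they would be maximized jointly with the model coefficients.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def smn_loglik(x, pi=(0.9, 0.1), sigma=(1.0, 3.0)):
    comps = [np.log(p) + norm.logpdf(x, scale=s) for p, s in zip(pi, sigma)]
    return logsumexp(np.vstack(comps), axis=0).sum()

x = np.random.default_rng(11).standard_t(df=4, size=1000)  # fat-tailed sample
print(smn_loglik(x))
```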

16:00-17:40 Session 8B: Nonstationarity and nonlinearity
Location: Room Li Cossi
16:00
Impulse response analysis for structural dynamic models with nonlinear regressors

ABSTRACT. We study the construction of nonlinear impulse responses in structural dynamic models that include nonlinearly transformed regressors. Such models have played an important role in recent years in capturing asymmetries, thresholds and other nonlinearities in the responses of macroeconomic variables to exogenous shocks. The conventional approach to estimating nonlinear responses is by Monte Carlo integration. We show that the population impulse responses in this class of models may instead be derived analytically from the structural model. We use this insight to study under what conditions local projection (LP) estimators may be used to recover the population impulse responses. We find that, unlike in vector autoregressive models, the asymptotic equivalence between estimators based on the structural model and LP estimators breaks down. Only in one important special case can the population impulse response be consistently estimated by LP methods. The construction of this LP estimator, however, differs from the LP approach currently used in the literature. Simulation evidence suggests that the modified LP estimator is less accurate in finite samples than estimators based on the structural model, when both are valid.
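
The baseline local projection estimator discussed above, in its simplest linear form (the paper's modified LP estimator is not reproduced):

```python
# Simplest linear local projection: for each horizon h, regress y_{t+h} on the
# shock (plus a constant and one lag of y). Illustrative, not the paper's code.
import numpy as np

def local_projection(y, shock, H, lags=1):
    irf = np.zeros(H + 1)
    for h in range(H + 1):
        lhs = y[lags + h:]                                 # y_{t+h}
        rhs = np.column_stack([np.ones(len(lhs)),
                               shock[lags:len(y) - h],     # shock_t
                               y[lags - 1:len(y) - h - 1]])  # y_{t-1} control
        b, *_ = np.linalg.lstsq(rhs, lhs, rcond=None)
        irf[h] = b[1]                                      # response at horizon h
    return irf

rng = np.random.default_rng(8)
T = 500
shock = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + shock[t] + 0.3 * rng.normal()
irf = local_projection(y, shock, H=12)                     # true IRF: 0.6**h
```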

16:25
Testing Linear Cointegration Against Smooth Transition Cointegration

ABSTRACT. We develop tests of the null hypothesis of linear cointegration against the alternative of smooth transition cointegration. The test statistics are based on the fully modified OLS (compare Phillips and Hansen, 1990) or integrated modified OLS (compare Vogelsang and Wagner, 2014a) estimators, suitably modified to handle Taylor approximations of smooth transition functions. This necessitates adapting the above-mentioned estimation approaches to models including cross-products of integrated regressors. As transition variables we consider integrated variables and time. For the integrated modified OLS based test we also develop fixed-b inference. The properties of the tests are evaluated in a simulation study and compared with the test proposed by Choi and Saikkonen (2004).

16:50
Unit-root test within a threshold ARMA framework

ABSTRACT. It is generally hard to distinguish a nonlinear stationary process with a local unit root from a linear process admitting a global unit root. Yet these two data generating mechanisms have starkly different long-run dynamics, with far-reaching consequences for forecasting, control, etc. We propose a new unit-root test to address this problem, by extending the null hypothesis to an integrated moving-average process (IMA(1,1)) and the alternative to a first-order threshold autoregressive moving-average process (TARMA(1,1)). We derive the limiting distribution and asymptotic similarity of the test statistic. The proof of tightness of the test is of independent and general theoretical interest for unit-root testing within a nonlinear framework. Moreover, we propose a wild bootstrap version of the proposed method. Our proposals generally enjoy good size properties and outperform most existing tests, especially when there is an unaccounted-for moving average component under either the null or the alternative hypothesis. We support the view that rejection does not necessarily imply nonlinearity, so that unit-root tests should not be used uncritically to select a model. Finally, we present an application to real exchange rates.

17:15
Bootstrapping Non-Stationary Stochastic Volatility

ABSTRACT. In this paper we investigate to what extent the bootstrap can be applied to conditional mean models, such as regression or time series models, when the volatility of the innovations is random and possibly non-stationary. In fact, the volatility of many economic and financial time series displays persistent changes and possible non-stationarity. However, the theory of the bootstrap for such models has focused on deterministic changes of the unconditional variance and little is known about the performance and the validity of the bootstrap when the volatility is driven by a non-stationary stochastic process. This paper fills this gap in the literature by developing conditions for bootstrap validity in time series and regression models with non-stationary, stochastic volatility. We show that in such cases the distribution of bootstrap statistics (conditional on the data) is random in the limit. We use the concept of 'weak convergence in distribution' to develop and establish novel conditions for validity of the wild bootstrap, conditional on the volatility process. We apply our results to several testing problems in the presence of non-stationary stochastic volatility, including testing in a location model, testing for structural change using CUSUM-type functionals, and testing for a unit root in autoregressive models. Importantly, we work under sufficient conditions for bootstrap validity that include the absence of statistical leverage effects, i.e., correlation between the error process and its future conditional variance. The results of the paper are illustrated using Monte Carlo simulations, which indicate that a wild bootstrap approach leads to size control even in small samples.
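
A skeleton of a wild bootstrap test in a location model, the first testing problem listed above: Rademacher multipliers preserve the volatility pattern of each residual. Illustrative only, not the paper's procedure:

```python
# Wild bootstrap t-test of a zero mean when volatility is a non-stationary
# stochastic process. Rademacher weights keep each residual's scale intact.
import numpy as np

def wild_bootstrap_ttest(y, B=999, seed=9):
    rng = np.random.default_rng(seed)
    n = len(y)
    tstat = np.sqrt(n) * y.mean() / y.std(ddof=1)
    e = y - y.mean()                            # residuals with H0 imposed
    tb = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=n)     # Rademacher multipliers
        yb = e * w                              # zero mean holds by construction
        tb[b] = np.sqrt(n) * yb.mean() / yb.std(ddof=1)
    return tstat, (np.abs(tb) >= np.abs(tstat)).mean()   # two-sided p-value

# volatility path: absolute value of a random walk (non-stationary, assumed DGP)
rng = np.random.default_rng(10)
sigma = np.abs(np.cumsum(rng.normal(size=500))) / 5 + 0.5
y = sigma * rng.normal(size=500)
tstat, pval = wild_bootstrap_ttest(y)
```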

16:00-17:15 Session 8C: Economic Modelling
16:00
On rational fat tails

ABSTRACT. We examine the role of sunspot shocks in generating fat-tailed behavior of endogenous variables in equilibrium business cycle models without departing from the Gaussian rational expectations (RE) benchmark. We formally establish that any RE model exhibiting indeterminacy admits a linear recursive equilibrium representation as a function of regularly varying multiplicative noise (LRMN). This in turn allows small shocks to fuel large deviations, thereby imparting non-Gaussian features to equilibrium patterns in standard Gaussian environments. Numerical simulations and an estimation exercise highlight the ability of LRMN representations to replicate statistical regularities with respect to fat-tailed distributions and high-probability extreme outcomes.

16:25
Make hay while the sun shines: an empirical study of maximum price, regret and trading decisions

ABSTRACT. Using a dynamic extension of Regret Theory, we test how the regret induced by not selling a stock when the maximum price of an investment episode is attained shapes the propensity to sell that stock. We use a large discount brokerage dataset containing US households' trading records between 1991 and 1996. Expected utility predicts that investors should stop at a threshold, whilst a regret-driven agent does not necessarily stop there. We observe that investors do not follow a threshold strategy in our data: only 31.6% of gains are sold on the day when the maximum is attained, and 25.8% of losses are sold on the day when the minimum is attained. We find, first, that more sophisticated and younger investors are more likely to follow a threshold strategy. Second, investors are more likely to sell a stock for a gain closer in time to the occurrence of the maximum and at a price further from the running maximum price of the stock in the investment episode. Anticipated regret and belief updating might explain this pattern. The propensity to sell a gain steadily declines shortly after the maximum is attained. We suggest that traders might regret not selling at a time close to the maximum day and hold onto the stock if a long time has passed.

16:50
Mixture Choice Data: Revealing Preferences and Cognition

ABSTRACT. Mixture choice data consist of the joint distribution of choices of a group of agents from a collection of menus, comprising the implied stochastic choice function plus any cross-menu correlations. When agents are heterogeneous with respect to both preferences and cognition, we show that these two components of behavior can be revealed simultaneously by appropriate mixture choice data. We then extend this finding to stochastic consideration sets, random satisficing thresholds, multinomial logit, and Fechnerian models of cognition. Finally, we demonstrate how the mixture choice framework can be used by applying it to a preexisting experimental dataset.