
16:40 - 18:20 Parallel Session J – CFE-CMStatistics


CO458 Room MAL G15 MODELLING AND FORECASTING CYCLICAL FLUCTUATIONS I Chair: Gian Luigi Mazzi

CO0243: The Great Moderation in historical perspective: Is it that great?

Presenter: Lola Gadea, University of Zaragoza, Spain

The Great Moderation (GM) is widely documented in the literature as one of the most important changes in the US business cycle. All the papers that analyze it use post WWII data. For the first time we place the GM in a long historical perspective, stretching back a century and a half, which includes secular changes in the economic structure and a substantial reduction of output volatility. We find two robust structural breaks in volatility at the end of WWII and in the mid-eighties, showing that the GM still holds in the longer perspective. Furthermore, we show that GM volatility reduction is only linked to expansion features. We also date the US business cycle in the long run, finding that volatility plays a primary role in the definition of the business cycle, which has important consequences for econometricians and forecasters.
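
A minimal illustration of the kind of break-in-variance dating the abstract refers to (not the authors' procedure; the simulated series, break date and two-regime Gaussian profile likelihood below are assumptions for demonstration):

```python
# Illustrative single-break-in-variance search (not the authors' procedure):
# scan candidate break dates for a growth series and pick the date that
# maximises the two-regime Gaussian profile likelihood for the variance.
import numpy as np

rng = np.random.default_rng(0)
T, tau_true = 600, 400
growth = np.concatenate([2.0 * rng.standard_normal(tau_true),       # high-volatility era
                         0.8 * rng.standard_normal(T - tau_true)])  # "moderation"

def split_loglik(x, tau):
    a, b = x[:tau], x[tau:]
    # Gaussian profile log-likelihood with regime-specific variances (constants dropped)
    return -0.5 * len(a) * np.log(a.var()) - 0.5 * len(b) * np.log(b.var())

candidates = np.arange(50, T - 50)            # trim the sample edges
tau_hat = candidates[np.argmax([split_loglik(growth, t) for t in candidates])]
print("estimated break date:", tau_hat, "(true break at", tau_true, ")")
```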

CO0265: The low-variance, high-risk economy: Lessons from the higher moments of MSI-VARs Presenter: Alexander Karalis Isaac, Warwick, United Kingdom

The aim is to determine whether Markov-switching models capture the non-Gaussian features of economic data evident since the Financial Crisis.

We derive exact solutions for the third and fourth moments of MSI-VARs under mean square stability. This allows us to model the Financial Crisis and the Great Moderation in a single framework. For U.S. data, the post-1983 business cycle describes a low-variance, high-risk economy, with skewness of −1.1 and kurtosis of 6.6. A Markov-switching model with four states splits the sample irreversibly in 1983 and captures the new moment structure. This enables economists to model both the asymmetry and the probability of rare disasters in GDP growth, consistent with data generated in the era of global financial liberalisation.
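
An illustrative sketch, assuming made-up parameter values, of how a Markov-switching intercept model generates the negative skewness and excess kurtosis discussed above (a simulation toy, not the authors' exact-moment solutions):

```python
# Simulation toy: a two-state Markov-switching intercept AR(1) for GDP growth.
# Parameter values are made up; regime 1 is a rare "disaster" state.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
P = np.array([[0.98, 0.02],       # transition probabilities between regimes
              [0.50, 0.50]])
mu = np.array([0.8, -2.0])        # regime-dependent intercepts (per cent growth)
phi, sigma = 0.3, 0.5             # AR coefficient and innovation s.d.

T, s = 100_000, 0
y = np.zeros(T)
for t in range(1, T):
    s = rng.choice(2, p=P[s])                              # draw next regime
    y[t] = mu[s] + phi * y[t - 1] + sigma * rng.standard_normal()

print("skewness:", stats.skew(y))                          # negative: rare, deep contractions
print("kurtosis:", stats.kurtosis(y, fisher=False))        # above 3: fat tails
```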

CO0705: Nested dynamic factor modeling: A coherent approach to measure national and state coincident indexes Presenter: Juan-Carlos Martinez-Ovando, ITAM, Mexico

Dynamic factor models have been used as a workhorse to measure business cycles from several sources of economic information. However, when the economic information is available at both aggregated and disaggregated levels (state or sectoral), the computations derived from this methodology can exhibit inconsistencies. A solution to that problem was proposed previously by deriving an ad-hoc procedure to consistently measure coincident indexes for the 50 states of the US economy. We develop an alternative procedure based on the notion of a nested dynamic factor model, i.e. a dimension reduction technique which simultaneously takes into consideration the information contained in the coincident economic information for the states' economies and for the aggregate. Our procedure thereby generalizes the approach previously adopted, and allows us to provide a coherent reading of local and aggregated business cycles. We illustrate our proposal by computing coherent national and state coincident indexes for the US and Mexico.

CO0854: Combining composite indicators and advanced graphical tools for monitoring Euro area and member states cycles Presenter: Gian Luigi Mazzi, Eurostat, Luxembourg

Co-authors: Jacques Anas, Monica Billio, Ludovic Cales

For several years, Eurostat has been monitoring the cyclical situation of the Euro area and its largest economies by means of cyclical composite indicators. Such indicators, based on MS-VAR models, aim to simultaneously detect peaks and troughs of the growth and business cycles within the so-called ABCD sequence. Furthermore, at the Euro area level, the acceleration cycle is also monitored by means of a univariate MS model. Firstly, we present the preliminary results of a project targeting full-coverage monitoring of the Euro area cycles, obtained by extending composite indicators similar to those already in use to all Euro area member countries plus the UK. Problems encountered in constructing such indicators, especially due to data availability, are analysed and related solutions are presented. Secondly, we show how the results of the cyclical composite indicators can be presented in an intuitive, easy-to-read and user-friendly graphical representation. The core of such a graphical tool is a clockwise representation of the cyclical fluctuations. The characteristics of the tool are presented and some examples are proposed to show its potential from the analysts' point of view.

CO562 Room MAL B20 RECENT ADVANCES IN BAYESIAN COMPUTATIONAL METHODS Chair: Gael Martin

CO0295: Fast and efficient MCMC for large data problems using data subsampling and the difference estimator

Presenter: Matias Quiroz, Stockholm University and Sveriges Riksbank, Sweden Co-authors: Mattias Villani, Robert Kohn

The aim is to propose a generic Markov chain Monte Carlo (MCMC) algorithm to speed up computations for datasets with many observations. A key feature of our approach is the use of the highly efficient difference estimator from the survey literature to estimate the log-likelihood accurately using only a small fraction of the data. Our algorithm improves on the O(n) complexity of regular MCMC by operating over local data clusters instead of the full sample when computing the likelihood. The likelihood estimate is used in a pseudo-marginal framework to sample from a perturbed posterior which is within O(m^{-1/2}) of the true posterior, where m is the subsample size. The method is applied to a logistic regression model to predict firm bankruptcy for a large data set. We document a significant speed-up in comparison to standard MCMC on the full dataset.
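
A hedged sketch of the difference-estimator idea for the log-likelihood (not the authors' implementation; the toy logistic model and the first-order Taylor surrogate are assumptions, and in practice the surrogate sum would be maintained cheaply rather than recomputed in full):

```python
# Difference-estimator sketch for the log-likelihood of a toy logistic
# regression: cheap per-observation surrogates q_i plus a subsample-based
# correction. In a real implementation the surrogate sum is maintained
# cheaply; here it is recomputed in full only for clarity.
import numpy as np

rng = np.random.default_rng(2)
n, d = 100_000, 5
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

def loglik_terms(beta, idx=None):
    Xs, ys = (X, y) if idx is None else (X[idx], y[idx])
    eta = Xs @ beta
    return ys * eta - np.log1p(np.exp(eta))            # per-observation log-likelihood

def difference_estimator(beta, beta0, m=1_000):
    eta0 = X @ beta0
    grad0 = y - 1.0 / (1.0 + np.exp(-eta0))            # d l_i / d eta at beta0
    q = loglik_terms(beta0) + (X @ (beta - beta0)) * grad0   # first-order surrogate q_i
    idx = rng.choice(n, size=m, replace=False)         # small random subsample
    diff = loglik_terms(beta, idx) - q[idx]            # exact minus surrogate
    return q.sum() + n * diff.mean()                   # the difference estimator

beta0, beta = np.zeros(d), 0.1 * np.ones(d)
print("difference estimate :", difference_estimator(beta, beta0))
print("exact log-likelihood:", loglik_terms(beta).sum())
```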

CO1282: Accelerating Metropolis-Hastings algorithms by delayed acceptance Presenter: Christian Robert, Universite Paris-Dauphine, France

Co-authors: Marco Banterle, Clara Grazian, Anthony Lee

MCMC algorithms such as Metropolis-Hastings algorithms are slowed down by the computation of complex target distributions, as exemplified by huge datasets. We offer a useful generalisation of the Delayed Acceptance approach, devised to reduce the computational costs of such algorithms by a simple and universal divide-and-conquer strategy. The idea behind the generic acceleration is to divide the acceptance step into several parts, aiming at a major reduction in computing time that outranks the corresponding reduction in acceptance probability. Each of the components can be sequentially compared with a uniform variate, the first rejection signalling that the proposed value is considered no further. Moreover, we develop theoretical bounds for the variance of associated estimators with respect to the variance of the standard Metropolis-Hastings algorithm and detail some results on optimal scaling and general optimisation of the procedure.
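
A minimal sketch of the delayed-acceptance step on a toy one-dimensional target (the cheap/expensive factorisation below is an assumption; it is not the authors' code):

```python
# Delayed-acceptance Metropolis-Hastings on a toy 1-d target: the acceptance
# ratio is factored into a cheap screen and an expensive correction, and the
# expensive factor is only evaluated if the cheap screen passes.
import numpy as np

rng = np.random.default_rng(3)

def log_cheap(x):       # cheap factor (e.g. prior or a subsample likelihood)
    return -0.5 * x ** 2

def log_expensive(x):   # expensive remainder, so cheap + expensive = full log-target
    return -0.1 * x ** 4

def da_mh_step(x, step=1.0):
    prop = x + step * rng.standard_normal()          # symmetric random-walk proposal
    if np.log(rng.random()) >= log_cheap(prop) - log_cheap(x):
        return x                                     # early rejection: expensive part untouched
    if np.log(rng.random()) >= log_expensive(prop) - log_expensive(x):
        return x
    return prop

x, chain = 0.0, []
for _ in range(20_000):
    x = da_mh_step(x)
    chain.append(x)
print("posterior mean (should be near 0):", np.mean(chain))
```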

CO0157: On consistency of approximate Bayesian computation Presenter: David Frazier, Monash University, Australia Co-authors: Gael Martin, Christian Robert

Approximate Bayesian computation (ABC) methods have become increasingly prevalent of late, facilitating as they do the analysis of intractable, or challenging, statistical problems. With the initial focus being primarily on the practical import of ABC, exploration of its formal statistical properties has begun to attract more attention. The aim is to establish general conditions under which ABC methods are Bayesian consistent, in the sense of producing draws that yield a degenerate posterior distribution at the true parameter (vector) asymptotically (in the sample size). We derive conditions under which arbitrary summary statistics yield consistent inference, with these conditions linked to the identification of the true parameters. Using simple illustrative examples that have featured in the literature, we demonstrate that identification, and hence consistency, is unlikely to be achieved in many cases, and propose a simple diagnostic procedure that can indicate the presence of this problem. We also touch upon the link between consistency and the use of auxiliary models within ABC, and illustrate the subsequent results in a simple Lotka-Volterra predator-prey model. Lastly, we explore the relationship between consistency and the use of marginalization to obviate the curse of dimensionality.
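
For concreteness, a rejection-ABC toy example under an assumed model, summary statistic and tolerance (not the authors' setup); it illustrates why identification through the summary is what drives consistency:

```python
# Rejection ABC on a toy Gaussian-mean problem (model, summary and tolerance assumed).
import numpy as np

rng = np.random.default_rng(4)
theta_true = 2.0
data = rng.normal(theta_true, 1.0, size=200)
s_obs = data.mean()                               # summary statistic

def abc_rejection(n_draws=50_000, eps=0.05):
    kept = []
    for _ in range(n_draws):
        theta = rng.uniform(-10, 10)              # prior draw
        s_sim = rng.normal(theta, 1.0, size=200).mean()
        if abs(s_sim - s_obs) < eps:              # keep draws whose summaries match
            kept.append(theta)
    return np.array(kept)

post = abc_rejection()
# Because the sample mean identifies theta, the accepted draws concentrate
# around theta_true as the sample size grows and eps shrinks (Bayesian consistency).
print(len(post), "accepted; posterior mean about", post.mean())
```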

CO1187: On the properties of variational approximations of Gibbs posteriors Presenter: James Ridgway, University Paris Dauphine, France

Co-authors: Nicolas Chopin, Pierre Alquier

The PAC-Bayesian approach is a powerful set of techniques to derive non-asymptotic risk bounds for random estimators. The corresponding optimal distribution of estimators, usually called the Gibbs posterior, is unfortunately intractable. One may sample from it using Markov chain Monte Carlo, but this is often too slow for big datasets. We consider instead variational approximations of the Gibbs posterior, which are fast to compute. We undertake a general study of the properties of such approximations. Our main finding is that such a variational approximation often has the same rate of convergence as the original PAC-Bayesian procedure it approximates. We specialise our results to several learning tasks (classification, ranking, matrix completion), discuss how to implement a variational approximation in each case, and illustrate the good properties of said approximation on real datasets.

CO554 Room MAL 414 MIXED-FREQUENCY TIME SERIES Chair: J Isaac Miller

CO0325: Simple robust tests for the specification of high-frequency predictors of a low-frequency series Presenter: J Isaac Miller, University of Missouri, United States

Two simple variable addition test statistics are proposed for three tests of the specification of high-frequency predictors in a model to forecast a series observed at a lower frequency. The first one is similar to existing test statistics and we show that it is robust to biased forecasts, integrated and cointegrated predictors, and deterministic trends, while it is feasible and consistent even if estimation is not feasible under the alternative. It is not robust to biased forecasts with integrated predictors under the null of a fully aggregated predictor, and size distortion may be severe in this case. The second test statistic proposed is an easily implemented modification of the first one that sacrifices some power in small samples but is also robust to this case.

CO0461: The Beveridge-Nelson decomposition of mixed-frequency series Presenter: Yasutomo Murasawa, Konan University, Japan

Gibbs sampling for Bayesian VAR with mixed-frequency series draws latent high-frequency series and model parameters sequentially. Applying the multivariate Beveridge-Nelson (B-N) decomposition in each Gibbs step, one can simulate the joint posterior distribution of the B-N permanent and transitory components in latent and observable high-frequency series. This method is applied to mixed-frequency series of macroeconomic variables including quarterly real GDP to estimate the monthly natural rates and gaps of output, inflation, interest, and unemployment jointly. The resulting monthly real GDP and GDP gap are complementary coincident indices, measuring classical and deviation cycles respectively.
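
A univariate Beveridge-Nelson decomposition for an AR(1) in first differences, as a hedged stand-in for the multivariate, mixed-frequency case described above (parameter values and the simulated series are assumptions):

```python
# Univariate Beveridge-Nelson decomposition for an AR(1) in first differences
# (toy stand-in for the multivariate mixed-frequency case; parameters assumed).
import numpy as np

rng = np.random.default_rng(5)
T, c, phi = 400, 0.5, 0.4
dy = np.zeros(T)
for t in range(1, T):
    dy[t] = c + phi * dy[t - 1] + rng.standard_normal()
y = dy.cumsum()                                  # level of the series

mu = c / (1 - phi)                               # unconditional mean growth
# BN permanent component: current level plus all expected future growth in
# excess of trend, which for an AR(1) equals phi/(1-phi) * (dy_t - mu).
bn_trend = y + (phi / (1 - phi)) * (dy - mu)
bn_gap = y - bn_trend                            # transitory (cycle) component
print("standard deviation of the BN gap:", bn_gap.std())
```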

CO0618: Time-varying mixed-frequency vector autoregressive models Presenter: Thomas Goetz, Deutsche Bundesbank, Germany

Co-authors: Klemens Hauzenberger

Many of the existing macroeconomic forecasting models ignore the mismatch in the series' sampling frequencies, the possibility of (smooth) structural changes, or the joint dynamics between the variables involved (or all of the above). To simultaneously address the aforementioned data features, we introduce a time-varying parameter mixed-frequency vector autoregressive (TVP-MF-VAR) model. To keep our approach feasible beyond small VARs, we limit time variation to the constants and error variances. We estimate the time-varying parameters using two approximation techniques: forgetting factors in the prediction step of the Kalman filter, and exponentially weighted moving averages (EWMA) for the error variances. This approach reduces the computational burden, thus allowing us to evaluate many relatively large VARs (up to 20 variables) in a recursive forecasting exercise in a reasonable amount of time. For a small VAR, we examine the validity of our approximate approach by comparing it to a model that is based on exact MCMC methods. Furthermore, we assess our model's forecasting ability by comparing it to a pure TVP-, a pure MF- and a classical VAR using German data.
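
A single-equation sketch of the two approximations mentioned above, with illustrative choices of the forgetting factor and EWMA decay (this is not the authors' MF-VAR code):

```python
# One-equation sketch: forgetting-factor Kalman recursions for time-varying
# coefficients plus an EWMA for the error variance (lambda and kappa assumed).
import numpy as np

rng = np.random.default_rng(6)
T, k = 500, 2
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
beta_path = np.cumsum(0.02 * rng.standard_normal((T, k)), axis=0) + np.array([1.0, 0.5])
y = np.einsum("tk,tk->t", X, beta_path) + 0.5 * rng.standard_normal(T)

lam, kappa = 0.99, 0.96            # forgetting factor, EWMA decay
beta = np.zeros(k)
P = 10.0 * np.eye(k)
h = 1.0                            # EWMA estimate of the error variance
for t in range(T):
    P = P / lam                    # prediction step: forgetting inflates uncertainty
    x = X[t]
    e = y[t] - x @ beta            # one-step-ahead forecast error
    S = x @ P @ x + h
    K = P @ x / S                  # Kalman gain
    beta = beta + K * e
    P = P - np.outer(K, x @ P)
    h = kappa * h + (1 - kappa) * e ** 2   # EWMA update of the error variance

print("final coefficient estimates:", beta)
print("true final coefficients    :", beta_path[-1])
```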

CO0781: The estimation of continuous time models with mixed frequency data Presenter: Marcus Chambers, University of Essex, United Kingdom

We consider exact representations for discrete time mixed frequency data generated by an underlying multivariate continuous time model. Allowance is made for different combinations of stock and flow variables as well as deterministic trends, and the variables themselves may be stationary or nonstationary (and possibly cointegrated). The resulting discrete time representations allow for the information contained in high frequency data to be utilised alongside the low frequency data in the estimation of the parameters of the continuous time model. Monte Carlo simulations explore the finite sample performance of the maximum likelihood estimator of the continuous time system parameters based on mixed frequency data, and a comparison with extant methods of using data only at the lowest frequency is provided. An empirical application demonstrates the methods and some ways in which the present analysis can be extended and refined are discussed.

CO552 Room MAL B33 APPLIED ECONOMETRICS Chair: Michael Owyang

CO0356: Taylor type monetary policy rules with financial market expectations Presenter: Michael Owyang, Federal Reserve Bank of St Louis, United States Co-authors: Eric Ghysels

Taylor rules are often used to characterize the systematic component of monetary policy. In the U.S., changes in the policy rate are typically made at scheduled meetings. The data that are available at these times are of different vintages. We develop a model that accounts for the variation in hard data vintage and uses soft data–high frequency financial data–to update hard data releases in computing expectations. The expectations are then used to estimate Taylor-type rules.

CO0546: A comprehensive evaluation of macroeconomic forecasting methods Presenter: Ana Galvao, University of Warwick, United Kingdom

Co-authors: George Kapetanios, Andrea Carriero

The proposed forecasting evaluation compares the performance of four state-of-the-art multivariate forecasting models: Factor-Augmented Distributed Lag (FADL) models, Mixed Data Sampling (MIDAS) models, Bayesian Vector Autoregressive (BVAR) models and a medium-sized Dynamic Stochastic General Equilibrium (DSGE) model. We use these models to predict output growth and inflation with datasets from the US, UK, Euro area, Germany, France, Italy and Japan. Our evaluation considers both the accuracy of point and density forecasts, and forecast horizons from nowcasting up to two years ahead. We find predictability of inflation at all horizons, but no predictability of output growth at the two-year-ahead horizon. MIDAS models are an adequate choice for nowcasting output growth and quarterly inflation, but at longer horizons BVAR and factor specifications are a better choice. The medium-sized DSGE model is able to deliver superior long-horizon forecasts of US and UK inflation. There is no clear evidence that a large set of predictors (one hundred) improves the accuracy of forecasts in comparison with a medium set (a dozen predictors). If there are gains from the use of large datasets, they are likely to be during the more recent period (2008-2011) and using BVARs for output growth and combination MIDAS models for inflation. We also observe that UK and US output growth density forecasts of models with large datasets may be better calibrated than those with smaller datasets.

CO0564: Adaptive state space models

Presenter: Ivan Petrella, Bank of England, United Kingdom Co-authors: Davide Delle Monache, Fabrizio Venditti

The estimation of state-space models with time-varying parameters typically implies the use of computationally intensive methods. Moreover, when volatility evolves stochastically the model ceases to be conditionally Gaussian and requires nonlinear filtering techniques. We model parameters' variation in a Gaussian state-space model by letting their dynamics be driven by the score of the predictive likelihood. In this setup, conditionally on past data, the model remains Gaussian and the likelihood function can be evaluated using the Kalman filter. We derive the analytical expressions for the score and the information matrix which are needed to update the time-varying system matrices. We show that this leads to a new set of recursions running in parallel with the standard Kalman filter recursions. The resulting algorithm allows us to estimate simultaneously the unobserved state vector and the time-varying parameters by maximum likelihood. The model is further extended to handle data at mixed frequencies.
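
A stripped-down illustration of score-driven parameter updating, here for a time-varying variance in a univariate Gaussian model (a stand-in for the state-space system matrices discussed above; omega, alpha and beta are assumed values):

```python
# GAS-style sketch: a time-varying log-variance driven by the score of the
# Gaussian predictive likelihood (omega, alpha, beta are assumed values).
import numpy as np

rng = np.random.default_rng(7)
T = 2_000
true_sigma2 = np.exp(np.sin(np.linspace(0, 6, T)))        # slowly varying variance
y = np.sqrt(true_sigma2) * rng.standard_normal(T)

omega, alpha, beta = 0.0, 0.10, 0.97
f = np.log(y[:50].var())                                  # f_t = log variance
f_path = np.empty(T)
for t in range(T):
    f_path[t] = f
    score = 0.5 * (y[t] ** 2 / np.exp(f) - 1.0)           # d log-likelihood / d f
    f = omega + beta * f + alpha * score                  # score-driven update

print("correlation with true log-variance:",
      np.corrcoef(f_path, np.log(true_sigma2))[0, 1])
```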

CO1058: Real-time forecasting with a large, mixed frequency, Bayesian VAR Presenter: Tatevik Sekhposyan, Texas A&M University, United States Co-authors: Michael McCracken, Michael Owyang

Point forecasts from a large, mixed-frequency, structural vector autoregression (VAR) are assessed. The VAR we consider uses data at monthly and quarterly frequencies to obtain forecasts of low-frequency variables such as output growth on a more frequent basis. The structure imposed on the VAR allows us to account for the temporal ordering of the data explicitly, thus accounting for the effects of temporal surprises across the variables in a more interpretable manner. Our framework relies on a blocking model, i.e. an econometric model specified at a low frequency, where high-frequency observations of a particular variable are stacked, i.e. treated as individual economic series occurring at the low frequency. Since stacking results in a high-dimensional system of equations, we rely on Bayesian shrinkage techniques to mitigate parameter proliferation. We use our model for short-term forecasting of the U.S. economy, as well as for structural analysis. The relative performance of the model is compared to the factor model and private sector forecasts.
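
A small sketch of the stacking ("blocking") step as understood from the description above; the layout and toy series are assumptions:

```python
# Assumed layout: 240 monthly observations are reshaped into 80 quarters of
# three "stacked" monthly series and joined with quarterly GDP growth, so a
# low-frequency (Bayesian) VAR can treat each month-of-quarter as its own series.
import numpy as np

rng = np.random.default_rng(8)
monthly = rng.standard_normal(240)                   # e.g. monthly employment growth
quarterly_gdp = rng.standard_normal(80)

stacked = monthly.reshape(-1, 3)                     # columns: month 1, month 2, month 3 of each quarter
Y = np.column_stack([quarterly_gdp, stacked])        # quarterly system with 4 "variables"
print(Y.shape)                                       # (80, 4): input to the blocked VAR
```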

CO540 Room MAL B35 ECONOMETRICS OF DYNAMIC PORTFOLIOS AND RISK Chair: Jean-Michel Zakoian

CO0507: Real uncertainty and the zero lower bound

Presenter: Guillaume Roussellet, NYU Stern School of Business, United States

Both term structures of U.S. nominal and inflation-linked bonds are used to identify real uncertainty and the associated risk premia, in and out of the zero lower bound (ZLB). Regression analyses are first used to provide stylized facts on real term and inflation risk premia, and to derive new Fama conditions. We propose a ZLB-consistent affine pricing model for both term structures, encompassing simultaneously flexible inflation dynamics and providing non-negative nominal yields. We extract risk premia components, showing their consistency with the stylized facts. Decomposing the sources of real uncertainty, we document that although short-term inflation uncertainty is high at the ZLB, the predictability of nominal and real excess-returns is improved during this period.

CO0511: Filtered historical simulations for estimating the conditional risk of a dynamic portfolio Presenter: Christian Francq, CREST and University Lille III, France

The estimation of the conditional Value-at-Risk (VaR) of a portfolio of assets is considered. The composition of the portfolio is time-varying and the vector of returns is assumed to follow a multivariate GARCH-type model. Under the assumption that the distribution of the innovations is spherical, the asymptotic distribution of an estimator of conditional VaR is established. We also derive the asymptotic properties of the so-called Filtered Historical Simulation (FHS) method, which does not need the sphericity assumption. We compare the FHS method with the method based on the sphericity assumption, via Monte Carlo experiments and empirical studies, and illustrate the superiority of the two multivariate approaches over a univariate approach based on the sole series of portfolio returns.
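
A minimal filtered-historical-simulation sketch in which an EWMA variance filter stands in for the multivariate GARCH model of the talk (illustrative data and decay parameter):

```python
# EWMA-filtered historical simulation of a 1-day 99% VaR (illustrative only;
# an EWMA filter replaces the multivariate GARCH model of the talk).
import numpy as np

rng = np.random.default_rng(9)
T = 2_000
sigma2 = np.empty(T); sigma2[0] = 1.0
r = np.empty(T)
for t in range(T):                                   # toy returns with volatility clustering
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    if t + 1 < T:
        sigma2[t + 1] = 0.05 + 0.90 * sigma2[t] + 0.05 * r[t] ** 2

lam = 0.94                                           # EWMA decay (RiskMetrics value)
h = np.empty(T); h[0] = r[:50].var()
for t in range(1, T):
    h[t] = lam * h[t - 1] + (1 - lam) * r[t - 1] ** 2

z = r / np.sqrt(h)                                   # standardized ("filtered") residuals
h_next = lam * h[-1] + (1 - lam) * r[-1] ** 2        # tomorrow's conditional variance
var_99 = -np.sqrt(h_next) * np.quantile(z, 0.01)     # rescale the empirical 1% quantile
print("1-day 99% FHS VaR:", var_99)
```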

CO1369: Deep conditional portfolio sorts

Presenter: Benjamin Moritz, Ludwig Maximilian University of Munich, Germany Co-authors: Tom Zimmermann

Which variables provide independent information about the cross-section of future returns? Standard techniques like portfolio sorts and Fama-MacBeth regressions cannot easily answer this question when the number of candidate variables is large and when cross-terms might be important as well. We introduce a new method, deep conditional portfolio sorts, that can be used in this context. To estimate the model, we import ideas from the machine learning literature and tailor them to our setting. We apply the method to past-return based predictions, and we recover short-term returns (i.e. the past six most recent one-month returns) as the most important predictors. A trading strategy based on these findings has Sharpe and information ratios that are about twice as high as in a Fama-MacBeth framework that accounts for two-way interactions. Transaction costs do not explain these results. Implications for the analysis of cross-sectional predictor variables going forward are discussed in the conclusion.
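
A schematic of a tree-based conditional sort on simulated data (not the authors' method or dataset): a boosted-tree model predicts next-month returns from twelve lagged monthly returns, and decile portfolios are formed on the fitted values; the evaluation below is in-sample for brevity:

```python
# Schematic only (simulated data, in-sample fit): a boosted-tree model in
# place of the deep conditional sorts described above. The toy DGP assumes
# short-term reversal in the most recent monthly return.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(10)
n_stocks, n_lags = 5_000, 12
past = 0.05 * rng.standard_normal((n_stocks, n_lags))       # twelve lagged monthly returns
next_ret = -0.2 * past[:, 0] + 0.05 * rng.standard_normal(n_stocks)

model = GradientBoostingRegressor(max_depth=3, n_estimators=200)
model.fit(past, next_ret)
pred = model.predict(past)

deciles = np.quantile(pred, np.linspace(0, 1, 11))
long_leg = next_ret[pred >= deciles[-2]].mean()              # highest predicted decile
short_leg = next_ret[pred <= deciles[1]].mean()              # lowest predicted decile
print("long-short spread:", long_leg - short_leg)
print("most informative lags:", np.argsort(model.feature_importances_)[::-1][:3])
```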

CO0782: On the empirical saddlepoint approximation with application to asset pricing Presenter: Benjamin Holcblat, BI Norwegian Business School, Norway

We prove new theoretical results regarding the ESP (empirical saddlepoint) approximation, and apply it to consumption-based asset pricing. Firstly, we prove the existence of the ESP estimand, which is the intensity of the solutions to estimating equations. The challenge of this proof comes from the possible multiplicity of the solutions. Secondly, we prove global consistency and asymptotic normality of the ESP approximation. The application suggests that the basic consumption-based asset-pricing model is more consistent with the data than other inference approaches suggest.

CO414 Room MAL B34 DENSITY REGRESSION, TREE MODELS, AND VARIABLE SELECTION Chair: Carlos Carvalho

CO0601: Shrinkage estimation of treatment effects: Dealing with many controls

Presenter: Carlos Carvalho, The University of Texas at Austin, United States Co-authors: Richard Hahn

A shrinkage strategy is presented to estimate linear treatment effects in the presence of potentially very many controls. The approach is based on a re-parametrization that allows us to identify treatment effects by leveraging all the positive aspects of variable selection and shrinkage priors. The method looks to find the sweet spot in the bias-variance trade-off where we reduce the variability of simple OLS while avoiding the extreme bias associated with naive applications of Bayesian variable selection.

CO0629: Multiscale spatial density smoothing

Presenter: James Scott, University of Texas at Austin, United States

The estimation of a spatially varying density function is considered, motivated by problems that arise in large-scale radiological survey and anomaly detection. Four challenges make this a difficult problem. First, the density at any given spatial location may have both smooth and non-smooth features. Second, the spatial correlation is neither stationary nor isotropic. Third, the spatial correlation decays at different length scales for different parts of the density. Finally, at some spatial locations, there is very little data. We present a method called multiscale spatial density smoothing that successfully addresses these challenges. The method is motivated by the same construction that underlies a Polya-tree prior, in that it is based on a recursive dyadic partition of the underlying density function. We also describe an efficient algorithm for finding a maximum a posteriori (MAP) estimate that leverages recent advances in convex optimization for non-smooth functions.

CO1215: Block hyper-g priors in Bayesian regression

Presenter: Christopher Hans, The Ohio State University, United States

Thick-tailed mixtures of g-priors have gained traction as a default choice of prior distribution in Bayesian regression. The motivation for these priors usually focuses on properties of model comparison and variable selection as well as computational considerations. Standard mixtures of g-priors mix over a single, common scale parameter that shrinks all regression coefficients in the same manner. The particular form of the mixture distribution determines the model comparison properties. We focus on the effect of the mono-shrinkage induced by the use of a single scale parameter and propose new mixtures of g-priors that allow for differential shrinkage across collections of coefficients. We introduce a new "conditional information asymptotic" that is motivated by the common data analysis setting where at least one regression coefficient is much larger than others. We analyze existing mixtures of g-priors under this limit and reveal two new behaviors, "Essentially Least Squares (ELS)" estimation and a "Conditional Lindley's Paradox (CLP)", and argue that these behaviors are undesirable. As the driver behind both of these behaviors is the use of a single, latent scale parameter that is common to all coefficients, we propose a block hyper-g prior that allows for differential shrinkage across collections of covariates and provide conditions under which ELS and the CLP are avoided by the new class of priors.

CO1285: Density regression with Bayesian additive regression trees Presenter: Jared Murray, Carnegie Mellon University, United States

Modeling how an entire density changes with covariates (“density regression”) is an important but challenging generalization of mean and quantile regression models. We introduce a new continuous latent variable model for density regression. Treating this unobserved variable as the input to a nonparametric regression function induces a flexible model for the conditional density of the response (given the observed covariates) after marginalizing over the latent variable. This model has a natural interpretation in terms of omitted variables, and only requires prior distributions to be specified for one or two regression functions (in contrast to covariate-dependent mixture models). Bayesian additive regression trees (BART) are used as priors over location and scale (or bandwidth) regression functions, yielding attractive invariance properties and computationally efficient posterior inference.

CO657 Room MAL B36 REGIME CHANGE MODELING IN ECONOMICS AND FINANCE II Chair: Marco Gross

CO1123: Simulated ML estimation of a financial agent-based herding model

Presenter: Jiri Kukacka, Charles University in Prague - Faculty of Social Sciences, Czech Republic Co-authors: Jozef Barunik

We apply the recently developed simulated MLE methodology to a stylised financial agent-based herding model where noise traders switch between optimistic and pessimistic states. We test small-sample properties of the estimator via Monte Carlo simulations and confirm important theoretical features of the estimator such as consistency and asymptotic efficiency. By exploring the behaviour of the simulated log-likelihood objective function, we also verify the identification of parameters and the theoretical assumptions of the estimation method. Next, we estimate the model using three stock market indices (DAX, SP500, and Nikkei), the price of gold in USD, and three exchange rates (USD/EUR, USD/YEN, and CHF/YEN). Results of full-sample as well as rolling simultaneous estimation of the parameters a and b governing switches of opinion and sentiment dynamics, together with the standard deviation of the innovations to the fundamental value, are presented. Finally, we compare and contrast the performance of the NPSMLE method to the simulated method of moments approach.
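
A hedged sketch of the simulated maximum likelihood (NPSML) principle on a toy switching model (the transition mechanism below is an assumption, not the herding model of the talk): simulate the one-step-ahead distribution, kernel-smooth it, and sum the log densities:

```python
# Toy NPSML sketch (not the authors' herding model): the "model" is an
# assumed mean-reverting process whose shocks switch between calm and
# excited regimes with probability b; a controls mean reversion.
import numpy as np

rng = np.random.default_rng(11)

def simulate_step(x_prev, a, b, n_sim):
    shock_scale = np.where(rng.random(n_sim) < b, 2.0, 0.5)
    return (1 - a) * x_prev + shock_scale * rng.standard_normal(n_sim)

def npsml_loglik(data, a, b, n_sim=2_000):
    ll = 0.0
    for t in range(1, len(data)):
        sims = simulate_step(data[t - 1], a, b, n_sim)     # simulated one-step-ahead draws
        h = 1.06 * sims.std() * n_sim ** (-0.2)            # Silverman bandwidth
        dens = np.exp(-0.5 * ((data[t] - sims) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        ll += np.log(dens.mean() + 1e-300)                 # kernel estimate of the conditional density
    return ll

# Generate data from the toy model, then compare the simulated log-likelihood
# at the true parameters with a mis-specified pair (the former is typically larger).
data = np.zeros(300)
for t in range(1, 300):
    data[t] = simulate_step(data[t - 1], a=0.2, b=0.1, n_sim=1)[0]
print("log-likelihood at (0.2, 0.1):", npsml_loglik(data, 0.2, 0.1))
print("log-likelihood at (0.6, 0.4):", npsml_loglik(data, 0.6, 0.4))
```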

CO1345: Convex Phillips curves: Literature review, a theoretical model and an empirical analysis for the Euro area Presenter: Marco Gross, European Central Bank, Germany

Co-authors: Willi Semmler

We develop a theoretical model that features a state-dependent relation between output, price inflation and inflation expectations, augmenting a previous model with a nonlinear Phillips curve that reflects the rationale underlying the capacity constraint theory. Our empirical assessment for the Euro area backs the theory - based on a regime-switching Phillips curve and a regime-switching monetary structural VAR - by confirming the presence of a significant convex relationship between inflation and the output gap. Convexity means that the beta of inflation on the output gap increases during times of economic expansion and abates during times of recession. The regime-switching monetary SVAR reveals the business cycle dependence of macroeconomic responses to monetary policy shocks: Expansionary monetary policy (be it via conventional or unconventional measures) induces less pressure on price inflation at times of weak growth as opposed to strong growth; thereby rationalizing relatively stronger expansionary policy, including unconventional volume-based policy such as the Expanded Asset Purchase Programme (EAPP) of the ECB, at times of recession.

CO1386: Macroeconomic regime switching and technological change Presenter: Tommaso Ferraresi, University of Pisa and IRPET, Italy Co-authors: Willi Semmler, Andrea Roventini

We assess the effects of (adjusted) TFP on GDP and hours worked in bad and good times. More precisely, we estimate over several time spans (the longest being 1950:1-2011:4) different threshold vector autoregressions (TVAR) allowing for different threshold variables (e.g. GDP growth and financial stress indexes) and we assess the paths of generalized impulse response functions in order to judge whether TFP shocks differently affect hours and output according to the state of the economy. Moreover, consistently with the prevailing literature, we assess whether the relation is time
