Ensaios Econômicos

EPGE Escola Brasileira de Economia e Finanças
No. 812
ISSN 0104-8910

Central Bank Credibility and Inflation Expectations: A Microfounded Forecasting Approach

João Victor Issler, Ana Flávia Soares


The published articles are the sole responsibility of their authors. The opinions expressed in them do not necessarily reflect the views of Fundação Getulio Vargas.

EPGE Escola Brasileira de Economia e Finanças
General Director: Rubens Penha Cysne
Vice-Director: Aloisio Araujo
Coordinator of Institutional Regulation: Luis Henrique Bertolino Braido
Undergraduate Coordinators: Luis Henrique Bertolino Braido & André Arruda Villela
Academic Graduate Coordinators: Humberto Moreira & Lucas Jóver Maestri
Coordinators of the Professional Master's in Economics and Finance: Ricardo de Oliveira Cavalcanti & Joísa Campanher Dutra

Issler, João Victor

Central Bank Credibility and Inflation Expectations: A Microfounded Forecasting Approach / João Victor Issler, Ana Flávia Soares. Rio de Janeiro: FGV, EPGE, 2019.

40 p. (Ensaios Econômicos; 812). Includes bibliography.


Central Bank Credibility and Inflation Expectations: A Microfounded Forecasting Approach

João Victor Issler
FGV EPGE
joao.issler@fgv.br

Ana Flávia Soares
FGV EPGE
anaflavia1611@hotmail.com

February 2019

Abstract

Credibility is elusive, and no generally agreed-upon measure of it exists. Despite that, Blinder (2000) generated a consensus in the literature by arguing that "A central bank is credible if people believe it will do what it says". It is very hard to argue against such a definition of credibility, which is why it became so popular among central bankers and academics alike.

This paper proposes a measure of credibility that is based upon this definition. A main challenge to implement it is that one needs a measure of people’s beliefs, of what central banks say they do, and a way to compare whether the two are the same.

The core idea of our approach is as follows. First, we measure people's beliefs by using survey data on inflation expectations, employing the panel-data setup of Gaglianone & Issler (2018), which allows filtering the conditional expectation of inflation from survey data. Second, we compare beliefs with explicit (or tacit) targets. Therefore, our measure of what central banks say they do are these targets, which serve as a binding contract between central banks and society. To compare people's beliefs with what central banks say they do, we need to take into account the uncertainty in our estimate of people's beliefs. To do so, we construct robust (heteroskedasticity and autocorrelation consistent, HAC) 95% asymptotic confidence intervals for our estimates of the conditional expectation of inflation. Whenever the target falls into this interval, we consider the central bank credible. We consider it not credible otherwise.

We apply our approach to study the credibility of the Brazilian Central Bank (BCB) by using the now well-known Focus Survey of forecasts, kept by the BCB on inflation expectations. This choice is not merely geographic for us. The Focus Survey is a world-class database, fed by institutions that include commercial banks, asset-management firms, consulting firms, non-financial institutions, etc. About 250 participants can provide daily forecasts for a large number of economic variables (e.g., inflation using different price indices, interest and exchange rates, GDP, industrial production) and for different forecast horizons (e.g., current month, next month, current year, 5 years ahead). At any point in time, about 120 participants are actively using the system that manages the database.

Corresponding author. We thank the comments and suggestions given by Marco Bonomo, Wagner Gaglianone, Felipe Iachan, Marcelo Moreira, Cezar Santos, and participants of the SNDE Symposium held in Tokyo, Japan, the LUBRAMACRO conference held in Aveiro, Portugal, and the IAAE meeting held in Montreal, Canada. Issler thanks CNPq, FAPERJ, INCT, and FGV for financial support on different grants. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

Based on this framework, we estimate on a monthly basis the conditional expectation of inflation 12 months ahead, coupled with a robust estimate of its asymptotic variance and the respective 95% robust confidence interval. Our estimates cover the period from January 2007 until April 2017. This is compared to the target in the Brazilian Inflation Target Regime. Results show that the BCB was credible 65% of the time, the exceptions being a few months in the beginning of 2007 and the interval from mid-2013 through mid-2016. We also constructed a credibility index for this period and compared it with alternative measures of credibility previously proposed in the literature.

Key words: Consensus Forecasts, Forecast Combination, Panel Data, Central Banking.


1 Introduction

Over the last four decades, the question of central bank credibility has been extensively studied by the academic literature on monetary policy. It has also become a major concern for many central bankers around the world, who have taken a number of measures to enhance the credibility of monetary policy. The drive to build central bank credibility was especially strong in countries under Inflation Targeting regimes, but this phenomenon was not restricted to these countries. This is not surprising, given the role that expectations have in most macroeconomic models, where credibility serves as a way of anchoring inflation expectations.

A main problem of measuring credibility is the fact that it is elusive and no generally agreed-upon measure of it exists. Despite that, there is a consensus in the literature that Blinder (2000) offers an uncontroversial definition of credibility when he states that "A central bank is credible if people believe it will do what it says." It is very hard to argue against such a definition of credibility, which is why it became so popular among central bankers and academics alike.

For this reason, this paper proposes a measure of credibility that is based upon this definition. A main challenge to implement it is that one needs a measure of people's beliefs, of what central banks say they do, and a way to compare whether the two are the same.

Compared to the literature on credibility, we approach this problem in a novel way. We focus on inflation expectations, since inflation is arguably the main variable central banks care about. Indeed, many countries that are users of Inflation Targeting regimes have explicit targets for inflation. Even in countries that are not, there is usually a tacit agreement on what that target should be.

The core idea of our approach is as follows. First, we measure people's beliefs by using survey data on inflation expectations1, employing the panel-data setup of Gaglianone & Issler (2018), which allows filtering the conditional expectation of inflation from the survey. Second, we compare beliefs with explicit (or tacit) targets. Therefore, our measure of what central banks say they do are these targets, which serve as a binding contract between central banks and society. To compare people's beliefs with what central banks say they do, we need to take into account the uncertainty in our estimate of people's beliefs. To do so, we construct robust (heteroskedasticity and autocorrelation consistent, HAC) 95% asymptotic confidence intervals for our estimates of the conditional expectation of inflation. Whenever the target falls into this interval, we consider the central bank credible. We consider it not credible otherwise.
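To fix ideas, this decision rule can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's estimator: it applies a Bartlett-kernel (Newey-West) HAC variance to the sample mean of a generic series of filtered beliefs, whereas the paper derives the HAC variance of a GMM-based conditional-expectation estimator (Section 3); the function names are hypothetical.

```python
import numpy as np

def newey_west_var_of_mean(x, lags):
    """Bartlett-kernel (Newey-West) HAC estimate of the variance of the
    sample mean of a weakly dependent series x."""
    x = np.asarray(x, dtype=float)
    u = x - x.mean()
    T = len(u)
    lrv = u @ u / T                        # lag-0 autocovariance
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1)           # Bartlett weight
        lrv += 2.0 * w * (u[j:] @ u[:-j] / T)
    return lrv / T                         # variance of the sample mean

def is_credible(beliefs, target, lags=12, z=1.96):
    """Credible if the target lies inside the 95% HAC confidence interval
    for the mean of the belief series."""
    m = float(np.mean(beliefs))
    half = z * np.sqrt(newey_west_var_of_mean(beliefs, lags))
    return (m - half) <= target <= (m + half)
```

For beliefs hovering around a 4.5% target the rule classifies the central bank as credible; for a target far from the beliefs it does not.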

Of course, there has been active research in the area of credibility, especially in this millennium. Regarding measurement, a pioneering work is due to Svensson (1993), who proposed a simple test to check whether the inflation target is credible in the sense that market agents believe that future inflation will fall within the target range. A number of articles followed, trying to construct credibility measures and credibility indices in the last two decades; see Bomfim & Rudebusch (2000), Cecchetti & Krause (2002), Mendonça & Souza (2007), De Mendonça et al. (2009), Levieuge et al. (2016), Bordo & Siklos (2015), inter alia.

We advance with respect to this literature in three ways. (i) We measure beliefs properly: in most of this literature, people's beliefs are not correctly estimated, since the survey consensus is used to measure beliefs; however, it has been shown that the consensus is a biased estimate of beliefs. (ii) We do not use ad hoc confidence intervals: most of these papers propose the use of ad hoc confidence bands in making comparisons between beliefs and targets. Even if these confidence bands are appropriate for a given country at a specific point in time, they may not be appropriate for a different country and/or for a different point in time in the same country. (iii) Our approach can be universally applied: it is straightforward to use for countries under an inflation targeting program, but it extends naturally to other countries in which a tacit target is present, covering a wide range of countries.

We apply our approach to study the credibility of the Brazilian Central Bank (BCB) by using the now well-known Focus Survey of forecasts, kept by the BCB on inflation expectations. This choice is not merely geographic for us. The Focus Survey is a world-class database, fed by institutions that include commercial banks, asset-management firms, consulting firms, non-financial institutions, etc. About 250 participants can provide daily forecasts for a large number of economic variables (e.g., inflation using different price indices, interest and exchange rates, GDP, industrial production) and for different forecast horizons (e.g., current month, next month, current year, 5 years ahead). At any point in time, about 120 participants are actively using the system that manages the database.

The survey has several key features: (i) participants can access the survey at any time and we can observe their decisions to update or not their forecasts; (ii) the confidentiality of information is guaranteed and the anonymity of forecasters is preserved, i.e., there are no reputational concerns; (iii) the Focus Survey has very strong incentives for participants to update their forecasts, especially on inflation expectations; see Marques (2013) for further details. To apply the techniques discussed above to the Focus Survey, we first extend the work of Gaglianone & Issler (2018) by deriving a robust HAC estimator for the asymptotic variance of the inflation expectation estimator proposed there2.

Based on this framework, we estimate on a monthly basis the conditional expectation of inflation 12 months ahead, coupled with an estimate of its robust asymptotic variance and the respective 95% HAC confidence interval. Our estimates cover the period from January 2007 until April 2017. This is compared to the target in the Brazilian Inflation Target Regime. Results show that the BCB was credible 65% of the time, the exceptions being a few months in the beginning of 2007 and the interval from mid-2013 through mid-2016. We also constructed a credibility index for this period and compared it with alternative measures of credibility previously proposed in the literature.

The remainder of this paper is organized as follows. Section 2 discusses the existing literature on central bank credibility and brings a discussion about the existing indices. Section 3 gives an informal overview of the main results of Gaglianone & Issler (2018), a step-by-step description of the HAC covariance matrix estimation procedure, and develops a new credibility index for the central bank based on this measure. Section 4 brings an empirical application of this framework to evaluate the credibility of the BCB in recent years, using the Focus Survey database. Section 5 concludes.

2 This is one of the original contributions of this paper.

2 The literature on central bank credibility

The issue of whether it is better for the policymaker to operate with pure discretion and poor accountability or to commit to a policy has long been a central question in monetary economics. Kydland & Prescott (1977) showed that a regime where policymakers have to commit is preferable to a regime that allows policymakers discretion. The main idea behind the dynamic optimization argument is that expectations play a central role in macroeconomic dynamics, since economic agents choose their current actions based on the best possible forecasts of future outcomes given available information.

Several central banks have adopted a more systematic approach to maintaining price stability since the early 1990s, particularly under an inflation targeting regime as a method of commitment, explicitly acknowledging that low and stable inflation is the goal of monetary policy while retaining constrained discretion, as argued by Bernanke & Mishkin (1997). At the same time, the theoretical debate in favor of monetary policy rules gained traction with Taylor (1993).

Empirical studies relate the evolution and improvement of the monetary policy institutional framework to better economic outcomes. Many articles in the literature (e.g., Rogoff (1985), Alesina (1988), Grilli et al. (1991), Alesina & Summers (1993), and Cukierman (2008)) show that independent, transparent, accountable, and credible central banks are able to deliver better outcomes. Cecchetti & Krause (2002) study a large sample of countries and find that credibility is the primary factor explaining the cross-country variation in macroeconomic outcomes. Carriere-Swallow et al. (2016) find evidence that price stability and greater monetary policy credibility are important determinants of exchange rate pass-through in a sample of 62 emerging and advanced economies.

Agents' expectations regarding central bank policy are directly tied to the concept of credibility. In a clean and uncontroversial statement about how these two connect, Blinder (2000) argues that "A central bank is credible if people believe it will do what it says"3. The definition of central bank credibility in the literature is usually related to a reputation4 built on strong aversion to inflation5, or a framework that is characterized by an explicit contract with incentive compatibility6, or a commitment to a rule or a specific and clear objective.

3 Cukierman & Meltzer (1986) define monetary policy credibility as "the absolute value of the difference between the policymaker's plans and the public's beliefs about those plans". In our view, this definition is close to that of Blinder, but more controversial, since it goes into much detail as to how to measure credibility.

4 Rogoff (1985) suggested that monetary policy should be placed in the hands of an independent central bank run by a 'conservative' central banker who would have a greater aversion to inflation than that of the public at large. This would help to reduce the inflation bias inherent in discretion.

Svensson (1993) was the first to propose a test of monetary policy credibility, in the sense that market agents believe that future inflation will be on target. He proposes to compare ex-post target-consistent real interest rates with market real yields on bonds. In the case where the central bank has an explicit inflation target, Svensson (2000) proposes to measure credibility as the distance between expected inflation and the target.

Bomfim & Rudebusch (2000) suggest another approach, measuring overall credibility by the extent to which the announcement of a target is believed by the private sector when forming their long-run inflation expectations. Specifically, they assume that the expectation of inflation at time $t$, denoted by $\pi^e_t$, is a weighted average of the current target, denoted by $\pi_t$, and last period's (12-month) inflation rate, denoted by $\pi_{t-1}$:

$$\pi^e_t = \lambda_t \pi_t + (1 - \lambda_t)\pi_{t-1} \quad (1)$$

The parameter $\lambda_t$, with $0 < \lambda_t < 1$, measures the credibility of the central bank. If $\lambda_t = 1$, there is perfect credibility, and the private sector's long-run inflation expectations will be equal to the announced long-run goal of the policymaker. If $\lambda_t = 0$, there is no credibility, and intermediate values of $\lambda_t$ represent partial credibility.
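Equation (1) can be inverted to back out an implied credibility weight from observed data. A minimal sketch (hypothetical function name, inflation in percent):

```python
def implied_lambda(pi_exp, target, pi_lag):
    """Invert pi^e_t = lambda_t * target + (1 - lambda_t) * pi_lag
    to recover the Bomfim-Rudebusch credibility weight lambda_t."""
    if target == pi_lag:
        raise ValueError("lambda_t is not identified when the target "
                         "equals lagged inflation")
    return (pi_exp - pi_lag) / (target - pi_lag)
```

With a 4.5% target, 6% lagged inflation, and expectations at 5.25%, the implied weight is 0.5, i.e., partial credibility.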

Following Bomfim & Rudebusch (2000), Demertzis et al. (2008) model inflation and inflation expectations in a general VAR framework, based on the fact that the two variables are intrinsically related. When the level of credibility is low, inflation will not reach its target because expectations will drive it away, and expectations themselves will not be anchored at the level the central bank wishes. They apply their framework to a group of developed countries and compute an anchoring effect based on $\lambda_t$.

Cecchetti & Krause (2002) construct an index of policy credibility that is an inverse function of the gap between expected inflation and the central bank's target level, taking values from 0 (no credibility) to 1 (full credibility). The index is defined as follows:

$$I_{CK} = \begin{cases} 1 & \text{if } \pi^e \leq \pi_t \\ 1 - \dfrac{\pi^e - \pi_t}{20\% - \pi_t} & \text{if } \pi_t \leq \pi^e \leq 20\% \\ 0 & \text{if } \pi^e \geq 20\% \end{cases}$$

where $\pi^e$ is the expected inflation and $\pi_t$ is the central bank target. Between 0 and 1, the value of the index decreases linearly as expected inflation increases. The authors define 20% as an ad hoc upper bound7.

5 More recently, after the 2008 financial crisis, central bank credibility has also been very much tied to strong aversion to deflation in many advanced economies.

6 Canzoneri (1985), Persson & Tabellini (1993) and Walsh (1995) have analysed alternative ways of allowing discretionary use of monetary policy with an incentive to achieve low inflation on average. One suggestion is to create a penalty on either the government or the central bank if the average inflation rate over a period exceeds the level consistent with price stability. This allows the monetary authorities to respond to shocks without triggering expectations of an inflation bias to policy. One form which such a penalty could take is the announcement in advance of an explicit inflation target. There would be a penalty (political, reputational or, as in New Zealand, loss of tenure of the central bank governor) were the target not to be achieved.
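The three cases translate directly into code. A minimal sketch with inflation rates written as decimals, so the ad hoc 20% bound becomes 0.20:

```python
def cecchetti_krause(pi_exp, target, upper=0.20):
    """Cecchetti-Krause credibility index: 1 at or below target,
    0 at or above the (ad hoc) 20% bound, linear in between."""
    if pi_exp <= target:
        return 1.0
    if pi_exp >= upper:
        return 0.0
    return 1.0 - (pi_exp - target) / (upper - target)
```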

Levieuge et al. (2016) consider an asymmetric measure of credibility based on the linear exponential (LINEX) function, as follows:

$$I_L = \frac{1}{\exp\left(\phi(\pi^e - \pi)\right) - \phi(\pi^e - \pi)}, \quad \text{for all } \pi^e$$

where $\pi^e$ is the inflation expectation of the private sector, $\pi$ is the inflation target and, for $\phi = 1$, positive deviations ($\pi^e > \pi$) will be considered more serious than negative deviations ($\pi^e < \pi$), as the exponential part of the function dominates the linear part when the argument is positive. As with the previous indices, when $I_L = 1$ the central bank has full credibility and when $I_L = 0$ there is no credibility at all.
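A direct transcription of the LINEX index (note that the denominator equals 1 only at $\pi^e = \pi$ and exceeds 1 otherwise, so the index lies in $(0, 1]$):

```python
import math

def linex_credibility(pi_exp, target, phi=1.0):
    """LINEX-based credibility index of Levieuge et al. (2016):
    1 / (exp(phi * d) - phi * d), with d the expectation gap."""
    d = phi * (pi_exp - target)
    return 1.0 / (math.exp(d) - d)
```

The asymmetry is visible directly: a +2 p.p. gap is penalized more heavily than a -2 p.p. gap.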

Bordo & Siklos (2015) propose a comprehensive approach to find cross-country common determinants of credibility. They organize sources of changes in credibility into groups of variables that represent real, financial, and institutional determinants for the proposed central bank credibility proxy. Their preferred index to measure central bank credibility is:

$$I_{BS} = \begin{cases} \pi^e_{t+1} - \pi_t & \text{if } \pi_t - 1 \leq \pi^e_{t+1} \leq \pi_t + 1 \\ (\pi^e_{t+1} - \pi_t)^2 & \text{if } \pi^e_{t+1} < \pi_t - 1 \text{ or } \pi^e_{t+1} > \pi_t + 1 \end{cases}$$

where $\pi^e_{t+1}$ is the one-year-ahead inflation expectation and $\pi_t$ is the central bank target. Credibility is then defined such that the penalty for missing the target is greater when expectations are outside the 1% interval than when forecasts miss the target inside this 1% range.
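In code, with inflation in percentage points so the band is ±1 p.p. Note that, unlike the earlier indices, this is a penalty (0 means on target, larger values mean less credibility); the reading of the second branch as "outside the band" follows the accompanying text:

```python
def bordo_siklos_penalty(pi_exp_next, target):
    """Bordo-Siklos credibility proxy: the raw gap inside the 1 p.p.
    band, the squared gap outside it (a larger penalty)."""
    gap = pi_exp_next - target
    if abs(gap) <= 1.0:
        return gap
    return gap ** 2
```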

There is also a strand of literature that applies state-space models and the Kalman filter to develop credibility measures, since credibility is a latent variable (see Hardouvelis & Barnhart (1989) and Demertzis et al. (2012)). In a recent contribution, Vereda et al. (2017) apply this framework to the term structure of inflation expectations in Brazil to estimate the long-term inflation trend, as it can be associated with the market perception about the target pursued by the central bank. They follow the methodology in Kozicki & Tinsley (2012) and treat the so-called shifting inflation endpoint as a latent variable estimated using the Kalman filter.

7 Mendonça & Souza (2007) propose an extension to the index in Cecchetti & Krause (2002), considering that not only positive deviations but also negative deviations of inflation expectations from the target can generate a loss of credibility:

$$I_{DMGS} = \begin{cases} 1 & \text{if } \pi^{min}_t \leq \pi^e \leq \pi^{max}_t \\ 1 - \dfrac{\pi^e - \pi^{max}_t}{20\% - \pi^{max}_t} & \text{if } \pi^{max}_t < \pi^e < 20\% \\ 1 - \dfrac{\pi^e - \pi^{min}_t}{-\pi^{min}_t} & \text{if } 0 < \pi^e < \pi^{min}_t \\ 0 & \text{if } \pi^e \geq 20\% \text{ or } \pi^e \leq 0 \end{cases}$$

where $\pi^e$ is the inflation expectation of the private sector and $\pi^{min}_t$ and $\pi^{max}_t$ represent the lower and upper bounds of the inflation target range, respectively. The central bank is viewed as non-credible ($I_{DMGS} = 0$) if $\pi^e \geq 20\%$ or $\pi^e \leq 0$.


The disagreement between forecasters has also been related to credibility, focusing not on the consensus forecast but on the distribution of the cross-section of forecasts. Dovern et al. (2012) propose that disagreement among professional forecasters of inflation reflects the credibility of monetary policy, and find that it is related to measures of central bank independence among G7 economies, suggesting that more credible monetary policy can substantially contribute to the anchoring of expectations about nominal variables. They argue that for expectations to be perfectly anchored it is necessary that their cross-sectional dispersion (disagreement) disappears. In that sense, Capistrán & Ramos-Francia (2010) find that the dispersion of long-run inflation expectations is lower in targeting regimes.

Finally, Lowenkron et al. (2007) and Guillén & Garcia (2014) also bring relevant contributions to the Brazilian literature. Lowenkron et al. (2007) study the relation between agents' inflation expectations 12 months ahead and inflation surprises, and also look at inflation-linked bonds to evaluate the relation with inflation risk premia. Guillén & Garcia (2014) look at the distribution of inflation expectations using a Markov chain approach, based on the fact that if an agent is persistently optimistic or pessimistic about inflation prospects8, this implicitly reveals a bias, which is evidence of lack of credibility.

Next, we discuss two important problems with the previous literature and explain how we intend to deal with them.

The first problem relates to how beliefs are measured. It is common to employ private-sector survey expectations of inflation, measured by the cross-sectional average (consensus) of 12-month-ahead inflation expectations. But, from a theoretical point of view, survey-based forecasts can be a biased estimate of the conditional expectation of inflation, as long as agents have an asymmetric loss function; see, e.g., Bates (1969), Granger (1999) and Christoffersen & Diebold (1997).

At least since Ito (1990), research has shown that individual forecasts are biased. The consensus forecast is biased as well, and Elliott et al. (2008) establish that current rationality tests are not robust to small deviations from symmetric loss, having little ability to tell whether the forecaster is irrational or the loss function is asymmetric. Research done with Brazilian data by Guillén & Garcia (2014) has also shown that the consensus of professional forecasters is biased, as they persistently overestimate or underestimate inflation. Gaglianone & Issler (2018) confirm these results using a different testing strategy. They fix the problem by extracting possible sources of bias from the consensus forecast. This is how we will deal with this problem here as well.

The second problem of the previous literature is that most of the credibility indices discussed above employ a threshold above (or below) which credibility is nil. But, in all cases, this threshold is ad hoc and cannot be suitable for all countries and/or all time periods, restricting the validity of these indices and their comparability.

8 The concept of optimistic or pessimistic here is related to an inflation forecast that is below or above the


To deal with ad hoc confidence intervals, we will employ a statistical procedure in constructing them, based on robust (HAC) consistent estimates of the long-run variance of inflation expectations. This allows the construction of confidence intervals for any confidence level, based on a sound statistical foundation.

3 Building a microfounded index

This section brings a discussion of the econometric methodology proposed by Gaglianone & Issler (2018)9 to combine survey expectations10 in order to obtain optimal forecasts in a panel-data context. Their main assumptions and propositions are listed in Appendix B in more detail. Here, we further discuss how to obtain the asymptotic distribution of their estimator of the conditional mean of inflation, using robust (heteroskedasticity and autocorrelation consistent, HAC) covariance matrix estimation methods.

3.1 Econometric Methodology

The techniques proposed by Gaglianone & Issler (2018) are appropriate for forecasting a weakly stationary and ergodic univariate process $\{y_t\}$ using a large number of forecasts that will be combined to yield an optimal forecast in the mean-squared error (MSE) sense. The forecasts for $y_t$ are taken from a survey of agents' expectations regarding the variable in question and are computed using conditioning information sets lagged $h$ periods. These $h$-step-ahead forecasts of $y_t$ formed at period $(t-h)$ are labeled $f^h_{i,t}$, for individual $i = 1, \ldots, N$, in time period $t = 1, \ldots, T$ and horizon $h = 1, \ldots, H$.

The econometric setup includes two layers: in the first layer the individual forms his optimal point forecast of $y_t$ ($f^h_{i,t}$), and in the second layer the econometrician uses the information about individual forecasts to make inference about the conditional expectation $E_{t-h}(y_t)$. They show that optimal forecasts are related to the conditional expectation $E_{t-h}(y_t)$ by an affine function:

$$f^h_{i,t} = k^h_i + \beta^h_i \cdot E_{t-h}(y_t) + \varepsilon^h_{i,t} \quad (2)$$

This is the optimal $h$-step-ahead feasible forecast of $y_t$ for each forecaster $i$ in the sample, where $\theta^h_i = [\beta^h_i \; k^h_i]'$ is a $(2 \times 1)$ vector of parameters and $\varepsilon^h_{i,t}$ accounts for finite-sample parameter uncertainty. As we do not need identification of all $\beta^h_i$ and $k^h_i$ to be able to identify $E_{t-h}(y_t)$, the only focus is on their means. Then, averaging across $i$ and assuming that $\frac{1}{N}\sum_{i=1}^N \varepsilon^h_{i,t} \xrightarrow{p} 0$ allows identifying $E_{t-h}(y_t)$ as:

$$E_{t-h}(y_t) = \operatorname{plim}_{N \to \infty} \frac{\frac{1}{N}\sum_{i=1}^N f^h_{i,t} - \frac{1}{N}\sum_{i=1}^N k^h_i}{\frac{1}{N}\sum_{i=1}^N \beta^h_i} \quad (3)$$

under the assumption that the terms of the limit above converge in probability.

9 It should be mentioned that this methodology extends the previous literature on forecasting in a panel-data context (see Palm & Zellner (1992), Davies & Lahiri (1995), Issler & Lima (2009), Lahiri et al. (2015)).

Their basic idea is to estimate the random variable $E_{t-h}(y_t)$ through GMM techniques relying mainly on $T$ asymptotics. The loss function is known only by the individual forecaster, and this is an important source of heterogeneity in the forecasts. Moreover, there is no reason why forecasters will use a symmetric loss function. Indeed, in many practical problems the use of an asymmetric loss function is called for. This, in turn, will make the optimal forecast, $f^h_{i,t}$, a biased and error-ridden measure of the conditional expectation.

To be able to proceed with the GMM framework, they need to eliminate the latent variable $E_{t-h}(y_t)$ in the GMM restrictions. Their solution is to use an equation proposed by Issler & Lima (2009) in a similar context to decompose $y_t$ as follows:

$$y_t = E_{t-h}(y_t) + \eta^h_t \quad (4)$$

where $\eta^h_t$ is a martingale-difference sequence and, by construction, $E_{t-h}(\eta^h_t) = 0$. Therefore, they can use this decomposition to express $f^h_{i,t}$ in the following way:

$$f^h_{i,t} = k^h_i + \beta^h_i \cdot y_t + \nu^h_{i,t} \quad (5)$$

where $\nu^h_{i,t} \equiv \varepsilon^h_{i,t} - \beta^h_i \cdot \eta^h_t$ is a composite error term. Under the assumption that $E(\nu^h_{i,t} \mid \mathcal{F}_{t-h}) = 0$,11 where $\mathcal{F}_{t-h}$ is the information set available at $t-h$, it follows that:

$$E[\nu^h_{i,t} \otimes z_{t-s}] = E[(f^h_{i,t} - k^h_i - \beta^h_i \cdot y_t) \otimes z_{t-s}] = 0 \quad (6)$$

for all $i = 1, \ldots, N$, $t = 1, \ldots, T$ and all $h = 1, \ldots, H$, where $z_{t-s}$ is a vector of instruments, $z_{t-s} \in \mathcal{F}_{t-s}$ with $s = h$, and $\otimes$ is the Kronecker product. But, as these moment conditions have too many parameters, and as the parameters to be estimated by GMM do not depend on $i$, they use cross-sectional averages to reduce parameter dimensionality, as long as there is convergence in probability, leading to:

$$E[(\bar{f}^h_{\cdot,t} - \bar{k}^h - \bar{\beta}^h \cdot y_t) \otimes z_{t-s}] = 0 \quad (7)$$

for all $t = 1, \ldots, T$ and $h = 1, \ldots, H$, where $\bar{f}^h_{\cdot,t} = \frac{1}{N}\sum_{i=1}^N f^h_{i,t}$, $\bar{k}^h = \frac{1}{N}\sum_{i=1}^N k^h_i$, and $\bar{\beta}^h = \frac{1}{N}\sum_{i=1}^N \beta^h_i$.
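The moment conditions in (7) are linear in the mean parameters, so for a single horizon and a just-identified instrument set they can be solved directly. The sketch below is an illustration on simulated data, not the paper's estimator: it uses a constant and one lagged variable as instruments and simple method-of-moments algebra in place of full GMM, with hypothetical function names.

```python
import numpy as np

def solve_k_beta(f_bar, y, z):
    """Solve the sample analogue of E[(f_bar - k - beta*y) z] = 0 for
    (k, beta), with z a T x 2 just-identified instrument matrix."""
    X = np.column_stack([np.ones_like(y), y])
    k, beta = np.linalg.solve(z.T @ X, z.T @ f_bar)
    return k, beta

def bias_corrected_forecast(f_bar, k, beta):
    """Feasible bias-corrected forecast (f_bar - k) / beta."""
    return (f_bar - k) / beta

# Simulated example: consensus forecasts with bias k = 0.5, beta = 1.2.
rng = np.random.default_rng(1)
T = 5000
y = np.zeros(T)
for t in range(1, T):                           # persistent target variable
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()
f_bar = 0.5 + 1.2 * y[1:] + 0.1 * rng.standard_normal(T - 1)
z = np.column_stack([np.ones(T - 1), y[:-1]])   # constant + lagged y
k_hat, b_hat = solve_k_beta(f_bar, y[1:], z)
```

With the estimated $(\hat{k}, \hat{\beta})$ in hand, `bias_corrected_forecast` recovers forecasts centered on $y_t$, which is the logic behind the bias-corrected estimators discussed next.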

Note that averaging across $i$ is a good strategy to be able to identify and estimate $E_{t-h}(y_t)$ from a survey of forecasts, since $E_{t-h}(y_t)$ does not vary across $i$. If we stack all the moment conditions implicit in equation (7) across $h$, it is even clearer that the initial problem collapsed to one where we have $H \times \dim(z_{t-s})$ restrictions and $2H$ parameters to estimate:

$$E\left[\begin{pmatrix} \bar{f}^1_{\cdot,t} - \bar{k}^1 - \bar{\beta}^1 \cdot y_t \\ \bar{f}^2_{\cdot,t} - \bar{k}^2 - \bar{\beta}^2 \cdot y_t \\ \vdots \\ \bar{f}^H_{\cdot,t} - \bar{k}^H - \bar{\beta}^H \cdot y_t \end{pmatrix} \otimes z_{t-s}\right] = 0$$

11 As we noted before, by construction, $E(\eta^h_t \mid \mathcal{F}_{t-h}) = 0$, so the assumption needed to obtain $E(\nu^h_{i,t} \mid \mathcal{F}_{t-h}) = 0$ is only that $E(\varepsilon^h_{i,t} \mid \mathcal{F}_{t-h}) = 0$.

There are two cases here regarding the limit in equation (3). The first is when we let $N \to \infty$ first and then $T \to \infty$. The second is when we let $T \to \infty$ first, with $N$ fixed or $N \to \infty$ after $T$.

In the first case, note that under suitable conditions the cross-sectional averages in (7) would converge in probability to a unique limit as $N \to \infty$, that is, $\operatorname{plim}_{N \to \infty} \frac{1}{N}\sum_{i=1}^N \beta^h_i = \bar{\beta}^h$, with $\bar{\beta}^h \neq 0$ and $|\bar{\beta}^h| < \infty$; $\operatorname{plim}_{N \to \infty} \frac{1}{N}\sum_{i=1}^N k^h_i = \bar{k}^h$, with $\bar{k}^h \neq 0$ and $|\bar{k}^h| < \infty$; and $\operatorname{plim}_{N \to \infty} \frac{1}{N}\sum_{i=1}^N f^h_{i,t} = \bar{f}^h_{\cdot,t}$, with $\bar{f}^h_{\cdot,t} \neq 0$ and $|\bar{f}^h_{\cdot,t}| < \infty$, for all $t = 1, \ldots, T$. After taking moment conditions and letting $N \to \infty$:

$$E[\nu^h_{i,t} \otimes z_{t-s}] = E[(f^h_{i,t} - k^h_i - \beta^h_i \cdot y_t) \otimes z_{t-s}] = 0 \quad (8)$$

Gaglianone & Issler (2018) show that the feasible extended bias-corrected forecast $\frac{1}{N}\sum_{i=1}^N \frac{f^h_{i,t} - \hat{k}^h}{\hat{\beta}^h}$, based on $T$-consistent GMM estimates $\hat{\theta}^h = [\hat{k}^h; \hat{\beta}^h]'$, obeys the following condition:

$$E_{t-h}(y_t) = \operatorname{plim}_{(N,T \to \infty)_{seq}} \left[\frac{1}{N}\sum_{i=1}^N \frac{f^h_{i,t} - \hat{k}^h}{\hat{\beta}^h}\right] \quad (9)$$

where $(N, T \to \infty)_{seq}$ denotes the sequential asymptotic approach proposed by Phillips & Moon (1999), when first $N \to \infty$ and then $T \to \infty$.

In the second case, when first $T \to \infty$ and then $N \to \infty$, or $N$ is fixed after $T \to \infty$, a stronger assumption is needed to validate the moment condition, that is, $E(\varepsilon^h_{i,t} \mid \mathcal{F}_{t-h}) = 0$. Based on this assumption, they conclude that the feasible extended BCAF based on consistent GMM estimates of the vector of parameters $\hat{\theta}^h = [\hat{k}^h; \hat{\beta}^h]'$ obeys:

$$E_{t-h}(y_t) = \operatorname{plim}_{(T,N \to \infty)_{seq}} \left[\frac{1}{N}\sum_{i=1}^N \frac{f^h_{i,t} - \hat{k}^h}{\hat{\beta}^h}\right] \quad (10)$$

This result implies that $E_{t-h}(y_t)$ can be consistently estimated as:

$$\hat{E}_{t-h}(y_t) = \frac{1}{N}\sum_{i=1}^N \frac{f^h_{i,t} - \hat{k}^h}{\hat{\beta}^h}, \quad (11)$$

or

$$\hat{E}_{t-h}(y_t) = \frac{1}{N}\sum_{i=1}^N \frac{f^h_{i,t} - \hat{k}^h}{\hat{\beta}^h} \quad (12)$$

for each $h = 1, \ldots, H$, depending on whether we let first $N \to \infty$ and then $T \to \infty$, or first $T \to \infty$ and then $N \to \infty$, or hold $N$ fixed after $T \to \infty$.

Therefore, we can state that:

$$E_{t-h}(y_t) = \operatorname{plim}_{(N,T \to \infty)_{seq}} \left[\frac{1}{N}\sum_{i=1}^N \frac{f^h_{i,t} - \hat{k}^h}{\hat{\beta}^h}\right] = \operatorname{plim}_{(T,N \to \infty)_{seq}} \left[\frac{1}{N}\sum_{i=1}^N \frac{f^h_{i,t} - \hat{k}^h}{\hat{\beta}^h}\right] \quad (13)$$

regardless of the order in which $N$ and $T$ diverge, using a microfounded structural model for the $h$-period-ahead expectations based on GMM estimates of the parameters under $T$ asymptotics. The estimates of $E_{t-h}(y_t)$ can be interpreted as bias-corrected versions of survey forecasts: if the mean intercept is zero and the mean slope is one, $\hat{E}_{t-h}(y_t)$ will converge to the same probability limit as the consensus forecast $\left(\frac{1}{N}\sum_{i=1}^N f^h_{i,t}\right)$.

As is clear from equation (2), individual forecasts are in general not equal to the conditional expectation E_{t−h}(y_t), essentially because of asymmetric loss. Despite that, all forecasts depend on a common factor, E_{t−h}(y_t), which we can think of as a market forecast. Under a symmetric risk function for the market as a whole, it is the optimal forecast of inflation, which justifies its label as the market expectation of inflation, or as people's beliefs in the sense of Blinder (2000). This is our main identification assumption regarding people's beliefs: we equate people's beliefs with E_{t−h}(y_t). Its feasible versions are estimated in equations (11) and (12).

Next, we discuss how to obtain the asymptotic distribution of Ê_{t−h}(y_t), applying nonparametric methods. We will use the nonparametric approach because the restrictions used in GMM estimation may have unknown autocorrelation and heteroskedasticity properties.

3.2 Heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimation

Heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimation refers to the calculation of covariance matrices that account for conditional heteroskedasticity of regression disturbances and serial correlation of cross products of instruments and regression disturbances. The heteroskedasticity and serial correlation may be of known or unknown form, and this determines whether the estimation method is parametric or nonparametric.

The relevant technical issue in this paper is that Ê_{t−h}(y_t) is estimated from an orthogonality condition that involves a composite error term, ν_{i,t}^h, which has unknown distribution, hence unknown autocorrelation and heteroskedasticity properties, so that it is impossible to use a parametric model for consistent estimation of the covariance matrix of the orthogonality restrictions.

The most widely used class of nonparametric estimators in a GMM context relies on smoothing of autocovariances. Newey & West (1987) and Andrews (1991) are the main references in this field, building on the literature on estimation of spectral densities. We offer an informal treatment of the subject in this subsection, but Appendix A brings a more complete review of the theory.

Let Γ_j denote the j-th autocovariance of a stationary mean-zero random vector h_t(θ), for which a valid orthogonality condition of the form E[h_t(θ)] = 0 applies, where h_t(θ) is the product of a vector of instruments Z_t and a vector of regression errors ε_t, i.e., h_t(θ) = Z_t · ε_t(θ). Thus, Γ_j = E[h_t(θ) h′_{t−j}(θ)]. The long-run variance of h_t(θ) is defined as the sum of all autocovariances. Since Γ_{−j} = Γ′_j, we can write the long-run variance as:

S = Γ_0 + ∑_{j=1}^{∞} (Γ_j + Γ′_j)    (14)

As White (1984a) argues, from the point of view of the estimation of asymptotic covariance matrices, there are three cases regarding this sum. The first is where Z_t · ε_t is uncorrelated, so that S = Γ_0. The second case is where Z_t · ε_t is finitely correlated, so that the sum can be truncated at j = m because covariances of order greater than m are zero, giving S = Γ_0 + ∑_{j=1}^{m} (Γ_j + Γ′_j). The third and last case, and the most interesting one, is when Z_t · ε_t is an asymptotically uncorrelated sequence. When we do not have information about its covariance structure, an essential restriction is that Γ_j → 0 as j → ∞. Therefore, we shall assume that Z_t · ε_t is a mixing sequence, which suffices for asymptotic uncorrelatedness.

The idea of smoothing autocovariances in nonparametric covariance matrix estimation is to use a series of weights that obey certain properties, so as to guarantee a positive semi-definite estimator of the long-run variance. Newey & West (1987) considered estimators defined as:

Ŝ = Γ̂_0 + ∑_{j=1}^{l} κ(j, l)(Γ̂_j + Γ̂′_j)    (15)

where κ(j, l) is a kernel weight that goes to zero as j approaches l, the bandwidth parameter. The idea is that covariances of higher order receive less weight, since they are estimated with less accuracy. Therefore, the use of a HAC estimator involves the specification of a kernel function and a bandwidth parameter. In our main application, we use the Bartlett (1950) kernel as proposed by Newey & West (1987)^12 and the data-dependent method proposed by Newey & West (1994) to choose the bandwidth parameter.
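As an illustration of (15), the Bartlett-kernel estimator can be coded directly from sample autocovariances. Below is a minimal numpy sketch under our own naming (a real application would also use the automatic bandwidth selection of Newey & West (1994)):

```python
import numpy as np

def newey_west_lrv(h, bandwidth):
    """Newey & West (1987) long-run variance of a (T, q) array of moment
    conditions h_t, with Bartlett weights kappa(j, l) = 1 - j / (l + 1)."""
    h = np.asarray(h, dtype=float)
    h = h - h.mean(axis=0)                # center the sample moments
    T = h.shape[0]
    S = h.T @ h / T                       # Gamma_0
    for j in range(1, bandwidth + 1):
        kappa = 1.0 - j / (bandwidth + 1.0)
        Gamma_j = h[j:].T @ h[:-j] / T    # j-th sample autocovariance
        S = S + kappa * (Gamma_j + Gamma_j.T)
    return S

# Illustration: with i.i.d. moments the long-run variance is just Gamma_0,
# so S_hat should be close to the identity matrix.
rng = np.random.default_rng(0)
moments = rng.standard_normal((500, 2))
S_hat = newey_west_lrv(moments, bandwidth=4)
```

The declining Bartlett weights are what guarantee positive semi-definiteness of the resulting estimate; equal weighting of all truncated autocovariances does not.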

As we have two asymptotic cases, we will begin by analyzing the case when we let first N → ∞ and then T → ∞. Later, we analyze the case when we first let T → ∞ and N is fixed or N → ∞ after T .

^12 The Bartlett (1950) kernel is defined as κ(j, l) = 1 − j/(l + 1).


3.2.1 First case: N → ∞ and then T → ∞

Let us first recall that the population moment condition of Gaglianone & Issler (2018), after cross-sectional averaging, is given by:

E[h_t(θ_0^h)] = E[ν_t^h ⊗ z_{t−s}] = E[(f_{.,t}^h − k_0^h − β_0^h · y_t) ⊗ z_{t−s}] = 0    (16)

for each h = 1, ..., H, where θ_0^h = (k_0^h, β_0^h) is the true parameter value and ⊗ is the Kronecker product.

When we let N → ∞, under suitable conditions, the cross-sectional averages in (16) converge in probability to certain limits, such that plim_{N→∞} (1/N)∑_{i=1}^{N} β_i^h = β_0^h, with β_0^h ≠ 0 and |β_0^h| < ∞; plim_{N→∞} (1/N)∑_{i=1}^{N} k_i^h = k_0^h, with k_0^h ≠ 0 and |k_0^h| < ∞; and plim_{N→∞} (1/N)∑_{i=1}^{N} f_{i,t}^h = f_{.,t}^h, with f_{.,t}^h ≠ 0 and |f_{.,t}^h| < ∞, for all t = 1, ..., T.

Assuming that standard regularity conditions for consistency^13 hold, after N → ∞ we can replace cross-sectional averages by their probability limits:

E[h_t(θ_0^h)] = E[(f_{.,t}^h − k_0^h − β_0^h · y_t) ⊗ z_{t−s}] = 0    (17)

where θ_0^h = [k_0^h; β_0^h]′ stacks the true parameter values. In this context, the GMM estimator θ̂^h = [k̂^h; β̂^h]′ is a consistent estimator, i.e., θ̂^h →p θ_0^h, for all h. The heteroskedasticity and autocorrelation consistent (HAC) estimator of the long-run variance of the moment conditions is given by:

Ŝ(h) = Γ̂_0(θ̂^h) + ∑_{j=1}^{l} κ(j, l)(Γ̂_j(θ̂^h) + Γ̂′_j(θ̂^h))    (18)

where

Γ̂_j(θ̂^h) = (1/T) ∑_{t=j+1}^{T} h_t(θ̂^h) h′_{t−j}(θ̂^h),    (19)

κ(j, l) is the kernel weight used to smooth the sample autocovariance function, and l is the bandwidth parameter. We employ the Bartlett kernel κ(j, l) = 1 − j/(l + 1), so that the HAC covariance estimator is the same as the one proposed by Newey & West (1987).

Finding the asymptotic distribution of Ê_{t−h}(y_t) entails the following steps. First, as proved by Hansen (1982), the efficient GMM estimator θ̂^h is asymptotically normal:

√T (θ̂^h − θ_0^h) →d N( 0, (G^h′ S(h)^{−1} G^h)^{−1} )    (20)

where the optimal weighting matrix is S(h)^{−1}, with S(h) = E[h_t(θ^h) h′_t(θ^h)] and G^h = plim_{T→∞} ∂h′_t(θ^h)/∂θ^h |_{θ^h=θ̂^h}. Notice that S(h) can be consistently estimated as shown in equations (18) and (19).

^13 For the general class of extremum estimators, which includes the GMM estimator, if: (i) Q_0(θ) is uniquely maximized (or minimized) at θ_0; (ii) Θ is compact; (iii) Q_0(θ) is continuous; and (iv) Q̂_n(θ) converges uniformly in probability to Q_0(θ), then θ̂ →p θ_0. Here, uniform convergence and continuity are the hypotheses often referred to as "the standard regularity conditions for consistency". For more details, see Newey & McFadden (1994), pp. 2120-2140.

The next step is to note that Ê_{t−h}(y_t) is a continuous function of θ̂^h, f : ℝ² → ℝ, such that f(θ̂^h) = f(k̂^h, β̂^h) = (1/N)∑_{i=1}^{N} (f_{i,t}^h − k̂^h)/β̂^h. Moreover, f(θ̂^h) has continuous first derivatives, so that the delta method can be applied to find the asymptotic variance of Ê_{t−h}(y_t):

√T (f(θ̂^h) − f(θ_0^h)) →d N( 0, D(f(θ^h))′ (G^h′ S(h)^{−1} G^h)^{−1} D(f(θ^h)) )    (21)

where D(f(θ^h)) is the Jacobian of f(θ^h): D(f(θ^h)) = [ −1/β^h ; −((1/N)∑_{i=1}^{N} f_{i,t}^h − k^h)/(β^h)² ]′.

Therefore, a consistent estimator of the long-run variance of Ê_{t−h}(y_t) is given by:

V̂(h) = [ −1/β̂^h ; (1/N)∑_{i=1}^{N} (−f_{i,t}^h + k̂^h)/(β̂^h)² ]′ ( Ĝ^h′ Ŝ(h)^{−1} Ĝ^h )^{−1} [ −1/β̂^h ; (1/N)∑_{i=1}^{N} (−f_{i,t}^h + k̂^h)/(β̂^h)² ]    (22)

where the estimate of S(h) is given in equation (18), and the estimate of G^h is given by ∂h′_t(θ^h)/∂θ^h |_{θ^h=θ̂^h}. Then,

√T (Ê_{t−h}(y_t) − E_{t−h}(y_t)) ∼Asy N(0, V(h))    (23)

or:

Ê_{t−h}(y_t) ∼Asy N( E_{t−h}(y_t), V(h)/T )    (24)

where V(h) can be consistently estimated by V̂(h) as in equation (22).

3.2.2 Second case: T → ∞ and N is fixed or N → ∞ after T → ∞

The population moment condition is given by:

E[h_t(θ_0^h)] = E[ν_t^h ⊗ z_{t−s}] = E[(f_{.,t}^h − k_0^h − β_0^h · y_t) ⊗ z_{t−s}] = 0    (25)

for each h = 1, ..., H, where θ_0^h = (k_0^h, β_0^h) is the true parameter value.

As before, we assume there exists a consistent estimator of the true parameter, such that θ̂^h →p θ_0^h for all h, as proven by Hansen (1982), and that z_{t−s} is a vector of instruments with dim(z_{t−s}) ≥ 2.

Following the same steps as before, a consistent estimate of the long-run variance of Ê_{t−h}(y_t) is:

V̂(h) = [ −1/β̂^h ; (1/N)∑_{i=1}^{N} (−f_{i,t}^h + k̂^h)/(β̂^h)² ]′ ( Ĝ^h′ Ŝ(h)^{−1} Ĝ^h )^{−1} [ −1/β̂^h ; (1/N)∑_{i=1}^{N} (−f_{i,t}^h + k̂^h)/(β̂^h)² ]    (26)

where the estimate of G^h is given by ∂h′_t(θ^h)/∂θ^h |_{θ^h=θ̂^h} and the estimate of S(h) is:

Ŝ(h) = Γ̂_0(θ̂^h) + ∑_{j=1}^{l} κ(j, l)(Γ̂_j(θ̂^h) + Γ̂′_j(θ̂^h))    (27)

where

Γ̂_j(θ̂^h) = (1/T) ∑_{t=j+1}^{T} h_t(θ̂^h) h′_{t−j}(θ̂^h).    (28)

Then, we obtain:

Ê_{t−h}(y_t) ∼Asy N( E_{t−h}(y_t), V(h)/T )    (29)

where V(h) can be consistently estimated by V̂(h) from equation (26).

Based on this limiting distribution, we can construct asymptotic confidence intervals for Ê_{t−h}(y_t) for each h, which will be of great usefulness in the construction of our measure of credibility.

Once we have characterized the asymptotic distributions as in equations (24) and (29), we can construct 95% HAC-robust confidence intervals to be compared with the explicit or tacit targets for inflation at each point in time. This determines whether or not the central bank was credible in that period. As we all know, there is nothing magic about 95% confidence bands, and perhaps a broader approach could be employed, considering different levels of credibility risk. The use of fan charts is an interesting approach, since it allows the assessment of the uncertainties surrounding point forecasts. This technique gained momentum following the publication of fan charts by the Bank of England in 1996. Because we are using asymptotic results, we will not allow asymmetries, as is commonplace when fan charts are employed. However, the technique of expressing risk in its different layers is an interesting extension of the current setup.

3.3 A credibility test and a new credibility index

Computing the asymptotic distribution of Ê_{t−h}(y_t) for each h, as discussed in the previous subsection, we can use this distribution to construct a new credibility index. The basic idea is that a centered version of Ê_{t−h}(y_t) is asymptotically normally distributed with zero mean and covariance matrix V(h), which can be consistently estimated by V̂(h) from equation (22) or (26). We will argue here that the area under the probability density function of the normal distribution between the inflation target and Ê_{t−h}(y_t) can be used to construct an index of central bank credibility. As before, this index will be based on a sound statistical basis.

In our application, we could use either of the two cases discussed before, letting N → ∞ first or letting T → ∞ first. But, as we will argue in the next section for the Brazilian case, most surveys are better approximated by the case where first T → ∞ and then N → ∞ or N is fixed, as there are usually thousands of daily time-series observations (more than 3,000) and a limited number of participants in the cross-sectional dimension (about 120).

Based on Ê_{t−h}(y_t) ∼Asy N(E_{t−h}(y_t), V(h)/T), we can establish the cumulative distribution function (CDF) of Ê_{t−h}(y_t), which we label F(x). Here, we use F(x) to construct a credibility index (CI_IS) that has the following properties: (i) if Ê_{t−h}(y_t) = π*, where π* is the central bank inflation target midpoint, CI_IS attains its maximum value of 1, which is the perfect-credibility case; (ii) CI_IS decreases as the distance between π* and Ê_{t−h}(y_t) increases, asymptotically going to zero.

Figure 1 describes the idea behind the index, which is measured as the area under the density of F(x) between Ê_{t−h}(y_t) and π*, denoted in the graph by the blue area.

Figure 1: Normal Distribution

The new credibility index proposed here obeys:

CI_IS = 1 − |F(Ê_{t−h}(y_t)) − F(π*)| / (1/2),  for −∞ < π* < ∞.

The proposed index has some advantages over the existing indices in the literature. First, it relies on a purely statistical criterion and does not depend on ad hoc bounds, making its applicability much broader. Second, it is based on a structural model for survey expectations that yields a consistent estimate of the conditional expectation. Additionally, it encompasses the case when the consensus forecast is free of bias.
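The index can be computed directly from the normal CDF. Below is a minimal Python sketch (names are ours), using the fact that F(Ê_{t−h}(y_t)) = 1/2 because Ê_{t−h}(y_t) is the mean of its own asymptotic distribution:

```python
import math

def credibility_index(e_hat, pi_star, v_hat, T):
    """CI_IS = 1 - |F(e_hat) - F(pi_star)| / (1/2), where F is the CDF
    of N(e_hat, v_hat / T). Equals 1 when expectations sit exactly on
    the target midpoint and decays toward 0 as they drift away."""
    sd = math.sqrt(v_hat / T)
    # Normal CDF evaluated at the target, via the error function
    F_pi = 0.5 * (1.0 + math.erf((pi_star - e_hat) / (sd * math.sqrt(2.0))))
    return 1.0 - abs(0.5 - F_pi) / 0.5

# Hypothetical numbers for illustration only
on_target = credibility_index(e_hat=4.5, pi_star=4.5, v_hat=1.0, T=180)  # -> 1.0
far_away = credibility_index(e_hat=6.5, pi_star=4.5, v_hat=1.0, T=180)   # close to 0
```

Dividing by 1/2 normalizes the index to [0, 1], since |F(Ê) − F(π*)| can be at most 1/2.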

Based on the distribution of Ê_{t−h}(y_t) and on the inflation target π*_t, the definition of credibility is the following. A central bank is credible if:

π*_t ∈ [ Ê_{t−h}(y_t) − 1.96 V̂(h)^{1/2}, Ê_{t−h}(y_t) + 1.96 V̂(h)^{1/2} ]    (30)

In words, a central bank is credible if the confidence interval around Ê_{t−h}(y_t) contains the inflation target. This definition is based on a statistical criterion, avoiding the use of ad hoc bounds.
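In code, the binary test in (30) is a one-liner. A sketch under our own naming, where se stands for the HAC standard error of Ê_{t−h}(y_t):

```python
def is_credible(e_hat, se, pi_star, z=1.96):
    """Binary credibility test: the central bank is credible at date t if
    the target lies inside the 95% interval e_hat +/- z * se, where se is
    the (HAC) standard error of the estimated conditional expectation."""
    return e_hat - z * se <= pi_star <= e_hat + z * se

# Hypothetical numbers: 1.96 * 0.15 = 0.294, so a 0.2 p.p. gap passes
# the test while a 2.0 p.p. gap does not.
close_gap = is_credible(e_hat=4.7, se=0.15, pi_star=4.5)   # True
wide_gap = is_credible(e_hat=6.5, se=0.15, pi_star=4.5)    # False
```

Note the trade-off: a noisier estimate of Ê_{t−h}(y_t) (larger se) makes the test easier to pass, which is one reason the continuous index above is a useful complement to the binary test.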

4 Empirical application

4.1 Data

We use the data available in the Focus Survey of forecasts of the Brazilian Central Bank (BCB). This is a very rich database, which includes monthly and annual forecasts from roughly 250 institutions for every working day and for many important economic variables, such as different inflation indicators, exchange rates, GDP, industrial production, balance of payments series, etc. The survey collects data on professional forecasters, including banks, investment banks, other financial institutions, non-financial companies, consulting firms, academic institutions, etc.

The survey started in May 1999, initially collecting forecasts of around 50 institutions mainly on price indices and GDP growth. It quickly evolved to include around 250 institutions and survey data on a much broader basis. About 120 institutions are active in the system today. In November 2001, the online survey was created, and in March 2010 it was improved, resulting in the present version of the Market Expectations System^14.

Our focus is on the inflation rate measured by the Brazilian Broad Consumer Price Index (IPCA), because it is the official inflation target of the Brazilian Central Bank. The Focus Survey contains short-term monthly forecasts, 12-month-ahead forecasts, and also year-end forecasts from 2 to 5 years ahead. These year-end forecasts are fixed-point forecasts, where the forecast horizon changes monthly as time evolves. In our main application, we will focus on 12-month ahead inflation expectations, since those are fixed-horizon forecasts that can be collected at the monthly frequency. Also, this horizon is large enough for inflation shocks to dissipate, since we do not want to associate the lack of credibility with the presence of mean-reverting shocks to inflation.

Our sample covers monthly inflation forecasts collected from November 2001 until April 2017. The survey is available since 1999, but data regarding expectations 12 months ahead is available only since November 2001. In each month t, t = 1, . . . , T , survey respondent i, i = 1, . . . , N , may inform her/his forecast for IPCA inflation rates for five calendar years, including the current year. In our sample, T =208 months and N is, on average, 60.

In recent years, the annual inflation rate in Brazil as measured by the IPCA has increased considerably, reaching over 10% per year. Inflation expectations for all horizons have also increased, as Figure 2 shows. The data in Figure 2 are slightly different across horizons. There are fixed-horizon forecasts up to the 12-month horizon, which can be extracted directly from the database. For horizons from 24 months up to 48 months, no fixed-horizon forecasts are available. However, year-end forecasts are available and can be combined to generate synthetic fixed-horizon forecasts using the methodology proposed by Dovern et al. (2012)^15.

^14 For more information on the survey, see Carvalho & Minella (2012) and Marques (2013).

Figure 2: Focus Fixed Horizon Inflation Forecasts

Figure 2 shows the target and the resulting series for monthly inflation forecasts, with a fixed horizon of 12 months and synthetic horizons of 24, 36 and 48 months. Table 1 shows descriptive statistics for the consensus over different horizons. Data for the 12-month horizon come directly from the survey; for the longer horizons we employ the synthetic interpolation procedure based on year-end expectations. Since the inflation target was kept stable at 4.5% for most of our sample, the average of 12-month-ahead expectations above 5.7% is worrisome, and could be a sign of lack of credibility of the Brazilian Central Bank for the sample as a whole or for parts of it.

Figure 2 plots the same data from 2000 through 2017, together with the target at every

^15 This requires the interpolation of inflation expectations data, which converts fixed-point inflation expectations to fixed-horizon expectations according to the following formula:

f_t^h = 100 [ (1 + f_t^{y+j}/100)^{(12−(t−1))/12} · (1 + f_t^{y+j+1}/100)^{(t−1)/12} − 1 ]    (31)

where h is the fixed forecast horizon measured in years, h = 1, 2, 3, 4; t is the month when the forecast is calculated, t = 1, ..., T; y is the calendar year of the forecast collected by the survey, y = 2001, ..., 2022 in our sample; and j = 0, 1, 2, ....
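The fixed-horizon interpolation in footnote 15 can be sketched in a few lines. A Python sketch under our own naming, with month denoting the month t = 1, ..., 12 within the calendar year:

```python
def synthetic_fixed_horizon(f_year1, f_year2, month):
    """Combine two year-end inflation forecasts (in %) into a synthetic
    12-month-ahead rate, weighting each year by the fraction of the
    forecast horizon it covers, as in equation (31)."""
    w1 = (12 - (month - 1)) / 12.0    # weight on the current calendar year
    w2 = (month - 1) / 12.0           # weight on the following year
    return 100.0 * ((1.0 + f_year1 / 100.0) ** w1
                    * (1.0 + f_year2 / 100.0) ** w2 - 1.0)

# In January (t = 1) only the current year-end forecast matters
jan = synthetic_fixed_horizon(5.0, 4.0, month=1)   # -> 5.0
```

The weights are geometric (compounded), not arithmetic, so the synthetic rate is a proper 12-month compound inflation rate rather than a simple average of the two year-end forecasts.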

Table 1: Focus Inflation Forecasts: Descriptive Statistics (in %)*

Forecast Horizons    Average    Std. deviation    Max      Min
12-months ahead      5.74       1.50              11.92    3.66
24-months ahead      5.04       0.78              8.08     3.93
36-months ahead      4.74       0.53              6.46     3.64
48-months ahead      4.59       0.45              5.61     3.48

Notes: *All sample statistics are calculated between November 2001 and April 2017.

point in time. It shows three distinct periods when we compare the consensus expectations with the inflation target. The first one is from 2002 until end-2003, characterized by a sudden and strong deterioration of inflation expectations due to the uncertainty around the 2002 election. After the election, uncertainty was reduced and expectations fell. The second period is from the end of 2003 until the end of 2010, characterized by the stability of long-term inflation expectations around the target midpoint. The third period starts at the beginning of 2011, when we observe a slow and sustained rise in all measures of inflation expectations. Their synchronized movement shows that something more than a price shock happened in that period. Towards the end of the sample, after President Rousseff's impeachment, there was a change in economic policy and in the main team of policymakers, including a change at the Brazilian Central Bank. Soon after, expectations fell quickly towards the target midpoint.

In our main application, the inflation expectation measure is the 12-month-ahead consensus forecast, which we denote by f_{.,t}^h, for t = 1, ..., T and h = 12, where h is measured in months and the agents' forecasts are cross-sectionally averaged in the survey to form the consensus forecast, f_{.,t}^h = (1/N)∑_{i=1}^{N} f_{i,t}^h. The Focus survey approximates better the case where T → ∞ while N is small, and therefore we use the results discussed in Section 3.2.

Figure 3 compares the monthly consensus forecasts for the inflation rate with the actual inflation rate at the 12-month-ahead horizon. In this figure, the measure of consensus forecasts, represented by the black line, was shifted forward 12 months to match the inflation it is supposed to track. In each month, we can compare IPCA inflation over the last 12 months with its respective forecast made 12 months prior.

As is clear from Figure 3, the consensus forecast underestimates actual inflation quite often: this is true in 68% of the sample.


Figure 3: Forecasts vs Actual Inflation Rate for the 12-months ahead horizon

4.2 Results

Table 2 presents the results of the generalized method of moments (GMM)^16 estimation of the parameters k^h and β^h, based on information about f_{.,t}^h and y_t. We use as instruments up to two lags of the consensus forecasts (f_{.,t}^h), up to two lags of the output gap^17, and up to four lags of the 12-month variation of the Commodity Price Index (IC-Br)^18,19.
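The moment condition behind Table 2 is linear in (k^h, β^h), so a one-step GMM estimator with an identity weight matrix already illustrates the mechanics; the paper itself uses the iterated GMM of Hansen et al. (1996) with a Newey-West weight matrix and lagged instruments. A toy sketch with contemporaneous instruments and our own naming:

```python
import numpy as np

def gmm_linear(f_bar, y, Z):
    """One-step GMM (identity weight) for E[(f_bar_t - k - beta*y_t) z_t] = 0.
    With X = [1, y], minimizing (Z'f/T - (Z'X/T) theta)' (.) gives
    theta = (M'M)^{-1} M'm, where M = Z'X/T and m = Z'f/T."""
    T = len(y)
    X = np.column_stack([np.ones(T), np.asarray(y, dtype=float)])
    m = Z.T @ np.asarray(f_bar, dtype=float) / T
    M = Z.T @ X / T
    theta = np.linalg.solve(M.T @ M, M.T @ m)
    return float(theta[0]), float(theta[1])

# Noiseless toy data with k = 2 and beta = 0.6, over-identified with
# three instruments (constant, y, y^2) for two parameters.
rng = np.random.default_rng(1)
y = rng.normal(4.5, 1.0, size=300)
f_bar = 2.0 + 0.6 * y
Z = np.column_stack([np.ones(300), y, y**2])
k_hat, beta_hat = gmm_linear(f_bar, y, Z)
```

With noiseless data the sample moments are exactly satisfied, so the estimator recovers (k, β) regardless of the weight matrix; with real survey data the choice of weight matrix and instruments matters for efficiency.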

Our sample starts in 2001 and out-of-sample forecasts are constructed for the period between 2007 and 2017^20, re-estimating the coefficients at the end of each year and using them to construct true out-of-sample forecasts of inflation. In the first column of Table 2, the sample denotes the period used in estimation, which is used to perform out-of-sample forecasts up to 12 months ahead. For example, the sample from 2001 to 2006 generates out-of-sample forecasts for the year 2007; the sample from 2001 to 2007 generates out-of-sample forecasts for 2008; and so on, in a growing-window setup.

^16 The iterative procedure of Hansen et al. (1996) is employed in the GMM estimation, and the initial weight matrix is the identity. The HAC covariance matrix is obtained following the steps discussed previously, using the Bartlett kernel as proposed by Newey & West (1987); the bandwidth is selected according to the automatic procedure proposed by Newey & West (1994).
^17 The output gap is calculated using the HP filter.
^18 The IC-Br is published monthly by the Brazilian Central Bank and its weighting structure is designed to measure the impact of commodity prices on Brazilian consumer inflation. For more details, see "Transfer of Commodity Prices to the IPCA and the Commodities Index-Brazil", Brazilian Central Bank Inflation Report, December 2010, at http://www.bcb.gov.br/htms/relinf/ing/2010/12/ri201012b5i.pdf.
^19 We estimated the coefficients with alternative sets of instruments and obtained similar results.
^20 Our sample ends in April 2017.


Table 2: GMM estimation results for the 12-months ahead forecast horizon

Sample*      k̂12 (s.e.)    β̂12 (s.e.)    k̂12 p-value   β̂12 p-value   OIR test p-value   Joint significance test p-value
2001-2006    2.16 (0.47)    0.59 (0.07)    0.00           0.00           0.71               0.00
2001-2007    2.28 (0.62)    0.59 (0.10)    0.00           0.00           0.73               0.00
2001-2008    2.11 (0.72)    0.63 (0.13)    0.00           0.00           0.60               0.00
2001-2009    2.40 (0.64)    0.51 (0.12)    0.00           0.00           0.57               0.00
2001-2010    2.52 (0.61)    0.54 (0.11)    0.00           0.00           0.46               0.00
2001-2011    1.89 (0.82)    0.63 (0.15)    0.02           0.00           0.53               0.00
2001-2012    2.26 (0.63)    0.54 (0.11)    0.00           0.00           0.61               0.00
2001-2013    2.03 (0.64)    0.59 (0.11)    0.00           0.00           0.60               0.00
2001-2014    2.52 (0.47)    0.47 (0.08)    0.00           0.00           0.74               0.00
2001-2015    2.35 (0.54)    0.50 (0.09)    0.00           0.00           0.67               0.00
2001-2016    2.35 (0.54)    0.50 (0.09)    0.00           0.00           0.67               0.00

Notes: Standard errors in parentheses. The set of instruments is up to two lags of the consensus forecasts, up to two lags of the output gap, and up to four lags of the 12-month variation of the Commodity Price Index (IC-Br). *The sample used in estimation and in forecasting up to 12 months ahead.

In Table 2, Hansen’s over-identifying restriction (OIR)21 test is employed in order to check the validity of instruments and GMM estimates. For all sets of estimates we are far from rejecting the null, which is evidence that the instrument set is appropriate and orthogonal to regression errors. In the last column, we test for joint significance of the coefficients [H0:

k12 = 0 and β12 = 1]. If the null hypothesis is not rejected, the consensus forecast is equal

to the Extended BCAF and there is no evidence of any bias in it. But, we reject the null hypothesis with great confidence in all cases, a strong evidence of biases for the consensus forecast, requiring their respective extraction to construct a consistent estimate for Et−h(yt).

Our estimates in Table 2 show that both mean intercept and mean slope parameters are statistically significant for all samples. The average intercept value is 2.26, ranging from 1.89 to 2.52 and the average slope value is 0.56, ranging from 0.47 to 0.63.

Figure 4 compares consensus forecasts with the Extended BCAF estimated, for the 12-months ahead horizon. The grey line represents the consensus forecasts for the 12-12-months 21Hansen’s J statistic is used to determine the validity of the over-identifying restrictions in a GMM model. For

(25)

ahead horizon, while the black line represents the estimated Extended BCAF for the same horizon. Notice that the behavior of the two is quite distinct, with our estimate of Et−h(yt)

below the consensus on early years and above it on later years of the sample.

Figure 4: Consensuses Forecasts and Extended BCAF: 12-months-ahead horizon

The next step is to estimate robust (HAC) covariance matrices and standard deviations based on the asymptotic distribution of the estimated Extended BCAF, and then construct the respective 95% confidence interval. Figure 5 includes Ê_{t−h}(y_t), its respective 95% robust confidence interval, and the inflation target from 2007:1 through 2017:4.

In Figure 5, the 95% confidence interval contains the target slightly above 65% of the time, and the Extended BCAF increases considerably between mid-2013 and mid-2016, such that the confidence interval in this period does not contain the target. As noted before, these were times when the increase in inflation expectations was widespread, indicating a de-anchoring of inflation expectations not consistent with the target of a 4.5% annual inflation rate. Based on a statistical criterion, there is a significant distance between inflation expectations and the inflation target, which indicates that central-bank policy was not credible.

From mid-2016 onward, there was a very steep fall in inflation expectations. This happened at the same time as a change in the economic team of the Brazilian government, including the central bank administration and its board of directors. Perhaps these changes helped to consolidate the fall in inflation expectations, which were re-anchored according to our


Figure 5: BCAF and Confidence Intervals: 12-months-ahead

criterion.

Figure 6 shows the same result as Figure 5. However, comparisons between beliefs and targets are only made at the end of each year, since, technically, the Brazilian Inflation Targeting Program has only year-end targets. Results are virtually unchanged: there is no credibility in 2007 and from 2013 to 2016, with the BCB recovering credibility in 2017.

Figure 7 presents the credibility index proposed above. The shaded regions indicate when the target is outside the 95% confidence interval, that is, when the central bank is not credible. From 2008 on, there is a decrease in the credibility index of the central bank, although with some volatility. This decrease is finally consolidated in 2013, with an almost complete loss of credibility according to our index. Agents' forecasts were significantly above target, which shows a disbelief in the target and/or in the capacity or the willingness of the central bank to bring current inflation toward the target. From August 2013 to July 2016, the credibility index fell close to zero and stayed there for three years, until August 2016, when there was a very steep rise in the index: it rose from 0.06 in July to 0.53 in September, reaching 1.0 in December. In the first months of 2017, the index stayed between 0.95 and 1.00, with expectations falling below target. This is evidence of a strong recovery of central-bank credibility and a strong re-anchoring of inflation expectations.

Figure 8 compares our measure of central bank credibility with other indices proposed in the literature. Our index is labelled the "IS Index". The other indices are labelled the "CK Index" (Cecchetti & Krause (2002)), the "DGMS Index" (De Mendonça et al. (2009)), the "DM Index" (de Mendonça & Souza (2007)), and the "LL Index" (Levieuge et al. (2016)).


Figure 6: BCAF, Confidence Intervals, and Year-end Targets

4.3 Comparing our credibility index with other indices in the literature

First, note that our index shows results in line with those of the DM and LL indices. The DGMS index attains its maximum value whenever inflation expectations are inside the inflation target band, and is therefore equal to one for almost the entire sample. Second, the behavior of our index is very different from all the other indices for 2007 and the beginning of 2008: ours points to a loss of credibility, since inflation expectations fell too much during this period (the Extended BCAF was below 2% for some months), whereas for all other indices credibility is above 0.4 and in some cases close to 1.0. Third, our index shows a loss of credibility during the global crisis of 2008-9, whereas the other indices do not. Again, this is due to inflation expectations being very low vis-à-vis the target. After the crisis, our index shows a prolonged period in which the central bank keeps its credibility, which is confirmed by all other indices. Finally, all three indices (IS, DM and LL) show credibility falling close to zero between 2013 and 2016. However, the CK and DGMS indices remain above 0.8 during the same period. In our view, this is not consistent with the severe loss of credibility shown by the rise in expectations depicted in Figure 2. Despite that, all indices show strong evidence of a rise in central bank credibility by the end of the sample, being either equal or very close to one.


Figure 7: Credibility Index

5 Conclusion

Although central bank credibility is elusive, Blinder (2000) generated a consensus when he wrote that "A central bank is credible if people believe it will do what it says". This paper proposes a measure of credibility based upon this definition. To implement it, one needs a measure of people's beliefs, a measure of what central banks say they do, and a way to compare whether the two are the same.

We approach this problem in a novel way. To keep it tractable, we focus on inflation expectations, since inflation is arguably the main variable central banks care about. We extract people's beliefs about inflation by modelling inflation expectations data using the panel-data approach of Gaglianone & Issler (2018), which shows that optimal individual forecasts are an affine function of one factor alone: the conditional expectation of inflation. This allows the identification and estimation of people's beliefs, as well as the construction of robust confidence intervals around them. The latter is an additional contribution of this paper. Based on this approach, we say that a central bank is credible if its explicit (or tacit) inflation target lies within the 95% robust asymptotic confidence interval of people's beliefs. We also propose a credibility index based upon these concepts.

Our methodology seems sensible from an economic and econometric (statistical) point of view. Given the consensus around Blinder’s definition of credibility, it is economically sensible to equate people’s beliefs to what central banks say they do. Based on Gaglianone and Issler’s


Figure 8: Comparison between Credibility indices proposed in the literature

methodology, it is straightforward to extract market expectations from a panel of individual expectations, where market expectations are the common latent factor in the latter. Since a large number of important countries have adopted Inflation Targeting Programs, one can easily obtain data on what central banks say they do by looking at the actual inflation targets of these countries. Even if a given country has not adopted such a program, there usually exist tacit targets at any point in time that could serve to that end.

From an econometric point of view, the methodology is relatively simple: we have a consistent estimate of the conditional expectation of inflation, which is a random variable. Comparing it to explicit targets only requires constructing robust asymptotic confidence intervals for these estimates. If targets lie within the confidence bands, we say that targets are credible, i.e., the central bank is credible. It is important to note that market expectations are extracted from a survey of expectations of professional forecasters, most of which are active financial stakeholders. This further validates the methodology proposed here.

The approach discussed above is applied to the issue of the credibility of the Brazilian Central Bank (BCB) using the Focus Survey of professional forecasters. This is a world-class database, fed by institutions that include commercial banks, investment banks, asset-management firms, consulting firms, large corporations, and academics, inter alia. About 250 participants can provide daily forecasts for a large number of economic variables, e.g., inflation measured by different price indices, interest and exchange rates, GDP, industrial production, etc., and for different forecast horizons, e.g., current month, next month, 12 months ahead, and 2-, 3-, 4-, and 5-year-end-ahead horizons. At any point in time, about 120 participants are actively using the system that manages the database.

Based on the proposed methodology, we estimated, month by month, the conditional expectation of inflation (12 months ahead) and its robust asymptotic variance, constructing 95% HAC-robust confidence intervals for inflation expectations from January 2007 until April 2017. These were then compared to the target of the Brazilian Inflation Targeting Regime. Results show that the BCB was credible 65% of the time; the exceptions are a few months at the beginning of 2007 and the interval from mid-2013 through mid-2016. We also constructed a credibility index for the 2007-17 sample and compared it with alternative measures of credibility that are popular in the literature.
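The month-by-month credibility verdict reduces to an interval check. The sketch below uses stand-in numbers (the 4.5% target level is illustrative, and the beliefs and standard errors are simulated, not the paper's estimates) to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs: monthly point estimates of 12-month-ahead expected
# inflation, their HAC standard errors, and the inflation target.
months = 124                                    # Jan/2007 - Apr/2017
target = 4.5
belief = target + rng.normal(0, 0.8, months)    # stand-in belief estimates
se = np.full(months, 0.35)                      # stand-in HAC s.e.

# 95% robust confidence band around the estimated beliefs.
lo = belief - 1.96 * se
hi = belief + 1.96 * se
credible = (lo <= target) & (target <= hi)      # month-by-month verdict

share_credible = credible.mean()                # fraction of credible months
print(f"credible in {share_credible:.0%} of months")
```

A simple headline statistic like `share_credible` corresponds to the "credible X% of the time" summary; a full credibility index could instead weight months by how far the target sits outside the band.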

Our results agree with the conventional wisdom in Brazil. In describing our empirical findings, we related them to the existing literature on credibility.


References

Alesina, A. (1988). Macroeconomics and politics. NBER Macroeconomics Annual, 3, 13–52.

Alesina, A. & Summers, L. H. (1993). Central bank independence and macroeconomic performance: some comparative evidence. Journal of Money, Credit and Banking, 25 (2), 151–162.

Andrews, D. W. (1991). Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica: Journal of the Econometric Society, 817–858.

Andrews, D. W. & Monahan, J. C. (1992). An improved heteroskedasticity and autocorrelation consistent covariance matrix estimator. Econometrica: Journal of the Econometric Society, 953–966.

Bartlett, M. S. (1950). Periodogram analysis and continuous spectra. Biometrika, 37 (1/2), 1–16.

Bates, J. M. & Granger, C. W. J. (1969). The combination of forecasts. Operational Research Quarterly, 20, 309–325.

Bernanke, B. S. & Mishkin, F. S. (1997). Inflation targeting: a new framework for monetary policy? Technical report, National Bureau of Economic Research.

Blinder, A. S. (2000). Central bank credibility: Why do we care? How do we build it? American Economic Review, 90, 1421–1431.

Bomfim, A. N. & Rudebusch, G. D. (2000). Opportunistic and deliberate disinflation under imperfect credibility. Journal of Money, Credit and Banking, 32 (4), 707–721.

Bordo, M. D. & Siklos, P. L. (2015). Central bank credibility before and after the crisis. Open Economies Review, 1–27.

Canzoneri, M. B. (1985). Monetary policy games and the role of private information. The American Economic Review, 75 (5), 1056–1070.

Capistr´an, C. & Ramos-Francia, M. (2010). Does inflation targeting affect the dispersion of inflation expectations? Journal of Money, Credit and Banking, 42 (1), 113–134.

Carriere-Swallow, Y., Gruss, B., Magud, N. E., & Valencia, F. (2016). Monetary policy credibility and exchange rate pass-through. Technical report, International Monetary Fund.

Carvalho, F. A. & Minella, A. (2012). Survey forecasts in Brazil: a prismatic assessment of epidemiology, performance, and determinants. Journal of International Money and Finance, 31 (6), 1371–1391.

Cecchetti, S. G. & Krause, S. (2002). Central bank structure, policy efficiency, and macroeconomic performance: exploring empirical relationships. Review, Federal Reserve Bank of St. Louis, 84 (4), 47–60.

Christoffersen, P. F. & Diebold, F. X. (1997). Optimal prediction under asymmetric loss. Econometric Theory, 13 (6), 808–817.

Cukierman, A. (2008). Central bank independence and monetary policymaking institutions—past, present and future. European Journal of Political Economy, 24 (4), 722–736.

Cukierman, A. & Meltzer, A. H. (1986). A theory of ambiguity, credibility, and inflation under discretion and asymmetric information. Econometrica, 54 (5), 1099–1128.

Cumby, R. E., Huizinga, J., & Obstfeld, M. (1983). Two-step two-stage least squares estimation in models with rational expectations. Journal of Econometrics, 21 (3), 333–355.

