Exponential Smoothing Methods

4.2 FIRST-ORDER EXPONENTIAL SMOOTHING

FIGURE 4.4 The Dow Jones Index from June 1999 to June 2006 with moving averages of span 5 and 10.

This is called a simple or first-order exponential smoother. There is an extensive literature on exponential smoothing. For example, see the books by Brown [1963], Abraham and Ledolter [1983], and Montgomery et al. [1990], and the papers by Brown and Meyer [1961], Chatfield and Yar [1988], Cox [1961], Gardner [1985], Gardner and Dannenbring [1980], and Ledolter and Abraham [1984].

An alternative expression for simple exponential smoothing, in recursive form, is given by

$$
\begin{aligned}
\tilde{y}_T &= (1-\theta)\,y_T + (1-\theta)\left(\theta y_{T-1} + \theta^2 y_{T-2} + \cdots + \theta^{T-1} y_1\right) \\
&= (1-\theta)\,y_T + \theta(1-\theta)\left(y_{T-1} + \theta y_{T-2} + \cdots + \theta^{T-2} y_1\right) \\
&= (1-\theta)\,y_T + \theta\,\tilde{y}_{T-1}
\end{aligned}
\tag{4.6}
$$

The recursive form in Eq. (4.6) shows that first-order exponential smoothing can also be seen as a linear combination of the current observation and the smoothed observation at the previous time unit. As the latter contains the data from all previous observations, the smoothed observation at time $T$ is in fact a linear combination of the current observation and the discounted sum of all previous observations. The simple exponential smoother is often represented in a different form by setting $\lambda = 1 - \theta$:

$$
\tilde{y}_T = \lambda y_T + (1-\lambda)\,\tilde{y}_{T-1}
\tag{4.7}
$$

In this representation the discount factor $\lambda$ represents the weight put on the last observation, and $(1-\lambda)$ represents the weight put on the smoothed value of the previous observations.
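The recursion in Eq. (4.7) is straightforward to implement. Below is a minimal sketch in Python; the function name and the sample series are illustrative, not from the text:

```python
def simple_exp_smooth(y, lam, y0=None):
    """First-order exponential smoothing via the Eq. (4.7) recursion:
    y_tilde[t] = lam * y[t] + (1 - lam) * y_tilde[t-1]."""
    if y0 is None:
        y0 = y[0]  # one common choice of starting value (see Section 4.2.1)
    smoothed, prev = [], y0
    for obs in y:
        prev = lam * obs + (1 - lam) * prev
        smoothed.append(prev)
    return smoothed

series = [10, 12, 11, 13, 12, 14]
print(simple_exp_smooth(series, lam=0.4))
```

With `lam = 1` the smoothed series reproduces the data exactly, and with `lam = 0` it never moves from the starting value, matching the two extreme cases discussed in Section 4.2.2.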

Analogous to the size of the span in moving average smoothers, an important issue for the exponential smoothers is the choice of the discount factor $\lambda$. Moreover, from Eq. (4.7), we can see that the calculation of $\tilde{y}_1$ would require us to know $\tilde{y}_0$. We will discuss these issues in the next two sections.

4.2.1 The Initial Value, $\tilde{y}_0$

Since $\tilde{y}_0$ is needed in the recursive calculations that start with $\tilde{y}_1 = \lambda y_1 + (1-\lambda)\tilde{y}_0$, its value needs to be estimated. But from Eq. (4.7) we have

$$
\begin{aligned}
\tilde{y}_1 &= \lambda y_1 + (1-\lambda)\,\tilde{y}_0 \\
\tilde{y}_2 &= \lambda y_2 + (1-\lambda)\,\tilde{y}_1 = \lambda y_2 + (1-\lambda)\left(\lambda y_1 + (1-\lambda)\,\tilde{y}_0\right) \\
&= \lambda\left(y_2 + (1-\lambda)\,y_1\right) + (1-\lambda)^2\,\tilde{y}_0 \\
\tilde{y}_3 &= \lambda\left(y_3 + (1-\lambda)\,y_2 + (1-\lambda)^2\,y_1\right) + (1-\lambda)^3\,\tilde{y}_0 \\
&\;\;\vdots \\
\tilde{y}_T &= \lambda\left(y_T + (1-\lambda)\,y_{T-1} + \cdots + (1-\lambda)^{T-1}\,y_1\right) + (1-\lambda)^T\,\tilde{y}_0
\end{aligned}
$$

which means that as $T$ gets large and hence $(1-\lambda)^T$ gets small, the contribution of $\tilde{y}_0$ to $\tilde{y}_T$ becomes negligible. Thus for large data sets, the estimation of $\tilde{y}_0$ has little relevance. Nevertheless, two commonly used estimates for $\tilde{y}_0$ are the following.

1. Set $\tilde{y}_0 = y_1$. If the changes in the process are expected to occur early and fast, this choice for the starting value of $\tilde{y}_0$ is reasonable.

2. Take the average of the available data or a subset of the available data, $\bar{y}$, and set $\tilde{y}_0 = \bar{y}$. If the process is, at least at the beginning, locally constant, this starting value may be preferred.
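To see how little the choice between these two starting values matters once $T$ is large, the following sketch (hypothetical data; the helper name is ours) smooths the same series from both starting values and compares the gap:

```python
def smooth(y, lam, y0):
    """Apply the Eq. (4.7) recursion from a given starting value y0."""
    out, prev = [], y0
    for obs in y:
        prev = lam * obs + (1 - lam) * prev
        out.append(prev)
    return out

y = [100.0, 98.0, 101.0, 99.0, 150.0, 152.0, 151.0, 153.0]

a = smooth(y, lam=0.3, y0=y[0])            # option 1: start at the first observation
b = smooth(y, lam=0.3, y0=sum(y[:4]) / 4)  # option 2: start at an early-window average

# The gap between the two smoothed series decays like (1 - lam)**T.
print(abs(a[0] - b[0]), abs(a[-1] - b[-1]))
```

The two smoothed series start apart (by the discounted difference in starting values) and converge geometrically, which is exactly the $(1-\lambda)^T \tilde{y}_0$ term vanishing.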

4.2.2 The Value of $\lambda$

In Figures 4.5 and 4.6, respectively, we have two simple exponential smoothers for the Dow Jones Index data with $\lambda = 0.2$ and $\lambda = 0.4$. It can be seen that in the latter the smoothed values follow the original observations more closely. In general, as $\lambda$ gets closer to 1 and more emphasis is put on the last observation, the smoothed values will approach the original observations. The two extreme cases are $\lambda = 0$ and $\lambda = 1$. In the former, the smoothed values will all be equal to a constant, namely $\tilde{y}_0$. We can think of the constant line as the "smoothest" version of whatever pattern the actual time series follows. For $\lambda = 1$, we have $\tilde{y}_T = y_T$, and this will represent the "least" smoothed (or unsmoothed) version of the original time series. We can accordingly expect the variance of the simple exponential smoother to vary between 0 and the variance of the original time series, based on the choice of $\lambda$. Note that under

FIGURE 4.5 The Dow Jones Index from June 1999 to June 2006 with first-order exponential smoothing with $\lambda = 0.2$ (accuracy measures: MAPE = 4, MAD = 394, MSD = 287615).

the independence and constant variance assumptions we have

$$
\begin{aligned}
\operatorname{Var}(\tilde{y}_T) &= \operatorname{Var}\!\left(\lambda \sum_{t=0}^{\infty} (1-\lambda)^{t}\, y_{T-t}\right) \\
&= \lambda^2 \sum_{t=0}^{\infty} (1-\lambda)^{2t}\, \operatorname{Var}(y_{T-t}) \\
&= \lambda^2 \sum_{t=0}^{\infty} (1-\lambda)^{2t}\, \operatorname{Var}(y_T) \\
&= \operatorname{Var}(y_T)\, \lambda^2 \sum_{t=0}^{\infty} (1-\lambda)^{2t} \\
&= \frac{\lambda}{2-\lambda}\, \operatorname{Var}(y_T)
\end{aligned}
\tag{4.8}
$$
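Equation (4.8) can be checked numerically. The simulation below is our own sketch, assuming i.i.d. Gaussian observations with unit variance; it estimates the steady-state variance of the smoothed series and compares it to $\lambda/(2-\lambda)$:

```python
import random

random.seed(1)
lam = 0.3
y = [random.gauss(0.0, 1.0) for _ in range(200_000)]  # i.i.d. noise, Var(y) = 1

# Eq. (4.7) recursion, started at the first observation
smoothed, prev = [], y[0]
for obs in y:
    prev = lam * obs + (1 - lam) * prev
    smoothed.append(prev)

tail = smoothed[1000:]  # drop the start-up so the initial value has washed out
mean = sum(tail) / len(tail)
var_smooth = sum((s - mean) ** 2 for s in tail) / len(tail)

# Eq. (4.8) predicts Var(y_tilde) = lam / (2 - lam) * Var(y), about 0.176 here
print(var_smooth, lam / (2 - lam))
```

The estimated variance of the smoothed series should land close to the theoretical ratio and well below the variance of the raw series, illustrating the smoothing effect.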

Thus the question will be how much smoothing is needed. In the literature, $\lambda$ values between 0.1 and 0.4 are often recommended and do indeed perform well in practice. A more rigorous method of finding the right $\lambda$ value will be discussed in Section 4.6.1.

Example 4.1

Consider the Dow Jones Index from June 1999 to June 2006 given in Figure 4.3.

For first-order exponential smoothing we would need to address two issues as stated in the previous sections: how to pick the initial value $\tilde{y}_0$ and the smoothing constant $\lambda$. Following the recommendation in Section 4.2.2, we will consider the smoothing constants 0.2 and 0.4. As for the initial value, we will consider the first recommendation in Section 4.2.1 and set $\tilde{y}_0 = y_1$. Figures 4.5 and 4.6 show the

FIGURE 4.6 The Dow Jones Index from June 1999 to June 2006 with first-order exponential smoothing with $\lambda = 0.4$ (accuracy measures: MAPE = 3, MAD = 336, MSD = 195996).


smoothed and actual data obtained from Minitab with smoothing constants 0.2 and 0.4, respectively.

Note that Minitab reports several measures of accuracy: MAPE, MAD, and MSD.

Mean absolute percentage error (MAPE) is the average absolute percentage difference between the smoothed and the true values, given as

$$
\text{MAPE} = \frac{\sum_{t=1}^{T} \left| (y_t - \hat{y}_t)/y_t \right|}{T} \times 100
$$

Mean absolute deviation (MAD) is the average absolute difference between the smoothed and the true values, given as

$$
\text{MAD} = \frac{\sum_{t=1}^{T} \left| y_t - \hat{y}_t \right|}{T}
$$

Mean squared deviation (MSD) is the average squared difference between the smoothed and the true values, given as

$$
\text{MSD} = \frac{\sum_{t=1}^{T} \left( y_t - \hat{y}_t \right)^2}{T}
$$
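The three accuracy measures are easy to compute directly from the definitions above. A minimal sketch follows; the function name and sample values are ours for illustration, not Minitab output:

```python
def accuracy_measures(actual, fitted):
    """MAPE, MAD, and MSD as defined above, comparing true and smoothed values."""
    n = len(actual)
    mape = sum(abs((a - f) / a) for a, f in zip(actual, fitted)) / n * 100
    mad = sum(abs(a - f) for a, f in zip(actual, fitted)) / n
    msd = sum((a - f) ** 2 for a, f in zip(actual, fitted)) / n
    return mape, mad, msd

actual = [100.0, 110.0, 105.0]
fitted = [98.0, 108.0, 107.0]
print(accuracy_measures(actual, fitted))
```

Note that MAPE is scale-free (a percentage), while MAD is in the units of the data and MSD in squared units, which is why all three are typically reported together.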

It should also be noted that the smoothed data with $\lambda = 0.4$ follow the actual data more closely. However, in both cases, when there is an apparent linear trend in the data (e.g., from February 2003 to February 2004), the smoothed values consistently underestimate the actual data. We will discuss this issue in greater detail in the next section.

As an alternative estimate for the initial value, we can also use the average of the data between June 1999 and June 2001, since during this period the time series appears to be stable. Figures 4.7 and 4.8 show the single exponential smoothing with the initial value equal to the average of the first 25 observations, corresponding to the period between June 1999 and June 2001. Note that the choice of the initial value has very little effect on the smoothed values as time goes on.