TS Module 11 Computing and using sample autocorrelations


NEAS

TS Module 11 Computing and using sample autocorrelations

 

(The attached PDF file has better formatting.)

 

Computing and using sample autocorrelations

 

 

*Exercise 10.1: Sample autocorrelations

 

A time series has ten elements: {10, 8, 9, 11, 13, 12, 10, 8, 7, 12}.

 


A.     What is the sample autocorrelation of lag 1?

B.     What is the sample autocorrelation of lag 2?

C.    What is the sample autocorrelation of lag 3?

 


 

Part A: The table below shows the computations.

 


 

           The numerator of the sample autocorrelation is the total of 9 cross products of lag 1.

           The denominator is the total of 10 squared deviations.


 

 

10 / 36 = 0.278

 

The cross products use deviations from different rows.

 

Illustration: For the row beginning Entry #2:

 


 

           The cross product of lag 1 is –2 × –1 = 2.

           The cross product of lag 2 is –2 × 1 = –2.

           The cross product of lag 3 is –2 × 3 = –6.


 

 

The number of terms in the numerator is less than the number of terms in the denominator.

 

Part B:

 


 

           The numerator of the sample autocorrelation is the total of 8 cross products of lag 2.

           The denominator is the total of 10 squared deviations.


 

 

–11 / 36 = -0.306

 

Part C: –0.556

 


 

           The numerator of the sample autocorrelation is the total of 7 cross products of lag 3.

           The denominator is the total of 10 squared deviations.


 

 

–20 / 36 = -0.556

 


 

Entry      Value   Deviation   Deviation   Cross Product   Cross Product   Cross Product
                                 Squared         Lag 1           Lag 2           Lag 3
1            10         0           0              0               0               0
2             8        -2           4              2              -2              -6
3             9        -1           1             -1              -3              -2
4            11         1           1              3               2               0
5            13         3           9              6               0              -6
6            12         2           4              0              -4              -6
7            10         0           0              0               0               0
8             8        -2           4              6              -4
9             7        -3           9             -6
10           12         2           4
Avg/tot      10         0          36             10             -11             -20
Autocorr                                        0.278          -0.306          -0.556
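
To reproduce the table values directly, here is a minimal base-R sketch of the same computation (the object names are illustrative, not from the textbook):

y <- c(10, 8, 9, 11, 13, 12, 10, 8, 7, 12)
d <- y - mean(y)                               # deviations from the mean (mean = 10)
r <- sapply(1:3, function(k)
  sum(d[1:(length(y) - k)] * d[(1 + k):length(y)]) / sum(d^2))
round(r, 3)                                    # 0.278 -0.306 -0.556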

 

To produce the sample autocorrelations in R, use:

 

ts10 <- ts(c(10, 8, 9, 11, 13, 12, 10, 8, 7, 12))   # the ten observations as a time series object

acf10 <- acf(ts10)   # compute and plot the sample autocorrelation function (correlogram)
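
If only the numeric values are wanted, the plot can be suppressed (a minimal usage sketch):

acf(ts10, plot = FALSE)   # prints the sample autocorrelations without drawing the correlogram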

 


 

The correlogram for this time series is below. The pattern is mostly random fluctuation. With only 10 elements, the standard deviation of the sample autocorrelations for a white noise process is 1/√10 = 0.316. A 95% confidence interval of two standard deviations is ±0.632.

 

The horizontal blue lines show the 95% confidence interval. All the sample autocorrelations are within this interval, so we do not reject the null hypothesis that the figures are white noise.
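
(For reference, the bounds drawn by R's acf() are ±1.96/√10 ≈ ±0.62, slightly narrower than ±2/√10 = ±0.632; the difference does not affect the conclusion here.)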

 

 

 


 

Final exam problems may give a short time series and compute sample autocorrelations.

 

*Exercise 10.2: Sample autocorrelations

 

A time series has five elements: {1, 2, 3, 4, 5}.

 


 

A.     What is the sample autocorrelation of lag 1?

B.     What is the sample autocorrelation of lag 2?

C.    What is the sample autocorrelation of lag 3?

 


 

Part A: The table below shows the computations.

 

Step #1: Deviations from the mean

 


 

           Convert the figures into deviations from the mean.

           The mean is 3, so the deviations are –2, –1, 0, +1, +2.


 

 

Step #2: Denominator

 

The denominator is the total of N squared deviations (N = number of observations).

 

(–2)² + (–1)² + 0² + 1² + 2² = 10.

 

The numerator of the sample autocorrelation is the total of 4 cross products of lag 1.

 

(–2 × –1) + (–1 × 0) + (0 × 1) + (1 × 2) = 4

 

Step #3: Take the ratio

 

4 / 10 = 0.400

 

The cross products use deviations from different rows.

 

Illustration: For the row beginning Entry #1:

 


 

           The cross product of lag 1 is –2 × –1 = 2.

           The cross product of lag 2 is –2 × 0 = 0.

           The cross product of lag 3 is –2 × 1 = –2.


 

 

The number of terms in the numerator is less than the number of terms in the denominator.

 

Part B:

 


 

           The numerator of the sample autocorrelation is the total of 3 cross products of lag 2.

           The denominator is the total of 5 squared deviations.


 

 

–1 / 10 = -0.100

 

Part C:

 


 

           The numerator of the sample autocorrelation is the total of 2 cross products of lag 3.

           The denominator is the total of 5 squared deviations.


 

 

–4 / 10 = -0.400

 


 

Entry      Value   Deviation   Deviation   Cross Product   Cross Product   Cross Product
                                 Squared         Lag 1           Lag 2           Lag 3
1             1        -2           4              2               0              -2
2             2        -1           1              0              -1              -2
3             3         0           0              0               0
4             4         1           1              2
5             5         2           4
Avg/tot       3         0          10              4              -1              -4
Autocorr                                        0.400          -0.100          -0.400
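
The same figures can be checked against R's acf function (a minimal sketch):

ts5 <- ts(c(1, 2, 3, 4, 5))
acf(ts5, lag.max = 3, plot = FALSE)   # lags 1-3: 0.400, -0.100, -0.400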

 



 

 

 

 


 

*Question 10.6: Sample autocorrelations

 

rk is the sample autocorrelation for lag k of a stationary ARMA process with N observations.

 

Which of the following is true?

 


 

A.     The variance of rk is approximately proportional to N.

B.     The variance of rk is approximately proportional to 1/N.

C.    The standard deviation of rk is approximately proportional to N.

D.    The standard deviation of rk is approximately proportional to 1/N.

E.     The standard deviation of rk is approximately proportional to N².

 

Answer 10.6: B

 

This result is consistent with the standard error of a sample mean.

 


 

           The variance of the sample mean is proportional to 1/N.

           The standard error of the sample mean is proportional to 1/√N.


 

 

These relations are not exact for sample autocorrelations because of the different number of terms in the numerator and denominator. For the final exam, assume the relation is exact. Exam problems may say "proportional" instead of "approximately proportional."

 

(Cryer and Chan P110)

 

 


 

*Question 10.7: Sample autocorrelations

 

rk is the sample autocorrelation for lag k of a white noise process with N observations.

 

Which of the following is true?

 


 

A.     The variance of rk is proportional to N.

B.     The variance of rk is proportional to 1/N.

C.    The standard deviation of rk is proportional to N.

D.    The standard deviation of rk is proportional to 1/N.

E.     The standard deviation of rk is proportional to N².

 

Answer 10.7: B

 

We use this result to test if the residuals from an ARIMA process are white noise. Some statisticians refer to this as Bartlett’s test.

 

(Cryer and Chan, P110, Eq 6.1.3)
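
As an illustration of this result (a simulation sketch, not from the textbook; the function name, seed, and replication count are arbitrary choices), the spread of r1 for simulated white noise shrinks like 1/√N:

sd_r1 <- function(N, reps = 2000)
  sd(replicate(reps, acf(rnorm(N), lag.max = 1, plot = FALSE)$acf[2]))
set.seed(1)      # arbitrary seed for reproducibility
sd_r1(100)       # close to 1/sqrt(100) = 0.10
sd_r1(400)       # close to 1/sqrt(400) = 0.05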

 

 


 

*Question 10.8: AR(1) Process

 

An AR(1) process has 100 observations and φ = 80%. What is the standard error of r1, the sample autocorrelation of lag 1? (Use the approximation for large N.)

 


 

A.     0.06

B.     0.08

C.    0.36

D.    0.60

E.     0.64

 


 

Answer 10.8: A

 


 

           Var(r1) ≈ (1 – φ²) / n = (1 – 0.8²) / 100 = 0.00360

           The standard error is the square root of the variance = 0.06000


 

 

(Cryer and Chan, P111, Eq 6.1.5)
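
A one-line check of this arithmetic in R (the object names are illustrative):

phi <- 0.80; n <- 100
sqrt((1 - phi^2) / n)   # 0.06, the approximate standard error of r1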

 

Note two things:

 


 

           The standard error is proportional to 1/√N.

           If φ is close to +1 or –1, the standard error is low, and we are more certain of the estimate.


 

 

 


 

*Question 10.9: Sample autocorrelation function

 

A stationary ARMA process of 900 observations shows sample autocorrelations of –0.497, –0.035, –0.025, +0.030, and –0.015 for lags 1 through 5. Which of the following five models is most likely to cause these sample autocorrelations?

 


 

A.     MA(1) with θ = 0.9

B.     MA(1) with θ = –0.497

C.    MA(1) with θ = +0.497

D.    AR(1) with φ = –0.497

E.     AR(1) with φ = +0.497

 

Answer 10.9: A

 


 

(Cryer and Chan, P118)

 

The standard error of the sample autocorrelations is 1/√900 = 1/30 = 3.33%.

 

We infer that the autocorrelations of lags 2 - 5 are zero, and the observed values are random fluctuations.

 


 

           If the process were AR(1), then r1 ≈ φ, r2 ≈ φ², and so forth.

           The sample autocorrelations of an AR(1) process decline exponentially; they do not drop off suddenly after lag 1.


 

 

If φ = ±0.497, then r2 would be about 0.497² = 0.247. The observed value of –0.035 is about 8 standard deviations away, which is too large to attribute to chance.

 

An MA(1) process gives a sharp drop after r1, which is the pattern in this sample autocorrelation function. Solving r1 = –θ / (1 + θ²) = –0.497 gives θ = 0.900.
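
A simulation sketch of this pattern (not from the textbook): note that R's arima.sim uses the opposite sign convention for MA coefficients from Cryer and Chan, so θ = 0.9 in the text corresponds to ma = -0.9 below; the seed is arbitrary.

set.seed(1)
y <- arima.sim(model = list(ma = -0.9), n = 900)
acf(y, lag.max = 5, plot = FALSE)   # r1 near -0.497; later lags near zero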

 


 

*Question 10.10: Sample autocorrelation function

 

Which of the following is true for sample autocorrelations of non-stationary series?

 


 

A.     The sample ACF typically approaches one as the lag increases.

B.     The sample ACF typically increases without bound as the lag increases.

C.    The sample ACF typically fails to die out rapidly as the lag increases.

D.    The sample ACF typically is zero if the lag is greater than max(p,q).

E.     The sample ACF typically is one if the lag is greater than max(p,q).

 

Answer 10.10: C

 

The sample autocorrelation function of any time series approaches zero as the lag increases without bound.

 


 

           The expected sample autocorrelation of a moving average process is zero for lags > q.

           The expected sample autocorrelation of an autoregressive or a mixed moving average and autoregressive process declines exponentially to zero for lags > max(p, q).

           The observed sample autocorrelation has an error term that depends on the length of the time series.


 

 

The sample autocorrelation function of a non-stationary time series declines less rapidly. The correlogram may show any type of pattern: asymptotic, cyclical, or oscillating, and the sample autocorrelations stay well away from zero (in absolute value) for many lags.

 

(P125)
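
As an illustration (a simulation sketch, not from the textbook), the sample ACF of a simulated random walk dies out slowly:

set.seed(1)
rw <- cumsum(rnorm(200))   # random walk: a simple non-stationary series
acf(rw, plot = FALSE)      # the sample autocorrelations stay large for many lags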

 

 


 

*Question 10.11: Differencing

 

Which of the following is true?

 


 

A.     The first difference of a non-stationary time series is stationary.

B.     The first difference of a non-stationary time series is non-stationary.

C.    The first difference of a stationary time series is stationary.

D.    The first difference of a stationary time series is non-stationary.

E.     The first difference of a stationary time series may be stationary or non-stationary.

 

Answer 10.11: C

 

The first difference of an ARIMA process with d = 1 is a stationary time series. If d > 1, the first difference is non-stationary. The first difference of a stationary time series is also stationary.

 

(P126 and Exercise 2.6 on page 20)
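
A minimal R sketch of this point, differencing a simulated random walk (object names are illustrative):

set.seed(1)
rw <- cumsum(rnorm(200))     # non-stationary: a random walk, i.e., ARIMA(0,1,0)
acf(diff(rw), plot = FALSE)  # the first difference behaves like white noise; its sample ACF dies out quickly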

 

 


Michelle2010
I have a question on 10.6. Why is Var(rk) for a stationary ARMA process proportional to 1/N? I thought this was only true for white noise processes.

I understand the comment in the solution that this result is consistent with the variance of a sample mean, however I don't really get how the variance of a sample mean is related to the variance of the sample autocorrelation.


Adrian

The sentence immediately after equation 6.1.2 seems to imply that Var(rk) will be proportional to 1/n for large n. It says that the variance will be ckk/n, and the equation for ckk doesn't have any n terms, so ckk/n should be proportional to 1/n. The example problem doesn't specify that n is large; I think we are intended to assume that.

[NEAS: Yes, assume N is large.]

