Time Series Projects Moving Average Models



Author: NEAS

Updated: April 7, 2006

Jacob: How do we back into a moving average parameter?

Rachel: For an MA(1) model, the autocorrelation of lag 1 is ρ1 = –θ1 / (1 + θ1²). If we have enough points, the sample autocorrelation is a good estimator of the autocorrelation, and we can back into the θ1 parameter.

Jacob: Can you give an example of this?

Rachel: Suppose the sample autocorrelation of lag 1 is 50% and sample autocorrelations of lags greater than 1 are close to zero. We presume the time series is an MA(1) process and we compute the θ1 parameter from –θ1 / (1 + θ1²) = 0.5 ⇒ 1 + θ1² = –2 × θ1 ⇒ (θ1 + 1)² = 0 ⇒ θ1 = –1.
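Rachel's inversion can be sketched as a short Python helper (the function name and the convention yt = εt – θ1 εt–1 are mine, not the textbook's): rearranging ρ1 = –θ1 / (1 + θ1²) gives the quadratic ρ1 θ1² + θ1 + ρ1 = 0, and the root with |θ1| ≤ 1 is the invertible choice.

```python
import math

def theta_from_r1(r1):
    """Solve r1 = -theta / (1 + theta**2) for the invertible MA(1) parameter.

    Rearranging gives r1*theta**2 + theta + r1 = 0; real roots exist only
    when |r1| <= 0.5, the largest lag-1 autocorrelation an MA(1) can produce.
    """
    if abs(r1) > 0.5:
        raise ValueError("an MA(1) cannot produce |rho_1| > 0.5")
    if r1 == 0:
        return 0.0
    disc = math.sqrt(1 - 4 * r1 * r1)
    # Of the two roots of the quadratic, the one with |theta| <= 1
    # is the invertible choice.
    return (-1 + disc) / (2 * r1)

print(theta_from_r1(0.5))   # -1.0, matching Rachel's example
print(theta_from_r1(0.4))   # -0.5
```

At r1 = 0.5 the discriminant is zero, so the two roots coincide at θ1 = –1, exactly the boundary case in the dialogue.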

Jacob: Can you explain this model intuitively?

Rachel: Suppose the mean is zero. If θ1 = –1 for an MA(1) process, yt = εt + εt-1. Since εt and εt-1 are independent, identically distributed random variables, each contributes half the variance of yt, and adjacent observations share exactly one ε term, so the autocorrelation of lag 1 is ½.
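The ½ can be checked by simulation (a minimal sketch; the series length and seed are arbitrary choices of mine): generate yt = εt + εt-1 and compute the sample autocorrelation of lag 1.

```python
import random

random.seed(0)

# Simulate y_t = e_t + e_(t-1), the MA(1) with theta_1 = -1 under the
# convention y_t = e_t - theta_1 * e_(t-1).
n = 100_000
e = [random.gauss(0, 1) for _ in range(n + 1)]
y = [e[t] + e[t - 1] for t in range(1, n + 1)]

mean = sum(y) / n
var = sum((v - mean) ** 2 for v in y) / n
cov1 = sum((y[t] - mean) * (y[t - 1] - mean) for t in range(1, n)) / n

# cov(y_t, y_(t-1)) = sigma^2 and var(y_t) = 2*sigma^2, so the ratio is 1/2.
print(round(cov1 / var, 2))
```

With 100,000 points the sample value lands very close to the theoretical 0.50.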

Jacob: If we have only 20 observations, can we still back into the moving average parameter?

Rachel: A time series with only 20 observations is common; an example might be five years of quarterly numbers. We can back into the moving average parameter, but our estimate is not efficient. In the example above, θ1 parameters between –1.2 and –0.8 give autocorrelations of lag 1 of about 50%. Even with hundreds of observations, we can’t get an exact estimate of θ1. With only 20 observations, a θ1 between –1.5 and –0.5 may give a sample autocorrelation of 50%. We must be careful with inferences from small data sets.
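Rachel's warning about small samples is easy to demonstrate by simulation (a sketch under my own choices of seed and replication count): generate many 20-observation MA(1) series with θ1 = –1 and look at the spread of the sample lag-1 autocorrelation.

```python
import random
import statistics

random.seed(1)

def sample_r1(y):
    """Sample autocorrelation of lag 1."""
    n = len(y)
    m = sum(y) / n
    var = sum((v - m) ** 2 for v in y)
    cov = sum((y[t] - m) * (y[t - 1] - m) for t in range(1, n))
    return cov / var

# Many short MA(1) series with theta_1 = -1, i.e. y_t = e_t + e_(t-1).
r1s = []
for _ in range(2000):
    e = [random.gauss(0, 1) for _ in range(21)]
    y = [e[t] + e[t - 1] for t in range(1, 21)]
    r1s.append(sample_r1(y))

print("mean:", round(statistics.mean(r1s), 2))
print("stdev:", round(statistics.stdev(r1s), 2))
```

The standard deviation of the estimates is large relative to the true value of 0.5, which is why a wide band of θ1 values is consistent with an observed 50% autocorrelation in a 20-point series.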

Jacob: Is the same true for the regression analysis used to estimate autoregressive models?

Rachel: The concept is the same, though the estimates may be more accurate. We should always compute the standard error of the estimator. Sometimes the textbook gives a formula for the standard error. These standard errors are not in the chapters for the time series course, so they are not required for the student projects.

Jacob: Are we supposed to examine moving average processes for the student project?

Rachel: Some candidates are confused by this.

~ If the sample autocorrelation indicates an MA(1) model or an ARIMA(0,1,1) model, you should back into the MA(1) parameter and test if the residuals are a white noise process.

~ You do not have to test ARMA(p,q) models or ARIMA(p,d,q) models for which both p and q are equal to or greater than 1. You may comment if the sample autocorrelation function suggests a moving average component, but estimating the parameters requires nonlinear regression.
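The first bullet can be sketched end to end in Python (my own illustration, assuming the convention yt = εt – θ1 εt–1; I use θ1 = –0.5 rather than –1 so the residual filter is invertible): back into θ1 from the sample lag-1 autocorrelation, recover the residuals from εt = yt + θ1 εt-1, and check that their sample autocorrelations are near zero.

```python
import math
import random

random.seed(2)

# Simulate an MA(1) with theta_1 = -0.5, i.e. y_t = e_t + 0.5 * e_(t-1).
n = 20_000
e = [random.gauss(0, 1) for _ in range(n + 1)]
y = [e[t] + 0.5 * e[t - 1] for t in range(1, n + 1)]

def acf(x, lag):
    """Sample autocorrelation at the given lag."""
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, len(x))) / var

# Back into theta_1: the invertible root of r1*theta^2 + theta + r1 = 0.
r1 = acf(y, 1)
theta = (-1 + math.sqrt(1 - 4 * r1 * r1)) / (2 * r1)

# Recover residuals recursively from e_t = y_t + theta * e_(t-1),
# starting the recursion at zero.
res = [0.0]
for t in range(n):
    res.append(y[t] + theta * res[-1])
res = res[1:]

print("theta estimate:", round(theta, 2))
print("residual ACF, lags 1-3:", [round(acf(res, k), 2) for k in (1, 2, 3)])
```

If the MA(1) specification is right, the residual autocorrelations at all lags should be statistically indistinguishable from zero, which is the white noise test the bullet describes.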

Jacob: If we have a statistical package like Minitab, should we examine moving average models?

Rachel: Statisticians differ on the value of moving average models. The textbook authors believe that all reasonable ARIMA models should be examined. Other statisticians examine simple autoregressive models and use moving average models only if a strong intuitive reason exists for these models.

Jacob: Why is it hard to identify moving average models? From the description in the textbook, they should be easier to identify than autoregressive models.

~ A moving average model has a high sample autocorrelation followed by sample autocorrelations of zero. This should be easy to spot.

~ An autoregressive model has an exponentially declining envelope about its sample autocorrelations; this should be hard to spot.

Rachel: In the extreme scenarios, a moving average model can be spotted. If the sample autocorrelation is 50% for lag 1 and 0% for all other lags, the model is moving average. Even in this simple scenario, specifying the moving average parameter may be difficult in small data sets. θ1 could range from –0.8 to –1.2 and give sample autocorrelations of about 50% for lag 1.

In practice, the moving average component is usually weaker than the autoregressive component. If the sample autocorrelation of lag 1 is 80% and the following sample autocorrelations have an exponentially declining pattern of 50%, 40%, 32%, …, it is hard to decide if this model is AR(1) or ARMA(1,1).

~ If the exponential decay is exactly 50%, 40%, 32%, and so forth, we might assume the sample autocorrelation of lag 1 is exact as well, and the model is ARMA(1,1).

~ If the exponential decay is roughly equal to 50%, 40%, 32%, and so forth, we might assume the sample autocorrelation of lag 1 is also not exact, and the model is AR(1).
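The distinction above can be made concrete with theoretical autocorrelation functions (a sketch assuming the standard ARMA(1,1) form yt = φ yt-1 + εt – θ εt-1; the values φ = 0.8 and θ = 0.3 are my illustrative choices): for lags 2 and beyond, both AR(1) and ARMA(1,1) decay geometrically at rate φ, and only the lag-1 value distinguishes them.

```python
# Theoretical ACFs. For k >= 2 both models satisfy rho_k = phi * rho_(k-1);
# only the lag-1 autocorrelation differs between AR(1) and ARMA(1,1).

def ar1_acf(phi, lags):
    """AR(1): rho_k = phi ** k."""
    return [phi ** k for k in range(1, lags + 1)]

def arma11_acf(phi, theta, lags):
    """ARMA(1,1): rho_1 from the standard formula, then geometric decay."""
    rho1 = (phi - theta) * (1 - phi * theta) / (1 + theta * theta - 2 * phi * theta)
    out = [rho1]
    for _ in range(lags - 1):
        out.append(phi * out[-1])
    return out

print([round(r, 2) for r in ar1_acf(0.8, 4)])          # [0.8, 0.64, 0.51, 0.41]
print([round(r, 2) for r in arma11_acf(0.8, 0.3, 4)])  # [0.62, 0.5, 0.4, 0.32]
```

Both series share the 0.8 decay rate; the moving average term only shifts the lag-1 value off the geometric curve, which is exactly why a sample ACF whose lag-1 value roughly fits the decay pattern gives no firm evidence for the MA component.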

Jacob: Are there statistical tests to decide this question?

Rachel: The decision depends on the intuition for a moving average component. If we have no reason to assume a moving average component, we prefer the AR(1) model. If we think a moving average component is likely, we may prefer the ARMA(1,1) model.

