MS Module 17: Regression analysis confidence intervals and hypothesis testing (overview 3rd edition)

(The attached PDF file has better formatting.)

(Readings from the 3rd edition of the Devore, Berk, and Carlton text.)

Reading: §12.3 Inferences about the regression coefficient β1

Distinguish between parameters and their estimates.

β1 is a parameter of the regression equation (its slope); it has a fixed but unknown value, with no standard deviation. The estimate of β1 (shown as β̂1 in the textbook) is a sample statistic with a sampling distribution. Inferences from regression analysis depend on the attributes of this sample statistic (such as its standard deviation) and of other sample statistics, such as the estimate of σ².

Know the degrees of freedom for each statistical test. Most problems in previous modules subtract 1 from the number of observations or the number of classes. A regression equation has two parameters, so the degrees of freedom are the number of observations minus 2. The estimator of σ² divides the error sum of squares by n-2 (see the equations below Figure 12.13), and the t-ratio has n-2 degrees of freedom. Multiple regression has more parameters, and the degrees of freedom decrease accordingly.
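
A minimal sketch (with made-up data, not from the textbook) of how the n-2 degrees of freedom enter: the fitted line consumes two parameters, so the error sum of squares is divided by n-2 to estimate σ².

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # illustrative data, not a textbook example
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

    n = len(x)
    x_bar, y_bar = x.mean(), y.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    Sxy = np.sum((x - x_bar) * (y - y_bar))

    b1 = Sxy / Sxx                        # slope estimate (beta1-hat)
    b0 = y_bar - b1 * x_bar               # intercept estimate (beta0-hat)

    SSE = np.sum((y - (b0 + b1 * x)) ** 2)
    s2 = SSE / (n - 2)                    # estimate of sigma^2: divide by n - 2, not n or n - 1
    s = np.sqrt(s2)

    print(f"beta1-hat = {b1:.4f}, s = {s:.4f}, degrees of freedom = {n - 2}")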

The estimate β̂1 can be expressed as a linear combination of the observed Y values, where the coefficients in this combination are functions of the X values. Know the formula for these coefficients, which appears in confidence intervals, prediction intervals, and standardized residuals as well.
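
A minimal sketch (same made-up data as above) verifying the linear-combination form: β̂1 = Σ cᵢ Yᵢ with cᵢ = (xᵢ - x̄) / Sxx, so the coefficients depend only on the x values.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # illustrative data
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

    x_bar, y_bar = x.mean(), y.mean()
    Sxx = np.sum((x - x_bar) ** 2)

    c = (x - x_bar) / Sxx                 # coefficients: functions of the x values only
    b1_from_combo = np.sum(c * y)         # beta1-hat as a linear combination of the y values

    b1_usual = np.sum((x - x_bar) * (y - y_bar)) / Sxx   # usual formula Sxy / Sxx
    print(np.isclose(b1_from_combo, b1_usual))           # True: the two forms agree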

Final exam problems test three types of intervals for regression analysis:

●    The confidence interval for β1, which is a range with upper and lower bounds.
●    The confidence interval for ŷ (the fitted value), which depends on the x value.
●    The prediction interval, part of which depends on the x value and part of which is constant.

This module computes the confidence interval for β1, t values to test hypotheses about β1, and the p values (the smallest significance level at which the null hypothesis would be rejected). The next module derives the other two intervals. All three intervals are tested on the final exam. A sketch of this module's computations follows.
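
A minimal sketch (made-up data, not a textbook example) of those three computations: the confidence interval for β1, the t statistic for H0: β1 = 0, and the two-sided p value, each using n-2 degrees of freedom.

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # illustrative data
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

    n = len(x)
    x_bar, y_bar = x.mean(), y.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    b1 = np.sum((x - x_bar) * (y - y_bar)) / Sxx
    b0 = y_bar - b1 * x_bar

    SSE = np.sum((y - (b0 + b1 * x)) ** 2)
    s = np.sqrt(SSE / (n - 2))            # estimate of sigma
    se_b1 = s / np.sqrt(Sxx)              # standard error of beta1-hat

    t_crit = stats.t.ppf(0.975, df=n - 2)              # 95% two-sided critical value
    ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)    # confidence interval for beta1

    t_stat = b1 / se_b1                                # test H0: beta1 = 0
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)    # two-sided p value

    print(f"95% CI for beta1: ({ci[0]:.4f}, {ci[1]:.4f})")
    print(f"t = {t_stat:.3f}, p = {p_value:.4g}")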

Distinguish between the standard deviation of the error term (σ) and the standard deviation of the estimate β̂1. The estimated standard deviation (standard error) of β̂1 is s / √Sxx. A common error is to forget to divide by √Sxx when testing hypotheses about β1.
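
A minimal sketch of that common error, with made-up numbers (not from the textbook): the t statistic must divide by s / √Sxx, not by s alone.

    import numpy as np

    b1 = 0.974       # illustrative slope estimate
    s = 0.31         # illustrative estimate of sigma
    Sxx = 17.5       # illustrative sum of squared x deviations

    se_b1 = s / np.sqrt(Sxx)     # standard error of beta1-hat

    t_wrong = b1 / s             # common error: forgets to divide s by sqrt(Sxx)
    t_right = b1 / se_b1         # correct t statistic for H0: beta1 = 0

    print(f"wrong t = {t_wrong:.2f}, correct t = {t_right:.2f}")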

Read the section “A Confidence Interval for β1.” Note the degrees of freedom and the standard error. Example 12.9 shows how to compute the confidence interval for β1 from the summary statistics. Example 12.10 is strange, relating subjective estimates of risk by one group to subjective estimates of expected returns by another group. The regression analysis is well done and shows a negative relation, but the implication for investment is unclear; perhaps the subjective estimates do not measure actual risk and return.

Read the section “Regression and ANOVA.” Know the relation of the t value for hypothesis testing and the F test for the regression analysis (t² = F).
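
A minimal sketch (made-up data) checking that relation: in simple regression the F statistic is MSR / MSE with 1 and n-2 degrees of freedom, and it equals the square of the t statistic for β1.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # illustrative data
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

    n = len(x)
    x_bar, y_bar = x.mean(), y.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    b1 = np.sum((x - x_bar) * (y - y_bar)) / Sxx
    b0 = y_bar - b1 * x_bar
    y_hat = b0 + b1 * x

    SSE = np.sum((y - y_hat) ** 2)           # error sum of squares
    SSR = np.sum((y_hat - y_bar) ** 2)       # regression sum of squares

    s = np.sqrt(SSE / (n - 2))
    t_stat = b1 / (s / np.sqrt(Sxx))         # t statistic for H0: beta1 = 0
    F_stat = (SSR / 1) / (SSE / (n - 2))     # F statistic: MSR / MSE

    print(np.isclose(t_stat ** 2, F_stat))   # True: t^2 equals F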

Review end-of-chapter exercises 31(a) and (b), 33, 34(a) and (b), and 35(a) and (b).
