16.5 Hypothesis Testing on Parameters
\(t\) tests are used to conduct hypothesis tests on the regression coefficients obtained in simple linear regression. For the slope coefficient, \(\beta_1\), the hypotheses are:
\[ H_0: \beta_1 = 0 \] \[ H_1: \beta_1 \neq 0 \]
The test statistic used for this test is:
\[t_{stat} = \dfrac{\hat{\beta}_1}{se(\hat{\beta}_1)}\]
where \(\hat{\beta}_1\) is the least squares estimate of \(\beta_1\), and \(se(\hat{\beta}_1)\) is its standard error. The value of \(se(\hat{\beta}_1)\) can be calculated as follows:
\[ se(\hat{\beta}_1)= \sqrt{\frac{\frac{\displaystyle \sum_{i=1}^n e_i^2}{n-2}}{\displaystyle \sum_{i=1}^n (x_i-\bar{x})^2}} \]
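As a sketch, the estimates and the standard error above can be computed directly and cross-checked against SciPy's `linregress`. The data here are synthetic, generated purely for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic data for illustration (not from the text)
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.8 * x + rng.normal(scale=1.5, size=x.size)

n = x.size
x_bar = x.mean()

# Least squares estimates of the intercept and slope
beta1_hat = np.sum((x - x_bar) * (y - y.mean())) / np.sum((x - x_bar) ** 2)
beta0_hat = y.mean() - beta1_hat * x_bar

# Residuals e_i and the standard error of beta1_hat from the formula above
e = y - (beta0_hat + beta1_hat * x)
se_beta1 = np.sqrt((np.sum(e ** 2) / (n - 2)) / np.sum((x - x_bar) ** 2))

t_stat = beta1_hat / se_beta1

# Cross-check: scipy.stats.linregress reports the same slope and standard error
res = stats.linregress(x, y)
print(np.isclose(beta1_hat, res.slope), np.isclose(se_beta1, res.stderr))
```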
The test statistic follows a \(t\) distribution with \((n−2)\) degrees of freedom, where \(n\) is the total number of observations. The null hypothesis, \(H_0\), is not rejected if the calculated value of the test statistic, \(t_{stat}\), is such that:
\[-t_{\alpha/2,n-2}\lt t_{stat}\lt t_{\alpha/2,n-2}\]
where \(t_{\alpha/2,n-2}\) and \(-t_{\alpha/2,n-2}\) are the critical values for the two-sided hypothesis. \(t_{\alpha/2,n-2}\) is the percentile of the \(t\) distribution corresponding to a cumulative probability of \((1−\alpha/2)\) and \(\alpha\) is the significance level.
The test indicates whether the fitted regression model is of value in explaining variations in the observations, or whether you are trying to impose a regression model when no true relationship exists between \(x\) and \(y\). Failure to reject \(H_0: \beta_1 = 0\) implies that there is insufficient evidence of a linear relationship between \(x\) and \(y\).