1. When both variables are categorical, the t-test should be used for hypothesis testing.
2. The term parametric statistics refers to tests that make assumptions about the distribution of data.
3. Recoding continuous variables as categorical variables is discouraged because it results in a loss of information.
4. The t-test has four test assumptions.
5. The critical values of the t-test are provided by Student's t distribution.
6. One-tailed tests are used most often, unless compelling a priori knowledge exists or it is known that one group cannot have a larger mean than the other.
7. The term robust is used, generally, to describe the extent to which test conclusions are unaffected by departures from test assumptions.
8. A combination of visual inspection and statistical testing should always be used to determine normality.
9. Nonnormality is sometimes overcome through variable transformation.
10. When problems of nonnormality cannot be resolved adequately, analysts should consider nonparametric alternatives to the t-test.
11. Analysts should always examine the robustness of their findings.
12. All t-tests first test for equality of means and then test for equality of variances.
13. The paired-samples t-test tests the null hypothesis that the mean difference between the before and after test scores is zero.
14. Simple regression is appropriate for examining the bivariate relationships between two continuous variables.
15. A scatterplot is a plot of the data points of two continuous variables.
16. The null hypothesis in regression is that the intercept is zero.
17. The slope indicates the steepness of the regression line.
18. A negative slope indicates an upward sloping line.
19. The statistical significance of regression slopes is indeterminable.
20. R-square is the percent variation of the dependent variable explained by the independent variable(s).
21. R-square varies from 1 to 2.
22. A perfect fit is indicated when the coefficient of determination is zero.
23. A regression line assumes a linear relationship that is constant over the range of observations.
24. The dependent variable is also called the error term.
25. Pearson's correlation coefficient, r, measures the association (significance, direction, and strength) between two continuous variables.
26. The Pearson's correlation coefficient, r, always has the same sign as b.
27. Multiple regression is one of the most widely used multivariate statistical techniques for analyzing three or more variables.
28. Full model specification means that all variables are measured that affect the dependent variable.
29. A nomothetic mode of explanation isolates the most important factors.
30. The search for parsimonious explanations often leads analysts to first identify different categories of factors that most affect their dependent variable.
31. It is okay for independent variables not to be correlated with the dependent variables, as long as they are highly correlated with each other.
32. In multiple regression the adjusted R-square controls for the number of dependent variables.
33. Values of R-square adjusted below 0.20 are considered to suggest weak model fit, those between 0.20 and 0.40 indicate moderate fit, those above 0.40 indicate strong fit, and those above 0.65 indicate very strong model fit.
34. It is common to compare beta coefficients across different models.
35. The global F-test examines the overall effect of all independent variables jointly on the dependent variable.
36. Outliers are observations whose multiple regression residuals exceed three standard deviations.
37. When two variables are multicollinear, they are strongly correlated with each other.
38. Curvilinearity is indicated by residuals that are linearly related to each other.
39. Heteroscedasticity occurs when one of the dependent variables is linearly related to the independent variable.
40. Autocorrelation is common with time series data.
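Items 1-13 concern the t-test and its assumptions. The following is a minimal sketch, using hypothetical data, of how the independent-samples and paired-samples t-tests and their assumption checks might be run with scipy; the group names, means, and sample sizes are illustrative assumptions, not values taken from the questions.

```python
# Minimal sketch of t-test workflow with hypothetical data (items 1-13).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=10, size=30)  # hypothetical scores, group A
group_b = rng.normal(loc=55, scale=10, size=30)  # hypothetical scores, group B

# Normality should be assessed both visually and statistically (item 8);
# the Shapiro-Wilk test is one common statistical check.
print(stats.shapiro(group_a))

# Levene's test for equality of variances guides the choice of t-test
# variant; note it precedes the test of means (contrast item 12).
print(stats.levene(group_a, group_b))

# Two-tailed independent-samples t-test (item 6 concerns the tail choice).
print(stats.ttest_ind(group_a, group_b, equal_var=True))

# Paired-samples t-test: the null hypothesis is that the mean difference
# between before and after scores is zero (item 13).
before = rng.normal(loc=60, scale=8, size=25)
after = before + rng.normal(loc=2, scale=3, size=25)  # hypothetical gain
print(stats.ttest_rel(before, after))
```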
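Items 14-26 concern simple regression and correlation. A minimal sketch with hypothetical data, using scipy.stats.linregress, shows the slope and its statistical significance, Pearson's r, and R-square; the coefficient values below are assumptions for illustration only.

```python
# Minimal sketch of simple regression on hypothetical data (items 14-26).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)                # continuous independent variable
y = 3.0 - 0.5 * x + rng.normal(0, 1, size=50)  # negative slope: downward line (item 18)

result = stats.linregress(x, y)
print("slope:", result.slope)           # steepness and direction of the line (item 17)
print("intercept:", result.intercept)
print("p-value:", result.pvalue)        # slopes are testable (contrast item 19)
print("r:", result.rvalue)              # Pearson's r shares the sign of b (item 26)
print("R-square:", result.rvalue ** 2)  # bounded by 0 and 1 (contrast items 21-22)
```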
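Items 27-40 concern multiple regression and its diagnostics. A minimal sketch with hypothetical data, using statsmodels, shows adjusted R-square, the global F-test, variance inflation factors for multicollinearity, standardized residuals for outlier screening, and the Durbin-Watson statistic for autocorrelation; the data-generating coefficients are assumed for illustration.

```python
# Minimal sketch of multiple regression diagnostics on hypothetical data
# (items 27-40).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(7)
n = 100
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)  # correlated with x1 (item 37)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()

print("adjusted R-square:", model.rsquared_adj)      # items 32-33
print("global F-test:", model.fvalue, model.f_pvalue)  # item 35

# Variance inflation factors flag multicollinearity among the predictors.
for i in range(1, X.shape[1]):
    print("VIF:", variance_inflation_factor(X, i))

# Standardized residuals beyond +/-3 flag potential outliers (item 36);
# the Durbin-Watson statistic screens for autocorrelation (item 40).
influence = model.get_influence()
print("max |std. residual|:", np.abs(influence.resid_studentized_internal).max())
print("Durbin-Watson:", durbin_watson(model.resid))
```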