How does inferential statistics allow you to assess the reliability of your data?

Inferential statistics are the statistical procedures that are used to reach conclusions about associations between variables. They differ from descriptive statistics in that they are explicitly designed to test hypotheses. Numerous statistical procedures fall into this category, most of which are supported by modern statistical software such as SPSS and SAS. This chapter provides a short primer on only the most basic and frequently used procedures; readers are advised to consult a formal statistics text or take a statistics course for more advanced procedures.

Basic Concepts

British philosopher Karl Popper said that theories can never be proven, only disproven. As an example, how can we prove that the sun will rise tomorrow? Popper said that just because the sun has risen every single day that we can remember does not necessarily mean that it will rise tomorrow, because inductively derived theories are only conjectures that may or may not be predictive of future phenomena. Instead, he suggested that we may assume a theory that the sun will rise every day without necessarily proving it, and if the sun does not rise on a certain day, the theory is falsified and rejected. Likewise, we can only reject hypotheses based on contrary evidence but can never truly accept them, because the presence of supporting evidence does not mean that we will not observe contrary evidence later. Because we cannot truly accept a hypothesis of interest [the alternative hypothesis], we formulate a null hypothesis as the opposite of the alternative hypothesis, and then use empirical evidence to reject the null hypothesis, thereby demonstrating indirect, probabilistic support for our alternative hypothesis.

A second problem with testing hypothesized relationships in social science research is that the dependent variable may be influenced by an infinite number of extraneous variables, and it is not feasible to measure and control for all of these extraneous effects. Hence, even if two variables seem to be related in an observed sample, they may not be truly related in the population. Inferential statistics are therefore never certain or deterministic, but always probabilistic.

How do we know whether a relationship between two variables in an observed sample is significant, and not a matter of chance? Sir Ronald A. Fisher, one of the most prominent statisticians in history, established the basic guidelines for significance testing. He said that a statistical result may be considered significant if it can be shown that the probability of it occurring by chance alone is 5% or less. In inferential statistics, this probability is called the p-value, 5% is called the significance level [α], and the desired relationship between the p-value and α is denoted as p ≤ 0.05. The significance level is the maximum level of risk that we are willing to accept as the price of our inference from the sample to the population. If the p-value is less than 0.05 [5%], we run at most a 5% risk of being incorrect in rejecting the null hypothesis, that is, of committing a Type I error. If p > 0.05, we do not have enough evidence to reject the null hypothesis or accept the alternative hypothesis.
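To make the idea of a significance level concrete, the short simulation below [a minimal sketch in Python using NumPy and SciPy, not part of the original text] repeatedly draws two variables that are truly unrelated in the population; roughly 5% of the samples still show an association with p ≤ 0.05, which is exactly the Type I error rate that α is meant to cap.

```python
# Minimal sketch: when the null hypothesis is true [no real association],
# about 5% of samples still produce p <= 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_simulations = 10_000
false_rejections = 0

for _ in range(n_simulations):
    x = rng.normal(size=30)
    y = rng.normal(size=30)  # generated independently of x, so the null is true
    r, p_value = stats.pearsonr(x, y)
    if p_value <= alpha:
        false_rejections += 1  # a Type I error

print(f"Proportion of false rejections: {false_rejections / n_simulations:.3f}")
# Expected to be close to alpha = 0.05
```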

We must also understand three related statistical concepts: sampling distribution, standard error, and confidence interval. A sampling distribution is the theoretical distribution of an infinite number of samples from the population of interest in your study. However, because a sample is never identical to the population, every sample always has some inherent level of error, called the standard error. If this standard error is small, then statistical estimates derived from the sample [such as the sample mean] are reasonably good estimates of the population. The precision of our sample estimates is defined in terms of a confidence interval [CI]. A 95% CI is defined as the range within plus or minus two standard errors of the sample estimate, where the standard error is the standard deviation of that estimate across the different samples in a sampling distribution. Hence, when we say that our observed sample estimate has a 95% CI, what we mean is that we are confident that 95% of the time the population parameter falls within two standard errors of our observed sample estimate. Jointly, the p-value and the CI give us a good idea of the probability of our result and how close it is to the corresponding population parameter.
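As a rough illustration [a minimal sketch in Python using NumPy, with simulated data that is not part of the original text], the standard error of a sample mean and its approximate 95% CI can be computed as follows.

```python
# Minimal sketch: standard error of the mean and an approximate 95% CI
# [sample mean plus or minus two standard errors] for one simulated sample.
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(loc=100, scale=15, size=50)  # one observed sample

mean = sample.mean()
std_error = sample.std(ddof=1) / np.sqrt(len(sample))  # estimated standard error

ci_low, ci_high = mean - 2 * std_error, mean + 2 * std_error
print(f"mean = {mean:.2f}, SE = {std_error:.2f}, 95% CI ≈ [{ci_low:.2f}, {ci_high:.2f}]")
```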

General Linear Model

Most inferential statistical procedures in social science research are derived from a general family of statistical models called the general linear model [GLM]. A model is an estimated mathematical equation that can be used to represent a set of data, and linear refers to a straight line. Hence, a GLM is a system of equations that can be used to represent linear patterns of relationships in observed data.

Figure 15.1. Two-variable linear model.

The simplest type of GLM is a two-variable linear model that examines the relationship between one independent variable [the cause or predictor] and one dependent variable [the effect or outcome]. Let us assume that these two variables are age and self-esteem respectively. The bivariate scatterplot for this relationship is shown in Figure 15.1, with age [predictor] along the horizontal or x-axis and self-esteem [outcome] along the vertical or y-axis. From the scatterplot, it appears that individual observations representing combinations of age and self-esteem generally seem to be scattered around an imaginary upward-sloping straight line. We can estimate parameters of this line, such as its slope and intercept, from the GLM. From high-school algebra, recall that straight lines can be represented using the mathematical equation y = mx + c, where m is the slope of the straight line [how much y changes for a unit change in x] and c is the intercept term [the value of y when x is zero]. In GLM, this equation is represented formally as:

y = β₀ + β₁x + ε

where β₀ is the intercept term, β₁ is the slope, and ε is the error term. The error term ε represents the deviation of actual observations from their estimated values, since most observations are close to the line but do not fall exactly on the line [i.e., the GLM is not perfect]. Note that a linear model can have more than two predictors. To visualize a linear model with two predictors, imagine a three-dimensional cube, with the outcome [y] along the vertical axis and the two predictors [say, x₁ and x₂] along the two horizontal axes at the base of the cube. A line that describes the relationship between two or more variables is called a regression line, β₀ and β₁ [and other beta values] are called regression coefficients, and the process of estimating regression coefficients is called regression analysis. The GLM for regression analysis with n predictor variables is:

y = β₀ + β₁x₁ + β₂x₂ + β₃x₃ + … + βₙxₙ + ε

In the above equation, the predictor variables xᵢ may represent independent variables or covariates [control variables]. Covariates are variables that are not of theoretical interest but may have some impact on the dependent variable y and should be controlled for, so that the residual effects of the independent variables of interest are detected more precisely. Covariates capture systematic errors in a regression equation, while the error term [ε] captures random errors. Though most variables in the GLM tend to be interval- or ratio-scaled, this does not have to be the case. Some predictor variables may even be nominal variables [e.g., gender: male or female], which are coded as dummy variables. These are variables that can assume one of only two possible values: 0 or 1 [in the gender example, “male” may be designated as 0 and “female” as 1, or vice versa]. A nominal variable with n levels is represented using n–1 dummy variables. For instance, industry sector, consisting of the agriculture, manufacturing, and service sectors, may be represented using a combination of two dummy variables [x₁, x₂], with [0, 0] for agriculture, [1, 0] for manufacturing, and [0, 1] for service. It does not matter which level of a nominal variable is coded as 0 and which level as 1, because the 0 and 1 values are treated as two distinct groups [such as treatment and control groups in an experimental design], rather than as numeric quantities, and the statistical parameters of each group are estimated separately.
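As an illustration [a minimal sketch in Python, assuming the pandas and statsmodels libraries; the data, the "tenure" covariate, and all numbers are invented for this example], the regression below includes one predictor of interest, one covariate, and a three-level nominal variable expanded into n–1 = 2 dummy variables.

```python
# Minimal sketch: a regression with one predictor of interest, one covariate,
# and a three-level nominal variable coded as two dummy variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age": rng.integers(20, 65, size=n),       # predictor of interest
    "tenure": rng.integers(0, 30, size=n),     # covariate [control variable]
    "sector": rng.choice(["agriculture", "manufacturing", "service"], size=n),
})
# Simulate an outcome that depends on the predictors plus random error [epsilon]
df["self_esteem"] = (
    20 + 0.3 * df["age"] + 0.1 * df["tenure"]
    + df["sector"].map({"agriculture": 0, "manufacturing": 2, "service": 4})
    + rng.normal(scale=5, size=n)
)

# C(sector) expands the nominal variable into two dummy variables,
# with "agriculture" as the omitted [0, 0] reference category.
model = smf.ols("self_esteem ~ age + tenure + C(sector)", data=df).fit()
print(model.params)  # estimated regression coefficients [beta values]
```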

The GLM is a very powerful statistical tool because it is not one single statistical method, but rather a family of methods that can be used to conduct sophisticated analysis with different types and quantities of predictor and outcome variables. If we have a dummy predictor variable, and we are comparing the effects of its two levels [0 and 1] on the outcome variable, we are doing an analysis of variance [ANOVA]. If we are doing ANOVA while controlling for the effects of one or more covariates, we have an analysis of covariance [ANCOVA]. We can also have multiple outcome variables [e.g., y₁, y₂, …, yₙ], which are represented using a “system of equations” consisting of a different equation for each outcome variable [each with its own unique set of regression coefficients]. If multiple outcome variables are modeled as being predicted by the same set of predictor variables, the resulting analysis is called multivariate regression. If we are doing ANOVA or ANCOVA analysis with multiple outcome variables, the resulting analysis is a multivariate ANOVA [MANOVA] or multivariate ANCOVA [MANCOVA] respectively. If we model the outcome of one regression equation as a predictor in another equation in an interrelated system of regression equations, then we have a very sophisticated type of analysis called structural equation modeling. The most important problem in GLM is model specification, i.e., how to specify a regression equation [or a system of equations] to best represent the phenomenon of interest. Model specification should be based on theoretical considerations about the phenomenon being studied, rather than on what fits the observed data best. The role of data is in validating the model, not in its specification.
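To show that these are members of the same family rather than separate techniques, the sketch below [an illustrative Python example using pandas and statsmodels, with invented data and variable names; not from the original text] expresses a one-way ANOVA and an ANCOVA as ordinary regression models that differ only in their right-hand sides.

```python
# Minimal sketch: ANOVA and ANCOVA expressed as ordinary regression models,
# illustrating that they belong to the same GLM family.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "group": rng.choice(["treatment", "control"], size=n),  # dummy-coded predictor
    "prior_score": rng.normal(50, 10, size=n),               # covariate
})
df["score"] = (
    40 + 10 * (df["group"] == "treatment") + 0.5 * df["prior_score"]
    + rng.normal(scale=8, size=n)
)

anova_model = smf.ols("score ~ C(group)", data=df).fit()                 # one-way ANOVA
ancova_model = smf.ols("score ~ C(group) + prior_score", data=df).fit()  # ANCOVA
print(anova_lm(anova_model))
print(anova_lm(ancova_model))
```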

Two-Group Comparison

One of the simplest inferential analyses is comparing the post-test outcomes of treatment and control group subjects in a randomized post-test only control group design, such as whether students enrolled in a special mathematics program perform better than those in a traditional math curriculum. In this case, the predictor variable is a dummy variable [1 = treatment group, 0 = control group], and the outcome variable, performance, is ratio-scaled [e.g., the score on a math test following the special program]. The analytic technique for this simple design is a one-way ANOVA [one-way because it involves only one predictor variable], and the statistical test used is called a Student’s t-test [or t-test, in short].

The t-test was introduced in 1908 by William Sealy Gosset, a chemist working for the Guinness Brewery in Dublin, Ireland, to monitor the quality of stout – a dark beer popular with 19th-century porters in London. Because his employer did not want to reveal the fact that it was using statistics for quality control, Gosset published the test in Biometrika under his pen name “Student”. Hence the name Student’s t-test, although Student’s identity was known to fellow statisticians.

The t-test examines whether the means of two groups are statistically different from each other [non-directional or two-tailed test], or whether one group has a statistically larger [or smaller] mean than the other [directional or one-tailed test]. In our example, since we wish to examine whether students in the special math curriculum perform better than those in the traditional curriculum, we have a one-tailed test. This hypothesis can be stated as:

H₀: μ₁ ≤ μ₂ [null hypothesis]

H₁: μ₁ > μ₂ [alternative hypothesis]

where μ₁ represents the mean population performance of students exposed to the special curriculum [treatment group] and μ₂ is the mean population performance of students in the traditional curriculum [control group]. Note that the null hypothesis is always the one with the “equal” sign, and the goal of all statistical significance tests is to reject the null hypothesis.
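A minimal sketch of this one-tailed test in Python [using NumPy and SciPy; the scores are simulated rather than real data, with group means chosen to match the hypothetical distributions discussed below] might look like this.

```python
# Minimal sketch: one-tailed two-sample t-test of H0: mu1 <= mu2 vs H1: mu1 > mu2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treatment = rng.normal(loc=65, scale=10, size=30)  # special math curriculum scores
control = rng.normal(loc=45, scale=10, size=30)    # traditional curriculum scores

# alternative="greater" tests whether the treatment mean exceeds the control mean
t_stat, p_value = stats.ttest_ind(treatment, control, alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Reject H0: the treatment group performs significantly better.")
else:
    print("Fail to reject H0.")
```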

How can we infer about the difference in population means using data from samples drawn from each population? From the hypothetical frequency distributions of the treatment and control group scores in Figure 15.2, the control group appears to have a bell-shaped [normal] distribution with a mean score of 45 [on a 0–100 scale], while the treatment group appears to have a mean score of 65. These means look different, but they are really sample means [x̄], which may differ from their corresponding population means [μ] due to sampling error. Sample means are only probabilistic estimates of population means within a certain confidence interval [the 95% CI is the sample mean plus or minus two standard errors, where the standard error is the standard deviation of the distribution of sample means across infinite samples of the population]. Hence, the statistical significance of the difference in population means depends not only on the sample mean scores, but also on the standard error, or the degree of spread, in the frequency distribution of the sample means. If the spread is large [i.e., the two bell-shaped curves have a lot of overlap], then the 95% CIs of the two means may also overlap, and we cannot conclude with high probability [p ≤ 0.05] that their corresponding population means are significantly different.
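Continuing the sketch above [again with simulated data, not from the original text], this overlap reasoning can be checked by computing an approximate 95% CI for each group’s sample mean.

```python
# Minimal sketch: approximate 95% CIs [sample mean plus or minus two standard
# errors] for the treatment and control group means, to inspect their overlap.
import numpy as np

rng = np.random.default_rng(3)
treatment = rng.normal(loc=65, scale=10, size=30)
control = rng.normal(loc=45, scale=10, size=30)

def ci95(scores):
    """Approximate 95% CI for the mean of one sample."""
    mean = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(len(scores))
    return mean - 2 * se, mean + 2 * se

print("treatment 95% CI:", ci95(treatment))
print("control   95% CI:", ci95(control))
# Little or no overlap between the intervals is consistent with rejecting
# the null hypothesis at p <= 0.05, as described in the text.
```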
