Measurement issues

Chance

P value

Confidence intervals

Measurement Bias and Error in Study Design

Bias

Selection bias

Measurement bias

Confounding

Estimation

Process of using calculated sample values to determine the probable value of a population parameter

• Point estimate

• Confidence interval estimation

Range of values that has a known probability of capturing the parameter
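As a concrete sketch of the two kinds of estimation above, the point estimate and a 95% confidence interval for a mean can be computed from a sample. The data below are invented for illustration, and the interval uses the normal approximation (z = 1.96); a t critical value would be more exact for a sample this small:

```python
import math
import statistics

# Hypothetical sample of systolic blood pressure readings (mmHg);
# the values are illustrative, not taken from the notes.
sample = [118, 122, 130, 125, 119, 127, 133, 121, 124, 128]

n = len(sample)
mean = statistics.mean(sample)                 # point estimate
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# 95% confidence interval: a range with a known probability (here 95%)
# of capturing the true population parameter
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(f"point estimate: {mean:.1f}")
print(f"95% CI: ({lower:.1f}, {upper:.1f})")
```

A wider interval (e.g. 99%) would capture the parameter with higher probability at the cost of a less precise range.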

Chance

• From estimates in the sample population, make inferences about the risk in the total population

• Random error occurs when the value of the sample measurement diverges, due to chance, from the true population value

P values

• Is the result because of chance?

• Measured by P value (statistical tests)

• A P value less than 0.05 (P < 0.05) means the probability of obtaining the observed value by chance is less than 1 in 20
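A minimal sketch of how a statistical test produces a P value, using a z-test for a single proportion; the counts and the null value are made-up assumptions, and the normal approximation stands in for whichever test a real study would use:

```python
import math

def z_test_proportion(successes, n, p0):
    """Two-sided z-test for a single proportion against a null value p0.
    Returns the z statistic and the p-value via the normal approximation."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    z = (p_hat - p0) / se
    # two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 70 of 100 patients improve; null hypothesis is p0 = 0.5
z, p = z_test_proportion(70, 100, 0.5)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("chance alone would produce a result this extreme less than 1 time in 20")
```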

Selection bias

A statistical error that biases the sampling portion of an experiment: it causes one sampling group to be selected more often than other groups included in the experiment, which may produce inaccurate conclusions if the bias is not identified. For example, an experiment might select only people of a certain race with similar characteristics and exclude any group that deviates from those characteristics. "The students at the University had to start their experiment over because they discovered a selection bias in the species gathered."

Source: http://www.businessdictionary.com/definition/selection-bias.html
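The effect can be illustrated with a small simulation: drawing a sample from only one subgroup of a population skews the resulting estimate away from the true population value. The group names and numbers below are invented for illustration:

```python
import random

random.seed(42)

# Hypothetical population: two groups with different mean values.
population = ([("urban", random.gauss(140, 10)) for _ in range(5000)] +
              [("rural", random.gauss(120, 10)) for _ in range(5000)])

true_mean = sum(v for _, v in population) / len(population)

# Biased sampling: only urban individuals are recruited,
# so the estimate reflects that subgroup, not the population.
biased_sample = [v for g, v in population if g == "urban"][:500]
biased_mean = sum(biased_sample) / len(biased_sample)

print(f"true population mean: {true_mean:.1f}")
print(f"biased sample mean:   {biased_mean:.1f}")
```

Here the biased estimate overshoots the population mean by roughly the urban–rural difference, even though the sample itself is large.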

Selection bias is a statistical bias in which there is an error in choosing the individuals or groups to take part in a scientific study.[1] It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis resulting from the method of collecting samples. If the selection bias is not taken into account, certain conclusions drawn may be wrong.

Information bias (measurement bias)

In epidemiology, information bias refers to bias arising from measurement error.[1] Information bias is also referred to as observational bias and misclassification. A Dictionary of Epidemiology, sponsored by the International Epidemiological Association, defines it as follows: "1. A flaw in measuring exposure, covariate, or outcome variables that results in different quality (accuracy) of information between comparison groups." The occurrence of information biases may not be independent of the occurrence of selection biases.

Confounding

In statistics, a confounding variable (also confounding factor, hidden variable, lurking variable, a confound, or confounder) is an extraneous variable in a statistical model that correlates (positively or negatively) with both the dependent variable and the independent variable. A perceived relationship between an independent variable and a dependent variable that has been misestimated due to the failure to account for a confounding factor is termed a spurious relationship, and the presence of misestimation for this reason is termed omitted-variable bias. In risk assessments evaluating the magnitude and nature of risk to human health, it is important to control for confounding to isolate the effect of a particular hazard such as a food additive, pesticide, or new drug. For prospective studies, it is difficult to recruit and screen volunteers with the same background (age, diet, education, geography, etc.), and in historical studies there can be similar variability.
Because of this inability to control for the variability of volunteers in human studies, confounding is a particular challenge. For these reasons, experiments offer a way to avoid most forms of confounding. As an example, suppose that there is a statistical relationship between ice cream consumption and the number of drowning deaths for a given period. These two variables have a positive correlation with each other. An evaluator might attempt to explain this correlation by inferring a causal relationship between the two variables...
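The ice cream and drowning example can be sketched as a simulation in which a confounder (temperature) drives both variables. All numbers and coefficients below are invented for illustration; removing the confounder's contribution makes the apparent association largely vanish:

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Simulated confounder: hot weather drives both ice cream sales and
# swimming (hence drownings); neither variable causes the other.
temp = [random.uniform(0, 35) for _ in range(365)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temp]
drownings = [0.1 * t + random.gauss(0, 0.5) for t in temp]

# Strong raw correlation despite no causal link.
r_raw = pearson(ice_cream, drownings)

# Subtract the (known, simulated) temperature effect from each variable:
# what remains is only independent noise, so the association disappears.
r_adj = pearson([i - 2.0 * t for i, t in zip(ice_cream, temp)],
                [d - 0.1 * t for d, t in zip(drownings, temp)])

print(f"raw correlation:     {r_raw:.2f}")
print(f"temperature removed: {r_adj:.2f}")
```

In a real study the temperature effect would not be known exactly and would have to be estimated, e.g. by regression adjustment or stratification; the simulation only shows why adjusting for the confounder matters.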