Data Analysis Methods Used in Quantitative Research.

Quantitative methods are research techniques used to gather quantitative data: information dealing with numbers and anything that is measurable. Statistics, tables and graphs are often used to present the results of these methods. Quantitative research methods were originally developed in the natural sciences to study natural phenomena; however, many quantitative methods are now well accepted in the social sciences and education.

Differences between parametric and non-parametric methods

Parametric methods:
• Assume the data come from a particular family of probability distributions (they are not distribution-free).
• Make inferences about the parameters of that distribution.
• Make more assumptions; if those extra assumptions are correct, parametric methods can produce more accurate and precise estimates, and hence have more statistical power. If the assumptions are incorrect, they can be very misleading.
• Are often not considered robust.
• Their formulae are often simpler to write down and faster to compute.

Non-parametric methods:
• Are distribution-free: they do not rely on the assumption that the data are drawn from a given probability distribution.
• "Non-parametric statistics" can also refer to a statistic (a function on a sample) whose interpretation does not depend on the population fitting any parameterized distribution.
• Make fewer assumptions, so their applicability is much wider than that of the corresponding parametric methods.
• Have less statistical power than parametric tests when the parametric assumptions hold.
• Are more robust.

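As a rough illustration of this trade-off, the following sketch (a minimal example assuming Python with NumPy and SciPy; the two samples are invented purely for illustration) runs a parametric two-sample t-test and a common non-parametric counterpart, the Mann-Whitney U test, on the same data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two illustrative samples; group_b is shifted upwards relative to group_a.
group_a = rng.normal(loc=50.0, scale=10.0, size=25)
group_b = rng.normal(loc=57.0, scale=10.0, size=25)

# Parametric: independent-samples t-test (assumes normality and equal variances).
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric: Mann-Whitney U test (distribution-free alternative).
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")

When the normality assumption holds, the parametric test generally has the greater power; when it is badly violated, the distribution-free result is the safer of the two.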
PARAMETRIC METHODS

The parametric methods include the following:
t-test
z-test
Analysis of variance (ANOVA)
Analysis of covariance (ANCOVA)
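As a brief illustration of one method from this list, the sketch below runs a one-way ANOVA with SciPy's f_oneway (the three groups of scores are invented purely for illustration):

from scipy import stats

# Scores from three illustrative groups (made-up data).
group1 = [82, 79, 88, 91, 85]
group2 = [75, 80, 78, 72, 77]
group3 = [90, 93, 89, 94, 88]

# One-way ANOVA: tests whether all group means are equal.
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")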

The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups, and it is especially appropriate as the analysis for the posttest-only two-group randomized experimental design. In hypothesis testing, the t-test is used to test for differences between means when small samples are involved (n ≤ 30, say); for larger samples, the z-test is used.

Description

The t-test (or Student's t-test) gives an indication of the separateness of two sets of measurements, and is thus used to check whether two sets of measures are essentially different (and usually whether an experimental effect has been demonstrated). The typical way of doing this is with the null hypothesis that the means of the two sets of measures are equal. The t-test assumes:

• A normal distribution (parametric data)
• Underlying variances are equal (if not, use Welch's test)

It is used when there is random assignment and only two sets of measurements to compare. There are two main types of t-test (sketched in code below):
• Independent-measures t-test: used when samples are not matched.
• Matched-pair t-test: used when samples appear in pairs (e.g. before-and-after).

A single-sample t-test compares a sample against a known figure, for example where measures of a manufactured item are compared against the required standard.
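A minimal sketch of these variants, assuming SciPy is available (all of the data below are invented for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Independent-measures t-test: two unmatched samples.
control = rng.normal(100, 15, size=30)
treated = rng.normal(108, 15, size=30)
print(stats.ttest_ind(control, treated))          # equal_var=False would give Welch's test

# Matched-pair t-test: before-and-after measurements on the same subjects.
before = rng.normal(70, 8, size=20)
after = before + rng.normal(2, 3, size=20)
print(stats.ttest_rel(before, after))

# Single-sample t-test: a sample compared against a required standard (here 50.0).
measured = rng.normal(50.4, 0.6, size=12)
print(stats.ttest_1samp(measured, popmean=50.0))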

The t-test can test:
i) whether a sample has been drawn from a Normal population with known mean and variance (single sample);
ii) whether two unknown population means are identical, given two independent random samples (two unpaired samples);
iii) whether two paired random samples come from the same Normal population (two paired samples, i.e. paired differences).

Any hypothesis test can be one-tailed or two-tailed depending on the alternative hypothesis, H1. Consider the null hypothesis H0: μ = 3. A one-tailed test is one where H1 would be of the form μ > 3. A two-tailed test is one where H1 would be of the form μ ≠ 3.
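To make the distinction concrete, the sketch below tests H0: μ = 3 against both forms of H1, assuming SciPy 1.6 or newer (which added the alternative argument); the sample is invented for illustration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(3.4, 1.0, size=15)   # illustrative sample

# Two-tailed test: H1 is mu != 3.
print(stats.ttest_1samp(sample, popmean=3.0, alternative="two-sided"))

# One-tailed test: H1 is mu > 3.
print(stats.ttest_1samp(sample, popmean=3.0, alternative="greater"))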


This section covers the test of a mean and the test of a proportion. It assumes that the sample is large (n > 30).
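A minimal sketch of such a large-sample z-test for a mean, computed directly from the usual formula z = (sample mean - mu0) / (s / sqrt(n)), assuming NumPy and SciPy (the data are invented):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.normal(103, 12, size=100)   # n > 30, so the z-test applies
mu0 = 100.0                              # hypothesized population mean under H0

n = sample.size
z = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(n))
p_two_tailed = 2 * stats.norm.sf(abs(z))

print(f"z = {z:.2f}, two-tailed p = {p_two_tailed:.4f}")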

The following steps must be followed:
1. Specify the significance level of the test
2. Choose a test...