OUTLINE OF BASIC CRITIQUE STEPS (understanding quantitative research)
Brink and Wood (1994) and Burns and Grove (1997) describe similar steps for the quantitative nursing research process and its methods. These authors cover not only the basic steps for conducting nursing studies, but also translating studies into articles for publication. According to Burns and Grove (1997) and Hamric and Spross (1992), the quantitative research method is used to describe phenomena or gain more information, test relationships, examine cause-and-effect relationships, and answer problems with numeric data. The following is a basic outline of what is included in a nursing research study, and what you should critique a study for:

1. Purpose
a. Usually found in the introduction or problem statement.
b. Might be stated as the main research question or hypothesis for the study.
c. The main focus may not be clearly labeled, requiring the reader to synthesize the purpose from this section.

2. Sample
a. Look for representativeness of the sample.
   1. Representativeness - Subjects are randomly selected from the target population.
   2. Target population - The population from which the sample is chosen and to which the study findings are generalized. Example: all women ages 65 to 90 with a diagnosis of acute MI.
   3. Sample size - The sample size should be as large as possible; as a general rule, sampling error decreases as sample size increases.
b. Random vs. non-random sample
   1. Convenience or strictly voluntary sampling (usually non-random) may introduce bias in representativeness.
   2. Bias in sample selection means those chosen to participate may differ from those not chosen. A randomized sample reduces selection bias. NOTE: Many nursing primary references are based on non-random convenience samples! Bias may be reduced through certain data analysis techniques, and should be addressed in the design limitations.

3. Methods
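The general rule stated in the Sample section above (sampling error decreases as sample size increases) can be illustrated with a quick simulation. This is a minimal sketch using only Python's standard library; the population and all numbers are hypothetical:

```python
import random
import statistics

random.seed(42)

# Hypothetical target population: ages of 10,000 women 65-90 (illustration only).
population = [random.uniform(65, 90) for _ in range(10_000)]

def spread_of_sample_means(n, trials=300):
    """Draw `trials` random samples of size n; return how much their means vary."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

small = spread_of_sample_means(10)    # small samples: means vary widely
large = spread_of_sample_means(1000)  # large samples: means cluster near the truth
print(small > large)  # -> True: sampling error shrinks as n grows
```

Random sampling with `random.sample` is what gives each member of the target population an equal chance of selection; a convenience sample offers no such guarantee.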
a. Data collection - The procedure should be clearly defined (based upon the problem and sample).
   1. Description of data collection time frames, where the data were collected, the participant permission procedure, and confidentiality/anonymity assurances.
   2. How questionnaires, scales, and/or interviews were utilized in the study. NOTE: The method of data collection can also introduce bias. Self-administered questionnaires tend to carry the least bias, because the investigator has less influence on the participant's answers than in an interview.
b. Study Designs
   1. The design permits examination of the study's research variables. Variables are qualities or characteristics of persons, things, or situations under study.
   2. Descriptive - Gains more information about the characteristics of a group. Example: pilot or exploratory studies.
   3. Correlational - Examines relationships between variables in a group.
   4. Quasi-experimental/Experimental - Examines causality and explains relationships; uses control and experimental groups. NOTE: Experimental designs are the most scientific!
c. Instruments
   1. Description of the data collection instruments, scales, and questionnaires. Example: number of questions, scoring range, etc.
   2. Inclusion of the reliability and validity of the instruments.
      a. Reliability - Measurement of how consistently similar results are obtained every time the scale/instrument is used.
      b. Validity - Measurement of how accurately the instrument reflects the variables in the study (the characteristics under study).
   3. Reliability and validity are important because the study's results should never be influenced by instrument/scale error.

4. Data Analysis:
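Reliability, defined above as consistency of an instrument, is commonly quantified during analysis with an internal-consistency coefficient such as Cronbach's alpha. The outline does not name a specific coefficient, so this is a sketch of that standard formula with hypothetical item scores:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Hypothetical 3-item scale answered by 5 respondents (illustration only).
item1 = [4, 3, 5, 2, 4]
item2 = [4, 2, 5, 3, 4]
item3 = [5, 3, 4, 2, 4]
alpha = cronbach_alpha([item1, item2, item3])
print(round(alpha, 2))  # -> 0.9
```

Values near 1.0 indicate the items yield consistent results; a critique should check that reported coefficients meet the conventional threshold the authors claim.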
a. Summarizes and describes the data in a logical, understandable format, from research variables capable of being quantified (converted to numbers).
   1. Descriptive statistics - Clearly and understandably describe the sample, mostly using frequency distribution, mean, median, mode, range, percentages, and others.
   2. Inferential statistics - Test the research questions or hypotheses using t-tests, ANOVA, multiple regression, etc. When the research plan hypothesizes relationships between variables, it is...
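The descriptive statistics listed above can be computed directly with Python's standard library. The scores here are hypothetical, invented for illustration:

```python
import statistics
from collections import Counter

# Hypothetical anxiety-scale scores from a sample of 11 patients (illustration only).
scores = [12, 15, 15, 18, 20, 21, 21, 21, 24, 27, 30]

print(Counter(scores))                    # frequency distribution
print(round(statistics.mean(scores), 2))  # mean   -> 20.36
print(statistics.median(scores))          # median -> 21
print(statistics.mode(scores))            # mode   -> 21
print(max(scores) - min(scores))          # range  -> 18
```

These summaries describe the sample; inferential tests such as t-tests or ANOVA would then be used to test the hypotheses against these quantified variables.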