VARK Analysis Paper
Michelle Pierson, RN
Grand Canyon University
NRS 429
Leslie Greenberg, MSN
July 25, 2012

Abstract
The visual, aural, read/write, kinesthetic (VARK) questionnaire was designed by Neil Fleming as a way to help users understand their learning styles (Fleming, 2001-2011). It contains sixteen questions that, when answered truthfully, generate a result corresponding to the user's closest learning style. VARK is a tool that students can use to tailor their study habits to their learning styles, thus making study time more effective and, hopefully, improving their grades. This paper will examine these learning strategies and how Michelle's learning compares to her questionnaire results.

VARK Learning Styles
With VARK, the four styles are visual, auditory/aural, read/write, and kinesthetic learning. Each differs in its preferred method for the intake or output of information. Students can also be multimodal, which means they have more than one preferred method. By using VARK, they can identify their style and take responsibility for their learning (Rogers, 2009). The VARK website reports that analyzing statistics from the questionnaire is difficult due to the structure of the test. The test is completed mostly by students and teachers, so it does not represent the population as a whole (Fleming, 2001-2011).

Learning Styles Comparison

According to VARK (Fleming, 2001-2011), visual learners prefer pictures; they tend to look at the whole picture instead of breaking it down and analyzing each portion. They will remember pictures or diagrams on study pages. Aural learners prefer to hear things; they will re-listen to taped lectures and should attend discussions. Their class notes may be poor because they tend to listen instead of taking notes, but they will recall stories or examples from the lectures. Read/write learners like words or lists; they take a lot of notes and re-read them over and over. They use dictionaries,...