As the entertainment industry continues to grow, many interesting and exceptional developments are being made in pursuit of fluid, lifelike, realistic animation. Thanks to increases in computational power, the development of these techniques has accelerated within the past 10 to 15 years. Perhaps the newest realization has been a process called motion capture, whose roots reach back decades to rotoscoping, a technique used by Walt Disney in the film Snow White (Sturman). The term "motion capture" goes by many names, such as mocap, performance animation, performance capture, virtual theater, digital puppetry, and real-time animation; the favorite among traditional key-frame animators is "The Devil's Rotoscope" (Furniss). Motion capture, in its most basic form, is the technique of digitally recording the movements of real organisms, usually humans, and then applying those movements to some other medium (Polhemus). It allows animators to capture the movements of people or animals and use them to drive digital models with more realistic motion. Although this description shows an obvious preference toward animation in the entertainment realm, motion capture has many other areas of application, such as biomechanics, sport performance analysis, tele-robotics, and ergonomics (Animation World Magazine). The remainder of this paper takes a more in-depth look at the history, processes, types, new developments, and applications of motion capture to give a clearer picture of exactly what motion capture is and where it is going.

The History
The use of motion capture for computer character animation is relatively new, having begun in the late 1970s and only now becoming widespread. The need to capture motion has been acknowledged for decades in a variety of fields, but perhaps most of the key advances have come in the entertainment industry, beginning with an early method called rotoscoping, first used by Walt Disney. In rotoscoping, animators trace their character over video footage of live-action performances to give the character more realistic movement (Habibi). The first real motion capture took place in the early 1980s, with potentiometers attached to a moving body. The potentiometers moved in unison with the body and could measure each of those movements and store them for later use (Habibi). From 1980 to 1983, biomechanics labs were beginning to use computers to analyze human motion. Tom Calvert, a professor of kinesiology and computer science at Simon Fraser University, tracked knee flexion with potentiometers mounted on a special exoskeleton attached to the knee. The resulting data was used to animate figures for choreographic studies and clinical assessments (Sturman). Around 1982 to 1983, Ginsberg and Maxwell at MIT used optical tracking to present "the Graphical Marionette". The system used an early optical motion capture system called Op-Eye, in which LEDs wired to a suit were tracked by two cameras. A computer then used the position information from the cameras to compute a 3-D world coordinate for each LED, which in turn drove a stick figure (Sturman). The slow rate at which characters could be rendered, together with the high expense, was the largest roadblock to widespread use of this technology at the time.
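The idea behind recovering a 3-D coordinate for each LED from two camera views can be sketched with standard linear triangulation. This is a minimal illustration of the general principle, not Op-Eye's actual algorithm or calibration; the camera matrices and the toy LED position below are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3-D point seen by two calibrated cameras.

    P1, P2 : 3x4 camera projection matrices
    uv1, uv2 : (u, v) image coordinates of the same LED in each camera
    """
    # Each view contributes two linear constraints on the homogeneous point X:
    # u * (row 3 of P) - (row 1 of P) = 0 and v * (row 3) - (row 2) = 0.
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector for the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

led = np.array([0.5, 0.2, 4.0])            # ground-truth LED position
uv1 = led[:2] / led[2]                     # projection into camera 1
uv2 = (led - [1.0, 0.0, 0.0])[:2] / led[2] # projection into camera 2

print(triangulate(P1, P2, uv1, uv2))       # recovers approximately [0.5, 0.2, 4.0]
```

With real hardware the projection matrices come from a calibration step, and noisy marker detections make the least-squares formulation essential rather than optional.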
In 1988, DeGraf / Wahrman developed "Mike the Talking Head", which was driven by a specially built controller that allowed a single puppeteer to control the character's face, mouth, eyes, expression, and head position. A Silicon Graphics hardware system was used to provide real-time interpolation between facial expressions and head geometry as controlled by the performer (Sturman).
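Real-time interpolation between facial expressions can be illustrated, in greatly simplified form, as linear blending between stored expression geometries. The vertex data and weight values below are toy assumptions, not the actual "Mike the Talking Head" rig.

```python
import numpy as np

# A face mesh is a set of 3-D vertices; each expression is a stored copy of
# those vertices. A puppeteer's control (a weight in [0, 1]) blends between
# two expressions every frame.
neutral = np.array([[0.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0],
                    [0.5, 1.0, 0.0]])  # three vertices of a toy face mesh
smile   = np.array([[0.0, 0.2, 0.0],
                    [1.0, 0.2, 0.0],
                    [0.5, 1.0, 0.1]])

def blend(a, b, w):
    """Linear interpolation: w = 0 gives expression a, w = 1 gives b."""
    return (1.0 - w) * a + w * b

half_smile = blend(neutral, smile, 0.5)  # geometry halfway between the poses
```

Modern blend-shape systems extend the same idea to many target expressions combined with per-target weights, but the per-vertex interpolation is unchanged.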