Will Gesture-Recognition Technology Point the Way?
When playing most video games, speed is of the essence. Manipulating a joystick, mouse, or other input device slows a player's reaction time. Players would prefer to control game activities by movements or gestures.

Physically disabled users, who frequently have trouble providing the strength or precision necessary to use traditional computer input devices, would also benefit from being able to control devices and enter information via eye blinks, head motions, or other gestures.

For these and other reasons, considerable research has gone into computer-related gesture-recognition technology. Now, this research is bearing fruit as the technology increasingly appears in commercial products such as Canesta's Virtual Keyboard for PDAs; iMatte's iSkia projector-based presentation technology; and Cybernet Systems' GestureStorm for weather reporting, NaviGaze head- and eye-movement-based cursor and mouse interface technology, and UseYourHead game controller.

Gesture-recognition systems identify human gestures and use them to convey information such as input data or to control devices and applications such as computers, games, PDAs, browsers, cell phones, and MP3 audio players. For example, eye movements could initiate mouse clicks, or hand gestures could manipulate computer graphics.

Researchers continue to improve gesture-recognition technology, for example by making algorithms faster, more robust, and more accurate. Proponents say gesture recognition has many potential new uses, such as helping surgeons perform operations and improving security, surveillance, and military applications.

However, the technology still faces major challenges. For example, gesture-recognition devices such as motion-tracking gloves are too intrusive for mainstream use. In addition, the video processing that records user movements in some gesture-recognition products is resource intensive.

"Commercially, gesture recognition must prove it can yield results that existing peripherals can't already achieve, or users won't see the point in spending the time and money on the technology," said Jackie Fenn, a Fellow in emerging trends and technologies for Gartner, a market research firm.

In the early 1960s, users could move a light-emitting pen to control the Sketchpad computer-aided design system. Several subsequent commercial systems also worked with light-emitting pens. Research into camera-based computer vision for gesture recognition began in earnest in the early 1990s at places such as the Massachusetts Institute of Technology Media Lab, Japan's Advanced Telecommunications Research Institute International, and the University of Zürich. Since then, a few companies have sold gesture-recognition software. Until now, though, the technology hasn't had a significant commercial impact.

Gathering gesture data

Users create gestures by a static hand or body pose or by a physical motion, including eye blinks or head movements, in two or three dimensions. Software translates the gestures into letters or words, or into simple or complex commands. The computer then acts based on the input or command.

Several image- or device-based hardware techniques gather information about gestures. Image-based techniques detect a gesture by capturing pictures of a user's motions during the course of a gesture, such as via a camera, as Figure 1 shows. The system sends these images to computer-vision software, which tracks them and identifies the gesture.

Device-based techniques use a glove, stylus, or other position tracker whose movements send signals that the system uses to identify the gesture. For example, instrumented gloves house sensors that relay information about the wearer's hand and finger positions. Styli interface with display technologies to record and interpret gestures like the writing of text. Finger-based sensors detect finger...
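To make the image-based approach concrete, here is a minimal sketch of its first stage: deciding whether anything moved between two camera frames. This is a toy illustration only; the function name `detect_motion`, the thresholds, and the use of frame differencing are assumptions for the sketch, not a description of any of the commercial products named above, whose computer-vision software is far more sophisticated.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=30, min_changed=0.02):
    """Flag a gesture candidate when enough pixels change between frames.

    prev_frame, curr_frame: 2-D uint8 grayscale arrays of equal shape.
    threshold: per-pixel intensity difference counted as "changed".
    min_changed: fraction of changed pixels required to report motion.
    (Hypothetical helper for illustration; real systems would then track
    the moving region over many frames to identify a specific gesture.)
    """
    # Widen to int16 so the subtraction cannot wrap around at 0 or 255.
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    changed = np.count_nonzero(diff > threshold) / diff.size
    return changed >= min_changed

# Toy frames: a bright 10x10 "hand" blob that shifts five pixels right.
frame1 = np.zeros((64, 64), dtype=np.uint8)
frame1[20:30, 20:30] = 255
frame2 = np.zeros((64, 64), dtype=np.uint8)
frame2[20:30, 25:35] = 255

print(detect_motion(frame1, frame2))  # blob moved: motion detected
print(detect_motion(frame1, frame1))  # identical frames: no motion
```

A real pipeline would feed the changed region to a tracker and a classifier; this sketch only shows why the video processing is resource intensive — every frame pair requires a pass over all pixels.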