Real Time and Non-intrusive Driver Fatigue Monitoring
Zhiwei Zhu
Department of Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute
Troy, New York, USA
zhuz@rpi.edu

Qiang Ji
Department of Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute
Troy, New York, USA
qji@ecse.rpi.edu

Abstract— This paper describes a real-time, non-intrusive prototype driver fatigue monitor. It uses remotely located CCD cameras equipped with active IR illuminators to acquire video images of the driver. Various visual cues that typically characterize the alertness of the driver are extracted in real time and systematically combined to infer the driver's fatigue level. The visual cues employed characterize eyelid movement, gaze movement, head movement, and facial expression. A probabilistic model is developed to model human fatigue and to predict fatigue based on the observed visual cues and the available contextual information. The simultaneous use of multiple visual cues and their systematic combination yields a much more robust and accurate fatigue characterization than using a single visual cue. The feasibility of our system is demonstrated using synthetic data. Further validation under real-life fatigue conditions with human subjects shows that the system is reasonably robust, reliable, and accurate in fatigue characterization.

I. INTRODUCTION
The ever-increasing number of traffic accidents in the U.S. caused by drivers' diminished vigilance has become a problem of serious concern to society. Drivers with a diminished vigilance level suffer from a marked decline in their abilities of perception, recognition, and vehicle control, and therefore pose a serious danger to their own lives and the lives of others. Statistics show that drivers with a diminished vigilance level are a leading cause of fatal and injury-causing traffic accidents. In the trucking industry, 57% of fatal truck accidents are due to driver fatigue; it is the number one cause of heavy-truck crashes. 70% of American drivers report driving while fatigued. As traffic conditions continue to worsen, this problem will only grow. For this reason, developing systems that actively monitor a driver's level of vigilance and alert the driver to any unsafe driving conditions is essential to accident prevention.
Many efforts have been reported in the literature on developing active, real-time, image-based fatigue monitoring systems [1], [2], [3], [4], [5], [6], [7]. Most of them, however, focus on a single visual cue, such as facial expression, eyelid movement, line of gaze, or head orientation, to characterize the driver's state of alertness. A system relying on a single visual cue may encounter difficulty when the required visual features cannot be acquired accurately or reliably.
All of these visual cues, however imperfect each is individually, can provide an accurate characterization of a driver's level of vigilance when combined systematically. It is our belief that the simultaneous extraction and use of multiple visual cues can reduce the uncertainty and resolve the ambiguity present in the information from a single source. The systematic integration of these visual parameters, however, requires a fatigue model that captures the fatigue generation process and can systematically predict fatigue based on the available visual information as well as the relevant contextual information. The system we propose can simultaneously, non-intrusively, and in real time monitor several visual behaviors that typically characterize a person's level of alertness while driving. These visual cues include eyelid movement, gaze movement, head movement, and facial expression. The fatigue parameters computed from these visual cues are subsequently combined probabilistically to form a composite fatigue index that can robustly, accurately, and consistently characterize one's vigilance...
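
The paragraph above describes combining the fatigue parameters derived from individual cues probabilistically into a composite index. The following is a minimal Python sketch of one way such a fusion could work, assuming a naive-Bayes-style combination; the paper's actual probabilistic model is not specified in this excerpt and may differ, and all cue names, likelihoods, and the prior used here are hypothetical, for illustration only.

# Minimal sketch of probabilistically fusing several visual cues into a
# composite fatigue index. Assumes a naive-Bayes-style combination; the
# paper's actual model may differ. All numeric values are hypothetical.
from dataclasses import dataclass

@dataclass
class CueObservation:
    name: str                  # e.g. "eyelid", "gaze", "head", "expression"
    p_given_fatigued: float    # P(observation | driver fatigued)
    p_given_alert: float       # P(observation | driver alert)

def fatigue_index(cues, prior_fatigued=0.1):
    """Return P(fatigued | all cues), treating cues as conditionally
    independent given the driver's state, so their likelihoods multiply."""
    p_fatigued = prior_fatigued
    p_alert = 1.0 - prior_fatigued
    for cue in cues:
        p_fatigued *= cue.p_given_fatigued
        p_alert *= cue.p_given_alert
    # Normalize to obtain a posterior probability in [0, 1].
    return p_fatigued / (p_fatigued + p_alert)

if __name__ == "__main__":
    # Hypothetical readings: slow eyelid closure and a drooping head both
    # point toward fatigue; gaze is only weakly informative here.
    observations = [
        CueObservation("eyelid", p_given_fatigued=0.80, p_given_alert=0.15),
        CueObservation("gaze", p_given_fatigued=0.55, p_given_alert=0.45),
        CueObservation("head", p_given_fatigued=0.70, p_given_alert=0.20),
        CueObservation("expression", p_given_fatigued=0.60, p_given_alert=0.30),
    ]
    print(f"Composite fatigue index: {fatigue_index(observations):.2f}")

In this sketch, no single cue decides the outcome: an unreliable or ambiguous cue (likelihoods near 0.5 under both states) simply contributes little to the posterior, which mirrors the motivation given above for combining multiple imperfect cues rather than relying on one.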