Tags: Artificial Neural Network, Convolutional Neural Network, Graphics Processing Unit, K-Nearest Neighbours, Linear Discriminant Analysis, Recurrent Neural Network, Steering Wheel Angle, Stochastic Gradient Descent, Support Vector Machine, You Only Look Once
Abstract:
Drowsiness and fatigue have become prominent contributors to road accidents. These risks can be mitigated by getting sufficient sleep, consuming caffeine, or taking breaks when signs of drowsiness appear. Drowsiness is currently detected with methods such as EEG, ECG, steering wheel angle, and steering-wheel pressure sensors. Despite their high accuracy, these methods rely on contact-based measurements and are poorly suited to monitoring driver fatigue and drowsiness in real-time driving scenarios. This research introduces an alternative approach that uses the rate of eye closure and the occurrence of yawning as indicators of driver drowsiness. The paper outlines a methodology for locating the eyes and mouth in videos or images, extracting relevant features from the visual input, and classifying the driver as drowsy or alert. The proposed system focuses on the facial region captured in the video or image, specifically the eyes and mouth: once the face is detected, the eyes and mouth can be localized, enabling eye-state and mouth-state assessment as well as yawn detection. The parameters for eye and mouth detection are derived from the facial image itself. The video is decomposed into individual frames, and the eyes and mouth are localized within each frame. Once the eyes are located, features from the eye region and the overall face region are extracted to determine whether the eyes are open or closed, and a yawn score is computed. If the eyes remain closed for a set duration, for example four consecutive frames, the driver is classified as drowsy.
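The frame-by-frame decision described above can be sketched in code. This is a minimal illustration rather than the authors' implementation: the eye aspect ratio (EAR) and mouth aspect ratio (MAR) formulas, the threshold values, and the landmark layout are assumptions, and in practice the landmark coordinates would come from a face and landmark detector running on each video frame.

```python
from math import dist

EAR_THRESHOLD = 0.25   # assumed: eye treated as closed below this ratio
MAR_THRESHOLD = 0.6    # assumed: mouth treated as yawning above this ratio
CLOSED_FRAMES = 4      # consecutive closed-eye frames that confirm drowsiness

def eye_aspect_ratio(eye):
    """EAR over six eye landmarks (p1..p6):
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def mouth_aspect_ratio(mouth):
    """MAR: vertical mouth opening divided by horizontal mouth width."""
    left, right, top, bottom = mouth
    return dist(top, bottom) / dist(left, right)

def detect_drowsiness(frames):
    """Scan (eye, mouth) landmark tuples per frame; flag the driver as
    drowsy once the eyes stay closed for CLOSED_FRAMES consecutive frames."""
    closed = 0
    for eye, mouth in frames:
        if eye_aspect_ratio(eye) < EAR_THRESHOLD:
            closed += 1
            if closed >= CLOSED_FRAMES:
                return True
        else:
            closed = 0  # reset the counter on any open-eye frame
    return False
```

The same `mouth_aspect_ratio` value can serve as the yawn score mentioned in the abstract: values above `MAR_THRESHOLD` over several frames would indicate a yawn and could be combined with the eye-closure counter.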