Computer Science Department: "DUI Detection from Gait using a Multichannel 1DCNN-Attention-BiLSTM Framework"

Wednesday, January 29, 2025
2:00 pm to 3:00 pm

Samuel Uche

PhD Student

WPI - Computer Science

Time: 2:00 PM - 4:00 PM

Location: Fuller Labs 141

Advisor: Prof. Emmanuel Agu

Reader: Prof. Kyumin Lee

Abstract:


Alcohol intoxication raises Blood Alcohol Content (BAC) and impairs psychomotor, cognitive, and motor functions. Driving under the influence (DUI) of alcohol caused over 30% of motor vehicle traffic fatalities in 2017, with $58 billion accrued annually in medical, legal, and death-related costs. Current methods of detecting alcohol DUI - breathalyzers, blood tests, and transdermal alcohol monitors - are invasive. Moreover, they require the purchase of additional hardware and active user involvement, making them impractical for continuous monitoring. Unobtrusive methods of detecting driver intoxication are desirable to reduce DUI incidents. Gait analysis provides a passive, non-invasive approach to continuous DUI detection, enabling unobtrusive monitoring of gait patterns to identify impairment and enhance road safety. Prior work primarily explored traditional machine learning classifiers, such as Random Forest and J48, along with some deep learning approaches, but had limitations, including reliance on handcrafted features that are prone to error. The deep learning architectures explored achieved suboptimal performance because they did not effectively address key challenges such as class imbalance and gait variability under different conditions.

This work proposes a deep learning approach for automated, passive detection of alcohol intoxication from smartphone accelerometer data. The class imbalance that arises from limited intoxicated gait samples was addressed using a subject-level stratified split and random oversampling of intoxicated samples in the training data to ensure balanced class representation. Raw time-series sensor data were preprocessed with low-pass filtering to reduce noise, then segmented into fixed-size windows. Our novel hybrid multichannel 1D-CNN-Attention-BiLSTM (MC-Hybrid) framework combines a 1D convolutional neural network (1D-CNN) for feature extraction, an attention mechanism for emphasizing critical temporal patterns, and a bidirectional LSTM (BiLSTM) for sequential modeling. This architecture addresses key challenges such as capturing temporal dependencies, highlighting important features, and improving classification accuracy despite class imbalance.
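The preprocessing steps described above (low-pass filtering, fixed-size windowing, and random oversampling of the minority class in the training split) can be sketched as follows. The sampling rate, filter cutoff and order, and window length here are illustrative assumptions, not values stated in the abstract:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_gait(acc, fs=50.0, cutoff=3.0, win=128):
    """Low-pass filter raw tri-axial accelerometer data (T, 3), then
    segment into non-overlapping fixed-size windows (n, win, 3).
    fs, cutoff, and win are illustrative, not from the talk."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    filtered = filtfilt(b, a, acc, axis=0)  # zero-phase noise reduction
    n = filtered.shape[0] // win
    return filtered[: n * win].reshape(n, win, acc.shape[1])

def oversample_minority(X, y, rng=None):
    """Randomly duplicate minority-class windows until classes balance.
    Applied to the training split only, after a subject-level split,
    so no oversampled copies leak into evaluation data."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, cnt in zip(classes, counts):
        if cnt < target:
            idx = rng.choice(np.flatnonzero(y == c), target - cnt, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)
```

Oversampling only the training data is what prevents duplicated intoxicated windows from inflating test-set performance.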

In a rigorous evaluation, our MC-Hybrid approach achieved 93% accuracy, outperforming the current state-of-the-art by 9.5% and all baselines by 9.0%, with the self-attention mechanism outperforming other attention mechanisms by 2%.
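The self-attention mechanism referenced in the results can be illustrated with a minimal single-head scaled dot-product sketch over a sequence of hidden states. Using the hidden states directly as queries, keys, and values (no learned projections) is a simplifying assumption for illustration; the actual model's attention variant is not detailed in the abstract:

```python
import numpy as np

def self_attention(H):
    """Scaled dot-product self-attention over hidden states H (T, d).
    Each timestep attends to all others, letting the model weight the
    gait-window timesteps most indicative of impairment."""
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)                    # (T, T) similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over time
    return weights @ H, weights                      # (T, d) context, (T, T)
```

Each row of the returned weight matrix is a probability distribution over timesteps, which is what lets attention emphasize critical temporal patterns before the BiLSTM's sequential modeling.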

-----


Department(s):

Computer Science