The research introduces a reproducible framework for transforming raw, heterogeneous sensor streams into aligned, semantically meaningful representations for multimodal human activity recognition. Grounded in the Carnegie Mellon University Multi-Modal Activity Database (CMU-MMAC) and focused on the naturalistic Subject 07 Brownie session, the study traces the full pipeline from data ingestion to modeling and interpretation. In contrast to black-box preprocessing, the study proposes a unified workflow that temporally aligns video, audio, and RFID streams through resampling, grayscale conversion, sliding-window segmentation, and modality-specific normalization, producing standardized fused tensors suitable for downstream learning. Building on this foundation, the work systematically compares early, late, and hybrid fusion strategies using LSTM-based models implemented with PyTorch and TensorFlow, showing that late fusion consistently achieves the highest validation accuracy and that hybrid fusion outperforms early fusion. To assess interpretability and modality contribution, PCA and t-SNE visualizations are employed; they reveal coherent temporal structure and confirm that video carries stronger discriminative power than audio, while combining the two yields substantial performance gains. Incorporating sparse, asynchronous RFID signals further improves accuracy by over 50% and boosts macro-averaged ROC-AUC, demonstrating the added value of object-interaction cues. Overall, the framework contributes a modular, empirically validated approach to multimodal fusion that links preprocessing design, fusion architecture, and interpretability, offering a transferable template for intelligent systems operating in complex, real-world activity settings.
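To make the windowing and late-fusion ideas concrete, the following is a minimal sketch in PyTorch; the window size, hop, feature dimensions, class count, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of sliding-window segmentation, per-modality normalization,
# and decision-level (late) fusion with one LSTM per modality.
# All sizes and names below are illustrative assumptions.
import torch
import torch.nn as nn

def sliding_windows(x, win, hop):
    # x: (time, features) -> (num_windows, win, features)
    return x.unfold(0, win, hop).permute(0, 2, 1)

def zscore(x, eps=1e-8):
    # Modality-specific normalization: zero mean, unit variance per feature.
    mean = x.mean(dim=(0, 1), keepdim=True)
    std = x.std(dim=(0, 1), keepdim=True)
    return (x - mean) / (std + eps)

class LateFusionLSTM(nn.Module):
    """One LSTM branch per modality; class logits are averaged at the end."""
    def __init__(self, dims, hidden=64, n_classes=5):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.LSTM(d, hidden, batch_first=True) for d in dims)
        self.heads = nn.ModuleList(
            nn.Linear(hidden, n_classes) for _ in dims)

    def forward(self, xs):  # xs: list of (batch, win, dim) tensors
        logits = []
        for x, lstm, head in zip(xs, self.branches, self.heads):
            _, (h, _) = lstm(x)          # h: (1, batch, hidden)
            logits.append(head(h[-1]))   # classify from final hidden state
        return torch.stack(logits).mean(0)  # late fusion: average logits

# Toy usage: two temporally aligned streams (stand-ins for video and audio
# features) segmented into overlapping windows, then classified jointly.
video = zscore(sliding_windows(torch.randn(1000, 32), win=30, hop=15))
audio = zscore(sliding_windows(torch.randn(1000, 13), win=30, hop=15))
model = LateFusionLSTM(dims=[32, 13])
out = model([video, audio])  # (num_windows, n_classes)
```

Early fusion would instead concatenate the normalized per-window features before a single LSTM, and a hybrid variant would merge intermediate hidden states; only the fusion point changes, which is what makes the comparison modular.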