Wearable e-textile interfaces require gesture recognition capabilities but face severe constraints in power consumption, computational capacity, and form factor that make traditional deep learning impractical. While lightweight architectures like MobileNet improve efficiency, they still demand thousands of parameters, limiting deployment on textile-integrated platforms. We introduce a convexified attention mechanism for wearable applications that dynamically weights features while preserving convexity through nonexpansive simplex projection and convex loss functions. Unlike conventional attention mechanisms using non-convex softmax operations, our approach employs Euclidean projection onto the probability simplex combined with multi-class hinge loss, ensuring global convergence guarantees. Implemented on a textile-based capacitive sensor with four connection points, our approach achieves 100.00\% accuracy on tap gestures and 100.00\% on swipe gestures -- consistent across 10-fold cross-validation and held-out test evaluation -- while requiring only 120--360 parameters, a 97\% reduction compared to conventional approaches. With sub-millisecond inference times (290--296$\mu$s) and minimal storage requirements ($<$7\,KB), our method enables gesture interfaces directly within e-textiles without external processing. Our evaluation, conducted in controlled laboratory conditions with a single-user dataset, demonstrates feasibility for basic gesture interactions. Real-world deployment would require validation across multiple users, environmental conditions, and more complex gesture vocabularies. These results demonstrate how convex optimization can enable efficient on-device machine learning for textile interfaces.
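The two convex building blocks named above -- Euclidean projection onto the probability simplex and the multi-class hinge loss -- are standard operations. The sketch below illustrates them using the classical sorting-based projection algorithm and the Crammer-Singer hinge loss; it is an assumption-laden illustration of these generic components, not the authors' on-device implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}, via the standard O(n log n)
    sorting algorithm. The projection is nonexpansive, which is
    what lets it replace softmax without breaking convexity."""
    n = v.shape[0]
    u = np.sort(v)[::-1]                 # sort entries descending
    css = np.cumsum(u)
    # largest index rho (0-based) with u_rho * (rho+1) > css_rho - 1
    rho = np.nonzero(u * np.arange(1, n + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def multiclass_hinge_loss(scores, y, margin=1.0):
    """Crammer-Singer multi-class hinge loss for one sample:
    max(0, margin + max_{j != y} s_j - s_y). Convex in the scores."""
    s_y = scores[y]
    s_wrong = np.delete(scores, y)
    return max(0.0, margin + s_wrong.max() - s_y)

# Attention weights from raw feature scores, without softmax
# (illustrative logits, not sensor data from the paper):
logits = np.array([2.0, 0.5, -1.0, 0.2])
w = project_simplex(logits)   # nonnegative, sums to 1, sparse
```

Unlike softmax, the projection can return exactly-zero weights (here only the dominant feature survives), which keeps the attention step a convex-friendly, sparsity-inducing operation.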