Machine Learning (ML) continues to permeate a growing number of application domains. Generative AI, such as Large Language Models (LLMs), is also seeing broad adoption for processing multi-modal data such as text, images, audio, and video. While the trend is to train on ever-larger datasets, managing this data efficiently has become a significant practical challenge in industry: twice as much data is certainly not twice as good. Quite the opposite: understanding the inherent quality and diversity of the underlying data lakes is a growing challenge, both for application-specific ML and for fine-tuning foundation models. Furthermore, information retrieval (IR) from expanding data lakes is complicated by the temporal dimension inherent in time-series data, which must be considered to determine their semantic value. This study surveys semantic-aware techniques for extracting embeddings from mono-modal, multi-modal, and cross-modal data to enhance IR capabilities in a growing data lake. Articles were collected to summarize state-of-the-art techniques, focusing on applications of embeddings across three categories of data modalities.
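The core retrieval pattern underlying such embedding-based IR can be sketched as follows. This is a minimal illustration, not a method from the surveyed literature: the hand-written 3-dimensional vectors stand in for embeddings that would in practice be produced by a modality-specific encoder, and the item names are purely hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings standing in for vectors from a real encoder
# (hypothetical item names and values, for illustration only).
corpus = {
    "doc_text":  [0.9, 0.1, 0.0],
    "doc_image": [0.1, 0.8, 0.2],
    "doc_audio": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]

# Rank corpus items by semantic similarity to the query embedding.
ranked = sorted(corpus, key=lambda d: cosine(corpus[d], query), reverse=True)
print(ranked[0])  # → doc_text
```

Because all modalities are mapped into a shared vector space, the same similarity ranking serves mono-modal, multi-modal, and cross-modal retrieval; only the encoders differ.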