DGMSCL: A dynamic graph mixed supervised contrastive learning approach for class imbalanced multivariate time series classification

Neural Netw. 2025 Jan 17:185:107131. doi: 10.1016/j.neunet.2025.107131. Online ahead of print.

Abstract

In the Imbalanced Multivariate Time Series Classification (ImMTSC) task, minority-class instances typically correspond to critical events, such as system faults in power grids or abnormal health events in medical monitoring. Although rare and random, these events are highly significant. The dynamic spatial-temporal relationships between minority-class instances and other instances make them more prone to interference from neighboring instances during classification. Increasing the number of minority-class samples during training often results in overfitting to a single minority-class pattern. Contrastive learning ensures that majority-class instances learn similar features in the representation space, but it does not effectively aggregate features from neighboring minority-class instances, which hinders its ability to properly represent these instances in an imbalanced multivariate time series (ImMTS) dataset. Therefore, we propose a dynamic graph-based mixed supervised contrastive learning method (DGMSCL) that effectively fits minority-class features without increasing their number, while also separating them from other instances in the representation space. First, DGMSCL reconstructs the input sequence into dynamic graphs and employs a hierarchical attention graph neural network (HAGNN) to generate discriminative embedding representations of the instances. On top of these embeddings, we introduce a novel mixed contrastive loss that combines weight-augmented inter-graph supervised contrast (WAIGC) and context-based minority-class-aware contrast (MCAC). WAIGC adjusts sample weights according to class size and the samples' intrinsic characteristics, placing greater emphasis on the minority-class loss to yield more effective gradient gains during training. MCAC, in turn, separates minority-class instances from adjacent transitional instances in the representation space, enhancing their representational capacity. Extensive experiments across various scenarios and datasets with differing degrees of imbalance demonstrate that DGMSCL consistently outperforms existing baseline models: it achieves higher overall classification accuracy, as evidenced by significantly improved average F1-score, G-mean, and kappa coefficient across multiple datasets. Moreover, classification results on real-world power data show that DGMSCL generalizes well to practical applications.

Keywords: Class imbalanced multivariate time series classification; Dynamic graph; Hierarchical attention graph neural network; Mixed contrastive learning.
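As a rough illustration of the class-weighting idea behind the WAIGC term, the sketch below shows a supervised contrastive loss in which each anchor's contribution is re-weighted by the inverse frequency of its class, so minority-class anchors contribute larger gradient signals. The function name, tensor shapes, temperature value, and the inverse-frequency weighting scheme are assumptions for illustration only; the paper's actual loss additionally exploits inter-graph structure and is combined with the MCAC term.

```python
# Illustrative sketch only -- not the authors' implementation.
# Supervised contrastive loss with inverse-class-frequency anchor weights,
# emphasising minority-class anchors as in the WAIGC idea.
import torch
import torch.nn.functional as F


def weighted_supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) instance representations; labels: (N,) integer class ids."""
    z = F.normalize(embeddings, dim=1)                    # unit-norm embeddings
    sim = z @ z.t() / temperature                         # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # exclude self-pairs

    # positives: other instances sharing the anchor's label
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other instances for each anchor
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # mean log-probability of positives per anchor (anchors with no positive are skipped)
    pos_count = pos_mask.sum(dim=1)
    valid = pos_count > 0
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    mean_log_prob_pos = sum_log_prob_pos[valid] / pos_count[valid]

    # inverse-class-frequency weights emphasise minority-class anchors
    class_counts = torch.bincount(labels)
    weights = (n / class_counts.float())[labels][valid]
    weights = weights / weights.sum()

    return -(weights * mean_log_prob_pos).sum()
```

In a setup like the one described in the abstract, such a loss would be applied to the embeddings produced by the graph encoder (here, the HAGNN output), alongside a second contrastive term that pushes minority-class instances away from their neighboring transitional instances.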