Parallel-Learning of Invariant and Tempo-variant Attributes of Single-Lead Cardiac Signals: PLITA

Authors: Adrian Atienza, Jakob E. Bardram, Sadasivan Puthusserypady

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate both the capability of the method to learn the attributes of these two distinct kinds and PLITA's performance compared to existing SSL methods for ECG analysis. To assess these hypotheses, we have conducted three experiments that require the invariant and/or the tempo-variant attributes to be encoded within the representations: (1) AFib classification, (2) sleep stage classification, and (3) gender identification.
Researcher Affiliation | Academia | Adrian Atienza, Jakob E. Bardram, Sadasivan Puthusserypady, Technical University of Denmark (EMAIL)
Pseudocode | No | The paper describes the method and its components mathematically and conceptually, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code, nor does it include any links to code repositories.
Open Datasets | Yes | All databases are publicly available in Physionet (Goldberger et al. 2000) and the National Sleep Research Resource (NSRR). Comparison against state-of-the-art (SOTA): the performance of the proposed PLITA method has been compared against the three most relevant energy-based SOTA methods, namely (i) PCLR (Diamant et al. 2022), (ii) CLOCS (Kiyasseh, Zhu, and Clifton 2021), and (iii) Mixing Up (Wickstrøm et al. 2022), as well as the reconstruction methods (iv) Ti-MAE (Li et al. 2023) and (v) Siamese Masked Autoencoders (Gupta et al. 2023) (for more details about this latter implementation, refer to the Appendix). Finally, we have also included (vi) the BYOL method tailored to ECG processing by following the PCLR strategy for selecting the positive pairs. To guarantee an equitable assessment, we have optimized the identical model employed in this study, maintaining consistent settings such as the optimizer, data, batch size, and iteration count. Each method has been trained on two distinct datasets, SHHS (Zhang et al. 2018; Quan et al. 1998) and Icentia (Tan et al. 2019), which are composed of long-term single-lead ECG recordings.
Dataset Splits | Yes | AFib classification: to assess the method's ability to generalize across different classes within the same record, given a limited number of labelled records, we have conducted Leave-One-Out (LOO) cross-validation across the 23 MIT-AFIB subjects. Sleep stage detection: we have carried out LOO cross-validation over the 18 records contained in the dataset. Gender classification: we conducted five-fold cross-validation over 1,500 randomly selected inputs from distinct subjects in the SHHS database.
Hardware Specification | Yes | The training procedure and the evaluations are performed on a desktop computer with an NVIDIA GeForce RTX 3070 GPU.
Software Dependencies | No | The paper mentions Adam as the optimizer (Kingma and Ba 2017) but does not provide specific version numbers for any software libraries, programming languages, or other key software dependencies.
Experiment Setup | Yes | The window size W is set to 10 seconds. N is set to 4, so 4 inputs are drawn from each window. The projectors and predictors are implemented as two-layer Multilayer Perceptrons (MLPs) with dimensionalities of 512 and 256, respectively. The exponential moving average (EMA) updating factor (τ) is set to 0.995. The training procedure consists of 35,000 iterations with a batch size of 256, using the Adam optimizer (Kingma and Ba 2017) with a learning rate of 3e-4 and a weight decay of 1.5e-6.
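The EMA updating factor τ = 0.995 in the setup above follows the standard BYOL-style target update, target ← τ·target + (1 − τ)·online. A minimal sketch of that rule on plain Python lists (parameter values are illustrative, not from the paper):

```python
TAU = 0.995  # EMA updating factor reported in the experiment setup

def ema_update(target_params, online_params, tau=TAU):
    """Move each target parameter a small step toward its online counterpart."""
    return [tau * t + (1.0 - tau) * o for t, o in zip(target_params, online_params)]

# Toy example: target starts at 0, online network stays at 1.
target = [0.0, 1.0]
online = [1.0, 1.0]
for _ in range(100):
    target = ema_update(target, online)
# After n updates toward a fixed online value of 1, the first target
# parameter equals 1 - TAU**n; the second stays at 1.0.
```

With τ this close to 1, the target network changes slowly, which is the usual rationale for using it as a stable teacher during self-supervised training.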
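The Leave-One-Out protocol used for the AFib and sleep-stage evaluations (one held-out subject or record per fold) can be sketched as follows; the subject identifiers are hypothetical placeholders, not the actual MIT-AFIB record names:

```python
def leave_one_out_splits(subject_ids):
    """Yield (train_ids, held_out_id) pairs, holding out one subject per fold."""
    for i, held_out in enumerate(subject_ids):
        train = subject_ids[:i] + subject_ids[i + 1:]
        yield train, held_out

# 23 folds for the 23 MIT-AFIB subjects (illustrative IDs).
subjects = [f"afib_{k:02d}" for k in range(23)]
folds = list(leave_one_out_splits(subjects))
```

Each fold trains the downstream classifier on 22 subjects and evaluates on the remaining one, so every subject is used as a test set exactly once.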