Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise

Authors: Niklas Pfister, Sebastian Weichwald, Peter Bühlmann, Bernhard Schölkopf

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we illustrate the performance and robustness of our method on simulated data, provide audible and visual examples, and demonstrate the applicability to real-world scenarios by experiments on publicly available Antarctic ice core data as well as two EEG data sets."
Researcher Affiliation | Academia | Niklas Pfister (EMAIL), Seminar for Statistics, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland; Sebastian Weichwald (EMAIL), Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark; Peter Bühlmann (EMAIL), Seminar for Statistics, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland; Bernhard Schölkopf (EMAIL), Empirical Inference Department, Max Planck Institute for Intelligent Systems, Max-Planck-Ring 4, 72076 Tübingen, Germany
Pseudocode | Yes | Algorithm 1: coroICA. Input: data matrix X; group index G (user selected); group-wise partition (P_g)_{g∈G} (user selected); lags T ⊆ N₀ (user selected). Initialize empty list M; for g ∈ G do …
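The quoted pseudocode fragment shows the start of the coroICA loop: per group, the algorithm accumulates a list M of matrices that are then jointly diagonalized. The following is a minimal stdlib-only sketch of that accumulation step, assuming M collects differences of covariance matrices between partition blocks within each group; the function and variable names (`cov`, `coroica_diff_matrices`, `blocks_by_group`) are ours for illustration and not the package's API, and lagged autocovariances from the paper are omitted for brevity.

```python
def cov(block):
    """Sample covariance matrix of a block, given as a list of d-dim samples."""
    n, d = len(block), len(block[0])
    mean = [sum(row[j] for row in block) / n for j in range(d)]
    c = [[0.0] * d for _ in range(d)]
    for row in block:
        for i in range(d):
            for j in range(d):
                c[i][j] += (row[i] - mean[i]) * (row[j] - mean[j]) / (n - 1)
    return c


def coroica_diff_matrices(blocks_by_group):
    """Collect within-group covariance differences across partition blocks.

    blocks_by_group: {group: [block, ...]}, each block a list of samples.
    Returns the list M that a joint diagonalizer (e.g. uwedge) would consume.
    """
    M = []
    for blocks in blocks_by_group.values():
        covs = [cov(b) for b in blocks]
        ref = covs[0]  # difference every later block against the first
        d = len(ref)
        for c in covs[1:]:
            M.append([[c[i][j] - ref[i][j] for j in range(d)]
                      for i in range(d)])
    return M
```

Since each entry of M is a difference of two symmetric matrices, it is itself symmetric, which is what approximate joint diagonalizers such as uwedge expect.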
Open Source Code | Yes | "We provide a scikit-learn compatible pip-installable Python package coroICA as well as R and Matlab implementations accompanied by a documentation at https://sweichwald.de/coroICA/."
Open Datasets | Yes | "Finally, we illustrate the performance and robustness of our method on simulated data, provide audible and visual examples, and demonstrate the applicability to real-world scenarios by experiments on publicly available Antarctic ice core data as well as two EEG data sets."
Dataset Splits | Yes | "We proceed by repeatedly splitting the data into a training and a test data set. More precisely, we construct all possible splits into training and test subjects for any given number of training subjects."
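The split procedure quoted above is exhaustive: for a fixed number of training subjects, every subset of that size forms one training set and the remaining subjects form the test set. A minimal sketch (the function name `all_subject_splits` is ours, not from the paper):

```python
from itertools import combinations


def all_subject_splits(subjects, n_train):
    """Yield every (train, test) split with exactly n_train training subjects."""
    subjects = list(subjects)
    for train in combinations(subjects, n_train):
        test = tuple(s for s in subjects if s not in train)
        yield tuple(train), test
```

For example, three subjects with `n_train=2` yield the 3 = C(3, 2) possible splits, each holding out one subject for testing.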
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions software such as the "scikit-learn compatible pip-installable Python package coroICA", the "R and Matlab implementations", and, for fastICA, "the implementation from the scikit-learn Python library due to Pedregosa et al. (2011)". However, specific version numbers for these software dependencies are not provided.
Experiment Setup | Yes | "In all of our numerical experiments, we apply coroICA as outlined in Algorithm 1, where we partition each group based on equally spaced grids and run a fixed number of 10·10³ iterations of the uwedge approximate joint diagonalizer. For fastICA we use the implementation from the scikit-learn Python library due to Pedregosa et al. (2011) and use the default parameters. For our unmixing estimations, we use the entire data, i.e., including intertrial breaks. For classification experiments (cf. Section 4.3.2) we use, in line with Treder et al. (2011), the 8–12 Hz bandpass-filtered data during the 500–2000 ms window of each trial, and use the log-variance as bandpower feature. Results obtained on the Covert Attention data set (with equally spaced partitions of 15 seconds length) are given in Figure 8."
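The log-variance bandpower feature mentioned in the setup is the logarithm of the sample variance of each channel, computed after bandpass filtering and cropping to the trial window. A minimal sketch of the feature itself, assuming the 8–12 Hz filtering and 500–2000 ms cropping have already been applied (both omitted here; the function name `log_variance` is illustrative):

```python
import math


def log_variance(signal):
    """Log-variance bandpower feature of one already-filtered channel.

    signal: list of samples from the cropped trial window.
    """
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / (n - 1)  # sample variance
    return math.log(var)
```

Taking the log compresses the heavy-tailed variance distribution, which tends to make the feature better behaved for downstream linear classifiers.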