ptwt - The PyTorch Wavelet Toolbox

Authors: Moritz Wolter, Felix Blanke, Jochen Garcke, Charles Tapley Hoyt

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Table 1: Run-time comparisons for various implementations of the padded wavelet transformation from one to three dimensions. We compare transformations of 32 · 10^6 random values... Table 2: Run-time comparison for different implementations of the CWT. We report mean and standard deviations over 100 repetitions each. Compared to the two-dimensional code presented in Cotter (2022), we observe state-of-the-art performance on GPU.
Researcher Affiliation | Academia | Moritz Wolter (EMAIL), High-Performance Computing and Analytics Lab, University of Bonn, Germany; Felix Blanke (EMAIL), Fraunhofer Institute for Algorithms and Scientific Computing, Sankt Augustin, Germany; Jochen Garcke (EMAIL), Institute for Numerical Simulation, University of Bonn, and Fraunhofer Institute for Algorithms and Scientific Computing, Sankt Augustin, Germany; Charles Tapley Hoyt (EMAIL), Northeastern University, Boston, USA
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. It includes a Python code snippet to demonstrate API usage, but no formal pseudocode for an algorithm.
Open Source Code | Yes | We provide the PyTorch Wavelet Toolbox to make wavelet methods more accessible to the deep learning community. Our PyTorch Wavelet Toolbox is well documented. A pip package is installable with `pip install ptwt`. Toolbox and documentation are available online. 1. https://pypi.org/project/ptwt/, https://pytorch-wavelet-toolbox.readthedocs.io/en/latest/
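For readers unfamiliar with the operation the toolbox provides, here is a minimal NumPy sketch of one level of the padded fast wavelet transform with the Haar filter pair. This is illustrative only: it is neither ptwt's implementation nor its API, and the function name and sign convention are this sketch's own choices.

```python
import numpy as np

def haar_dwt_1d(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """One level of a padded Haar DWT: zero-pad to even length, then
    filter with the low-/high-pass analysis pair and downsample by 2."""
    s = 1.0 / np.sqrt(2.0)
    lo = np.array([s, s])    # low-pass (averaging) filter
    hi = np.array([s, -s])   # high-pass (differencing) filter, one common sign convention
    if signal.size % 2:      # zero-padding, as in the "padded" transform
        signal = np.append(signal, 0.0)
    # np.convolve flips its second argument, so pass the reversed filter
    # to obtain a correlation; taking every second sample downsamples.
    approx = np.convolve(signal, lo[::-1])[1::2]  # approximation coefficients
    detail = np.convolve(signal, hi[::-1])[1::2]  # detail coefficients
    return approx, detail

x = np.array([1.0, 2.0, 3.0, 4.0])
a, d = haar_dwt_1d(x)  # pairwise sums and differences, each scaled by 1/sqrt(2)
```

Higher wavelets such as the Daubechies-5 filter used in the paper's benchmarks follow the same pad-filter-downsample pattern, just with longer filter taps.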
Open Datasets | No | We compare transformations of 32 · 10^6 random values. Inputs are shaped as ℝ^(32 × 10^6), ℝ^(32 × 10^3 × 10^3), and ℝ^(32 × 10^2 × 10^2 × 10^2); transformation run times are reported in seconds. All runs use a Daubechies-5 wavelet. We report mean and standard deviations over 100 repetitions each. The input signal has dimensions of ℝ^(32 × 10^3), with the first dimension the batch and the second dimension the time dimension. The paper uses randomly generated data for its experiments, not specific external datasets that would require access information.
Dataset Splits | No | We compare transformations of 32 · 10^6 random values. The input signal has dimensions of ℝ^(32 × 10^3), with the first dimension the batch and the second dimension the time dimension. The paper uses randomly generated data for its experiments, so the concept of dataset splits is not applicable.
Hardware Specification | Yes | All speed tests were run on a machine with an Intel Xeon W-2235 CPU @ 3.80 GHz and an NVIDIA RTX A4000 graphics card.
Software Dependencies | No | At the time of writing, our unit tests ensure Python 3.9 and 3.11 compatibility. The paper mentions compatibility with specific Python versions but does not provide version numbers for other key software dependencies such as PyTorch or PyWavelets.
Experiment Setup | Yes | We compare transformations of 32 · 10^6 random values. Inputs are shaped as ℝ^(32 × 10^6), ℝ^(32 × 10^3 × 10^3), and ℝ^(32 × 10^2 × 10^2 × 10^2); transformation run times are reported in seconds. All runs use a Daubechies-5 wavelet. We report mean and standard deviations over 100 repetitions each. All experiments use a Shannon wavelet.
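The setup above reduces to a simple pattern: random inputs of fixed shapes, a transform timed repeatedly, and mean and standard deviation reported. A rough, self-contained sketch of such a harness follows; the input shapes mirror the paper's batch-of-32 layout but are shrunk to keep the sketch fast, and `np.fft.fftn` stands in for the wavelet transform since neither ptwt nor PyTorch is assumed to be installed.

```python
import time
import numpy as np

# Shrunken versions of the paper's benchmark shapes: batch of 32 with
# 10^6 samples (1D), 10^3 x 10^3 (2D), and 10^2 x 10^2 x 10^2 (3D).
shapes = [(32, 10**4), (32, 10**2, 10**2), (32, 10, 10, 10)]

def time_transform(fn, x, repeats=100):
    """Return mean and standard deviation of run time over `repeats`
    repetitions, mirroring the paper's reporting (100 repetitions)."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(x)
        times.append(time.perf_counter() - start)
    times = np.asarray(times)
    return times.mean(), times.std()

rng = np.random.default_rng(0)
for shape in shapes:
    x = rng.standard_normal(shape)           # random input, as in the paper
    mean, std = time_transform(np.fft.fftn, x, repeats=5)  # stand-in transform
```

In a real comparison the stand-in would be replaced by each library's transform call, with GPU runs synchronized before stopping the clock.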