Stable and Interpretable Unrolled Dictionary Learning
Authors: Bahareh Tolooshams, Demba E. Ba
TMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We complement our findings through synthetic and image denoising experiments. Finally, we demonstrate PUDLE's interpretability, a driving factor in designing deep networks based on iterative optimizations, by building a mathematical relation between network weights, its output, and the training set. |
| Researcher Affiliation | Academia | Bahareh Tolooshams EMAIL Demba Ba EMAIL School of Engineering and Applied Sciences Harvard University |
| Pseudocode | Yes | Algorithm 1: Classical alternating-minimization-based dictionary learning using lasso (1). Algorithm 2: PUDLE: Provable unrolled dictionary learning framework. |
| Open Source Code | Yes | Source code is available at https://github.com/btolooshams/stable-interpretable-unrolled-dl |
| Open Datasets | Yes | We trained on 432 and tested on 68 images from BSD (Martin et al., 2001). ... We focused on digits {0, 1, 2, 3, 4} of MNIST. |
| Dataset Splits | Yes | We trained on 432 and tested on 68 images from BSD (Martin et al., 2001). |
| Hardware Specification | Yes | PUDLE is developed using PyTorch (Paszke et al., 2017). We used one GeForce GTX 1080 Ti GPU. |
| Software Dependencies | Yes | PUDLE is developed using PyTorch (Paszke et al., 2017). ... with Adam optimizer (Kingma & Ba, 2014) ... We used linear sum assignment optimization (i.e., scipy.optimize.linear_sum_assignment) |
| Experiment Setup | Yes | We let T = 200, λ = 0.2, and α = 0.2. The network is trained for 600 epochs with full-batch gradient descent using Adam optimizer (Kingma & Ba, 2014) with learning rate of 10^-3 and ϵ = 10^-8. ... We trained PUDLE where the dictionary is convolutional with 64 filters of size 9 × 9 and strides of 4. The encoder unrolls for T = 15, and the step size is set to α = 0.1. ... trained stochastically with Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^-4 and ϵ = 10^-3 for 250 epochs. |
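The Software Dependencies row quotes the paper's use of `scipy.optimize.linear_sum_assignment`, which in dictionary-learning experiments is typically used to match learned dictionary atoms to ground-truth atoms before measuring recovery error. A minimal sketch of that matching step is below; the function name `match_atoms` and the sign-invariant cosine cost are illustrative assumptions, not the authors' exact code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_atoms(D_learned, D_true):
    """Hypothetical helper: match columns (atoms) of D_learned to columns
    of D_true by maximal absolute cosine similarity (Hungarian algorithm).

    Returns (true_idx, learned_idx, similarities) so that
    D_learned[:, learned_idx[i]] is the best match for D_true[:, true_idx[i]].
    """
    # Normalize columns so the cost depends only on the angle between atoms.
    Dl = D_learned / np.linalg.norm(D_learned, axis=0, keepdims=True)
    Dt = D_true / np.linalg.norm(D_true, axis=0, keepdims=True)
    # Cost = 1 - |cosine similarity|; absolute value makes the match
    # invariant to the sign ambiguity inherent in dictionary learning.
    cost = 1.0 - np.abs(Dt.T @ Dl)
    true_idx, learned_idx = linear_sum_assignment(cost)
    return true_idx, learned_idx, 1.0 - cost[true_idx, learned_idx]
```

For example, if the learned dictionary is just a permutation (with sign flips) of the true one, the returned similarities are all 1 and the permutation is recovered exactly.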