Why should autoencoders work?
Authors: Matthew Kvalheim, Eduardo Sontag
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A computational example is also included to illustrate the ideas. ... The numerical experiments are in Section 3. |
| Researcher Affiliation | Academia | Matthew D. Kvalheim EMAIL Department of Mathematics and Statistics University of Maryland, Baltimore County, MD, United States. Eduardo D. Sontag EMAIL Departments of Electrical and Computer Engineering and Bioengineering Northeastern University, Boston, MA, United States. |
| Pseudocode | Yes | Appendix A: Code used for implementation: `howmany_points = 500; epochs = 5000; batch_size = 20; import matplotlib.pyplot as plt; import plotly.graph_objects as go; import pandas as pd; import numpy as np; import scipy as sp; import tensorflow as tf; from tensorflow.keras.layers import Input, Dense; from tensorflow.keras.models import Model` |
| Open Source Code | Yes | An appendix lists the Python code used for the implementation. |
| Open Datasets | No | We generated 500 points in each of the circles |
| Dataset Splits | No | We generated 500 points in each of the circles, and used 5000 epochs with a batch size of 20. ... `autoencoder.fit(input_data, input_data, epochs=epochs, batch_size=batch_size, shuffle=True)` ... `# Test the autoencoder on the training data`; `encoded_vectors = encoder.predict(input_data)`; `decoded_vectors = decoder.predict(encoded_vectors)` |
| Hardware Specification | No | The paper does not mention specific hardware used for running the experiments. |
| Software Dependencies | No | We used Python's TensorFlow with Adaptive Moment Estimation (Adam) optimizer and a mean squared error loss function. ... `import matplotlib.pyplot as plt; import plotly.graph_objects as go; import pandas as pd; import numpy as np; import scipy as sp; import tensorflow as tf` |
| Experiment Setup | Yes | After some experimentation, we settled on an architecture with three hidden layers of encoding with 128 units each, and similarly for the decoding layers. The activation functions are ReLU (Rectified Linear Unit) functions, except for the bottleneck and output layers, where we pick simply linear functions. ... We generated 500 points in each of the circles, and used 5000 epochs with a batch size of 20. We used Python's TensorFlow with Adaptive Moment Estimation (Adam) optimizer and a mean squared error loss function. |
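The quoted setup can be sketched in Keras as below. Layer widths, activations, optimizer, and loss follow the excerpts; the circle radii, the planar ambient space, and a bottleneck width of 1 are assumptions the excerpts do not specify.

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# 500 points on each of two circles (radii and planar ambient space are
# assumptions; the excerpts only say "500 points in each of the circles").
howmany_points = 500
theta = np.random.uniform(0.0, 2.0 * np.pi, howmany_points)
circle1 = np.stack([np.cos(theta), np.sin(theta)], axis=1)              # radius 1 (assumed)
circle2 = np.stack([2.0 * np.cos(theta), 2.0 * np.sin(theta)], axis=1)  # radius 2 (assumed)
input_data = np.concatenate([circle1, circle2], axis=0)

# Encoder: three hidden ReLU layers of 128 units each, then a linear
# bottleneck (bottleneck width 1 is an assumption).
inputs = Input(shape=(2,))
x = Dense(128, activation="relu")(inputs)
x = Dense(128, activation="relu")(x)
x = Dense(128, activation="relu")(x)
bottleneck = Dense(1, activation="linear")(x)

# Decoder: mirror of the encoder, with a linear output layer.
x = Dense(128, activation="relu")(bottleneck)
x = Dense(128, activation="relu")(x)
x = Dense(128, activation="relu")(x)
outputs = Dense(2, activation="linear")(x)

autoencoder = Model(inputs, outputs)
encoder = Model(inputs, bottleneck)
autoencoder.compile(optimizer="adam", loss="mse")
```

Training then proceeds with the quoted call, `autoencoder.fit(input_data, input_data, epochs=5000, batch_size=20, shuffle=True)`, i.e. 5000 epochs at batch size 20 with Adam and mean squared error.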