Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
DeepWave: A Recurrent Neural-Network for Real-Time Acoustic Imaging
Authors: Matthieu Simeoni, Sepand Kashani, Paul Hurley, Martin Vetterli
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our real-data experiments show DeepWave has similar computational speed to the state-of-the-art delay-and-sum imager with vastly superior resolution. While developed primarily for acoustic cameras, DeepWave could easily be adapted to neighbouring signal processing fields, such as radio astronomy, radar and sonar. |
| Researcher Affiliation | Collaboration | Matthieu Simeoni, IBM Zurich Research Laboratory, EMAIL; Sepand Kashani, École Polytechnique Fédérale de Lausanne (EPFL), EMAIL; Paul Hurley, Western Sydney University, EMAIL; Martin Vetterli, École Polytechnique Fédérale de Lausanne (EPFL), EMAIL |
| Pseudocode | Yes | Algorithm 1: DeepWave forward propagation |
| Open Source Code | Yes | DeepWave implementation can be found on https://github.com/imagingofthings/DeepWave. |
| Open Datasets | Yes | Finally we express our gratitude towards Robin Scheibler and Hanjie Pan for their openly-accessible real-world datasets [36, 43]. |
| Dataset Splits | Yes | DeepWave is trained by splitting the data points into a training and validation set (respectively 80% and 20% in size). *(A minimal split sketch follows the table.)* |
| Hardware Specification | No | The paper mentions 'a general-purpose CPU' and 'a standard computing platform' but does not provide specific models of CPUs, GPUs, or other hardware components used for running the experiments. |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or library versions). |
| Experiment Setup | Yes | For each frequency band, we chose an architecture with 5 layers. Optimisation of (9) is carried out by stochastic gradient descent (SGD) with momentum acceleration [51]. Finally, we substitute the ReLU activation function by a scaled rectified tanh to avoid the exploding gradient problem [39]. *(A hedged training sketch follows the table.)* |
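The Dataset Splits row reports an 80/20 train/validation split. As a minimal illustration of such a split, here is a hedged Python sketch; the function name, seed, and array-based data representation are assumptions for illustration and are not taken from the DeepWave code base.

```python
import numpy as np

# Hypothetical sketch of the 80/20 train/validation split reported in
# the Dataset Splits row; names and seed are illustrative only.
def train_val_split(data_points, train_frac=0.8, seed=0):
    """Shuffle indices, then split into training and validation subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data_points))
    n_train = int(train_frac * len(data_points))
    return data_points[idx[:n_train]], data_points[idx[n_train:]]

# Usage with dummy data: 100 points -> 80 training, 20 validation.
data = np.arange(100)
train, val = train_val_split(data)
assert len(train) == 80 and len(val) == 20
```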
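The Experiment Setup row mentions SGD with momentum acceleration and a scaled rectified tanh activation substituted for ReLU. The following sketch shows one plausible reading of that setup in PyTorch; the scale constant, learning rate, momentum value, and the stand-in linear layer are all assumptions, not values from the paper.

```python
import torch

# One plausible reading of a "scaled rectified tanh": tanh applied to
# the positive part of the input, then scaled. The bounded output is
# what helps against exploding gradients; scale=1.0 is an assumption.
def scaled_rect_tanh(x, scale=1.0):
    return scale * torch.tanh(torch.relu(x))

# SGD with momentum acceleration, as the quoted setup states.
# lr and momentum here are placeholders, not the paper's values.
layer = torch.nn.Linear(16, 16)  # stand-in for one DeepWave layer
optimizer = torch.optim.SGD(layer.parameters(), lr=1e-2, momentum=0.9)

# One illustrative training step on random data.
x = torch.randn(8, 16)
loss = scaled_rect_tanh(layer(x)).pow(2).mean()
loss.backward()
optimizer.step()
```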