A functional framework for nonsmooth autodiff with maxpooling functions

Authors: Bruno Després

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The formalism is applied to four basic examples, with some tests in PyTorch. A self-contained proof of an important Stampacchia formula is given in the appendix.
Researcher Affiliation | Academia | Bruno Després EMAIL Sorbonne Université, Université Paris Cité, CNRS, LJLL, F-75005 Paris, France; MEGAVOLT, INRIA, Paris, France
Pseudocode | Yes |

    def max1(x):
        res = x[0]
        for i in range(1, len(x)):  # 'a' in the flattened original; the length of x is meant
            if x[i] > res:
                res = x[i]
        return res

    def max2(x):
        return torch.max(x)

Table 1: Script of the functions max1 and max2
Open Source Code | No | The paper discusses PyTorch tests and states that "The scripts taken from Boustany (2024) are in Table 1," indicating the code comes from another source rather than being provided by the authors for their methodology. There is no explicit statement or link for a code release by the authors.
Open Datasets | No | The paper focuses on a mathematical framework for nonsmooth automatic differentiation and demonstrates its application with basic examples and PyTorch functions. It does not mention using or providing access to any specific datasets.
Dataset Splits | No | The paper does not use any datasets, so there is no information about dataset splits.
Hardware Specification | No | The paper mentions "some tests in PyTorch" and discusses derivative calculations using PyTorch functions, but it does not provide any specific details about the hardware (e.g., GPU or CPU models) used for these tests.
Software Dependencies | No | The paper mentions PyTorch as the software used for testing, but it does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | No | The paper focuses on theoretical derivations and applies them to small, illustrative examples using PyTorch functions for derivative calculations. It does not describe an experimental setup involving hyperparameters, training configurations, or system-level settings typically found in machine learning experiments.
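The branch-based max1 script in Table 1 matters for nonsmooth autodiff because its strict comparison silently selects one particular subgradient at ties. The following is a minimal sketch, not the paper's framework: a toy forward-mode "dual number" class (the Dual class and its seeding convention are illustrative assumptions) that makes the selection explicit without requiring PyTorch.

```python
class Dual:
    """Toy forward-mode dual number: a value paired with a derivative (sketch)."""
    def __init__(self, val, dot=0.0):
        self.val = val
        self.dot = dot

    def __gt__(self, other):
        # comparisons branch on values only, as in ordinary autodiff tracing
        return self.val > other.val


def max1(x):
    # branch-based maximum from Table 1: the strict '>' keeps the earlier
    # entry at a tie, so autodiff propagates that entry's derivative
    res = x[0]
    for i in range(1, len(x)):
        if x[i] > res:
            res = x[i]
    return res


# At a tie x[0] == x[1], max1 returns x[0], so the derivative seeded on x[0]
# survives while the one seeded on x[1] is dropped.
tie_seed_first = [Dual(1.0, 1.0), Dual(1.0, 0.0)]   # d/dx0 seeded
tie_seed_second = [Dual(1.0, 0.0), Dual(1.0, 1.0)]  # d/dx1 seeded
assert max1(tie_seed_first).dot == 1.0
assert max1(tie_seed_second).dot == 0.0
```

Away from ties the strict maximum is unambiguous (e.g. `max1([Dual(0.0, 0.0), Dual(3.0, 1.0)]).dot` is 1.0); the choice made at the tie is exactly the kind of selection among subgradients that a functional framework for nonsmooth autodiff has to account for.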