NEU: A Meta-Algorithm for Universal UAP-Invariant Feature Representation

Authors: Anastasis Kratsios, Cody Hyndman

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Section 4, Numerical Evaluation of NEU-OLS and NEU-PCA: Next, we evaluate the performance of NEU across various learning tasks. First, we investigate the performance of NEU in the chaotic environment provided by real-world financial data. Then, we stress-test NEU's behaviour within the controlled environment provided by simulation studies. Our implementations focus on financial data analysis."
Researcher Affiliation | Academia | Anastasis Kratsios, EMAIL, Department of Mathematics, Eidgenössische Technische Hochschule Zürich (ETH), Rämistrasse 101, 8092 Zürich, ZH, Switzerland; Cody Hyndman, EMAIL, Department of Mathematics and Statistics, Concordia University, 1455 boulevard de Maisonneuve Ouest, Montréal, Québec, H3G 1M8, Canada
Pseudocode | Yes | Meta-Algorithm 1: Non-Euclidean Upgrading (NEU)
    input:  hypothesis class F, loss function L, penalty function P, training data {x_n}_{n≤N}, feature map's depth J, robustness hyper-parameter λ > 0
    output: NEU-model f̂_NEU = f̂ ∘ φ̂
    1: φ̂ ← argmin_{φ ∈ Φ} Σ_{n≤N} w_n^λ L(f(x_n), Aφ(x_n) + b, x_n) + P(Aφ + b)    // get feature map
    2: f̂ ← argmin_{f ∈ F} Σ_{n≤N} w_n^λ L(f(x_n), f ∘ φ̂(x_n), x_n) + P(f ∘ φ̂)    // get NEU-model
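The two-step structure of Meta-Algorithm 1 (first learn a feature map, then fit the base model on the transformed features) can be sketched as below. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name `fit_neu_ols`, the residual-style random map standing in for the learned φ̂, and the ridge penalty standing in for P are all hypothetical simplifications.

```python
import numpy as np

def fit_neu_ols(X, y, depth=3, lam=0.1, seed=0):
    """Hypothetical sketch of NEU-OLS: feature map first, then least squares.

    Step 1 is approximated by a fixed stack of shallow residual-style
    perturbations phi(x) = x + tanh(x W) V (the paper instead *trains* its
    reconfiguration networks). Step 2 fits a ridge-regularised linear model
    on the transformed features, with lam playing the penalty's role.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Step 1 (stand-in): randomly initialised perturbation layers.
    Ws = [rng.normal(scale=0.1, size=(d, d)) for _ in range(depth)]
    Vs = [rng.normal(scale=0.1, size=(d, d)) for _ in range(depth)]

    def phi(A):
        Z = A
        for W, V in zip(Ws, Vs):
            Z = Z + np.tanh(Z @ W) @ V
        return Z

    # Step 2: least squares on an intercept plus the transformed features.
    Z = np.column_stack([np.ones(n), phi(X)])
    beta = np.linalg.solve(Z.T @ Z + lam * np.eye(d + 1), Z.T @ y)

    def predict(Xnew):
        Zn = np.column_stack([np.ones(len(Xnew)), phi(Xnew)])
        return Zn @ beta

    return predict
```

The two argmin steps of the meta-algorithm map onto the two halves of the function; in the paper both are trained, whereas here only the linear readout is fitted.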
Open Source Code | No | "The TensorFlow (v.2.4.1) code and data-sets for our implementations are available online at ?." (The citation the paper points to is unresolved.)
Open Datasets | No | "The TensorFlow (v.2.4.1) code and data-sets for our implementations are available online at ?."
Dataset Splits | Yes | "The models are trained on the first 75% of the data and the remaining 25% is used to evaluate the out-of-sample predictive performance of the trained models, as illustrated in Figure 2."
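Because the data are time-ordered financial series, the 75/25 split above must be chronological, not shuffled. A minimal sketch of that protocol (the helper name is our own):

```python
import numpy as np

def chronological_split(X, y, train_frac=0.75):
    """Split time-ordered data without shuffling.

    The first `train_frac` of observations form the training set and the
    remainder the out-of-sample test set, mirroring the paper's protocol.
    """
    cut = int(len(X) * train_frac)
    return X[:cut], X[cut:], y[:cut], y[cut:]
```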
Hardware Specification | No | The paper does not provide specific hardware details for running its experiments.
Software Dependencies | Yes | "The TensorFlow (v.2.4.1) code and data-sets for our implementations are available online at ?."
Experiment Setup | No | "Each of the hyper-parameters is selected by cross-validation and randomized search from a large grid; hyper-parameters include the choice of kernel. ... The models' tuning-parameters are then estimated by cross-validation."
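The tuning protocol described above (randomized search over a large grid, with the kernel itself a hyper-parameter, validated by cross-validation) could look roughly like the following scikit-learn sketch. The estimator, grid values, and search settings are our assumptions, since the paper does not report them:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import RandomizedSearchCV

# Hypothetical grid: the kernel is treated as a hyper-parameter alongside
# the regularisation strength and kernel width, as the excerpt describes.
param_distributions = {
    "kernel": ["rbf", "laplacian", "polynomial"],
    "alpha": np.logspace(-4, 1, 20),
    "gamma": np.logspace(-3, 1, 20),
}

# Randomized search over the grid, scored by 5-fold cross-validation.
search = RandomizedSearchCV(
    KernelRidge(),
    param_distributions,
    n_iter=25,
    cv=5,
    random_state=0,
)
```

Calling `search.fit(X, y)` then exposes the selected configuration via `search.best_params_`.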