Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
StructED: Risk Minimization in Structured Prediction
Authors: Yossi Adi, Joseph Keshet
JMLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compared our implementation of SSVM optimized by SGD to the corresponding implementation in PyStruct on the MNIST data set of handwritten digits. We got similar accuracy and training times (StructED: 92.6%, 149 sec; PyStruct: 90.2%, 145 sec). We conclude the paper by demonstrating the advantage of the package with a simple structured prediction task of automatic vowel duration measurement. ... We trained all algorithms on this task and present their performance in terms of task loss (the lower the better) and training times in Table 1 (training parameters for each of the training methods can be found in the package's Examples folder). |
| Researcher Affiliation | Academia | Yossi Adi EMAIL Joseph Keshet EMAIL Department of Computer Science Bar-Ilan University Ramat Gan, 52900, Israel |
| Pseudocode | No | The paper describes various algorithms and their implementation, but it does not include any explicit pseudocode blocks or algorithm listings within the text. |
| Open Source Code | Yes | This paper presents StructED, a software package for learning structured prediction models with training methods aimed at optimizing the task measure of performance. The package was written in Java and released under the MIT license. It can be downloaded from http://adiyoss.github.io/StructED/. |
| Open Datasets | Yes | We compared our implementation of SSVM optimized by SGD to the corresponding implementation in Py Struct on the MNIST data set of handwritten digits. |
| Dataset Splits | Yes | We had a training set of 90 examples, a validation set of 20 examples and a test set of 20 examples. |
| Hardware Specification | No | The paper compares performance and training times with PyStruct, but it does not specify the hardware (CPU, GPU, memory, etc.) used for running its own experiments. |
| Software Dependencies | No | The package was written in Java and released under the MIT license. While Java is mentioned as the programming language, no specific version number for Java or any other software dependencies/libraries with their versions are provided. |
| Experiment Setup | Yes | The task loss is defined as ℓ(y, ŷ) = max{0, \|y_b − ŷ_b\| − τ_b} + max{0, \|y_e − ŷ_e\| − τ_e}, where ŷ_b and ŷ_e are the predicted vowel onset and offset times, respectively, and τ_b and τ_e are tolerance parameters (τ_b = 10 msec and τ_e = 15 msec). ... We extracted 21 unique acoustic features every 5 msec, including the confidence of a frame-based phoneme classifier, the first and the second formants, and other acoustic features. |
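The task loss quoted above is simple enough to sketch directly. The following is a minimal illustration, not code from the StructED package: the function name and the convention that all times are in milliseconds are assumptions, while the tolerances τ_b = 10 msec and τ_e = 15 msec come from the quoted setup.

```python
def vowel_duration_task_loss(y_b, y_e, yhat_b, yhat_e, tau_b=10.0, tau_e=15.0):
    """Hinge-style task loss from the quoted experiment setup.

    Deviations of the predicted vowel onset (y_b vs. yhat_b) and offset
    (y_e vs. yhat_e) times are penalized only beyond the tolerances
    tau_b and tau_e. All times are assumed to be in milliseconds.
    """
    onset_term = max(0.0, abs(y_b - yhat_b) - tau_b)
    offset_term = max(0.0, abs(y_e - yhat_e) - tau_e)
    return onset_term + offset_term


# Predictions within tolerance incur zero loss:
print(vowel_duration_task_loss(100.0, 250.0, 105.0, 260.0))  # → 0.0
# An onset error of 20 msec exceeds tau_b by 10, the offset is exact:
print(vowel_duration_task_loss(100.0, 250.0, 120.0, 250.0))  # → 10.0
```

The max{0, ·} terms make the loss insensitive to small annotation noise, which matches the paper's stated goal of optimizing the task measure of performance rather than a surrogate.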