Loss Functions, Axioms, and Peer Review
Authors: Ritesh Noothigattu, Nihar Shah, Ariel Procaccia
JAIR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also provide empirical results, which analyze properties of our approach when applied to a dataset of 9197 reviews from IJCAI 2017. We employ a dataset of reviews from the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017). |
| Researcher Affiliation | Academia | Ritesh Noothigattu (EMAIL), Machine Learning Department, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213; Nihar B. Shah (EMAIL), Machine Learning Department and Computer Science Department, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213; Ariel D. Procaccia (EMAIL), School of Engineering and Applied Sciences, Harvard University, 150 Western Avenue, Boston, MA 02134 |
| Pseudocode | No | The paper describes mathematical derivations and axiomatic properties but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/ritesh-noothigattu/choosing-how-to-choose-papers. |
| Open Datasets | No | We employ a dataset of reviews from the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017), which was made available to us by the program chair. To our knowledge, we are the first to use this dataset. At submission time, authors were asked if review data for their paper could be included in an anonymized dataset, and, similarly, reviewers were asked whether their reviews could be included; the dataset provided to us consists of all reviews for which permission was given. |
| Dataset Splits | No | The paper describes how papers were selected based on their scores (e.g., the 'top 27.27% of papers'), but it does not specify explicit training, validation, or test splits in the traditional machine learning sense. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory, or cloud instances) used for running experiments. |
| Software Dependencies | No | The optimization problem in Equation (1) is convex, and standard optimization packages can efficiently compute the minimizer. Hence, importantly, computational complexity is a nonissue in terms of implementing our approach. |
| Experiment Setup | No | The paper describes the L(1, 1) aggregation method and its application, but it does not specify typical machine learning experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or specific optimizer settings. |
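The L(1, 1) aggregation cited in the rows above minimizes a sum of absolute deviations, and the per-paper minimizer of that loss is simply the median of the paper's review scores. The following is a minimal sketch of that reduction, using hypothetical scores rather than data from the IJCAI 2017 dataset; the function name `l11_aggregate` is illustrative, not from the paper's code.

```python
from statistics import median

def l11_aggregate(reviews):
    """Aggregate each paper's review scores under the L(1,1) loss.

    Minimizing sum_j |y - x_j| over y is solved by the median of the
    scores x_j, so L(1,1) aggregation reduces to a per-paper median.
    """
    return {paper: float(median(scores)) for paper, scores in reviews.items()}

# Hypothetical review scores (not drawn from the IJCAI 2017 dataset)
reviews = {"paper_A": [6, 7, 9], "paper_B": [4, 8], "paper_C": [5, 5, 10]}
print(l11_aggregate(reviews))
# → {'paper_A': 7.0, 'paper_B': 6.0, 'paper_C': 5.0}
```

Because the per-paper problem is one-dimensional and convex, this sketch is consistent with the paper's remark that standard optimization packages can efficiently compute the minimizer of Equation (1).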