Unimodal Likelihood Models for Ordinal Data

Authors: Ryoya Yamasaki

TMLR 2022

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | OR experiments in this study showed that the developed, more representable unimodal likelihood models could yield better generalization performance for real-world ordinal data compared with previous unimodal likelihood models and popular statistical OR models having no unimodality guarantee. We performed experimental comparisons of 2 previous unimodal likelihood models, 2 popular statistical OR models without the unimodality guarantee, and 8 proposed unimodal likelihood models; see Section 6 and Appendix C. Our empirical results show that the proposed more representable unimodal likelihood models can be effective in improving the generalization performance for the conditional probability estimation and OR tasks on many datasets that previous OR studies have treated as ordinal data.
Researcher Affiliation | Academia | Ryoya Yamasaki, EMAIL, Department of Systems Science, Graduate School of Informatics, Kyoto University, 36-1 Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' section. However, it gives structured mathematical definitions and procedures for models such as ORD-ACL, VS-SL, and their variants through specific equations and conditions, which serve as algorithmic steps for implementation. For example, the ordered learner model obtained via the transformation ρ[g] in equation (12), or the V-shaped learner model constructed via τ in Section 4.2, describes a procedural approach.
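The paper's exact definitions of ρ and τ are given in its equations, which are not reproduced in this report. As a rough illustration only of how an unconstrained learner output can be forced into an ordered or V-shaped sequence, consider the sketch below; the function names `ordered_transform` and `v_shaped_transform` are hypothetical and the constructions are generic, not the paper's.

```python
import numpy as np

def ordered_transform(g):
    """Map an unconstrained vector g to a strictly increasing vector.

    Hypothetical illustration: keep the first coordinate and add
    strictly positive increments exp(g_k) for the rest, so the output
    is increasing regardless of g.
    """
    g = np.asarray(g, dtype=float)
    increments = np.exp(g[1:])  # always positive
    return np.concatenate(([g[0]], g[0] + np.cumsum(increments)))

def v_shaped_transform(g, k_min):
    """Map g to a V-shaped vector: minimum 0 at index k_min,
    strictly increasing away from it on both sides (illustrative)."""
    g = np.asarray(g, dtype=float)
    pos = np.exp(g)             # positive step sizes
    out = np.zeros_like(g)
    # walk left from the minimum, accumulating positive steps
    for k in range(k_min - 1, -1, -1):
        out[k] = out[k + 1] + pos[k]
    # walk right from the minimum
    for k in range(k_min + 1, len(g)):
        out[k] = out[k - 1] + pos[k]
    return out
```

For example, `v_shaped_transform([0.0] * 5, 2)` yields `[2, 1, 0, 1, 2]`, a V shape with its minimum at index 2.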
Open Source Code | Yes | One can get the datasets from a researchers' site (http://www.uco.es/grupos/ayrna/orreview) of (Gutierrez et al., 2015), or our GitHub repository (https://github.com/yamasakiryoya/ULM), together with the program code we used.
Open Datasets | Yes | We selected 21 real-world datasets from those used in experiments by the previous OR study (Gutierrez et al., 2015) with a total sample size ntot of 1000 or more, and used them for our numerical experiments. One can get the datasets from a researchers' site (http://www.uco.es/grupos/ayrna/orreview) of (Gutierrez et al., 2015), or our GitHub repository (https://github.com/yamasakiryoya/ULM), together with the program code we used.
Dataset Splits | Yes | We trained a likelihood model with a training sample of size ntra = 800, and evaluated the MU with the obtained likelihood model and the remaining test sample of size ntes = ntot - ntra. We repeated this procedure for 100 trials, each with a different randomly set sample split and different initial parameters of the likelihood model, to obtain 100 test MUs. We experimented with 6 training sample size settings, ntra = 25, 50, 100, 200, 400, 800, to see how the behavior of each method depends on the training sample size.
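The repeated random-holdout protocol quoted above can be sketched as follows; only the sampling logic mirrors the text, and the model fitting and MU evaluation are omitted.

```python
import numpy as np

def repeated_holdout(n_tot, n_tra, n_trials=100, seed=0):
    """Yield (train_idx, test_idx) index pairs for the protocol
    described in the paper: a random training sample of size n_tra,
    the remaining n_tot - n_tra points as the test sample,
    repeated n_trials times with fresh random splits."""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        perm = rng.permutation(n_tot)
        yield perm[:n_tra], perm[n_tra:]

# e.g. ntot = 1000, ntra = 800 -> ntes = 200 per trial, 100 trials
splits = list(repeated_holdout(1000, 800))
```

Each trial would additionally re-initialize the likelihood model's parameters before training, per the quoted procedure.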
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | "We trained a model with a training sample and Adam optimization for 1000 epochs..." The paper mentions Adam optimization but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | We implemented all learner models with a 4-layer fully connected neural network that shares weights except in the final layer and has 100 sigmoid-activated nodes, plus bias nodes, in every hidden layer. We trained each model with a training sample by maximum likelihood estimation using Adam optimization for 1000 epochs, and evaluated the NLL, MZE, MAE, and MSE on the remaining test sample at the end of each epoch.
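A minimal sketch of the described architecture, assuming "4-layer" means four weight layers (three hidden layers of 100 sigmoid units with biases, then a linear output layer). The Adam training loop, the likelihood-specific output transformations, and the weight sharing across multiple learner outputs are omitted; `init_mlp` and `forward` are illustrative names, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_mlp(d_in, d_out, hidden=100, depth=4, seed=0):
    """Random weights for a fully connected net: depth - 1 hidden
    layers of `hidden` units with bias vectors, plus a linear
    output layer of d_out units."""
    rng = np.random.default_rng(seed)
    sizes = [d_in] + [hidden] * (depth - 1) + [d_out]
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Sigmoid activation on hidden layers, identity on the output."""
    h = x
    for i, (W, b) in enumerate(params):
        z = h @ W + b
        h = z if i == len(params) - 1 else sigmoid(z)
    return h

# a batch of 5 inputs with 10 features, mapped to 3 outputs
out = forward(init_mlp(10, 3), np.ones((5, 10)))
```

Training would then minimize the negative log-likelihood of the chosen unimodal model over these outputs with Adam for 1000 epochs, as the quoted setup describes.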