Multivariate Spearman's $\rho$ for Aggregating Ranks Using Copulas

Authors: Justin Bedő, Cheng Soon Ong

JMLR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we demonstrate good performance on the rank aggregation benchmarks MQ2007 and MQ2008. ... We tested our method on the MQ2007-agg and MQ2008-agg list aggregation benchmarks Qin et al. (2010b). ... In fig. 1a, we see that our approach RAGS performs better than all other methods...
Researcher Affiliation | Academia | Justin Bedő EMAIL, The Walter and Eliza Hall Institute, 1G Royal Parade, Parkville, Victoria 3052, Australia; The Department of Computing and Information Systems, the University of Melbourne, VIC 3010, Australia. Cheng Soon Ong EMAIL, Data61, CSIRO, 7 London Circuit, Canberra ACT 2601, Australia; Research School of Computer Science, the Australian National University, Australia; The Department of Electrical and Electronic Engineering, the University of Melbourne, VIC 3010, Australia.
Pseudocode | No | The paper describes the algorithm steps in a numbered list within section 6.2 but does not provide a formally structured pseudocode or algorithm block.
Open Source Code | No | The paper mentions an interactive website (http://uni.cua0.org) showing detailed results but does not provide access to the source code for the methodology described.
Open Datasets | Yes | We tested our method on the MQ2007-agg and MQ2008-agg list aggregation benchmarks Qin et al. (2010b). ... These data sets are available for download at the LETOR website.
Dataset Splits | Yes | Each data set has 5 pre-defined cross-validation folds, with each fold providing a training, testing and validation data set (60%/20%/20%).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running its experiments.
Software Dependencies | No | The paper mentions using a BFGS based optimiser but does not provide specific software names with version numbers for any ancillary software dependencies.
Experiment Setup | Yes | In the following experiments we also included a bias/offset term in the least squares problem, which can be interpreted as adding a ranking that is constant (gives all objects the same rank). ... We trained our model on the training set and tested on the testing set, leaving the validation set unused since we have no hyperparameters. ... To solve the relaxed problem, we used a BFGS based optimiser by shifting the equality constraints into the objective function with high penalties.
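The 5-fold, 60%/20%/20% split scheme reported under Dataset Splits can be illustrated with a small sketch. Note the part-rotation below is an assumption about how such folds are typically assembled; the LETOR benchmarks ship the folds as pre-defined files, so this is only a stand-in:

```python
# Hypothetical sketch: rotate five equal parts of query ids into
# 60%/20%/20% train/test/validation folds (LETOR ships these pre-defined).
query_ids = list(range(100))                    # toy stand-in for query ids
parts = [query_ids[i::5] for i in range(5)]     # five equal 20% parts

folds = []
for k in range(5):
    train = parts[k] + parts[(k + 1) % 5] + parts[(k + 2) % 5]   # 60%
    test = parts[(k + 3) % 5]                                    # 20%
    valid = parts[(k + 4) % 5]                                   # 20%
    folds.append({"train": train, "test": test, "valid": valid})
```

Each query id appears in exactly one of the three roles per fold, matching the pre-defined cross-validation setup the paper describes.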
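The Experiment Setup quote describes two concrete choices: a bias/offset term in the least squares problem, and a BFGS optimiser applied after shifting the equality constraints into the objective with high penalties. A minimal sketch of that recipe follows; the synthetic data, the sum-to-one constraint, and the penalty weight are all illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in data: 4 candidate rankings over 50 objects (assumption).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = X @ np.array([0.5, 0.3, 0.15, 0.05]) + 0.1 * rng.standard_normal(50)

PENALTY = 1e4  # high penalty standing in for the shifted equality constraint

def objective(theta):
    w, b = theta[:-1], theta[-1]        # b is the bias/offset term
    residual = X @ w + b - y            # least squares fit
    violation = w.sum() - 1.0           # hypothetical constraint: weights sum to 1
    return residual @ residual + PENALTY * violation ** 2

res = minimize(objective, np.zeros(X.shape[1] + 1), method="BFGS")
w_hat, b_hat = res.x[:-1], res.x[-1]
```

The penalty term turns the constrained problem into an unconstrained one that BFGS can handle directly; the larger the penalty, the more tightly the equality constraint is enforced at the solution.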