Sparse Factor Analysis for Learning and Content Analytics

Authors: Andrew S. Lan, Andrew E. Waters, Christoph Studer, Richard G. Baraniuk

JMLR 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments with synthetic and real-world data demonstrate the efficacy of our approach."
Researcher Affiliation | Academia | Andrew S. Lan (Dept. of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA); Andrew E. Waters (Dept. of Electrical and Computer Engineering, Rice University); Christoph Studer (School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA); Richard G. Baraniuk (Dept. of Electrical and Computer Engineering, Rice University).
Pseudocode | No | The paper describes the algorithms (SPARFA-M, SPARFA-B) in detail, including iterative steps and mathematical formulations of the optimization problems (e.g., Sections 3.3 and 4.2.2), but does not present them as clearly labeled "Pseudocode" or "Algorithm" blocks with structured formatting.
Open Source Code | No | "Please see our website www.sparfa.com, where you can learn more about the project and purchase SPARFA t-shirts and other merchandise." This statement points to a project website for general information and merchandise, but does not explicitly state that the source code for the methodology described in the paper is openly available there.
Open Datasets | Yes | "We next test the SPARFA algorithms on three real-world educational data sets... and a portion of the ASSISTment data set (Pardos and Heffernan 2010)."
Dataset Splits | Yes | "In each of the 25 trials we run for both data sets, we hold out 20% of the observed learner responses as a test set, and train both the logistic variant of SPARFA-M and CF-IRT on the rest."
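The quoted holdout protocol can be sketched as follows. This is a hypothetical illustration, not the authors' code: `Y` is a toy partially observed binary response matrix (learners x questions, `NaN` marking unobserved entries), and 20% of the *observed* entries are masked out for testing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix: rows = learners, columns = questions, NaN = unobserved.
Y = np.array([[1.0, np.nan, 0.0],
              [0.0, 1.0, np.nan],
              [1.0, 1.0, 0.0]])

obs_idx = np.argwhere(~np.isnan(Y))       # indices of observed responses
n_test = int(round(0.2 * len(obs_idx)))   # hold out 20% of observed entries
test_idx = obs_idx[rng.choice(len(obs_idx), size=n_test, replace=False)]

Y_train = Y.copy()
for i, j in test_idx:
    Y_train[i, j] = np.nan                # mask held-out entries for training
```

Training then runs on `Y_train` only, and the masked entries serve as the test set.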
Hardware Specification | Yes | "On a 3.2 GHz quad-core desktop PC, SPARFA-M converged to its final estimates in 4 s, while SPARFA-B required 10 minutes."
Software Dependencies | No | The paper describes optimization methods such as FISTA, but does not specify any particular software libraries, programming languages, or version numbers used for implementation or experimentation.
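For context on the FISTA framework the review mentions, here is a minimal sketch applied to a generic l1-regularized least-squares problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1. This is an illustrative stand-in, not the paper's actual SPARFA-M subproblem or implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1 (elementwise shrinkage).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)     # Nesterov momentum
        x, t = x_new, t_new
    return x
```

The momentum sequence is what distinguishes FISTA from plain proximal gradient descent, improving the convergence rate from O(1/k) to O(1/k^2).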
Experiment Setup | Yes | "For all of the synthetic experiments with SPARFA-M, we set the regularization parameters γ = 0.1 and select λ using the BIC (Hastie et al. 2010). For SPARFA-B, we set the hyperparameters to h = K + 1, vµ = 1, α = 1, β = 1.5, e = 1, and f = 1.5; moreover, we burn-in the MCMC for 30,000 iterations and take output samples over the next 30,000 iterations."
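The burn-in/sampling schedule quoted above follows a standard MCMC pattern: discard the first `burn_in` iterations, then keep the subsequent draws. The toy chain below (a random-walk Metropolis sampler targeting a standard normal, with much smaller counts) illustrates that schedule only; it is not the paper's SPARFA-B sampler.

```python
import numpy as np

def run_chain(burn_in=3000, n_samples=3000, seed=0):
    rng = np.random.default_rng(seed)
    x, kept = 0.0, []
    for it in range(burn_in + n_samples):
        prop = x + rng.normal(scale=1.0)
        # Metropolis accept/reject for a N(0, 1) target, in log space.
        if np.log(rng.uniform()) < 0.5 * (x**2 - prop**2):
            x = prop
        if it >= burn_in:          # keep output samples only after burn-in
            kept.append(x)
    return np.array(kept)
```

Posterior summaries (means, credible intervals) are then computed from the retained samples only, so that estimates are not biased by the chain's initial transient.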