OMLT: Optimization & Machine Learning Toolkit
Authors: Francesco Ceccon, Jordan Jalving, Joshua Haddad, Alexander Thebelt, Calvin Tsay, Carl D. Laird, Ruth Misener
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate how to use OMLT for solving decision-making problems in both computer science and engineering. Our `mnist_example_{dense,cnn}.ipynb` notebooks verify dense and convolutional NNs on MNIST (LeCun et al., 2010). |
| Researcher Affiliation | Academia | (1) Department of Computing, Imperial College London, 180 Queen's Gate, SW7 2AZ, UK; (2) Center for Computing Research, Sandia National Laboratories, Albuquerque, NM 87123, USA; (3) Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA |
| Pseudocode | No | The paper describes the design and functionality of the OMLT toolkit and its integration with Pyomo. It does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The optimization and machine learning toolkit (https://github.com/cog-imperial/OMLT, OMLT 1.0) is an open-source software package enabling optimization over high-level representations of neural networks (NNs) and gradient-boosted trees (GBTs). |
| Open Datasets | Yes | Our `mnist_example_{dense,cnn}.ipynb` notebooks verify dense and convolutional NNs on MNIST (LeCun et al., 2010). |
| Dataset Splits | No | The paper mentions using MNIST for verification examples but does not provide specific details on dataset splits (e.g., percentages for training, validation, or test sets). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or processing power) used for running experiments. |
| Software Dependencies | No | The paper mentions several software dependencies like Pyomo, ONNX, Keras, PyTorch, and TensorFlow, but does not provide specific version numbers for these components. |
| Experiment Setup | No | The paper describes the OMLT framework and different optimization formulations for neural networks and gradient-boosted trees. However, it does not provide specific experimental setup details such as hyperparameters (e.g., learning rates, batch sizes, epochs) or training configurations. |
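The core technique the paper describes — encoding a trained ReLU network as mixed-integer constraints so an optimization solver can reason over it — can be illustrated without OMLT itself. The sketch below is a minimal, dependency-free illustration of the standard big-M ReLU encoding used in full-space MILP formulations; the function name and structure are hypothetical, not OMLT's API (OMLT builds these constraints inside a Pyomo model via its `OmltBlock` and formulation objects).

```python
def relu_bigm_feasible(pre, post, sigma, M=1e3, tol=1e-9):
    """Check whether (pre, post, sigma) satisfies the standard big-M
    constraints that encode post = max(0, pre) in a MILP:

        post >= pre
        post >= 0
        post <= pre + M * (1 - sigma)
        post <= M * sigma

    sigma is the binary activation indicator (1 = active neuron),
    and M is a valid upper bound on |pre|.
    """
    return (
        post >= pre - tol
        and post >= -tol
        and post <= pre + M * (1 - sigma) + tol
        and post <= M * sigma + tol
    )

# Active neuron: pre-activation positive, sigma = 1, output equals input.
assert relu_bigm_feasible(pre=2.5, post=2.5, sigma=1)

# Inactive neuron: pre-activation negative, sigma = 0, output is zero.
assert relu_bigm_feasible(pre=-1.0, post=0.0, sigma=0)

# Infeasible point: claiming zero output while the neuron is active.
assert not relu_bigm_feasible(pre=2.5, post=0.0, sigma=1)
```

In practice the tightness of M strongly affects solver performance, which is one reason OMLT offers multiple interchangeable formulations (e.g. full-space versus reduced-space) for the same trained network.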