Decision Making with Dynamic Uncertain Events

Authors: Meir Kalech, Shulamit Reches

JAIR 2015

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. Evidence: "We evaluate our algorithms theoretically and empirically and show that the quality of the decision in both approximations is near-optimal and much faster than the optimal algorithm. Also, we can conclude from the experiments that the cost function is a key factor to choose the most effective algorithm." (Section 6, Evaluation) "Before presenting an empirical evaluation, we summarize the theoretical analysis of our algorithms in Table 2." ... (Section 6.1, Experimental Settings) "We experimentally validated our algorithm within a systematic artificial framework inspired by the stock market. We varied the number of candidate stocks (2–30) and the time horizon of the economic events (1–5) (i.e., the timed variables). We ran each combination 25 times. In each test, the possible profits from the stocks (the utility) were randomly selected from a uniform distribution over the range [$10K–$100K]. Later we present experiments with additional distributions. We ran each scenario (of 25 tests) with 25 random assignments for the timed variables. Each data point in the graphs is an average of 625 tests (25 random utilities × 25 random assignments)."
Researcher Affiliation: Academia. Evidence: Meir Kalech (EMAIL), Department of Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Shulamit Reches (EMAIL), Department of Applied Mathematics, Jerusalem College of Technology, Israel.
Pseudocode: Yes. Evidence: Algorithm 1, RELATIVE EXPECTED GAIN (input: candidate trees CT = {ct1, ..., ctm}; input: node n_{x,i}; output: relative expected gain of n_{x,i}) ... Algorithm 2, EXPECTED STOPPING (input: time t; input: candidate trees CT = {ct1, ..., ctm}; output: expected stopping ES(σ_t, π)).
Open Source Code: No. Evidence: "The paper does not provide any statement or link regarding the availability of source code for the described methodology."
Open Datasets: No. Evidence: "We experimentally validated our algorithm within a systematic artificial framework inspired by the stock market. We varied the number of candidate stocks (2–30) and the time horizon of the economic events (1–5) (i.e., the timed variables). We ran each combination 25 times. In each test, the possible profits from the stocks (the utility) were randomly selected from a uniform distribution over the range [$10K–$100K]. Later we present experiments with additional distributions. We ran each scenario (of 25 tests) with 25 random assignments for the timed variables. Each data point in the graphs is an average of 625 tests (25 random utilities × 25 random assignments)."
Dataset Splits: No. Evidence: The paper describes generating synthetic data for simulations rather than using predefined datasets with train/test/validation splits. It specifies running "25 tests" and "25 random assignments" to average results, but these concern experiment runs and data generation, not dataset partitioning for model training or evaluation in the typical machine-learning sense.
Hardware Specification: No. Evidence: The paper does not provide specific hardware details used for running the experiments.
Software Dependencies: No. Evidence: The paper does not list ancillary software dependencies or their version numbers.
Experiment Setup: Yes. Evidence: "We set a simple cost function that grows linearly with time, CST(t) = a·t, and varied the coefficient of the time stamp (a) from 0.01K to 2.91K, with jumps of 0.15K. We fixed both the number of candidates and the horizon at 5." ... "We experimentally validated our algorithm within a systematic artificial framework inspired by the stock market. We varied the number of candidate stocks (2–30) and the time horizon of the economic events (1–5) (i.e., the timed variables). We ran each combination 25 times. In each test, the possible profits from the stocks (the utility) were randomly selected from a uniform distribution over the range [$10K–$100K]. Later we present experiments with additional distributions. We ran each scenario (of 25 tests) with 25 random assignments for the timed variables. Each data point in the graphs is an average of 625 tests (25 random utilities × 25 random assignments)."
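The quoted setup (linear cost CST(t) = a·t swept from 0.01K to 2.91K in 0.15K jumps, utilities drawn uniformly from [$10K, $100K], and each data point averaged over 25 utility draws × 25 timed-variable assignments = 625 tests) can be sketched as follows. This is a minimal illustration of the experimental harness only: `run_test` is a hypothetical placeholder, not the paper's optimal or approximation algorithms, and the candidate count and horizon are fixed at 5 as in the cost-sweep experiment.

```python
import random

random.seed(0)

N_CANDIDATES = 5  # paper varies 2-30; fixed at 5 for the cost sweep
HORIZON = 5       # time horizon of the economic events, fixed at 5


def cst(a, t):
    """Linear time cost CST(t) = a * t (a in thousands of dollars)."""
    return a * t


# Coefficient sweep: a from 0.01K to 2.91K in jumps of 0.15K.
coefficients = []
i = 0
while (a := round(0.01 + 0.15 * i, 2)) <= 2.91:
    coefficients.append(a)
    i += 1


def draw_utilities(n):
    """Profits drawn uniformly from [$10K, $100K], as in the paper."""
    return [random.uniform(10_000, 100_000) for _ in range(n)]


def run_test(utilities, assignment, a):
    # Hypothetical placeholder for a single decision run; the paper's
    # optimal and approximation algorithms are not reproduced here.
    return max(utilities) - 1_000 * cst(a, HORIZON)


# One data point: average over 25 utility draws x 25 assignments.
a0 = coefficients[0]
results = [
    run_test(draw_utilities(N_CANDIDATES), assignment, a0)
    for _ in range(25)            # 25 random utility vectors
    for assignment in range(25)   # 25 random timed-variable assignments
]
data_point = sum(results) / len(results)  # average of 625 tests
```

In a full reproduction, the inner loop would run each of the paper's algorithms on the same 625 sampled instances per coefficient value and compare decision quality and runtime against the optimal algorithm.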