# Quasi-Monte Carlo Methods for Stochastic Programming

In this thesis we discuss solving stochastic optimization problems using sampling methods. Numerical results have shown that variance reduction techniques from statistics can yield significant improvements over Monte Carlo sampling in the number of samples needed for convergence of the optimal objective value and optimal solution of a stochastic optimization problem. Among these techniques are stratified sampling and randomized Quasi-Monte Carlo sampling. This thesis is split into three main sections.

The first section discusses deviation probabilities for Latin Hypercube sampling (LHS), a type of stratified sampling. A deviation probability is the probability that the sample average differs from the true expectation by more than some quantity delta. For Monte Carlo sampling, it is known that the deviation probability approaches zero exponentially fast as the sample size grows to infinity. We show that, under certain conditions, the rate of convergence is even faster for Latin Hypercube sampling.

The second section deals with a padded sampling scheme. For high-dimensional problems, it may be computationally inefficient to compute Quasi-Monte Carlo point sets in the full dimension. Instead, we can identify the dimensions most important to convergence and use a sampling scheme in which only those important dimensions are sampled via Quasi-Monte Carlo and the remaining dimensions are "padded" with some other type of sampling. We show that a padded sampling scheme in which the padded variables are sampled with Latin Hypercube sampling (PLHS) satisfies a central limit theorem.

In the third section, we incorporate our padded sampling scheme (PLHS) into an algorithm for solving two-stage stochastic optimization problems and present numerical results.
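The padded construction described above can be sketched with off-the-shelf tools. The following is a minimal illustration, not the thesis's implementation: it assumes SciPy's `scipy.stats.qmc` engines, a scrambled Sobol sequence as the randomized Quasi-Monte Carlo component, and hypothetical dimension counts `d_qmc` (important dimensions) and `d_pad` (padded dimensions).

```python
import numpy as np
from scipy.stats import qmc

def padded_lhs_sample(n, d_qmc, d_pad, seed=0):
    """Draw n points in [0,1)^(d_qmc + d_pad): the first d_qmc
    "important" coordinates come from a scrambled Sobol sequence
    (randomized QMC), and the remaining d_pad coordinates are
    padded with Latin Hypercube sampling (PLHS)."""
    rng = np.random.default_rng(seed)
    sobol = qmc.Sobol(d=d_qmc, scramble=True, seed=rng)
    lhs = qmc.LatinHypercube(d=d_pad, seed=rng)
    # Concatenate columns: QMC block first, LHS padding after.
    return np.hstack([sobol.random(n), lhs.random(n)])

# Example: 16 points, 2 important dimensions, 8 padded dimensions.
# Each padded column is stratified: exactly one point falls in each
# interval [i/16, (i+1)/16).
points = padded_lhs_sample(n=16, d_qmc=2, d_pad=8)
```

Using a power-of-two sample size keeps the Sobol sequence balanced; the scrambling makes the estimator unbiased, which is what allows a central limit theorem of the kind studied in the second section.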
Additionally, we show that when padded sampling is incorporated into single-replication and two-replication stopping procedures for our algorithm, the algorithm stops after a finite number of samples.
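To illustrate the shape of such a stopping procedure, here is a toy sketch of a two-replication rule, not the thesis's procedure: it uses plain Monte Carlo sampling rather than PLHS, a trivial one-dimensional problem (minimize E[(x - xi)^2] with xi uniform on [0,1], whose sample-average solution is the sample mean), and hypothetical names and tolerances throughout. The sample size grows until an estimate of the optimality gap, formed from two independent replications, falls below a tolerance.

```python
import numpy as np

def saa_solve(xi):
    # SAA of min_x E[(x - xi)^2]: the minimizer is the sample mean.
    return xi.mean()

def saa_obj(x, xi):
    # Sample-average objective at candidate x.
    return ((x - xi) ** 2).mean()

def two_replication_stopping(eps=1e-3, n0=64, seed=0, max_iter=20):
    """Illustrative sequential procedure: double the sample size
    until a two-replication estimate of the optimality gap of the
    candidate solution falls below eps."""
    rng = np.random.default_rng(seed)
    n = n0
    for _ in range(max_iter):
        xi1 = rng.random(n)        # replication 1 (plain MC here)
        xi2 = rng.random(n)        # replication 2, independent
        x_hat = saa_solve(xi1)     # candidate from replication 1
        # Gap estimate: candidate's cost on replication 2 minus
        # replication 2's own optimal SAA value (nonnegative).
        gap = saa_obj(x_hat, xi2) - saa_obj(saa_solve(xi2), xi2)
        if gap < eps:
            return x_hat, n
        n *= 2
    return x_hat, n

x_hat, n_used = two_replication_stopping()
```

The finiteness result summarized above says, roughly, that a loop of this form terminates with probability one; replacing the Monte Carlo draws with PLHS samples preserves that guarantee.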