This paper provides a new approach for solving a wide class of Markov decision problems, including problems in which the state space is general and the system can be continuously controlled. The optimality criterion is the long-run average cost per unit time. We decompose the decision process into a common underlying stochastic process and a sequence of interventions, so that the decision process can be embedded on a reduced set of states. Consequently, in the policy-iteration algorithm resulting from this approach, the number of equations to be solved in each iteration step can be substantially reduced. Further, the flexibility of this algorithm allows us to exploit any structure of the particular problem to be solved.
de Leve, G., Awi Federgruen, and H. C. Tijms. "A general Markov decision method I: Model and techniques." Advances in Applied Probability 9 (1977): 296-315.
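The paper's policy-iteration scheme is developed for general state spaces; the finite-state special case can be sketched as follows. This is a minimal illustration, not the authors' method: it evaluates a policy by solving the average-cost (gain/bias) equations with a reference-state normalization, then improves greedily. All transition data and costs below are synthetic.

```python
import numpy as np

# Toy average-cost policy iteration on a randomly generated finite MDP.
# (Illustrative only; the paper treats general spaces and embedded chains.)
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))   # cost[s, a]
P = rng.uniform(size=(n_actions, n_states, n_states))       # P[a, s, s']
P /= P.sum(axis=2, keepdims=True)                           # row-stochastic

policy = np.zeros(n_states, dtype=int)
for _ in range(100):
    # Policy evaluation: solve g + h(s) = c(s, pi(s)) + sum_s' P(s'|s) h(s'),
    # normalizing with h(0) = 0 so the system has a unique solution.
    Ppi = P[policy, np.arange(n_states), :]
    cpi = cost[np.arange(n_states), policy]
    A = np.eye(n_states) - Ppi
    A[:, 0] = 1.0                 # column for the gain g replaces h(0)
    sol = np.linalg.solve(A, cpi)
    g, h = sol[0], sol.copy()
    h[0] = 0.0
    # Policy improvement: minimize one-step cost plus expected bias.
    q = cost + (P @ h).T          # q[s, a]
    new_policy = q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break                     # fixed point: policy is average-cost optimal
    policy = new_policy
```

On termination, `g` is the minimal long-run average cost per unit time and `h` is the bias (relative value) function. The paper's contribution is that, via the embedding on a reduced set of states, the linear system solved in each evaluation step can be made much smaller than the full state space.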