Accession Number : AD0758652

Title :   Markov Decision Chains.

Descriptive Note : Technical rept.

Corporate Author : STANFORD UNIV CALIF DEPT OF OPERATIONS RESEARCH

Personal Author(s) : Veinott, Arthur F., Jr.

Report Date : 20 FEB 1973

Pagination or Media Count : 50

Abstract : The report is a self-contained expository development of policy improvement methods for finding optimal policies under various criteria in Markov decision chains. A (finite) Markov decision chain is a generalization of a finite Markov chain with a distinguished set of stopped states such that whenever the chain is observed in a given state, a decision maker chooses one of finitely many transition probability vectors available in that state and earns a reward depending on the given state and probability vector chosen. No background in the subject is required, but a knowledge of elementary properties of matrices, infinite series, and finite Markov chains is assumed. The nature of Markov decision chains and their applications to gambling, search, sequential statistical decisions, and inventory control are discussed. (Author Modified Abstract)
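
Illustrative sketch (not from the report) : The following Python sketch shows the policy improvement idea the abstract refers to, applied to a hypothetical two-state, two-action chain under a discounted-reward criterion. The transition probability vectors, rewards, and discount factor are assumptions chosen for illustration; the report itself develops policy improvement for several optimality criteria, of which discounting is only one.

    import numpy as np

    # Hypothetical data for a two-state, two-action Markov decision chain.
    # P[a][s] is the transition probability vector chosen when action a is
    # taken in state s; r[s][a] is the corresponding one-period reward.
    P = np.array([
        [[0.9, 0.1], [0.2, 0.8]],   # transition rows under action 0
        [[0.5, 0.5], [0.1, 0.9]],   # transition rows under action 1
    ])
    r = np.array([
        [1.0, 0.0],                 # rewards in state 0 for actions 0, 1
        [0.0, 2.0],                 # rewards in state 1 for actions 0, 1
    ])
    beta = 0.95                     # assumed discount factor
    n_states, n_actions = r.shape

    policy = np.zeros(n_states, dtype=int)   # start from an arbitrary policy
    while True:
        # Policy evaluation: solve (I - beta * P_pi) v = r_pi for the value v
        # of the current policy.
        P_pi = np.array([P[policy[s], s] for s in range(n_states)])
        r_pi = np.array([r[s, policy[s]] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, r_pi)

        # Policy improvement: in each state choose the action maximizing the
        # one-step lookahead value against v.
        q = r + beta * np.einsum('ast,t->sa', P, v)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break                    # no improvement possible: policy optimal
        policy = new_policy

    print("optimal policy:", policy, "values:", v)

Each pass evaluates the current policy exactly and then improves it state by state; for a finite chain the loop terminates in finitely many iterations because there are only finitely many policies.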

Descriptors :   (*DYNAMIC PROGRAMMING, STOCHASTIC PROCESSES), (*DECISION THEORY, STOCHASTIC PROCESSES), SEQUENTIAL ANALYSIS, GRAPHICS, CONVEX SETS, CONTROL SYSTEMS, INVENTORY CONTROL, OPTIMIZATION

Subject Categories : Operations Research

Distribution Statement : APPROVED FOR PUBLIC RELEASE