Volume 27 • Number 2 • 2004
 

Aspects of Control for Normal Markov Processes
Nasrollah Saebi
Abstract. The choice of an optimal control policy for sequentially observed data, studied in a Bayesian context, is usually a dynamic programming problem involving a backward iterative solution. In general, as in most sequential Bayes problems, optimal solutions are difficult to derive analytically in simple form. The system of linear models examined here is, however, among the few cases with known explicit optimal solutions, which allows analytical comparison with the performance of sub-optimal control procedures. A certain sequence of myopic rules is introduced and applied to the control system. These rules generally provide the user with good near-optimal control policies whenever optimal solutions are analytically difficult to determine. Because the myopic rules involve no backward iteration, they are convenient to apply; in addition, the user has the option of improving the accuracy of any particular approximating solution by taking additional future costs into consideration. The approximation is naturally at its best when the complete future cost is considered and, for the Aoki (1967) linear control system, the resulting solutions are then proved to be optimal.
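The contrast drawn in the abstract, between a fully optimal policy obtained by backward iteration and a myopic rule that minimises only the immediate cost, can be sketched for a scalar linear-quadratic system. This is a hypothetical illustration, not the paper's own model or notation: it assumes dynamics x_{t+1} = a x_t + b u_t + noise with stage cost q x_t^2 + r u_t^2, and all function names below are invented for the example.

```python
def optimal_gains(a, b, q, r, T):
    """Backward dynamic-programming (Riccati) iteration for a scalar
    linear-quadratic system: S_T = q, and
        S_t = q + a^2 S_{t+1} - (a b S_{t+1})^2 / (r + b^2 S_{t+1}).
    The optimal control at stage t is u_t = -K_t x_t with
        K_t = a b S_{t+1} / (r + b^2 S_{t+1})."""
    S = q                      # terminal cost-to-go coefficient S_T
    gains = []
    for _ in range(T):
        K = a * b * S / (r + b * b * S)          # gain using next-stage S
        gains.append(K)
        S = q + a * a * S - (a * b * S) ** 2 / (r + b * b * S)
    gains.reverse()            # gains[t] is now the gain applied at stage t
    return gains

def myopic_gain(a, b, q, r):
    """Myopic one-step rule: choose u to minimise only the immediate
    cost q (a x + b u)^2 + r u^2, ignoring all later stages."""
    return q * a * b / (r + q * b * b)
```

A quick comparison with a = b = q = r = 1 shows the point made in the abstract: the myopic gain coincides with the optimal gain at the final stage (where no future cost remains) and underestimates it at earlier stages, and taking more of the future cost into account moves the rule toward the optimal one.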

2000 Mathematics Subject Classification: 62C10


Full text: PDF