SIAM Journal on Control and Optimization, Vol. 48, No. 2, pp. 481-520, 2009
UTILITY MAXIMIZATION WITH HABIT FORMATION: DYNAMIC PROGRAMMING AND STOCHASTIC PDEs
This paper studies the habit-forming preference problem of maximizing total expected utility from consumption net of the standard of living, a weighted average of past consumption. We describe the effective state space of the corresponding optimal wealth and standard of living processes, identify the associated value function as a generalized utility function, and exploit the interplay between dynamic programming and Feynman-Kac results via the theory of random fields and stochastic partial differential equations (SPDEs). The resulting value random field of the optimization problem satisfies a nonlinear, backward SPDE of parabolic type, widely referred to as the stochastic Hamilton-Jacobi-Bellman equation. The dual value random field is characterized further in terms of a backward parabolic SPDE which is linear. Progressively measurable versions of stochastic feedback formulae for the optimal portfolio and consumption choices are obtained as well.
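For orientation, the habit-forming objective sketched in the abstract is commonly written as follows. This is a standard formulation with constant coefficients, given here only for illustration; the paper itself may use time-varying coefficients and a different notation.

```latex
% Standard of living (habit) process z(\cdot): an exponentially weighted
% average of past consumption c(\cdot), with decay rate \alpha > 0 and
% weight \delta > 0 (illustrative constants):
z(t) \;=\; z_0\, e^{-\alpha t} \;+\; \delta \int_0^t e^{-\alpha (t-s)}\, c(s)\, ds,
\qquad \text{equivalently} \qquad
dz(t) \;=\; \bigl(\delta\, c(t) - \alpha\, z(t)\bigr)\, dt .

% Habit-forming preference problem: maximize expected utility from
% consumption in excess of the standard of living, over admissible
% portfolio/consumption pairs (\pi, c) with c(t) > z(t):
\sup_{(\pi,\, c)} \; \mathbb{E} \int_0^T u\bigl(t,\; c(t) - z(t)\bigr)\, dt .
```

The constraint c(t) > z(t) reflects that utility is drawn from consumption net of the habit, which motivates the analysis of the effective state space of the wealth and standard of living processes mentioned above.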
Keywords: habit formation; generalized utility function; random fields; backward stochastic partial differential equations; feedback formulae; stochastic Hamilton-Jacobi-Bellman equation