Risk-sensitive reinforcement learning on partially observable Markov decision processes

The successful candidate will join a DFG-funded project whose goal is to develop a mathematical framework for optimal sequential decision making in the face of economic and perceptual uncertainty. The project aims to extend Partially Observable Markov Decision Processes (POMDPs) to incorporate risk, to assess the existence and uniqueness of optimal risk-sensitive policies, and to develop computationally tractable algorithms for solving the corresponding optimization problems. For previous work on risk-sensitive Markov Decision Processes, see: Shen, Stannat, and Obermayer (2013), SIAM J. Control Optim. 51, 3652ff.; Shen et al. (2014), Neural Comput. 26, 1298ff.; Shen, Stannat, and Obermayer (2014), arXiv:1403.3321.

Candidates should hold a recent PhD degree in mathematics (stochastics), computer science (machine learning), or a related field. Candidates with research experience in optimization and control will be preferred.

Application materials (CV, list of publications, abstract of the PhD thesis, copies of certificates, and two letters of reference) should be sent to:

Prof. Dr. Klaus Obermayer
MAR 5-6, Technische Universität Berlin, Marchstrasse 23
10587 Berlin, Germany
email: klaus.obermayer@tu-berlin.de

preferably by email.

Job location: Marchstrasse 23, 10587 Berlin, Germany

Contact and application information
Friday, June 30, 2017
Contact name: Klaus Obermayer