Postdoctoral Position (two years) in Optimization for Statistical Learning

Optimization theory is vital to modern statistical learning and is at the forefront of advances in AI. The main objective of this postdoctoral position is to develop the next generation of optimization tools for modern statistical learning, and potentially to explore their applications in AI, including medical imaging, automated quality control, and self-driving cars, evaluated on both simulated and real data.

Within this broad framework, the successful candidate is encouraged to develop their own research agenda, in close collaboration with mentors and colleagues. Potential areas of interest include, but are not limited to:
• Training generative adversarial networks
• Nonconvex algorithms for linear inverse problems (such as compressive sensing)
• Robust optimization and defence against adversarial examples in deep neural networks
• Role of over-parametrization in training and generalization of deep neural nets
• Global geometry of nonconvex problems
• Efficient and scalable algorithms for constrained nonconvex optimization
• Application of Langevin dynamics and other Monte-Carlo techniques in optimization
• Online and storage-optimal algorithms for large-scale convex optimization

Job location:
MIT Building, 3rd floor
901 87 Umeå, Sweden
Contact and application information