Probability & Statistics Group Heidelberg-Mannheim

Workshop “Nonparametric Statistics meets Stochastic Control”

Aims and scope

One of the fundamental assumptions in stochastic control of continuous-time processes is that the dynamics of the underlying stochastic process is known. The aim of this workshop is to discuss how to weaken this assumption in the most fundamental way by considering a fully nonparametric setting. Tackling the resulting problem of finding optimal, purely data-driven strategies requires methods from both statistics for stochastic processes and stochastic control. The workshop will therefore first give an overview of relevant techniques from both fields and then discuss possible solutions for problems from different areas in small groups.

Speakers and Titles

  • Randolf Altmeyer (Humboldt-Universität Berlin): Nonparametric drift estimation for the stochastic heat equation from local measurements
  • Luis Alvarez Esteban (University of Turku): Ambiguity and control
  • Sören Christensen (Universität Hamburg): Nonparametric learning in Stochastic Control - Exploration vs. Exploitation
  • Sebastian Jobjörnsson (University of Gothenburg): Clinical trial design using Bayesian methods
  • Kristoffer Lindensjö (Stockholm University): Optimal stochastic control and the HJB
  • Moritz Schauer (Leiden University): Filtering and Bayesian inference for diffusions
  • Claudia Strauch (University of Mannheim): Sup-norm adaptive estimation of the characteristics of ergodic diffusions

Abstracts

Kristoffer Lindensjö: Optimal stochastic control and the HJB
Abstract: Suppose that we can control, to some extent, the evolution of a stochastic process and that we want to do this according to some optimality criterion. We then face a stochastic control problem. In this talk, I will formulate a general stochastic control problem in the setting of a controlled Itô diffusion (i.e., the process that we can control solves an SDE). The focus will be on the connection to the PDE known as the Hamilton–Jacobi–Bellman equation and how this can be used to solve the stochastic control problem. If time permits, I will also say something about filtering theory.
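As a point of reference, consider a one-dimensional controlled Itô diffusion $dX_t = \mu(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t$ with value function $V(t,x) = \sup_u \mathbb{E}\big[\int_t^T f(s, X_s, u_s)\,ds + g(X_T) \,\big|\, X_t = x\big]$. In this standard textbook setting (the talk's formulation may be more general), the Hamilton–Jacobi–Bellman equation reads
$$\partial_t V(t,x) + \sup_{u \in U}\Big\{ \mu(x,u)\,\partial_x V(t,x) + \tfrac12\,\sigma^2(x,u)\,\partial_{xx} V(t,x) + f(t,x,u) \Big\} = 0, \qquad V(T,x) = g(x),$$
where $U$ is the set of admissible control values; a verification theorem then turns a sufficiently regular solution of this PDE, together with the maximizing control $u^*(t,x)$, into a solution of the control problem.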
Luis Alvarez Esteban: Ambiguity and control
Abstract: Ambiguity is known to have an accelerating effect on the optimal timing of standard irreversible investment decisions. This effect is especially pronounced when the underlying is highly volatile, due to the interaction between credibility and volatility via the likelihood ratio. In this talk, I will consider the impact of ambiguity on other stochastic control settings and analyze to what extent the conclusions obtained in an optimal stopping setting remain valid within the considered class of stochastic control problems.
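One standard way to make the role of the likelihood ratio explicit (a hedged illustration; the talk may use a different specification of ambiguity) is $\kappa$-ignorance: for a discount rate $r$ and payoff function $g$, the decision maker solves the robust stopping problem
$$\sup_{\tau}\;\inf_{Q \in \mathcal{Q}^{\kappa}} \mathbb{E}^{Q}\big[e^{-r\tau} g(X_\tau)\big],$$
where $\mathcal{Q}^{\kappa}$ is the set of measures whose density processes with respect to the reference measure are of the form $\exp\big(-\int_0^t \theta_s\,dW_s - \tfrac12\int_0^t \theta_s^2\,ds\big)$ with $|\theta_s| \le \kappa$. The parameter $\kappa$ measures the degree of ambiguity, and since a Girsanov kernel $\theta$ distorts the drift by $\sigma\theta$, the worst-case distortion is tied directly to the volatility of the underlying.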
Sebastian Jobjörnsson: Clinical trial design using Bayesian methods
Abstract: I will give an introduction to the Bayesian, decision-theoretic approach to the optimisation of clinical trials. First, the method is compared to the classical, frequentist approach in the canonical problem of selecting the sample size of a trial. Next, the method is applied to the problem of optimising an adaptive enrichment design, in which a second decision stage allows for exploiting the information gathered in the first. Formally, the Bayesian framework can also be used to deal with sequential problems involving any (finite) number of stages. In principle, backward induction can be used to solve such problems, but in practice the required computations typically become intractable as the number of stages increases. Sometimes, progress can be made towards a solution by moving to continuous time and then applying optimal stopping theory for stochastic processes. This route is demonstrated for a particular example known as Anscombe's problem. Finally, I will briefly discuss a natural generalisation of Anscombe's problem that leads to a stochastic optimal control problem: instead of finding the optimal stopping rule for testing a new medical treatment, the objective would be to sequentially determine the shape of the unknown dose-response curve for the treatment. The talk targets an audience with some basic background in probability and stochastic processes; the focus will be on applications of the theory rather than purely mathematical results.
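To make the backward-induction step concrete, here is a minimal Python sketch of a Bayesian sequential sampling problem; the model, utilities and constants are purely illustrative assumptions and not those of the designs discussed in the talk.

# A minimal backward-induction sketch for a Bayesian sequential trial
# (illustration only; the model, utilities and constants are hypothetical).
#
# Model: patients respond with unknown probability p ~ Beta(A0, B0).
# At each of at most N stages we either stop (and collect the posterior
# expected utility of the best approve/reject decision) or pay a cost
# COST to observe one more patient. Backward induction over the number
# of observed successes and failures gives the optimal stopping rule.

from functools import lru_cache

A0, B0 = 1.0, 1.0      # Beta prior parameters (assumption)
N = 50                 # maximum number of patients (assumption)
COST = 0.01            # cost per additional observation (assumption)
GAIN = 1.0             # reward scale for approving an effective treatment
P_MIN = 0.5            # treatment counts as effective if p > P_MIN

def terminal_utility(s, f):
    """Posterior expected utility of the best terminal decision."""
    a, b = A0 + s, B0 + f
    p_mean = a / (a + b)
    approve_value = GAIN * (p_mean - P_MIN)   # crude approval utility
    return max(approve_value, 0.0)            # rejecting yields 0

@lru_cache(maxsize=None)
def value(s, f):
    """Optimal expected utility after observing s successes, f failures."""
    stop_value = terminal_utility(s, f)
    if s + f >= N:
        return stop_value
    a, b = A0 + s, B0 + f
    p_success = a / (a + b)                   # predictive prob. of a success
    continue_value = (-COST
                      + p_success * value(s + 1, f)
                      + (1 - p_success) * value(s, f + 1))
    return max(stop_value, continue_value)

if __name__ == "__main__":
    print("value of the trial before any data:", round(value(0, 0), 4))

The same recursion over posterior states is what becomes intractable as the number of stages and the state space grow, which motivates the continuous-time approximations mentioned above.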
Moritz Schauer: Filtering and Bayesian inference for diffusions
Abstract: In the context of Bayesian inference for indirectly observed diffusion processes, we derive a representation of the conditional distribution as a change of measure, to be embedded as a step in a Monte Carlo procedure targeting the full posterior of parameters and diffusion path. The conditional (or smoothing) distribution of a nonlinear diffusion process, given observations produced by an abstract observation operator up to a given time, is expressed as a change of measure on path space with respect to a diffusion process with tractable dynamics resembling the conditional process. The technique is based on solving the reverse-time filtering problem for a linear approximation of the diffusion and on a change of measure that corrects for the difference between the linear approximation and the true process. It applies to continuous observations of coordinate processes, to discrete observations of coordinate processes with or without error, and even to observations of passage times.
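The basic ingredient is a likelihood ratio on path space. For two scalar diffusions with the same diffusion coefficient, say the target $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ with law $\mathbb{P}$ and a tractable proposal $d\tilde X_t = \tilde b(\tilde X_t)\,dt + \sigma(\tilde X_t)\,dW_t$ with law $\tilde{\mathbb{P}}$, Girsanov's theorem gives (a generic illustration, not the specific guided construction of the talk)
$$\frac{d\mathbb{P}}{d\tilde{\mathbb{P}}}\bigg|_{\mathcal{F}_T}(X) = \exp\!\left( \int_0^T \frac{b(X_t) - \tilde b(X_t)}{\sigma^2(X_t)}\,\big(dX_t - \tilde b(X_t)\,dt\big) \;-\; \frac12 \int_0^T \frac{\big(b(X_t) - \tilde b(X_t)\big)^2}{\sigma^2(X_t)}\,dt \right).$$
In the setting of the talk, the proposal is built from a linear process whose reverse-time filtering problem is tractable, and a change of measure of this type corrects for the mismatch with the true nonlinear dynamics.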
Claudia Strauch: Sup-norm adaptive estimation of the characteristics of ergodic diffusion processes
Abstract: We consider the question of estimating the characteristics of a diffusion process X, given as the solution of an SDE, based on continuous observations. The unknown drift b is supposed to belong to a nonparametric class of smooth functions of unknown order. Focusing on the case of ergodic diffusions, where the Markov process X admits an invariant measure, we present fully data-driven procedures for estimating the underlying invariant density and the drift b. While the proposed scheme does not depend on the smoothness of the estimated function, it still yields estimators which attain the optimal rate of convergence, with the quality of estimation measured by the sup-norm risk. In this talk, we further discuss weak convergence properties of estimators of the invariant density and comment on their implications.
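For orientation, classical kernel-type estimators of the invariant density and of the drift from a continuously observed path $(X_s)_{0 \le s \le T}$ take the form
$$\hat\rho_{T,h}(x) = \frac{1}{T}\int_0^T K_h(x - X_s)\,ds, \qquad \hat b_{T,h}(x) = \frac{\int_0^T K_h(x - X_s)\,dX_s}{\int_0^T K_h(x - X_s)\,ds},$$
with $K_h(\cdot) = h^{-1}K(\cdot/h)$ for a kernel $K$ and bandwidth $h > 0$. The fully data-driven procedures of the talk can be thought of as estimators of this type combined with an adaptive bandwidth choice; this is only an orienting sketch, not the exact construction.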
Randolf Altmeyer: Nonparametric drift estimation for the stochastic heat equation from local measurements
Abstract: It is well-known that parameters in the drift of a stochastic ordinary differential equation, observed continuously on a time interval $[0, T]$, can generally only be estimated consistently if either $T \to \infty$, the driving noise shrinks to zero, or a sequence of independent samples is observed. For stochastic partial differential equations, on the other hand, the situation is quite different. Let $(X(t,\cdot))_{0 \le t \le T}$ be the solution of the stochastic heat equation $dX(t,x) = \Delta_\vartheta X(t,x)\,dt + dW(t,x)$, $x \in \Lambda \subset \mathbb{R}^d$, where $\Delta_\vartheta(\cdot) = \mathrm{div}(\vartheta \nabla(\cdot))$ is the weighted Laplace operator for an unknown function $\vartheta \in C^1(\Lambda)$. Given a local measurement $(\langle X(t,\cdot), K_{h,x_0}\rangle)_{0 \le t \le T}$ with spatial resolution $h > 0$, where $x_0 \in \Lambda$ and $K_{h,x_0}(x) = h^{-d/2} K(h^{-1}(x - x_0))$ for a smooth kernel $K$ with compact support, we derive an MLE-inspired estimator for $\vartheta(x_0)$, which achieves the optimal rate $h^{-1} T^{1/2}$. This means that consistent estimation of drift parameters is possible also for fixed $T < \infty$, using only local information around $x_0$. We also discuss central limit theorems and the question of efficiency.
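For orientation, write $X_h(t) := \langle X(t,\cdot), K_{h,x_0}\rangle$ and $X_h^{\Delta}(t) := \langle X(t,\cdot), \Delta K_{h,x_0}\rangle$ for the local observation processes. An MLE-type estimator in this setting can be sketched as
$$\hat\vartheta_h(x_0) = \frac{\int_0^T X_h^{\Delta}(t)\,dX_h(t)}{\int_0^T X_h^{\Delta}(t)^2\,dt};$$
this is only an illustration of the general form such estimators take, and the exact construction in the talk may differ.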
Sören Christensen: Nonparametric learning in stochastic control - Exploration vs. exploitation
Abstract: One of the fundamental assumptions in stochastic control of continuous-time processes is that the dynamics of the underlying (diffusion) process is known. In practice, however, this assumption is usually not fulfilled. On the other hand, over the last decades a rich theory for nonparametric estimation of the drift (and volatility) of continuous-time processes has been developed. The aim of this talk is to make a first (small) step towards bringing together techniques from stochastic control with methods from statistics for stochastic processes, in order to both learn the dynamics of the underlying process and control it well at the same time. To this end, we study a toy example motivated by optimal harvesting, mathematically described as an impulse control problem. One of the problems that immediately arises is an exploration vs. exploitation trade-off, as is well known in machine learning. We propose a way to deal with this issue and analyse the proposed strategy asymptotically.
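To illustrate the kind of plug-in, "explore, then exploit" logic involved, here is a toy Python sketch; the model, constants and the crude plug-in rule below are illustrative assumptions only and not the strategy analysed in the talk.

# Toy sketch of a data-driven harvesting strategy (illustration only).
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 1.0          # known volatility (assumption)
DT = 0.01            # Euler step size

def true_drift(x):
    """Drift unknown to the controller: a logistic-type growth term (assumption)."""
    return 2.0 * x * (1.0 - x / 3.0)

def simulate(x0, n_steps, drift, harvest_at=None):
    """Euler scheme; if harvest_at is set, harvest down to x0 whenever the
    state exceeds the threshold and record the harvested amount."""
    x, path, reward = x0, [x0], 0.0
    for _ in range(n_steps):
        x += drift(x) * DT + SIGMA * np.sqrt(DT) * rng.standard_normal()
        if harvest_at is not None and x >= harvest_at:
            reward += x - x0
            x = x0
        path.append(x)
    return np.array(path), reward

def kernel_drift_estimate(path, h=0.3):
    """Nadaraya-Watson-type drift estimator from the discretely sampled path."""
    increments = np.diff(path) / DT
    states = path[:-1]
    def b_hat(x):
        w = np.exp(-0.5 * ((x - states) / h) ** 2)
        return float(np.sum(w * increments) / max(np.sum(w), 1e-12))
    return b_hat

# Exploration: observe the uncontrolled process and estimate the drift.
explore_path, _ = simulate(x0=1.0, n_steps=5_000, drift=true_drift)
b_hat = kernel_drift_estimate(explore_path)

# Plug-in step: choose the harvesting threshold that maximises the Monte Carlo
# long-run reward rate under the *estimated* dynamics.
candidates = np.linspace(1.5, 4.0, 11)
rates = [simulate(1.0, 2_000, b_hat, harvest_at=y)[1] / (2_000 * DT) for y in candidates]
y_star = candidates[int(np.argmax(rates))]

# Exploitation: apply the plug-in threshold to the true (still unknown) process.
_, reward = simulate(1.0, 20_000, true_drift, harvest_at=y_star)
print(f"plug-in threshold: {y_star:.2f}, realised reward rate: {reward / (20_000 * DT):.3f}")

In the setting of the talk, learning and controlling are not separated into phases like this: the strategy has to trade off gathering information against harvesting revenue along a single trajectory, which is precisely the exploration vs. exploitation issue mentioned above.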

Program

July 18th

09.30           Welcome
10.00-10.45     Kristoffer Lindensjö
10.45-11.15     Coffee break
11.15-12.00     Luis Alvarez Esteban
12.00-12.45     Sebastian Jobjörnsson
12.45-14.15     Lunch break
14.15-15.00     Claudia Strauch
15.00-15.45     Moritz Schauer
15.45-16.15     Coffee break
16.15-18.00     Working in small groups
18.00-          Conference dinner

July 19th

09.30-10.15     Randolf Altmeyer
10.15-11.00     Sören Christensen
11.00-11.30     Coffee break
11.30-12.30     Presentations of working groups
12.30-14.00     Lunch break and closing

Practical Information

The workshop “Nonparametric Statistics meets Stochastic Control” will be held on Wednesday, July 18, and Thursday, July 19, 2018 at IWH (International Academic Forum Heidelberg), Hauptstraße 242, Heidelberg.
The workshop is organized by the Research Training Group “Statistical Modeling of Complex Systems and Processes”, Heidelberg/Mannheim.

There is no participation fee. Please announce your participation by email to Sanja Juric (juric(at)uni-mannheim.de). Please note that the number of participants is limited and that registration is on a first-come, first-served basis.

Organizers

  • Cathrine Aeckerle (Mannheim)
  • Sören Christensen (Hamburg)
  • Claudia Strauch (Mannheim)