
Webinar: Deep Probabilistic Programming for finance

Deep probabilistic programming (DPP) combines three fields: Bayesian statistics and machine learning, deep learning (DL), and probabilistic programming. In this webinar, our expert panel discussed DPP tools and related theory relevant for Bayesian forecasting and decision making with financial time series data and other types of financial data (e.g. limit order books, news, etc.).

Topics included:

  • How does DPP differ conceptually from frequentist statistics and machine learning?
  • Why represent probabilistic models as a computational graph?
  • What are the DPP tools, methodologies and applications that are most important for finance?
  • Is DPP the future for risk modeling using complex datasets?

Hear it from the experts...

The panel summarize their answers to the audience Q&A.

Is deep learning a type of deep probabilistic programming method?

Strictly speaking, deep learning by itself, as it is commonly known, is not a deep probabilistic programming method. Deep learning is an example of a deterministic method: it is purely algorithmic and not probabilistic. However, the types of data representations that deep learning permits are central constructs in DPP. DPP is really a combination of deep learning and probabilistic programming.
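The deterministic/probabilistic distinction above can be sketched in a few lines of plain Python. This is an illustrative toy, not from the webinar: the "networks" are fixed linear maps, and the probabilistic head's mean and scale values are arbitrary assumptions chosen for the example.

```python
import random
import statistics

def deterministic_predict(x):
    # A toy deterministic "network": a fixed map returning one point estimate.
    return 2.0 * x + 1.0

def probabilistic_predict(x, n_samples=1000):
    # A toy probabilistic head: the same mean, plus an (assumed) scale
    # parameter, so the prediction is a distribution rather than a number.
    mean, scale = 2.0 * x + 1.0, 0.5
    return [random.gauss(mean, scale) for _ in range(n_samples)]

point = deterministic_predict(3.0)    # a single number, no uncertainty
samples = probabilistic_predict(3.0)  # a predictive distribution
print(point)
print(statistics.mean(samples), statistics.stdev(samples))
```

The point is the return type: the deterministic model gives a number, while the probabilistic one gives samples whose spread quantifies uncertainty.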

How do you model graphical relationships in financial data?

Graphical relationships are based on subjective causal relationships: X caused Y. In many cases, establishing this causal relationship requires fundamental knowledge of the asset. For example, the effects of increased oil pipeline maintenance costs on the price of WTI crude. See here for further examples: https://kuscholarworks.ku.edu/bitstream/handle/1808/161/CF99.pdf;jsessionid=0ACADE1EA67D04B14C25BC8060F7B0F0?sequence=1

It is also possible to infer the graphical relationship (identification from a set of different structures) through a maximum likelihood estimate. This, of course, is challenging and will lead to a non-unique solution. Riccardo Rebonato recently wrote a book on coherent stress testing covering this area: http://onlinelibrary.wiley.com/book/10.1002/9781118374719
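As a hypothetical sketch of structure identification by maximum likelihood, the snippet below compares two candidate structures on synthetic data: "X and Y independent" versus "X causes Y", scoring each by its maximised Gaussian log-likelihood. All names and the data-generating process are assumptions made up for the illustration.

```python
import math
import random

def gaussian_loglik(values, mean, var):
    # Log-likelihood of data under a Gaussian with the given mean/variance.
    return sum(-0.5 * math.log(2 * math.pi * var)
               - (v - mean) ** 2 / (2 * var) for v in values)

random.seed(42)
x = [random.gauss(0, 1) for _ in range(500)]
y = [1.5 * xi + random.gauss(0, 0.3) for xi in x]  # Y is truly driven by X

# Structure A: Y independent of X -- fit a single Gaussian to Y.
mu_y = sum(y) / len(y)
var_y = sum((v - mu_y) ** 2 for v in y) / len(y)
ll_independent = gaussian_loglik(y, mu_y, var_y)

# Structure B: X -> Y -- fit Y|X by least squares, score the residuals.
beta = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
resid = [yi - beta * xi for xi, yi in zip(x, y)]
var_r = sum(r * r for r in resid) / len(resid)
ll_causal = gaussian_loglik(resid, 0.0, var_r)

print(ll_causal > ll_independent)  # the causal structure fits better here
```

Note the caveat from the answer above: likelihood alone cannot distinguish many structures (e.g. X→Y versus Y→X often score identically), which is one reason the solution is non-unique in practice.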

Do you have any suggestions when choosing a suitable online platform to do deep learning?

Cloud hosted services with a jupyter notebook and tensorflow are powerful and flexible.  In general, it's important to be able to have programmatic control of the data input and manipulation, rather than using a GUI. An important aspect of machine learning is how to provide the input to the machine and this is difficult to scale with GUIs.

What are your thoughts on combining RNNs and financial time series modelling with DPP? Do you foresee an advantage in training Bayesian RNNs vs classical time series models?

This is an excellent direction. The main advantage is having uncertainty estimates with the time series prediction, rather than just predicting the expected prices.  DPP enables a probabilistic representation of the time series prediction.
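A minimal sketch of that advantage, using an AR(1) model as a stand-in for an RNN (the mechanics of propagating parameter uncertainty are the same in spirit): instead of a single point forecast, we draw the autoregressive coefficient from an assumed Gaussian approximation to its posterior and obtain a predictive distribution for the next value. All numbers here are synthetic.

```python
import random
import statistics

random.seed(1)

# Synthetic AR(1) "price" series: x_t = 0.8 * x_{t-1} + noise.
phi_true = 0.8
series = [0.0]
for _ in range(300):
    series.append(phi_true * series[-1] + random.gauss(0, 0.2))

# Least-squares estimate of phi and a crude standard error.
num = sum(a * b for a, b in zip(series[:-1], series[1:]))
den = sum(a * a for a in series[:-1])
phi_hat = num / den
resid = [b - phi_hat * a for a, b in zip(series[:-1], series[1:])]
sigma = statistics.stdev(resid)
phi_se = sigma / den ** 0.5

# Point forecast vs a predictive distribution for the next value:
# sample phi from its (assumed Gaussian) posterior and add noise.
last = series[-1]
point_forecast = phi_hat * last
draws = [random.gauss(phi_hat, phi_se) * last + random.gauss(0, sigma)
         for _ in range(2000)]
s = sorted(draws)
lo, hi = s[50], s[-50]  # an approximate 95% predictive interval
print(point_forecast, (lo, hi))
```

The classical model stops at `point_forecast`; the probabilistic treatment also delivers the interval, which is what risk and decision making actually need.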

How can we trust our predictive uncertainty estimates/posterior predictive distributions in a practical financial setting, given well-known problems with common approximate inference algorithms like VI and HMC? (mode-seeking, high variance, difficulty in capturing multi-modal posteriors, etc.)

This is an important question, and the problems that you allude to present many challenges for researchers in this area. The short answer is that one must perform statistical tests to check that the posterior distribution captures the tails correctly. QQ-plots and tests such as Kolmogorov-Smirnov and Shapiro-Wilk can be applied to the marginals.
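One of those checks, sketched with only the standard library: a Kolmogorov-Smirnov statistic comparing posterior samples against a reference distribution. Here the "posterior samples" are drawn from the reference itself, purely so the example has a known answer; in practice you would pass in the marginal samples from your inference algorithm.

```python
import math
import random

def normal_cdf(x, mean=0.0, std=1.0):
    # CDF of a normal distribution via the error function.
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def ks_statistic(samples, cdf):
    # Largest distance between the empirical CDF and the reference CDF.
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        d = max(d, abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
    return d

random.seed(7)
posterior_samples = [random.gauss(0.0, 1.0) for _ in range(1000)]
d = ks_statistic(posterior_samples, normal_cdf)
print(d)  # a small D suggests the marginal matches the reference
```

In practice one would compare D against the usual critical values (roughly 1.36/sqrt(n) at the 5% level) or simply use `scipy.stats.kstest`, which wraps this computation.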

How is Bayesian analysis, which has a prior, different from a stochastic simulation, which also tries to span the search space?

Stochastic simulation samples from an unconditional probability distribution and does not assert a prior. Bayesian analysis often uses the Metropolis-Hastings method to approximate the marginal distribution function, which involves an accept-reject test to sample from a much more complicated target distribution. There is also a whole suite of so-called variational Bayesian methods, which are well known to be equivalent, under certain restrictive assumptions, to other techniques such as particle-filter methods. Also, Bayesian analysis is not about using stochastic processes per se (which is often how Monte Carlo methods are used in quant finance), although if you formulate the stochastic process in terms of the evolution of probability density functions, with the prior chosen as the historical density function, then the problem formulation looks very similar.
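The accept-reject step mentioned above can be shown in a minimal Metropolis-Hastings sampler. This is a generic textbook sketch, not code from the panel: the target is a standard normal known only up to a normalising constant, and the proposal is a Gaussian random walk.

```python
import math
import random

def unnormalised_target(x):
    # Proportional to a standard normal pdf; the constant is unknown to MH.
    return math.exp(-0.5 * x * x)

def metropolis_hastings(n_samples, step=1.0, start=0.0, seed=0):
    rng = random.Random(seed)
    x = start
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept-reject test: move with probability min(1, target ratio).
        ratio = unnormalised_target(proposal) / unnormalised_target(x)
        if rng.random() < ratio:
            x = proposal
        samples.append(x)
    return samples

chain = metropolis_hastings(20000)
burned = chain[2000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
print(mean, var)  # should sit near 0 and 1 for a standard normal target
```

Because the acceptance ratio only ever uses ratios of the target density, the sampler never needs the normalising constant, which is exactly why MH is useful for complicated posteriors.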
