QuantMinds International heads to Lisbon, Portugal this year, hosting an exploration of new ideas, evolving theories and technological developments. For the past twenty-five years, QuantMinds (previously known as Global Derivatives) has been a mainstay of the quant community. What began primarily as a conference on option pricing has changed over the years, reflecting the fact that quants now work in many different areas within finance, and outside it too.
So what will I be speaking about at QuantMinds this year? My presentation will focus primarily on the use of Big Data for trading currency markets. Whilst the term Big Data seems ubiquitous, in practice my conversations suggest that participants in currency markets are still in the early stages of using Big Data, or at least unusual datasets. It is also key to define what we mean by Big Data. When dealing with large or unusual datasets in the context of trading, I find it important to have a hypothesis before you even sit down to examine a dataset. I would also always advocate simplicity: try to use the simplest statistical technique when developing a trading strategy, and only add complexity when necessary.
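To illustrate what "the simplest statistical technique" can mean in practice, here is a minimal sketch of a moving-average momentum rule in Python. The rule itself, the window length and the simulated price series are all hypothetical choices for illustration, not anything from the talk:

```python
import numpy as np
import pandas as pd

def momentum_signal(prices: pd.Series, window: int = 20) -> pd.Series:
    """Simple momentum rule: go long (+1) when the price is above its
    rolling mean, short (-1) when below. About as simple as a
    systematic trading rule gets."""
    ma = prices.rolling(window).mean()
    return np.sign(prices - ma).fillna(0.0)  # 0 while the window fills

# Hypothetical daily prices (random walk), purely for illustration.
rng = np.random.default_rng(0)
prices = pd.Series(1.10 + np.cumsum(rng.normal(0.0, 0.002, 250)))
signal = momentum_signal(prices)
```

A rule this simple is easy to reason about and hard to overfit, which is exactly why it makes a sensible baseline before reaching for anything more complex.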
When looking at a particular dataset, it is important to understand what hypothesis you are testing, and also precisely what you want to use Big Data for. Do you want it to improve economic forecasts? Do you want to use it to make better price forecasts? Admittedly, a more data-driven approach might uncover possible relationships between market variables. The challenge, however, is to distinguish between relationships which are likely to persist and those which are merely spurious.
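The spurious-relationship trap is easy to demonstrate. Two completely independent random walks, standing in for two unrelated market variables, can show a sizeable correlation in levels, while the correlation of their changes (the quantity that actually matters for trading) sits near zero. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
# Two independent random walks: by construction, no true relationship.
x = np.cumsum(rng.normal(size=2000))
y = np.cumsum(rng.normal(size=2000))

# Correlation of the levels is often substantial purely by chance,
# because both series trend; it says nothing about a real link.
corr_levels = np.corrcoef(x, y)[0, 1]

# Correlation of the day-to-day changes is close to zero, as it
# should be for independent series.
corr_changes = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
```

This is one reason a hypothesis-first approach helps: correlating differenced, stationary quantities, rather than trending levels, filters out a large class of spurious findings before any strategy is built on them.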
During my talk, I’ll also present a case study using machine-readable news from Bloomberg News for trading FX. In a sense, news is not really “news” for traders: traders have used news as part of their decision-making process for many years. However, the idea that a computer can read news in an automated fashion is relatively new. Whilst the idea has begun to be used in equities, it is still relatively unusual to use machine-readable news in currency markets. My case study will show that it is possible to create FX trading strategies using news which have been historically profitable. I will discuss the differences between structured and unstructured news datasets, and outline the various steps we need to undertake to convert a news dataset into actual buy/sell signals for currency pairs. Another part of my study will look at the relationship between news volume and implied volatility, and how we can model volatility around Fed/ECB meetings using news.
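To give a flavour of what converting news into buy/sell signals involves, here is a deliberately simplified sketch. The records, column names and threshold are all hypothetical; a real structured news feed would carry far richer fields, and this is not the methodology of the case study itself:

```python
import pandas as pd

# Hypothetical machine-readable news records: a timestamp, the currency
# pair each story is tagged to, and a sentiment score in [-1, 1].
news = pd.DataFrame({
    "time": pd.to_datetime(["2023-05-01 08:00", "2023-05-01 09:30",
                            "2023-05-01 10:15", "2023-05-01 13:45"]),
    "pair": ["EURUSD", "EURUSD", "USDJPY", "EURUSD"],
    "sentiment": [0.6, 0.2, -0.5, 0.4],
})

def daily_signal(news: pd.DataFrame, threshold: float = 0.25) -> pd.Series:
    """Aggregate story-level sentiment to one net score per pair per
    day, then map it to +1 (buy), -1 (sell) or 0 (no trade)."""
    score = (news.set_index("time")
                 .groupby("pair")["sentiment"]
                 .resample("1D").mean())
    return score.apply(
        lambda s: 1 if s > threshold else (-1 if s < -threshold else 0))

signals = daily_signal(news)
```

Even this toy version shows the main steps: tagging stories to tradeable instruments, aggregating many stories into one score per period, and thresholding that score into a discrete trading decision.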
A big part (excuse the pun) of Big Data analysis, or indeed any financial analysis, is the tooling used to do the number crunching. Tools like Excel are not sufficient for processing data on the order of gigabytes, terabytes or even petabytes. Hence, the last part of the talk will focus on Python, which is often used for financial analysis. This section will be presented by Shih-Hau Tan, who will talk about ways of speeding up Python code.
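One common way of speeding up Python, offered here as a general illustration rather than a preview of that section, is replacing pure-Python loops with vectorised NumPy operations. A rolling mean computed via a cumulative sum gives the same answer as a Python loop, typically orders of magnitude faster on large arrays:

```python
import numpy as np

def rolling_mean_loop(x, window):
    """Pure-Python rolling mean: clear, but slow on large inputs."""
    out = []
    for i in range(window, len(x) + 1):
        out.append(sum(x[i - window:i]) / window)
    return out

def rolling_mean_numpy(x, window):
    """Vectorised rolling mean via a cumulative sum: each window mean
    is a difference of two cumulative sums divided by the window."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    return (c[window:] - c[:-window]) / window

x = np.random.default_rng(1).normal(size=50_000)
slow = rolling_mean_loop(x, 50)
fast = rolling_mean_numpy(x, 50)
assert np.allclose(slow, fast)  # identical result, far less work in Python
```

The speed-up comes from moving the inner loop from the Python interpreter into NumPy's compiled code, which is the same principle behind many of the more advanced acceleration tools (Numba, Cython and the like).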
Aside from my talk, I hope to spend the rest of the conference learning about new ideas, especially in the trading space, including alternative data and machine learning. I’ve attended this conference for the past few years and, without exception, I’ve found it the best quant conference, not only for learning about new ideas but, just as importantly, for meeting fellow quants.