Following a period of relative calm in the public markets since the global financial crisis, the industry has recently seen significant changes: a deflationary environment, new patterns in volatility, and questions about whether and when interest rates should rise (and rise again). This panel discussion, held at QuantMinds Americas, tackled some of the key questions about quant research, investment strategies, and the emerging technologies that support and influence firm-level investment activity and market dynamics.
Aymeric Kalife, Associate Professor, Paris Dauphine University
Gregory Pelts, Quant, Wells Fargo
Ben Bowler, Managing Director and Global Head of Equity Derivatives Research, BAML
Murad Nayal, Managing Director, Global Head of Risk Informatics, Goldman Sachs
Rahim Esmailzadeh, Head of Portfolio Analytics, Magnetar Capital
Outlier events: handling the knowns and the unknowns
For any quant firm, in-house research is critical to strategy development, while advances in tools, methodologies, and the underlying data-generation process shape the directions and opportunities that emerge. Since the GFC, we have observed an increase in local volatility risk and rising fragility in markets: markets may sit in a fairly tranquil state and then suddenly experience sharp drawdowns, flash crashes, or so-called tantrums (e.g. the taper tantrum – which implies the kind of volatility sometimes exhibited by toddlers!). In 2018, outlier events occurred at four times the frequency seen prior to 2008, and those disturbances spanned a range of asset classes. The climate for investment and risk management thus presents substantial challenges for modelling and for building profitable quantitative strategies.
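The sharp drawdowns described above are usually quantified as the maximum peak-to-trough decline of a price series. As a minimal illustration (not from the panel, and with made-up prices), a drawdown monitor can be sketched as:

```python
def max_drawdown(prices):
    """Return the largest peak-to-trough decline as a fraction of the running peak."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)                      # track the running high-water mark
        worst = max(worst, (peak - p) / peak)    # decline relative to that peak
    return worst

# A tranquil series punctuated by a sudden drop, then a partial recovery
series = [100, 101, 102, 103, 90, 95, 97]
print(max_drawdown(series))  # ≈ 0.126 (peak 103 → trough 90)
```

A fragility-style monitor would flag a regime change when this statistic spikes relative to its recent history, rather than looking at its absolute level alone.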
As part of the answer to those challenges, quants building models to address market and liquidity risk are also looking for suitable applications of machine learning. ML may help in building better forecasting models, for example by offering insights on regime changes and capturing market dynamics effectively. Quants are also exploring ML as a processing mechanism, where tasks such as model selection, feature selection, and calibration may be automated. While the technological advances are exciting, it is important to stay grounded in the practical world. We are reminded that a good model is a consistent model that can be implemented efficiently; it should not take two days to produce a price from a derivatives model, for example. Nevertheless, the promise of emerging tools and technologies is always alluring; naturally, quants will continue to evaluate and employ what they have to offer.
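The regime-change idea mentioned above can be made concrete in its simplest form: compute a rolling volatility of returns and label each window "calm" or "stressed" against a threshold. This is a toy sketch with invented returns and an arbitrary threshold, not a method endorsed by the panellists; real regime models would be fitted statistically.

```python
import statistics

def rolling_vol(returns, window=5):
    """Rolling sample standard deviation of a return series."""
    return [statistics.stdev(returns[i - window:i])
            for i in range(window, len(returns) + 1)]

def label_regimes(vols, threshold):
    """Label each window 'stressed' or 'calm' by comparing rolling vol to a threshold."""
    return ['stressed' if v > threshold else 'calm' for v in vols]

returns = [0.001, -0.002, 0.001, 0.000, 0.002,   # quiet period
           0.030, -0.045, 0.050, -0.040, 0.035]  # turbulent period
vols = rolling_vol(returns, window=5)
print(label_regimes(vols, threshold=0.01))  # first window calm; later windows stressed
```

An ML approach would replace the fixed threshold with a fitted model (e.g. a mixture or hidden-state model) and add features beyond volatility, but the labelling logic is the same in spirit.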
System dynamics – what we do influences others
If we consider research initiatives, strategy deployment, and feedback loops, we find that some of the trends in quant finance become self-fulfilling prophecies. We should be more conscious of the fact that we often pave the way for trends that others will follow. One of the differences between quant finance and traditional sciences is that in the other domains, researchers would not claim to have solved a problem that was actually still an open question. In finance, however, trades must take place and even if quants or the traders themselves know that a pricing mechanism is flawed, for example, that will not stop the flows. As a corollary to that, the development of models may lead to new products and their availability will change the investor behaviour – there are impacts in the market both short and long term. ETFs are a good example of radical change over time, bringing the active-passive debate to the forefront and opening up new forms of investing, hedging, and risk mitigation.
Machine learning and data science are also shaping the environment and trading behaviours of the future. One area in particular is the effort to bring quantitative techniques, ML, and data-driven analysis into other asset classes, from credit to private equity, where both traditional and alternative data sets may provide fresh insights. Furthermore, it is recognised that the masses of data generated by investment firms are valuable in themselves – we should think not only about the data sets being offered by a growing array of vendors, but also about the data sets produced through a firm’s daily activities in the markets – this is also a competitive asset.
With data science more generally, the problem is to identify the most promising opportunities, because a great deal of data has not yet been captured, modelled, or used properly. Credit score data, for example, contains many factors that are embedded in the informational universe but are difficult to extract and pull into the credit decision process. Much more data is now available in many other areas, including customer behaviour, social media, transaction, supply chain, and geospatial data, but it requires structure and coherent modelling before it can be leveraged properly.
Liquidity, fragility, and alpha generation
Pulling back to look at the broader market, liquidity is a major topic for research and discussion, and it comes in many flavours. In some cases, the focus is on illiquid instruments, where the market is not very active and the depth, frequency, and range of fresh pricing data are limited. In such cases, we cannot build good models based on prices or trading volumes alone. However, we can look at other instruments with appropriate characteristics, develop approaches to detecting similarities, and build models that translate from one to the other.
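The similarity-based translation described above can be sketched in its most basic form: describe each instrument by a small feature vector and use the nearest liquid instrument as a pricing proxy. The bond names, features, and numbers below are entirely hypothetical, and a production model would use richer features and a fitted mapping rather than raw Euclidean distance.

```python
import math

def nearest_proxy(target, candidates):
    """Pick the liquid instrument whose feature vector is closest (Euclidean) to the target."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(candidates, key=lambda name: dist(target, candidates[name]))

# Hypothetical features: (duration, credit spread / 100bp, coupon)
illiquid_bond = (7.2, 1.5, 4.0)
liquid_bonds = {
    "BOND_A": (2.0, 0.5, 2.5),
    "BOND_B": (7.0, 1.4, 4.1),
    "BOND_C": (12.0, 3.0, 5.5),
}
print(nearest_proxy(illiquid_bond, liquid_bonds))  # BOND_B
```

Once a proxy (or a weighted set of proxies) is identified, its observed prices and volumes can anchor valuation and liquidity estimates for the illiquid instrument.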
Whether we are looking at liquid or illiquid markets, the real question is, “How much alpha is left?” This has been a very tough environment for alpha generation, as the substantial outflows from hedge funds in recent years have indicated. Yet we cannot always attribute failure to a lack of sophistication or to the wrong models; alpha starvation pervades the entire financial system due to heavy regulation, central bank policies, and massive crowding into the same strategies. As we have seen before, when certain strategies stop working, everyone tends to pull out at around the same time.
So, as quants continue their research on specific assets and strategies, they should also keep the vast investment universe in mind: what is the pool of alpha that we are all competing for? How is that pool ebbing and flowing, and how should we deal with it? In part, we can reflect on trends in technology. With high-frequency trading over the past decade, an arms race developed in hardware, operating systems, and networks – right down to the connectors and routers – to go faster than everyone else. These days, such technology is cheaper and more commonplace across the industry. The new arms race is centred on data acquisition and analysis – and even though many players may enter, a competitive advantage will remain for the few firms that truly get it right, along with the size, ability to invest, and pockets deep enough to pay all of the data vendors.
Finally comes the question of fees. As hedge funds, for example, have come under pressure on both alpha generation and fees, there has been a big push to take traditional hedge fund strategies and systematise them. There are also opportunities to apply the same methods to other asset classes. Whether the machines can really come to the rescue in these or other areas of investment remains to be seen, but quant researchers will be among the first to find out.