When talking about the progression of model risk management, the subject can be divided into two distinct phases: the "dark" pre-SR11-7 period and the current "enlightened" post-SR11-7 period. However, now that banks have spent significant time and energy developing and maturing their model risk frameworks, a new wave of models has arrived in town, and eagle-eyed model risk managers cannot afford to let these evade their model risk frameworks.
SR11-7 defines a model as a quantitative method, system or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates. Applied strictly, this definition means new developments such as algorithmic (algo) trading and applications of artificial intelligence (AI) and machine learning (ML) should all be classed as models. To be clear: algo trading is the use of computer algorithms to execute trading strategies; AI is the use of computer systems to perform tasks normally requiring human intelligence, such as decision making; and ML is the use of statistical techniques to give computer systems the ability to "learn", or progressively improve performance on a specific task using data, without being explicitly re-programmed. Algo trading has developed significantly thanks to recent AI technological advances, and AI and ML are seeing other uses, including making credit decisions and fraud detection.
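To make the "learning from data without being explicitly re-programmed" idea concrete, here is a minimal sketch of an ML credit-decision model: a one-feature logistic model fitted by gradient descent. The data, feature, and threshold are entirely hypothetical, invented for illustration only.

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    """Fit a one-feature logistic model by stochastic gradient descent.

    The model is never given an explicit decision rule; it infers the
    boundary between 'good' and 'bad' applicants from the labelled data.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted default probability
            w -= lr * (p - y) * x                 # gradient step on the weight
            b -= lr * (p - y)                     # gradient step on the bias
    return w, b

# Toy training data: normalised debt-to-income ratio vs. observed default flag
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict_default(x):
    """Flag an applicant as a likely default if predicted probability > 0.5."""
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5
```

The fitted model then classifies unseen applicants, e.g. `predict_default(0.15)` versus `predict_default(0.85)`; note that a modelling error here (bad training data, a mis-set threshold) flows straight through to the credit decision, which is exactly the model risk the article describes.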
According to an FCA review of algo trading, many firms lack a robust and comprehensive governance framework for these new models and the FCA expects existing model risk management frameworks to be continually updated for all the new models being developed at such a fast pace.
What is the risk of getting this wrong?
As with many model types, algo models carry the risk of financial loss if they are not operating appropriately. But these models may also have their own particular risks. For example, where AI is used to make decisions in place of people, there is the risk of incorrect or inappropriate decisions, due to modelling errors or inappropriate assumptions, and the subsequent consequences of those bad decisions.
Also, in the case of trading algorithms that drive fast-paced, high-frequency market activity, swift market activity can result in an equally swift accumulation of losses. And with multiple market participants utilising algorithms that may not have undergone sufficient testing and validation, there is a potential systemic model risk which could result in unintended large-scale market abuse and financial disaster.
Where these new types of models do sneak into the model inventory, and as a result face the rigorous model risk policies, the model risk function may lack the relevant expertise to perform a sufficient review or, more often, the capacity and resource to bring them into the governance framework efficiently and quickly.
What should be done about this?
Regulators have certainly shown they are keen to try to solve the problem with both the FCA and PRA conducting reviews across firms and publishing guidance and supervisory statements earlier this year with regards to algo trading.
Firms should define a robust approval process alongside a minimum set of controls to be carried out at a frequency, with a level of rigour, within the boundaries of risk appetite
Firstly, a firm is expected to explicitly approve the governance framework, controls and policies for algo trading and other new model types. This includes identifying and empowering a specific management body to manage the model risk of such models, resourced with suitable expertise. The next step should be to re-assess the definition of a model within the bank to ensure all these new model types are covered. These new categories of models should be listed in a (dedicated) model inventory, together with their associated controls and other mitigants, e.g. kill-switch procedures.

Further, any new or existing AI/ML/algo models, or other new model types in general, should be required to follow the same testing steps as more established model types. Firms should indeed define a robust approval process alongside a minimum set of controls, carried out at a defined frequency and with a defined level of rigour, within the boundaries of risk appetite. In particular, risk controls should cover counterparty exposure limits, order attribution, message rates, order frequency, stale data, and order and position sizes. Tests specific to these model types should also be implemented. For instance, in the case of algos, additional stress testing of IT systems in dynamic testing environments may be required to understand the impact of extreme events and avoid a potential future disaster in the event of IT system errors.
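As an illustration only, the pre-trade controls mentioned above (order size limits, message rates, stale market data, kill-switch procedures) might be sketched as a simple gating layer that every outgoing order must pass. The class name, thresholds, and interface here are hypothetical, not drawn from any regulatory text.

```python
import time
from collections import deque

class PreTradeRiskGate:
    """Illustrative pre-trade control layer; all thresholds are hypothetical."""

    def __init__(self, max_order_size=1_000, max_msgs_per_sec=50,
                 max_quote_age_sec=1.0):
        self.max_order_size = max_order_size
        self.max_msgs_per_sec = max_msgs_per_sec
        self.max_quote_age_sec = max_quote_age_sec
        self.recent_msgs = deque()   # timestamps of recent order messages
        self.kill_switch = False     # set True to halt all trading at once

    def check(self, order_size, quote_timestamp, now=None):
        """Return (allowed, reason) for a proposed order."""
        now = time.time() if now is None else now
        if self.kill_switch:
            return False, "kill switch engaged"
        if order_size > self.max_order_size:
            return False, "order size limit breached"
        if now - quote_timestamp > self.max_quote_age_sec:
            return False, "stale market data"
        # Message-rate control over a one-second sliding window.
        while self.recent_msgs and now - self.recent_msgs[0] > 1.0:
            self.recent_msgs.popleft()
        if len(self.recent_msgs) >= self.max_msgs_per_sec:
            return False, "message rate limit breached"
        self.recent_msgs.append(now)
        return True, "ok"
```

In this sketch the kill switch is just a flag checked before every order; the point is that the halt procedure, like the other limits, sits outside the trading algorithm itself and is owned by the risk control framework rather than the model.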
This does not necessarily require a whole new model risk structure. It may be possible to adapt existing frameworks to encompass new models and ensure adherence to existing policies. In this way institutions can adapt and respond to fast moving changes; for example, with Libor reforms around the corner there are potentially significant changes to existing interest rate models looming on the horizon.