For centuries, futuristic visions in which quants ‘rule’ via ever-better mathematical and scientific solutions to societal problems have abounded (see the mathematician Condorcet’s idea of a rational society shaped entirely by scientific knowledge). Better algorithms, data management and assessment tools, including ‘big data’, ‘AI’, ‘robots’ and other technological advances, have arrived. ‘Rational’, math-based assessment is on a roll. The claim that “Models Will Run the World” (1) is among the recent examples of this scientist-quant-as-best-solution mindset.
There are two major ‘flies in that ointment’ that risk managers need to address.
The Future of Risk Management Issue
The idea of using ‘rational’ mathematics, computers and mathematicians’ and scientists’ models to best assess, and ultimately run, things is attractive. It promises to replace notoriously faulty human judgments, including those of deep-knowledge experts, the shallower wisdom of crowds and other human-centered solutions. But is it correct?
Experts are often wrong, they disagree, and a ‘correct’ expert in one case is often wrong in other circumstances. Alternatively, if the ‘wisdom of crowds’ approach worked well enough, it is hard to see why we continue to have financial panics, divided societies or PowerPoint presentations.
The problem with the quants-rule vision of better judgment and outcomes is that it has proven only partially, often very partially, useful in real-world assessment. The specific context, scale and scope of each problem seem to matter a lot.
If the system is very well understood, tightly designed and has stable features, then models, algorithms and the like are good and getting better. Election polling is usually good (despite recent misses). Country-level risk modeling often is not, and international risk modeling is worse. Assessing a customer’s future risk of credit default is good in well-understood conditions. Assessing whole-society future change is not. Indeed, assessment of normal change is only OK, while assessment of rare, large-scale and large-impact change is not even that. My overall take is that quant solutions do well unless something important or foundational changes in the topic being assessed (2).
In short, quant solutions fail when you need them most.
My informal queries of very experienced risk managers show self-assessment scores, for their firms and industry, of only 2-3 on a 5-point scale (5 = excellent) for the ability to interpret machine outputs into useful real-world assessments of normal change, and only 1-2 for judgments about rare or large-scale change.
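The failure mode described above can be sketched in a toy example (hypothetical data, not any particular firm’s model): an ordinary least-squares model fitted during a stable regime predicts well within that regime, then fails badly once the underlying relationship changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stable regime: the outcome depends linearly on x with slope +2.
x_train = rng.uniform(0, 1, 500)
y_train = 2.0 * x_train + rng.normal(0, 0.1, 500)

# Fit an ordinary least-squares line on stable-regime data.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Evaluate in the same regime: errors stay small.
x_same = rng.uniform(0, 1, 500)
y_same = 2.0 * x_same + rng.normal(0, 0.1, 500)
err_same = np.mean(np.abs((slope * x_same + intercept) - y_same))

# Foundational change: the relationship flips sign (slope -2).
# The fitted model has no way to know this happened.
x_shift = rng.uniform(0, 1, 500)
y_shift = -2.0 * x_shift + rng.normal(0, 0.1, 500)
err_shift = np.mean(np.abs((slope * x_shift + intercept) - y_shift))

print(f"mean error, stable regime: {err_same:.3f}")
print(f"mean error, after break:   {err_shift:.3f}")
```

The in-regime error stays near the noise level while the post-break error is more than an order of magnitude larger; no amount of refitting on pre-break data fixes this, which is the sense in which the model fails exactly when it matters.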
Worse, at least since C. P. Snow’s famous 1959 Rede Lecture at Cambridge University, the growing mutual incomprehension between scientists (mathematicians especially) and the rest of society has been acknowledged (3). Bridging that incomprehension to produce better outcomes remains a concern in many fields, risk management among them. Its present impact at the math (model)-human judgment interface is profound. Mathematicians do not usually ‘get’ holistic societal nuances and context. Domain experts, for their part, lack real-world comprehension of math outputs.
The Latest Proposed Best-Practice Solution
While leading media venues now advise counter-quant-rule assessment solutions, telling us that “Investors, Look Up From Your Algorithms” (4) and that “Investing’s next frontier is ‘quantamental’: A merger of computing power and human expertise…” (5), these merger solutions cannot work because of the continuing incomprehension problem: C. P. Snow’s Rede Lecture point.
‘Looking up’ won’t work unless the looker ‘gets’ context-specific, nuanced societal reality. Mergence is not useful unless the mutually non-comprehending communities of practice merge properly. IARPA wants to solve this at ‘scale’, which I argue cannot work (6).
Best Practice Mergence at the Math-Human Judgment Interface
What is being ‘quantamental’ operationally?
Being usefully ‘quantamental’ means better application of ‘tool’ outputs by humans whose talents are relevant to the scale and scope of each particular assessment. Thus, contextually, having the right human is as important as having the right tool.
Quantamental assessment is neither a ‘merger’ in Kurzweil’s sense of building a synthesis of man and machine (7), nor is it taking an average person (or an expert) and teaching them to interpret math better.
Obviously, for technical or narrow-scope/scale risk assessments, a topic expert is most useful. Unfortunately, most human affairs are not isolates. Worse, they are sloppy. Cold War diplomat George F. Kennan, speaking about his experience of human affairs, titled his autobiography “Around the Cragged Hill” (8), playing off Immanuel Kant’s famous comment that “out of the crooked timber of humanity, no straight thing was ever made” (9). This ‘cragged hill’ or ‘crooked timber’ perspective on reality at the machine-human judgment interface shapes what best practice looks like in most cases of assessing the risks of human-involved systems.
First, every machine/model output is only one data point when seen from a holistic perspective. Second, the best judge of that data point’s relevance within the whole context of a human-involved system’s scope is the person of broad understanding. Third, these individuals are not merely experientially broad; they are syncretic thinkers in the secular sense of that term (2).
The lesson is that having an expert economist address economic risk is usually inadequate, because economic outcomes are embedded in and entangled with many other things in real systems. Ditto for politics, and so on. A team of disciplinary experts is likewise inadequate, because of Snow’s incomprehension issue. The needed breadth is an individual talent, and building it is a central future-of-risk-management challenge.
As an example, assessing technical risk in, say, self-driving technology is best done with technical experts. Assessing self-driving technology’s risks when it meets ‘the crooked timber of mankind’ is a whole other task.
If this is so, a last question: ‘Who put the quants in charge?’ Why should models rule the world?
1. Cohen, Steven and Matthew Granade. “Models Will Run the World.” The Wall Street Journal, August 20, 2018, A17; Patterson, Scott. 2010. The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It. New York: Random House (front-cover insert contains the Warren Buffett quote “Beware of Geeks Bearing Formulas”).
2. Werther, Guntram. “Improving Finance and Risk Management Foresight Abilities: Growing Past (THE) ‘Black Swan’ MindSET Through Integrative Assessment.” Journal of Risk Management in Financial Institutions, September 2017, Vol. 10, No. 4.
3. Snow, C. P. 2012. The Two Cultures: The 1959 Rede Lecture, Cambridge University. Cambridge: Cambridge University Press.
4. Arbess, Daniel. “Investors, Look Up From Your Algorithms.” The Wall Street Journal, October 29, 2018, A23.
5. Watts, William. “Investing’s Next Frontier Is ‘Quantamental’: A Merger of Computing Power and Human Expertise Is Lowering Costs and Increasing Gains.” The Wall Street Journal, October 29, 2018, S1.
6. Werther, Guntram. “Improving Finance and Risk Management Foresight Abilities: Growing Past (THE) ‘Black Swan’ MindSET Through Integrative Assessment.” Journal of Risk Management in Financial Institutions, September 2017, Vol. 10, No. 4 (see reference 2).
7. Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Viking Penguin.
8. Kennan, George F. 1993. Around the Cragged Hill: A Personal and Political Philosophy. New York: W. W. Norton.
9. Kant, Immanuel. 1784. “Idea for a Universal History with a Cosmopolitan Purpose” (Idee zu einer allgemeinen Geschichte in weltbürgerlicher Absicht), 6. Satz, in Sämtliche Werke in sechs Bänden, vol. 1, p. 230 (Großherzog Wilhelm Ernst ed., 1921) (S.H. transl.).