There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.
- Wm. Shakespeare, Hamlet, Act 1, Scene 5
“We are building the language in which to discuss model risk,” said Boris Deychman, Director of Model, Market and Operational Risk Management at RBS Citizens Financial Group. He drew an analogy with the world of wine experts, who have developed a specific vocabulary to talk about aroma and taste. “They don’t just say: this tastes like wine.” Deychman was the third and final speaker on a panel invited by the Global Association of Risk Professionals to discuss model risk, via webinar on January 29, 2013.
Deychman noted that in the aftermath of the financial crisis, people often say the models failed, or that there was insufficient data. “I say it was the collective hubris of risk managers.” We have ever more sophisticated models, so we think we’ve gained control over the future.
In 1937, Keynes wrote about “uncertain knowledge.” Sometimes there is no applicable data on which to properly form a hypothesis. Deychman pointed to the fiscal cliff negotiations, so recently the focal point of US politico-economic discussion. A fiscal cliff of that sort had never happened before, and with inaction on the deficit an open possibility, the markets could not price it properly.
Drawing on another historical example, Deychman pointed to the 1910 book “The Great Illusion” by Norman Angell, who predicted that unification of an industrialized Europe would make future wars “all but impossible.” Bond models throughout Europe were not pricing in any political uncertainty.
In response to a question from the audience on how to judge the efficacy of model risk management, Deychman said that models must constantly “be attended to.” The quality of model risk management may be measured by the governance structure. Model risk policy procedures must permeate the entire framework.
With so much turmoil in the last five years, Deychman agreed with the OCC/FRB 2012 regulatory guidance that stipulates an annual review of models. He stressed that it is important to review most thoroughly those models that carry the greatest risk of loss.
Another questioner suggested “chaos monkey”-style testing for financial systems. (The Chaos Monkey’s job is to randomly kill instances and services within a system, to see if the system is robust enough to recover. The rationale is that “if we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most – in the event of an unexpected outage.”) Deychman agreed this was a great idea. The model equivalent would be feeding in extreme values to test the robustness of the model. “It’s better to test using your own tools of destruction,” he said.
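The “extreme values” idea can be sketched in code. The toy bond pricer and the parameter choices below are illustrative assumptions, not anything presented in the webinar; the point is only to show a chaos-monkey-style harness that bombards a model with pathological inputs and distinguishes loud failures (exceptions) from the more dangerous silent ones (NaN or infinity returned without warning).

```python
import math
import random

def price_bond(face_value, coupon_rate, yield_rate, years):
    """Hypothetical model under test: a naive fixed-coupon bond pricer
    with no input validation (deliberately, to see what chaos finds)."""
    coupon = face_value * coupon_rate
    pv = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return pv + face_value / (1 + yield_rate) ** years

def chaos_test(model, n_trials=1000, seed=0):
    """Feed randomly chosen extreme inputs to the model.
    An exception is a 'loud' failure (acceptable: the model refuses).
    A non-finite return value is a 'silent' failure (garbage out,
    no warning) -- the count of those is returned."""
    rng = random.Random(seed)
    extremes = [0.0, -1.0, 1e308, -1e308, float("nan"), float("inf")]
    silent_failures = 0
    for _ in range(n_trials):
        args = (
            rng.choice(extremes + [100.0]),   # face value
            rng.choice(extremes + [0.05]),    # coupon rate
            rng.choice(extremes + [0.03]),    # yield
            rng.choice([0, -5, 10, 10_000]),  # maturity in years
        )
        try:
            out = model(*args)
            if not math.isfinite(out):
                silent_failures += 1
        except (ValueError, OverflowError, ZeroDivisionError):
            pass  # the model failed loudly, which is the lesser evil
    return silent_failures

silent = chaos_test(price_bond)
print(f"silent failures in 1000 trials: {silent}")
```

Run against this unguarded pricer, the harness surfaces silent failures (e.g. a NaN face value propagating straight through to a NaN price), which is exactly the kind of weakness input validation should then be added to catch.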
The webinar presentation slides can be found at: http://event.on24.com/r.htm?e=533238&s=1&k=2165B761844CD72D3D28DD8CA8818258