“Models are always wrong,” said Joe Pimbley, Principal at Maxwell Consulting, in a webinar on January 29, 2013. He was the first of a panel of three speakers invited by the Global Association of Risk Professionals to discuss model risk: what it is, and how models should be assessed and validated in the wake of the financial crisis.

Models are always wrong, Pimbley clarified, because they are only simplifications. He defined “wrongness” as some type of error or omission that materially affects the results as understood by the user. A model can be wrong because the meaning of its result differs from what the user understands it to be. There are other ways for models to be wrong, such as relying on inaccurate, unreliable, or false information.

Pimbley identified four drivers of model risk. The most significant, he said, was “lack of good faith in model building.” If someone knows a priori what they want to see from a model, and the only “goal is to get the right answer,” then that shows a lack of good faith. The other drivers were: errors in code, formulas, or concepts; unreliable or insufficient data; and deliberate or intentional misuse.

Pimbley recommended several best practices. “The senior executive must set the tone,” he said, in encouraging “unbiased model construction.” The creator should clearly define the purpose of the model and document it. An oversight committee should frequently require the users to explain the model. It’s important to have model validation by an independent group that does not shy away from conflict. There should be external presentations and discussions about the model. Last, he said there should be continuous validation and improvement. “Models never reach a point where they are completely done.” They are always works in progress.

Evaluation of model performance is not done as often as it should be, Pimbley said. Companies should make a point of taking measurements to compare against model predictions, but this is not done often enough. “At the very least, you will learn something about the internal process.”

He closed with a sobering example. A certain company found that, due to an error, a model rated a single-A bond as AAA. However, the company had become so inured to seeing the ‘AAA’ that they had the model changed so they could keep seeing AAA bonds instead of downgrading them.

Models can only reflect the “embedded intelligence” of the company.

Go to Part 2.

TWO NOTES ADDED IN PROOF

Pimbley clarified that it’s acceptable for a model to omit something or even have an error if the user is aware of the shortcoming. The user would then know to what extent he or she can trust the results. An example of a model with an omission is a bank capital model that does not “know” to test asset-liability mismatch. Thus, the model is “blind” to asset-liability mismatch. The user who understands this point will conduct a separate analysis for asset-liability risk. However, the model is still useful for assessing capital adequacy for purposes of absorbing credit losses.

Pimbley gave an example of a “wrong” model: the application of Black-Scholes to value an option on an unlisted (hence untraded) stock. The derivation of Black-Scholes assumes a traded stock, so applying it to an unlisted stock is wrong. But if the user understands that the model result when applied to an unlisted stock is simply “the option value that would be appropriate if the unlisted stock were, in fact, traded,” and is careful always to add this important qualification, then the model is being used appropriately.
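To make this concrete, here is a minimal Python sketch (not from the webinar) of the standard Black-Scholes call formula; the function name and parameter values are hypothetical illustrations. The point is that the arithmetic itself cannot tell whether the underlying stock is actually traded, so the user must carry that qualification alongside the number the model produces.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, rate, vol, time_to_expiry):
    """Black-Scholes price of a European call on a non-dividend-paying stock.

    The derivation assumes the underlying is continuously traded (and so can
    be hedged); nothing in the formula itself checks that assumption.
    """
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * time_to_expiry) / (
        vol * sqrt(time_to_expiry)
    )
    d2 = d1 - vol * sqrt(time_to_expiry)
    N = NormalDist().cdf  # standard normal cumulative distribution
    return spot * N(d1) - strike * exp(-rate * time_to_expiry) * N(d2)

# Applied to an unlisted stock, the result is only "the option value that
# would be appropriate if the unlisted stock were, in fact, traded."
price = black_scholes_call(spot=100.0, strike=105.0, rate=0.03, vol=0.25,
                           time_to_expiry=1.0)
print(f"Hypothetical option value (as if the stock were traded): {price:.2f}")
```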

The webinar presentation slides can be found at: http://event.on24.com/r.htm?e=533238&s=1&k=2165B761844CD72D3D28DD8CA8818258

Part 1 slides alone can be found at: http://www.maxwell-consulting.com/GARP_webinar_Model_Risk_Pimbley_Jan_2013.pdf

The images used in the Part 1 posting are from http://www.eamcap.com/bank-capital-and-basel-iii-a-really-short-guide

Follow Joe Pimbley on Twitter.

Added March 3, 2023. Click here to read Joe Pimbley – “Why Lehman Brothers Failed When It Did” on Stories.Finance.