Financial Risk Management – A View from the Bridge
Comparing 2021 with the 1990s, by MORS Software’s Co-founder and CEO, Mika Mustakallio
Over the years there have been many reasons for the practice of Financial Risk Management in banks to pull its socks up and evolve: regional market shocks, the global financial crisis, currency devaluations, mortgage price drops, stock market falls, and global pandemics, to name but a few.
The common outcome of all these events has been the exposure to significant financial risk. Consequently, there have been numerous attempts by supervisors and central banks to limit financial risk through improved or increased regulation.
The ‘exposure of the moment’ is Climate Risk, and it’s likely to be around for a long time and probably become an exponentially more severe threat to the stability of the financial system. We are all feeling the effects of climate change, and it’s within these effects that the sources of Financial Risk lie. Wildfires, floods, crop failures and the like have already caused huge losses in the financial sector, and that’s probably just the tip of the melting iceberg.
Taking a step back, trying to see what has actually changed, and trying to understand how the financial industry will prepare for the next tranche of change brings me back to the mid-1990s. What were the key problems then, and what are the key problems today? How were the problems solved then, and how should they be solved today?

T+? (Manual vs. Automatic)
In the mid-nineties, I worked in a bank and every morning I received 2.5 kilos of green stripey computer printout with a massive quantity of figures on it. From these ‘data sheets’, I manually picked figures or sums from certain pages and rows into my spreadsheet. After two or three hours of copy and paste, and running it all through 5 or 6 Excel macros, my report was able to say how we were positioned against the limits in various risk measures relative to the previous evening.
Naturally, with such a time-consuming data gathering exercise, my target became to radically improve the procedure and to get the results and answers immediately, rather than after a 24-hour, labour- and machine-intensive process. After a year of developing automated data gathering procedures and transferring the Excel macros into a new automated system, the risk figures were popping up on my screen ‘live’ during the day. The outcome was real-time data processing, real-time calculation of all new data into risk measurements and, further on, real-time limit monitoring. We reached this operating practice as early as 1996.
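For readers who prefer to see the idea rather than just read about it, the sketch below illustrates the principle in a few lines of Python. It is purely illustrative: the measures, limits and figures are invented, and a real implementation would sit on an event stream from the bank’s source systems. The point is simply that each new deal or market update is folded into the exposures and checked against limits the moment it arrives, rather than the next morning.

```python
# Illustrative sketch only: each incoming update re-computes exposures and is
# checked against limits immediately, instead of waiting for an overnight batch.
# All measure names, limits and figures are invented.

from dataclasses import dataclass

@dataclass
class Limit:
    measure: str
    cap: float

def check_limits(exposures: dict[str, float], limits: list[Limit]) -> list[str]:
    """Return a breach message for every measure that exceeds its limit."""
    return [
        f"BREACH: {l.measure} {exposures[l.measure]:,.0f} exceeds limit {l.cap:,.0f}"
        for l in limits
        if exposures.get(l.measure, 0.0) > l.cap
    ]

# Intraday event loop (a stand-in for a real event stream): every update is
# folded into the running exposures and the limits are evaluated at once.
exposures = {"fx_open_position": 40_000_000.0, "ir_bpv": 90_000.0}
limits = [Limit("fx_open_position", 50_000_000.0), Limit("ir_bpv", 100_000.0)]

for measure, delta in [("fx_open_position", 15_000_000.0), ("ir_bpv", 5_000.0)]:
    exposures[measure] += delta            # real-time update, not T+1
    for breach in check_limits(exposures, limits):
        print(breach)                      # alert the desk immediately
```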

To my constant amazement, not much has happened in most banks since 1996. Today I still see many banks struggling to move away from their time-consuming, manual data gathering exercises. Why has this not been universally automated already? What has made banks believe that this processing cannot be done in real time? Everything else is real-time in our world. Why not risk management in banks?
Misleading Job Titles (Data Engineering vs. Risk Analysis)
The time-consuming data processing activity described above goes hand in hand with the role of the risk manager in banks. In fact, calling this role Risk Manager is misleading if the person spends 90% of their time on data gathering and IT processing. Only automated, real-time data processing enables the Risk Manager to use their time to manage risk. This is what I was already doing in the mid-1990s, but it seems that not all banks have caught up, even now.
Competition vs. Ambition
Those who believe in the private economy and free competition are naturally hopeful that banks keep up with the competition and maintain a high internal level of ambition for continuously improving risk management. The idea is that the better the management of risk, the less likely accidents causing financial loss are to happen.
The more effective the measurement, and the more up-to-date or real-time the monitoring, the lower the buffers or provisions banks would need in preparation for the losses caused by the realisation of risks. It’s not clear to me how much the freedom of competition has driven risk management practice improvements, given that most regulation still seeks to impose buffers or increased provisions of some sort.
Isn’t Regulation Just Imposed Best Practice?
“Necessity is the mother of invention,” as the saying goes. In this respect, regulation has done a lot of good in creating more and more requirements that force banks to improve their risk management practices. Without this necessity there wouldn’t be anywhere close to the current level of global and harmonised measures in regulatory reporting requirements. However, the fundamental question remains: have any of the waves of increased regulatory obligation actually materially improved the process of risk management in banks?
The Basel II era brought correlation-based methodologies
Basel II opened risk management practice up to modelling, and especially to complicated correlation- and history-based modelling.
The benefit of these kinds of models is the premise that the multidimensionality of multiple risks can be taken into consideration within a single model.
However, the problem with such models is twofold. First, the models become too complicated to understand how to hedge against the total risk figure presented. Second, a central weakness of the models is that the correlations are based on history, and history is no guarantee of something happening, or not happening, in the future.
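To make the twofold problem concrete, here is a minimal illustration of the kind of correlation-based aggregation such models rely on: stand-alone risk figures combined through a correlation matrix estimated from history. The figures and the correlation matrix below are invented, and real models are far more elaborate, but the weakness is the same: the diversification benefit exists only as long as the historical correlations hold.

```python
import numpy as np

# Stand-alone risk figures per risk type (illustrative numbers only),
# e.g. market, credit and operational risk.
standalone = np.array([120.0, 80.0, 60.0])

# Correlation matrix estimated from historical data -- the weak point:
# nothing guarantees these relationships hold in the next crisis.
corr = np.array([
    [1.0, 0.3, 0.1],
    [0.3, 1.0, 0.2],
    [0.1, 0.2, 1.0],
])

# Aggregated figure: sqrt(v' C v). Smaller than the simple sum thanks to the
# assumed diversification, but hard to translate into a concrete hedge.
total = float(np.sqrt(standalone @ corr @ standalone))
print(f"Sum of stand-alone figures:   {standalone.sum():.1f}")
print(f"Correlation-aggregated figure: {total:.1f}")
```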
The Basel III era brought understanding-based measures
Basel III has brought a lot of understanding of the measures, but very few of the ratios are interlinked. Many of the stress tests also concentrate on one risk event, or one shock, at a time.
The great benefit of the developments of the Basel III era is the understandability of the measures. In this sense it’s back to basics. Liquidity risk can be understandably expressed as the short-term risk of having fewer inflows than necessary to cover outflows, or, alternatively, as the longer-term risk of failing to balance long-term commitments against guaranteed long-term funding.
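As a simple illustration of just how understandable these measures are, the sketch below expresses the two liquidity views in a few lines of code: a short-term coverage ratio in the spirit of the LCR, and a long-term funding ratio in the spirit of the NSFR. The figures and the simplifications are mine, for illustration only; the regulatory definitions involve detailed weightings of each balance sheet item.

```python
# Simplified, illustrative liquidity ratios in the spirit of Basel III's
# short-term and long-term measures. All figures are invented.

def short_term_coverage(liquid_assets: float, outflows_30d: float,
                        inflows_30d: float) -> float:
    """LCR-like ratio: can liquid assets cover net 30-day outflows?"""
    net_outflows = max(outflows_30d - inflows_30d, 0.0)
    return liquid_assets / net_outflows if net_outflows else float("inf")

def long_term_funding(stable_funding_available: float,
                      stable_funding_required: float) -> float:
    """NSFR-like ratio: does guaranteed long-term funding cover
    long-term commitments?"""
    return stable_funding_available / stable_funding_required

print(f"Short-term coverage: {short_term_coverage(500, 700, 300):.2f}")  # 1.25
print(f"Long-term funding:   {long_term_funding(900, 750):.2f}")         # 1.20
```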
The problem with these measures, understandable as they are one by one, is that they give no hint as to their interconnectedness.
Silo vs. Holistic
When all is said and done, the isolated measures and requirements discussed above are still driving improvements in risk management. But from a regulatory standpoint, it is still possible to ‘comply’ by doing only the individual, isolated analyses well. This may satisfy the regulators, but it comes without the bank having to document any evidence of its understanding of how the various risks interact at the top of the house.
What are we missing?
The discussion above is relatively understandable to anybody with a basic grasp of how a bank works. Almost everyone working in or close to the practice of risk management agrees that automation and a focus on analytics, with real-time data at a granular level, facilitate all other aspects of an effective, holistic risk management process. There are software solutions available in the market that are built on these principles, and a new generation of solutions with a low Total Cost of Ownership (TCO). So why is this not happening? What are we missing?
Perhaps banks like to think that the problem is more difficult to solve than it actually is. A fully integrated Risk, ALM, and Treasury Management solution that’s relatively cheap to implement and maintain perhaps just seems too good to be true. Is it the case that the banking sector likes to overcomplicate its problems in order to seem smarter? Only very expensive software solutions and their associated projects are taken seriously, and many of those fail to deliver in the end anyway. Lower cost of ownership has been a disruptor in many markets, but it doesn’t seem to have had the same effect in the banking solutions sector. In fact, it could be observed that the cheaper the solution, the less seriously it is taken.
Is it genuinely the case that banking, its data, and its processes are so complicated that only very few people are prepared to believe that the management of risk can be improved by having fast, automated processes at a fraction of the cost currently incurred? I personally don’t believe it is, and I’m prepared to prove it to any bank willing to listen.
Will Climate Risk be the Mother of Invention?
Current publications on how to measure the influence of climate change on the financial services industry point to the latest challenge in effectively managing financial risk in banks. There is, however, a silver lining in the Climate Risk cloud. Although Climate Risk is very complicated and multidimensional, its effect on the financial risk of financial institutions is set to be measured and expressed within the tried and tested Basel framework. It will likely be addressed through the current risk measures, such as liquidity risk, market risk, credit risk and operational risk.
The opportunity we have in our hands now is for climate risk to become the multidimensional glue between the known siloed measures. If addressed as such, banks would be able to measure any stresses and their influence on the known risk measures, and how those measures relate to each other.
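A minimal sketch of what that could look like: a single climate stress scenario expressed as a set of shocks that feed into several of the familiar measures at once, rather than being tested silo by silo. The scenario, the measures and the numbers below are invented placeholders; the point is that one multidimensional event moves liquidity, credit and market figures together.

```python
# Illustrative only: one climate stress scenario propagated into several
# familiar risk measures at once, instead of one silo at a time.
# All names and numbers are invented placeholders.

baseline = {
    "liquidity_coverage": 1.30,   # LCR-like ratio
    "credit_losses":      50.0,   # expected loss, in millions
    "market_value":     1000.0,   # portfolio value, in millions
}

# One multidimensional scenario, e.g. a severe flood season.
flood_scenario = {
    "liquidity_coverage": -0.20,  # deposits drawn down for repairs
    "credit_losses":      +35.0,  # impaired mortgages in affected regions
    "market_value":       -80.0,  # repricing of exposed sectors
}

stressed = {k: baseline[k] + flood_scenario[k] for k in baseline}

for measure, value in stressed.items():
    print(f"{measure}: baseline {baseline[measure]:,.2f} -> stressed {value:,.2f}")
```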

I’m looking forward to seeing whether Climate Risk management requirements can push the agenda of multidimensional scenario analysis, fully automated data gathering processes, and the use of real-time ‘live’ data, and finally establish these as acknowledged best practice in the financial services industry. Perhaps sometime soon, all banks can benefit from the paradigm shift I made in the bank in which I worked in 1996.