Basel II, the accord drawn up by the Basel Committee on Banking Supervision to regulate the capital held by the world's banks, has turned its attention to operational risk management. Under the agreement's recommendations, which take effect in 2006, the discipline will change from an effectively one-dimensional procedure into a highly complex analytical process. In an age of global e-commerce, electronic attacks, unpredictable threats to banks' security and lingering economic uncertainty, the process for determining operational risk parameters has changed out of all recognition.
The original 1988 Basel accord (Basel I) ruled that banks must hold enough cover for potential losses from transactions (technically, a bank's total capital should never fall below 8% of its risk-weighted assets) and set out rules for calculating the risk-weighted figure. In a globalised world of interconnected financial systems, where banks are exposed to far more potential threats than ever before, it is generally accepted that a single risk measure for all banks is no longer appropriate.
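The mechanics of that 8% rule can be sketched in a few lines of code. The exposures, risk weights and capital figures below are purely illustrative, not drawn from any real bank's book:

```python
# Minimal sketch of the Basel I capital-adequacy check: total capital
# must not fall below 8% of risk-weighted assets.
# All figures are hypothetical, for illustration only.

def risk_weighted_assets(exposures):
    """Sum each exposure multiplied by its regulatory risk weight."""
    return sum(amount * weight for amount, weight in exposures)

def meets_basel_minimum(total_capital, rwa, minimum_ratio=0.08):
    """True if capital covers at least 8% of risk-weighted assets."""
    return total_capital >= minimum_ratio * rwa

# Hypothetical book: (exposure amount, risk weight)
exposures = [
    (500_000_000, 0.0),   # government bonds: 0% weight
    (200_000_000, 0.2),   # claims on banks: 20% weight
    (300_000_000, 0.5),   # residential mortgages: 50% weight
    (400_000_000, 1.0),   # corporate loans: 100% weight
]

rwa = risk_weighted_assets(exposures)        # 590,000,000
print(meets_basel_minimum(50_000_000, rwa))  # 50m / 590m is about 8.5%
```

Note that the first 500m of exposure contributes nothing to the risk-weighted figure: under a single fixed weighting scheme, two banks with very different real risk profiles can report identical ratios, which is precisely the criticism Basel II responds to.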
Basel II is demanding active management of risk, enabling banks to control and free capital tied up in risk cover more effectively. These changed priorities demand wider and more sophisticated assessment and analysis of banks' security, operational and management procedures. Institutions will have to run the rule over their operations, analyse relevant factors and determine how the metrics which underpin such analysis can be identified and captured.
Banks will now be expected to examine a bewildering range of factors, including information security, fraud, employment practices and workplace safety, business services, physical damage, business disruption and system failure, execution, delivery and process management, and legal and reputational factors. With the accord's deadlines looming, they will expect their IT directors to take a leading role in making it all happen.
Time could be running out for those who do not get to grips with the necessary applications: Basel II demands that data capture be in place from 2004, with three years' operational data available by the time the accord takes effect in 2006.
This raises far-reaching operational questions. Most institutions will draw upon data streams from core areas such as transactions, but how is a bank to measure reputation or predict risk from rogue employees? What is the risk from outsourcing services? Will risk be mitigated by relevant insurance? Not only does the IT department have the responsibility for providing the right data capture applications, it will have to help senior management decide how to collect that data. Management consultants and software vendors may say they have the expertise, but no one truly knows what all the practical requirements of operational risk analysis will be.
To complicate matters, distinctions between different types of risk factors aren't yet clear. Different departments will need to understand how risks flow through the organisation - what the dependencies and correlations are. An electronic attack on a bank's IT system might halt its operations and damage its reputation; if the reputational impact - risk one - coupled with disruption to the bank's operations - risk two - affects the share price, there is a third risk category. How do you separate these out and measure them?
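One standard way to express why such risks cannot simply be measured in isolation is the variance-of-a-sum formula: the volatility of combined losses depends on the pairwise correlations between risk categories, not just on each category's own volatility. The sketch below is a hypothetical illustration - the loss volatilities and correlation figures are invented, and real operational-risk models are far more elaborate:

```python
# Hedged sketch: why correlated risks (disruption, reputation,
# share-price impact) cannot be measured independently.
# Uses the standard formula  var_total = sum_ij rho_ij * s_i * s_j.
# All numbers are hypothetical.
import math

def combined_std_dev(std_devs, correlations):
    """Standard deviation of the sum of correlated loss distributions."""
    n = len(std_devs)
    variance = sum(
        correlations[i][j] * std_devs[i] * std_devs[j]
        for i in range(n) for j in range(n)
    )
    return math.sqrt(variance)

# Estimated loss volatility per risk category (arbitrary units).
sigmas = [10.0, 6.0, 15.0]  # disruption, reputation, share price

# Correlation matrix: the same attack drives all three categories,
# so they tend to move together (1.0 on the diagonal).
rho = [
    [1.0, 0.7, 0.4],
    [0.7, 1.0, 0.6],
    [0.6, 0.4, 1.0],
][0], [0.7, 1.0, 0.6], [0.4, 0.6, 1.0]
rho = [[1.0, 0.7, 0.4], [0.7, 1.0, 0.6], [0.4, 0.6, 1.0]]

# Treating the risks as independent understates the combined exposure:
independent = math.sqrt(sum(s * s for s in sigmas))  # 19.0
correlated = combined_std_dev(sigmas, rho)           # about 25.9
print(independent, correlated)
```

The point of the comparison is the last two lines: assuming independence gives a combined volatility of 19, while the positive correlations push it to roughly 26 - which is why the accord expects banks to map dependencies between risk categories rather than score each in a vacuum.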