Operational risks which apply to the banking sector
The test statistic is a measure of how different the observed frequencies are from the expected frequencies, $\chi^2 = \sum_{i=1}^{k} (O_i - E_i)^2 / E_i$. It has a chi-squared distribution with $k - 1 - p$ degrees of freedom, where $p$ is the number of parameters that need to be estimated.
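As a hedged illustration (not taken from the text), the following MATLAB sketch applies this test to a Poisson frequency model fitted to binned loss counts; the counts, the binning, and the choice of a Poisson fit are assumptions made purely for the example.

```matlab
% Illustrative sketch: Pearson chi-squared goodness-of-fit test for a Poisson
% frequency model fitted to binned loss counts (all numbers are hypothetical).
counts    = [12 18 14 8 5 3];                        % observed: 0,1,2,3,4,>=5 losses per period
n         = sum(counts);
lambdaHat = sum((0:5) .* counts) / n;                % rough MLE of the Poisson mean from binned data
probs     = [poisspdf(0:4, lambdaHat), 1 - poisscdf(4, lambdaHat)];
expected  = n * probs;                               % expected frequencies under the fitted model
chi2stat  = sum((counts - expected).^2 ./ expected); % test statistic
df        = numel(counts) - 1 - 1;                   % k - 1 - p, with p = 1 estimated parameter
pValue    = 1 - chi2cdf(chi2stat, df);               % reject the fitted model if pValue is small
```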
Even though in practice we may not have access to a historical sample of aggregate losses, it is possible to create sample values that represent aggregate operational risk losses given the severity and frequency loss probability models. In our example, we took a Poisson(2) frequency distribution and a Lognormal severity distribution. Using the frequency and severity of loss data, we can simulate aggregate operational risk losses and then use these simulated losses to calculate the operational risk capital charge.
The simplest way to obtain the aggregate loss distribution is to collect data on frequency and severity of losses for a particular operational risk type and then fit frequency and severity of loss models to the data.
The aggregate loss distribution can then be found by combining the distributions for severity and frequency of operational losses over a fixed period such as a year. Let us explain this in a more theoretical way. Suppose $N$ is a random variable representing the number of OR events between time $t$ and $t + \delta$ (with $\delta$ usually taken as one year), with associated probability mass function $p(k)$, defined as the probability that exactly $k$ losses are encountered during that time interval, and let us define $X$ as a random variable representing the amount of loss arising from a single type of OR event, with associated severity of loss probability density function $f(x)$. Assuming the frequency of events is independent of the severity of events, the total loss from the specific type of OR event over the time interval is
\[ S = X_1 + X_2 + \cdots + X_N . \]
The probability distribution function of $S$ is a compound probability distribution:
\[ G(x) = \begin{cases} \sum_{n=1}^{\infty} p(n)\, F^{\otimes n}(x), & x > 0, \\ p(0), & x = 0, \end{cases} \]
where $G(x)$ is the probability that the aggregate amount of losses does not exceed $x$, $F$ is the distribution function of $X$, $\otimes$ is the convolution operator, and $F^{\otimes n}$ is the $n$-fold convolution of $F$ with itself.
One way to avoid direct convolution is Panjer's recursive approach (cf. McNeil et al.). If the frequency distribution satisfies $p_k = \left(a + \frac{b}{k}\right) p_{k-1}$ for $k \ge 1$ and some constants $a$ and $b$, the recursion is given by
\[ g(s) = \sum_{j=1}^{s} \left(a + \frac{b\,j}{s}\right) f(j)\, g(s - j), \qquad g(0) = p_0, \]
where $f$ is the (discretised) probability density function of the severity $X$ and $g$ is that of the aggregate loss $S$. The Poisson, binomial, negative binomial, and geometric distributions all satisfy this form. For example, if our frequency of loss follows the Poisson distribution seen above, then $a = 0$ and $b = \lambda$. Note also that our severity of loss distribution, which is generally continuous, must be made discrete before it can be used in the recursion.
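The MATLAB sketch below illustrates the recursion for a compound Poisson loss with a discretised lognormal severity; the grid span, the number of grid points, and the lognormal parameters are assumptions for illustration only, and the grid necessarily truncates the severity tail.

```matlab
% Hedged sketch of the Panjer recursion for a compound Poisson(lambda) loss
% with the continuous Lognormal severity discretised on a grid of span h.
lambda = 2;  h = 100;  M = 500;               % Poisson mean, grid span, number of grid points (assumptions)
f  = diff(logncdf((0:M) * h, 8, 1.5));        % f(j) = P(loss in ((j-1)h, jh]), mass placed at j*h
g  = zeros(1, M);
g0 = exp(-lambda);                            % P(S = 0); for Poisson, a = 0 and b = lambda
a  = 0;  b = lambda;
for s = 1:M
    j    = 1:s;
    prev = [g0, g(1:s-1)];                    % prev(k) = g((k-1)*h), so prev(s-j+1) = g((s-j)*h)
    g(s) = sum((a + b * j / s) .* f(j) .* prev(s - j + 1));
end
% g(s) approximates P(S = s*h); cumsum([g0 g]) approximates the aggregate loss cdf on the grid.
```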
Another, much larger drawback to the practical use of this method is that the calculation of convolutions is extremely long and becomes intractable as the number of losses in the time interval under consideration becomes large. A Monte Carlo alternative involves the following steps (cf. Dahen [6]): simulate the number of losses from the frequency distribution; simulate that many individual losses from the severity distribution; sum all the generated losses to obtain $S$, which is the annual loss; and repeat many times to build up the aggregate loss distribution. Now, focusing on our example, taking the Lognormal as the severity loss distribution and Poisson(2) as the frequency distribution, and applying Monte Carlo, we can calculate the VaR corresponding to the operational risk for a specific risk type, let us say internal fraud.
To explain the example further, we took the Poisson and Lognormal as the weekly loss frequency and severity distributions, respectively. For the aggregate loss distribution, we generate the number of losses for each week from the Poisson distribution and draw the corresponding loss amounts from the Lognormal distribution; by summing these losses and repeating the same steps for the 52 weeks of the year, we obtain one annual total loss. We then repeat the whole procedure a large number of times to obtain the aggregate loss distribution (see Figure 8), on which we calculate the Value-at-Risk at the desired confidence level (typically 99.9% for regulatory capital).
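A minimal sketch of this simulation along the same lines is shown below; the lognormal parameters and the number of simulated years are assumptions, since the text does not report them here.

```matlab
% Hedged Monte Carlo sketch of the procedure just described
% (lognormal parameters and number of simulated years are illustrative).
nYears = 1e5;  lambda = 2;  mu = 8;  sigma = 1.5;    % weekly Poisson(2) frequency, Lognormal(mu, sigma) severity
annualLoss = zeros(nYears, 1);
for y = 1:nYears
    total = 0;
    for w = 1:52                                     % 52 weekly aggregate losses make one year
        n = poissrnd(lambda);                        % number of losses in the week
        total = total + sum(lognrnd(mu, sigma, n, 1));
    end
    annualLoss(y) = total;
end
VaR999 = prctile(annualLoss, 99.9);                  % operational risk VaR at the 99.9% level
```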
The programming was done using Matlab software and resulted in the output and calculations shown in the accompanying table.

Generally, not all operational losses are declared: databases are recorded starting from a threshold of a specific amount (for example, a fixed collection threshold). This phenomenon, if not properly addressed, may create unwanted biases in the aggregate loss, since the parameter estimates of the fitted distributions would be far from reality.
Data are said to be truncated when observations that fall within a given set are excluded. Data are left-truncated when observations below a specific value are not recorded at all, which means that neither the frequency nor the severity of such observations is available (cf. Chernobai et al.).
In general, there are four different kinds of approaches that operational risk managers apply to estimate the parameters of the frequency and severity distributions in the absence of data due to truncation. Approach 1. For this first approach, the missing observations are ignored and the observed data are treated as a complete data set in fitting the frequency and severity distributions. This approach leads to the highest biases in parameter estimation. Unfortunately, this is also the approach used by most practitioners.
Approach 2. The second approach is divided into two steps (see Figure 9): the distribution parameters are first estimated from the observed data, and the frequency parameter is then adjusted for the unobserved losses below the threshold. In the end, the adjusted frequency distribution parameter is expressed by
\[ \hat\lambda_{\text{adj}} = \frac{\hat\lambda_{\text{obs}}}{1 - \hat F(u)}, \]
where $\hat\lambda_{\text{adj}}$ represents the adjusted complete-data parameter estimate, $\hat\lambda_{\text{obs}}$ is the observed frequency parameter estimate, and $\hat F(u)$ depicts the estimated conditional severity distribution computed at the threshold $u$.
Approach 3. This approach differs from the previous ones in that the truncation is explicitly taken into account in the estimation of the severity distribution, fitting a conditional severity and an unconditional frequency (see the corresponding figure). The density of the truncated severity distribution is
\[ f_u(x) = \frac{f(x)}{1 - F(u)}, \qquad x \ge u . \]
Approach 4. The fourth approach is deemed the best in application as it combines the second and third procedures by taking into account the estimated severity distribution and, as in Approach 2, the frequency parameter adjustment formula. In modelling operational risk, this is the only relevant approach out of the four proposed as it addresses both the severity and the frequency of a given distribution.
The MLE method can then be applied to estimate our parameters. To demonstrate, let $x_1, \dots, x_n$ denote the losses exceeding the threshold $u$, so the conditional maximum likelihood can be written as follows:
\[ L(\theta) = \prod_{i=1}^{n} \frac{f_\theta(x_i)}{1 - F_\theta(u)}, \]
and the log-likelihood would be
\[ \ell(\theta) = \sum_{i=1}^{n} \ln f_\theta(x_i) - n \ln\bigl(1 - F_\theta(u)\bigr). \]
When losses are truncated, the observed frequency distribution has to be adjusted to take the non-declared losses into account. For each period $t$, let us define $n_t$ as the number of losses which have to be added to $N_t$, the observed number of losses, where $n_t$ is the estimated number of losses below the threshold, so that the adjusted number of losses is $\tilde N_t = N_t + n_t$.
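As a hedged illustration of this conditional MLE, the MATLAB sketch below fits a lognormal severity to a handful of hypothetical losses observed above a hypothetical threshold; the data, the threshold, and the lognormal choice are assumptions.

```matlab
% Hedged sketch of the conditional (truncated) MLE for a Lognormal severity:
% only losses above the collection threshold u are observed.
u   = 1e4;                                           % reporting threshold (assumption)
x   = [1.2e4 3.4e4 1.8e4 9.5e4 2.6e5 1.1e4 4.7e4];   % observed losses, all above u (hypothetical)
nll = @(p) -( sum(log(lognpdf(x, p(1), exp(p(2))))) ...      % negative conditional log-likelihood;
              - numel(x) * log(1 - logncdf(u, p(1), exp(p(2)))) );  % sigma = exp(p(2)) keeps the scale positive
pHat     = fminsearch(nll, [log(mean(x)); 0]);       % maximise the truncated likelihood
muHat    = pHat(1);
sigmaHat = exp(pHat(2));
```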
To reiterate, the ratio between the number of losses below the threshold, $n_t$, and the observed number of losses, $N_t$, is equal to the ratio between the left and right severity functions:
\[ \frac{n_t}{N_t} = \frac{\hat F(u)}{1 - \hat F(u)}, \]
where $\hat F$ is the cumulative distribution function with parameters estimated by MLE on the truncated data. Finally, we have $n_t = N_t\, \hat F(u) / \bigl(1 - \hat F(u)\bigr)$.

The Kolmogorov-Smirnov (KS) test measures the absolute value of the maximum distance between the empirical and the fitted distribution functions and puts equal weight on each observation.
Because of the truncation, the KS test has to be adapted. For that, let us assume the random variables $X_1, \dots, X_n$ are iid and follow the unknown probability distribution $P$. The related null hypothesis is that $P$ has the fitted cumulative distribution function $\hat F$, restricted to the observations above the threshold $u$. The KS statistic is then the maximum distance between the empirical distribution function of the exceedances and the fitted conditional distribution, and the associated $p$ value is calculated using Monte Carlo simulation.

Extreme Value Theory (EVT) is a branch of statistics that characterises the extreme tail behaviour of a distribution without tying the analysis down to a single parametric family fitted to the whole distribution.
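A possible implementation of this idea, continuing the previous sketch (and reusing u, x, muHat, and sigmaHat from it), is shown below; the exact adapted statistic used in the original study may differ, so this should be read as a generic Monte Carlo recipe rather than the paper's procedure.

```matlab
% Hedged sketch: KS-type test of the fitted truncated Lognormal against the
% observed exceedances, with the p-value obtained by Monte Carlo simulation.
% Assumes u, x, muHat, sigmaHat from the previous sketch are in the workspace.
Fcond = @(t, mu, sg) (logncdf(t, mu, sg) - logncdf(u, mu, sg)) ./ (1 - logncdf(u, mu, sg));
n     = numel(x);
ksD   = @(Fs) max(max((1:n)./n - Fs), max(Fs - (0:n-1)./n));     % standard KS distance on sorted cdf values
KSobs = ksD(sort(Fcond(x, muHat, sigmaHat)));
nMC = 2000;  KSsim = zeros(nMC, 1);
p0  = logncdf(u, muHat, sigmaHat);
for m = 1:nMC
    xSim     = logninv(p0 + (1 - p0) * rand(1, n), muHat, sigmaHat);  % simulate exceedances under H0
    KSsim(m) = ksD(sort(Fcond(xSim, muHat, sigmaHat)));
end
pValue = mean(KSsim >= KSobs);   % a stricter parametric bootstrap would re-estimate the parameters each time
```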
This theory was pioneered by Leonard Henry Caleb Tippett, an English physicist and statistician, and was later codified by Emil Julius Gumbel, a German mathematician. We use it to model the rare phenomena that lie outside the range of available observations. Famous operational loss events illustrate why: unauthorised trading at Barings Bank produced losses so large that the bank collapsed and was subsequently sold for one pound, and the losses accumulated by the trader Toshihide Iguchi at Daiwa Bank only became known when Iguchi confessed his activities to his managers in July 1995. In all areas of risk management, we should take into account the extreme event risk, which is specified by low frequency and high severity.
In financial risk, we calculate the daily Value-at-Risk for market risk and we determine the required risk capital for credit and operational risks. As with insurance risks, we build reserves for products which offer protection against catastrophic losses. Extreme Value Theory can also be used in hydrology and structural engineering, where failure to take proper account of extreme values can have devastating consequences.
Now, back to our study, operational risk data appear to be characterised by two attributes: the first, driven by high-frequency, low-impact events, constitutes the body of the distribution and refers to expected losses; the second, driven by low-frequency, high-impact events, constitutes the tail of the distribution and refers to unexpected losses. In practice, the body and the tail of the data do not necessarily belong to the same underlying distribution, or even to distributions belonging to the same family.
Extreme Value Theory appears to be a useful approach to investigate large losses, mainly because of its double property of focusing the analysis only on the tail area (hence reducing the disturbance from small- and medium-sized losses) and of treating large losses with an approach as rigorous as the one the Central Limit Theorem provides for the analysis of high-frequency, low-impact losses. EVT is applied to real data in two related ways. The first approach deals with the maximum or minimum values that the variable takes in successive periods, for example, months or years.
These observations constitute the extreme events, also called block or per-period maxima. The key result (the Fisher-Tippett theorem) is important because the asymptotic distribution of the maxima always belongs to one of only three distributions (Gumbel, Fréchet, or Weibull), regardless of the original distribution.
Therefore, the majority of the distributions used in finance and actuarial sciences can be divided into these three categories according to the weight of their tails (cf. Smith [15]): heavy-tailed distributions such as the Pareto, Cauchy, and Student-t, whose maxima converge to the Fréchet distribution; light-tailed distributions such as the normal, lognormal, gamma, and exponential, whose maxima converge to the Gumbel distribution; and bounded distributions such as the uniform and beta, whose maxima converge to the Weibull distribution. The second approach considers only the observations that exceed a given high threshold. We discuss the details of these two approaches in the following segments. Suppose $X_1, X_2, \dots, X_n$ are independent random variables, identically distributed with common distribution $F(x) = \mathbb{P}(X \le x)$, and let $S_n = X_1 + \cdots + X_n$ and $M_n = \max(X_1, \dots, X_n)$.
Theorem 3 (Fisher-Tippett). Consider the normalised sum, which by the central limit theorem converges to $\Phi$, the distribution function of the normal distribution; the result below plays the analogous role for the normalised maximum $M_n$. If there exist suitable normalising constants $c_n > 0$ and $d_n$ and some nondegenerate distribution function $H$ such that
\[ \frac{M_n - d_n}{c_n} \xrightarrow{d} H, \]
then $H$ belongs to one of the three standard extreme value distributions (see Figure 11). As we have seen previously, observations in the block maxima method are grouped into successive blocks and the maxima within each block are selected.
The theory states that the limit law of the block maxima belongs to one of the three standard extreme value distributions mentioned before. To use the block-maxima method, a succession of steps needs to be followed. First, the sample must be divided into blocks of equal length. Next, the maximum (or minimum) value in each block should be collected. Then, we fit the generalized extreme value distribution, and finally, we compute the point and interval estimates for the return level.
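The MATLAB sketch below walks through these steps on simulated data; the lognormal loss model, the block length of 52 weekly observations, and the 20-period return level are assumptions chosen only to make the example concrete.

```matlab
% Hedged sketch of the block-maxima procedure: divide the loss series into blocks,
% take block maxima, fit a GEV distribution, and compute a k-period return level.
losses   = lognrnd(8, 1.5, 520, 1);             % hypothetical weekly losses covering 10 years
blockMax = max(reshape(losses, 52, []))';       % one maximum per 52-week block
parm     = gevfit(blockMax);                    % [shape xi, scale sigma, location mu]
k        = 20;                                  % 20-block return period (assumption)
Rk       = gevinv(1 - 1/k, parm(1), parm(2), parm(3));   % estimated return level
```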
Determining the Return Level. The standard generalized extreme value distribution is the limiting distribution of normalised extrema. Given that in practice we do not know the true distribution of the returns and, as a result, have no idea about the norming constants $c_n$ and $d_n$, we use the three-parameter specification of the generalized extreme value distribution:
\[ H_{\xi,\mu,\sigma}(x) = \exp\left\{ -\left(1 + \xi\,\frac{x-\mu}{\sigma}\right)^{-1/\xi} \right\}, \qquad 1 + \xi\,\frac{x-\mu}{\sigma} > 0, \]
where the two additional parameters $\mu$ and $\sigma$ are the location and scale parameters representing the unknown norming constants. The log-likelihood function that we maximise with respect to the three unknown parameters is
\[ \ell(\xi,\mu,\sigma) = \sum_{i} \ln h(x_i), \]
where $h$ is the probability density function of $H_{\xi,\mu,\sigma}$, provided $1 + \xi (x_i - \mu)/\sigma > 0$.
If $\xi = 0$, the function is the Gumbel limit $H_{0,\mu,\sigma}(x) = \exp\{-e^{-(x-\mu)/\sigma}\}$. As defined before, the return level $R_k$ is the level we expect to be exceeded only once every $k$ years:
\[ R_k = H^{-1}_{\xi,\mu,\sigma}\!\left(1 - \tfrac{1}{k}\right) = \mu + \frac{\sigma}{\xi}\left[\bigl(-\ln(1 - \tfrac{1}{k})\bigr)^{-\xi} - 1\right]. \]
Substituting the parameters $\xi$, $\mu$, and $\sigma$ by their estimates, we get $\hat R_k$. The Generalized Pareto (GP) distribution has a distribution function with two parameters:
\[ G_{\xi,\sigma}(x) = \begin{cases} 1 - \left(1 + \dfrac{\xi x}{\sigma}\right)^{-1/\xi}, & \xi \neq 0, \\[4pt] 1 - e^{-x/\sigma}, & \xi = 0, \end{cases} \]
where $\sigma > 0$, and where $x \ge 0$ when $\xi \ge 0$ and $0 \le x \le -\sigma/\xi$ when $\xi < 0$. The value of $\xi$ determines the type of distribution: for $\xi < 0$, the model gives the type II Pareto distribution; for $\xi = 0$, we get the exponential distribution; and for $\xi > 0$, we get a reparameterised Pareto distribution.
For $\xi < 1$, we have the following formula for the mean:
\[ \mathbb{E}[X] = \frac{\sigma}{1 - \xi}. \]
For $\xi < \tfrac{1}{2}$, the variance is
\[ \operatorname{Var}[X] = \frac{\sigma^2}{(1 - \xi)^2 (1 - 2\xi)}. \]
Excess losses are defined as those losses that exceed a threshold. So, given a threshold value for large losses, the excess loss technique can be applied to determine the amount of provisions needed to provide a reserve for large losses. We consider a distribution function $F$ of a random variable $X$ which describes the behaviour of the operational risk data in a certain business line (BL).
We are interested in estimating the distribution function $F_u$ of the values of $X$ above a certain threshold $u$ (cf. Medova and Kyriacou [16]). The distribution $F_u$ is called the conditional excess distribution function and is formally defined as
\[ F_u(y) = \mathbb{P}(X - u \le y \mid X > u), \qquad 0 \le y \le x_F - u, \]
where $x_F$ is the right endpoint of $F$. We verify that $F_u$ can be written in terms of $F$ as
\[ F_u(y) = \frac{F(u + y) - F(u)}{1 - F(u)}. \]
For a large class of underlying distribution functions $F$, the conditional excess distribution function $F_u(y)$, for $u$ large, is approximated by
\[ F_u(y) \approx G_{\xi,\sigma}(y), \]
where $G_{\xi,\sigma}$ is the Generalized Pareto Distribution.
We will now derive analytical expressions for $\mathrm{VaR}_q$ and $\mathrm{ES}_q$. First, we define $F(x)$ for $x > u$ as
\[ F(x) = \bigl(1 - F(u)\bigr)\, G_{\xi,\sigma}(x - u) + F(u). \]
Then, we estimate $F(u)$ by $(n - N_u)/n$, where $n$ is the total number of observations and $N_u$ the number of observations above the threshold $u$. So we have
\[ \hat F(x) = 1 - \frac{N_u}{n}\left(1 + \frac{\hat\xi\,(x - u)}{\hat\sigma}\right)^{-1/\hat\xi}, \]
which, on inverting for a given probability $q$, gives
\[ \mathrm{VaR}_q = u + \frac{\hat\sigma}{\hat\xi}\left[\left(\frac{n}{N_u}(1 - q)\right)^{-\hat\xi} - 1\right]. \]
For the calculation of the expected shortfall, we notice that
\[ \mathrm{ES}_q = \mathrm{VaR}_q + \mathbb{E}\bigl[X - \mathrm{VaR}_q \mid X > \mathrm{VaR}_q\bigr]. \]
Since the excesses over $\mathrm{VaR}_q$ again follow a Generalized Pareto Distribution with the same shape parameter $\xi$ but scale $\sigma + \xi(\mathrm{VaR}_q - u)$, we can immediately conclude that
\[ \mathbb{E}\bigl[X - \mathrm{VaR}_q \mid X > \mathrm{VaR}_q\bigr] = \frac{\hat\sigma + \hat\xi\,(\mathrm{VaR}_q - u)}{1 - \hat\xi}, \]
and now we estimate the expected shortfall:
\[ \mathrm{ES}_q = \frac{\mathrm{VaR}_q}{1 - \hat\xi} + \frac{\hat\sigma - \hat\xi\, u}{1 - \hat\xi}. \]
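The MATLAB sketch below plugs a GPD fit into these tail estimators; the simulated loss sample, the 90th-percentile threshold, and the 99.9% confidence level are assumptions, and the formulas require $\hat\xi \neq 0$ and $\hat\xi < 1$.

```matlab
% Hedged sketch of the POT tail estimators derived above.
losses = lognrnd(8, 1.5, 1e4, 1);            % hypothetical loss sample
u      = prctile(losses, 90);                % threshold choice (assumption)
exc    = losses(losses > u) - u;             % exceedances over u
parm   = gpfit(exc);                         % [shape xi, scale sigma] of the fitted GPD
xiHat  = parm(1);  sigHat = parm(2);
n      = numel(losses);  Nu = numel(exc);
q      = 0.999;
VaRq   = u + (sigHat/xiHat) * (((n/Nu) * (1 - q))^(-xiHat) - 1);
ESq    = VaRq/(1 - xiHat) + (sigHat - xiHat*u)/(1 - xiHat);   % valid for xiHat < 1
```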
The POT method considers observations exceeding a given high threshold. As an approach, it has increased in popularity as it uses data more efficiently than the block maxima method. However, the choice of threshold can pose a problem. To use the peaks-over-threshold method, we first select the threshold $u$. Then, we fit the Generalised Pareto Distribution to the exceedances above $u$. Next, we compute the point and interval estimates for the Value-at-Risk and the expected shortfall.
Selection of the Threshold. While the threshold should be high, we need to keep in mind that with a higher threshold, fewer observations are left for the estimation of the parameters of the tail distribution function. So it is better to select the threshold manually, using a graphical tool to help us with the selection.
We define the sample mean excess plot by the points
\[ \bigl\{\, \bigl(X_{k:n},\, e_n(X_{k:n})\bigr) : k = 1, \dots, n \,\bigr\}, \]
where $e_n$ is the sample mean excess function defined as
\[ e_n(u) = \frac{\sum_{i=1}^{n} (X_i - u)^{+}}{\sum_{i=1}^{n} \mathbf{1}_{\{X_i > u\}}}, \]
that is, the sum of the excesses over the threshold $u$ divided by the number of observations exceeding $u$, and where $X_{1:n} \le X_{2:n} \le \cdots \le X_{n:n}$ represent the observations in increasing order.
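A short MATLAB sketch of this graphical tool is given below; the simulated lognormal data are an assumption, and in practice the plot would be drawn from the observed losses and inspected for the value of the threshold above which it becomes roughly linear.

```matlab
% Hedged sketch of the sample mean excess plot used to guide the threshold choice.
losses = sort(lognrnd(8, 1.5, 1e4, 1));        % hypothetical loss sample, in increasing order
e_n    = @(v) mean(losses(losses > v) - v);    % sample mean excess function
uGrid  = losses(1:end-1);                      % evaluate at each order statistic except the largest
meanExcess = arrayfun(e_n, uGrid);
plot(uGrid, meanExcess, '.');                  % look for the region where the plot is roughly linear
xlabel('threshold u'); ylabel('mean excess e_n(u)');
```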
Fitting the GPD Function to the Exceedances over the Threshold. As defined in the previous sections, the distribution of the observations above the threshold in the right tail, and below the threshold in the left tail, should be a generalized Pareto distribution. For a sample of exceedances $y_1, \dots, y_N$, the log-likelihood function for the GPD is the logarithm of the joint density of the observations:
\[ \ell(\xi,\sigma \mid y) = \begin{cases} -N \ln\sigma - \left(\dfrac{1}{\xi} + 1\right) \sum_{i=1}^{N} \ln\left(1 + \dfrac{\xi\, y_i}{\sigma}\right), & \xi \neq 0, \\[6pt] -N \ln\sigma - \dfrac{1}{\sigma} \sum_{i=1}^{N} y_i, & \xi = 0. \end{cases} \]

The ideas behind Bayesian theory are easily applicable to operational risk, especially in the early days of measurement when data were not available.
While Thomas Bayes, an English clergyman and statistician, developed his theory in the eighteenth century, it has recently enjoyed a renaissance amongst academics due to advances in the computational techniques needed to solve complex problems and formulas. A Bayesian inference approach gives a methodical way to combine internal data, expert opinions, and relevant external data.
The main idea is as follows. We start with external market data, which determines a prior estimate. This estimate is then modified by integrating internal observations and expert opinions, leading to a posterior estimate. Risk measures are then calculated from this posterior knowledge. The Basel Committee has explicitly noted that this approach draws on the knowledge of experienced business managers and risk management experts to derive reasoned assessments of plausible severe losses.
For instance, these expert assessments could be expressed as parameters of an assumed statistical loss distribution. As mentioned earlier, the Basel Committee has defined an operational risk matrix of 56 risk cells (eight business lines by seven event types).
Each of these 56 risk cells leads to the modelling of a loss frequency and a loss severity distribution by financial institutions. Let us focus on one risk cell at a time. After choosing corresponding frequency and severity distributions, the managers estimate the necessary parameters; call this parameter vector $\theta$. While $\theta$ needs to be estimated from available internal information, the problem is that a small amount of internal data does not lead to a robust estimate of $\theta$. Therefore, the estimation needs to draw on other sources, namely external data and expert opinions, in addition to the internal data.
For that, the risk profile $\theta$ is treated as the realisation of a random vector $\Theta$ which is calibrated by the use of external data from market information. The distribution of $\Theta$ is called the prior distribution. To explore this aspect further: before assessing any expert opinion and before any internal data study, all companies share the same prior distribution, generated from market information only. Company-specific operational risk events and expert opinions are then gathered over time.
As a result, these observations influence our judgement of the prior distribution, and therefore an adjustment has to be made to our company-specific parameter vector (see the accompanying table). Clearly, the more data we have on the internal losses $X$ and the expert opinions $\delta$, the better the prediction of our vector $\Theta$ and the less credibility we give to the market. So, in a way, the observations and the expert opinion transform the market prior risk profile into a conditional distribution of $\Theta$ given $X$ and $\delta$ (cf. Lambrigger et al.). Bayes' theorem gives for the posterior density of $\Theta$:
\[ \pi(\theta \mid X, \delta) = \frac{1}{c}\, \pi(\theta)\, f(X \mid \theta)\, g(\delta \mid \theta), \]
where $c$ is the normalising constant not depending on $\theta$. In the end, the company-specific parameter can be estimated by the posterior mean $\hat\theta = \mathbb{E}[\Theta \mid X, \delta]$.

Let loss severities be distributed according to a lognormal-normal-normal model, for example. Given this model, we hold the following assumptions to be true: conditionally on $\Theta = \theta$, the log-losses are normally distributed, that is, $f(\cdot \mid \theta)$ corresponds to the density of a $\mathcal{N}(\theta, \sigma^2)$ distribution for $\ln X_i$; the prior $\Theta$ is itself normally distributed; and the expert opinion satisfies $\delta \mid \Theta = \theta \sim \mathcal{N}(\theta, \zeta^2)$, where $\zeta$ is the standard deviation denoting expert uncertainty. Moreover, we assume expert opinion and internal data to be conditionally independent given a risk profile.
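Because all three components are normal in $\theta$ when the severity scale $\sigma$ is treated as known, the posterior is normal and its mean is a credibility-weighted average. The MATLAB sketch below shows this update; every numerical value, and the assumption that $\sigma$ is known, are illustrative choices rather than figures from the text.

```matlab
% Hedged sketch of the lognormal-normal-normal update: with sigma known,
% a normal prior on theta and a normal expert opinion, the posterior of theta
% is normal and its mean is a precision-weighted (credibility) average.
sigma = 1.5;                       % known severity parameter: ln(loss) ~ N(theta, sigma^2) (assumption)
mu0   = 8;    tau  = 1.0;          % prior from external/market data: theta ~ N(mu0, tau^2)
delta = 8.6;  zeta = 0.5;          % expert opinion: delta | theta ~ N(theta, zeta^2)
logX  = log([2.1e4 5.5e3 1.3e5 4.0e4]);   % internal losses (hypothetical)
n     = numel(logX);
prec     = 1/tau^2 + n/sigma^2 + 1/zeta^2;                          % posterior precision
thetaHat = (mu0/tau^2 + sum(logX)/sigma^2 + delta/zeta^2) / prec;   % posterior mean of theta
```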
This type of undertaking involves several risks. With digitization now a mainstay in most sectors, it is no surprise that it comes with its own set of risks. Even with proactive risk management protocols and cybersecurity controls in place, phishing, ransomware, and other such threats persist.
In fact, these attacks have become more effective and occur more frequently. Data suggest that such attacks have tripled in the last 10 years and will continue to increase for as long as there is a reliance on digital finance services. To make matters worse for financial institutions, antagonistic governments are known to orchestrate hostile activity around the financial services sector.
Crippling these systems causes widespread disruption, and the losses are huge. Data privacy and security are of key importance to the banking sector, and they are also closely followed in the news. When it comes to data privacy, however, the problem lies with data management.
Considering that most banking entities have their data siloed, a gap is created between this data and governance processes. This is a base-level vulnerability, as AI-enabled systems face crucial data shortages that undermine their function. While banking entities have every incentive to minimize operational risks, this is difficult to sustain. If these risks are neglected, banks stand to lose more than just capital.
In some cases, customers lose their trust in the entity and this hurts banks by restricting business or future deposits. Incorporating operational risk management into the overall enterprise risk management framework is a systematic process and is one that must have its own tools and organization.
This is where an all-in-one solution like that from VComply offers value. The platform provides a GRC suite that offers effective risk management frameworks and controls, while revolutionizing management of regulatory compliance.
This tool enables seamless digital collaboration and gives you real-time risk management solutions.

McPhail identified several potential operational risk problems in the Canadian banking system, such as the failure to meet time-sensitive payment requirements and the disruption and dislocation of payment systems, which could contribute to severe liquidity shortfalls in financial institutions.
A framework was identified which provided a unified and systemic perspective on operational risk. The implementation of the framework, which assisted in the assessment of operational risk management in relevant critical systems, promoted financial stability in the Canadian banking system (McPhail). Principal-agent risk.
This is one of the most important operational risks: the risk that arises from agents who act on behalf of the organisation but pursue actions that serve their own interests rather than those of the stakeholders. Principal-agent risk was the underlying cause of two of the drivers of the global credit crisis: the sub-prime crisis and AIG's credit default swaps debacle. Under normal operating circumstances, laws and regulations monitored through legitimate, transparent metrics generally prevent the exploitation of principals by agents.
Where information asymmetries and flawed performance metrics exist, however, this is not necessarily true. In the period preceding the credit crisis, large - but ultimately spurious - profits were generously rewarded, while legitimate - but moderate in comparison - returns were criticised and in some cases penalised.
In such situations, some agents engaged in business activities that created the appearance of profitability while value was actually being destroyed, and even well-meaning management structures began to disregard fiduciary responsibilities. Irresponsible behaviour at just one firm very quickly replicated itself, eventually resulting in industry-wide trends. This operational failure was a key driver of systemic risk (Canadian Institute of Actuaries). External black swan events.
Extensive losses were incurred when four commercial aircraft were hijacked and used to crash into the World Trade Centre in New York and the Pentagon in Washington in September 2001. The destruction resulted in billions in insured property losses, the single largest insurance hit in history (see Banham). This event, which caused considerable global economic and political impact, provides a compelling example of physical assets afflicted by external causes.
The management of operational risk is closely connected to the principles of Enterprise-wide Risk Management (ERM) as outlined in the ERM literature. ERM embraces a number of important steps for operational risk management.
Any breaches, gaps or inefficiencies in the ERM process could lead to higher-than-anticipated operational losses. Closely associated with the management and measurement of operational risk is the provision of sufficient economic capital to guard against unforeseen losses due to operational risk events.
The determination and management of economic operational risk capital plays an important part in the assessment of operational risk. The LDA requires banks to organise their operational loss data in units of measure, or operational risk categories (ORCs). These categories are determined by a specific business line and event type combination. An important assumption is that the ORCs must be selected in such a way that all loss data observed in an ORC may be considered to come from independent sources.
The loss data are then modelled in each ORC by a frequency distribution (typically Poisson) and a severity distribution (typically a combination of a Burr distribution for the bulk of the data and a Generalised Pareto distribution for the tail).
A high quantile of the resulting aggregate loss distribution (typically the 99.9% VaR) is then used to determine the economic capital for each ORC. Assuming total dependence between ORCs, the individual economic capital figures may be added to obtain an overall economic capital figure for the bank. As stated previously, the economic capital estimates are very sensitive to many of the assumptions underlying the LDA approach.
Recently, Embrechts and Hofert gave an overview of observed practice and supervisory issues in operational risk, and Cope et al. examined related modelling challenges. From these, the following modelling issues are highlighted as most sensitive. The internal data collated by banks seldom cover periods of more than ten years, and five-year data sets are typically the norm (see Cope et al.).
The determination of an accurate capital estimate from such short loss histories is therefore problematic. To circumvent this problem, external databases in which several banks pool their loss data, such as the ORX database, have been compiled.
In practice, internal data are then augmented with external data and scenarios to improve economic capital estimates. Estimation of the operational risk capital under the loss distribution approach requires evaluation of aggregate or compound loss distributions.
Closed-form solutions are not available for the distributions typically used in operational risk; however, with modern computer processing power these distributions can be calculated almost exactly using numerical methods. Shevchenko reviews numerical algorithms that can be successfully used to calculate aggregate loss distributions; in particular, Monte Carlo, Panjer recursion, and Fourier transformation methods are presented and compared. Cope has recently proposed an alternative method for integrating information from loss data with that obtained from scenario analyses.
The stochastic process that generates losses within an ORC is modelled as a superposition of various sub-processes that characterise individual loss-generating mechanisms (LGMs). Cope then provides an end-to-end method for identifying LGMs, performing scenario analysis, and combining the outcomes with relevant historical loss data to compute an aggregate loss distribution for the ORC.
The assumption of total dependence is considered to be conservative since no provision is made for possible diversification. Copula models have been introduced to allow the dependence structure between ORCs to be modelled.
The consequences of this dependence concept for both operational risk frequencies and severities were analysed and the authors argued that instead of estimating precise frequency correlations between different cells, more effort should be directed at the more accurate modelling of loss severity distributions. Gourier, Farkas and Abbate performed an empirical study of the shortcomings of the standard methodologies for quantifying operational losses.
Extreme value theory was used to model heavy-tailed data - characteristic of operational risk losses. It was found that using Value-at-Risk as a risk measure led to misestimations of capital requirements. By introducing dependence between the business lines through copulas, the authors explored stability and coherence and related these to the degree of heavy-tailedness of the operational loss data.
Inanoglu and Ulman used aggregated weekly operational loss data, to avoid synchronicity problems with sparse data, and applied non-parametric estimation to the operational loss samples.
The authors used the empirical distribution function to build a pseudo-sample matrix of probabilities which emulate drawings from identical marginals required to fit a standard copula. The empirical distribution function matrix was used to fit standard Gaussian, t, and Gumbel copulas to operational loss data. Annual losses in each event-loss type and by business line were calculated by simple summation.
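To make the pseudo-sample idea concrete, the MATLAB sketch below converts empirical ranks into uniform pseudo-observations and fits standard copulas; the simulated, independent losses for two ORCs are purely hypothetical, whereas the study used real aggregated weekly losses by business line and event type.

```matlab
% Hedged sketch of the pseudo-sample copula approach: ranks are converted to
% uniform pseudo-observations, which are then used to fit standard copulas.
L = lognrnd(8, 1.5, 200, 2);                 % 200 periods of losses for 2 ORCs (hypothetical)
n = size(L, 1);
U = tiedrank(L) ./ (n + 1);                  % pseudo-observations emulating uniform marginals
rhoG        = copulafit('Gaussian', U);      % Gaussian copula correlation matrix
[rhoT, nuT] = copulafit('t', U);             % t copula: correlation matrix and degrees of freedom
alphaGum    = copulafit('Gumbel', U);        % Gumbel copula parameter (bivariate case)
```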
The simulation results showed substantially lower diversification ratios from all copula models at the high quantiles used for capital estimation. The authors proposed using copula approaches with larger numbers of parameters than the Gumbel copula, and the development of a Bayesian strategy.
Quantitative techniques are used not only for the calculation of economic capital, but also for assessing key risk indicators, calculating their thresholds, and monitoring and controlling them. Loss distributions capture outcome severity together with outcome likelihood, that is, both the impact and frequency components. This is based on the assumption that enough data are available for the particular event type. However, data are frequently not available and qualitative assessments must be made.
This is usually done by gleaning information from risk experts and ascertaining their view on the likelihood of future events and associated impact. Key risk indicators are often defined and the likelihood and impact assessed. These values are then multiplied to obtain a risk rating so as to rank risk indicators. This rating, however, should not be regarded as a risk measure since the product of likelihood and severity estimates expected losses, while risk management is concerned with unexpected losses.
The assessment of likelihood and impact is better viewed in a matrix framework (Jobst).

There exists a substantial literature on financial crises and market crashes.
The term financial crisis broadly refers to a variety of situations in which the value of financial institutions or assets reduces abruptly. Investors sell off assets or withdraw money from financial institutions with the expectation that the value of those assets will decrease further if they remain at the financial institution.
When available money is withdrawn, the financial institution is forced to sell other assets to make up the shortfall. Inflation shocks, for example, cause decreases in the real value of money, and uncertainty regarding future inflation discourages investment and savings. High inflation leads to shortages of goods if consumers begin hoarding, fearing future price increases.
If elevated inflation levels continue, consumer confidence and economic growth declines, resulting in recessions. The severity of the crisis is determined by the severity of the rise in inflation. Asset price bubbles arise through different circumstances. If mortgage interest rates rise, home buying is discouraged and house prices decrease. Home owners struggle with higher interest payments leading to more defaults and banks owning these mortgages simultaneously face more defaults, lower value of the collateral and more bad debt.
Depending on the size of the mortgage book, bad debt can increase considerably. This aspect is discussed in detail in the next section. These events resulted in a lack of confidence in the financial system and plunging capital markets. At this stage, the global financial system was on the verge of collapsing.
Investment banks began to collapse, and even the largest global insurance company, AIG, had to be rescued. The financial system was locked into its first systemic crisis of modern times (Bessis). Failures extended to all players, including insurance companies and funds. The crisis manifested itself as a systemic one, involving the collapse of the global financial system, brought about by a lack of confidence amongst financial institutions and investors concerning their financial stability.
The crisis of confidence caused a credit crisis, as investors withdrew their funds from the markets and credit institutions drastically decreased lending to limit losses, producing a shortage of capital and effectively halting economic growth.
It is interesting to note that although Basel II regulations for banking credit risk were by then being enforced, US banks refrained from full compliance with these new rules at the time (Bessis). Prior to mid-2006, US house prices increased steadily. This increase was mainly attributed to a flourishing sub-prime mortgage industry. Sub-prime loan issuers argued that, should house prices rise, collateral would become more valuable and the sub-prime loans would effectively transform into prime mortgages.
Securitisation of mortgages allows the credit risk of lending activities to be distributed to the investors best equipped to bear it. Insurance companies and banks in turn issued credit default swaps (CDSs), which meant that, following a default on a loan, the devalued loan would be taken back onto the balance sheet of the issuer of the swap at full value. Banks and mortgage brokers eagerly supplied clients with credit, even clients with dubious creditworthiness.
These loans were readily bought by investment banks and other investors for the purpose of securitisation into collateralised debt obligations (CDOs), and these investors in turn bought CDSs to cover their risks. Credit risk was therefore distributed widely over the financial system because, prior to the crisis, these markets (mortgage, sub-prime, CDO, and CDS) were highly profitable and resulted in large bonuses for entrepreneurs (Andersen et al.). Around this time, several financial players became concerned about the house price bubble.
House prices stopped rising and interest rates on the sub-prime loans increased. Although some financial institutions expected difficulties, it was not generally expected that this would trigger a system-wide crisis.
Shortly afterwards, a surge in mortgage defaults emerged and accelerated in subsequent months. This led to the devaluation of mortgage-backed securities such as CDOs. The collapse of the US housing market, together with the subsequent devaluation of mortgage-backed securities, constituted a causal mechanism of the financial crisis.
The volatility in the US mortgage market then spilled over into stock, commodity, and derivatives markets worldwide, causing a crisis of systemic proportions (see Hellwig). In their studies of the financial crisis, Andersen et al. asked how failures in the management of operational risk contributed to the collapse; in attempting to answer this question, they traced how operational risk exposure was transferred into credit risk for the CDO owners. Some possible answers are considered below. Access to loans by individuals with limited ability to service these loans has been shown to increase personal bankruptcy rates.
For first-time applicants near the 20th percentile of the credit-score distribution, access to payday loans causes a doubling of bankruptcy filings over the next two years. Despite this research, banks were unconcerned because the risk had been passed on to investment banks through the sale of mortgage-backed securities. Investment banks both generated and invested heavily in CDOs. Citibank warehoused mortgages for future securitisation (Kregel), an element that added to the losses as the housing and CDO markets collapsed.
The risk models of firms such as Citibank did not include scenarios in which real-estate values decreased sharply, which suggested that the risk of almost any mortgage was limited (Kolb). Investment banks that failed to set up appropriate risk management measures also faced challenges from the rapid development and increasing complexity of these products.
Extraordinary profits generated by the market for securitised assets clouded the judgement of management and staff as salaries and bonuses skyrocketed in the years before the crisis. The fact that investment banks were confident buying under-documented loans without requiring additional information from the loan originators indicates that risk management came second to profit generation.
Whether or not a transaction was considered sound was less an issue for risk management and more an issue of to whom the transaction was presented within the organisation (Kolb). Investment banks were highly leveraged, as the opportunity provided by deregulation to increase lending relative to equity was fully exploited in an attempt to realise the full potential of the CDO market.
The investment banks' failure to manage operational risk was transformed into shareholder risk as the investment banks were only capitalised to handle marginal losses. Moreover, the failure of investment banks to require thorough risk assessments and documentation from loan originators resulted in operational risk being transferred to credit risk for the CDO owners.
Credit rating agencies assigned the same rating to derivatives composed partly of sub-prime loans as to those containing principally prime loans. These ratings became even more of a problem as sub-prime loans were usually under-documented, making it nearly impossible to make any informed assessment of future default rates, and hence of the riskiness of the securitised products.
This led to a misrepresentation of risk, affecting the behaviour and decisions of financial institutions. The post-crisis investigations into the practices of the credit rating agencies uncovered alarming results concerning how these institutions operated in the period of extreme growth in credit securitisation prior to the crisis (Andersen et al.). The US Senate investigation revealed that the departments carrying out the CDO ratings were severely understaffed, and that the management system governing how to conduct CDO ratings was lacking.
For instance, it was discovered that none of the examined rating agencies had a documented procedure describing how to carry out CDO ratings. There was also a lack of written procedures for surveillance of accuracy of the ratings provided. This led to the staff at rating agencies being overworked and lacking proper guidance. It should be noted that the high market demand for securitised assets led to increasingly complex CDOs, with increased fractions of sub-prime loans.
Given the number of mortgages referenced in a single CDO, deriving a generalised model for assessing the credit risk of a CDO is difficult. A major problem with the models identified both by Rajan and Kregel was the reliance on historical default correlations between groups of borrowers as a predictor of future default rates.
Sub-prime mortgages were, at the turn of the century, a fairly new invention, and had never previously been originated at the rate and extent seen in the years leading up to the crisis. Thus, the performance history of such loans was limited. It is not likely that the available history concerning default rates provided even a remotely reliable predictor of how sub-prime loans would perform in the future. Despite apparent shortcomings in the models and severe organisational problems, the policy was that every deal should be rated, a policy that generated considerable income for the rating agencies.
All of the identified issues within the credit rating businesses fall under the category of operational risk. The problems observed in the credit rating agencies gave rise to an undervaluing of risk through ratings that did not reflect the risk of the underlying assets i. This overoptimistic assessment of risk, resulting from failed management of operational risk, was transferred into credit risk for the CDO holders.
Several insurance companies, and particularly a subsidiary of American International Group (AIG), issued so-called credit default swaps, a form of debt insurance for securitised assets.
The CEO of AIG Financial Products said: 'It is hard for us, without being flippant, to even see a scenario within any kind of realm of reason that would see us losing one dollar in any of those transactions' (Morgenson). He was referring to the CDS derivatives that would later inflict losses so great that only a government bailout could prevent AIG from going bankrupt.
The belief in low future claims made the CDSs seem highly profitable, and for a while they were. Because AIG Financial Products was not classified as an insurance company it was not subjected to requirements to report its activities to insurance regulators, and was allowed to conduct its business almost without oversight Morgenson, Failures to properly assess the risk of the assets insured and failure to properly assess the need for collateral constitute the major operational failures concerning the practices for issuing CDSs.
Based on available knowledge, it seems that the insurance company AIG, represented by its subsidiary AIG Financial Products, did not carry out independent assessments of future default rates, and placed full confidence in the ratings provided by the credit rating agencies. The sentiment that default rates would remain low was reinforced by a strong belief that real estate values would continue to increase without significant variations in value (US Government). The willingness of insurance companies to insure the debt contained in the CDOs contributed to escalating the market for these products by strengthening the illusion that CDOs represented a comparably low-risk investment.
Hence failure to manage operational risk on the part of the insurance companies was transferred into significant risk for the shareholders and, as it turned out in the case of AIG, for American taxpayers.
In the wake of the crisis much focus has been directed towards the remuneration practices within the financial industry, and several countries are currently implementing regulations restricting the bonus potential of employees within the financial industry. There is no denying that the potential for substantial bonus payments affected the actions and behaviour of central actors in the financial organisations. However, it is also possible to trace the frailty of the financial system to the failure to ensure quality throughout the supply chain for securitised assets.
For instance, the rating agencies were not required to verify the information in the loan portfolios that were to be securitised. There was also no requirement that the issuers of loans perform due diligence. The fact that no one reacted to the extensive lack of documentation of, particularly, sub-prime loans is baffling to say the least.
Furthermore, investment banks spent vast amounts getting CDOs rated, but seemed to lack interest in whether the credit rating agencies possessed the necessary systems, tools and competence to provide reliable results.